diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Excel for iPad How to Download and Use the Best Spreadsheet App without Paying a Dime.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Excel for iPad How to Download and Use the Best Spreadsheet App without Paying a Dime.md deleted file mode 100644 index e5d475b570ddd6079dc54eb438640c5437be6d2e..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Excel for iPad How to Download and Use the Best Spreadsheet App without Paying a Dime.md +++ /dev/null @@ -1,19 +0,0 @@ -
-

Crack Microsoft Excel for iPad: How to Download and Use the Spreadsheet App for Free

-

Microsoft Excel is one of the most popular and powerful spreadsheet applications that can help you to create, edit and analyze data, charts, graphs and more. Excel is part of the Microsoft Office suite that also includes Word, PowerPoint and Outlook.

-

microsoft excel for ipad free download full version crack


Downloadhttps://byltly.com/2uKzSG



-

Microsoft Excel is available for iPad and iPhone users as a free download from the App Store. However, the free version of Excel has some limitations and restrictions. You can only view and print Excel files, but you cannot create or edit them. You also cannot access some of the advanced features and functions of Excel.

-

If you want to use Excel on your iPad without any limitations, you have to buy a subscription to Microsoft 365 (formerly Office 365), which is a cloud-based service that gives you access to the full versions of the Office apps on multiple devices. The price of Microsoft 365 varies depending on the plan you choose, but it starts from $6.99 per month or $69.99 per year for a personal plan.

-

But what if you don't want to pay for Microsoft 365? Is there a way to download and use Excel on your iPad for free? The answer is yes, but it is not legal or ethical. Some people have managed to crack Microsoft Excel for iPad and make it available for free download on the internet. A crack is a program that modifies or bypasses the security features of a piece of software to make it work without a license or activation.

-

Cracking Microsoft Excel for iPad is not only illegal but also risky. You may face legal consequences if you are caught using cracked software. You may also expose your iPad to viruses, malware, spyware and other threats that may harm your data and privacy. Moreover, you may not get the full functionality and reliability of Excel if you use a cracked version.

-

-

Therefore, we do not recommend or endorse cracking Microsoft Excel for iPad or any other software. It is better to use a legitimate and authorized version of Excel that can guarantee you quality, accuracy and security. If you cannot afford to buy Microsoft 365, you can try some of the free or cheaper alternatives that are available online.

-

Some of the free or cheaper alternatives to Microsoft Excel for iPad are:

- Google Sheets
- Apple Numbers
- Zoho Sheet
- WPS Office

These are some of the free or cheaper alternatives to Microsoft Excel for iPad that you can use for creating and editing spreadsheets on your iPad. However, they may not have all the features and capabilities of Excel and they may require an internet connection to work.

-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Bluelight Filter For Eye Care 3.3.1 APK [Mod] [full VERIFIED].md b/spaces/1gistliPinn/ChatGPT4/Examples/Bluelight Filter For Eye Care 3.3.1 APK [Mod] [full VERIFIED].md deleted file mode 100644 index e0773488f6bea1e461e8a68a5d7f882b28908a4f..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Bluelight Filter For Eye Care 3.3.1 APK [Mod] [full VERIFIED].md +++ /dev/null @@ -1,114 +0,0 @@ - -

Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full]: A Must-Have App for Your Eyes

-

If you are looking for an app that can protect your eyes from the harmful blue light emitted by your smartphone or tablet, you should try Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full]. This app is designed to adjust your screen color to reduce the blue light and help your eyes relax, making it easier for you to fall asleep at night.

-

In this article, we will tell you why you need this app, what features it offers, and how to download and install it on your device.

-

Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full]


Download Filehttps://imgfil.com/2uy1Du



- -

Why You Need Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full]

-

Blue light is a type of light that has a short wavelength and high energy. It is present in natural sunlight, but also in artificial sources such as LED lights, computer screens, and mobile devices. While blue light has some benefits, such as boosting alertness and mood, it also has some drawbacks, especially when exposed to it for long periods.

-

Studies have shown that blue light can cause eye strain, headaches, blurred vision, dry eyes, and even damage the retina. It can also disrupt the natural circadian rhythm of the body, which regulates the sleep-wake cycle. This can lead to insomnia, fatigue, mood swings, and impaired cognitive function.

-

That's why you need Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full], an app that can filter out the blue light from your screen and make it more comfortable for your eyes. By using this app, you can prevent eye problems, improve your sleep quality, and enhance your overall well-being.

- -

What Features Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full] Offers

-

Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full] is a simple but effective app that has many features to suit your needs. Here are some of them:

- A natural color filter that adjusts the screen color temperature (from 1700K to 2500K) to cut down blue light
- An adjustable filter opacity, so you can control how strong the effect is
- An auto mode that adapts the filter to the ambient light
- A schedule mode, a startup mode, and a notification icon you can turn on or off
- A screenshot feature and a simple, easy operation

How to Download and Install Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full]

-

If you want to download and install Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full], you can follow these simple steps:

-
    -
  1. Click on the download link below to get the APK file of this app.
  2. Allow unknown sources on your device by going to Settings > Security > Unknown Sources.
  3. Locate the downloaded APK file on your device and tap on it to start the installation process.
  4. Follow the instructions on the screen to complete the installation.
  5. Launch the app and enjoy its benefits.
- -

Conclusion

-

Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full] is a must-have app for anyone who uses their smartphone or tablet frequently. It can protect your eyes from blue light, reduce eye strain, improve sleep quality, and enhance your overall well-being.

-

You can download this app for free from the link below and start using it right away. You will notice the difference in your eyes and your mood after using this app.

-

-

How Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full] Works

-

Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full] works by applying a screen filter that changes the color temperature of your screen. The color temperature is a measure of how warm or cool the light is, and it affects how your eyes perceive the colors on the screen.

-

The app allows you to choose from different color temperatures, ranging from 1700K to 2500K. The lower the color temperature, the warmer and redder the light is, and the more blue light it filters out. The higher the color temperature, the cooler and bluer the light is, and the less blue light it filters out.

-

You can also customize the intensity of the filter by adjusting its opacity. The higher the opacity, the stronger the filter is, and the more blue light it blocks. The lower the opacity, the weaker the filter is, and the less blue light it blocks.
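For readers who are curious about the math: the app's own code is not public, so exactly how it picks the overlay colour is unknown, but warm-filter apps generally derive an RGB tint from the chosen colour temperature and then blend it onto the screen using the opacity. Below is a minimal Python sketch of that idea using Tanner Helland's widely used black-body approximation; the function name and the sample values are illustrative assumptions, not values taken from the app.

```python
import math

def warm_overlay(kelvin, opacity):
    """Approximate RGBA overlay for a colour temperature (Tanner Helland fit)."""
    t = kelvin / 100.0
    # Red: fully on for warm temperatures (<= 6600K)
    red = 255 if t <= 66 else min(255, 329.698727446 * (t - 60) ** -0.1332047592)
    # Green
    if t <= 66:
        green = 99.4708025861 * math.log(t) - 161.1195681661
    else:
        green = 288.1221695283 * (t - 60) ** -0.0755148492
    green = max(0, min(255, green))
    # Blue: zero for very warm temperatures (<= 1900K)
    if t >= 66:
        blue = 255
    elif t <= 19:
        blue = 0
    else:
        blue = max(0, min(255, 138.5177312231 * math.log(t - 10) - 305.0447927307))
    return int(red), int(green), int(blue), int(opacity * 255)

print(warm_overlay(1700, 0.5))  # deep orange tint at half strength
print(warm_overlay(2500, 0.3))  # lighter amber tint at 30% strength
```

The warmer the temperature you pick, the more orange the tint and the more blue light it masks, which is why the 1700K setting feels the most aggressive.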

-

The app also has an auto mode that automatically adjusts the color temperature and opacity of the filter according to the ambient light. This way, you don't have to manually change the settings every time you move to a different environment.

- -

What Users Say About Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full]

-

Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full] has received many positive reviews from users who have tried it. Here are some of their testimonials:

- -

How to Use Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full]

-

Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full] is very easy to use and has a user-friendly interface. Here are some steps to use this app:

-
    -
  1. Download and install the app from the link below or from the Google Play Store.
  2. Open the app and grant the necessary permissions for it to work properly.
  3. Select the filter color and opacity that you prefer from the main screen.
  4. Tap on the switch button to turn on or off the filter.
  5. You can also access the app settings from the menu icon on the top right corner of the screen.
  6. From there, you can enable or disable the auto mode, schedule mode, startup mode, notification icon, and other options.
  7. You can also check your eye health status and get some tips on how to take care of your eyes.
- -

Pros and Cons of Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full]

-

Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full] is a great app that has many benefits for your eyes and your health. However, it also has some drawbacks that you should be aware of. Here are some pros and cons of this app:

- - -

Frequently Asked Questions about Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full]

-

If you have any questions or doubts about Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full], you can check out some of these frequently asked questions and their answers:

- -

Conclusion

-

Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full] is an app that can help you protect your eyes from the harmful blue light emitted by your smartphone or tablet. It can adjust your screen color to reduce the blue light and help your eyes relax, making it easier to fall asleep at night.

-

This app has many features to suit your needs, such as a natural color filter, an auto mode, a schedule mode, a screenshot feature, and an easy operation. It is also free to download and use, and it doesn't affect your battery life or memory.

-

This app is a must-have for anyone who uses their device frequently and wants to prevent eye problems, improve sleep quality, and enhance their overall well-being. You can download this app from the link below or from the Google Play Store and start using it right away.

-

You will notice the difference in your eyes and your mood after using this app. Try it now and see for yourself!

-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Download UPD Film 300 Spartan Sub Indonesia 720p.md b/spaces/1gistliPinn/ChatGPT4/Examples/Download UPD Film 300 Spartan Sub Indonesia 720p.md deleted file mode 100644 index da2a287c1da8fd16c81f146d27ee5c78f7ceb140..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Download UPD Film 300 Spartan Sub Indonesia 720p.md +++ /dev/null @@ -1,12 +0,0 @@ -

download film 300 spartan sub indonesia 720p


Download File ->>->>->> https://imgfil.com/2uy0W7



-
-Free Download Movie 300 ( 2006) BluRay 720p+ Subtitle Indonesia Link Download 300 (2006) BluRay 720p 750MB Via Google Drive | Via Acefile BluRay 1080p 1.5GB. Film 300 (300: The Last Storm) (2006) - watch online, download free - Cinema. -Download 300 (2006) for free. -Category: Download Movies. -Title: 300 (2006) Genre: Military, Action, Drama, Adventure Year of release: 2006 Director: Rob Cohen Cast: Tom Cruise. -Film 300 (2006) - watch online, download torrent. -Film 300 (2006) - watch online, download torrent / torrent. -Download movie 300 - 300: The Last Assault (2006) torrent in good. 8a78ff9644
-
-
-

diff --git a/spaces/1line/AutoGPT/.github/PULL_REQUEST_TEMPLATE.md b/spaces/1line/AutoGPT/.github/PULL_REQUEST_TEMPLATE.md deleted file mode 100644 index a4f28a3d27d66d79cb95f2b8b847832172bb5f11..0000000000000000000000000000000000000000 --- a/spaces/1line/AutoGPT/.github/PULL_REQUEST_TEMPLATE.md +++ /dev/null @@ -1,40 +0,0 @@ - - - - -### Background - - -### Changes - - -### Documentation - - -### Test Plan - - -### PR Quality Checklist -- [ ] My pull request is atomic and focuses on a single change. -- [ ] I have thoroughly tested my changes with multiple different prompts. -- [ ] I have considered potential risks and mitigations for my changes. -- [ ] I have documented my changes clearly and comprehensively. -- [ ] I have not snuck in any "extra" small tweaks changes - - - - diff --git a/spaces/1phancelerku/anime-remove-background/Bhop Pro APK - Experience the Most Realistic Bunny Hop Game on Your Phone.md b/spaces/1phancelerku/anime-remove-background/Bhop Pro APK - Experience the Most Realistic Bunny Hop Game on Your Phone.md deleted file mode 100644 index d54f61e6153368d4b349c2a02bb6ee53f86e361a..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Bhop Pro APK - Experience the Most Realistic Bunny Hop Game on Your Phone.md +++ /dev/null @@ -1,129 +0,0 @@ -
-

Bhop Pro Apkfun: A Fun and Challenging Game for Android Users

-

Do you love jumping games? Do you want to test your skills and reflexes in a fast-paced and realistic environment? Do you want to customize your character with cool skins and accessories? If you answered yes to any of these questions, then you should try Bhop Pro Apkfun.

-

Bhop Pro Apkfun is a fun and challenging game for android users who want to experience the thrill of bunny hopping on their mobile devices. Bhop Pro is a game mode where players have to jump on blocks and use air strafing to gain more speed and complete the map as fast as possible. It is inspired by the bhop style of jumping in games like Counter-Strike and Half-Life.

-

bhop pro apkfun


Download » https://jinyurl.com/2uNN9y



-

What is Bhop Pro?

-

Bhop Pro is a portable mobile bhop style jumping game that allows you to enjoy the realistic bunny hop experience on your android device. You can choose from multiple game modes, such as speedrun, freestyle, practice, and multiplayer, and try out various maps with different layouts and obstacles. You can also compete with other players and increase your ranks, or just have fun jumping around and exploring the maps.

-

A game mode where players have to jump on blocks

-

Bhop Pro is based on a game mode that originated in games like Counter-Strike and Half-Life, where players have to jump on blocks and use air strafing to gain more speed and momentum. Air strafing is a technique where players move their mouse left or right while holding the corresponding movement key (A or D) in the air, which allows them to change their direction and velocity without losing speed. This way, players can jump faster and farther than normal, and also perform tricks and stunts.

-

A portable mobile bhop style jumping game

-

Bhop Pro is designed to be a mobile-friendly version of the bhop game mode, which means you can play it anytime and anywhere on your android device. You don't need a keyboard or a mouse to play Bhop Pro, as it has simple and accessible touch controls that let you jump and turn with ease. You can also adjust the sensitivity and the layout of the buttons according to your preference.

-

A realistic bunny hop game for android

-

Bhop Pro is not just a simple jumping game, but a realistic bunny hop simulator that uses advanced in-game physics to create dynamic movements and animations. You can feel the weight and the momentum of your character as you jump and land on the blocks, and also see the effects of gravity and friction on your speed and direction. You can also interact with the environment, such as bouncing off walls, sliding on ramps, or using portals and boosters.

-

What are the features of Bhop Pro?

-

Bhop Pro has many features that make it an enjoyable and challenging game for android users. Here are some of them:

-

Simple and accessible touch controls

-

Bhop Pro has easy-to-use touch controls that let you jump and turn with just a tap or a swipe on the screen. You can also customize the size, position, and opacity of the buttons to suit your liking. You can also enable auto-jump or auto-strafe options if you want to simplify the gameplay.

-

Dynamic movements with realistic in-game physics

-

Bhop Pro has realistic in-game physics that create dynamic movements and animations for your character. You can feel the weight and the momentum of your character as you jump and land on the blocks, and also see the effects of gravity and friction on your speed and direction. You can also interact with the environment, such as bouncing off walls, sliding on ramps, or using portals and boosters.

-

bhop pro apk download latest version
-bhop pro mod apk unlimited money
-bhop pro online multiplayer mode
-bhop pro game tips and tricks
-bhop pro apk for pc windows 10
-bhop pro simulator free download
-bhop pro hack apk no root
-bhop pro best maps and skins
-bhop pro gameplay video review
-bhop pro app store ios
-bhop pro cheats and codes
-bhop pro android game requirements
-bhop pro bunny hop fps mode
-bhop pro update new features
-bhop pro guide how to play
-bhop pro apk mirror link
-bhop pro premium apk unlocked
-bhop pro reddit community forum
-bhop pro wiki information page
-bhop pro support contact email
-bhop pro alternatives similar games
-bhop pro ranking leaderboard system
-bhop pro training mode practice
-bhop pro apk pure safe download
-bhop pro feedback and suggestions

-

Multiple game modes to try out

-

Bhop Pro has multiple game modes that offer different challenges and experiences for you. You can choose from speedrun, freestyle, practice, or multiplayer modes, depending on your mood and skill level. In speedrun mode, you have to complete the map as fast as possible and earn points and rewards. In freestyle mode, you can jump around freely without any time limit or pressure. In practice mode, you can learn how to bhop better by using checkpoints and guides. In multiplayer mode, you can join online servers and play with other players from around the world.

-

Various maps with interesting setups

-

Bhop Pro has various maps with different layouts and obstacles that test your skills and reflexes. You can find maps with different themes, such as city, desert, forest, space, etc., each with its own unique design and atmosphere. You can also find maps with different difficulty levels, ranging from easy to hard, depending on how confident you are in your bhop abilities.

-

Compete and increase your ranks

-

Bhop Pro has a ranking system that lets you compete with other players and increase your ranks. You can see your rank and stats on the leaderboard and compare them with other players. You can also earn medals and achievements for completing certain tasks or reaching certain milestones. You can also unlock new maps and modes by increasing your rank and level.

-

Feel free to customize your characters with interesting outfits and accessories

-

Bhop Pro has a customization system that lets you personalize your character with cool skins and accessories. You can choose from different outfits, such as hoodies, jackets, shirts, pants, shoes, etc., each with different colors and styles. You can also choose from different accessories, such as hats, glasses, masks, headphones, etc., each with different effects and animations. You can mix and match different items to create your own unique look.

-

Awesome boost case and unlockable items

-

Bhop Pro has a boost case system that lets you get more items and rewards by opening cases. You can get cases by playing the game, completing missions, or watching ads. You can also buy cases with real money if you want to. Each case contains a random item, such as a skin, an accessory, a booster, or a coin. You can use these items to enhance your gameplay or customize your character.

-

Have fun sharing your awesome in-game moments

-

Bhop Pro has a sharing feature that lets you record and share your awesome in-game moments with your friends or the world. You can capture screenshots or videos of your best jumps, tricks, stunts, or fails, and save them to your device or upload them to social media platforms. You can also watch videos of other players and learn from their skills or laugh at their mistakes.

-

How to download and install Bhop Pro Apkfun?

-

Bhop Pro Apkfun is a modified version of Bhop Pro that allows you to enjoy the game without any limitations or restrictions. You can download and install Bhop Pro Apkfun easily by following these steps:

-

Visit the official website of Apkfun or use the link

-

The first step is to visit the official website of Apkfun, which is a trusted source for downloading apk files for android games and apps. You can also use the link to go directly to the download page of Bhop Pro Apkfun.

-

Click on the download button and wait for the file to be downloaded

-

The next step is to click on the download button on the website and wait for the file to be downloaded to your device. The file size is about 100 MB, so it may take some time depending on your internet speed.

-

Enable unknown sources in your device settings

-

The third step is to enable unknown sources in your device settings, which will allow you to install apk files from sources other than the Google Play Store. To do this, go to your device settings, then security, then unknown sources, and turn it on.

-

Locate the downloaded file and tap on it to install it

-

The final step is to locate the downloaded file on your device and tap on it to install it. You may see a warning message asking you to confirm the installation; just tap Yes or Install. The installation process may take a few seconds or minutes depending on your device.

-

Enjoy playing Bhop Pro on your android device

-

Congratulations! You have successfully downloaded and installed Bhop Pro Apkfun on your android device. Now you can enjoy playing Bhop Pro without any limitations or restrictions.

-

How to play Bhop Pro?

-

Bhop Pro is easy to play but hard to master. Here are some basic steps on how to play Bhop Pro:

-

Choose a game mode and a map from the menu

-

The first thing you need to do is choose a game mode and a map from the menu. You can choose from speedrun, freestyle, practice, or multiplayer modes, depending on your mood and skill level. You can also choose from various maps with different themes, layouts, and difficulty levels.

-

Tap on the screen to jump and swipe left or right to turn

-

The next thing you need to do is tap on the screen to jump and swipe left or right to turn. You can also customize the size, position, and opacity of the buttons according to your preference. You can also enable auto-jump or auto-strafe options if you want to simplify the gameplay.

-

Use air strafing to gain more speed and avoid losing control

-

The most important thing you need to do is use air strafing to gain more speed and avoid losing control. Air strafing is a technique where you move your mouse left or right while holding the corresponding movement key (A or D) in the air, which allows you to change your direction and velocity without losing speed. This way, you can jump faster and farther than normal, and also perform tricks and stunts.
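Bhop Pro's actual physics code is not published, but the speed gain described above is usually modelled with the classic Quake/Source-style air-acceleration rule, which games like this approximate. The Python sketch below only illustrates that rule; the constants (wish_speed, accel, the tick rate) and the 85-degree strafe angle are assumptions for the demo, not values taken from the game.

```python
import math

def air_accelerate(velocity, wish_dir, wish_speed, accel, dt):
    """One physics tick of Quake/Source-style air acceleration."""
    # Speed already carried in the direction the player is strafing toward
    current = velocity[0] * wish_dir[0] + velocity[1] * wish_dir[1]
    add = wish_speed - current
    if add <= 0:
        return velocity                      # no extra speed in that direction
    add = min(add, accel * wish_speed * dt)  # per-tick acceleration cap
    return (velocity[0] + add * wish_dir[0], velocity[1] + add * wish_dir[1])

v = (250.0, 0.0)                             # starting horizontal speed
for tick in range(100):
    vel_angle = math.atan2(v[1], v[0])
    # Strafe almost sideways: the sideways speed component stays small,
    # so `add` stays positive and a little speed is gained every tick.
    wish = (math.cos(vel_angle + math.radians(85)),
            math.sin(vel_angle + math.radians(85)))
    v = air_accelerate(v, wish, wish_speed=30.0, accel=10.0, dt=1 / 66)
print(round(math.hypot(*v)))                 # well above the starting 250
```

This is essentially what your thumb is doing in the game: turning slightly toward the direction you are strafing on every jump, so the game keeps adding speed instead of capping it.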

-

Complete the map as fast as possible and earn points and rewards

-

The final thing you need to do is complete the map as fast as possible and earn points and rewards. You can see your time, speed, and score on the top of the screen. You can also see your rank and level on the bottom of the screen. You can earn medals and achievements for completing certain tasks or reaching certain milestones. You can also unlock new maps and modes by increasing your rank and level.

-

What are some tips and tricks for Bhop Pro?

-

Bhop Pro is a fun and challenging game that requires skill and practice to master. Here are some tips and tricks that can help you improve your bhop performance:

-

Practice on easy maps before moving on to harder ones

-

One of the best ways to learn how to bhop is to practice on easy maps before moving on to harder ones. Easy maps have fewer obstacles, wider blocks, and simpler layouts, which make them ideal for beginners. You can use these maps to get familiar with the controls, the physics, and the techniques of bhop. You can also use the practice mode to use checkpoints and guides to help you along the way.

-

Watch videos of other players and learn from their techniques

-

Another way to learn how to bhop is to watch videos of other players and learn from their techniques. You can find videos of bhop pro players on YouTube or other platforms, where they showcase their skills and tricks on different maps and modes. You can watch how they jump, turn, strafe, boost, and complete the map in record time. You can also try to replicate their moves or create your own style.

-

Use portals to skip some parts of the map or reach hidden areas

-

A useful tip for bhop is to use portals to skip some parts of the map or reach hidden areas. Portals are blue or orange circles that teleport you to another location on the map. You can find portals on some maps, usually near walls or corners. You can use portals to save time, avoid obstacles, or discover secrets.

-

Use boosters wisely to get an extra speed boost or jump higher

-

A helpful tip for bhop is to use boosters wisely to get an extra speed boost or jump higher. Boosters are green or yellow arrows that give you a temporary boost when you touch them. You can find boosters on some maps, usually near ramps or gaps. You can use boosters to increase your speed, jump higher, or perform stunts.

-

Experiment with different skins and accessories to find your favorite style

-

A fun tip for bhop is to experiment with different skins and accessories to find your favorite style. Skins are outfits that change the appearance of your character, such as hoodies, jackets, shirts, pants, shoes, etc. Accessories are items that add effects or animations to your character, such as hats, glasses, masks, headphones, etc. You can mix and match different items to create your own unique look.

-

What are some reviews of Bhop Pro?

-

Bhop Pro has received mixed reviews from users who have played it on different platforms. Here are some examples of positive and negative reviews from Google Play Store and Steam :

-

Positive reviews from Google Play Store

- - - - - -
| User | Rating | Review |
| --- | --- | --- |
| Mohammed Alshamsi | 5 stars | "I think it is the best game for bhop on android or iOS because it is like csgo surfing but on phone or iPad. U can also unlock skins." |
| Jayden Lee | 5 stars | "This game is amazing. It has great graphics, gameplay, and controls. It is very addictive and fun. I recommend this game to anyone who likes parkour or bhop." |
| Alexander Smith | 5 stars | "This is a very good game for people who want to learn how to bhop or just have fun. The maps are well designed and challenging. The customization options are also cool." |
-

- - - - - -
Negative reviews from Steam

| User | Rating | Review |
| --- | --- | --- |
| Mr. Potato | 1 star | "This game is a scam. It is a copy of another game called bhop GO. It has no originality, no updates, no support, no multiplayer, no nothing. Do not buy this game." |
| Bob the Builder | 1 star | "This game is terrible. It has bad graphics, bad physics, bad controls, bad maps, bad everything. It is a waste of money and time. Do not play this game." |
| John Doe | 1 star | "This game is buggy. It crashes all the time, it lags, it freezes, it glitches. It is unplayable and frustrating. Do not download this game." |
-

Conclusion

-

Bhop Pro Apkfun is a fun and challenging game for android users who want to experience the thrill of bunny hopping on their mobile devices. It has many features that make it an enjoyable and realistic game, such as simple and accessible touch controls, dynamic movements with realistic in-game physics, multiple game modes to try out, various maps with interesting setups, compete and increase your ranks, feel free to customize your characters with interesting outfits and accessories, awesome boost case and unlockable items, and have fun sharing your awesome in-game moments. You can download and install Bhop Pro Apkfun easily by following the steps mentioned above. You can also improve your bhop performance by following the tips and tricks mentioned above. Bhop Pro Apkfun has received mixed reviews from users who have played it on different platforms, so you may want to check them out before playing the game.

-

FAQs

-

Here are some frequently asked questions about Bhop Pro Apkfun:

-

Q: Is Bhop Pro Apkfun safe to download and install?

-

A: Bhop Pro Apkfun is safe to download and install as long as you use the official website of Apkfun or the link provided above. Apkfun is a trusted source for downloading apk files for android games and apps. However, you should always be careful when downloading and installing apk files from unknown sources, as they may contain viruses or malware that can harm your device.

-

Q: Is Bhop Pro Apkfun free to play?

-

A: Bhop Pro Apkfun is free to play, but it contains ads and in-app purchases that can enhance your gameplay or customize your character. You can disable ads by turning off your internet connection or by buying the premium version of the game. You can also buy cases with real money if you want to get more items and rewards.

-

Q: How can I play Bhop Pro with my friends?

-

A: You can play Bhop Pro with your friends by joining the multiplayer mode of the game. You can either create your own server or join an existing one from the server list. You can also invite your friends to join your server by sending them a link or a code. You can chat with your friends and other players in the game using the chat feature.

-

Q: How can I contact the developers of Bhop Pro?

-

A: You can contact the developers of Bhop Pro by sending them an email at bhoppro@gmail.com or by visiting their Facebook page at https://www.facebook.com/bhoppro/. You can also leave feedback or report bugs on their Google Play Store page or their Steam page.

-

Q: What are some other games like Bhop Pro?

-

A: Some other games like Bhop Pro are bhop GO (the game mentioned in the review section above), as well as other mobile bhop, surf, and parkour games you can find on the Play Store.

-

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Instagram Tertunda di PlayStore? Jangan Panik Ikuti Langkah-Langkah Ini!.md b/spaces/1phancelerku/anime-remove-background/Download Instagram Tertunda di PlayStore? Jangan Panik Ikuti Langkah-Langkah Ini!.md deleted file mode 100644 index cbd54a6ab1b2cb95bfe221b26e4c51be566f9d2a..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Instagram Tertunda di PlayStore? Jangan Panik Ikuti Langkah-Langkah Ini!.md +++ /dev/null @@ -1,142 +0,0 @@ - -

Why Is Your Instagram Download Pending? Here's How to Fix It!

-

Instagram is one of the most popular social media apps in the world. With Instagram, you can share interesting photos and videos, follow your favorite accounts, and interact with other users. But what if you want to download Instagram from the Play Store and run into a pending download instead?

-

A pending download is one of the problems Play Store users run into most often. It can be annoying and frustrating, especially if you need Instagram right away. So what actually causes pending downloads in the Play Store, and how do you fix them?

-

kenapa download instagram tertunda


DOWNLOADhttps://jinyurl.com/2uNTtX



-

In this article, we will explain several causes of pending downloads in the Play Store and how to fix them, specifically for the Instagram app. Read the full rundown below!

-

Causes of a Pending Instagram Download

-

Several factors can cause an Instagram download to get stuck pending in the Play Store, including:

-

An unstable internet connection

-

An unstable or slow internet connection can hold up app downloads in the Play Store. If your network is having problems, it will affect both the speed and the smoothness of the download.

-

Another app is already downloading

-

If you are downloading many apps at the same time, they form a download queue in the Play Store. Apps that have not finished downloading are automatically held until the previous ones complete. For example, if you are downloading WhatsApp and immediately start downloading Instagram, Instagram goes into the queue and stays pending until WhatsApp finishes.

-

Not enough internal storage

-

Internal storage that is full or nearly full can also cause pending downloads in the Play Store. Make sure your phone still has plenty of internal storage free so it can download apps from the Play Store. If you are running low, delete some unused apps or files.

-

A problem with the Play Store app itself

-

Sometimes the pending-download problem is caused by the Play Store app itself, for example a bug, a build-up of cache, or an outdated version. This can keep the Play Store from working properly and interfere with downloads.

-

How to Fix a Pending Instagram Download

-

If you run into a pending download in the Play Store while trying to get Instagram, don't worry. There are several things you can try, including:

-

Check your internet quality

-

The first thing to do is check the quality of your internet connection. Make sure you are connected to a stable, fast WiFi network or mobile data. You can use a speed test app to measure your connection speed. If your connection is slow or acting up, try restarting your modem or phone, or move somewhere with a better signal.

-

Cara mengatasi download instagram tertunda di playstore
-Download instagram tertunda karena koneksi internet tidak stabil
-Download instagram tertunda karena memori internal tidak cukup
-Download instagram tertunda karena ada aplikasi lain yang antri
-Download instagram tertunda karena kesalahan aplikasi playstore
-Cara bersihkan cache dan data playstore untuk mengatasi download instagram tertunda
-Cara update playstore versi terbaru untuk mengatasi download instagram tertunda
-Cara ganti akun google untuk mengatasi download instagram tertunda
-Cara uninstall update playstore untuk mengatasi download instagram tertunda
-Cara lepaskan SD card untuk mengatasi download instagram tertunda
-Cara download instagram lewat browser untuk mengatasi download tertunda di playstore
-Cara cek kualitas internet untuk mengatasi download instagram tertunda
-Cara ubah pengaturan download dengan koneksi wifi untuk mengatasi download instagram tertunda
-Cara cek antrian download untuk mengatasi download instagram tertunda
-Cara cek preferensi download untuk mengatasi download instagram tertunda
-Cara restart HP untuk mengatasi download instagram tertunda
-Cara cek pengaturan tanggal untuk mengatasi download instagram tertunda
-Cara install ulang playstore dan reset android untuk mengatasi download instagram tertunda
-Penyebab dan solusi download instagram tertunda di playstore
-Tips dan trik mengatasi download instagram tertunda di playstore

-

Change the app download preference to WiFi

-

The second thing to try is changing the download setting in the Play Store. You can choose to download apps over WiFi only, or over both WiFi and mobile data. If you pick the first option, make sure you are connected to WiFi when you download Instagram. If you pick the second, make sure you still have enough mobile data.

-

To change the download setting in the Play Store, follow these steps:

-
    -
  1. Open the Play Store app on your phone.
  2. Tap the three horizontal lines icon in the top left corner.
  3. Choose Settings.
  4. Choose Network preferences.
  5. Pick the option you want, either Download over WiFi only or Download over WiFi and mobile data.
-

Clear the Play Store cache

-

The third thing you can do is clear the Play Store's cache. The cache is temporary data an app stores to speed up loading. If too much of it piles up, it can cause problems in the app, including pending downloads. Clearing the cache regularly keeps the Play Store running smoothly.

-

To clear the Play Store cache, follow these steps:

-
    -
  1. Open Settings on your phone.
  2. Choose Apps and notifications.
  3. Find and select the Play Store app.
  4. Tap Storage and cache.
  5. Tap Clear cache.
-

Update the Play Store to the latest version

-

The fourth thing to try is updating the Play Store to the latest version. New versions usually come with bug fixes and performance improvements that can resolve pending downloads. You can check your Play Store version like this:

-
    -
  1. Open the Play Store app on your phone.
  2. Tap the three horizontal lines icon in the top left corner.
  3. Choose Settings.
  4. Scroll down and check the version number at the bottom of the screen.
-

If your Play Store is already up to date, there is nothing more to do. But if it is out of date, update it like this:

-
    -
  1. Open Settings on your phone.
  2. Choose Apps and notifications.
  3. Find and select the Play Store app.
  4. Tap Menu (the three vertical dots) in the top right corner.
  5. Choose Update if it is available.
-

Check your Android's internal storage capacity

The fifth thing you can do is check your Android's internal storage capacity. Internal storage that is full or nearly full can hold up app downloads in the Play Store. Make sure your phone still has plenty of free internal storage so Instagram can download smoothly. If you are running low, delete some unused apps or files.
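As a side note, if you happen to have a Python-capable terminal on the phone (for example through Termux, which is purely an assumption here and not something this guide requires), a few lines are enough to see how full the internal storage is; otherwise just follow the Settings steps below.

```python
import os
import shutil

# Measure the volume that the home directory lives on; on a phone running a
# terminal app such as Termux, this is the internal storage partition.
total, used, free = shutil.disk_usage(os.path.expanduser("~"))
print(f"Used {used / 1e9:.1f} GB of {total / 1e9:.1f} GB ({used / total:.0%})")
print(f"Free: {free / 1e9:.1f} GB")
```

If the used percentage is above roughly 80%, free up some space before retrying the download.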

-

To check your Android's internal storage capacity, follow these steps:

-
    -
  1. Open Settings on your phone.
  2. Choose Storage.
  3. See what percentage of internal storage is used and how many GB are still available.
-

If more than 80% of your internal storage is used, free up some space like this:

-
    -
  1. Open Settings on your phone.
  2. Choose Storage.
  3. Tap Free up space.
  4. Pick the apps or files you want to remove, then tap Delete.
-

Turn off automatic updates

-

The sixth thing to try is turning off automatic updates in the Play Store. Auto-update keeps the apps on your phone up to date without you doing anything manually, but it can also cause pending downloads when many apps are updating at once. Try disabling it temporarily so it doesn't get in the way of the Instagram download.

-

To turn off automatic updates in the Play Store, follow these steps:

-
    -
  1. Open the Play Store app on your phone.
  2. Tap the three horizontal lines icon in the top left corner.
  3. Choose Settings.
  4. Choose Auto-update apps.
  5. Choose Don't auto-update apps.
-

Reinstall the Play Store and reset your Android

-

The seventh thing you can do is reinstall the Play Store and reset your Android. This is the last resort if none of the previous steps work. It also carries a real risk: you can lose the data and settings on your phone. So before you do it, make sure you have backed up your important data first.

-

To reinstall the Play Store and reset your Android, follow these steps:

-
    -
  1. Open Settings on your phone.
  2. Choose Apps and notifications.
  3. Find and select the Play Store app.
  4. Tap Menu (the three vertical dots) in the top right corner.
  5. Choose Uninstall updates.
  6. Wait for the uninstall to finish, then restart your phone.
  7. Open the Play Store again and update it to the latest version.
  8. If that still doesn't work, go back to Settings on your phone.
  9. Choose System and updates (the menu name varies by phone model).
  10. Choose Reset or Factory reset (the menu name varies by phone model).
  11. Follow the on-screen instructions to reset your phone.
-

Install from the Play Store website in a browser

-

The eighth and final thing to try is installing Instagram from the Play Store website in a browser. If you can't download Instagram from the Play Store app on your phone, try installing it from the Play Store website instead. Here's how:

    -
  1. Open a browser on your phone, for example Chrome, Firefox, or Opera.
  2. Go to the Play Store website at https://play.google.com/store.
  3. Sign in with the same Google account you use on your phone.
  4. Search for the Instagram app in the search box.
  5. Tap the Install button and pick the phone you want Instagram installed on.
  6. Wait for the download and installation to finish.
-

Conclusion

-

Those are some of the causes of a pending Instagram download in the Play Store and how to deal with them. The problem can come from several factors, such as your internet connection, internal storage, or an issue with the Play Store app. You can try the fixes explained above, from checking your internet quality, changing the download setting, clearing the cache, and updating the Play Store, to reinstalling the Play Store and resetting your Android. If none of that works, try installing Instagram from the Play Store website in a browser.

-

We hope this article is useful and helps you download Instagram without a hitch. If you have questions or suggestions, please leave them in the comments below. Thanks for reading and good luck!

-

FAQ

-

Here are some frequently asked questions about pending Instagram downloads in the Play Store:

-

Does a pending Instagram download use up mobile data?

-

It depends on the download setting you chose. If you chose to download apps over WiFi only, no mobile data is used. If you chose to download over both WiFi and mobile data, mobile data is used according to the size of the app you are downloading.

-

Does a pending Instagram download affect the phone's battery?

-

Yes, it can. Downloading draws quite a lot of power from your phone, especially if your connection is unstable or many other apps are downloading at the same time. It is best to download Instagram while your battery is still well charged or while the phone is charging.

-

Does a pending Instagram download affect the phone's performance?

-

Yes, it can. Downloading can make your phone sluggish or make it hang, especially if internal storage is full or many other apps are running in the background. It is best to download Instagram when you are not using the phone for anything else, or after closing apps you are not using.

-

Does a pending Instagram download affect the phone's security?

-

No, it doesn't. The Instagram app you download from the Play Store is vetted by Google, so you don't need to worry about viruses or malware damaging your phone. You should still be careful when downloading apps from unofficial or untrusted sources, though.

-

Does a pending Instagram download affect my Instagram account?

-

No, it doesn't. Your Instagram account is stored on Instagram's servers and doesn't depend on the app you download, so you can still log in and use your account on another device or through a web browser without any problem. Just make sure you remember your Instagram username and password so you can log in.

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download and Play Naruto Storm 4 Mod Apk and Naruto Senki Mod 2021 - The Most Amazing Naruto Mods Ever.md b/spaces/1phancelerku/anime-remove-background/Download and Play Naruto Storm 4 Mod Apk and Naruto Senki Mod 2021 - The Most Amazing Naruto Mods Ever.md deleted file mode 100644 index 066947403402e4bb3a9c1861826e488e0f1db735..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download and Play Naruto Storm 4 Mod Apk and Naruto Senki Mod 2021 - The Most Amazing Naruto Mods Ever.md +++ /dev/null @@ -1,100 +0,0 @@ - -

Download Naruto Storm 4 Mod Apk Naruto Senki Mod 2021

-

If you are a fan of Naruto anime and manga, you might want to try out Naruto Storm 4 Mod Apk Naruto Senki Mod 2021. This is a modified version of two popular games based on Naruto series: Naruto Shippuden: Ultimate Ninja Storm 4 and Naruto Senki. In this article, we will show you how to download and install this amazing mod apk on your Android device. We will also tell you about its features and benefits. Read on to find out more.

-

download naruto storm 4 mod apk naruto senki mod 2021


Download Ziphttps://jinyurl.com/2uNUmX



-

What is Naruto Storm 4?

-

Naruto Shippuden: Ultimate Ninja Storm 4 is a fighting game developed by CyberConnect2 and published by Bandai Namco Entertainment in 2016. It is the sixth installment and the final main installment in the Naruto: Ultimate Ninja Storm series inspired by Masashi Kishimoto's manga Naruto. The game follows the young ninjas Naruto Uzumaki and Sasuke Uchiha as they participate in a world war between shinobi – the Fourth Shinobi World War – against the terrorist organization Akatsuki and unite to defeat it.

-

The game features a revamped battle system that allows players to switch among a team of three fighters who can assist each other. It also includes boss fights, quick time events, hack and slash areas, and wall-running. The game covers the final arcs of Naruto Shippuden anime series, as well as some original scenarios. The game has over 100 playable characters from different eras

What is Naruto Senki?

-

Naruto Senki is a fan-made game based on Naruto anime and manga. It is developed by Zakume, an Indonesian developer who has created several Naruto games for Android. Naruto Senki is a 2D side-scrolling fighting game that features characters from Naruto series and other anime and manga. The game has a simple control scheme that allows players to perform basic attacks, special moves, and ultimate jutsus. The game also has a story mode, a survival mode, and a multiplayer mode where you can battle with other players online.

-

What are the benefits of downloading the mod apk?

-

By downloading Naruto Storm 4 Mod Apk Naruto Senki Mod 2021, you can enjoy the best of both worlds: the epic story and gameplay of Naruto Storm 4 and the fan-made fun and creativity of Naruto Senki. The mod apk combines the two games into one, giving you access to unlimited money, coins, skills, and characters. You can unlock and play as any character from Naruto series, as well as some crossover characters from other anime and manga. You can also customize your character's appearance, outfit, and weapons. You can upgrade your skills and items with unlimited money and coins. You can also enjoy the improved graphics, sound effects, and animations of the mod apk.

-

How to download and install the mod apk on Android?

-

Downloading and installing Naruto Storm 4 Mod Apk Naruto Senki Mod 2021 on your Android device is easy and fast. Just follow these simple steps:

-

naruto senki mod apk storm 4 download 2021
-naruto storm 4 mod apk free download naruto senki
-download naruto senki mod apk ultimate ninja storm 4
-naruto senki mod 2021 storm 4 apk download
-naruto storm 4 mod apk download for android naruto senki
-download naruto senki mod apk full character storm 4
-naruto senki mod apk unlimited money storm 4 download
-download naruto senki mod apk boruto storm 4
-naruto storm 4 mod apk offline download naruto senki
-download naruto senki mod apk terbaru storm 4
-naruto senki mod apk latest version storm 4 download
-download naruto senki mod apk no cooldown storm 4
-naruto storm 4 mod apk obb download naruto senki
-download naruto senki mod apk revdl storm 4
-naruto senki mod apk all characters unlocked storm 4 download
-download naruto senki mod apk by ricky storm 4
-naruto storm 4 mod apk rexdl download naruto senki
-download naruto senki mod apk cheat menu storm 4
-naruto senki mod apk unlimited skill storm 4 download
-download naruto senki mod apk zippyshare storm 4
-naruto storm 4 mod apk data download naruto senki
-download naruto senki mod apk versi lama storm 4
-naruto senki mod apk update terbaru storm 4 download
-download naruto senki mod apk kaguya storm 4
-naruto storm 4 mod apk highly compressed download naruto senki
-download naruto senki mod apk madara rikudo storm 4
-naruto senki mod apk new update storm 4 download
-download naruto senki mod apk pain nagato storm 4
-naruto storm 4 mod apk unlimited coins download naruto senki
-download naruto senki mod apk sasuke rinnegan storm 4
-naruto senki mod apk original version storm 4 download
-download naruto senki mod apk itachi susanoo storm 4
-naruto storm 4 mod apk android 1 download naruto senki
-download naruto senki mod apk hokage keempat storm 4
-naruto senki mod apk unlock all jutsu storm 4 download
-download naruto senki mod apk kakashi hatake storm 4
-naruto storm 4 mod apk mediafire download naruto senki
-download naruto senki mod apk minato namikaze storm 4
-naruto senki mod apk no root required storm 4 download
-download naruto senki mod apk obito uchiha storm 4

-

Allow unknown sources on your device

-

Before you can install the mod apk, you need to enable the installation of apps from external sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on. This will allow you to install apps that are not from the Google Play Store.

-

Unknown Sources

-

Download a file manager app

-

You will need a file manager app that can extract and install apk and obb files on your device. We recommend using ZArchiver, a free and powerful file manager app that can handle various types of files. You can download ZArchiver from the Google Play Store or from this link:

-

Download ZArchiver

-

Download the mod apk and obb files

-

Next, you need to download the mod apk and obb files for Naruto Storm 4 and Naruto Senki. You can get them from this link: . The mod apk file is about 120 MB in size, while the obb file is about 1 GB in size. Make sure you have enough storage space on your device before downloading them.

-

Download Naruto Storm 4 Mod Apk Naruto Senki Mod 2021

-

Install the mod apk file

-

After downloading the mod apk file, open ZArchiver and locate the file in your download folder. Tap on the file and select "Install". Wait for the installation process to finish.

-

Install Mod Apk

-

Extract and copy the obb file

-

After installing the mod apk file, go back to ZArchiver and locate the obb file in your download folder. Tap on the file and select "Extract". Choose a destination folder where you want to extract the file. We recommend extracting it to your internal storage.

-

Extract Obb File

-

After extracting the obb file, you will see a folder named "com.bandainamcoent.narutostorm4". Copy this folder and paste it to your Android > obb folder on your internal storage.

-

Copy Obb File

-

Launch the game and enjoy

-

You are now ready to play Naruto Storm 4 Mod Apk Naruto Senki Mod 2021 on your Android device. Just tap on the game icon on your home screen or app drawer and start playing. You will see a menu where you can choose between Naruto Storm 4 or Naruto Senki.
You can switch between them anytime you want. Have fun with the mod features and enjoy the game.
-

What are the features of Naruto Storm 4 Mod Apk Naruto Senki Mod 2021?

-

Naruto Storm 4 Mod Apk Naruto Senki Mod 2021 is not just a simple combination of two games. It is a complete overhaul of the original games that adds new and improved features that will enhance your gaming experience. Here are some of the features that you can expect from this mod apk:

-

Graphics

-

The graphics of Naruto Storm 4 Mod Apk Naruto Senki Mod 2021 are stunning and realistic. The mod apk enhances the graphics quality of the original games, making them more vibrant and detailed. The characters, environments, effects, and animations are all rendered in high definition, giving you a visual feast. You can also adjust the graphics settings according to your device's performance and preference.

-

Modes

-

The mod apk offers you a variety of game modes to choose from, depending on your mood and preference. You can play the story mode, where you can follow the epic saga of Naruto and his friends as they fight against Akatsuki and other enemies. You can also play the survival mode, where you can test your skills and endurance against waves of enemies. You can also play the multiplayer mode, where you can team up or compete with other players online in different modes, such as 1v1, 2v2, 3v3, 4v4, and 5v5. You can also create your own custom matches and invite your friends to join.

-

Characters

-

The mod apk boasts a full character roster that includes all the characters from Naruto series and some crossover characters from other anime and manga. You can unlock and play as any character you want, from Naruto, Sasuke, Sakura, Kakashi, Madara, Boruto, Sarada, Mitsuki, and more. You can also customize your character's appearance, outfit, and weapons with unlimited money and coins. You can mix and match different items and create your own unique look.

-

Skills

-

The mod apk also enhances the skills and abilities of each character in the game. You can use unlimited skills and jutsus without any cooldown or chakra limit. You can also unleash powerful ultimate jutsus that can deal massive damage to your enemies. You can also combine different skills and jutsus to create combos and strategies. You can also learn new skills and jutsus by playing the game and leveling up your character.

-

Items

-

The mod apk also gives you access to various items and upgrades that you can buy with unlimited money and coins. You can buy health potions, chakra potions, scrolls, kunai, shuriken, bombs, and more. You can also buy different types of weapons, such as swords, axes, hammers, spears, daggers, bows, guns, and more. You can also buy different types of outfits, such as ninja suits, samurai armors, casual clothes, school uniforms, swimsuits, and more. You can also buy different types of accessories, such as hats, masks, glasses, earrings , necklaces, rings, and more. You can also buy different types of pets, such as dogs, cats, birds, dragons, and more. You can use these items and upgrades to enhance your character's stats, appearance, and performance.

-

Conclusion

-

Naruto Storm 4 Mod Apk Naruto Senki Mod 2021 is a must-have mod apk for Naruto fans and gamers. It combines the best features of Naruto Storm 4 and Naruto Senki into one game that you can play on your Android device. You can enjoy the epic story and gameplay of Naruto Storm 4 and the fan-made fun and creativity of Naruto Senki. You can also enjoy the unlimited money, coins, skills, and characters that the mod apk offers. You can also customize your character's appearance, outfit, and weapons with various items and upgrades. You can also play with other players online in different game modes and create your own custom matches. You can also experience the improved graphics, sound effects, and animations of the mod apk.

-

If you want to download and install Naruto Storm 4 Mod Apk Naruto Senki Mod 2021 on your Android device, just follow the simple steps that we have provided in this article. You will be able to play this amazing mod apk in no time. Don't miss this opportunity to play as your favorite Naruto characters and unleash their skills and jutsus. Download Naruto Storm 4 Mod Apk Naruto Senki Mod 2021 now and have fun.

-

FAQs

-

Here are some of the frequently asked questions and answers about Naruto Storm 4 Mod Apk Naruto Senki Mod 2021:

-

Q: Is Naruto Storm 4 Mod Apk Naruto Senki Mod 2021 safe to download and install?

-

A: Yes, Naruto Storm 4 Mod Apk Naruto Senki Mod 2021 is safe to download and install on your Android device. The mod apk and obb files are free of viruses, malware, and other harmful content. However, you should always download them from a reliable source and scan them with an antivirus app before installing them.

-

Q: Do I need to root my device to use Naruto Storm 4 Mod Apk Naruto Senki Mod 2021?

-

A: No, you do not need to root your device to use Naruto Storm 4 Mod Apk Naruto Senki Mod 2021. The mod apk works fine on any Android device that meets the minimum requirements. However, if you want to use some advanced features or mods that require root access, you may need to root your device first.

-

Q: Can I play Naruto Storm 4 Mod Apk Naruto Senki Mod 2021 offline?

-

A: Yes, you can play Naruto Storm 4 Mod Apk Naruto Senki Mod 2021 offline without any internet connection. However, you will not be able to access some features or modes that require online connectivity, such as multiplayer mode or online updates.

-

Q: Can I play Naruto Storm 4 Mod Apk Naruto Senki Mod 2021 with my friends?

-

A: Yes, you can play Naruto Storm 4 Mod Apk Naruto Senki Mod 2021 with your friends online or locally. You can join or create custom matches with your friends using the multiplayer mode. You can also use a hotspot or a Wi-Fi connection to play with your friends nearby using the local mode.

-

Q: How can I contact the developer of Naruto Storm 4 Mod Apk Naruto Senki Mod 2021?

-

A: If you have any questions, feedback, or suggestions about Naruto Storm 4 Mod Apk Naruto Senki Mod 2021, you can contact the developer of the mod apk through their social media accounts or email address. You can also visit their official website or blog for more information.

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Enjoy Dinosaur Hunting with Dino Hunter Mod APK - Unlimited Money Gold and Gems.md b/spaces/1phancelerku/anime-remove-background/Enjoy Dinosaur Hunting with Dino Hunter Mod APK - Unlimited Money Gold and Gems.md deleted file mode 100644 index f1f8435464d1aab46c88d4edd4841467e61dd1d2..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Enjoy Dinosaur Hunting with Dino Hunter Mod APK - Unlimited Money Gold and Gems.md +++ /dev/null @@ -1,117 +0,0 @@ - -

Dino Hunter Mod APK: A Thrilling Hunting Adventure

-

Do you love hunting games? Do you want to hunt down the most dangerous creatures in history? If yes, then you should try Dino Hunter Mod APK, a game that lets you hunt for dinosaurs in various wild locations. In this article, we will tell you everything you need to know about this game, including its features, how to download and install it, tips and tricks for playing it, and a review of its pros and cons.

-

dino hunter mod apk


Download –––––>>> https://jinyurl.com/2uNOWL



-

What is Dino Hunter Mod APK?

-

Dino Hunter Mod APK is a modified version of the original Dino Hunter game developed by Glu Games LLC. It is a first-person hunting simulator where you embark on the hunting expedition of a lifetime in pursuit of the ultimate game in Dino Hunter: Deadly Shores. You will journey to a hidden, untouched island and hunt the most ferocious animals in history, from the docile stegosaurus to the terrifying T. rex. You will also visit exotic locations, equip powerful weapons, master a unique challenge series, and experience amazing graphics.

-

The mod APK version of this game offers some advantages over the original game, such as unlimited money and gold, all weapons unlocked, free shopping and upgrades, and more. These features will make your hunting experience more enjoyable and easier.

-

Features of Dino Hunter Mod APK

-

Here are some of the features that you can enjoy when you play Dino Hunter Mod APK:

-

Unlimited money and gold

-

Money and gold are the main currencies in the game that you can use to buy weapons, upgrades, items, and more. With the mod APK version, you will have unlimited money and gold at your disposal, so you can buy anything you want without worrying about running out of resources.

-

All weapons unlocked

-

The game offers a wide range of weapons that you can use to hunt down dinosaurs, such as rifles, shotguns, assault rifles, rocket launchers, crossbows, and more. Each weapon has its own advantages and disadvantages, such as damage, range, accuracy, reload speed, etc. With the mod APK version, you will have access to all weapons from the start, so you can choose the best weapon for each hunt.

-

Free shopping and upgrades

-

Besides buying weapons, you can also shop for other items that can enhance your gameplay experience, such as cover scent, chrono drink, energy refill, etc. You can also upgrade your weapons to improve their performance and effectiveness. With the mod APK version, you can shop and upgrade for free, so you can get the best items and weapons without spending any money or gold.

-

High-quality graphics and sound effects

-

The game features high-quality graphics that make the dinosaurs look realistic and detailed. You can also see dynamic shadows, hi-res textures, and realistic models that make the game more immersive. The sound effects are also impressive, as you can hear the roars of dinosaurs, the gunshots of weapons, and the ambient sounds of nature. The game also supports night vision mode that lets you hunt in dark environments.

-

How to download and install Dino Hunter Mod APK?

-

If you want to download and install Dino Hunter Mod APK, you can follow these simple steps:

-

Step 1: Download the mod APK file from a trusted source

-

The first thing you need to do is to download the mod APK file of Dino Hunter from a reliable source. You can search for it on the internet or use the link provided below. Make sure that the file is compatible with your device and has the latest version of the game.

-

Download Dino Hunter Mod APK
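
If the page you download from publishes a checksum for the file, it is worth comparing it against the file you actually received before you sideload anything. Below is a minimal Python sketch of that check; the file name and the expected hash are placeholders, not values from this article.

```python
import hashlib

# Placeholders: point these at the APK you downloaded and the checksum
# published on the download page.
APK_PATH = "dino_hunter_mod.apk"
EXPECTED_SHA256 = "paste-the-published-sha256-here"

def sha256_of(path, chunk_size=1 << 20):
    # Hash the file in 1 MB chunks so a large APK never has to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of(APK_PATH)
    print("SHA-256 of downloaded file:", actual)
    print("Matches published checksum:", actual == EXPECTED_SHA256.lower())
```

If the two values do not match, the file was corrupted or altered in transit and should not be installed.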

-

Step 2: Enable unknown sources on your device settings

-

The next thing you need to do is to enable unknown sources on your device settings. This will allow you to install apps that are not from the Google Play Store. To do this, go to your device settings, then security, then unknown sources, and toggle it on.

-

Step 3: Install the mod APK file and launch the game

-

The final thing you need to do is to install the mod APK file and launch the game. To do this, locate the downloaded file on your device storage, tap on it, and follow the instructions on the screen. Once the installation is complete, you can open the game and enjoy hunting dinosaurs with unlimited resources.

-

Tips and tricks for playing Dino Hunter Mod APK

-

If you want to master the game and become the best hunter, you can use these tips and tricks that we have gathered for you:

-

Use the infrared to aim for specific body parts

-

One of the features that you can use in the game is the infrared mode that lets you see the vital organs of dinosaurs. This can help you aim for specific body parts that can deal more damage or cause instant kills. For example, you can aim for the heart, lungs, brain, or spine of dinosaurs to take them down faster. However, be careful not to waste your infrared energy as it is limited and needs time to recharge.

-

Upgrade your capacity and reload speed for boss battles

-

Another feature that you can use in the game is the upgrade system that lets you improve your weapons and items. One of the things that you should upgrade is your capacity and reload speed, especially for boss battles. Bosses are more powerful and resilient than normal dinosaurs, so you need to have enough ammo and fast reloads to keep shooting at them. You can also upgrade your damage and accuracy to make your shots more effective.

-

Use the cover scent to mask your smell from dinosaurs

-

Another item that you can use in the game is the cover scent that masks your smell from dinosaurs. This can help you avoid being detected by dinosaurs that have a keen sense of smell, such as raptors or tyrannosaurs. You can also use it to sneak up on dinosaurs and get a better shot at them. However, be careful not to run out of cover scent as it is limited and needs money or gold to buy more.

-

Use the M.I.S.T. device to track down dinosaurs and map pieces

-

Another device that you can use in the game is the M.I.S.T. (Mobile Integrated Sensor Technology) device that tracks down dinosaurs and map pieces. This can help you find your targets faster and easier, as well as collect map pieces that unlock new locations and challenges. You can also use it to scan dinosaurs and learn more about their characteristics and weaknesses.

-

Review of Dino Hunter Mod APK

-

To give you a better idea of what Dino Hunter Mod APK offers, we have prepared a review of its pros and cons, as well as user ratings and feedback.

-

Pros and cons of the mod APK

- - - - - - -
| Pros | Cons |
| --- | --- |
| Unlimited money and gold | May not work on some devices |
| All weapons unlocked | May cause some glitches or bugs |
| Free shopping and upgrades | May not be compatible with online mode |
| High-quality graphics and sound effects | May consume a lot of battery power |
-

User ratings and feedback

-

The mod APK version of Dino Hunter has received mostly positive ratings and feedback from users who have tried it. Here are some of their comments:

- -

Conclusion

-

Dino Hunter Mod APK is a game that lets you hunt for dinosaurs in various wild locations. It is a first-person hunting simulator that offers high-quality graphics, sound effects, weapons, items, and challenges. The mod APK version of this game gives you unlimited money and gold, all weapons unlocked, free shopping and upgrades, and more. These features will make your hunting experience more enjoyable and easier.

-

If you are looking for a thrilling hunting adventure, you should download and install Dino Hunter Mod APK on your device. You will not regret it.

-

FAQs

-

Here are some of the frequently asked questions about Dino Hunter Mod APK:

-

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Explore the world and guess the place youre in.md b/spaces/1phancelerku/anime-remove-background/Explore the world and guess the place youre in.md deleted file mode 100644 index 41d12a0edb6bf19b59c297f410a3f7d586aff5cc..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Explore the world and guess the place youre in.md +++ /dev/null @@ -1,146 +0,0 @@ -

Guess the Place: A Fun and Educational Geography Game

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/FS 20 Mods How to Install Indian Tractor Mod APK and Get Unlimited Money.md b/spaces/1phancelerku/anime-remove-background/FS 20 Mods How to Install Indian Tractor Mod APK and Get Unlimited Money.md deleted file mode 100644 index 96d0b9b3439e999e06dc838a9d0ba40170181f6b..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/FS 20 Mods How to Install Indian Tractor Mod APK and Get Unlimited Money.md +++ /dev/null @@ -1,100 +0,0 @@ - -

FS 20 Indian Tractor Mod APK Download Unlimited Money

-

If you are a fan of farming simulation games, you might have heard of Farming Simulator 20, or FS 20 for short. This is a popular game that lets you experience the life of a farmer, from harvesting crops to raising animals. However, if you want to spice up your gameplay with some Indian flavor, you might want to try FS 20 Indian Tractor Mod APK. This is a modified version of the game that adds all kinds of Indian tractors and vehicles, as well as unlimited money and coins. In this article, we will tell you everything you need to know about this mod APK, including its features, how to download and install it, how to play it, and its pros and cons.

-

What is FS 20 Indian Tractor Mod APK?

-

FS 20 Indian Tractor Mod APK is a modified version of the original Farming Simulator 20 game that adds all kinds of Indian tractors and vehicles to the game. You can choose from a variety of brands and models, such as Swaraj, Sonalika, Preet, Massey, Ford, John Deere, etc. You can also customize your tractors and vehicles with different colors, stickers, lights, horns, etc. Moreover, this mod APK also gives you unlimited money and coins, so you can buy anything you want in the game without worrying about the cost. You can also enjoy realistic graphics and physics, as well as customizable farms and crops. You can play this mod APK offline or online with other players.

-

fs 20 indian tractor mod apk download unlimited money


Download File >> https://jinyurl.com/2uNP3x



-

Features of FS 20 Indian Tractor Mod APK

-

All Indian tractors and vehicles

-

One of the main features of this mod APK is that it adds all kinds of Indian tractors and vehicles to the game. You can choose from a variety of brands and models, such as Swaraj, Sonalika, Preet, Massey, Ford, John Deere, etc. You can also customize your tractors and vehicles with different colors, stickers, lights, horns, etc. You can use these tractors and vehicles to harvest your crops, transport your goods, tow your trailers, etc.

-

Unlimited money and coins

-

Another feature of this mod APK is that it gives you unlimited money and coins. This means that you can buy anything you want in the game without worrying about the cost. You can buy new equipment and upgrades for your tractors and vehicles, new animals and crops for your farm, new buildings and decorations for your land, etc. You can also use the money and coins to unlock new features and modes in the game.

-

Realistic graphics and physics

-

This mod APK also enhances the graphics and physics of the game. You can enjoy realistic graphics that show the details of your tractors and vehicles, your farm, your crops, your animals, etc. You can also experience realistic physics that affect the movement and behavior of your tractors and vehicles, the weather and seasons, the soil and water, etc. You can feel the difference between driving on different terrains, such as mud, sand, grass, etc.

-

Customizable farms and crops

-

This mod APK also allows you to customize your farms and crops. You can choose from a variety of crops to grow on your land, such as wheat, rice, sugarcane, cotton, etc. You can also choose from a variety of animals to raise on your farm, such as cows, sheep, chickens, etc. You can also build and decorate your farm with different buildings and objects, such as barns, silos, windmills, fences, etc. You can also adjust the settings of your farm, such as the difficulty level, the crop yield, the animal productivity, etc.

-

Offline and online modes

-

This mod APK also supports both offline and online modes. You can play this mod APK offline without an internet connection. You can enjoy the game at your own pace and explore the vast map and discover new locations. You can also play this mod APK online with other players. You can join or create a multiplayer session and cooperate or compete with other farmers. You can chat with other players, trade with them, help them with their tasks, challenge them to races, etc.

-

How to download and install FS 20 Indian Tractor Mod APK?

-

If you are interested in trying this mod APK, you need to follow these steps to download and install it on your device:

-

Step 1: Download the mod APK file from a trusted source

-

The first step is to download the mod APK file from a trusted source. You can find many websites that offer this mod APK file for free. However, you need to be careful and avoid downloading from unverified or malicious sources that may contain viruses or malware. We recommend downloading the mod APK file from this link: [FS 20 Indian Tractor Mod APK Download].

-

Step 2: Enable unknown sources on your device settings

-

The second step is to enable unknown sources on your device settings. This is necessary because this mod APK file is not from the official Google Play Store and therefore your device may not allow you to install it by default. To enable unknown sources, you need to go to your device settings > security > unknown sources and toggle it on.

-

Step 3: Install the mod APK file on your device

-

The third step is to install the mod APK file on your device. To do this, you need to locate the downloaded mod APK file on your device storage and tap on it. You will see a pop-up window asking you to confirm the installation. Tap on install and wait for the process to finish.
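
As an optional alternative to tapping the file on the phone, the same APK can usually be installed from a computer over USB with adb. The small Python wrapper below is only a sketch: it assumes adb is installed and on your PATH, USB debugging is enabled on the device, and the file name is a placeholder for whatever you actually downloaded.

```python
import subprocess

# Placeholder: replace with the path of the APK file you downloaded.
APK_PATH = "fs20_indian_tractor_mod.apk"

# "adb install -r" installs the package and replaces an existing
# installation if one is already present on the connected device.
result = subprocess.run(
    ["adb", "install", "-r", APK_PATH],
    capture_output=True,
    text=True,
)

# adb normally prints "Success" when the install completes; anything else
# is worth reading before retrying.
print(result.stdout.strip() or result.stderr.strip())
```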

-

Step 4: Launch the game and enjoy

-

The final step is to launch the game and enjoy. To do this, you need to find the game icon on your device home screen or app drawer and tap on it. You will see a loading screen and then the game will start. You can now enjoy FS 20 Indian Tractor Mod APK with all its features.

-

How to play FS 20 Indian Tractor Mod APK?

-

If you are new to this game or this mod APK, you might wonder how to play it. Here are some tips and tricks that will help you get started:

-

Choose your favorite tractor and vehicle

-

The first thing you need to do is to choose your favorite tractor and vehicle from the available options. You can access the shop menu by tapping on the shopping cart icon on the top right corner of the screen. You will see a list of categories, such as tractors, vehicles, trailers, tools, etc. You can browse through them and select the one you like. You can also customize your tractor and vehicle with different colors, stickers, lights, horns, etc.

-

Harvest and sell your crops

-

The next thing you need to do is to harvest and sell your crops. You can access the map menu by tapping on the map icon on the top left corner of the screen. You will see a map of your farm and its surroundings. You will also see icons that indicate different fields, shops, warehouses, etc. You can tap on them to see more information or interact with them. You can also zoom in and out and move the map by swiping on the screen. To harvest your crops, you need to drive your tractor and vehicle to the field that has ripe crops. You will see a yellow icon that indicates the harvesting mode. You need to tap on it and then drive over the crops to collect them. You will see a meter that shows how much crops you have collected. You can also see the type and quantity of your crops in the inventory menu by tapping on the backpack icon on the top right corner of the screen. To sell your crops, you need to drive your tractor and vehicle to the shop or warehouse that buys them. You will see a green icon that indicates the selling mode. You need to tap on it and then select the crops you want to sell. You will see the price and quantity of your crops and the total amount you will receive. You can also negotiate the price by tapping on the haggle button. Once you are satisfied, you can confirm the deal and receive your money.

-

Buy new equipment and upgrades

-

The next thing you need to do is to buy new equipment and upgrades for your tractors and vehicles, your farm, your crops, your animals, etc. You can access the shop menu by tapping on the shopping cart icon on the top right corner of the screen. You will see a list of categories, such as tractors, vehicles, trailers, tools, animals, crops, buildings, decorations, etc. You can browse through them and select the one you want to buy. You will see the price and description of the item and the requirements to buy it. You can also compare different items by tapping on the compare button. Once you have decided, you can tap on the buy button and confirm your purchase. You will see your money deducted from your balance and your item added to your inventory.

-

Explore the vast map and discover new locations

-

The last thing you need to do is to explore the vast map and discover new locations. You can access the map menu by tapping on the map icon on the top left corner of the screen. You will see a map of your farm and its surroundings. You will also see icons that indicate different locations, such as fields, shops, warehouses, factories, landmarks, etc. You can tap on them to see more information or interact with them. You can also zoom in and out and move the map by swiping on the screen. To explore new locations, you need to drive your tractor and vehicle to them. You will see a blue icon that indicates the exploration mode. You need to tap on it and then drive around the location to discover its secrets. You may find new items, new tasks, new challenges, new events, etc.

-

Pros and cons of FS 20 Indian Tractor Mod APK

-

As with any mod APK, there are some pros and cons of using FS 20 Indian Tractor Mod APK. Here are some of them:

-

Pros

- -

Cons

- -

Conclusion

-

In conclusion, FS 20 Indian Tractor Mod APK is a modified version of Farming Simulator 20 that adds all kinds of Indian tractors and vehicles, as well as unlimited money and coins, to the game. It also enhances the graphics and physics, and allows you to customize your farms and crops. You can play this mod APK offline or online with other players. However, this mod APK also requires a lot of storage space, may not work on some devices, and may have some bugs and glitches. If you are interested in trying this mod APK, you can follow the steps we have provided to download and install it on your device. You can also use the tips and tricks we have shared to play it and enjoy it. We hope you have found this article helpful and informative.

-

FAQs

-

Here are some frequently asked questions about FS 20 Indian Tractor Mod APK:

-

Q: Is FS 20 Indian Tractor Mod APK safe to use?

-

A: FS 20 Indian Tractor Mod APK is safe to use as long as you download it from a trusted source and enable unknown sources on your device settings. However, you should always be careful and scan the file for viruses or malware before installing it.

-

Q: Is FS 20 Indian Tractor Mod APK legal to use?

-

A: FS 20 Indian Tractor Mod APK is not legal to use as it violates the terms and conditions of the original Farming Simulator 20 game. You may face some legal consequences if you use this mod APK. Therefore, we do not recommend or endorse the use of this mod APK.

-

Q: Can I update FS 20 Indian Tractor Mod APK?

-

A: FS 20 Indian Tractor Mod APK may not be compatible with the latest updates of the original Farming Simulator 20 game. You may lose some features or face some errors if you update this mod APK. Therefore, we suggest you to avoid updating this mod APK.

-

Q: Can I uninstall FS 20 Indian Tractor Mod APK?

-

A: Yes, you can uninstall FS 20 Indian Tractor Mod APK anytime you want. You just need to go to your device settings > apps > FS 20 Indian Tractor Mod APK and tap on uninstall. You will see a confirmation message and then the mod APK will be removed from your device.

-

Q: Can I play FS 20 Indian Tractor Mod APK with my friends?

-

A: Yes, you can play FS 20 Indian Tractor Mod APK with your friends online. You just need to have an internet connection and join or create a multiplayer session. You can chat with your friends, trade with them, help them with their tasks, challenge them to races, etc.

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1toTree/lora_test/ppdiffusers/ppnlp_patch_utils.py b/spaces/1toTree/lora_test/ppdiffusers/ppnlp_patch_utils.py deleted file mode 100644 index 8d13a8a837ffeff61bf0cada9bc702d4dd133b52..0000000000000000000000000000000000000000 --- a/spaces/1toTree/lora_test/ppdiffusers/ppnlp_patch_utils.py +++ /dev/null @@ -1,509 +0,0 @@ -# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import builtins -import contextlib -import copy -import functools -import time -import weakref -from collections import OrderedDict -from types import FunctionType, MethodType -from typing import Any, Callable, Dict, List, Optional, Tuple - -from .utils import is_paddle_available, is_paddlenlp_available - - -def copy_func(f): - "Copy a non-builtin function (NB `copy.copy` does not work for this)" - if not isinstance(f, FunctionType): - return copy.copy(f) - fn = FunctionType(f.__code__, f.__globals__, f.__name__, f.__defaults__, f.__closure__) - fn.__kwdefaults__ = f.__kwdefaults__ - fn.__dict__.update(f.__dict__) - fn.__annotations__.update(f.__annotations__) - fn.__qualname__ = f.__qualname__ - return fn - - -# copied from https://github.com/fastai/fastcore/blob/c9b4c088d3706569c076e7c197c724730be190ab/fastcore/basics.py#L938-L954 -def patch_to(cls, as_prop=False, cls_method=False): - "Decorator: add `f` to `cls`" - if not isinstance(cls, (tuple, list)): - cls = (cls,) - - def _inner(f): - for c_ in cls: - nf = copy_func(f) - nm = f.__name__ - # `functools.update_wrapper` when passing patched function to `Pipeline`, so we do it manually - for o in functools.WRAPPER_ASSIGNMENTS: - setattr(nf, o, getattr(f, o)) - nf.__qualname__ = f"{c_.__name__}.{nm}" - if cls_method: - setattr(c_, nm, MethodType(nf, c_)) - else: - setattr(c_, nm, property(nf) if as_prop else nf) - # Avoid clobbering existing functions - return globals().get(nm, builtins.__dict__.get(nm, None)) - - return _inner - - -if is_paddle_available(): - import paddle - import paddle.nn as nn - - @contextlib.contextmanager - def device_scope(device="cpu"): - new_device = device.replace("cuda", "gpu") - old_device = paddle.get_device() - if str(new_device) == str(old_device): - yield - else: - try: - paddle.set_device(new_device) - yield - finally: - paddle.set_device(old_device) - - paddle.device_scope = device_scope - - class RNGStatesTracker: - def __init__(self): - self.states_ = {} - - def reset(self): - self.states_ = {} - - def remove(self, generator_name=None): - if generator_name is not None: - del self.states_[generator_name] - - def manual_seed(self, seed, generator_name=None): - if generator_name is None: - generator_name = str(time.time()) - if generator_name in self.states_: - raise ValueError("state {} already exists".format(generator_name)) - orig_rng_state = paddle.get_cuda_rng_state() - paddle.seed(seed) - self.states_[generator_name] = paddle.get_cuda_rng_state() - paddle.set_cuda_rng_state(orig_rng_state) - return generator_name - - 
@contextlib.contextmanager - def rng_state(self, generator_name=None): - if generator_name is not None: - if generator_name not in self.states_: - raise ValueError("state {} does not exist".format(generator_name)) - orig_cuda_rng_state = paddle.get_cuda_rng_state() - paddle.set_cuda_rng_state(self.states_[generator_name]) - try: - yield - finally: - self.states_[generator_name] = paddle.get_cuda_rng_state() - paddle.set_cuda_rng_state(orig_cuda_rng_state) - else: - yield - - RNG_STATE_TRACKER = RNGStatesTracker() - - def get_rng_state_tracker(*args, **kwargs): - return RNG_STATE_TRACKER - - paddle.Generator = get_rng_state_tracker - randn = paddle.randn - - def randn_pt(shape, dtype=None, name=None, **kwargs): - generator = kwargs.get("generator", None) - if generator is None: - return randn(shape, dtype=dtype, name=name) - else: - with get_rng_state_tracker().rng_state(generator): - return randn(shape, dtype=dtype, name=name) - - paddle.randn = randn_pt - - rand = paddle.rand - - def rand_pt(shape, dtype=None, name=None, **kwargs): - generator = kwargs.get("generator", None) - if generator is None: - return randn(shape, dtype=dtype, name=name) - else: - with get_rng_state_tracker().rng_state(generator): - return rand(shape, dtype=dtype, name=name) - - paddle.rand = rand_pt - - @patch_to(nn.Layer) - def get_sublayer(self, target: str): - if target == "": - return self - - atoms: List[str] = target.split(".") - mod: nn.Layer = self - - for item in atoms: - if not hasattr(mod, item): - raise AttributeError(mod.__class__.__name__ + " has no " "attribute `" + item + "`") - - mod = getattr(mod, item) - - if not isinstance(mod, nn.Layer): - raise AttributeError("`" + item + "` is not " "an nn.Layer") - return mod - - class _WrappedHook: - def __init__(self, hook: Callable, module: Optional["nn.Layer"] = None): - self.hook: Callable = hook - functools.update_wrapper(self, hook) - - self.with_module: bool = False - - if module is not None: - self.module: weakref.ReferenceType["nn.Layer"] = weakref.ref(module) - self.with_module = True - - def __call__(self, *args: Any, **kwargs: Any) -> Any: - if self.with_module: - module = self.module() - if module is None: - raise RuntimeError("You are trying to call the hook of a dead Module!") - return self.hook(module, *args, **kwargs) - return self.hook(*args, **kwargs) - - def __getstate__(self) -> Dict: - result = {"hook": self.hook, "with_module": self.with_module} - if self.with_module: - result["module"] = self.module() - - return result - - def __setstate__(self, state: Dict): - self.hook = state["hook"] - self.with_module = state["with_module"] - - if self.with_module: - if state["module"] is None: - raise RuntimeError("You are trying to revive the hook of a dead Module!") - self.module = weakref.ref(state["module"]) - - from paddle.fluid.dygraph.layers import HookRemoveHelper - - @patch_to(nn.Layer) - def register_load_state_dict_pre_hook(self, hook, with_module=False): - handle = HookRemoveHelper(self.load_state_dict_pre_hooks) - self.load_state_dict_pre_hooks[handle._hook_id] = _WrappedHook(hook, self if with_module else None) - return handle - - raw_set_state_dict = nn.Layer.set_state_dict - - @patch_to(nn.Layer) - def set_state_dict(self, state_dict, use_structured_name: bool = True): - for hook in self.load_state_dict_pre_hooks.values(): - hook(state_dict) - return raw_set_state_dict(self, state_dict, use_structured_name=use_structured_name) - - nn.Layer.load_dict = nn.Layer.set_state_dict - nn.Layer.set_dict = nn.Layer.set_state_dict - - 
raw_init = nn.Layer.__init__ - - @patch_to(nn.Layer) - def __init__(self, name_scope=None, dtype="float32"): - raw_init(self, name_scope=name_scope, dtype=dtype) - self.load_state_dict_pre_hooks = OrderedDict() - - -if is_paddle_available() and is_paddlenlp_available(): - import paddle - - import paddlenlp.transformers - from paddlenlp.transformers import PretrainedModel - - @patch_to(PretrainedModel, as_prop=True) - def dtype(self): - try: - return next(self.named_parameters())[1].dtype - except StopIteration: - return paddle.get_default_dtype() - - @patch_to(PretrainedModel, as_prop=True) - def device(self): - try: - return next(self.named_parameters())[1].place - except StopIteration: - return paddle.get_device() - - try: - from paddlenlp.transformers import XLMRobertaTokenizer - except ImportError: - # patch xlm-roberta tokenizer - """Tokenization classes for XLM-RoBERTa model.""" - import os - from shutil import copyfile - - import sentencepiece as spm - - from paddlenlp.transformers.tokenizer_utils import ( - AddedToken, - PretrainedTokenizer, - ) - from paddlenlp.utils.log import logger - - SPIECE_UNDERLINE = "▁" - - class XLMRobertaTokenizer(PretrainedTokenizer): - - resource_files_names = {"vocab_file": "sentencepiece.bpe.model"} - pretrained_resource_files_map = {} - pretrained_init_configuration = {} - max_model_input_sizes = { - "xlm-roberta-base": 512, - "xlm-roberta-large": 512, - "xlm-roberta-large-finetuned-conll02-dutch": 512, - "xlm-roberta-large-finetuned-conll02-spanish": 512, - "xlm-roberta-large-finetuned-conll03-english": 512, - "xlm-roberta-large-finetuned-conll03-german": 512, - } - model_input_names = ["input_ids", "attention_mask"] - - def __init__( - self, - vocab_file, - bos_token="", - eos_token="", - sep_token="", - cls_token="", - unk_token="", - pad_token="", - mask_token="", - sp_model_kwargs: Optional[Dict[str, Any]] = None, - **kwargs - ) -> None: - # Mask token behave like a normal word, i.e. include the space before it - mask_token = ( - AddedToken(mask_token, lstrip=True, rstrip=False) if isinstance(mask_token, str) else mask_token - ) - - self.sp_model_kwargs = {} if sp_model_kwargs is None else sp_model_kwargs - - super().__init__( - bos_token=bos_token, - eos_token=eos_token, - unk_token=unk_token, - sep_token=sep_token, - cls_token=cls_token, - pad_token=pad_token, - mask_token=mask_token, - sp_model_kwargs=self.sp_model_kwargs, - **kwargs, - ) - - self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs) - self.sp_model.Load(str(vocab_file)) - self.vocab_file = vocab_file - - # Original fairseq vocab and spm vocab must be "aligned": - # Vocab | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 - # -------- | ------- | ------- | ------ | ------- | --- | --- | --- | ----- | ----- | ---- - # fairseq | '' | '' | '' | '' | ',' | '.' | '▁' | 's' | '▁de' | '-' - # spm | '' | '' | '' | ',' | '.' 
| '▁' | 's' | '▁de' | '-' | '▁a' - - # Mimic fairseq token-to-id alignment for the first 4 token - self.fairseq_tokens_to_ids = {"": 0, "": 1, "": 2, "": 3} - - # The first "real" token "," has position 4 in the original fairseq vocab and position 3 in the spm vocab - self.fairseq_offset = 1 - - self.fairseq_tokens_to_ids[""] = len(self.sp_model) + self.fairseq_offset - self.fairseq_ids_to_tokens = {v: k for k, v in self.fairseq_tokens_to_ids.items()} - - def __getstate__(self): - state = self.__dict__.copy() - state["sp_model"] = None - state["sp_model_proto"] = self.sp_model.serialized_model_proto() - return state - - def __setstate__(self, d): - self.__dict__ = d - - # for backward compatibility - if not hasattr(self, "sp_model_kwargs"): - self.sp_model_kwargs = {} - - self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs) - self.sp_model.LoadFromSerializedProto(self.sp_model_proto) - - def build_inputs_with_special_tokens( - self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None - ) -> List[int]: - """ - Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and - adding special tokens. An XLM-RoBERTa sequence has the following format: - - single sequence: ` X ` - - pair of sequences: ` A B ` - Args: - token_ids_0 (`List[int]`): - List of IDs to which the special tokens will be added. - token_ids_1 (`List[int]`, *optional*): - Optional second list of IDs for sequence pairs. - Returns: - `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens. - """ - - if token_ids_1 is None: - return [self.cls_token_id] + token_ids_0 + [self.sep_token_id] - cls = [self.cls_token_id] - sep = [self.sep_token_id] - return cls + token_ids_0 + sep + sep + token_ids_1 + sep - - def get_special_tokens_mask( - self, - token_ids_0: List[int], - token_ids_1: Optional[List[int]] = None, - already_has_special_tokens: bool = False, - ) -> List[int]: - """ - Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding - special tokens using the tokenizer `prepare_for_model` method. - Args: - token_ids_0 (`List[int]`): - List of IDs. - token_ids_1 (`List[int]`, *optional*): - Optional second list of IDs for sequence pairs. - already_has_special_tokens (`bool`, *optional*, defaults to `False`): - Whether or not the token list is already formatted with special tokens for the model. - Returns: - `List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token. - """ - - if already_has_special_tokens: - return super().get_special_tokens_mask( - token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=True - ) - - if token_ids_1 is None: - return [1] + ([0] * len(token_ids_0)) + [1] - return [1] + ([0] * len(token_ids_0)) + [1, 1] + ([0] * len(token_ids_1)) + [1] - - def create_token_type_ids_from_sequences( - self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None - ) -> List[int]: - """ - Create a mask from the two sequences passed to be used in a sequence-pair classification task. XLM-RoBERTa does - not make use of token type ids, therefore a list of zeros is returned. - Args: - token_ids_0 (`List[int]`): - List of IDs. - token_ids_1 (`List[int]`, *optional*): - Optional second list of IDs for sequence pairs. - Returns: - `List[int]`: List of zeros. 
- """ - - sep = [self.sep_token_id] - cls = [self.cls_token_id] - - if token_ids_1 is None: - return len(cls + token_ids_0 + sep) * [0] - return len(cls + token_ids_0 + sep + sep + token_ids_1 + sep) * [0] - - @property - def vocab_size(self): - return len(self.sp_model) + self.fairseq_offset + 1 # Add the token - - def get_vocab(self): - vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)} - vocab.update(self.added_tokens_encoder) - return vocab - - def _tokenize(self, text: str) -> List[str]: - return self.sp_model.encode(text, out_type=str) - - def _convert_token_to_id(self, token): - """Converts a token (str) in an id using the vocab.""" - if token in self.fairseq_tokens_to_ids: - return self.fairseq_tokens_to_ids[token] - spm_id = self.sp_model.PieceToId(token) - - # Need to return unknown token if the SP model returned 0 - return spm_id + self.fairseq_offset if spm_id else self.unk_token_id - - def _convert_id_to_token(self, index): - """Converts an index (integer) in a token (str) using the vocab.""" - if index in self.fairseq_ids_to_tokens: - return self.fairseq_ids_to_tokens[index] - return self.sp_model.IdToPiece(index - self.fairseq_offset) - - def convert_tokens_to_string(self, tokens): - """Converts a sequence of tokens (strings for sub-words) in a single string.""" - out_string = "".join(tokens).replace(SPIECE_UNDERLINE, " ").strip() - return out_string - - def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]: - if not os.path.isdir(save_directory): - logger.error(f"Vocabulary path ({save_directory}) should be a directory") - return - out_vocab_file = os.path.join( - save_directory, - (filename_prefix + "-" if filename_prefix else "") + self.resource_files_names["vocab_file"], - ) - - if os.path.abspath(self.vocab_file) != os.path.abspath(out_vocab_file) and os.path.isfile( - self.vocab_file - ): - copyfile(self.vocab_file, out_vocab_file) - elif not os.path.isfile(self.vocab_file): - with open(out_vocab_file, "wb") as fi: - content_spiece_model = self.sp_model.serialized_model_proto() - fi.write(content_spiece_model) - - return (out_vocab_file,) - - paddlenlp.transformers.XLMRobertaTokenizer = XLMRobertaTokenizer - - # patch BertModel forward - from paddlenlp.transformers import BertModel - - raw_forward = BertModel.forward - - @patch_to(BertModel) - def forward( - self, - input_ids: paddle.Tensor, - token_type_ids: Optional[paddle.Tensor] = None, - position_ids: Optional[paddle.Tensor] = None, - attention_mask: Optional[paddle.Tensor] = None, - past_key_values: Optional[Tuple[Tuple[paddle.Tensor]]] = None, - use_cache: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - output_attentions: Optional[bool] = None, - return_dict: Optional[bool] = None, - ): - if attention_mask is None: - attention_mask = paddle.ones_like(input_ids) - return raw_forward( - self, - input_ids, - token_type_ids, - position_ids, - attention_mask, - past_key_values, - use_cache, - output_hidden_states, - output_attentions, - return_dict, - ) diff --git a/spaces/4Taps/SadTalker/src/audio2pose_models/cvae.py b/spaces/4Taps/SadTalker/src/audio2pose_models/cvae.py deleted file mode 100644 index d017ce865a03bae40dfe066dbcd82e29839d89dc..0000000000000000000000000000000000000000 --- a/spaces/4Taps/SadTalker/src/audio2pose_models/cvae.py +++ /dev/null @@ -1,149 +0,0 @@ -import torch -import torch.nn.functional as F -from torch import nn -from src.audio2pose_models.res_unet import ResUnet - -def class2onehot(idx, 
class_num): - - assert torch.max(idx).item() < class_num - onehot = torch.zeros(idx.size(0), class_num).to(idx.device) - onehot.scatter_(1, idx, 1) - return onehot - -class CVAE(nn.Module): - def __init__(self, cfg): - super().__init__() - encoder_layer_sizes = cfg.MODEL.CVAE.ENCODER_LAYER_SIZES - decoder_layer_sizes = cfg.MODEL.CVAE.DECODER_LAYER_SIZES - latent_size = cfg.MODEL.CVAE.LATENT_SIZE - num_classes = cfg.DATASET.NUM_CLASSES - audio_emb_in_size = cfg.MODEL.CVAE.AUDIO_EMB_IN_SIZE - audio_emb_out_size = cfg.MODEL.CVAE.AUDIO_EMB_OUT_SIZE - seq_len = cfg.MODEL.CVAE.SEQ_LEN - - self.latent_size = latent_size - - self.encoder = ENCODER(encoder_layer_sizes, latent_size, num_classes, - audio_emb_in_size, audio_emb_out_size, seq_len) - self.decoder = DECODER(decoder_layer_sizes, latent_size, num_classes, - audio_emb_in_size, audio_emb_out_size, seq_len) - def reparameterize(self, mu, logvar): - std = torch.exp(0.5 * logvar) - eps = torch.randn_like(std) - return mu + eps * std - - def forward(self, batch): - batch = self.encoder(batch) - mu = batch['mu'] - logvar = batch['logvar'] - z = self.reparameterize(mu, logvar) - batch['z'] = z - return self.decoder(batch) - - def test(self, batch): - ''' - class_id = batch['class'] - z = torch.randn([class_id.size(0), self.latent_size]).to(class_id.device) - batch['z'] = z - ''' - return self.decoder(batch) - -class ENCODER(nn.Module): - def __init__(self, layer_sizes, latent_size, num_classes, - audio_emb_in_size, audio_emb_out_size, seq_len): - super().__init__() - - self.resunet = ResUnet() - self.num_classes = num_classes - self.seq_len = seq_len - - self.MLP = nn.Sequential() - layer_sizes[0] += latent_size + seq_len*audio_emb_out_size + 6 - for i, (in_size, out_size) in enumerate(zip(layer_sizes[:-1], layer_sizes[1:])): - self.MLP.add_module( - name="L{:d}".format(i), module=nn.Linear(in_size, out_size)) - self.MLP.add_module(name="A{:d}".format(i), module=nn.ReLU()) - - self.linear_means = nn.Linear(layer_sizes[-1], latent_size) - self.linear_logvar = nn.Linear(layer_sizes[-1], latent_size) - self.linear_audio = nn.Linear(audio_emb_in_size, audio_emb_out_size) - - self.classbias = nn.Parameter(torch.randn(self.num_classes, latent_size)) - - def forward(self, batch): - class_id = batch['class'] - pose_motion_gt = batch['pose_motion_gt'] #bs seq_len 6 - ref = batch['ref'] #bs 6 - bs = pose_motion_gt.shape[0] - audio_in = batch['audio_emb'] # bs seq_len audio_emb_in_size - - #pose encode - pose_emb = self.resunet(pose_motion_gt.unsqueeze(1)) #bs 1 seq_len 6 - pose_emb = pose_emb.reshape(bs, -1) #bs seq_len*6 - - #audio mapping - print(audio_in.shape) - audio_out = self.linear_audio(audio_in) # bs seq_len audio_emb_out_size - audio_out = audio_out.reshape(bs, -1) - - class_bias = self.classbias[class_id] #bs latent_size - x_in = torch.cat([ref, pose_emb, audio_out, class_bias], dim=-1) #bs seq_len*(audio_emb_out_size+6)+latent_size - x_out = self.MLP(x_in) - - mu = self.linear_means(x_out) - logvar = self.linear_means(x_out) #bs latent_size - - batch.update({'mu':mu, 'logvar':logvar}) - return batch - -class DECODER(nn.Module): - def __init__(self, layer_sizes, latent_size, num_classes, - audio_emb_in_size, audio_emb_out_size, seq_len): - super().__init__() - - self.resunet = ResUnet() - self.num_classes = num_classes - self.seq_len = seq_len - - self.MLP = nn.Sequential() - input_size = latent_size + seq_len*audio_emb_out_size + 6 - for i, (in_size, out_size) in enumerate(zip([input_size]+layer_sizes[:-1], layer_sizes)): - self.MLP.add_module( 
- name="L{:d}".format(i), module=nn.Linear(in_size, out_size)) - if i+1 < len(layer_sizes): - self.MLP.add_module(name="A{:d}".format(i), module=nn.ReLU()) - else: - self.MLP.add_module(name="sigmoid", module=nn.Sigmoid()) - - self.pose_linear = nn.Linear(6, 6) - self.linear_audio = nn.Linear(audio_emb_in_size, audio_emb_out_size) - - self.classbias = nn.Parameter(torch.randn(self.num_classes, latent_size)) - - def forward(self, batch): - - z = batch['z'] #bs latent_size - bs = z.shape[0] - class_id = batch['class'] - ref = batch['ref'] #bs 6 - audio_in = batch['audio_emb'] # bs seq_len audio_emb_in_size - #print('audio_in: ', audio_in[:, :, :10]) - - audio_out = self.linear_audio(audio_in) # bs seq_len audio_emb_out_size - #print('audio_out: ', audio_out[:, :, :10]) - audio_out = audio_out.reshape([bs, -1]) # bs seq_len*audio_emb_out_size - class_bias = self.classbias[class_id] #bs latent_size - - z = z + class_bias - x_in = torch.cat([ref, z, audio_out], dim=-1) - x_out = self.MLP(x_in) # bs layer_sizes[-1] - x_out = x_out.reshape((bs, self.seq_len, -1)) - - #print('x_out: ', x_out) - - pose_emb = self.resunet(x_out.unsqueeze(1)) #bs 1 seq_len 6 - - pose_motion_pred = self.pose_linear(pose_emb.squeeze(1)) #bs seq_len 6 - - batch.update({'pose_motion_pred':pose_motion_pred}) - return batch diff --git a/spaces/812vaishnavi/gradio-land-cover-mapping/app.py b/spaces/812vaishnavi/gradio-land-cover-mapping/app.py deleted file mode 100644 index d28873bc1623a30150f842cbcd6260a0808ae8c7..0000000000000000000000000000000000000000 --- a/spaces/812vaishnavi/gradio-land-cover-mapping/app.py +++ /dev/null @@ -1,63 +0,0 @@ -import gradio as gr -import PIL - -from tensorflow.keras.models import load_model -#import segmentation_models as sm -#import efficientnet.keras as efn -import matplotlib.pyplot as plt -import tensorflow as tf -import numpy as np -import cv2 - -lr=1e-5 - -#iou_score = [sm.metrics.IOUScore(threshold=0.5)] - -def iou_loss(y_true, y_pred): - y_true = tf.reshape(y_true, [-1]) - y_pred = tf.reshape(y_pred, [-1]) - intersection = tf.reduce_sum(tf.cast(y_true, tf.float32) * tf.cast(y_pred, tf.float32)) - score = (intersection + 1.) / (tf.reduce_sum(tf.cast(y_true, tf.float32)) + - tf.reduce_sum(tf.cast(y_pred, tf.float32)) - intersection + 1.) - return 1 - score - -def mean_iou(y_true, y_pred): - y_pred = tf.round(tf.cast(y_pred, tf.int32)) - intersect = tf.reduce_sum(tf.cast(y_true, tf.float32) * tf.cast(y_pred, tf.float32), axis=[1]) - union = tf.reduce_sum(tf.cast(y_true, tf.float32),axis=[1]) + tf.reduce_sum(tf.cast(y_pred, tf.float32),axis=[1]) - smooth = tf.ones(tf.shape(intersect)) - return tf.reduce_mean((intersect + smooth) / (union - intersect + smooth)) - -model1 = load_model('UNET[Scratch].h5', compile=False) - -model1.compile(optimizer = tf.keras.optimizers.Adam(lr), - loss=iou_loss, metrics=[mean_iou],) - -class_names = ['urban_land', 'agriculture_land', 'rangeland', 'forest_land', 'water','barren_land','unknown'] - -def Unet(img): - img_1=img.reshape(-1, 256, 256, 3) - prediction=model1.predict(img_1).flatten() - return {class_names[i]: float(prediction[i]) for i in range(7)} -iface1 = gr.Interface(fn=Unet, inputs = gr.inputs.Image(shape = (256, 256)), outputs = gr.outputs.Label(num_top_classes=7), title="Unet", - description="""Segmenting land from an image using a deep learning model. - This application aims to provide a user-friendly interface for segmenting land areas in images. 
- Firstly we get an intermediate output as a segmented image of the land cover, which is later converted into the percentage of the respective land classes. - Overall, we aim to make land segmentation accessible to a wide range of users and facilitating further analysis and decision-making based on the segmented land regions.""") - -''' -def fpn(img): - img_2=img.reshape(-1,256, 256, 3) - prediction=model2.predict(img_2).flatten() - return {class_names[i]: float(prediction[i]) for i in range(7)} -iface2 = gr.Interface(fn=fpn, inputs = gr.inputs.Image(shape = (256, 256)), outputs = gr.outputs.Label(num_top_classes=7), title="FPN",) - -# Combine both interfaces into a single Parallel interface -gr.Parallel(iface1, iface2, title="Land Segmentation: Unet vs FPN", - description="""Segmenting land from an image using a deep learning model. - This application aims to provide a user-friendly interface for segmenting land areas in images. - Firstly we get an intermediate output as a segmented image of the land cover, which is later converted into the percentage of the respective land classes. - Overall, we aim to make land segmentation accessible to a wide range of users and facilitating further analysis and decision-making based on the segmented land regions.""", - ).launch(share=True, debug=True, auth=("admin", "pass1234")) -''' -iface1.launch(inline=False) \ No newline at end of file diff --git a/spaces/A00001/bingothoo/src/components/markdown.tsx b/spaces/A00001/bingothoo/src/components/markdown.tsx deleted file mode 100644 index d4491467a1f14d1d72e535caac9c40636054e5df..0000000000000000000000000000000000000000 --- a/spaces/A00001/bingothoo/src/components/markdown.tsx +++ /dev/null @@ -1,9 +0,0 @@ -import { FC, memo } from 'react' -import ReactMarkdown, { Options } from 'react-markdown' - -export const MemoizedReactMarkdown: FC = memo( - ReactMarkdown, - (prevProps, nextProps) => - prevProps.children === nextProps.children && - prevProps.className === nextProps.className -) diff --git a/spaces/AB-TW/team-ai/documents/bussiness_context/NOTION_DB/Engineering Wiki 2402f5396a3244fdb3f1d135bdb0f3d6/Engineering Interviews 4be8039581d04456b0151f2cc4b22130.md b/spaces/AB-TW/team-ai/documents/bussiness_context/NOTION_DB/Engineering Wiki 2402f5396a3244fdb3f1d135bdb0f3d6/Engineering Interviews 4be8039581d04456b0151f2cc4b22130.md deleted file mode 100644 index 785c3166ef76cfc3467ae2eeb8984c146b745ac3..0000000000000000000000000000000000000000 --- a/spaces/AB-TW/team-ai/documents/bussiness_context/NOTION_DB/Engineering Wiki 2402f5396a3244fdb3f1d135bdb0f3d6/Engineering Interviews 4be8039581d04456b0151f2cc4b22130.md +++ /dev/null @@ -1,30 +0,0 @@ -# Engineering Interviews - -Last edited time: March 31, 2023 1:49 PM -Owner: Anonymous -Tags: Guides and Processes - - - -# Philosophy - -Create a quote by typing `/quote` and pressing `enter`. - -> Before you build a better mousetrap, it helps to know if there are any mice out there. —Yogi Berra -> - -# Interview Question Database - - - -[Questions](Engineering%20Interviews%204be8039581d04456b0151f2cc4b22130/Questions%20ede8818b3a0e447f80145905690eb3f6.md) - -# Further Reading - -For more on databases, check out this [Notion guide](https://www.notion.so/fd8cd2d212f74c50954c11086d85997e). 
\ No newline at end of file diff --git a/spaces/ADobrovsky/Plant_Disease_Classification_Project/README.md b/spaces/ADobrovsky/Plant_Disease_Classification_Project/README.md deleted file mode 100644 index a2b697636b5d29c12a9336e66e7617593252692f..0000000000000000000000000000000000000000 --- a/spaces/ADobrovsky/Plant_Disease_Classification_Project/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Plant Disease Classification Project -emoji: 💩 -colorFrom: indigo -colorTo: purple -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AI-Hobbyist/Hoyo-RVC/uvr5_pack/lib_v5/layers_537227KB.py b/spaces/AI-Hobbyist/Hoyo-RVC/uvr5_pack/lib_v5/layers_537227KB.py deleted file mode 100644 index 78e539250075d7fed2f349d05e3317dfe2c96804..0000000000000000000000000000000000000000 --- a/spaces/AI-Hobbyist/Hoyo-RVC/uvr5_pack/lib_v5/layers_537227KB.py +++ /dev/null @@ -1,126 +0,0 @@ -import torch -from torch import nn -import torch.nn.functional as F - -from uvr5_pack.lib_v5 import spec_utils - - -class Conv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(Conv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nout, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - bias=False, - ), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class SeperableConv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(SeperableConv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nin, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - groups=nin, - bias=False, - ), - nn.Conv2d(nin, nout, kernel_size=1, bias=False), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class Encoder(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU): - super(Encoder, self).__init__() - self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - self.conv2 = Conv2DBNActiv(nout, nout, ksize, stride, pad, activ=activ) - - def __call__(self, x): - skip = self.conv1(x) - h = self.conv2(skip) - - return h, skip - - -class Decoder(nn.Module): - def __init__( - self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False - ): - super(Decoder, self).__init__() - self.conv = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - self.dropout = nn.Dropout2d(0.1) if dropout else None - - def __call__(self, x, skip=None): - x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True) - if skip is not None: - skip = spec_utils.crop_center(skip, x) - x = torch.cat([x, skip], dim=1) - h = self.conv(x) - - if self.dropout is not None: - h = self.dropout(h) - - return h - - -class ASPPModule(nn.Module): - def __init__(self, nin, nout, dilations=(4, 8, 16, 32, 64), activ=nn.ReLU): - super(ASPPModule, self).__init__() - self.conv1 = nn.Sequential( - nn.AdaptiveAvgPool2d((1, None)), - Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ), - ) - self.conv2 = Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ) - self.conv3 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[0], dilations[0], activ=activ - ) - self.conv4 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[1], dilations[1], activ=activ - ) - self.conv5 = SeperableConv2DBNActiv( - nin, nin, 
3, 1, dilations[2], dilations[2], activ=activ - ) - self.conv6 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.conv7 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.bottleneck = nn.Sequential( - Conv2DBNActiv(nin * 7, nout, 1, 1, 0, activ=activ), nn.Dropout2d(0.1) - ) - - def forward(self, x): - _, _, h, w = x.size() - feat1 = F.interpolate( - self.conv1(x), size=(h, w), mode="bilinear", align_corners=True - ) - feat2 = self.conv2(x) - feat3 = self.conv3(x) - feat4 = self.conv4(x) - feat5 = self.conv5(x) - feat6 = self.conv6(x) - feat7 = self.conv7(x) - out = torch.cat((feat1, feat2, feat3, feat4, feat5, feat6, feat7), dim=1) - bottle = self.bottleneck(out) - return bottle diff --git a/spaces/AIFILMS/StyleGANEX/models/encoders/psp_encoders.py b/spaces/AIFILMS/StyleGANEX/models/encoders/psp_encoders.py deleted file mode 100644 index b8ed6a10130312fa44923db44f953be90936f26d..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/StyleGANEX/models/encoders/psp_encoders.py +++ /dev/null @@ -1,357 +0,0 @@ -import numpy as np -import torch -import torch.nn.functional as F -from torch import nn -from torch.nn import Linear, Conv2d, BatchNorm2d, PReLU, Sequential, Module - -from models.encoders.helpers import get_blocks, Flatten, bottleneck_IR, bottleneck_IR_SE -from models.stylegan2.model import EqualLinear - - -class GradualStyleBlock(Module): - def __init__(self, in_c, out_c, spatial, max_pooling=False): - super(GradualStyleBlock, self).__init__() - self.out_c = out_c - self.spatial = spatial - self.max_pooling = max_pooling - num_pools = int(np.log2(spatial)) - modules = [] - modules += [Conv2d(in_c, out_c, kernel_size=3, stride=2, padding=1), - nn.LeakyReLU()] - for i in range(num_pools - 1): - modules += [ - Conv2d(out_c, out_c, kernel_size=3, stride=2, padding=1), - nn.LeakyReLU() - ] - self.convs = nn.Sequential(*modules) - self.linear = EqualLinear(out_c, out_c, lr_mul=1) - - def forward(self, x): - x = self.convs(x) - # To make E accept more general H*W images, we add global average pooling to - # resize all features to 1*1*512 before mapping to latent codes - if self.max_pooling: - x = F.adaptive_max_pool2d(x, 1) ##### modified - else: - x = F.adaptive_avg_pool2d(x, 1) ##### modified - x = x.view(-1, self.out_c) - x = self.linear(x) - return x - -class AdaptiveInstanceNorm(nn.Module): - def __init__(self, fin, style_dim=512): - super().__init__() - - self.norm = nn.InstanceNorm2d(fin, affine=False) - self.style = nn.Linear(style_dim, fin * 2) - - self.style.bias.data[:fin] = 1 - self.style.bias.data[fin:] = 0 - - def forward(self, input, style): - style = self.style(style).unsqueeze(2).unsqueeze(3) - gamma, beta = style.chunk(2, 1) - out = self.norm(input) - out = gamma * out + beta - return out - - -class FusionLayer(Module): ##### modified - def __init__(self, inchannel, outchannel, use_skip_torgb=True, use_att=0): - super(FusionLayer, self).__init__() - - self.transform = nn.Sequential(nn.Conv2d(inchannel, outchannel, kernel_size=3, stride=1, padding=1), - nn.LeakyReLU()) - self.fusion_out = nn.Conv2d(outchannel*2, outchannel, kernel_size=3, stride=1, padding=1) - self.fusion_out.weight.data *= 0.01 - self.fusion_out.weight[:,0:outchannel,1,1].data += torch.eye(outchannel) - - self.use_skip_torgb = use_skip_torgb - if use_skip_torgb: - self.fusion_skip = nn.Conv2d(3+outchannel, 3, kernel_size=3, stride=1, padding=1) - self.fusion_skip.weight.data *= 0.01 - 
self.fusion_skip.weight[:,0:3,1,1].data += torch.eye(3) - - self.use_att = use_att - if use_att: - modules = [] - modules.append(nn.Linear(512, outchannel)) - for _ in range(use_att): - modules.append(nn.LeakyReLU(negative_slope=0.2, inplace=True)) - modules.append(nn.Linear(outchannel, outchannel)) - modules.append(nn.LeakyReLU(negative_slope=0.2, inplace=True)) - self.linear = Sequential(*modules) - self.norm = AdaptiveInstanceNorm(outchannel*2, outchannel) - self.conv = nn.Conv2d(outchannel*2, 1, 3, 1, 1, bias=True) - - def forward(self, feat, out, skip, editing_w=None): - x = self.transform(feat) - # similar to VToonify, use editing vector as condition - # fuse encoder feature and decoder feature with a predicted attention mask m_E - # if self.use_att = False, just fuse them with a simple conv layer - if self.use_att and editing_w is not None: - label = self.linear(editing_w) - m_E = (F.relu(self.conv(self.norm(torch.cat([out, abs(out-x)], dim=1), label)))).tanh() - x = x * m_E - out = self.fusion_out(torch.cat((out, x), dim=1)) - if self.use_skip_torgb: - skip = self.fusion_skip(torch.cat((skip, x), dim=1)) - return out, skip - - -class ResnetBlock(nn.Module): - def __init__(self, dim): - super(ResnetBlock, self).__init__() - - self.conv_block = nn.Sequential(Conv2d(dim, dim, 3, 1, 1), - nn.LeakyReLU(), - Conv2d(dim, dim, 3, 1, 1)) - self.relu = nn.LeakyReLU() - - def forward(self, x): - out = x + self.conv_block(x) - return self.relu(out) - -# trainable light-weight translation network T -# for sketch/mask-to-face translation, -# we add a trainable T to map y to an intermediate domain where E can more easily extract features. -class ResnetGenerator(nn.Module): - def __init__(self, in_channel=19, res_num=2): - super(ResnetGenerator, self).__init__() - - modules = [] - modules.append(Conv2d(in_channel, 16, 3, 2, 1)) - modules.append(nn.LeakyReLU()) - modules.append(Conv2d(16, 16, 3, 2, 1)) - modules.append(nn.LeakyReLU()) - for _ in range(res_num): - modules.append(ResnetBlock(16)) - for _ in range(2): - modules.append(nn.ConvTranspose2d(16, 16, 3, 2, 1, output_padding=1)) - modules.append(nn.LeakyReLU()) - modules.append(Conv2d(16, 64, 3, 1, 1, bias=False)) - modules.append(BatchNorm2d(64)) - modules.append(PReLU(64)) - self.model = Sequential(*modules) - - def forward(self, input): - return self.model(input) - -class GradualStyleEncoder(Module): - def __init__(self, num_layers, mode='ir', opts=None): - super(GradualStyleEncoder, self).__init__() - assert num_layers in [50, 100, 152], 'num_layers should be 50,100, or 152' - assert mode in ['ir', 'ir_se'], 'mode should be ir or ir_se' - blocks = get_blocks(num_layers) - if mode == 'ir': - unit_module = bottleneck_IR - elif mode == 'ir_se': - unit_module = bottleneck_IR_SE - - # for sketch/mask-to-face translation, add a new network T - if opts.input_nc != 3: - self.input_label_layer = ResnetGenerator(opts.input_nc, opts.res_num) - - self.input_layer = Sequential(Conv2d(3, 64, (3, 3), 1, 1, bias=False), - BatchNorm2d(64), - PReLU(64)) - modules = [] - for block in blocks: - for bottleneck in block: - modules.append(unit_module(bottleneck.in_channel, - bottleneck.depth, - bottleneck.stride)) - self.body = Sequential(*modules) - - self.styles = nn.ModuleList() - self.style_count = opts.n_styles - self.coarse_ind = 3 - self.middle_ind = 7 - for i in range(self.style_count): - if i < self.coarse_ind: - style = GradualStyleBlock(512, 512, 16, 'max_pooling' in opts and opts.max_pooling) - elif i < self.middle_ind: - style = 
GradualStyleBlock(512, 512, 32, 'max_pooling' in opts and opts.max_pooling) - else: - style = GradualStyleBlock(512, 512, 64, 'max_pooling' in opts and opts.max_pooling) - self.styles.append(style) - self.latlayer1 = nn.Conv2d(256, 512, kernel_size=1, stride=1, padding=0) - self.latlayer2 = nn.Conv2d(128, 512, kernel_size=1, stride=1, padding=0) - - # we concatenate pSp features in the middle layers and - # add a convolution layer to map the concatenated features to the first-layer input feature f of G. - self.featlayer = nn.Conv2d(768, 512, kernel_size=1, stride=1, padding=0) ##### modified - self.skiplayer = nn.Conv2d(768, 3, kernel_size=1, stride=1, padding=0) ##### modified - - # skip connection - if 'use_skip' in opts and opts.use_skip: ##### modified - self.fusion = nn.ModuleList() - channels = [[256,512], [256,512], [256,512], [256,512], [128,512], [64,256], [64,128]] - # opts.skip_max_layer: how many layers are skipped to the decoder - for inc, outc in channels[:max(1, min(7, opts.skip_max_layer))]: # from 4 to 256 - self.fusion.append(FusionLayer(inc, outc, opts.use_skip_torgb, opts.use_att)) - - def _upsample_add(self, x, y): - '''Upsample and add two feature maps. - Args: - x: (Variable) top feature map to be upsampled. - y: (Variable) lateral feature map. - Returns: - (Variable) added feature map. - Note in PyTorch, when input size is odd, the upsampled feature map - with `F.upsample(..., scale_factor=2, mode='nearest')` - maybe not equal to the lateral feature map size. - e.g. - original input size: [N,_,15,15] -> - conv2d feature map size: [N,_,8,8] -> - upsampled feature map size: [N,_,16,16] - So we choose bilinear upsample which supports arbitrary output sizes. - ''' - _, _, H, W = y.size() - return F.interpolate(x, size=(H, W), mode='bilinear', align_corners=True) + y - - # return_feat: return f - # return_full: return f and the skipped encoder features - # return [out, feats] - # out is the style latent code w+ - # feats[0] is f for the 1st conv layer, feats[1] is f for the 1st torgb layer - # feats[2-8] is the skipped encoder features - def forward(self, x, return_feat=False, return_full=False): ##### modified - if x.shape[1] != 3: - x = self.input_label_layer(x) - else: - x = self.input_layer(x) - c256 = x ##### modified - - latents = [] - modulelist = list(self.body._modules.values()) - for i, l in enumerate(modulelist): - x = l(x) - if i == 2: ##### modified - c128 = x - elif i == 6: - c1 = x - elif i == 10: ##### modified - c21 = x ##### modified - elif i == 15: ##### modified - c22 = x ##### modified - elif i == 20: - c2 = x - elif i == 23: - c3 = x - - for j in range(self.coarse_ind): - latents.append(self.styles[j](c3)) - - p2 = self._upsample_add(c3, self.latlayer1(c2)) - for j in range(self.coarse_ind, self.middle_ind): - latents.append(self.styles[j](p2)) - - p1 = self._upsample_add(p2, self.latlayer2(c1)) - for j in range(self.middle_ind, self.style_count): - latents.append(self.styles[j](p1)) - - out = torch.stack(latents, dim=1) - - if not return_feat: - return out - - feats = [self.featlayer(torch.cat((c21, c22, c2), dim=1)), self.skiplayer(torch.cat((c21, c22, c2), dim=1))] - - if return_full: ##### modified - feats += [c2, c2, c22, c21, c1, c128, c256] - - return out, feats - - - # only compute the first-layer feature f - # E_F in the paper - def get_feat(self, x): ##### modified - # for sketch/mask-to-face translation - # use a trainable light-weight translation network T - if x.shape[1] != 3: - x = self.input_label_layer(x) - else: - x = 
self.input_layer(x) - - latents = [] - modulelist = list(self.body._modules.values()) - for i, l in enumerate(modulelist): - x = l(x) - if i == 10: ##### modified - c21 = x ##### modified - elif i == 15: ##### modified - c22 = x ##### modified - elif i == 20: - c2 = x - break - return self.featlayer(torch.cat((c21, c22, c2), dim=1)) - -class BackboneEncoderUsingLastLayerIntoW(Module): - def __init__(self, num_layers, mode='ir', opts=None): - super(BackboneEncoderUsingLastLayerIntoW, self).__init__() - print('Using BackboneEncoderUsingLastLayerIntoW') - assert num_layers in [50, 100, 152], 'num_layers should be 50,100, or 152' - assert mode in ['ir', 'ir_se'], 'mode should be ir or ir_se' - blocks = get_blocks(num_layers) - if mode == 'ir': - unit_module = bottleneck_IR - elif mode == 'ir_se': - unit_module = bottleneck_IR_SE - self.input_layer = Sequential(Conv2d(opts.input_nc, 64, (3, 3), 1, 1, bias=False), - BatchNorm2d(64), - PReLU(64)) - self.output_pool = torch.nn.AdaptiveAvgPool2d((1, 1)) - self.linear = EqualLinear(512, 512, lr_mul=1) - modules = [] - for block in blocks: - for bottleneck in block: - modules.append(unit_module(bottleneck.in_channel, - bottleneck.depth, - bottleneck.stride)) - self.body = Sequential(*modules) - - def forward(self, x): - x = self.input_layer(x) - x = self.body(x) - x = self.output_pool(x) - x = x.view(-1, 512) - x = self.linear(x) - return x - - -class BackboneEncoderUsingLastLayerIntoWPlus(Module): - def __init__(self, num_layers, mode='ir', opts=None): - super(BackboneEncoderUsingLastLayerIntoWPlus, self).__init__() - print('Using BackboneEncoderUsingLastLayerIntoWPlus') - assert num_layers in [50, 100, 152], 'num_layers should be 50,100, or 152' - assert mode in ['ir', 'ir_se'], 'mode should be ir or ir_se' - blocks = get_blocks(num_layers) - if mode == 'ir': - unit_module = bottleneck_IR - elif mode == 'ir_se': - unit_module = bottleneck_IR_SE - self.n_styles = opts.n_styles - self.input_layer = Sequential(Conv2d(opts.input_nc, 64, (3, 3), 1, 1, bias=False), - BatchNorm2d(64), - PReLU(64)) - self.output_layer_2 = Sequential(BatchNorm2d(512), - torch.nn.AdaptiveAvgPool2d((7, 7)), - Flatten(), - Linear(512 * 7 * 7, 512)) - self.linear = EqualLinear(512, 512 * self.n_styles, lr_mul=1) - modules = [] - for block in blocks: - for bottleneck in block: - modules.append(unit_module(bottleneck.in_channel, - bottleneck.depth, - bottleneck.stride)) - self.body = Sequential(*modules) - - def forward(self, x): - x = self.input_layer(x) - x = self.body(x) - x = self.output_layer_2(x) - x = self.linear(x) - x = x.view(-1, self.n_styles, 512) - return x diff --git a/spaces/AIFILMS/StyleGANEX/scripts/calc_losses_on_images.py b/spaces/AIFILMS/StyleGANEX/scripts/calc_losses_on_images.py deleted file mode 100644 index 436348db28a625d94f63bbb86ff779b92d28b419..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/StyleGANEX/scripts/calc_losses_on_images.py +++ /dev/null @@ -1,84 +0,0 @@ -from argparse import ArgumentParser -import os -import json -import sys -from tqdm import tqdm -import numpy as np -import torch -from torch.utils.data import DataLoader -import torchvision.transforms as transforms - -sys.path.append(".") -sys.path.append("..") - -from criteria.lpips.lpips import LPIPS -from datasets.gt_res_dataset import GTResDataset - - -def parse_args(): - parser = ArgumentParser(add_help=False) - parser.add_argument('--mode', type=str, default='lpips', choices=['lpips', 'l2']) - parser.add_argument('--data_path', type=str, default='results') - 
parser.add_argument('--gt_path', type=str, default='gt_images') - parser.add_argument('--workers', type=int, default=4) - parser.add_argument('--batch_size', type=int, default=4) - args = parser.parse_args() - return args - - -def run(args): - - transform = transforms.Compose([transforms.Resize((256, 256)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) - - print('Loading dataset') - dataset = GTResDataset(root_path=args.data_path, - gt_dir=args.gt_path, - transform=transform) - - dataloader = DataLoader(dataset, - batch_size=args.batch_size, - shuffle=False, - num_workers=int(args.workers), - drop_last=True) - - if args.mode == 'lpips': - loss_func = LPIPS(net_type='alex') - elif args.mode == 'l2': - loss_func = torch.nn.MSELoss() - else: - raise Exception('Not a valid mode!') - loss_func.cuda() - - global_i = 0 - scores_dict = {} - all_scores = [] - for result_batch, gt_batch in tqdm(dataloader): - for i in range(args.batch_size): - loss = float(loss_func(result_batch[i:i+1].cuda(), gt_batch[i:i+1].cuda())) - all_scores.append(loss) - im_path = dataset.pairs[global_i][0] - scores_dict[os.path.basename(im_path)] = loss - global_i += 1 - - all_scores = list(scores_dict.values()) - mean = np.mean(all_scores) - std = np.std(all_scores) - result_str = 'Average loss is {:.2f}+-{:.2f}'.format(mean, std) - print('Finished with ', args.data_path) - print(result_str) - - out_path = os.path.join(os.path.dirname(args.data_path), 'inference_metrics') - if not os.path.exists(out_path): - os.makedirs(out_path) - - with open(os.path.join(out_path, 'stat_{}.txt'.format(args.mode)), 'w') as f: - f.write(result_str) - with open(os.path.join(out_path, 'scores_{}.json'.format(args.mode)), 'w') as f: - json.dump(scores_dict, f) - - -if __name__ == '__main__': - args = parse_args() - run(args) diff --git a/spaces/AIGC-Audio/Make_An_Audio/ldm/lr_scheduler.py b/spaces/AIGC-Audio/Make_An_Audio/ldm/lr_scheduler.py deleted file mode 100644 index be39da9ca6dacc22bf3df9c7389bbb403a4a3ade..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/Make_An_Audio/ldm/lr_scheduler.py +++ /dev/null @@ -1,98 +0,0 @@ -import numpy as np - - -class LambdaWarmUpCosineScheduler: - """ - note: use with a base_lr of 1.0 - """ - def __init__(self, warm_up_steps, lr_min, lr_max, lr_start, max_decay_steps, verbosity_interval=0): - self.lr_warm_up_steps = warm_up_steps - self.lr_start = lr_start - self.lr_min = lr_min - self.lr_max = lr_max - self.lr_max_decay_steps = max_decay_steps - self.last_lr = 0. - self.verbosity_interval = verbosity_interval - - def schedule(self, n, **kwargs): - if self.verbosity_interval > 0: - if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_lr}") - if n < self.lr_warm_up_steps: - lr = (self.lr_max - self.lr_start) / self.lr_warm_up_steps * n + self.lr_start - self.last_lr = lr - return lr - else: - t = (n - self.lr_warm_up_steps) / (self.lr_max_decay_steps - self.lr_warm_up_steps) - t = min(t, 1.0) - lr = self.lr_min + 0.5 * (self.lr_max - self.lr_min) * ( - 1 + np.cos(t * np.pi)) - self.last_lr = lr - return lr - - def __call__(self, n, **kwargs): - return self.schedule(n,**kwargs) - - -class LambdaWarmUpCosineScheduler2: - """ - supports repeated iterations, configurable via lists - note: use with a base_lr of 1.0. 
- """ - def __init__(self, warm_up_steps, f_min, f_max, f_start, cycle_lengths, verbosity_interval=0): - assert len(warm_up_steps) == len(f_min) == len(f_max) == len(f_start) == len(cycle_lengths) - self.lr_warm_up_steps = warm_up_steps - self.f_start = f_start - self.f_min = f_min - self.f_max = f_max - self.cycle_lengths = cycle_lengths - self.cum_cycles = np.cumsum([0] + list(self.cycle_lengths)) - self.last_f = 0. - self.verbosity_interval = verbosity_interval - - def find_in_interval(self, n): - interval = 0 - for cl in self.cum_cycles[1:]: - if n <= cl: - return interval - interval += 1 - - def schedule(self, n, **kwargs): - cycle = self.find_in_interval(n) - n = n - self.cum_cycles[cycle] - if self.verbosity_interval > 0: - if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_f}, " - f"current cycle {cycle}") - if n < self.lr_warm_up_steps[cycle]: - f = (self.f_max[cycle] - self.f_start[cycle]) / self.lr_warm_up_steps[cycle] * n + self.f_start[cycle] - self.last_f = f - return f - else: - t = (n - self.lr_warm_up_steps[cycle]) / (self.cycle_lengths[cycle] - self.lr_warm_up_steps[cycle]) - t = min(t, 1.0) - f = self.f_min[cycle] + 0.5 * (self.f_max[cycle] - self.f_min[cycle]) * ( - 1 + np.cos(t * np.pi)) - self.last_f = f - return f - - def __call__(self, n, **kwargs): - return self.schedule(n, **kwargs) - - -class LambdaLinearScheduler(LambdaWarmUpCosineScheduler2): - - def schedule(self, n, **kwargs): - cycle = self.find_in_interval(n) - n = n - self.cum_cycles[cycle] - if self.verbosity_interval > 0: - if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_f}, " - f"current cycle {cycle}") - - if n < self.lr_warm_up_steps[cycle]: - f = (self.f_max[cycle] - self.f_start[cycle]) / self.lr_warm_up_steps[cycle] * n + self.f_start[cycle] - self.last_f = f - return f - else: - f = self.f_min[cycle] + (self.f_max[cycle] - self.f_min[cycle]) * (self.cycle_lengths[cycle] - n) / (self.cycle_lengths[cycle]) - self.last_f = f - return f - diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb16-120e_deepfashion2_long_sleeved_outwear_256x192/td_hm_res50_4xb16-120e_deepfashion2_long_sleeved_outwear_256x192.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb16-120e_deepfashion2_long_sleeved_outwear_256x192/td_hm_res50_4xb16-120e_deepfashion2_long_sleeved_outwear_256x192.py deleted file mode 100644 index 2ecceb26149155301cd6d22b93be6c4948dfaaa0..0000000000000000000000000000000000000000 --- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb16-120e_deepfashion2_long_sleeved_outwear_256x192/td_hm_res50_4xb16-120e_deepfashion2_long_sleeved_outwear_256x192.py +++ /dev/null @@ -1,2861 +0,0 @@ -default_scope = 'mmpose' -default_hooks = dict( - timer=dict(type='IterTimerHook'), - logger=dict(type='LoggerHook', interval=50), - param_scheduler=dict(type='ParamSchedulerHook'), - checkpoint=dict( - type='CheckpointHook', interval=10, save_best='PCK', rule='greater'), - sampler_seed=dict(type='DistSamplerSeedHook'), - visualization=dict(type='PoseVisualizationHook', enable=False)) -custom_hooks = [dict(type='SyncBuffersHook')] -env_cfg = dict( - cudnn_benchmark=False, - mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0), - dist_cfg=dict(backend='nccl')) -vis_backends = [dict(type='LocalVisBackend')] -visualizer = dict( - type='PoseLocalVisualizer', - 
vis_backends=[dict(type='LocalVisBackend'), - dict(type='WandbVisBackend')], - name='visualizer') -log_processor = dict( - type='LogProcessor', window_size=50, by_epoch=True, num_digits=6) -log_level = 'INFO' -load_from = None -resume = False -backend_args = dict(backend='local') -train_cfg = dict(by_epoch=True, max_epochs=120, val_interval=10) -val_cfg = dict() -test_cfg = dict() -colors = dict( - sss=[255, 128, 0], - lss=[255, 0, 128], - sso=[128, 0, 255], - lso=[0, 128, 255], - vest=[0, 128, 128], - sling=[0, 0, 128], - shorts=[128, 128, 128], - trousers=[128, 0, 128], - skirt=[64, 128, 128], - ssd=[64, 64, 128], - lsd=[128, 64, 0], - vd=[128, 64, 255], - sd=[128, 64, 0]) -dataset_info = dict( - dataset_name='deepfashion2', - paper_info=dict( - author= - 'Yuying Ge and Ruimao Zhang and Lingyun Wu and Xiaogang Wang and Xiaoou Tang and Ping Luo', - title= - 'DeepFashion2: A Versatile Benchmark for Detection, Pose Estimation, Segmentation and Re-Identification of Clothing Images', - container= - 'Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)', - year='2019', - homepage='https://github.com/switchablenorms/DeepFashion2'), - keypoint_info=dict({ - 0: - dict(name='sss_kpt1', id=0, color=[255, 128, 0], type='', swap=''), - 1: - dict( - name='sss_kpt2', - id=1, - color=[255, 128, 0], - type='', - swap='sss_kpt6'), - 2: - dict( - name='sss_kpt3', - id=2, - color=[255, 128, 0], - type='', - swap='sss_kpt5'), - 3: - dict(name='sss_kpt4', id=3, color=[255, 128, 0], type='', swap=''), - 4: - dict( - name='sss_kpt5', - id=4, - color=[255, 128, 0], - type='', - swap='sss_kpt3'), - 5: - dict( - name='sss_kpt6', - id=5, - color=[255, 128, 0], - type='', - swap='sss_kpt2'), - 6: - dict( - name='sss_kpt7', - id=6, - color=[255, 128, 0], - type='', - swap='sss_kpt25'), - 7: - dict( - name='sss_kpt8', - id=7, - color=[255, 128, 0], - type='', - swap='sss_kpt24'), - 8: - dict( - name='sss_kpt9', - id=8, - color=[255, 128, 0], - type='', - swap='sss_kpt23'), - 9: - dict( - name='sss_kpt10', - id=9, - color=[255, 128, 0], - type='', - swap='sss_kpt22'), - 10: - dict( - name='sss_kpt11', - id=10, - color=[255, 128, 0], - type='', - swap='sss_kpt21'), - 11: - dict( - name='sss_kpt12', - id=11, - color=[255, 128, 0], - type='', - swap='sss_kpt20'), - 12: - dict( - name='sss_kpt13', - id=12, - color=[255, 128, 0], - type='', - swap='sss_kpt19'), - 13: - dict( - name='sss_kpt14', - id=13, - color=[255, 128, 0], - type='', - swap='sss_kpt18'), - 14: - dict( - name='sss_kpt15', - id=14, - color=[255, 128, 0], - type='', - swap='sss_kpt17'), - 15: - dict(name='sss_kpt16', id=15, color=[255, 128, 0], type='', swap=''), - 16: - dict( - name='sss_kpt17', - id=16, - color=[255, 128, 0], - type='', - swap='sss_kpt15'), - 17: - dict( - name='sss_kpt18', - id=17, - color=[255, 128, 0], - type='', - swap='sss_kpt14'), - 18: - dict( - name='sss_kpt19', - id=18, - color=[255, 128, 0], - type='', - swap='sss_kpt13'), - 19: - dict( - name='sss_kpt20', - id=19, - color=[255, 128, 0], - type='', - swap='sss_kpt12'), - 20: - dict( - name='sss_kpt21', - id=20, - color=[255, 128, 0], - type='', - swap='sss_kpt11'), - 21: - dict( - name='sss_kpt22', - id=21, - color=[255, 128, 0], - type='', - swap='sss_kpt10'), - 22: - dict( - name='sss_kpt23', - id=22, - color=[255, 128, 0], - type='', - swap='sss_kpt9'), - 23: - dict( - name='sss_kpt24', - id=23, - color=[255, 128, 0], - type='', - swap='sss_kpt8'), - 24: - dict( - name='sss_kpt25', - id=24, - color=[255, 128, 0], - type='', - swap='sss_kpt7'), - 
25: - dict(name='lss_kpt1', id=25, color=[255, 0, 128], type='', swap=''), - 26: - dict( - name='lss_kpt2', - id=26, - color=[255, 0, 128], - type='', - swap='lss_kpt6'), - 27: - dict( - name='lss_kpt3', - id=27, - color=[255, 0, 128], - type='', - swap='lss_kpt5'), - 28: - dict(name='lss_kpt4', id=28, color=[255, 0, 128], type='', swap=''), - 29: - dict( - name='lss_kpt5', - id=29, - color=[255, 0, 128], - type='', - swap='lss_kpt3'), - 30: - dict( - name='lss_kpt6', - id=30, - color=[255, 0, 128], - type='', - swap='lss_kpt2'), - 31: - dict( - name='lss_kpt7', - id=31, - color=[255, 0, 128], - type='', - swap='lss_kpt33'), - 32: - dict( - name='lss_kpt8', - id=32, - color=[255, 0, 128], - type='', - swap='lss_kpt32'), - 33: - dict( - name='lss_kpt9', - id=33, - color=[255, 0, 128], - type='', - swap='lss_kpt31'), - 34: - dict( - name='lss_kpt10', - id=34, - color=[255, 0, 128], - type='', - swap='lss_kpt30'), - 35: - dict( - name='lss_kpt11', - id=35, - color=[255, 0, 128], - type='', - swap='lss_kpt29'), - 36: - dict( - name='lss_kpt12', - id=36, - color=[255, 0, 128], - type='', - swap='lss_kpt28'), - 37: - dict( - name='lss_kpt13', - id=37, - color=[255, 0, 128], - type='', - swap='lss_kpt27'), - 38: - dict( - name='lss_kpt14', - id=38, - color=[255, 0, 128], - type='', - swap='lss_kpt26'), - 39: - dict( - name='lss_kpt15', - id=39, - color=[255, 0, 128], - type='', - swap='lss_kpt25'), - 40: - dict( - name='lss_kpt16', - id=40, - color=[255, 0, 128], - type='', - swap='lss_kpt24'), - 41: - dict( - name='lss_kpt17', - id=41, - color=[255, 0, 128], - type='', - swap='lss_kpt23'), - 42: - dict( - name='lss_kpt18', - id=42, - color=[255, 0, 128], - type='', - swap='lss_kpt22'), - 43: - dict( - name='lss_kpt19', - id=43, - color=[255, 0, 128], - type='', - swap='lss_kpt21'), - 44: - dict(name='lss_kpt20', id=44, color=[255, 0, 128], type='', swap=''), - 45: - dict( - name='lss_kpt21', - id=45, - color=[255, 0, 128], - type='', - swap='lss_kpt19'), - 46: - dict( - name='lss_kpt22', - id=46, - color=[255, 0, 128], - type='', - swap='lss_kpt18'), - 47: - dict( - name='lss_kpt23', - id=47, - color=[255, 0, 128], - type='', - swap='lss_kpt17'), - 48: - dict( - name='lss_kpt24', - id=48, - color=[255, 0, 128], - type='', - swap='lss_kpt16'), - 49: - dict( - name='lss_kpt25', - id=49, - color=[255, 0, 128], - type='', - swap='lss_kpt15'), - 50: - dict( - name='lss_kpt26', - id=50, - color=[255, 0, 128], - type='', - swap='lss_kpt14'), - 51: - dict( - name='lss_kpt27', - id=51, - color=[255, 0, 128], - type='', - swap='lss_kpt13'), - 52: - dict( - name='lss_kpt28', - id=52, - color=[255, 0, 128], - type='', - swap='lss_kpt12'), - 53: - dict( - name='lss_kpt29', - id=53, - color=[255, 0, 128], - type='', - swap='lss_kpt11'), - 54: - dict( - name='lss_kpt30', - id=54, - color=[255, 0, 128], - type='', - swap='lss_kpt10'), - 55: - dict( - name='lss_kpt31', - id=55, - color=[255, 0, 128], - type='', - swap='lss_kpt9'), - 56: - dict( - name='lss_kpt32', - id=56, - color=[255, 0, 128], - type='', - swap='lss_kpt8'), - 57: - dict( - name='lss_kpt33', - id=57, - color=[255, 0, 128], - type='', - swap='lss_kpt7'), - 58: - dict(name='sso_kpt1', id=58, color=[128, 0, 255], type='', swap=''), - 59: - dict( - name='sso_kpt2', - id=59, - color=[128, 0, 255], - type='', - swap='sso_kpt26'), - 60: - dict( - name='sso_kpt3', - id=60, - color=[128, 0, 255], - type='', - swap='sso_kpt5'), - 61: - dict( - name='sso_kpt4', - id=61, - color=[128, 0, 255], - type='', - swap='sso_kpt6'), - 62: - dict( - name='sso_kpt5', 
- id=62, - color=[128, 0, 255], - type='', - swap='sso_kpt3'), - 63: - dict( - name='sso_kpt6', - id=63, - color=[128, 0, 255], - type='', - swap='sso_kpt4'), - 64: - dict( - name='sso_kpt7', - id=64, - color=[128, 0, 255], - type='', - swap='sso_kpt25'), - 65: - dict( - name='sso_kpt8', - id=65, - color=[128, 0, 255], - type='', - swap='sso_kpt24'), - 66: - dict( - name='sso_kpt9', - id=66, - color=[128, 0, 255], - type='', - swap='sso_kpt23'), - 67: - dict( - name='sso_kpt10', - id=67, - color=[128, 0, 255], - type='', - swap='sso_kpt22'), - 68: - dict( - name='sso_kpt11', - id=68, - color=[128, 0, 255], - type='', - swap='sso_kpt21'), - 69: - dict( - name='sso_kpt12', - id=69, - color=[128, 0, 255], - type='', - swap='sso_kpt20'), - 70: - dict( - name='sso_kpt13', - id=70, - color=[128, 0, 255], - type='', - swap='sso_kpt19'), - 71: - dict( - name='sso_kpt14', - id=71, - color=[128, 0, 255], - type='', - swap='sso_kpt18'), - 72: - dict( - name='sso_kpt15', - id=72, - color=[128, 0, 255], - type='', - swap='sso_kpt17'), - 73: - dict( - name='sso_kpt16', - id=73, - color=[128, 0, 255], - type='', - swap='sso_kpt29'), - 74: - dict( - name='sso_kpt17', - id=74, - color=[128, 0, 255], - type='', - swap='sso_kpt15'), - 75: - dict( - name='sso_kpt18', - id=75, - color=[128, 0, 255], - type='', - swap='sso_kpt14'), - 76: - dict( - name='sso_kpt19', - id=76, - color=[128, 0, 255], - type='', - swap='sso_kpt13'), - 77: - dict( - name='sso_kpt20', - id=77, - color=[128, 0, 255], - type='', - swap='sso_kpt12'), - 78: - dict( - name='sso_kpt21', - id=78, - color=[128, 0, 255], - type='', - swap='sso_kpt11'), - 79: - dict( - name='sso_kpt22', - id=79, - color=[128, 0, 255], - type='', - swap='sso_kpt10'), - 80: - dict( - name='sso_kpt23', - id=80, - color=[128, 0, 255], - type='', - swap='sso_kpt9'), - 81: - dict( - name='sso_kpt24', - id=81, - color=[128, 0, 255], - type='', - swap='sso_kpt8'), - 82: - dict( - name='sso_kpt25', - id=82, - color=[128, 0, 255], - type='', - swap='sso_kpt7'), - 83: - dict( - name='sso_kpt26', - id=83, - color=[128, 0, 255], - type='', - swap='sso_kpt2'), - 84: - dict( - name='sso_kpt27', - id=84, - color=[128, 0, 255], - type='', - swap='sso_kpt30'), - 85: - dict( - name='sso_kpt28', - id=85, - color=[128, 0, 255], - type='', - swap='sso_kpt31'), - 86: - dict( - name='sso_kpt29', - id=86, - color=[128, 0, 255], - type='', - swap='sso_kpt16'), - 87: - dict( - name='sso_kpt30', - id=87, - color=[128, 0, 255], - type='', - swap='sso_kpt27'), - 88: - dict( - name='sso_kpt31', - id=88, - color=[128, 0, 255], - type='', - swap='sso_kpt28'), - 89: - dict(name='lso_kpt1', id=89, color=[0, 128, 255], type='', swap=''), - 90: - dict( - name='lso_kpt2', - id=90, - color=[0, 128, 255], - type='', - swap='lso_kpt6'), - 91: - dict( - name='lso_kpt3', - id=91, - color=[0, 128, 255], - type='', - swap='lso_kpt5'), - 92: - dict( - name='lso_kpt4', - id=92, - color=[0, 128, 255], - type='', - swap='lso_kpt34'), - 93: - dict( - name='lso_kpt5', - id=93, - color=[0, 128, 255], - type='', - swap='lso_kpt3'), - 94: - dict( - name='lso_kpt6', - id=94, - color=[0, 128, 255], - type='', - swap='lso_kpt2'), - 95: - dict( - name='lso_kpt7', - id=95, - color=[0, 128, 255], - type='', - swap='lso_kpt33'), - 96: - dict( - name='lso_kpt8', - id=96, - color=[0, 128, 255], - type='', - swap='lso_kpt32'), - 97: - dict( - name='lso_kpt9', - id=97, - color=[0, 128, 255], - type='', - swap='lso_kpt31'), - 98: - dict( - name='lso_kpt10', - id=98, - color=[0, 128, 255], - type='', - swap='lso_kpt30'), - 99: 
- dict( - name='lso_kpt11', - id=99, - color=[0, 128, 255], - type='', - swap='lso_kpt29'), - 100: - dict( - name='lso_kpt12', - id=100, - color=[0, 128, 255], - type='', - swap='lso_kpt28'), - 101: - dict( - name='lso_kpt13', - id=101, - color=[0, 128, 255], - type='', - swap='lso_kpt27'), - 102: - dict( - name='lso_kpt14', - id=102, - color=[0, 128, 255], - type='', - swap='lso_kpt26'), - 103: - dict( - name='lso_kpt15', - id=103, - color=[0, 128, 255], - type='', - swap='lso_kpt25'), - 104: - dict( - name='lso_kpt16', - id=104, - color=[0, 128, 255], - type='', - swap='lso_kpt24'), - 105: - dict( - name='lso_kpt17', - id=105, - color=[0, 128, 255], - type='', - swap='lso_kpt23'), - 106: - dict( - name='lso_kpt18', - id=106, - color=[0, 128, 255], - type='', - swap='lso_kpt22'), - 107: - dict( - name='lso_kpt19', - id=107, - color=[0, 128, 255], - type='', - swap='lso_kpt21'), - 108: - dict( - name='lso_kpt20', - id=108, - color=[0, 128, 255], - type='', - swap='lso_kpt37'), - 109: - dict( - name='lso_kpt21', - id=109, - color=[0, 128, 255], - type='', - swap='lso_kpt19'), - 110: - dict( - name='lso_kpt22', - id=110, - color=[0, 128, 255], - type='', - swap='lso_kpt18'), - 111: - dict( - name='lso_kpt23', - id=111, - color=[0, 128, 255], - type='', - swap='lso_kpt17'), - 112: - dict( - name='lso_kpt24', - id=112, - color=[0, 128, 255], - type='', - swap='lso_kpt16'), - 113: - dict( - name='lso_kpt25', - id=113, - color=[0, 128, 255], - type='', - swap='lso_kpt15'), - 114: - dict( - name='lso_kpt26', - id=114, - color=[0, 128, 255], - type='', - swap='lso_kpt14'), - 115: - dict( - name='lso_kpt27', - id=115, - color=[0, 128, 255], - type='', - swap='lso_kpt13'), - 116: - dict( - name='lso_kpt28', - id=116, - color=[0, 128, 255], - type='', - swap='lso_kpt12'), - 117: - dict( - name='lso_kpt29', - id=117, - color=[0, 128, 255], - type='', - swap='lso_kpt11'), - 118: - dict( - name='lso_kpt30', - id=118, - color=[0, 128, 255], - type='', - swap='lso_kpt10'), - 119: - dict( - name='lso_kpt31', - id=119, - color=[0, 128, 255], - type='', - swap='lso_kpt9'), - 120: - dict( - name='lso_kpt32', - id=120, - color=[0, 128, 255], - type='', - swap='lso_kpt8'), - 121: - dict( - name='lso_kpt33', - id=121, - color=[0, 128, 255], - type='', - swap='lso_kpt7'), - 122: - dict( - name='lso_kpt34', - id=122, - color=[0, 128, 255], - type='', - swap='lso_kpt4'), - 123: - dict( - name='lso_kpt35', - id=123, - color=[0, 128, 255], - type='', - swap='lso_kpt38'), - 124: - dict( - name='lso_kpt36', - id=124, - color=[0, 128, 255], - type='', - swap='lso_kpt39'), - 125: - dict( - name='lso_kpt37', - id=125, - color=[0, 128, 255], - type='', - swap='lso_kpt20'), - 126: - dict( - name='lso_kpt38', - id=126, - color=[0, 128, 255], - type='', - swap='lso_kpt35'), - 127: - dict( - name='lso_kpt39', - id=127, - color=[0, 128, 255], - type='', - swap='lso_kpt36'), - 128: - dict(name='vest_kpt1', id=128, color=[0, 128, 128], type='', swap=''), - 129: - dict( - name='vest_kpt2', - id=129, - color=[0, 128, 128], - type='', - swap='vest_kpt6'), - 130: - dict( - name='vest_kpt3', - id=130, - color=[0, 128, 128], - type='', - swap='vest_kpt5'), - 131: - dict(name='vest_kpt4', id=131, color=[0, 128, 128], type='', swap=''), - 132: - dict( - name='vest_kpt5', - id=132, - color=[0, 128, 128], - type='', - swap='vest_kpt3'), - 133: - dict( - name='vest_kpt6', - id=133, - color=[0, 128, 128], - type='', - swap='vest_kpt2'), - 134: - dict( - name='vest_kpt7', - id=134, - color=[0, 128, 128], - type='', - swap='vest_kpt15'), - 
135: - dict( - name='vest_kpt8', - id=135, - color=[0, 128, 128], - type='', - swap='vest_kpt14'), - 136: - dict( - name='vest_kpt9', - id=136, - color=[0, 128, 128], - type='', - swap='vest_kpt13'), - 137: - dict( - name='vest_kpt10', - id=137, - color=[0, 128, 128], - type='', - swap='vest_kpt12'), - 138: - dict(name='vest_kpt11', id=138, color=[0, 128, 128], type='', swap=''), - 139: - dict( - name='vest_kpt12', - id=139, - color=[0, 128, 128], - type='', - swap='vest_kpt10'), - 140: - dict(name='vest_kpt13', id=140, color=[0, 128, 128], type='', swap=''), - 141: - dict( - name='vest_kpt14', - id=141, - color=[0, 128, 128], - type='', - swap='vest_kpt8'), - 142: - dict( - name='vest_kpt15', - id=142, - color=[0, 128, 128], - type='', - swap='vest_kpt7'), - 143: - dict(name='sling_kpt1', id=143, color=[0, 0, 128], type='', swap=''), - 144: - dict( - name='sling_kpt2', - id=144, - color=[0, 0, 128], - type='', - swap='sling_kpt6'), - 145: - dict( - name='sling_kpt3', - id=145, - color=[0, 0, 128], - type='', - swap='sling_kpt5'), - 146: - dict(name='sling_kpt4', id=146, color=[0, 0, 128], type='', swap=''), - 147: - dict( - name='sling_kpt5', - id=147, - color=[0, 0, 128], - type='', - swap='sling_kpt3'), - 148: - dict( - name='sling_kpt6', - id=148, - color=[0, 0, 128], - type='', - swap='sling_kpt2'), - 149: - dict( - name='sling_kpt7', - id=149, - color=[0, 0, 128], - type='', - swap='sling_kpt15'), - 150: - dict( - name='sling_kpt8', - id=150, - color=[0, 0, 128], - type='', - swap='sling_kpt14'), - 151: - dict( - name='sling_kpt9', - id=151, - color=[0, 0, 128], - type='', - swap='sling_kpt13'), - 152: - dict( - name='sling_kpt10', - id=152, - color=[0, 0, 128], - type='', - swap='sling_kpt12'), - 153: - dict(name='sling_kpt11', id=153, color=[0, 0, 128], type='', swap=''), - 154: - dict( - name='sling_kpt12', - id=154, - color=[0, 0, 128], - type='', - swap='sling_kpt10'), - 155: - dict( - name='sling_kpt13', - id=155, - color=[0, 0, 128], - type='', - swap='sling_kpt9'), - 156: - dict( - name='sling_kpt14', - id=156, - color=[0, 0, 128], - type='', - swap='sling_kpt8'), - 157: - dict( - name='sling_kpt15', - id=157, - color=[0, 0, 128], - type='', - swap='sling_kpt7'), - 158: - dict( - name='shorts_kpt1', - id=158, - color=[128, 128, 128], - type='', - swap='shorts_kpt3'), - 159: - dict( - name='shorts_kpt2', - id=159, - color=[128, 128, 128], - type='', - swap=''), - 160: - dict( - name='shorts_kpt3', - id=160, - color=[128, 128, 128], - type='', - swap='shorts_kpt1'), - 161: - dict( - name='shorts_kpt4', - id=161, - color=[128, 128, 128], - type='', - swap='shorts_kpt10'), - 162: - dict( - name='shorts_kpt5', - id=162, - color=[128, 128, 128], - type='', - swap='shorts_kpt9'), - 163: - dict( - name='shorts_kpt6', - id=163, - color=[128, 128, 128], - type='', - swap='shorts_kpt8'), - 164: - dict( - name='shorts_kpt7', - id=164, - color=[128, 128, 128], - type='', - swap=''), - 165: - dict( - name='shorts_kpt8', - id=165, - color=[128, 128, 128], - type='', - swap='shorts_kpt6'), - 166: - dict( - name='shorts_kpt9', - id=166, - color=[128, 128, 128], - type='', - swap='shorts_kpt5'), - 167: - dict( - name='shorts_kpt10', - id=167, - color=[128, 128, 128], - type='', - swap='shorts_kpt4'), - 168: - dict( - name='trousers_kpt1', - id=168, - color=[128, 0, 128], - type='', - swap='trousers_kpt3'), - 169: - dict( - name='trousers_kpt2', - id=169, - color=[128, 0, 128], - type='', - swap=''), - 170: - dict( - name='trousers_kpt3', - id=170, - color=[128, 0, 128], - type='', - 
swap='trousers_kpt1'), - 171: - dict( - name='trousers_kpt4', - id=171, - color=[128, 0, 128], - type='', - swap='trousers_kpt14'), - 172: - dict( - name='trousers_kpt5', - id=172, - color=[128, 0, 128], - type='', - swap='trousers_kpt13'), - 173: - dict( - name='trousers_kpt6', - id=173, - color=[128, 0, 128], - type='', - swap='trousers_kpt12'), - 174: - dict( - name='trousers_kpt7', - id=174, - color=[128, 0, 128], - type='', - swap='trousers_kpt11'), - 175: - dict( - name='trousers_kpt8', - id=175, - color=[128, 0, 128], - type='', - swap='trousers_kpt10'), - 176: - dict( - name='trousers_kpt9', - id=176, - color=[128, 0, 128], - type='', - swap=''), - 177: - dict( - name='trousers_kpt10', - id=177, - color=[128, 0, 128], - type='', - swap='trousers_kpt8'), - 178: - dict( - name='trousers_kpt11', - id=178, - color=[128, 0, 128], - type='', - swap='trousers_kpt7'), - 179: - dict( - name='trousers_kpt12', - id=179, - color=[128, 0, 128], - type='', - swap='trousers_kpt6'), - 180: - dict( - name='trousers_kpt13', - id=180, - color=[128, 0, 128], - type='', - swap='trousers_kpt5'), - 181: - dict( - name='trousers_kpt14', - id=181, - color=[128, 0, 128], - type='', - swap='trousers_kpt4'), - 182: - dict( - name='skirt_kpt1', - id=182, - color=[64, 128, 128], - type='', - swap='skirt_kpt3'), - 183: - dict( - name='skirt_kpt2', id=183, color=[64, 128, 128], type='', swap=''), - 184: - dict( - name='skirt_kpt3', - id=184, - color=[64, 128, 128], - type='', - swap='skirt_kpt1'), - 185: - dict( - name='skirt_kpt4', - id=185, - color=[64, 128, 128], - type='', - swap='skirt_kpt8'), - 186: - dict( - name='skirt_kpt5', - id=186, - color=[64, 128, 128], - type='', - swap='skirt_kpt7'), - 187: - dict( - name='skirt_kpt6', id=187, color=[64, 128, 128], type='', swap=''), - 188: - dict( - name='skirt_kpt7', - id=188, - color=[64, 128, 128], - type='', - swap='skirt_kpt5'), - 189: - dict( - name='skirt_kpt8', - id=189, - color=[64, 128, 128], - type='', - swap='skirt_kpt4'), - 190: - dict(name='ssd_kpt1', id=190, color=[64, 64, 128], type='', swap=''), - 191: - dict( - name='ssd_kpt2', - id=191, - color=[64, 64, 128], - type='', - swap='ssd_kpt6'), - 192: - dict( - name='ssd_kpt3', - id=192, - color=[64, 64, 128], - type='', - swap='ssd_kpt5'), - 193: - dict(name='ssd_kpt4', id=193, color=[64, 64, 128], type='', swap=''), - 194: - dict( - name='ssd_kpt5', - id=194, - color=[64, 64, 128], - type='', - swap='ssd_kpt3'), - 195: - dict( - name='ssd_kpt6', - id=195, - color=[64, 64, 128], - type='', - swap='ssd_kpt2'), - 196: - dict( - name='ssd_kpt7', - id=196, - color=[64, 64, 128], - type='', - swap='ssd_kpt29'), - 197: - dict( - name='ssd_kpt8', - id=197, - color=[64, 64, 128], - type='', - swap='ssd_kpt28'), - 198: - dict( - name='ssd_kpt9', - id=198, - color=[64, 64, 128], - type='', - swap='ssd_kpt27'), - 199: - dict( - name='ssd_kpt10', - id=199, - color=[64, 64, 128], - type='', - swap='ssd_kpt26'), - 200: - dict( - name='ssd_kpt11', - id=200, - color=[64, 64, 128], - type='', - swap='ssd_kpt25'), - 201: - dict( - name='ssd_kpt12', - id=201, - color=[64, 64, 128], - type='', - swap='ssd_kpt24'), - 202: - dict( - name='ssd_kpt13', - id=202, - color=[64, 64, 128], - type='', - swap='ssd_kpt23'), - 203: - dict( - name='ssd_kpt14', - id=203, - color=[64, 64, 128], - type='', - swap='ssd_kpt22'), - 204: - dict( - name='ssd_kpt15', - id=204, - color=[64, 64, 128], - type='', - swap='ssd_kpt21'), - 205: - dict( - name='ssd_kpt16', - id=205, - color=[64, 64, 128], - type='', - swap='ssd_kpt20'), - 206: - 
dict( - name='ssd_kpt17', - id=206, - color=[64, 64, 128], - type='', - swap='ssd_kpt19'), - 207: - dict(name='ssd_kpt18', id=207, color=[64, 64, 128], type='', swap=''), - 208: - dict( - name='ssd_kpt19', - id=208, - color=[64, 64, 128], - type='', - swap='ssd_kpt17'), - 209: - dict( - name='ssd_kpt20', - id=209, - color=[64, 64, 128], - type='', - swap='ssd_kpt16'), - 210: - dict( - name='ssd_kpt21', - id=210, - color=[64, 64, 128], - type='', - swap='ssd_kpt15'), - 211: - dict( - name='ssd_kpt22', - id=211, - color=[64, 64, 128], - type='', - swap='ssd_kpt14'), - 212: - dict( - name='ssd_kpt23', - id=212, - color=[64, 64, 128], - type='', - swap='ssd_kpt13'), - 213: - dict( - name='ssd_kpt24', - id=213, - color=[64, 64, 128], - type='', - swap='ssd_kpt12'), - 214: - dict( - name='ssd_kpt25', - id=214, - color=[64, 64, 128], - type='', - swap='ssd_kpt11'), - 215: - dict( - name='ssd_kpt26', - id=215, - color=[64, 64, 128], - type='', - swap='ssd_kpt10'), - 216: - dict( - name='ssd_kpt27', - id=216, - color=[64, 64, 128], - type='', - swap='ssd_kpt9'), - 217: - dict( - name='ssd_kpt28', - id=217, - color=[64, 64, 128], - type='', - swap='ssd_kpt8'), - 218: - dict( - name='ssd_kpt29', - id=218, - color=[64, 64, 128], - type='', - swap='ssd_kpt7'), - 219: - dict(name='lsd_kpt1', id=219, color=[128, 64, 0], type='', swap=''), - 220: - dict( - name='lsd_kpt2', - id=220, - color=[128, 64, 0], - type='', - swap='lsd_kpt6'), - 221: - dict( - name='lsd_kpt3', - id=221, - color=[128, 64, 0], - type='', - swap='lsd_kpt5'), - 222: - dict(name='lsd_kpt4', id=222, color=[128, 64, 0], type='', swap=''), - 223: - dict( - name='lsd_kpt5', - id=223, - color=[128, 64, 0], - type='', - swap='lsd_kpt3'), - 224: - dict( - name='lsd_kpt6', - id=224, - color=[128, 64, 0], - type='', - swap='lsd_kpt2'), - 225: - dict( - name='lsd_kpt7', - id=225, - color=[128, 64, 0], - type='', - swap='lsd_kpt37'), - 226: - dict( - name='lsd_kpt8', - id=226, - color=[128, 64, 0], - type='', - swap='lsd_kpt36'), - 227: - dict( - name='lsd_kpt9', - id=227, - color=[128, 64, 0], - type='', - swap='lsd_kpt35'), - 228: - dict( - name='lsd_kpt10', - id=228, - color=[128, 64, 0], - type='', - swap='lsd_kpt34'), - 229: - dict( - name='lsd_kpt11', - id=229, - color=[128, 64, 0], - type='', - swap='lsd_kpt33'), - 230: - dict( - name='lsd_kpt12', - id=230, - color=[128, 64, 0], - type='', - swap='lsd_kpt32'), - 231: - dict( - name='lsd_kpt13', - id=231, - color=[128, 64, 0], - type='', - swap='lsd_kpt31'), - 232: - dict( - name='lsd_kpt14', - id=232, - color=[128, 64, 0], - type='', - swap='lsd_kpt30'), - 233: - dict( - name='lsd_kpt15', - id=233, - color=[128, 64, 0], - type='', - swap='lsd_kpt29'), - 234: - dict( - name='lsd_kpt16', - id=234, - color=[128, 64, 0], - type='', - swap='lsd_kpt28'), - 235: - dict( - name='lsd_kpt17', - id=235, - color=[128, 64, 0], - type='', - swap='lsd_kpt27'), - 236: - dict( - name='lsd_kpt18', - id=236, - color=[128, 64, 0], - type='', - swap='lsd_kpt26'), - 237: - dict( - name='lsd_kpt19', - id=237, - color=[128, 64, 0], - type='', - swap='lsd_kpt25'), - 238: - dict( - name='lsd_kpt20', - id=238, - color=[128, 64, 0], - type='', - swap='lsd_kpt24'), - 239: - dict( - name='lsd_kpt21', - id=239, - color=[128, 64, 0], - type='', - swap='lsd_kpt23'), - 240: - dict(name='lsd_kpt22', id=240, color=[128, 64, 0], type='', swap=''), - 241: - dict( - name='lsd_kpt23', - id=241, - color=[128, 64, 0], - type='', - swap='lsd_kpt21'), - 242: - dict( - name='lsd_kpt24', - id=242, - color=[128, 64, 0], - type='', - 
swap='lsd_kpt20'), - 243: - dict( - name='lsd_kpt25', - id=243, - color=[128, 64, 0], - type='', - swap='lsd_kpt19'), - 244: - dict( - name='lsd_kpt26', - id=244, - color=[128, 64, 0], - type='', - swap='lsd_kpt18'), - 245: - dict( - name='lsd_kpt27', - id=245, - color=[128, 64, 0], - type='', - swap='lsd_kpt17'), - 246: - dict( - name='lsd_kpt28', - id=246, - color=[128, 64, 0], - type='', - swap='lsd_kpt16'), - 247: - dict( - name='lsd_kpt29', - id=247, - color=[128, 64, 0], - type='', - swap='lsd_kpt15'), - 248: - dict( - name='lsd_kpt30', - id=248, - color=[128, 64, 0], - type='', - swap='lsd_kpt14'), - 249: - dict( - name='lsd_kpt31', - id=249, - color=[128, 64, 0], - type='', - swap='lsd_kpt13'), - 250: - dict( - name='lsd_kpt32', - id=250, - color=[128, 64, 0], - type='', - swap='lsd_kpt12'), - 251: - dict( - name='lsd_kpt33', - id=251, - color=[128, 64, 0], - type='', - swap='lsd_kpt11'), - 252: - dict( - name='lsd_kpt34', - id=252, - color=[128, 64, 0], - type='', - swap='lsd_kpt10'), - 253: - dict( - name='lsd_kpt35', - id=253, - color=[128, 64, 0], - type='', - swap='lsd_kpt9'), - 254: - dict( - name='lsd_kpt36', - id=254, - color=[128, 64, 0], - type='', - swap='lsd_kpt8'), - 255: - dict( - name='lsd_kpt37', - id=255, - color=[128, 64, 0], - type='', - swap='lsd_kpt7'), - 256: - dict(name='vd_kpt1', id=256, color=[128, 64, 255], type='', swap=''), - 257: - dict( - name='vd_kpt2', - id=257, - color=[128, 64, 255], - type='', - swap='vd_kpt6'), - 258: - dict( - name='vd_kpt3', - id=258, - color=[128, 64, 255], - type='', - swap='vd_kpt5'), - 259: - dict(name='vd_kpt4', id=259, color=[128, 64, 255], type='', swap=''), - 260: - dict( - name='vd_kpt5', - id=260, - color=[128, 64, 255], - type='', - swap='vd_kpt3'), - 261: - dict( - name='vd_kpt6', - id=261, - color=[128, 64, 255], - type='', - swap='vd_kpt2'), - 262: - dict( - name='vd_kpt7', - id=262, - color=[128, 64, 255], - type='', - swap='vd_kpt19'), - 263: - dict( - name='vd_kpt8', - id=263, - color=[128, 64, 255], - type='', - swap='vd_kpt18'), - 264: - dict( - name='vd_kpt9', - id=264, - color=[128, 64, 255], - type='', - swap='vd_kpt17'), - 265: - dict( - name='vd_kpt10', - id=265, - color=[128, 64, 255], - type='', - swap='vd_kpt16'), - 266: - dict( - name='vd_kpt11', - id=266, - color=[128, 64, 255], - type='', - swap='vd_kpt15'), - 267: - dict( - name='vd_kpt12', - id=267, - color=[128, 64, 255], - type='', - swap='vd_kpt14'), - 268: - dict(name='vd_kpt13', id=268, color=[128, 64, 255], type='', swap=''), - 269: - dict( - name='vd_kpt14', - id=269, - color=[128, 64, 255], - type='', - swap='vd_kpt12'), - 270: - dict( - name='vd_kpt15', - id=270, - color=[128, 64, 255], - type='', - swap='vd_kpt11'), - 271: - dict( - name='vd_kpt16', - id=271, - color=[128, 64, 255], - type='', - swap='vd_kpt10'), - 272: - dict( - name='vd_kpt17', - id=272, - color=[128, 64, 255], - type='', - swap='vd_kpt9'), - 273: - dict( - name='vd_kpt18', - id=273, - color=[128, 64, 255], - type='', - swap='vd_kpt8'), - 274: - dict( - name='vd_kpt19', - id=274, - color=[128, 64, 255], - type='', - swap='vd_kpt7'), - 275: - dict(name='sd_kpt1', id=275, color=[128, 64, 0], type='', swap=''), - 276: - dict( - name='sd_kpt2', - id=276, - color=[128, 64, 0], - type='', - swap='sd_kpt6'), - 277: - dict( - name='sd_kpt3', - id=277, - color=[128, 64, 0], - type='', - swap='sd_kpt5'), - 278: - dict(name='sd_kpt4', id=278, color=[128, 64, 0], type='', swap=''), - 279: - dict( - name='sd_kpt5', - id=279, - color=[128, 64, 0], - type='', - swap='sd_kpt3'), - 
280: - dict( - name='sd_kpt6', - id=280, - color=[128, 64, 0], - type='', - swap='sd_kpt2'), - 281: - dict( - name='sd_kpt7', - id=281, - color=[128, 64, 0], - type='', - swap='sd_kpt19'), - 282: - dict( - name='sd_kpt8', - id=282, - color=[128, 64, 0], - type='', - swap='sd_kpt18'), - 283: - dict( - name='sd_kpt9', - id=283, - color=[128, 64, 0], - type='', - swap='sd_kpt17'), - 284: - dict( - name='sd_kpt10', - id=284, - color=[128, 64, 0], - type='', - swap='sd_kpt16'), - 285: - dict( - name='sd_kpt11', - id=285, - color=[128, 64, 0], - type='', - swap='sd_kpt15'), - 286: - dict( - name='sd_kpt12', - id=286, - color=[128, 64, 0], - type='', - swap='sd_kpt14'), - 287: - dict(name='sd_kpt13', id=287, color=[128, 64, 0], type='', swap=''), - 288: - dict( - name='sd_kpt14', - id=288, - color=[128, 64, 0], - type='', - swap='sd_kpt12'), - 289: - dict( - name='sd_kpt15', - id=289, - color=[128, 64, 0], - type='', - swap='sd_kpt11'), - 290: - dict( - name='sd_kpt16', - id=290, - color=[128, 64, 0], - type='', - swap='sd_kpt10'), - 291: - dict( - name='sd_kpt17', - id=291, - color=[128, 64, 0], - type='', - swap='sd_kpt9'), - 292: - dict( - name='sd_kpt18', - id=292, - color=[128, 64, 0], - type='', - swap='sd_kpt8'), - 293: - dict( - name='sd_kpt19', - id=293, - color=[128, 64, 0], - type='', - swap='sd_kpt7') - }), - skeleton_info=dict({ - 0: - dict(link=('sss_kpt1', 'sss_kpt2'), id=0, color=[255, 128, 0]), - 1: - dict(link=('sss_kpt2', 'sss_kpt7'), id=1, color=[255, 128, 0]), - 2: - dict(link=('sss_kpt7', 'sss_kpt8'), id=2, color=[255, 128, 0]), - 3: - dict(link=('sss_kpt8', 'sss_kpt9'), id=3, color=[255, 128, 0]), - 4: - dict(link=('sss_kpt9', 'sss_kpt10'), id=4, color=[255, 128, 0]), - 5: - dict(link=('sss_kpt10', 'sss_kpt11'), id=5, color=[255, 128, 0]), - 6: - dict(link=('sss_kpt11', 'sss_kpt12'), id=6, color=[255, 128, 0]), - 7: - dict(link=('sss_kpt12', 'sss_kpt13'), id=7, color=[255, 128, 0]), - 8: - dict(link=('sss_kpt13', 'sss_kpt14'), id=8, color=[255, 128, 0]), - 9: - dict(link=('sss_kpt14', 'sss_kpt15'), id=9, color=[255, 128, 0]), - 10: - dict(link=('sss_kpt15', 'sss_kpt16'), id=10, color=[255, 128, 0]), - 11: - dict(link=('sss_kpt16', 'sss_kpt17'), id=11, color=[255, 128, 0]), - 12: - dict(link=('sss_kpt17', 'sss_kpt18'), id=12, color=[255, 128, 0]), - 13: - dict(link=('sss_kpt18', 'sss_kpt19'), id=13, color=[255, 128, 0]), - 14: - dict(link=('sss_kpt19', 'sss_kpt20'), id=14, color=[255, 128, 0]), - 15: - dict(link=('sss_kpt20', 'sss_kpt21'), id=15, color=[255, 128, 0]), - 16: - dict(link=('sss_kpt21', 'sss_kpt22'), id=16, color=[255, 128, 0]), - 17: - dict(link=('sss_kpt22', 'sss_kpt23'), id=17, color=[255, 128, 0]), - 18: - dict(link=('sss_kpt23', 'sss_kpt24'), id=18, color=[255, 128, 0]), - 19: - dict(link=('sss_kpt24', 'sss_kpt25'), id=19, color=[255, 128, 0]), - 20: - dict(link=('sss_kpt25', 'sss_kpt6'), id=20, color=[255, 128, 0]), - 21: - dict(link=('sss_kpt6', 'sss_kpt1'), id=21, color=[255, 128, 0]), - 22: - dict(link=('sss_kpt2', 'sss_kpt3'), id=22, color=[255, 128, 0]), - 23: - dict(link=('sss_kpt3', 'sss_kpt4'), id=23, color=[255, 128, 0]), - 24: - dict(link=('sss_kpt4', 'sss_kpt5'), id=24, color=[255, 128, 0]), - 25: - dict(link=('sss_kpt5', 'sss_kpt6'), id=25, color=[255, 128, 0]), - 26: - dict(link=('lss_kpt1', 'lss_kpt2'), id=26, color=[255, 0, 128]), - 27: - dict(link=('lss_kpt2', 'lss_kpt7'), id=27, color=[255, 0, 128]), - 28: - dict(link=('lss_kpt7', 'lss_kpt8'), id=28, color=[255, 0, 128]), - 29: - dict(link=('lss_kpt8', 'lss_kpt9'), id=29, color=[255, 0, 
128]), - 30: - dict(link=('lss_kpt9', 'lss_kpt10'), id=30, color=[255, 0, 128]), - 31: - dict(link=('lss_kpt10', 'lss_kpt11'), id=31, color=[255, 0, 128]), - 32: - dict(link=('lss_kpt11', 'lss_kpt12'), id=32, color=[255, 0, 128]), - 33: - dict(link=('lss_kpt12', 'lss_kpt13'), id=33, color=[255, 0, 128]), - 34: - dict(link=('lss_kpt13', 'lss_kpt14'), id=34, color=[255, 0, 128]), - 35: - dict(link=('lss_kpt14', 'lss_kpt15'), id=35, color=[255, 0, 128]), - 36: - dict(link=('lss_kpt15', 'lss_kpt16'), id=36, color=[255, 0, 128]), - 37: - dict(link=('lss_kpt16', 'lss_kpt17'), id=37, color=[255, 0, 128]), - 38: - dict(link=('lss_kpt17', 'lss_kpt18'), id=38, color=[255, 0, 128]), - 39: - dict(link=('lss_kpt18', 'lss_kpt19'), id=39, color=[255, 0, 128]), - 40: - dict(link=('lss_kpt19', 'lss_kpt20'), id=40, color=[255, 0, 128]), - 41: - dict(link=('lss_kpt20', 'lss_kpt21'), id=41, color=[255, 0, 128]), - 42: - dict(link=('lss_kpt21', 'lss_kpt22'), id=42, color=[255, 0, 128]), - 43: - dict(link=('lss_kpt22', 'lss_kpt23'), id=43, color=[255, 0, 128]), - 44: - dict(link=('lss_kpt23', 'lss_kpt24'), id=44, color=[255, 0, 128]), - 45: - dict(link=('lss_kpt24', 'lss_kpt25'), id=45, color=[255, 0, 128]), - 46: - dict(link=('lss_kpt25', 'lss_kpt26'), id=46, color=[255, 0, 128]), - 47: - dict(link=('lss_kpt26', 'lss_kpt27'), id=47, color=[255, 0, 128]), - 48: - dict(link=('lss_kpt27', 'lss_kpt28'), id=48, color=[255, 0, 128]), - 49: - dict(link=('lss_kpt28', 'lss_kpt29'), id=49, color=[255, 0, 128]), - 50: - dict(link=('lss_kpt29', 'lss_kpt30'), id=50, color=[255, 0, 128]), - 51: - dict(link=('lss_kpt30', 'lss_kpt31'), id=51, color=[255, 0, 128]), - 52: - dict(link=('lss_kpt31', 'lss_kpt32'), id=52, color=[255, 0, 128]), - 53: - dict(link=('lss_kpt32', 'lss_kpt33'), id=53, color=[255, 0, 128]), - 54: - dict(link=('lss_kpt33', 'lss_kpt6'), id=54, color=[255, 0, 128]), - 55: - dict(link=('lss_kpt6', 'lss_kpt5'), id=55, color=[255, 0, 128]), - 56: - dict(link=('lss_kpt5', 'lss_kpt4'), id=56, color=[255, 0, 128]), - 57: - dict(link=('lss_kpt4', 'lss_kpt3'), id=57, color=[255, 0, 128]), - 58: - dict(link=('lss_kpt3', 'lss_kpt2'), id=58, color=[255, 0, 128]), - 59: - dict(link=('lss_kpt6', 'lss_kpt1'), id=59, color=[255, 0, 128]), - 60: - dict(link=('sso_kpt1', 'sso_kpt4'), id=60, color=[128, 0, 255]), - 61: - dict(link=('sso_kpt4', 'sso_kpt7'), id=61, color=[128, 0, 255]), - 62: - dict(link=('sso_kpt7', 'sso_kpt8'), id=62, color=[128, 0, 255]), - 63: - dict(link=('sso_kpt8', 'sso_kpt9'), id=63, color=[128, 0, 255]), - 64: - dict(link=('sso_kpt9', 'sso_kpt10'), id=64, color=[128, 0, 255]), - 65: - dict(link=('sso_kpt10', 'sso_kpt11'), id=65, color=[128, 0, 255]), - 66: - dict(link=('sso_kpt11', 'sso_kpt12'), id=66, color=[128, 0, 255]), - 67: - dict(link=('sso_kpt12', 'sso_kpt13'), id=67, color=[128, 0, 255]), - 68: - dict(link=('sso_kpt13', 'sso_kpt14'), id=68, color=[128, 0, 255]), - 69: - dict(link=('sso_kpt14', 'sso_kpt15'), id=69, color=[128, 0, 255]), - 70: - dict(link=('sso_kpt15', 'sso_kpt16'), id=70, color=[128, 0, 255]), - 71: - dict(link=('sso_kpt16', 'sso_kpt31'), id=71, color=[128, 0, 255]), - 72: - dict(link=('sso_kpt31', 'sso_kpt30'), id=72, color=[128, 0, 255]), - 73: - dict(link=('sso_kpt30', 'sso_kpt2'), id=73, color=[128, 0, 255]), - 74: - dict(link=('sso_kpt2', 'sso_kpt3'), id=74, color=[128, 0, 255]), - 75: - dict(link=('sso_kpt3', 'sso_kpt4'), id=75, color=[128, 0, 255]), - 76: - dict(link=('sso_kpt1', 'sso_kpt6'), id=76, color=[128, 0, 255]), - 77: - dict(link=('sso_kpt6', 'sso_kpt25'), 
id=77, color=[128, 0, 255]), - 78: - dict(link=('sso_kpt25', 'sso_kpt24'), id=78, color=[128, 0, 255]), - 79: - dict(link=('sso_kpt24', 'sso_kpt23'), id=79, color=[128, 0, 255]), - 80: - dict(link=('sso_kpt23', 'sso_kpt22'), id=80, color=[128, 0, 255]), - 81: - dict(link=('sso_kpt22', 'sso_kpt21'), id=81, color=[128, 0, 255]), - 82: - dict(link=('sso_kpt21', 'sso_kpt20'), id=82, color=[128, 0, 255]), - 83: - dict(link=('sso_kpt20', 'sso_kpt19'), id=83, color=[128, 0, 255]), - 84: - dict(link=('sso_kpt19', 'sso_kpt18'), id=84, color=[128, 0, 255]), - 85: - dict(link=('sso_kpt18', 'sso_kpt17'), id=85, color=[128, 0, 255]), - 86: - dict(link=('sso_kpt17', 'sso_kpt29'), id=86, color=[128, 0, 255]), - 87: - dict(link=('sso_kpt29', 'sso_kpt28'), id=87, color=[128, 0, 255]), - 88: - dict(link=('sso_kpt28', 'sso_kpt27'), id=88, color=[128, 0, 255]), - 89: - dict(link=('sso_kpt27', 'sso_kpt26'), id=89, color=[128, 0, 255]), - 90: - dict(link=('sso_kpt26', 'sso_kpt5'), id=90, color=[128, 0, 255]), - 91: - dict(link=('sso_kpt5', 'sso_kpt6'), id=91, color=[128, 0, 255]), - 92: - dict(link=('lso_kpt1', 'lso_kpt2'), id=92, color=[0, 128, 255]), - 93: - dict(link=('lso_kpt2', 'lso_kpt7'), id=93, color=[0, 128, 255]), - 94: - dict(link=('lso_kpt7', 'lso_kpt8'), id=94, color=[0, 128, 255]), - 95: - dict(link=('lso_kpt8', 'lso_kpt9'), id=95, color=[0, 128, 255]), - 96: - dict(link=('lso_kpt9', 'lso_kpt10'), id=96, color=[0, 128, 255]), - 97: - dict(link=('lso_kpt10', 'lso_kpt11'), id=97, color=[0, 128, 255]), - 98: - dict(link=('lso_kpt11', 'lso_kpt12'), id=98, color=[0, 128, 255]), - 99: - dict(link=('lso_kpt12', 'lso_kpt13'), id=99, color=[0, 128, 255]), - 100: - dict(link=('lso_kpt13', 'lso_kpt14'), id=100, color=[0, 128, 255]), - 101: - dict(link=('lso_kpt14', 'lso_kpt15'), id=101, color=[0, 128, 255]), - 102: - dict(link=('lso_kpt15', 'lso_kpt16'), id=102, color=[0, 128, 255]), - 103: - dict(link=('lso_kpt16', 'lso_kpt17'), id=103, color=[0, 128, 255]), - 104: - dict(link=('lso_kpt17', 'lso_kpt18'), id=104, color=[0, 128, 255]), - 105: - dict(link=('lso_kpt18', 'lso_kpt19'), id=105, color=[0, 128, 255]), - 106: - dict(link=('lso_kpt19', 'lso_kpt20'), id=106, color=[0, 128, 255]), - 107: - dict(link=('lso_kpt20', 'lso_kpt39'), id=107, color=[0, 128, 255]), - 108: - dict(link=('lso_kpt39', 'lso_kpt38'), id=108, color=[0, 128, 255]), - 109: - dict(link=('lso_kpt38', 'lso_kpt4'), id=109, color=[0, 128, 255]), - 110: - dict(link=('lso_kpt4', 'lso_kpt3'), id=110, color=[0, 128, 255]), - 111: - dict(link=('lso_kpt3', 'lso_kpt2'), id=111, color=[0, 128, 255]), - 112: - dict(link=('lso_kpt1', 'lso_kpt6'), id=112, color=[0, 128, 255]), - 113: - dict(link=('lso_kpt6', 'lso_kpt33'), id=113, color=[0, 128, 255]), - 114: - dict(link=('lso_kpt33', 'lso_kpt32'), id=114, color=[0, 128, 255]), - 115: - dict(link=('lso_kpt32', 'lso_kpt31'), id=115, color=[0, 128, 255]), - 116: - dict(link=('lso_kpt31', 'lso_kpt30'), id=116, color=[0, 128, 255]), - 117: - dict(link=('lso_kpt30', 'lso_kpt29'), id=117, color=[0, 128, 255]), - 118: - dict(link=('lso_kpt29', 'lso_kpt28'), id=118, color=[0, 128, 255]), - 119: - dict(link=('lso_kpt28', 'lso_kpt27'), id=119, color=[0, 128, 255]), - 120: - dict(link=('lso_kpt27', 'lso_kpt26'), id=120, color=[0, 128, 255]), - 121: - dict(link=('lso_kpt26', 'lso_kpt25'), id=121, color=[0, 128, 255]), - 122: - dict(link=('lso_kpt25', 'lso_kpt24'), id=122, color=[0, 128, 255]), - 123: - dict(link=('lso_kpt24', 'lso_kpt23'), id=123, color=[0, 128, 255]), - 124: - dict(link=('lso_kpt23', 'lso_kpt22'), 
id=124, color=[0, 128, 255]), - 125: - dict(link=('lso_kpt22', 'lso_kpt21'), id=125, color=[0, 128, 255]), - 126: - dict(link=('lso_kpt21', 'lso_kpt37'), id=126, color=[0, 128, 255]), - 127: - dict(link=('lso_kpt37', 'lso_kpt36'), id=127, color=[0, 128, 255]), - 128: - dict(link=('lso_kpt36', 'lso_kpt35'), id=128, color=[0, 128, 255]), - 129: - dict(link=('lso_kpt35', 'lso_kpt34'), id=129, color=[0, 128, 255]), - 130: - dict(link=('lso_kpt34', 'lso_kpt5'), id=130, color=[0, 128, 255]), - 131: - dict(link=('lso_kpt5', 'lso_kpt6'), id=131, color=[0, 128, 255]), - 132: - dict(link=('vest_kpt1', 'vest_kpt2'), id=132, color=[0, 128, 128]), - 133: - dict(link=('vest_kpt2', 'vest_kpt7'), id=133, color=[0, 128, 128]), - 134: - dict(link=('vest_kpt7', 'vest_kpt8'), id=134, color=[0, 128, 128]), - 135: - dict(link=('vest_kpt8', 'vest_kpt9'), id=135, color=[0, 128, 128]), - 136: - dict(link=('vest_kpt9', 'vest_kpt10'), id=136, color=[0, 128, 128]), - 137: - dict(link=('vest_kpt10', 'vest_kpt11'), id=137, color=[0, 128, 128]), - 138: - dict(link=('vest_kpt11', 'vest_kpt12'), id=138, color=[0, 128, 128]), - 139: - dict(link=('vest_kpt12', 'vest_kpt13'), id=139, color=[0, 128, 128]), - 140: - dict(link=('vest_kpt13', 'vest_kpt14'), id=140, color=[0, 128, 128]), - 141: - dict(link=('vest_kpt14', 'vest_kpt15'), id=141, color=[0, 128, 128]), - 142: - dict(link=('vest_kpt15', 'vest_kpt6'), id=142, color=[0, 128, 128]), - 143: - dict(link=('vest_kpt6', 'vest_kpt1'), id=143, color=[0, 128, 128]), - 144: - dict(link=('vest_kpt2', 'vest_kpt3'), id=144, color=[0, 128, 128]), - 145: - dict(link=('vest_kpt3', 'vest_kpt4'), id=145, color=[0, 128, 128]), - 146: - dict(link=('vest_kpt4', 'vest_kpt5'), id=146, color=[0, 128, 128]), - 147: - dict(link=('vest_kpt5', 'vest_kpt6'), id=147, color=[0, 128, 128]), - 148: - dict(link=('sling_kpt1', 'sling_kpt2'), id=148, color=[0, 0, 128]), - 149: - dict(link=('sling_kpt2', 'sling_kpt8'), id=149, color=[0, 0, 128]), - 150: - dict(link=('sling_kpt8', 'sling_kpt9'), id=150, color=[0, 0, 128]), - 151: - dict(link=('sling_kpt9', 'sling_kpt10'), id=151, color=[0, 0, 128]), - 152: - dict(link=('sling_kpt10', 'sling_kpt11'), id=152, color=[0, 0, 128]), - 153: - dict(link=('sling_kpt11', 'sling_kpt12'), id=153, color=[0, 0, 128]), - 154: - dict(link=('sling_kpt12', 'sling_kpt13'), id=154, color=[0, 0, 128]), - 155: - dict(link=('sling_kpt13', 'sling_kpt14'), id=155, color=[0, 0, 128]), - 156: - dict(link=('sling_kpt14', 'sling_kpt6'), id=156, color=[0, 0, 128]), - 157: - dict(link=('sling_kpt2', 'sling_kpt7'), id=157, color=[0, 0, 128]), - 158: - dict(link=('sling_kpt6', 'sling_kpt15'), id=158, color=[0, 0, 128]), - 159: - dict(link=('sling_kpt2', 'sling_kpt3'), id=159, color=[0, 0, 128]), - 160: - dict(link=('sling_kpt3', 'sling_kpt4'), id=160, color=[0, 0, 128]), - 161: - dict(link=('sling_kpt4', 'sling_kpt5'), id=161, color=[0, 0, 128]), - 162: - dict(link=('sling_kpt5', 'sling_kpt6'), id=162, color=[0, 0, 128]), - 163: - dict(link=('sling_kpt1', 'sling_kpt6'), id=163, color=[0, 0, 128]), - 164: - dict( - link=('shorts_kpt1', 'shorts_kpt4'), id=164, color=[128, 128, - 128]), - 165: - dict( - link=('shorts_kpt4', 'shorts_kpt5'), id=165, color=[128, 128, - 128]), - 166: - dict( - link=('shorts_kpt5', 'shorts_kpt6'), id=166, color=[128, 128, - 128]), - 167: - dict( - link=('shorts_kpt6', 'shorts_kpt7'), id=167, color=[128, 128, - 128]), - 168: - dict( - link=('shorts_kpt7', 'shorts_kpt8'), id=168, color=[128, 128, - 128]), - 169: - dict( - link=('shorts_kpt8', 'shorts_kpt9'), id=169, 
color=[128, 128, - 128]), - 170: - dict( - link=('shorts_kpt9', 'shorts_kpt10'), - id=170, - color=[128, 128, 128]), - 171: - dict( - link=('shorts_kpt10', 'shorts_kpt3'), - id=171, - color=[128, 128, 128]), - 172: - dict( - link=('shorts_kpt3', 'shorts_kpt2'), id=172, color=[128, 128, - 128]), - 173: - dict( - link=('shorts_kpt2', 'shorts_kpt1'), id=173, color=[128, 128, - 128]), - 174: - dict( - link=('trousers_kpt1', 'trousers_kpt4'), - id=174, - color=[128, 0, 128]), - 175: - dict( - link=('trousers_kpt4', 'trousers_kpt5'), - id=175, - color=[128, 0, 128]), - 176: - dict( - link=('trousers_kpt5', 'trousers_kpt6'), - id=176, - color=[128, 0, 128]), - 177: - dict( - link=('trousers_kpt6', 'trousers_kpt7'), - id=177, - color=[128, 0, 128]), - 178: - dict( - link=('trousers_kpt7', 'trousers_kpt8'), - id=178, - color=[128, 0, 128]), - 179: - dict( - link=('trousers_kpt8', 'trousers_kpt9'), - id=179, - color=[128, 0, 128]), - 180: - dict( - link=('trousers_kpt9', 'trousers_kpt10'), - id=180, - color=[128, 0, 128]), - 181: - dict( - link=('trousers_kpt10', 'trousers_kpt11'), - id=181, - color=[128, 0, 128]), - 182: - dict( - link=('trousers_kpt11', 'trousers_kpt12'), - id=182, - color=[128, 0, 128]), - 183: - dict( - link=('trousers_kpt12', 'trousers_kpt13'), - id=183, - color=[128, 0, 128]), - 184: - dict( - link=('trousers_kpt13', 'trousers_kpt14'), - id=184, - color=[128, 0, 128]), - 185: - dict( - link=('trousers_kpt14', 'trousers_kpt3'), - id=185, - color=[128, 0, 128]), - 186: - dict( - link=('trousers_kpt3', 'trousers_kpt2'), - id=186, - color=[128, 0, 128]), - 187: - dict( - link=('trousers_kpt2', 'trousers_kpt1'), - id=187, - color=[128, 0, 128]), - 188: - dict(link=('skirt_kpt1', 'skirt_kpt4'), id=188, color=[64, 128, 128]), - 189: - dict(link=('skirt_kpt4', 'skirt_kpt5'), id=189, color=[64, 128, 128]), - 190: - dict(link=('skirt_kpt5', 'skirt_kpt6'), id=190, color=[64, 128, 128]), - 191: - dict(link=('skirt_kpt6', 'skirt_kpt7'), id=191, color=[64, 128, 128]), - 192: - dict(link=('skirt_kpt7', 'skirt_kpt8'), id=192, color=[64, 128, 128]), - 193: - dict(link=('skirt_kpt8', 'skirt_kpt3'), id=193, color=[64, 128, 128]), - 194: - dict(link=('skirt_kpt3', 'skirt_kpt2'), id=194, color=[64, 128, 128]), - 195: - dict(link=('skirt_kpt2', 'skirt_kpt1'), id=195, color=[64, 128, 128]), - 196: - dict(link=('ssd_kpt1', 'ssd_kpt2'), id=196, color=[64, 64, 128]), - 197: - dict(link=('ssd_kpt2', 'ssd_kpt7'), id=197, color=[64, 64, 128]), - 198: - dict(link=('ssd_kpt7', 'ssd_kpt8'), id=198, color=[64, 64, 128]), - 199: - dict(link=('ssd_kpt8', 'ssd_kpt9'), id=199, color=[64, 64, 128]), - 200: - dict(link=('ssd_kpt9', 'ssd_kpt10'), id=200, color=[64, 64, 128]), - 201: - dict(link=('ssd_kpt10', 'ssd_kpt11'), id=201, color=[64, 64, 128]), - 202: - dict(link=('ssd_kpt11', 'ssd_kpt12'), id=202, color=[64, 64, 128]), - 203: - dict(link=('ssd_kpt12', 'ssd_kpt13'), id=203, color=[64, 64, 128]), - 204: - dict(link=('ssd_kpt13', 'ssd_kpt14'), id=204, color=[64, 64, 128]), - 205: - dict(link=('ssd_kpt14', 'ssd_kpt15'), id=205, color=[64, 64, 128]), - 206: - dict(link=('ssd_kpt15', 'ssd_kpt16'), id=206, color=[64, 64, 128]), - 207: - dict(link=('ssd_kpt16', 'ssd_kpt17'), id=207, color=[64, 64, 128]), - 208: - dict(link=('ssd_kpt17', 'ssd_kpt18'), id=208, color=[64, 64, 128]), - 209: - dict(link=('ssd_kpt18', 'ssd_kpt19'), id=209, color=[64, 64, 128]), - 210: - dict(link=('ssd_kpt19', 'ssd_kpt20'), id=210, color=[64, 64, 128]), - 211: - dict(link=('ssd_kpt20', 'ssd_kpt21'), id=211, color=[64, 64, 128]), - 212: - 
dict(link=('ssd_kpt21', 'ssd_kpt22'), id=212, color=[64, 64, 128]), - 213: - dict(link=('ssd_kpt22', 'ssd_kpt23'), id=213, color=[64, 64, 128]), - 214: - dict(link=('ssd_kpt23', 'ssd_kpt24'), id=214, color=[64, 64, 128]), - 215: - dict(link=('ssd_kpt24', 'ssd_kpt25'), id=215, color=[64, 64, 128]), - 216: - dict(link=('ssd_kpt25', 'ssd_kpt26'), id=216, color=[64, 64, 128]), - 217: - dict(link=('ssd_kpt26', 'ssd_kpt27'), id=217, color=[64, 64, 128]), - 218: - dict(link=('ssd_kpt27', 'ssd_kpt28'), id=218, color=[64, 64, 128]), - 219: - dict(link=('ssd_kpt28', 'ssd_kpt29'), id=219, color=[64, 64, 128]), - 220: - dict(link=('ssd_kpt29', 'ssd_kpt6'), id=220, color=[64, 64, 128]), - 221: - dict(link=('ssd_kpt6', 'ssd_kpt5'), id=221, color=[64, 64, 128]), - 222: - dict(link=('ssd_kpt5', 'ssd_kpt4'), id=222, color=[64, 64, 128]), - 223: - dict(link=('ssd_kpt4', 'ssd_kpt3'), id=223, color=[64, 64, 128]), - 224: - dict(link=('ssd_kpt3', 'ssd_kpt2'), id=224, color=[64, 64, 128]), - 225: - dict(link=('ssd_kpt6', 'ssd_kpt1'), id=225, color=[64, 64, 128]), - 226: - dict(link=('lsd_kpt1', 'lsd_kpt2'), id=226, color=[128, 64, 0]), - 227: - dict(link=('lsd_kpt2', 'lsd_kpt7'), id=228, color=[128, 64, 0]), - 228: - dict(link=('lsd_kpt7', 'lsd_kpt8'), id=228, color=[128, 64, 0]), - 229: - dict(link=('lsd_kpt8', 'lsd_kpt9'), id=229, color=[128, 64, 0]), - 230: - dict(link=('lsd_kpt9', 'lsd_kpt10'), id=230, color=[128, 64, 0]), - 231: - dict(link=('lsd_kpt10', 'lsd_kpt11'), id=231, color=[128, 64, 0]), - 232: - dict(link=('lsd_kpt11', 'lsd_kpt12'), id=232, color=[128, 64, 0]), - 233: - dict(link=('lsd_kpt12', 'lsd_kpt13'), id=233, color=[128, 64, 0]), - 234: - dict(link=('lsd_kpt13', 'lsd_kpt14'), id=234, color=[128, 64, 0]), - 235: - dict(link=('lsd_kpt14', 'lsd_kpt15'), id=235, color=[128, 64, 0]), - 236: - dict(link=('lsd_kpt15', 'lsd_kpt16'), id=236, color=[128, 64, 0]), - 237: - dict(link=('lsd_kpt16', 'lsd_kpt17'), id=237, color=[128, 64, 0]), - 238: - dict(link=('lsd_kpt17', 'lsd_kpt18'), id=238, color=[128, 64, 0]), - 239: - dict(link=('lsd_kpt18', 'lsd_kpt19'), id=239, color=[128, 64, 0]), - 240: - dict(link=('lsd_kpt19', 'lsd_kpt20'), id=240, color=[128, 64, 0]), - 241: - dict(link=('lsd_kpt20', 'lsd_kpt21'), id=241, color=[128, 64, 0]), - 242: - dict(link=('lsd_kpt21', 'lsd_kpt22'), id=242, color=[128, 64, 0]), - 243: - dict(link=('lsd_kpt22', 'lsd_kpt23'), id=243, color=[128, 64, 0]), - 244: - dict(link=('lsd_kpt23', 'lsd_kpt24'), id=244, color=[128, 64, 0]), - 245: - dict(link=('lsd_kpt24', 'lsd_kpt25'), id=245, color=[128, 64, 0]), - 246: - dict(link=('lsd_kpt25', 'lsd_kpt26'), id=246, color=[128, 64, 0]), - 247: - dict(link=('lsd_kpt26', 'lsd_kpt27'), id=247, color=[128, 64, 0]), - 248: - dict(link=('lsd_kpt27', 'lsd_kpt28'), id=248, color=[128, 64, 0]), - 249: - dict(link=('lsd_kpt28', 'lsd_kpt29'), id=249, color=[128, 64, 0]), - 250: - dict(link=('lsd_kpt29', 'lsd_kpt30'), id=250, color=[128, 64, 0]), - 251: - dict(link=('lsd_kpt30', 'lsd_kpt31'), id=251, color=[128, 64, 0]), - 252: - dict(link=('lsd_kpt31', 'lsd_kpt32'), id=252, color=[128, 64, 0]), - 253: - dict(link=('lsd_kpt32', 'lsd_kpt33'), id=253, color=[128, 64, 0]), - 254: - dict(link=('lsd_kpt33', 'lsd_kpt34'), id=254, color=[128, 64, 0]), - 255: - dict(link=('lsd_kpt34', 'lsd_kpt35'), id=255, color=[128, 64, 0]), - 256: - dict(link=('lsd_kpt35', 'lsd_kpt36'), id=256, color=[128, 64, 0]), - 257: - dict(link=('lsd_kpt36', 'lsd_kpt37'), id=257, color=[128, 64, 0]), - 258: - dict(link=('lsd_kpt37', 'lsd_kpt6'), id=258, color=[128, 64, 
0]), - 259: - dict(link=('lsd_kpt6', 'lsd_kpt5'), id=259, color=[128, 64, 0]), - 260: - dict(link=('lsd_kpt5', 'lsd_kpt4'), id=260, color=[128, 64, 0]), - 261: - dict(link=('lsd_kpt4', 'lsd_kpt3'), id=261, color=[128, 64, 0]), - 262: - dict(link=('lsd_kpt3', 'lsd_kpt2'), id=262, color=[128, 64, 0]), - 263: - dict(link=('lsd_kpt6', 'lsd_kpt1'), id=263, color=[128, 64, 0]), - 264: - dict(link=('vd_kpt1', 'vd_kpt2'), id=264, color=[128, 64, 255]), - 265: - dict(link=('vd_kpt2', 'vd_kpt7'), id=265, color=[128, 64, 255]), - 266: - dict(link=('vd_kpt7', 'vd_kpt8'), id=266, color=[128, 64, 255]), - 267: - dict(link=('vd_kpt8', 'vd_kpt9'), id=267, color=[128, 64, 255]), - 268: - dict(link=('vd_kpt9', 'vd_kpt10'), id=268, color=[128, 64, 255]), - 269: - dict(link=('vd_kpt10', 'vd_kpt11'), id=269, color=[128, 64, 255]), - 270: - dict(link=('vd_kpt11', 'vd_kpt12'), id=270, color=[128, 64, 255]), - 271: - dict(link=('vd_kpt12', 'vd_kpt13'), id=271, color=[128, 64, 255]), - 272: - dict(link=('vd_kpt13', 'vd_kpt14'), id=272, color=[128, 64, 255]), - 273: - dict(link=('vd_kpt14', 'vd_kpt15'), id=273, color=[128, 64, 255]), - 274: - dict(link=('vd_kpt15', 'vd_kpt16'), id=274, color=[128, 64, 255]), - 275: - dict(link=('vd_kpt16', 'vd_kpt17'), id=275, color=[128, 64, 255]), - 276: - dict(link=('vd_kpt17', 'vd_kpt18'), id=276, color=[128, 64, 255]), - 277: - dict(link=('vd_kpt18', 'vd_kpt19'), id=277, color=[128, 64, 255]), - 278: - dict(link=('vd_kpt19', 'vd_kpt6'), id=278, color=[128, 64, 255]), - 279: - dict(link=('vd_kpt6', 'vd_kpt5'), id=279, color=[128, 64, 255]), - 280: - dict(link=('vd_kpt5', 'vd_kpt4'), id=280, color=[128, 64, 255]), - 281: - dict(link=('vd_kpt4', 'vd_kpt3'), id=281, color=[128, 64, 255]), - 282: - dict(link=('vd_kpt3', 'vd_kpt2'), id=282, color=[128, 64, 255]), - 283: - dict(link=('vd_kpt6', 'vd_kpt1'), id=283, color=[128, 64, 255]), - 284: - dict(link=('sd_kpt1', 'sd_kpt2'), id=284, color=[128, 64, 0]), - 285: - dict(link=('sd_kpt2', 'sd_kpt8'), id=285, color=[128, 64, 0]), - 286: - dict(link=('sd_kpt8', 'sd_kpt9'), id=286, color=[128, 64, 0]), - 287: - dict(link=('sd_kpt9', 'sd_kpt10'), id=287, color=[128, 64, 0]), - 288: - dict(link=('sd_kpt10', 'sd_kpt11'), id=288, color=[128, 64, 0]), - 289: - dict(link=('sd_kpt11', 'sd_kpt12'), id=289, color=[128, 64, 0]), - 290: - dict(link=('sd_kpt12', 'sd_kpt13'), id=290, color=[128, 64, 0]), - 291: - dict(link=('sd_kpt13', 'sd_kpt14'), id=291, color=[128, 64, 0]), - 292: - dict(link=('sd_kpt14', 'sd_kpt15'), id=292, color=[128, 64, 0]), - 293: - dict(link=('sd_kpt15', 'sd_kpt16'), id=293, color=[128, 64, 0]), - 294: - dict(link=('sd_kpt16', 'sd_kpt17'), id=294, color=[128, 64, 0]), - 295: - dict(link=('sd_kpt17', 'sd_kpt18'), id=295, color=[128, 64, 0]), - 296: - dict(link=('sd_kpt18', 'sd_kpt6'), id=296, color=[128, 64, 0]), - 297: - dict(link=('sd_kpt6', 'sd_kpt5'), id=297, color=[128, 64, 0]), - 298: - dict(link=('sd_kpt5', 'sd_kpt4'), id=298, color=[128, 64, 0]), - 299: - dict(link=('sd_kpt4', 'sd_kpt3'), id=299, color=[128, 64, 0]), - 300: - dict(link=('sd_kpt3', 'sd_kpt2'), id=300, color=[128, 64, 0]), - 301: - dict(link=('sd_kpt2', 'sd_kpt7'), id=301, color=[128, 64, 0]), - 302: - dict(link=('sd_kpt6', 'sd_kpt19'), id=302, color=[128, 64, 0]), - 303: - dict(link=('sd_kpt6', 'sd_kpt1'), id=303, color=[128, 64, 0]) - }), - joint_weights=[ - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 
1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0 - ], - sigmas=[]) -param_scheduler = [ - dict( - type='LinearLR', begin=0, end=500, start_factor=0.001, by_epoch=False), - dict( - type='MultiStepLR', - begin=0, - end=60, - milestones=[20, 40], - gamma=0.1, - by_epoch=True) -] -optim_wrapper = dict(optimizer=dict(type='Adam', lr=0.0005)) -auto_scale_lr = dict(base_batch_size=512) -dataset_type = 'DeepFashion2Dataset' -data_mode = 'topdown' -data_root = 'data/deepfashion2/' -codec = dict( - type='MSRAHeatmap', input_size=(192, 256), heatmap_size=(48, 64), sigma=2) -train_pipeline = [ - dict(type='LoadImage'), - dict(type='GetBBoxCenterScale'), - dict(type='RandomFlip', direction='horizontal'), - dict( - type='RandomBBoxTransform', - shift_prob=0, - rotate_factor=60, - scale_factor=(0.75, 1.25)), - dict(type='TopdownAffine', input_size=(192, 256)), - dict( - type='GenerateTarget', - encoder=dict( - type='MSRAHeatmap', - input_size=(192, 256), - heatmap_size=(48, 64), - sigma=2)), - dict(type='PackPoseInputs') -] -val_pipeline = [ - dict(type='LoadImage', backend_args=dict(backend='local')), - dict(type='GetBBoxCenterScale'), - dict(type='TopdownAffine', input_size=(192, 256)), - dict(type='PackPoseInputs') -] -train_dataloader = dict( - batch_size=16, - num_workers=6, - persistent_workers=True, - sampler=dict(type='DefaultSampler', shuffle=True), - dataset=dict( - type='DeepFashion2Dataset', - data_root='data/deepfashion2/', - data_mode='topdown', - ann_file='train/deepfashion2_long_sleeved_outwear.json', - data_prefix=dict(img='train/image/'), - pipeline=[ - dict(type='LoadImage'), - dict(type='GetBBoxCenterScale'), - dict(type='RandomFlip', direction='horizontal'), - dict( - type='RandomBBoxTransform', - shift_prob=0, - rotate_factor=60, - scale_factor=(0.75, 1.25)), - dict(type='TopdownAffine', input_size=(192, 256)), - dict( - type='GenerateTarget', - encoder=dict( - type='MSRAHeatmap', - input_size=(192, 256), - heatmap_size=(48, 64), - sigma=2)), - dict(type='PackPoseInputs') - ])) -val_dataloader = dict( - batch_size=16, - num_workers=6, - persistent_workers=True, - drop_last=False, - sampler=dict(type='DefaultSampler', shuffle=False), - dataset=dict( - type='DeepFashion2Dataset', - data_root='data/deepfashion2/', - data_mode='topdown', - 
ann_file='validation/deepfashion2_long_sleeved_outwear.json', - data_prefix=dict(img='validation/image/'), - test_mode=True, - pipeline=[ - dict(type='LoadImage', backend_args=dict(backend='local')), - dict(type='GetBBoxCenterScale'), - dict(type='TopdownAffine', input_size=(192, 256)), - dict(type='PackPoseInputs') - ])) -test_dataloader = dict( - batch_size=16, - num_workers=6, - persistent_workers=True, - drop_last=False, - sampler=dict(type='DefaultSampler', shuffle=False), - dataset=dict( - type='DeepFashion2Dataset', - data_root='data/deepfashion2/', - data_mode='topdown', - ann_file='validation/deepfashion2_long_sleeved_outwear.json', - data_prefix=dict(img='validation/image/'), - test_mode=True, - pipeline=[ - dict(type='LoadImage', backend_args=dict(backend='local')), - dict(type='GetBBoxCenterScale'), - dict(type='TopdownAffine', input_size=(192, 256)), - dict(type='PackPoseInputs') - ])) -channel_cfg = dict( - num_output_channels=294, - dataset_joints=294, - dataset_channel=[[ - 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, - 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, - 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, - 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, - 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, - 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, - 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, - 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, - 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, - 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, - 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, - 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, - 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, - 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, - 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, - 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, - 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, - 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, - 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, - 290, 291, 292, 293 - ]], - inference_channel=[ - 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, - 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, - 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, - 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, - 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, - 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, - 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, - 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, - 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, - 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, - 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, - 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, - 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, - 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, - 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, - 234, 235, 236, 237, 238, 239, 
240, 241, 242, 243, 244, 245, 246, 247, - 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, - 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, - 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, - 290, 291, 292, 293 - ]) -model = dict( - type='TopdownPoseEstimator', - data_preprocessor=dict( - type='PoseDataPreprocessor', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - bgr_to_rgb=True), - backbone=dict( - type='ResNet', - depth=50, - init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')), - head=dict( - type='HeatmapHead', - in_channels=2048, - out_channels=294, - loss=dict(type='KeypointMSELoss', use_target_weight=True), - decoder=dict( - type='MSRAHeatmap', - input_size=(192, 256), - heatmap_size=(48, 64), - sigma=2)), - test_cfg=dict(flip_test=True, flip_mode='heatmap', shift_heatmap=True)) -val_evaluator = [ - dict(type='PCKAccuracy', thr=0.2), - dict(type='AUC'), - dict(type='EPE') -] -test_evaluator = [ - dict(type='PCKAccuracy', thr=0.2), - dict(type='AUC'), - dict(type='EPE') -] -launcher = 'pytorch' -work_dir = './work_dirs/td_hm_res50_4xb16-120e_deepfashion2_long_sleeved_outwear_256x192' diff --git a/spaces/AUBMC-AIM/MammoGANesis/app.py b/spaces/AUBMC-AIM/MammoGANesis/app.py deleted file mode 100644 index ddd8efc8c78ea5dda507135be4eff308e002a929..0000000000000000000000000000000000000000 --- a/spaces/AUBMC-AIM/MammoGANesis/app.py +++ /dev/null @@ -1,31 +0,0 @@ -import os -import gradio as gr -from PIL import Image -from huggingface_hub import hf_hub_url, cached_download - - -os.system("git clone https://github.com/AK391/stylegan2-ada-pytorch") - - -os.chdir("stylegan2-ada-pytorch") - -os.mkdir("outputs") -os.mkdir("outputs/images") - -config_file_url = hf_hub_url("AUBMC-AIM/MammoGANesis", filename="mammoGANesis.pkl") -cached_file = cached_download(config_file_url) - -def inference(truncation,seeds): - os.system("python generate.py --outdir=./outputs/images/ --trunc="+str(truncation)+" --seeds="+str(int(seeds))+" --network="+cached_file) - seeds = int(seeds) - image = Image.open(f"./outputs/images/seed{seeds:04d}.png") - return image - -title = "MammoGANesis" -description = "Gradio demo for MammoGANesis: Controlled Generation of High-Resolution Mammograms for Radiology Education. This paper demonstrates the model’s ability to generate anatomically and medically relevant mammograms by achieving an average AUC of 0.54 in a double-blind study on four expert mammography radiologists to distinguish between generated and real images, ascribing to the high visual quality of the synthesized and edited mammograms, and to their potential use in advancing and facilitating medical education. To use it, add seed and truncation, or click one of the examples to load them. Read more at the links below." - -article = "

MammoGANesis: Controlled Generation of High-Resolution Mammograms for Radiology Education
visitor badge
" - -gr.Interface(inference,[gr.inputs.Slider(label="truncation",minimum=0, maximum=5, step=0.1, default=0.8),gr.inputs.Slider(label="Seed",minimum=0, maximum=1000, step=1, default=0)],"pil",title=title,description=description,article=article, examples=[ - [0.8,0] - ]).launch(enable_queue=True,cache_examples=True) \ No newline at end of file diff --git a/spaces/Abhilashvj/planogram-compliance/utils/loggers/comet/hpo.py b/spaces/Abhilashvj/planogram-compliance/utils/loggers/comet/hpo.py deleted file mode 100644 index 8bf75e075a8927cf9af4e02b8fd26243fede68cd..0000000000000000000000000000000000000000 --- a/spaces/Abhilashvj/planogram-compliance/utils/loggers/comet/hpo.py +++ /dev/null @@ -1,280 +0,0 @@ -import argparse -import json -import logging -import os -import sys -from pathlib import Path - -import comet_ml - -logger = logging.getLogger(__name__) - -FILE = Path(__file__).resolve() -ROOT = FILE.parents[3] # YOLOv5 root directory -if str(ROOT) not in sys.path: - sys.path.append(str(ROOT)) # add ROOT to PATH - -from train import train -from utils.callbacks import Callbacks -from utils.general import increment_path -from utils.torch_utils import select_device - -# Project Configuration -config = comet_ml.config.get_config() -COMET_PROJECT_NAME = config.get_string( - os.getenv("COMET_PROJECT_NAME"), "comet.project_name", default="yolov5" -) - - -def get_args(known=False): - parser = argparse.ArgumentParser() - parser.add_argument( - "--weights", - type=str, - default=ROOT / "yolov5s.pt", - help="initial weights path", - ) - parser.add_argument("--cfg", type=str, default="", help="model.yaml path") - parser.add_argument( - "--data", - type=str, - default=ROOT / "data/coco128.yaml", - help="dataset.yaml path", - ) - parser.add_argument( - "--hyp", - type=str, - default=ROOT / "data/hyps/hyp.scratch-low.yaml", - help="hyperparameters path", - ) - parser.add_argument( - "--epochs", type=int, default=300, help="total training epochs" - ) - parser.add_argument( - "--batch-size", - type=int, - default=16, - help="total batch size for all GPUs, -1 for autobatch", - ) - parser.add_argument( - "--imgsz", - "--img", - "--img-size", - type=int, - default=640, - help="train, val image size (pixels)", - ) - parser.add_argument( - "--rect", action="store_true", help="rectangular training" - ) - parser.add_argument( - "--resume", - nargs="?", - const=True, - default=False, - help="resume most recent training", - ) - parser.add_argument( - "--nosave", action="store_true", help="only save final checkpoint" - ) - parser.add_argument( - "--noval", action="store_true", help="only validate final epoch" - ) - parser.add_argument( - "--noautoanchor", action="store_true", help="disable AutoAnchor" - ) - parser.add_argument( - "--noplots", action="store_true", help="save no plot files" - ) - parser.add_argument( - "--evolve", - type=int, - nargs="?", - const=300, - help="evolve hyperparameters for x generations", - ) - parser.add_argument("--bucket", type=str, default="", help="gsutil bucket") - parser.add_argument( - "--cache", - type=str, - nargs="?", - const="ram", - help='--cache images in "ram" (default) or "disk"', - ) - parser.add_argument( - "--image-weights", - action="store_true", - help="use weighted image selection for training", - ) - parser.add_argument( - "--device", default="", help="cuda device, i.e. 
0 or 0,1,2,3 or cpu" - ) - parser.add_argument( - "--multi-scale", action="store_true", help="vary img-size +/- 50%%" - ) - parser.add_argument( - "--single-cls", - action="store_true", - help="train multi-class data as single-class", - ) - parser.add_argument( - "--optimizer", - type=str, - choices=["SGD", "Adam", "AdamW"], - default="SGD", - help="optimizer", - ) - parser.add_argument( - "--sync-bn", - action="store_true", - help="use SyncBatchNorm, only available in DDP mode", - ) - parser.add_argument( - "--workers", - type=int, - default=8, - help="max dataloader workers (per RANK in DDP mode)", - ) - parser.add_argument( - "--project", default=ROOT / "runs/train", help="save to project/name" - ) - parser.add_argument("--name", default="exp", help="save to project/name") - parser.add_argument( - "--exist-ok", - action="store_true", - help="existing project/name ok, do not increment", - ) - parser.add_argument("--quad", action="store_true", help="quad dataloader") - parser.add_argument( - "--cos-lr", action="store_true", help="cosine LR scheduler" - ) - parser.add_argument( - "--label-smoothing", - type=float, - default=0.0, - help="Label smoothing epsilon", - ) - parser.add_argument( - "--patience", - type=int, - default=100, - help="EarlyStopping patience (epochs without improvement)", - ) - parser.add_argument( - "--freeze", - nargs="+", - type=int, - default=[0], - help="Freeze layers: backbone=10, first3=0 1 2", - ) - parser.add_argument( - "--save-period", - type=int, - default=-1, - help="Save checkpoint every x epochs (disabled if < 1)", - ) - parser.add_argument( - "--seed", type=int, default=0, help="Global training seed" - ) - parser.add_argument( - "--local_rank", - type=int, - default=-1, - help="Automatic DDP Multi-GPU argument, do not modify", - ) - - # Weights & Biases arguments - parser.add_argument("--entity", default=None, help="W&B: Entity") - parser.add_argument( - "--upload_dataset", - nargs="?", - const=True, - default=False, - help='W&B: Upload data, "val" option', - ) - parser.add_argument( - "--bbox_interval", - type=int, - default=-1, - help="W&B: Set bounding-box image logging interval", - ) - parser.add_argument( - "--artifact_alias", - type=str, - default="latest", - help="W&B: Version of dataset artifact to use", - ) - - # Comet Arguments - parser.add_argument( - "--comet_optimizer_config", - type=str, - help="Comet: Path to a Comet Optimizer Config File.", - ) - parser.add_argument( - "--comet_optimizer_id", - type=str, - help="Comet: ID of the Comet Optimizer sweep.", - ) - parser.add_argument( - "--comet_optimizer_objective", - type=str, - help="Comet: Set to 'minimize' or 'maximize'.", - ) - parser.add_argument( - "--comet_optimizer_metric", type=str, help="Comet: Metric to Optimize." 
- ) - parser.add_argument( - "--comet_optimizer_workers", - type=int, - default=1, - help="Comet: Number of Parallel Workers to use with the Comet Optimizer.", - ) - - return parser.parse_known_args()[0] if known else parser.parse_args() - - -def run(parameters, opt): - hyp_dict = { - k: v - for k, v in parameters.items() - if k not in ["epochs", "batch_size"] - } - - opt.save_dir = str( - increment_path( - Path(opt.project) / opt.name, exist_ok=opt.exist_ok or opt.evolve - ) - ) - opt.batch_size = parameters.get("batch_size") - opt.epochs = parameters.get("epochs") - - device = select_device(opt.device, batch_size=opt.batch_size) - train(hyp_dict, opt, device, callbacks=Callbacks()) - - -if __name__ == "__main__": - opt = get_args(known=True) - - opt.weights = str(opt.weights) - opt.cfg = str(opt.cfg) - opt.data = str(opt.data) - opt.project = str(opt.project) - - optimizer_id = os.getenv("COMET_OPTIMIZER_ID") - if optimizer_id is None: - with open(opt.comet_optimizer_config) as f: - optimizer_config = json.load(f) - optimizer = comet_ml.Optimizer(optimizer_config) - else: - optimizer = comet_ml.Optimizer(optimizer_id) - - opt.comet_optimizer_id = optimizer.id - status = optimizer.status() - - opt.comet_optimizer_objective = status["spec"]["objective"] - opt.comet_optimizer_metric = status["spec"]["metric"] - - logger.info("COMET INFO: Starting Hyperparameter Sweep") - for parameter in optimizer.get_parameters(): - run(parameter["parameters"], opt) diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/svelte.config.js b/spaces/AchyuthGamer/OpenGPT-Chat-UI/svelte.config.js deleted file mode 100644 index e93decaf872ef153bf12ba1a5aaad6e4937a2c87..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/svelte.config.js +++ /dev/null @@ -1,29 +0,0 @@ -import adapter from "@sveltejs/adapter-node"; -import { vitePreprocess } from "@sveltejs/kit/vite"; -import dotenv from "dotenv"; - -dotenv.config({ path: "./.env.local" }); -dotenv.config({ path: "./.env" }); - -process.env.PUBLIC_VERSION = process.env.npm_package_version; - -/** @type {import('@sveltejs/kit').Config} */ -const config = { - // Consult https://kit.svelte.dev/docs/integrations#preprocessors - // for more information about preprocessors - preprocess: vitePreprocess(), - - kit: { - adapter: adapter(), - - paths: { - base: process.env.APP_BASE || "", - }, - csrf: { - // handled in hooks.server.ts, because we can have multiple valid origins - checkOrigin: false, - }, - }, -}; - -export default config; diff --git a/spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/executor/coverage_test.py b/spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/executor/coverage_test.py deleted file mode 100644 index 37c3073ba9fb4b256e7f30c532488cc1e557de77..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/executor/coverage_test.py +++ /dev/null @@ -1,62 +0,0 @@ -from __future__ import annotations - -import os -import subprocess -import multiprocessing -from typing import TYPE_CHECKING, Any, List, Tuple - -from agentverse.agents import ExecutorAgent -from agentverse.logging import logger -from agentverse.message import ExecutorMessage, SolverMessage - -from . 
import BaseExecutor, executor_registry - - -def execute_command(command: str, result_list) -> str: - # TODO: make it more secure - result = subprocess.run(command, capture_output=True, shell=True, encoding="utf-8") - result_list.append(f"STDOUT:\n{result.stdout}\nSTDERR:\n{result.stderr}") - # return f"STDOUT:\n{result.stdout}\nSTDERR:\n{result.stderr}" - - -@executor_registry.register("coverage-test") -class CoverageTestExecutor(BaseExecutor): - def step( - self, - agent: ExecutorAgent, - task_description: str, - solution: List[SolverMessage], - *args, - **kwargs, - ) -> Any: - from scripts.evaluate_commongen import scoring - - coverage, missing_tokens = scoring( - [s.content for s in solution], [task_description] - ) - if len(missing_tokens[0]) == 0: - missing_tokens = "No missing tokens." - else: - missing_tokens = ", ".join(missing_tokens[0]) - result = f"Coverage: {coverage*100:.2f}%\nMissing Tokens: {missing_tokens}" - return [ExecutorMessage(content=result)] - - async def astep( - self, - agent: ExecutorAgent, - task_description: str, - solution: List[SolverMessage], - *args, - **kwargs, - ) -> Any: - from scripts.evaluate_commongen import scoring - - coverage, missing_tokens = scoring( - [s.content for s in solution], [task_description] - ) - if len(missing_tokens[0]) == 0: - missing_tokens = "No missing tokens." - else: - missing_tokens = ", ".join(missing_tokens[0]) - result = f"Coverage: {coverage*100:.2f}%\nMissing Tokens: {missing_tokens}" - return [ExecutorMessage(content=result)] diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateFixWidthSizer.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateFixWidthSizer.js deleted file mode 100644 index fe9ef979e6a7923b9c3dd9e27c0b543472134da3..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateFixWidthSizer.js +++ /dev/null @@ -1,8 +0,0 @@ -import CreateAnySizer from './utils/CreateAnySizer.js'; -import FixWidthSizer from '../../fixwidthsizer/FixWidthSizer.js'; - -var CreateFixWidthSizer = function (scene, data, view, styles, customBuilders) { - return CreateAnySizer(scene, data, view, styles, customBuilders, FixWidthSizer); -} - -export default CreateFixWidthSizer; \ No newline at end of file diff --git a/spaces/Alexpro1213/WizardLM-WizardCoder-Python-34B-V1.0/app.py b/spaces/Alexpro1213/WizardLM-WizardCoder-Python-34B-V1.0/app.py deleted file mode 100644 index ca63de20737a3b4a46a323ef4c6a7e9ce5ffb542..0000000000000000000000000000000000000000 --- a/spaces/Alexpro1213/WizardLM-WizardCoder-Python-34B-V1.0/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/WizardLM/WizardCoder-Python-34B-V1.0").launch() \ No newline at end of file diff --git a/spaces/AliHaider0343/Restaurant-Domain-Sentence-Categories-Classification/app.py b/spaces/AliHaider0343/Restaurant-Domain-Sentence-Categories-Classification/app.py deleted file mode 100644 index d4b5ffeef64dcceea8edfefbd065dd0884db363e..0000000000000000000000000000000000000000 --- a/spaces/AliHaider0343/Restaurant-Domain-Sentence-Categories-Classification/app.py +++ /dev/null @@ -1,68 +0,0 @@ -import torch -import streamlit as st -from transformers import RobertaTokenizer, RobertaForSequenceClassification -import re -import string - -def tokenize_sentences(sentence): - encoded_dict = tokenizer.encode_plus( - sentence, - add_special_tokens=True, - max_length=128, - padding='max_length', 
- truncation=True, - return_attention_mask=True, - return_tensors='pt' - ) - return torch.cat([encoded_dict['input_ids']], dim=0), torch.cat([encoded_dict['attention_mask']], dim=0) - - - -def preprocess_query(query): - query = str(query).lower() - query = query.strip() - query=query.translate(str.maketrans("", "", string.punctuation)) - return query - -def predict_category(sentence, threshold): - input_ids, attention_mask = tokenize_sentences(sentence) - with torch.no_grad(): - outputs = categories_model(input_ids, attention_mask=attention_mask) - logits = outputs.logits - predicted_categories = torch.sigmoid(logits).squeeze().tolist() - results = dict() - for label, prediction in zip(LABEL_COLUMNS_CATEGORIES, predicted_categories): - if prediction < threshold: - continue - precentage = round(float(prediction) * 100, 2) - results[label] = precentage - return results - -# Load tokenizer and model -BERT_MODEL_NAME_FOR_CATEGORIES_CLASSIFICATION = 'roberta-large' -tokenizer = RobertaTokenizer.from_pretrained(BERT_MODEL_NAME_FOR_CATEGORIES_CLASSIFICATION, do_lower_case=True) - -LABEL_COLUMNS_CATEGORIES = ['AMBIENCE', 'DRINK', 'FOOD', 'GENERAL', 'RESTAURANT', 'SERVICE', 'STAFF'] - -categories_model = RobertaForSequenceClassification.from_pretrained(BERT_MODEL_NAME_FOR_CATEGORIES_CLASSIFICATION, num_labels=len(LABEL_COLUMNS_CATEGORIES)) -categories_model.load_state_dict(torch.load('./Categories_Classification_Model_updated.pth',map_location=torch.device('cpu') )) -categories_model.eval() - -# Streamlit App -st.title("Review/Sentence Classification") -st.write("Multilable/Multiclass Sentence classification under 7 Defined Categories. ") - -sentence = st.text_input("Enter a sentence:") -threshold = st.slider("Threshold", min_value=0.0, max_value=1.0, step=0.01, value=0.5) - -if sentence: - processed_sentence = preprocess_query(sentence) - results = predict_category(processed_sentence, threshold) - if len(results) > 0: - st.write("Predicted Aspects:") - table_data = [["Category", "Probability"]] - for category, percentage in results.items(): - table_data.append([category, f"{percentage}%"]) - st.table(table_data) - else: - st.write("No Categories above the threshold.") \ No newline at end of file diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/unclip_image_interpolation.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/unclip_image_interpolation.py deleted file mode 100644 index 618ac25bdc957b5110de05cd0f5e8104f9e6f50f..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/unclip_image_interpolation.py +++ /dev/null @@ -1,495 +0,0 @@ -import inspect -from typing import List, Optional, Union - -import PIL -import torch -from torch.nn import functional as F -from transformers import ( - CLIPImageProcessor, - CLIPTextModelWithProjection, - CLIPTokenizer, - CLIPVisionModelWithProjection, -) - -from diffusers import ( - DiffusionPipeline, - ImagePipelineOutput, - UnCLIPScheduler, - UNet2DConditionModel, - UNet2DModel, -) -from diffusers.pipelines.unclip import UnCLIPTextProjModel -from diffusers.utils import is_accelerate_available, logging, randn_tensor - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -def slerp(val, low, high): - """ - Find the interpolation point between the 'low' and 'high' values for the given 'val'. See https://en.wikipedia.org/wiki/Slerp for more details on the topic. 
- """ - low_norm = low / torch.norm(low) - high_norm = high / torch.norm(high) - omega = torch.acos((low_norm * high_norm)) - so = torch.sin(omega) - res = (torch.sin((1.0 - val) * omega) / so) * low + (torch.sin(val * omega) / so) * high - return res - - -class UnCLIPImageInterpolationPipeline(DiffusionPipeline): - """ - Pipeline to generate variations from an input image using unCLIP - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) - - Args: - text_encoder ([`CLIPTextModelWithProjection`]): - Frozen text-encoder. - tokenizer (`CLIPTokenizer`): - Tokenizer of class - [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). - feature_extractor ([`CLIPImageProcessor`]): - Model that extracts features from generated images to be used as inputs for the `image_encoder`. - image_encoder ([`CLIPVisionModelWithProjection`]): - Frozen CLIP image-encoder. unCLIP Image Variation uses the vision portion of - [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPVisionModelWithProjection), - specifically the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. - text_proj ([`UnCLIPTextProjModel`]): - Utility class to prepare and combine the embeddings before they are passed to the decoder. - decoder ([`UNet2DConditionModel`]): - The decoder to invert the image embedding into an image. - super_res_first ([`UNet2DModel`]): - Super resolution unet. Used in all but the last step of the super resolution diffusion process. - super_res_last ([`UNet2DModel`]): - Super resolution unet. Used in the last step of the super resolution diffusion process. - decoder_scheduler ([`UnCLIPScheduler`]): - Scheduler used in the decoder denoising process. Just a modified DDPMScheduler. - super_res_scheduler ([`UnCLIPScheduler`]): - Scheduler used in the super resolution denoising process. Just a modified DDPMScheduler. 
- - """ - - decoder: UNet2DConditionModel - text_proj: UnCLIPTextProjModel - text_encoder: CLIPTextModelWithProjection - tokenizer: CLIPTokenizer - feature_extractor: CLIPImageProcessor - image_encoder: CLIPVisionModelWithProjection - super_res_first: UNet2DModel - super_res_last: UNet2DModel - - decoder_scheduler: UnCLIPScheduler - super_res_scheduler: UnCLIPScheduler - - # Copied from diffusers.pipelines.unclip.pipeline_unclip_image_variation.UnCLIPImageVariationPipeline.__init__ - def __init__( - self, - decoder: UNet2DConditionModel, - text_encoder: CLIPTextModelWithProjection, - tokenizer: CLIPTokenizer, - text_proj: UnCLIPTextProjModel, - feature_extractor: CLIPImageProcessor, - image_encoder: CLIPVisionModelWithProjection, - super_res_first: UNet2DModel, - super_res_last: UNet2DModel, - decoder_scheduler: UnCLIPScheduler, - super_res_scheduler: UnCLIPScheduler, - ): - super().__init__() - - self.register_modules( - decoder=decoder, - text_encoder=text_encoder, - tokenizer=tokenizer, - text_proj=text_proj, - feature_extractor=feature_extractor, - image_encoder=image_encoder, - super_res_first=super_res_first, - super_res_last=super_res_last, - decoder_scheduler=decoder_scheduler, - super_res_scheduler=super_res_scheduler, - ) - - # Copied from diffusers.pipelines.unclip.pipeline_unclip.UnCLIPPipeline.prepare_latents - def prepare_latents(self, shape, dtype, device, generator, latents, scheduler): - if latents is None: - latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype) - else: - if latents.shape != shape: - raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {shape}") - latents = latents.to(device) - - latents = latents * scheduler.init_noise_sigma - return latents - - # Copied from diffusers.pipelines.unclip.pipeline_unclip_image_variation.UnCLIPImageVariationPipeline._encode_prompt - def _encode_prompt(self, prompt, device, num_images_per_prompt, do_classifier_free_guidance): - batch_size = len(prompt) if isinstance(prompt, list) else 1 - - # get prompt text embeddings - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - return_tensors="pt", - ) - text_input_ids = text_inputs.input_ids - text_mask = text_inputs.attention_mask.bool().to(device) - text_encoder_output = self.text_encoder(text_input_ids.to(device)) - - prompt_embeds = text_encoder_output.text_embeds - text_encoder_hidden_states = text_encoder_output.last_hidden_state - - prompt_embeds = prompt_embeds.repeat_interleave(num_images_per_prompt, dim=0) - text_encoder_hidden_states = text_encoder_hidden_states.repeat_interleave(num_images_per_prompt, dim=0) - text_mask = text_mask.repeat_interleave(num_images_per_prompt, dim=0) - - if do_classifier_free_guidance: - uncond_tokens = [""] * batch_size - - max_length = text_input_ids.shape[-1] - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_tensors="pt", - ) - uncond_text_mask = uncond_input.attention_mask.bool().to(device) - negative_prompt_embeds_text_encoder_output = self.text_encoder(uncond_input.input_ids.to(device)) - - negative_prompt_embeds = negative_prompt_embeds_text_encoder_output.text_embeds - uncond_text_encoder_hidden_states = negative_prompt_embeds_text_encoder_output.last_hidden_state - - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - - seq_len = negative_prompt_embeds.shape[1] - negative_prompt_embeds = 
negative_prompt_embeds.repeat(1, num_images_per_prompt) - negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len) - - seq_len = uncond_text_encoder_hidden_states.shape[1] - uncond_text_encoder_hidden_states = uncond_text_encoder_hidden_states.repeat(1, num_images_per_prompt, 1) - uncond_text_encoder_hidden_states = uncond_text_encoder_hidden_states.view( - batch_size * num_images_per_prompt, seq_len, -1 - ) - uncond_text_mask = uncond_text_mask.repeat_interleave(num_images_per_prompt, dim=0) - - # done duplicates - - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds]) - text_encoder_hidden_states = torch.cat([uncond_text_encoder_hidden_states, text_encoder_hidden_states]) - - text_mask = torch.cat([uncond_text_mask, text_mask]) - - return prompt_embeds, text_encoder_hidden_states, text_mask - - # Copied from diffusers.pipelines.unclip.pipeline_unclip_image_variation.UnCLIPImageVariationPipeline._encode_image - def _encode_image(self, image, device, num_images_per_prompt, image_embeddings: Optional[torch.Tensor] = None): - dtype = next(self.image_encoder.parameters()).dtype - - if image_embeddings is None: - if not isinstance(image, torch.Tensor): - image = self.feature_extractor(images=image, return_tensors="pt").pixel_values - - image = image.to(device=device, dtype=dtype) - image_embeddings = self.image_encoder(image).image_embeds - - image_embeddings = image_embeddings.repeat_interleave(num_images_per_prompt, dim=0) - - return image_embeddings - - # Copied from diffusers.pipelines.unclip.pipeline_unclip_image_variation.UnCLIPImageVariationPipeline.enable_sequential_cpu_offload - def enable_sequential_cpu_offload(self, gpu_id=0): - r""" - Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the pipeline's - models have their state dicts saved to CPU and then are moved to a `torch.device('meta') and loaded to GPU only - when their specific submodule has its `forward` method called. - """ - if is_accelerate_available(): - from accelerate import cpu_offload - else: - raise ImportError("Please install accelerate via `pip install accelerate`") - - device = torch.device(f"cuda:{gpu_id}") - - models = [ - self.decoder, - self.text_proj, - self.text_encoder, - self.super_res_first, - self.super_res_last, - ] - for cpu_offloaded_model in models: - if cpu_offloaded_model is not None: - cpu_offload(cpu_offloaded_model, device) - - @property - # Copied from diffusers.pipelines.unclip.pipeline_unclip.UnCLIPPipeline._execution_device - def _execution_device(self): - r""" - Returns the device on which the pipeline's models will be executed. After calling - `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module - hooks. 
- """ - if self.device != torch.device("meta") or not hasattr(self.decoder, "_hf_hook"): - return self.device - for module in self.decoder.modules(): - if ( - hasattr(module, "_hf_hook") - and hasattr(module._hf_hook, "execution_device") - and module._hf_hook.execution_device is not None - ): - return torch.device(module._hf_hook.execution_device) - return self.device - - @torch.no_grad() - def __call__( - self, - image: Optional[Union[List[PIL.Image.Image], torch.FloatTensor]] = None, - steps: int = 5, - decoder_num_inference_steps: int = 25, - super_res_num_inference_steps: int = 7, - generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, - image_embeddings: Optional[torch.Tensor] = None, - decoder_latents: Optional[torch.FloatTensor] = None, - super_res_latents: Optional[torch.FloatTensor] = None, - decoder_guidance_scale: float = 8.0, - output_type: Optional[str] = "pil", - return_dict: bool = True, - ): - """ - Function invoked when calling the pipeline for generation. - - Args: - image (`List[PIL.Image.Image]` or `torch.FloatTensor`): - The images to use for the image interpolation. Only accepts a list of two PIL Images or If you provide a tensor, it needs to comply with the - configuration of - [this](https://huggingface.co/fusing/karlo-image-variations-diffusers/blob/main/feature_extractor/preprocessor_config.json) - `CLIPImageProcessor` while still having a shape of two in the 0th dimension. Can be left to `None` only when `image_embeddings` are passed. - steps (`int`, *optional*, defaults to 5): - The number of interpolation images to generate. - decoder_num_inference_steps (`int`, *optional*, defaults to 25): - The number of denoising steps for the decoder. More denoising steps usually lead to a higher quality - image at the expense of slower inference. - super_res_num_inference_steps (`int`, *optional*, defaults to 7): - The number of denoising steps for super resolution. More denoising steps usually lead to a higher - quality image at the expense of slower inference. - generator (`torch.Generator` or `List[torch.Generator]`, *optional*): - One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html) - to make generation deterministic. - image_embeddings (`torch.Tensor`, *optional*): - Pre-defined image embeddings that can be derived from the image encoder. Pre-defined image embeddings - can be passed for tasks like image interpolations. `image` can the be left to `None`. - decoder_latents (`torch.FloatTensor` of shape (batch size, channels, height, width), *optional*): - Pre-generated noisy latents to be used as inputs for the decoder. - super_res_latents (`torch.FloatTensor` of shape (batch size, channels, super res height, super res width), *optional*): - Pre-generated noisy latents to be used as inputs for the decoder. - decoder_guidance_scale (`float`, *optional*, defaults to 4.0): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generated image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. 
- return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple. - """ - - batch_size = steps - - device = self._execution_device - - if isinstance(image, List): - if len(image) != 2: - raise AssertionError( - f"Expected 'image' List to be of size 2, but passed 'image' length is {len(image)}" - ) - elif not (isinstance(image[0], PIL.Image.Image) and isinstance(image[0], PIL.Image.Image)): - raise AssertionError( - f"Expected 'image' List to contain PIL.Image.Image, but passed 'image' contents are {type(image[0])} and {type(image[1])}" - ) - elif isinstance(image, torch.FloatTensor): - if image.shape[0] != 2: - raise AssertionError( - f"Expected 'image' to be torch.FloatTensor of shape 2 in 0th dimension, but passed 'image' size is {image.shape[0]}" - ) - elif isinstance(image_embeddings, torch.Tensor): - if image_embeddings.shape[0] != 2: - raise AssertionError( - f"Expected 'image_embeddings' to be torch.FloatTensor of shape 2 in 0th dimension, but passed 'image_embeddings' shape is {image_embeddings.shape[0]}" - ) - else: - raise AssertionError( - f"Expected 'image' or 'image_embeddings' to be not None with types List[PIL.Image] or Torch.FloatTensor respectively. Received {type(image)} and {type(image_embeddings)} repsectively" - ) - - original_image_embeddings = self._encode_image( - image=image, device=device, num_images_per_prompt=1, image_embeddings=image_embeddings - ) - - image_embeddings = [] - - for interp_step in torch.linspace(0, 1, steps): - temp_image_embeddings = slerp( - interp_step, original_image_embeddings[0], original_image_embeddings[1] - ).unsqueeze(0) - image_embeddings.append(temp_image_embeddings) - - image_embeddings = torch.cat(image_embeddings).to(device) - - do_classifier_free_guidance = decoder_guidance_scale > 1.0 - - prompt_embeds, text_encoder_hidden_states, text_mask = self._encode_prompt( - prompt=["" for i in range(steps)], - device=device, - num_images_per_prompt=1, - do_classifier_free_guidance=do_classifier_free_guidance, - ) - - text_encoder_hidden_states, additive_clip_time_embeddings = self.text_proj( - image_embeddings=image_embeddings, - prompt_embeds=prompt_embeds, - text_encoder_hidden_states=text_encoder_hidden_states, - do_classifier_free_guidance=do_classifier_free_guidance, - ) - - if device.type == "mps": - # HACK: MPS: There is a panic when padding bool tensors, - # so cast to int tensor for the pad and back to bool afterwards - text_mask = text_mask.type(torch.int) - decoder_text_mask = F.pad(text_mask, (self.text_proj.clip_extra_context_tokens, 0), value=1) - decoder_text_mask = decoder_text_mask.type(torch.bool) - else: - decoder_text_mask = F.pad(text_mask, (self.text_proj.clip_extra_context_tokens, 0), value=True) - - self.decoder_scheduler.set_timesteps(decoder_num_inference_steps, device=device) - decoder_timesteps_tensor = self.decoder_scheduler.timesteps - - num_channels_latents = self.decoder.config.in_channels - height = self.decoder.config.sample_size - width = self.decoder.config.sample_size - - # Get the decoder latents for 1 step and then repeat the same tensor for the entire batch to keep same noise across all interpolation steps. 
- decoder_latents = self.prepare_latents( - (1, num_channels_latents, height, width), - text_encoder_hidden_states.dtype, - device, - generator, - decoder_latents, - self.decoder_scheduler, - ) - decoder_latents = decoder_latents.repeat((batch_size, 1, 1, 1)) - - for i, t in enumerate(self.progress_bar(decoder_timesteps_tensor)): - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([decoder_latents] * 2) if do_classifier_free_guidance else decoder_latents - - noise_pred = self.decoder( - sample=latent_model_input, - timestep=t, - encoder_hidden_states=text_encoder_hidden_states, - class_labels=additive_clip_time_embeddings, - attention_mask=decoder_text_mask, - ).sample - - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred_uncond, _ = noise_pred_uncond.split(latent_model_input.shape[1], dim=1) - noise_pred_text, predicted_variance = noise_pred_text.split(latent_model_input.shape[1], dim=1) - noise_pred = noise_pred_uncond + decoder_guidance_scale * (noise_pred_text - noise_pred_uncond) - noise_pred = torch.cat([noise_pred, predicted_variance], dim=1) - - if i + 1 == decoder_timesteps_tensor.shape[0]: - prev_timestep = None - else: - prev_timestep = decoder_timesteps_tensor[i + 1] - - # compute the previous noisy sample x_t -> x_t-1 - decoder_latents = self.decoder_scheduler.step( - noise_pred, t, decoder_latents, prev_timestep=prev_timestep, generator=generator - ).prev_sample - - decoder_latents = decoder_latents.clamp(-1, 1) - - image_small = decoder_latents - - # done decoder - - # super res - - self.super_res_scheduler.set_timesteps(super_res_num_inference_steps, device=device) - super_res_timesteps_tensor = self.super_res_scheduler.timesteps - - channels = self.super_res_first.config.in_channels // 2 - height = self.super_res_first.config.sample_size - width = self.super_res_first.config.sample_size - - super_res_latents = self.prepare_latents( - (batch_size, channels, height, width), - image_small.dtype, - device, - generator, - super_res_latents, - self.super_res_scheduler, - ) - - if device.type == "mps": - # MPS does not support many interpolations - image_upscaled = F.interpolate(image_small, size=[height, width]) - else: - interpolate_antialias = {} - if "antialias" in inspect.signature(F.interpolate).parameters: - interpolate_antialias["antialias"] = True - - image_upscaled = F.interpolate( - image_small, size=[height, width], mode="bicubic", align_corners=False, **interpolate_antialias - ) - - for i, t in enumerate(self.progress_bar(super_res_timesteps_tensor)): - # no classifier free guidance - - if i == super_res_timesteps_tensor.shape[0] - 1: - unet = self.super_res_last - else: - unet = self.super_res_first - - latent_model_input = torch.cat([super_res_latents, image_upscaled], dim=1) - - noise_pred = unet( - sample=latent_model_input, - timestep=t, - ).sample - - if i + 1 == super_res_timesteps_tensor.shape[0]: - prev_timestep = None - else: - prev_timestep = super_res_timesteps_tensor[i + 1] - - # compute the previous noisy sample x_t -> x_t-1 - super_res_latents = self.super_res_scheduler.step( - noise_pred, t, super_res_latents, prev_timestep=prev_timestep, generator=generator - ).prev_sample - - image = super_res_latents - # done super res - - # post processing - - image = image * 0.5 + 0.5 - image = image.clamp(0, 1) - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - - if output_type == "pil": - image = self.numpy_to_pil(image) - - if not return_dict: - return 
(image,) - - return ImagePipelineOutput(images=image) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/dreambooth_inpaint/README.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/dreambooth_inpaint/README.md deleted file mode 100644 index dec919587935ec6e08a08e9299d62b0edc17449c..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/dreambooth_inpaint/README.md +++ /dev/null @@ -1,118 +0,0 @@ -# Dreambooth for the inpainting model - -This script was added by @thedarkzeno . - -Please note that this script is not actively maintained, you can open an issue and tag @thedarkzeno or @patil-suraj though. - -```bash -export MODEL_NAME="runwayml/stable-diffusion-inpainting" -export INSTANCE_DIR="path-to-instance-images" -export OUTPUT_DIR="path-to-save-model" - -accelerate launch train_dreambooth_inpaint.py \ - --pretrained_model_name_or_path=$MODEL_NAME \ - --instance_data_dir=$INSTANCE_DIR \ - --output_dir=$OUTPUT_DIR \ - --instance_prompt="a photo of sks dog" \ - --resolution=512 \ - --train_batch_size=1 \ - --gradient_accumulation_steps=1 \ - --learning_rate=5e-6 \ - --lr_scheduler="constant" \ - --lr_warmup_steps=0 \ - --max_train_steps=400 -``` - -### Training with prior-preservation loss - -Prior-preservation is used to avoid overfitting and language-drift. Refer to the paper to learn more about it. For prior-preservation we first generate images using the model with a class prompt and then use those during training along with our data. -According to the paper, it's recommended to generate `num_epochs * num_samples` images for prior-preservation. 200-300 works well for most cases. - -```bash -export MODEL_NAME="runwayml/stable-diffusion-inpainting" -export INSTANCE_DIR="path-to-instance-images" -export CLASS_DIR="path-to-class-images" -export OUTPUT_DIR="path-to-save-model" - -accelerate launch train_dreambooth_inpaint.py \ - --pretrained_model_name_or_path=$MODEL_NAME \ - --instance_data_dir=$INSTANCE_DIR \ - --class_data_dir=$CLASS_DIR \ - --output_dir=$OUTPUT_DIR \ - --with_prior_preservation --prior_loss_weight=1.0 \ - --instance_prompt="a photo of sks dog" \ - --class_prompt="a photo of dog" \ - --resolution=512 \ - --train_batch_size=1 \ - --gradient_accumulation_steps=1 \ - --learning_rate=5e-6 \ - --lr_scheduler="constant" \ - --lr_warmup_steps=0 \ - --num_class_images=200 \ - --max_train_steps=800 -``` - - -### Training with gradient checkpointing and 8-bit optimizer: - -With the help of gradient checkpointing and the 8-bit optimizer from bitsandbytes it's possible to run train dreambooth on a 16GB GPU. - -To install `bitandbytes` please refer to this [readme](https://github.com/TimDettmers/bitsandbytes#requirements--installation). 
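In practice, the `--use_8bit_adam` flag in these scripts typically just swaps the optimizer class and leaves the rest of the training loop unchanged. A minimal sketch of that pattern (the stand-in parameters and the fallback branch are illustrative, not quoted from `train_dreambooth_inpaint.py`):

```python
import torch

try:
    import bitsandbytes as bnb  # 8-bit optimizers that keep optimizer state in int8
    optimizer_class = bnb.optim.AdamW8bit  # note: needs a CUDA device at training time
except ImportError:
    optimizer_class = torch.optim.AdamW  # full-precision fallback

# Stand-in for unet.parameters() so the sketch runs on its own.
params = [torch.nn.Parameter(torch.randn(4, 4))]
optimizer = optimizer_class(params, lr=5e-6, betas=(0.9, 0.999), weight_decay=1e-2)
```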
- -```bash -export MODEL_NAME="runwayml/stable-diffusion-inpainting" -export INSTANCE_DIR="path-to-instance-images" -export CLASS_DIR="path-to-class-images" -export OUTPUT_DIR="path-to-save-model" - -accelerate launch train_dreambooth_inpaint.py \ - --pretrained_model_name_or_path=$MODEL_NAME \ - --instance_data_dir=$INSTANCE_DIR \ - --class_data_dir=$CLASS_DIR \ - --output_dir=$OUTPUT_DIR \ - --with_prior_preservation --prior_loss_weight=1.0 \ - --instance_prompt="a photo of sks dog" \ - --class_prompt="a photo of dog" \ - --resolution=512 \ - --train_batch_size=1 \ - --gradient_accumulation_steps=2 --gradient_checkpointing \ - --use_8bit_adam \ - --learning_rate=5e-6 \ - --lr_scheduler="constant" \ - --lr_warmup_steps=0 \ - --num_class_images=200 \ - --max_train_steps=800 -``` - -### Fine-tune text encoder with the UNet. - -The script also allows to fine-tune the `text_encoder` along with the `unet`. It's been observed experimentally that fine-tuning `text_encoder` gives much better results especially on faces. -Pass the `--train_text_encoder` argument to the script to enable training `text_encoder`. - -___Note: Training text encoder requires more memory, with this option the training won't fit on 16GB GPU. It needs at least 24GB VRAM.___ - -```bash -export MODEL_NAME="runwayml/stable-diffusion-inpainting" -export INSTANCE_DIR="path-to-instance-images" -export CLASS_DIR="path-to-class-images" -export OUTPUT_DIR="path-to-save-model" - -accelerate launch train_dreambooth_inpaint.py \ - --pretrained_model_name_or_path=$MODEL_NAME \ - --train_text_encoder \ - --instance_data_dir=$INSTANCE_DIR \ - --class_data_dir=$CLASS_DIR \ - --output_dir=$OUTPUT_DIR \ - --with_prior_preservation --prior_loss_weight=1.0 \ - --instance_prompt="a photo of sks dog" \ - --class_prompt="a photo of dog" \ - --resolution=512 \ - --train_batch_size=1 \ - --use_8bit_adam \ - --gradient_checkpointing \ - --learning_rate=2e-6 \ - --lr_scheduler="constant" \ - --lr_warmup_steps=0 \ - --num_class_images=200 \ - --max_train_steps=800 -``` diff --git a/spaces/Andy1621/uniformer_image_detection/configs/nas_fcos/README.md b/spaces/Andy1621/uniformer_image_detection/configs/nas_fcos/README.md deleted file mode 100644 index 05ac996a40cfa2f600f239f21adb0878a284292b..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/nas_fcos/README.md +++ /dev/null @@ -1,25 +0,0 @@ -# NAS-FCOS: Fast Neural Architecture Search for Object Detection - -## Introduction - -[ALGORITHM] - -```latex -@article{wang2019fcos, - title={Nas-fcos: Fast neural architecture search for object detection}, - author={Wang, Ning and Gao, Yang and Chen, Hao and Wang, Peng and Tian, Zhi and Shen, Chunhua}, - journal={arXiv preprint arXiv:1906.04423}, - year={2019} -} -``` - -## Results and Models - -| Head | Backbone | Style | GN-head | Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Download | -|:---------:|:---------:|:-------:|:-------:|:-------:|:--------:|:--------------:|:------:|:------:|:--------:| -| NAS-FCOSHead | R-50 | caffe | Y | 1x | | | 39.4 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/nas_fcos/nas_fcos_nashead_r50_caffe_fpn_gn-head_4x4_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/nas_fcos/nas_fcos_nashead_r50_caffe_fpn_gn-head_4x4_1x_coco/nas_fcos_nashead_r50_caffe_fpn_gn-head_4x4_1x_coco_20200520-1bdba3ce.pth) | 
[log](http://download.openmmlab.com/mmdetection/v2.0/nas_fcos/nas_fcos_nashead_r50_caffe_fpn_gn-head_4x4_1x_coco/nas_fcos_nashead_r50_caffe_fpn_gn-head_4x4_1x_coco_20200520.log.json) | -| FCOSHead | R-50 | caffe | Y | 1x | | | 38.5 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/nas_fcos/nas_fcos_fcoshead_r50_caffe_fpn_gn-head_4x4_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/nas_fcos/nas_fcos_fcoshead_r50_caffe_fpn_gn-head_4x4_1x_coco/nas_fcos_fcoshead_r50_caffe_fpn_gn-head_4x4_1x_coco_20200521-7fdcbce0.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/nas_fcos/nas_fcos_fcoshead_r50_caffe_fpn_gn-head_4x4_1x_coco/nas_fcos_fcoshead_r50_caffe_fpn_gn-head_4x4_1x_coco_20200521.log.json) | - -**Notes:** - -- To be consistent with the author's implementation, we use 4 GPUs with 4 images/GPU. diff --git a/spaces/Andy1621/uniformer_image_detection/tools/analysis_tools/test_robustness.py b/spaces/Andy1621/uniformer_image_detection/tools/analysis_tools/test_robustness.py deleted file mode 100644 index ae30c019796b3e20d96dc4486ad1eae8e8981b98..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/tools/analysis_tools/test_robustness.py +++ /dev/null @@ -1,390 +0,0 @@ -import argparse -import copy -import os -import os.path as osp - -import mmcv -import torch -from mmcv import DictAction -from mmcv.parallel import MMDataParallel, MMDistributedDataParallel -from mmcv.runner import (get_dist_info, init_dist, load_checkpoint, - wrap_fp16_model) -from pycocotools.coco import COCO -from pycocotools.cocoeval import COCOeval -from tools.analysis_tools.robustness_eval import get_results - -from mmdet import datasets -from mmdet.apis import multi_gpu_test, set_random_seed, single_gpu_test -from mmdet.core import eval_map -from mmdet.datasets import build_dataloader, build_dataset -from mmdet.models import build_detector - - -def coco_eval_with_return(result_files, - result_types, - coco, - max_dets=(100, 300, 1000)): - for res_type in result_types: - assert res_type in ['proposal', 'bbox', 'segm', 'keypoints'] - - if mmcv.is_str(coco): - coco = COCO(coco) - assert isinstance(coco, COCO) - - eval_results = {} - for res_type in result_types: - result_file = result_files[res_type] - assert result_file.endswith('.json') - - coco_dets = coco.loadRes(result_file) - img_ids = coco.getImgIds() - iou_type = 'bbox' if res_type == 'proposal' else res_type - cocoEval = COCOeval(coco, coco_dets, iou_type) - cocoEval.params.imgIds = img_ids - if res_type == 'proposal': - cocoEval.params.useCats = 0 - cocoEval.params.maxDets = list(max_dets) - cocoEval.evaluate() - cocoEval.accumulate() - cocoEval.summarize() - if res_type == 'segm' or res_type == 'bbox': - metric_names = [ - 'AP', 'AP50', 'AP75', 'APs', 'APm', 'APl', 'AR1', 'AR10', - 'AR100', 'ARs', 'ARm', 'ARl' - ] - eval_results[res_type] = { - metric_names[i]: cocoEval.stats[i] - for i in range(len(metric_names)) - } - else: - eval_results[res_type] = cocoEval.stats - - return eval_results - - -def voc_eval_with_return(result_file, - dataset, - iou_thr=0.5, - logger='print', - only_ap=True): - det_results = mmcv.load(result_file) - annotations = [dataset.get_ann_info(i) for i in range(len(dataset))] - if hasattr(dataset, 'year') and dataset.year == 2007: - dataset_name = 'voc07' - else: - dataset_name = dataset.CLASSES - mean_ap, eval_results = eval_map( - det_results, - annotations, - scale_ranges=None, - iou_thr=iou_thr, - dataset=dataset_name, - logger=logger) - - if only_ap: 
- eval_results = [{ - 'ap': eval_results[i]['ap'] - } for i in range(len(eval_results))] - - return mean_ap, eval_results - - -def parse_args(): - parser = argparse.ArgumentParser(description='MMDet test detector') - parser.add_argument('config', help='test config file path') - parser.add_argument('checkpoint', help='checkpoint file') - parser.add_argument('--out', help='output result file') - parser.add_argument( - '--corruptions', - type=str, - nargs='+', - default='benchmark', - choices=[ - 'all', 'benchmark', 'noise', 'blur', 'weather', 'digital', - 'holdout', 'None', 'gaussian_noise', 'shot_noise', 'impulse_noise', - 'defocus_blur', 'glass_blur', 'motion_blur', 'zoom_blur', 'snow', - 'frost', 'fog', 'brightness', 'contrast', 'elastic_transform', - 'pixelate', 'jpeg_compression', 'speckle_noise', 'gaussian_blur', - 'spatter', 'saturate' - ], - help='corruptions') - parser.add_argument( - '--severities', - type=int, - nargs='+', - default=[0, 1, 2, 3, 4, 5], - help='corruption severity levels') - parser.add_argument( - '--eval', - type=str, - nargs='+', - choices=['proposal', 'proposal_fast', 'bbox', 'segm', 'keypoints'], - help='eval types') - parser.add_argument( - '--iou-thr', - type=float, - default=0.5, - help='IoU threshold for pascal voc evaluation') - parser.add_argument( - '--summaries', - type=bool, - default=False, - help='Print summaries for every corruption and severity') - parser.add_argument( - '--workers', type=int, default=32, help='workers per gpu') - parser.add_argument('--show', action='store_true', help='show results') - parser.add_argument( - '--show-dir', help='directory where painted images will be saved') - parser.add_argument( - '--show-score-thr', - type=float, - default=0.3, - help='score threshold (default: 0.3)') - parser.add_argument('--tmpdir', help='tmp dir for writing some results') - parser.add_argument('--seed', type=int, default=None, help='random seed') - parser.add_argument( - '--launcher', - choices=['none', 'pytorch', 'slurm', 'mpi'], - default='none', - help='job launcher') - parser.add_argument('--local_rank', type=int, default=0) - parser.add_argument( - '--final-prints', - type=str, - nargs='+', - choices=['P', 'mPC', 'rPC'], - default='mPC', - help='corruption benchmark metric to print at the end') - parser.add_argument( - '--final-prints-aggregate', - type=str, - choices=['all', 'benchmark'], - default='benchmark', - help='aggregate all results or only those for benchmark corruptions') - parser.add_argument( - '--cfg-options', - nargs='+', - action=DictAction, - help='override some settings in the used config, the key-value pair ' - 'in xxx=yyy format will be merged into config file. If the value to ' - 'be overwritten is a list, it should be like key="[a,b]" or key=a,b ' - 'It also allows nested list/tuple values, e.g. 
key="[(a,b),(c,d)]" ' - 'Note that the quotation marks are necessary and that no white space ' - 'is allowed.') - args = parser.parse_args() - if 'LOCAL_RANK' not in os.environ: - os.environ['LOCAL_RANK'] = str(args.local_rank) - return args - - -def main(): - args = parse_args() - - assert args.out or args.show or args.show_dir, \ - ('Please specify at least one operation (save or show the results) ' - 'with the argument "--out", "--show" or "show-dir"') - - if args.out is not None and not args.out.endswith(('.pkl', '.pickle')): - raise ValueError('The output file must be a pkl file.') - - cfg = mmcv.Config.fromfile(args.config) - if args.cfg_options is not None: - cfg.merge_from_dict(args.cfg_options) - # import modules from string list. - if cfg.get('custom_imports', None): - from mmcv.utils import import_modules_from_strings - import_modules_from_strings(**cfg['custom_imports']) - # set cudnn_benchmark - if cfg.get('cudnn_benchmark', False): - torch.backends.cudnn.benchmark = True - cfg.model.pretrained = None - cfg.data.test.test_mode = True - if args.workers == 0: - args.workers = cfg.data.workers_per_gpu - - # init distributed env first, since logger depends on the dist info. - if args.launcher == 'none': - distributed = False - else: - distributed = True - init_dist(args.launcher, **cfg.dist_params) - - # set random seeds - if args.seed is not None: - set_random_seed(args.seed) - - if 'all' in args.corruptions: - corruptions = [ - 'gaussian_noise', 'shot_noise', 'impulse_noise', 'defocus_blur', - 'glass_blur', 'motion_blur', 'zoom_blur', 'snow', 'frost', 'fog', - 'brightness', 'contrast', 'elastic_transform', 'pixelate', - 'jpeg_compression', 'speckle_noise', 'gaussian_blur', 'spatter', - 'saturate' - ] - elif 'benchmark' in args.corruptions: - corruptions = [ - 'gaussian_noise', 'shot_noise', 'impulse_noise', 'defocus_blur', - 'glass_blur', 'motion_blur', 'zoom_blur', 'snow', 'frost', 'fog', - 'brightness', 'contrast', 'elastic_transform', 'pixelate', - 'jpeg_compression' - ] - elif 'noise' in args.corruptions: - corruptions = ['gaussian_noise', 'shot_noise', 'impulse_noise'] - elif 'blur' in args.corruptions: - corruptions = [ - 'defocus_blur', 'glass_blur', 'motion_blur', 'zoom_blur' - ] - elif 'weather' in args.corruptions: - corruptions = ['snow', 'frost', 'fog', 'brightness'] - elif 'digital' in args.corruptions: - corruptions = [ - 'contrast', 'elastic_transform', 'pixelate', 'jpeg_compression' - ] - elif 'holdout' in args.corruptions: - corruptions = ['speckle_noise', 'gaussian_blur', 'spatter', 'saturate'] - elif 'None' in args.corruptions: - corruptions = ['None'] - args.severities = [0] - else: - corruptions = args.corruptions - - rank, _ = get_dist_info() - aggregated_results = {} - for corr_i, corruption in enumerate(corruptions): - aggregated_results[corruption] = {} - for sev_i, corruption_severity in enumerate(args.severities): - # evaluate severity 0 (= no corruption) only once - if corr_i > 0 and corruption_severity == 0: - aggregated_results[corruption][0] = \ - aggregated_results[corruptions[0]][0] - continue - - test_data_cfg = copy.deepcopy(cfg.data.test) - # assign corruption and severity - if corruption_severity > 0: - corruption_trans = dict( - type='Corrupt', - corruption=corruption, - severity=corruption_severity) - # TODO: hard coded "1", we assume that the first step is - # loading images, which needs to be fixed in the future - test_data_cfg['pipeline'].insert(1, corruption_trans) - - # print info - print(f'\nTesting {corruption} at severity 
{corruption_severity}') - - # build the dataloader - # TODO: support multiple images per gpu - # (only minor changes are needed) - dataset = build_dataset(test_data_cfg) - data_loader = build_dataloader( - dataset, - samples_per_gpu=1, - workers_per_gpu=args.workers, - dist=distributed, - shuffle=False) - - # build the model and load checkpoint - cfg.model.train_cfg = None - model = build_detector(cfg.model, test_cfg=cfg.get('test_cfg')) - fp16_cfg = cfg.get('fp16', None) - if fp16_cfg is not None: - wrap_fp16_model(model) - checkpoint = load_checkpoint( - model, args.checkpoint, map_location='cpu') - # old versions did not save class info in checkpoints, - # this walkaround is for backward compatibility - if 'CLASSES' in checkpoint.get('meta', {}): - model.CLASSES = checkpoint['meta']['CLASSES'] - else: - model.CLASSES = dataset.CLASSES - - if not distributed: - model = MMDataParallel(model, device_ids=[0]) - show_dir = args.show_dir - if show_dir is not None: - show_dir = osp.join(show_dir, corruption) - show_dir = osp.join(show_dir, str(corruption_severity)) - if not osp.exists(show_dir): - osp.makedirs(show_dir) - outputs = single_gpu_test(model, data_loader, args.show, - show_dir, args.show_score_thr) - else: - model = MMDistributedDataParallel( - model.cuda(), - device_ids=[torch.cuda.current_device()], - broadcast_buffers=False) - outputs = multi_gpu_test(model, data_loader, args.tmpdir) - - if args.out and rank == 0: - eval_results_filename = ( - osp.splitext(args.out)[0] + '_results' + - osp.splitext(args.out)[1]) - mmcv.dump(outputs, args.out) - eval_types = args.eval - if cfg.dataset_type == 'VOCDataset': - if eval_types: - for eval_type in eval_types: - if eval_type == 'bbox': - test_dataset = mmcv.runner.obj_from_dict( - cfg.data.test, datasets) - logger = 'print' if args.summaries else None - mean_ap, eval_results = \ - voc_eval_with_return( - args.out, test_dataset, - args.iou_thr, logger) - aggregated_results[corruption][ - corruption_severity] = eval_results - else: - print('\nOnly "bbox" evaluation \ - is supported for pascal voc') - else: - if eval_types: - print(f'Starting evaluate {" and ".join(eval_types)}') - if eval_types == ['proposal_fast']: - result_file = args.out - else: - if not isinstance(outputs[0], dict): - result_files = dataset.results2json( - outputs, args.out) - else: - for name in outputs[0]: - print(f'\nEvaluating {name}') - outputs_ = [out[name] for out in outputs] - result_file = args.out - + f'.{name}' - result_files = dataset.results2json( - outputs_, result_file) - eval_results = coco_eval_with_return( - result_files, eval_types, dataset.coco) - aggregated_results[corruption][ - corruption_severity] = eval_results - else: - print('\nNo task was selected for evaluation;' - '\nUse --eval to select a task') - - # save results after each evaluation - mmcv.dump(aggregated_results, eval_results_filename) - - if rank == 0: - # print final results - print('\nAggregated results:') - prints = args.final_prints - aggregate = args.final_prints_aggregate - - if cfg.dataset_type == 'VOCDataset': - get_results( - eval_results_filename, - dataset='voc', - prints=prints, - aggregate=aggregate) - else: - get_results( - eval_results_filename, - dataset='coco', - prints=prints, - aggregate=aggregate) - - -if __name__ == '__main__': - main() diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/cmd_macos.sh b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/cmd_macos.sh deleted file mode 100644 index 
1b052e5c34bd43b7e898858d7993dd5f6a7a6f08..0000000000000000000000000000000000000000 --- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/cmd_macos.sh +++ /dev/null @@ -1,24 +0,0 @@ -#!/bin/bash - -cd "$(dirname "${BASH_SOURCE[0]}")" - -if [[ "$(pwd)" =~ " " ]]; then echo This script relies on Miniconda which can not be silently installed under a path with spaces. && exit; fi - -# deactivate existing conda envs as needed to avoid conflicts -{ conda deactivate && conda deactivate && conda deactivate; } 2> /dev/null - -# config -CONDA_ROOT_PREFIX="$(pwd)/installer_files/conda" -INSTALL_ENV_DIR="$(pwd)/installer_files/env" - -# environment isolation -export PYTHONNOUSERSITE=1 -unset PYTHONPATH -unset PYTHONHOME -export CUDA_PATH="$INSTALL_ENV_DIR" -export CUDA_HOME="$CUDA_PATH" - -# activate env -source $CONDA_ROOT_PREFIX/etc/profile.d/conda.sh -conda activate $INSTALL_ENV_DIR -exec bash --norc diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/utils/parrots_wrapper.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/utils/parrots_wrapper.py deleted file mode 100644 index 93c97640d4b9ed088ca82cfe03e6efebfcfa9dbf..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/utils/parrots_wrapper.py +++ /dev/null @@ -1,107 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from functools import partial - -import torch - -TORCH_VERSION = torch.__version__ - - -def is_rocm_pytorch() -> bool: - is_rocm = False - if TORCH_VERSION != 'parrots': - try: - from torch.utils.cpp_extension import ROCM_HOME - is_rocm = True if ((torch.version.hip is not None) and - (ROCM_HOME is not None)) else False - except ImportError: - pass - return is_rocm - - -def _get_cuda_home(): - if TORCH_VERSION == 'parrots': - from parrots.utils.build_extension import CUDA_HOME - else: - if is_rocm_pytorch(): - from torch.utils.cpp_extension import ROCM_HOME - CUDA_HOME = ROCM_HOME - else: - from torch.utils.cpp_extension import CUDA_HOME - return CUDA_HOME - - -def get_build_config(): - if TORCH_VERSION == 'parrots': - from parrots.config import get_build_info - return get_build_info() - else: - return torch.__config__.show() - - -def _get_conv(): - if TORCH_VERSION == 'parrots': - from parrots.nn.modules.conv import _ConvNd, _ConvTransposeMixin - else: - from torch.nn.modules.conv import _ConvNd, _ConvTransposeMixin - return _ConvNd, _ConvTransposeMixin - - -def _get_dataloader(): - if TORCH_VERSION == 'parrots': - from torch.utils.data import DataLoader, PoolDataLoader - else: - from torch.utils.data import DataLoader - PoolDataLoader = DataLoader - return DataLoader, PoolDataLoader - - -def _get_extension(): - if TORCH_VERSION == 'parrots': - from parrots.utils.build_extension import BuildExtension, Extension - CppExtension = partial(Extension, cuda=False) - CUDAExtension = partial(Extension, cuda=True) - else: - from torch.utils.cpp_extension import (BuildExtension, CppExtension, - CUDAExtension) - return BuildExtension, CppExtension, CUDAExtension - - -def _get_pool(): - if TORCH_VERSION == 'parrots': - from parrots.nn.modules.pool import (_AdaptiveAvgPoolNd, - _AdaptiveMaxPoolNd, _AvgPoolNd, - _MaxPoolNd) - else: - from torch.nn.modules.pooling import (_AdaptiveAvgPoolNd, - _AdaptiveMaxPoolNd, _AvgPoolNd, - _MaxPoolNd) - return _AdaptiveAvgPoolNd, _AdaptiveMaxPoolNd, _AvgPoolNd, _MaxPoolNd - - -def _get_norm(): - if TORCH_VERSION == 'parrots': - from parrots.nn.modules.batchnorm import _BatchNorm, 
_InstanceNorm - SyncBatchNorm_ = torch.nn.SyncBatchNorm2d - else: - from torch.nn.modules.instancenorm import _InstanceNorm - from torch.nn.modules.batchnorm import _BatchNorm - SyncBatchNorm_ = torch.nn.SyncBatchNorm - return _BatchNorm, _InstanceNorm, SyncBatchNorm_ - - -_ConvNd, _ConvTransposeMixin = _get_conv() -DataLoader, PoolDataLoader = _get_dataloader() -BuildExtension, CppExtension, CUDAExtension = _get_extension() -_BatchNorm, _InstanceNorm, SyncBatchNorm_ = _get_norm() -_AdaptiveAvgPoolNd, _AdaptiveMaxPoolNd, _AvgPoolNd, _MaxPoolNd = _get_pool() - - -class SyncBatchNorm(SyncBatchNorm_): - - def _check_input_dim(self, input): - if TORCH_VERSION == 'parrots': - if input.dim() < 2: - raise ValueError( - f'expected at least 2D input (got {input.dim()}D input)') - else: - super()._check_input_dim(input) diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/datasets/drive.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/datasets/drive.py deleted file mode 100644 index 3cbfda8ae74bdf26c5aef197ff2866a7c7ad0cfd..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/datasets/drive.py +++ /dev/null @@ -1,27 +0,0 @@ -import os.path as osp - -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class DRIVEDataset(CustomDataset): - """DRIVE dataset. - - In segmentation map annotation for DRIVE, 0 stands for background, which is - included in 2 categories. ``reduce_zero_label`` is fixed to False. The - ``img_suffix`` is fixed to '.png' and ``seg_map_suffix`` is fixed to - '_manual1.png'. - """ - - CLASSES = ('background', 'vessel') - - PALETTE = [[120, 120, 120], [6, 230, 230]] - - def __init__(self, **kwargs): - super(DRIVEDataset, self).__init__( - img_suffix='.png', - seg_map_suffix='_manual1.png', - reduce_zero_label=False, - **kwargs) - assert osp.exists(self.img_dir) diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/image_degradation/utils_image.py b/spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/image_degradation/utils_image.py deleted file mode 100644 index 0175f155ad900ae33c3c46ed87f49b352e3faf98..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/image_degradation/utils_image.py +++ /dev/null @@ -1,916 +0,0 @@ -import os -import math -import random -import numpy as np -import torch -import cv2 -from torchvision.utils import make_grid -from datetime import datetime -#import matplotlib.pyplot as plt # TODO: check with Dominik, also bsrgan.py vs bsrgan_light.py - - -os.environ["KMP_DUPLICATE_LIB_OK"]="TRUE" - - -''' -# -------------------------------------------- -# Kai Zhang (github: https://github.com/cszn) -# 03/Mar/2019 -# -------------------------------------------- -# https://github.com/twhui/SRGAN-pyTorch -# https://github.com/xinntao/BasicSR -# -------------------------------------------- -''' - - -IMG_EXTENSIONS = ['.jpg', '.JPG', '.jpeg', '.JPEG', '.png', '.PNG', '.ppm', '.PPM', '.bmp', '.BMP', '.tif'] - - -def is_image_file(filename): - return any(filename.endswith(extension) for extension in IMG_EXTENSIONS) - - -def get_timestamp(): - return datetime.now().strftime('%y%m%d-%H%M%S') - - -def imshow(x, title=None, cbar=False, figsize=None): - plt.figure(figsize=figsize) - plt.imshow(np.squeeze(x), interpolation='nearest', cmap='gray') - if title: - plt.title(title) - if cbar: - plt.colorbar() - plt.show() - - -def surf(Z, cmap='rainbow', figsize=None): - 
plt.figure(figsize=figsize) - ax3 = plt.axes(projection='3d') - - w, h = Z.shape[:2] - xx = np.arange(0,w,1) - yy = np.arange(0,h,1) - X, Y = np.meshgrid(xx, yy) - ax3.plot_surface(X,Y,Z,cmap=cmap) - #ax3.contour(X,Y,Z, zdim='z',offset=-2,cmap=cmap) - plt.show() - - -''' -# -------------------------------------------- -# get image pathes -# -------------------------------------------- -''' - - -def get_image_paths(dataroot): - paths = None # return None if dataroot is None - if dataroot is not None: - paths = sorted(_get_paths_from_images(dataroot)) - return paths - - -def _get_paths_from_images(path): - assert os.path.isdir(path), '{:s} is not a valid directory'.format(path) - images = [] - for dirpath, _, fnames in sorted(os.walk(path)): - for fname in sorted(fnames): - if is_image_file(fname): - img_path = os.path.join(dirpath, fname) - images.append(img_path) - assert images, '{:s} has no valid image file'.format(path) - return images - - -''' -# -------------------------------------------- -# split large images into small images -# -------------------------------------------- -''' - - -def patches_from_image(img, p_size=512, p_overlap=64, p_max=800): - w, h = img.shape[:2] - patches = [] - if w > p_max and h > p_max: - w1 = list(np.arange(0, w-p_size, p_size-p_overlap, dtype=np.int)) - h1 = list(np.arange(0, h-p_size, p_size-p_overlap, dtype=np.int)) - w1.append(w-p_size) - h1.append(h-p_size) -# print(w1) -# print(h1) - for i in w1: - for j in h1: - patches.append(img[i:i+p_size, j:j+p_size,:]) - else: - patches.append(img) - - return patches - - -def imssave(imgs, img_path): - """ - imgs: list, N images of size WxHxC - """ - img_name, ext = os.path.splitext(os.path.basename(img_path)) - - for i, img in enumerate(imgs): - if img.ndim == 3: - img = img[:, :, [2, 1, 0]] - new_path = os.path.join(os.path.dirname(img_path), img_name+str('_s{:04d}'.format(i))+'.png') - cv2.imwrite(new_path, img) - - -def split_imageset(original_dataroot, taget_dataroot, n_channels=3, p_size=800, p_overlap=96, p_max=1000): - """ - split the large images from original_dataroot into small overlapped images with size (p_size)x(p_size), - and save them into taget_dataroot; only the images with larger size than (p_max)x(p_max) - will be splitted. - Args: - original_dataroot: - taget_dataroot: - p_size: size of small images - p_overlap: patch size in training is a good choice - p_max: images with smaller size than (p_max)x(p_max) keep unchanged. - """ - paths = get_image_paths(original_dataroot) - for img_path in paths: - # img_name, ext = os.path.splitext(os.path.basename(img_path)) - img = imread_uint(img_path, n_channels=n_channels) - patches = patches_from_image(img, p_size, p_overlap, p_max) - imssave(patches, os.path.join(taget_dataroot,os.path.basename(img_path))) - #if original_dataroot == taget_dataroot: - #del img_path - -''' -# -------------------------------------------- -# makedir -# -------------------------------------------- -''' - - -def mkdir(path): - if not os.path.exists(path): - os.makedirs(path) - - -def mkdirs(paths): - if isinstance(paths, str): - mkdir(paths) - else: - for path in paths: - mkdir(path) - - -def mkdir_and_rename(path): - if os.path.exists(path): - new_name = path + '_archived_' + get_timestamp() - print('Path already exists. 
Rename it to [{:s}]'.format(new_name)) - os.rename(path, new_name) - os.makedirs(path) - - -''' -# -------------------------------------------- -# read image from path -# opencv is fast, but read BGR numpy image -# -------------------------------------------- -''' - - -# -------------------------------------------- -# get uint8 image of size HxWxn_channles (RGB) -# -------------------------------------------- -def imread_uint(path, n_channels=3): - # input: path - # output: HxWx3(RGB or GGG), or HxWx1 (G) - if n_channels == 1: - img = cv2.imread(path, 0) # cv2.IMREAD_GRAYSCALE - img = np.expand_dims(img, axis=2) # HxWx1 - elif n_channels == 3: - img = cv2.imread(path, cv2.IMREAD_UNCHANGED) # BGR or G - if img.ndim == 2: - img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB) # GGG - else: - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # RGB - return img - - -# -------------------------------------------- -# matlab's imwrite -# -------------------------------------------- -def imsave(img, img_path): - img = np.squeeze(img) - if img.ndim == 3: - img = img[:, :, [2, 1, 0]] - cv2.imwrite(img_path, img) - -def imwrite(img, img_path): - img = np.squeeze(img) - if img.ndim == 3: - img = img[:, :, [2, 1, 0]] - cv2.imwrite(img_path, img) - - - -# -------------------------------------------- -# get single image of size HxWxn_channles (BGR) -# -------------------------------------------- -def read_img(path): - # read image by cv2 - # return: Numpy float32, HWC, BGR, [0,1] - img = cv2.imread(path, cv2.IMREAD_UNCHANGED) # cv2.IMREAD_GRAYSCALE - img = img.astype(np.float32) / 255. - if img.ndim == 2: - img = np.expand_dims(img, axis=2) - # some images have 4 channels - if img.shape[2] > 3: - img = img[:, :, :3] - return img - - -''' -# -------------------------------------------- -# image format conversion -# -------------------------------------------- -# numpy(single) <---> numpy(unit) -# numpy(single) <---> tensor -# numpy(unit) <---> tensor -# -------------------------------------------- -''' - - -# -------------------------------------------- -# numpy(single) [0, 1] <---> numpy(unit) -# -------------------------------------------- - - -def uint2single(img): - - return np.float32(img/255.) - - -def single2uint(img): - - return np.uint8((img.clip(0, 1)*255.).round()) - - -def uint162single(img): - - return np.float32(img/65535.) - - -def single2uint16(img): - - return np.uint16((img.clip(0, 1)*65535.).round()) - - -# -------------------------------------------- -# numpy(unit) (HxWxC or HxW) <---> tensor -# -------------------------------------------- - - -# convert uint to 4-dimensional torch tensor -def uint2tensor4(img): - if img.ndim == 2: - img = np.expand_dims(img, axis=2) - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().div(255.).unsqueeze(0) - - -# convert uint to 3-dimensional torch tensor -def uint2tensor3(img): - if img.ndim == 2: - img = np.expand_dims(img, axis=2) - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().div(255.) 
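# A minimal round-trip sketch with the conversion helpers in this module
# ('example.png' is a placeholder path, not part of the module):
#
#   img = imread_uint('example.png', n_channels=3)  # HxWx3, uint8, RGB
#   t = uint2tensor4(img)                           # 1x3xHxW, float32 in [0, 1]
#   img_back = tensor2uint(t)                       # back to HxWx3 uint8 (tensor2uint is defined just below)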
- - -# convert 2/3/4-dimensional torch tensor to uint -def tensor2uint(img): - img = img.data.squeeze().float().clamp_(0, 1).cpu().numpy() - if img.ndim == 3: - img = np.transpose(img, (1, 2, 0)) - return np.uint8((img*255.0).round()) - - -# -------------------------------------------- -# numpy(single) (HxWxC) <---> tensor -# -------------------------------------------- - - -# convert single (HxWxC) to 3-dimensional torch tensor -def single2tensor3(img): - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float() - - -# convert single (HxWxC) to 4-dimensional torch tensor -def single2tensor4(img): - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().unsqueeze(0) - - -# convert torch tensor to single -def tensor2single(img): - img = img.data.squeeze().float().cpu().numpy() - if img.ndim == 3: - img = np.transpose(img, (1, 2, 0)) - - return img - -# convert torch tensor to single -def tensor2single3(img): - img = img.data.squeeze().float().cpu().numpy() - if img.ndim == 3: - img = np.transpose(img, (1, 2, 0)) - elif img.ndim == 2: - img = np.expand_dims(img, axis=2) - return img - - -def single2tensor5(img): - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1, 3).float().unsqueeze(0) - - -def single32tensor5(img): - return torch.from_numpy(np.ascontiguousarray(img)).float().unsqueeze(0).unsqueeze(0) - - -def single42tensor4(img): - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1, 3).float() - - -# from skimage.io import imread, imsave -def tensor2img(tensor, out_type=np.uint8, min_max=(0, 1)): - ''' - Converts a torch Tensor into an image Numpy array of BGR channel order - Input: 4D(B,(3/1),H,W), 3D(C,H,W), or 2D(H,W), any range, RGB channel order - Output: 3D(H,W,C) or 2D(H,W), [0,255], np.uint8 (default) - ''' - tensor = tensor.squeeze().float().cpu().clamp_(*min_max) # squeeze first, then clamp - tensor = (tensor - min_max[0]) / (min_max[1] - min_max[0]) # to range [0,1] - n_dim = tensor.dim() - if n_dim == 4: - n_img = len(tensor) - img_np = make_grid(tensor, nrow=int(math.sqrt(n_img)), normalize=False).numpy() - img_np = np.transpose(img_np[[2, 1, 0], :, :], (1, 2, 0)) # HWC, BGR - elif n_dim == 3: - img_np = tensor.numpy() - img_np = np.transpose(img_np[[2, 1, 0], :, :], (1, 2, 0)) # HWC, BGR - elif n_dim == 2: - img_np = tensor.numpy() - else: - raise TypeError( - 'Only support 4D, 3D and 2D tensor. But received with dimension: {:d}'.format(n_dim)) - if out_type == np.uint8: - img_np = (img_np * 255.0).round() - # Important. Unlike matlab, numpy.unit8() WILL NOT round by default. - return img_np.astype(out_type) - - -''' -# -------------------------------------------- -# Augmentation, flipe and/or rotate -# -------------------------------------------- -# The following two are enough. 
-# (1) augmet_img: numpy image of WxHxC or WxH -# (2) augment_img_tensor4: tensor image 1xCxWxH -# -------------------------------------------- -''' - - -def augment_img(img, mode=0): - '''Kai Zhang (github: https://github.com/cszn) - ''' - if mode == 0: - return img - elif mode == 1: - return np.flipud(np.rot90(img)) - elif mode == 2: - return np.flipud(img) - elif mode == 3: - return np.rot90(img, k=3) - elif mode == 4: - return np.flipud(np.rot90(img, k=2)) - elif mode == 5: - return np.rot90(img) - elif mode == 6: - return np.rot90(img, k=2) - elif mode == 7: - return np.flipud(np.rot90(img, k=3)) - - -def augment_img_tensor4(img, mode=0): - '''Kai Zhang (github: https://github.com/cszn) - ''' - if mode == 0: - return img - elif mode == 1: - return img.rot90(1, [2, 3]).flip([2]) - elif mode == 2: - return img.flip([2]) - elif mode == 3: - return img.rot90(3, [2, 3]) - elif mode == 4: - return img.rot90(2, [2, 3]).flip([2]) - elif mode == 5: - return img.rot90(1, [2, 3]) - elif mode == 6: - return img.rot90(2, [2, 3]) - elif mode == 7: - return img.rot90(3, [2, 3]).flip([2]) - - -def augment_img_tensor(img, mode=0): - '''Kai Zhang (github: https://github.com/cszn) - ''' - img_size = img.size() - img_np = img.data.cpu().numpy() - if len(img_size) == 3: - img_np = np.transpose(img_np, (1, 2, 0)) - elif len(img_size) == 4: - img_np = np.transpose(img_np, (2, 3, 1, 0)) - img_np = augment_img(img_np, mode=mode) - img_tensor = torch.from_numpy(np.ascontiguousarray(img_np)) - if len(img_size) == 3: - img_tensor = img_tensor.permute(2, 0, 1) - elif len(img_size) == 4: - img_tensor = img_tensor.permute(3, 2, 0, 1) - - return img_tensor.type_as(img) - - -def augment_img_np3(img, mode=0): - if mode == 0: - return img - elif mode == 1: - return img.transpose(1, 0, 2) - elif mode == 2: - return img[::-1, :, :] - elif mode == 3: - img = img[::-1, :, :] - img = img.transpose(1, 0, 2) - return img - elif mode == 4: - return img[:, ::-1, :] - elif mode == 5: - img = img[:, ::-1, :] - img = img.transpose(1, 0, 2) - return img - elif mode == 6: - img = img[:, ::-1, :] - img = img[::-1, :, :] - return img - elif mode == 7: - img = img[:, ::-1, :] - img = img[::-1, :, :] - img = img.transpose(1, 0, 2) - return img - - -def augment_imgs(img_list, hflip=True, rot=True): - # horizontal flip OR rotate - hflip = hflip and random.random() < 0.5 - vflip = rot and random.random() < 0.5 - rot90 = rot and random.random() < 0.5 - - def _augment(img): - if hflip: - img = img[:, ::-1, :] - if vflip: - img = img[::-1, :, :] - if rot90: - img = img.transpose(1, 0, 2) - return img - - return [_augment(img) for img in img_list] - - -''' -# -------------------------------------------- -# modcrop and shave -# -------------------------------------------- -''' - - -def modcrop(img_in, scale): - # img_in: Numpy, HWC or HW - img = np.copy(img_in) - if img.ndim == 2: - H, W = img.shape - H_r, W_r = H % scale, W % scale - img = img[:H - H_r, :W - W_r] - elif img.ndim == 3: - H, W, C = img.shape - H_r, W_r = H % scale, W % scale - img = img[:H - H_r, :W - W_r, :] - else: - raise ValueError('Wrong img ndim: [{:d}].'.format(img.ndim)) - return img - - -def shave(img_in, border=0): - # img_in: Numpy, HWC or HW - img = np.copy(img_in) - h, w = img.shape[:2] - img = img[border:h-border, border:w-border] - return img - - -''' -# -------------------------------------------- -# image processing process on numpy image -# channel_convert(in_c, tar_type, img_list): -# rgb2ycbcr(img, only_y=True): -# bgr2ycbcr(img, only_y=True): -# 
ycbcr2rgb(img): -# -------------------------------------------- -''' - - -def rgb2ycbcr(img, only_y=True): - '''same as matlab rgb2ycbcr - only_y: only return Y channel - Input: - uint8, [0, 255] - float, [0, 1] - ''' - in_img_type = img.dtype - img.astype(np.float32) - if in_img_type != np.uint8: - img *= 255. - # convert - if only_y: - rlt = np.dot(img, [65.481, 128.553, 24.966]) / 255.0 + 16.0 - else: - rlt = np.matmul(img, [[65.481, -37.797, 112.0], [128.553, -74.203, -93.786], - [24.966, 112.0, -18.214]]) / 255.0 + [16, 128, 128] - if in_img_type == np.uint8: - rlt = rlt.round() - else: - rlt /= 255. - return rlt.astype(in_img_type) - - -def ycbcr2rgb(img): - '''same as matlab ycbcr2rgb - Input: - uint8, [0, 255] - float, [0, 1] - ''' - in_img_type = img.dtype - img.astype(np.float32) - if in_img_type != np.uint8: - img *= 255. - # convert - rlt = np.matmul(img, [[0.00456621, 0.00456621, 0.00456621], [0, -0.00153632, 0.00791071], - [0.00625893, -0.00318811, 0]]) * 255.0 + [-222.921, 135.576, -276.836] - if in_img_type == np.uint8: - rlt = rlt.round() - else: - rlt /= 255. - return rlt.astype(in_img_type) - - -def bgr2ycbcr(img, only_y=True): - '''bgr version of rgb2ycbcr - only_y: only return Y channel - Input: - uint8, [0, 255] - float, [0, 1] - ''' - in_img_type = img.dtype - img.astype(np.float32) - if in_img_type != np.uint8: - img *= 255. - # convert - if only_y: - rlt = np.dot(img, [24.966, 128.553, 65.481]) / 255.0 + 16.0 - else: - rlt = np.matmul(img, [[24.966, 112.0, -18.214], [128.553, -74.203, -93.786], - [65.481, -37.797, 112.0]]) / 255.0 + [16, 128, 128] - if in_img_type == np.uint8: - rlt = rlt.round() - else: - rlt /= 255. - return rlt.astype(in_img_type) - - -def channel_convert(in_c, tar_type, img_list): - # conversion among BGR, gray and y - if in_c == 3 and tar_type == 'gray': # BGR to gray - gray_list = [cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) for img in img_list] - return [np.expand_dims(img, axis=2) for img in gray_list] - elif in_c == 3 and tar_type == 'y': # BGR to y - y_list = [bgr2ycbcr(img, only_y=True) for img in img_list] - return [np.expand_dims(img, axis=2) for img in y_list] - elif in_c == 1 and tar_type == 'RGB': # gray/y to BGR - return [cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) for img in img_list] - else: - return img_list - - -''' -# -------------------------------------------- -# metric, PSNR and SSIM -# -------------------------------------------- -''' - - -# -------------------------------------------- -# PSNR -# -------------------------------------------- -def calculate_psnr(img1, img2, border=0): - # img1 and img2 have range [0, 255] - #img1 = img1.squeeze() - #img2 = img2.squeeze() - if not img1.shape == img2.shape: - raise ValueError('Input images must have the same dimensions.') - h, w = img1.shape[:2] - img1 = img1[border:h-border, border:w-border] - img2 = img2[border:h-border, border:w-border] - - img1 = img1.astype(np.float64) - img2 = img2.astype(np.float64) - mse = np.mean((img1 - img2)**2) - if mse == 0: - return float('inf') - return 20 * math.log10(255.0 / math.sqrt(mse)) - - -# -------------------------------------------- -# SSIM -# -------------------------------------------- -def calculate_ssim(img1, img2, border=0): - '''calculate SSIM - the same outputs as MATLAB's - img1, img2: [0, 255] - ''' - #img1 = img1.squeeze() - #img2 = img2.squeeze() - if not img1.shape == img2.shape: - raise ValueError('Input images must have the same dimensions.') - h, w = img1.shape[:2] - img1 = img1[border:h-border, border:w-border] - img2 = 
img2[border:h-border, border:w-border] - - if img1.ndim == 2: - return ssim(img1, img2) - elif img1.ndim == 3: - if img1.shape[2] == 3: - ssims = [] - for i in range(3): - ssims.append(ssim(img1[:,:,i], img2[:,:,i])) - return np.array(ssims).mean() - elif img1.shape[2] == 1: - return ssim(np.squeeze(img1), np.squeeze(img2)) - else: - raise ValueError('Wrong input image dimensions.') - - -def ssim(img1, img2): - C1 = (0.01 * 255)**2 - C2 = (0.03 * 255)**2 - - img1 = img1.astype(np.float64) - img2 = img2.astype(np.float64) - kernel = cv2.getGaussianKernel(11, 1.5) - window = np.outer(kernel, kernel.transpose()) - - mu1 = cv2.filter2D(img1, -1, window)[5:-5, 5:-5] # valid - mu2 = cv2.filter2D(img2, -1, window)[5:-5, 5:-5] - mu1_sq = mu1**2 - mu2_sq = mu2**2 - mu1_mu2 = mu1 * mu2 - sigma1_sq = cv2.filter2D(img1**2, -1, window)[5:-5, 5:-5] - mu1_sq - sigma2_sq = cv2.filter2D(img2**2, -1, window)[5:-5, 5:-5] - mu2_sq - sigma12 = cv2.filter2D(img1 * img2, -1, window)[5:-5, 5:-5] - mu1_mu2 - - ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / ((mu1_sq + mu2_sq + C1) * - (sigma1_sq + sigma2_sq + C2)) - return ssim_map.mean() - - -''' -# -------------------------------------------- -# matlab's bicubic imresize (numpy and torch) [0, 1] -# -------------------------------------------- -''' - - -# matlab 'imresize' function, now only support 'bicubic' -def cubic(x): - absx = torch.abs(x) - absx2 = absx**2 - absx3 = absx**3 - return (1.5*absx3 - 2.5*absx2 + 1) * ((absx <= 1).type_as(absx)) + \ - (-0.5*absx3 + 2.5*absx2 - 4*absx + 2) * (((absx > 1)*(absx <= 2)).type_as(absx)) - - -def calculate_weights_indices(in_length, out_length, scale, kernel, kernel_width, antialiasing): - if (scale < 1) and (antialiasing): - # Use a modified kernel to simultaneously interpolate and antialias- larger kernel width - kernel_width = kernel_width / scale - - # Output-space coordinates - x = torch.linspace(1, out_length, out_length) - - # Input-space coordinates. Calculate the inverse mapping such that 0.5 - # in output space maps to 0.5 in input space, and 0.5+scale in output - # space maps to 1.5 in input space. - u = x / scale + 0.5 * (1 - 1 / scale) - - # What is the left-most pixel that can be involved in the computation? - left = torch.floor(u - kernel_width / 2) - - # What is the maximum number of pixels that can be involved in the - # computation? Note: it's OK to use an extra pixel here; if the - # corresponding weights are all zero, it will be eliminated at the end - # of this function. - P = math.ceil(kernel_width) + 2 - - # The indices of the input pixels involved in computing the k-th output - # pixel are in row k of the indices matrix. - indices = left.view(out_length, 1).expand(out_length, P) + torch.linspace(0, P - 1, P).view( - 1, P).expand(out_length, P) - - # The weights used to compute the k-th output pixel are in row k of the - # weights matrix. - distance_to_center = u.view(out_length, 1).expand(out_length, P) - indices - # apply cubic kernel - if (scale < 1) and (antialiasing): - weights = scale * cubic(distance_to_center * scale) - else: - weights = cubic(distance_to_center) - # Normalize the weights matrix so that each row sums to 1. - weights_sum = torch.sum(weights, 1).view(out_length, 1) - weights = weights / weights_sum.expand(out_length, P) - - # If a column in weights is all zero, get rid of it. only consider the first and last column. 
- weights_zero_tmp = torch.sum((weights == 0), 0) - if not math.isclose(weights_zero_tmp[0], 0, rel_tol=1e-6): - indices = indices.narrow(1, 1, P - 2) - weights = weights.narrow(1, 1, P - 2) - if not math.isclose(weights_zero_tmp[-1], 0, rel_tol=1e-6): - indices = indices.narrow(1, 0, P - 2) - weights = weights.narrow(1, 0, P - 2) - weights = weights.contiguous() - indices = indices.contiguous() - sym_len_s = -indices.min() + 1 - sym_len_e = indices.max() - in_length - indices = indices + sym_len_s - 1 - return weights, indices, int(sym_len_s), int(sym_len_e) - - -# -------------------------------------------- -# imresize for tensor image [0, 1] -# -------------------------------------------- -def imresize(img, scale, antialiasing=True): - # Now the scale should be the same for H and W - # input: img: pytorch tensor, CHW or HW [0,1] - # output: CHW or HW [0,1] w/o round - need_squeeze = True if img.dim() == 2 else False - if need_squeeze: - img.unsqueeze_(0) - in_C, in_H, in_W = img.size() - out_C, out_H, out_W = in_C, math.ceil(in_H * scale), math.ceil(in_W * scale) - kernel_width = 4 - kernel = 'cubic' - - # Return the desired dimension order for performing the resize. The - # strategy is to perform the resize first along the dimension with the - # smallest scale factor. - # Now we do not support this. - - # get weights and indices - weights_H, indices_H, sym_len_Hs, sym_len_He = calculate_weights_indices( - in_H, out_H, scale, kernel, kernel_width, antialiasing) - weights_W, indices_W, sym_len_Ws, sym_len_We = calculate_weights_indices( - in_W, out_W, scale, kernel, kernel_width, antialiasing) - # process H dimension - # symmetric copying - img_aug = torch.FloatTensor(in_C, in_H + sym_len_Hs + sym_len_He, in_W) - img_aug.narrow(1, sym_len_Hs, in_H).copy_(img) - - sym_patch = img[:, :sym_len_Hs, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - img_aug.narrow(1, 0, sym_len_Hs).copy_(sym_patch_inv) - - sym_patch = img[:, -sym_len_He:, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - img_aug.narrow(1, sym_len_Hs + in_H, sym_len_He).copy_(sym_patch_inv) - - out_1 = torch.FloatTensor(in_C, out_H, in_W) - kernel_width = weights_H.size(1) - for i in range(out_H): - idx = int(indices_H[i][0]) - for j in range(out_C): - out_1[j, i, :] = img_aug[j, idx:idx + kernel_width, :].transpose(0, 1).mv(weights_H[i]) - - # process W dimension - # symmetric copying - out_1_aug = torch.FloatTensor(in_C, out_H, in_W + sym_len_Ws + sym_len_We) - out_1_aug.narrow(2, sym_len_Ws, in_W).copy_(out_1) - - sym_patch = out_1[:, :, :sym_len_Ws] - inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(2, inv_idx) - out_1_aug.narrow(2, 0, sym_len_Ws).copy_(sym_patch_inv) - - sym_patch = out_1[:, :, -sym_len_We:] - inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(2, inv_idx) - out_1_aug.narrow(2, sym_len_Ws + in_W, sym_len_We).copy_(sym_patch_inv) - - out_2 = torch.FloatTensor(in_C, out_H, out_W) - kernel_width = weights_W.size(1) - for i in range(out_W): - idx = int(indices_W[i][0]) - for j in range(out_C): - out_2[j, :, i] = out_1_aug[j, :, idx:idx + kernel_width].mv(weights_W[i]) - if need_squeeze: - out_2.squeeze_() - return out_2 - - -# -------------------------------------------- -# imresize for numpy image [0, 1] -# -------------------------------------------- -def 
imresize_np(img, scale, antialiasing=True): - # Now the scale should be the same for H and W - # input: img: Numpy, HWC or HW [0,1] - # output: HWC or HW [0,1] w/o round - img = torch.from_numpy(img) - need_squeeze = True if img.dim() == 2 else False - if need_squeeze: - img.unsqueeze_(2) - - in_H, in_W, in_C = img.size() - out_C, out_H, out_W = in_C, math.ceil(in_H * scale), math.ceil(in_W * scale) - kernel_width = 4 - kernel = 'cubic' - - # Return the desired dimension order for performing the resize. The - # strategy is to perform the resize first along the dimension with the - # smallest scale factor. - # Now we do not support this. - - # get weights and indices - weights_H, indices_H, sym_len_Hs, sym_len_He = calculate_weights_indices( - in_H, out_H, scale, kernel, kernel_width, antialiasing) - weights_W, indices_W, sym_len_Ws, sym_len_We = calculate_weights_indices( - in_W, out_W, scale, kernel, kernel_width, antialiasing) - # process H dimension - # symmetric copying - img_aug = torch.FloatTensor(in_H + sym_len_Hs + sym_len_He, in_W, in_C) - img_aug.narrow(0, sym_len_Hs, in_H).copy_(img) - - sym_patch = img[:sym_len_Hs, :, :] - inv_idx = torch.arange(sym_patch.size(0) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(0, inv_idx) - img_aug.narrow(0, 0, sym_len_Hs).copy_(sym_patch_inv) - - sym_patch = img[-sym_len_He:, :, :] - inv_idx = torch.arange(sym_patch.size(0) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(0, inv_idx) - img_aug.narrow(0, sym_len_Hs + in_H, sym_len_He).copy_(sym_patch_inv) - - out_1 = torch.FloatTensor(out_H, in_W, in_C) - kernel_width = weights_H.size(1) - for i in range(out_H): - idx = int(indices_H[i][0]) - for j in range(out_C): - out_1[i, :, j] = img_aug[idx:idx + kernel_width, :, j].transpose(0, 1).mv(weights_H[i]) - - # process W dimension - # symmetric copying - out_1_aug = torch.FloatTensor(out_H, in_W + sym_len_Ws + sym_len_We, in_C) - out_1_aug.narrow(1, sym_len_Ws, in_W).copy_(out_1) - - sym_patch = out_1[:, :sym_len_Ws, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - out_1_aug.narrow(1, 0, sym_len_Ws).copy_(sym_patch_inv) - - sym_patch = out_1[:, -sym_len_We:, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - out_1_aug.narrow(1, sym_len_Ws + in_W, sym_len_We).copy_(sym_patch_inv) - - out_2 = torch.FloatTensor(out_H, out_W, in_C) - kernel_width = weights_W.size(1) - for i in range(out_W): - idx = int(indices_W[i][0]) - for j in range(out_C): - out_2[:, i, j] = out_1_aug[:, idx:idx + kernel_width, j].mv(weights_W[i]) - if need_squeeze: - out_2.squeeze_() - - return out_2.numpy() - - -if __name__ == '__main__': - print('---') -# img = imread_uint('test.bmp', 3) -# img = uint2single(img) -# img_bicubic = imresize_np(img, 1/4) \ No newline at end of file diff --git a/spaces/Anonymous-sub/Rerender/src/video_util.py b/spaces/Anonymous-sub/Rerender/src/video_util.py deleted file mode 100644 index 437d5cf9d06b7ad1f8a3ef68528c6acf8dbb3986..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/src/video_util.py +++ /dev/null @@ -1,100 +0,0 @@ -import os - -import cv2 -import imageio -import numpy as np - - -def video_to_frame(video_path: str, - frame_dir: str, - filename_pattern: str = 'frame%03d.jpg', - log: bool = True, - frame_edit_func=None): - os.makedirs(frame_dir, exist_ok=True) - - vidcap = cv2.VideoCapture(video_path) - success, image = vidcap.read() - - if log: 
- print('img shape: ', image.shape[0:2]) - - count = 0 - while success: - if frame_edit_func is not None: - image = frame_edit_func(image) - - cv2.imwrite(os.path.join(frame_dir, filename_pattern % count), image) - success, image = vidcap.read() - if log: - print('Read a new frame: ', success, count) - count += 1 - - vidcap.release() - - -def frame_to_video(video_path: str, frame_dir: str, fps=30, log=True): - - first_img = True - writer = imageio.get_writer(video_path, fps=fps) - - file_list = sorted(os.listdir(frame_dir)) - for file_name in file_list: - if not (file_name.endswith('jpg') or file_name.endswith('png')): - continue - - fn = os.path.join(frame_dir, file_name) - curImg = imageio.imread(fn) - - if first_img: - H, W = curImg.shape[0:2] - if log: - print('img shape', (H, W)) - first_img = False - - writer.append_data(curImg) - - writer.close() - - -def get_fps(video_path: str): - video = cv2.VideoCapture(video_path) - fps = video.get(cv2.CAP_PROP_FPS) - video.release() - return fps - - -def get_frame_count(video_path: str): - video = cv2.VideoCapture(video_path) - frame_count = video.get(cv2.CAP_PROP_FRAME_COUNT) - video.release() - return frame_count - - -def resize_image(input_image, resolution): - H, W, C = input_image.shape - H = float(H) - W = float(W) - k = min(float(resolution) / min(H, W), float(768) / max(H, W)) - H *= k - W *= k - H = int(np.round(H / 64.0)) * 64 - W = int(np.round(W / 64.0)) * 64 - img = cv2.resize( - input_image, (W, H), - interpolation=cv2.INTER_LANCZOS4 if k > 1 else cv2.INTER_AREA) - return img - - -def prepare_frames(input_path: str, output_dir: str, resolution: int, crop): - l, r, t, b = crop - - def crop_func(frame): - H, W, C = frame.shape - left = np.clip(l, 0, W) - right = np.clip(W - r, left, W) - top = np.clip(t, 0, H) - bottom = np.clip(H - b, top, H) - frame = frame[top:bottom, left:right] - return resize_image(frame, resolution) - - video_to_frame(input_path, output_dir, '%04d.png', False, crop_func) diff --git a/spaces/Antonpy/stable-diffusion-license/app.py b/spaces/Antonpy/stable-diffusion-license/app.py deleted file mode 100644 index f6f318530f0aeb268c9f9389e556065beef2ac9e..0000000000000000000000000000000000000000 --- a/spaces/Antonpy/stable-diffusion-license/app.py +++ /dev/null @@ -1,14 +0,0 @@ -import streamlit as st - -txt_link = "https://huggingface.co/spaces/CompVis/stable-diffusion-license/raw/main/license.txt" -html_link = "https://huggingface.co/spaces/CompVis/stable-diffusion-license/raw/main/license.html" - -st.sidebar.title("Stable Diffusion") -st.sidebar.markdown("## Stable Diffusion RAIL License v1.0") -st.sidebar.markdown(f"This is the home of the Stable Diffusion RAIL License v1.0.\ -If you would like to download the license you can get it as [.txt]({txt_link}), or [.html]({html_link}) file.") - -with open("license.txt", "r") as f: - license_html = f.read() - -st.markdown(license_html, unsafe_allow_html=True) diff --git a/spaces/ArnePan/German-LLM-leaderboard/README.md b/spaces/ArnePan/German-LLM-leaderboard/README.md deleted file mode 100644 index 4e8b340e235d036f00515c293258313530479b6b..0000000000000000000000000000000000000000 --- a/spaces/ArnePan/German-LLM-leaderboard/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: German-LLM-leaderboard -emoji: 🇩🇪 -colorFrom: yellow -colorTo: red -sdk: gradio -sdk_version: 3.46.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff 
--git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/pyparsing/__init__.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/pyparsing/__init__.py deleted file mode 100644 index 7802ff158d83eb88e6dbe78d9cd33ca14341662a..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/pyparsing/__init__.py +++ /dev/null @@ -1,331 +0,0 @@ -# module pyparsing.py -# -# Copyright (c) 2003-2022 Paul T. McGuire -# -# Permission is hereby granted, free of charge, to any person obtaining -# a copy of this software and associated documentation files (the -# "Software"), to deal in the Software without restriction, including -# without limitation the rights to use, copy, modify, merge, publish, -# distribute, sublicense, and/or sell copies of the Software, and to -# permit persons to whom the Software is furnished to do so, subject to -# the following conditions: -# -# The above copyright notice and this permission notice shall be -# included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. -# IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY -# CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, -# TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE -# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. -# - -__doc__ = """ -pyparsing module - Classes and methods to define and execute parsing grammars -============================================================================= - -The pyparsing module is an alternative approach to creating and -executing simple grammars, vs. the traditional lex/yacc approach, or the -use of regular expressions. With pyparsing, you don't need to learn -a new syntax for defining grammars or matching expressions - the parsing -module provides a library of classes that you use to construct the -grammar directly in Python. - -Here is a program to parse "Hello, World!" (or any greeting of the form -``", !"``), built up using :class:`Word`, -:class:`Literal`, and :class:`And` elements -(the :meth:`'+'` operators create :class:`And` expressions, -and the strings are auto-converted to :class:`Literal` expressions):: - - from pyparsing import Word, alphas - - # define grammar of a greeting - greet = Word(alphas) + "," + Word(alphas) + "!" - - hello = "Hello, World!" - print(hello, "->", greet.parse_string(hello)) - -The program outputs the following:: - - Hello, World! -> ['Hello', ',', 'World', '!'] - -The Python representation of the grammar is quite readable, owing to the -self-explanatory class names, and the use of :class:`'+'`, -:class:`'|'`, :class:`'^'` and :class:`'&'` operators. - -The :class:`ParseResults` object returned from -:class:`ParserElement.parseString` can be -accessed as a nested list, a dictionary, or an object with named -attributes. - -The pyparsing module handles some of the problems that are typically -vexing when writing text parsers: - - - extra or missing whitespace (the above program will also handle - "Hello,World!", "Hello , World !", etc.) 
- - quoted strings - - embedded comments - - -Getting Started - ------------------ -Visit the classes :class:`ParserElement` and :class:`ParseResults` to -see the base classes that most other pyparsing -classes inherit from. Use the docstrings for examples of how to: - - - construct literal match expressions from :class:`Literal` and - :class:`CaselessLiteral` classes - - construct character word-group expressions using the :class:`Word` - class - - see how to create repetitive expressions using :class:`ZeroOrMore` - and :class:`OneOrMore` classes - - use :class:`'+'`, :class:`'|'`, :class:`'^'`, - and :class:`'&'` operators to combine simple expressions into - more complex ones - - associate names with your parsed results using - :class:`ParserElement.setResultsName` - - access the parsed data, which is returned as a :class:`ParseResults` - object - - find some helpful expression short-cuts like :class:`delimitedList` - and :class:`oneOf` - - find more useful common expressions in the :class:`pyparsing_common` - namespace class -""" -from typing import NamedTuple - - -class version_info(NamedTuple): - major: int - minor: int - micro: int - releaselevel: str - serial: int - - @property - def __version__(self): - return ( - "{}.{}.{}".format(self.major, self.minor, self.micro) - + ( - "{}{}{}".format( - "r" if self.releaselevel[0] == "c" else "", - self.releaselevel[0], - self.serial, - ), - "", - )[self.releaselevel == "final"] - ) - - def __str__(self): - return "{} {} / {}".format(__name__, self.__version__, __version_time__) - - def __repr__(self): - return "{}.{}({})".format( - __name__, - type(self).__name__, - ", ".join("{}={!r}".format(*nv) for nv in zip(self._fields, self)), - ) - - -__version_info__ = version_info(3, 0, 9, "final", 0) -__version_time__ = "05 May 2022 07:02 UTC" -__version__ = __version_info__.__version__ -__versionTime__ = __version_time__ -__author__ = "Paul McGuire " - -from .util import * -from .exceptions import * -from .actions import * -from .core import __diag__, __compat__ -from .results import * -from .core import * -from .core import _builtin_exprs as core_builtin_exprs -from .helpers import * -from .helpers import _builtin_exprs as helper_builtin_exprs - -from .unicode import unicode_set, UnicodeRangeList, pyparsing_unicode as unicode -from .testing import pyparsing_test as testing -from .common import ( - pyparsing_common as common, - _builtin_exprs as common_builtin_exprs, -) - -# define backward compat synonyms -if "pyparsing_unicode" not in globals(): - pyparsing_unicode = unicode -if "pyparsing_common" not in globals(): - pyparsing_common = common -if "pyparsing_test" not in globals(): - pyparsing_test = testing - -core_builtin_exprs += common_builtin_exprs + helper_builtin_exprs - - -__all__ = [ - "__version__", - "__version_time__", - "__author__", - "__compat__", - "__diag__", - "And", - "AtLineStart", - "AtStringStart", - "CaselessKeyword", - "CaselessLiteral", - "CharsNotIn", - "Combine", - "Dict", - "Each", - "Empty", - "FollowedBy", - "Forward", - "GoToColumn", - "Group", - "IndentedBlock", - "Keyword", - "LineEnd", - "LineStart", - "Literal", - "Located", - "PrecededBy", - "MatchFirst", - "NoMatch", - "NotAny", - "OneOrMore", - "OnlyOnce", - "OpAssoc", - "Opt", - "Optional", - "Or", - "ParseBaseException", - "ParseElementEnhance", - "ParseException", - "ParseExpression", - "ParseFatalException", - "ParseResults", - "ParseSyntaxException", - "ParserElement", - "PositionToken", - "QuotedString", - "RecursiveGrammarException", - "Regex", - 
"SkipTo", - "StringEnd", - "StringStart", - "Suppress", - "Token", - "TokenConverter", - "White", - "Word", - "WordEnd", - "WordStart", - "ZeroOrMore", - "Char", - "alphanums", - "alphas", - "alphas8bit", - "any_close_tag", - "any_open_tag", - "c_style_comment", - "col", - "common_html_entity", - "counted_array", - "cpp_style_comment", - "dbl_quoted_string", - "dbl_slash_comment", - "delimited_list", - "dict_of", - "empty", - "hexnums", - "html_comment", - "identchars", - "identbodychars", - "java_style_comment", - "line", - "line_end", - "line_start", - "lineno", - "make_html_tags", - "make_xml_tags", - "match_only_at_col", - "match_previous_expr", - "match_previous_literal", - "nested_expr", - "null_debug_action", - "nums", - "one_of", - "printables", - "punc8bit", - "python_style_comment", - "quoted_string", - "remove_quotes", - "replace_with", - "replace_html_entity", - "rest_of_line", - "sgl_quoted_string", - "srange", - "string_end", - "string_start", - "trace_parse_action", - "unicode_string", - "with_attribute", - "indentedBlock", - "original_text_for", - "ungroup", - "infix_notation", - "locatedExpr", - "with_class", - "CloseMatch", - "token_map", - "pyparsing_common", - "pyparsing_unicode", - "unicode_set", - "condition_as_parse_action", - "pyparsing_test", - # pre-PEP8 compatibility names - "__versionTime__", - "anyCloseTag", - "anyOpenTag", - "cStyleComment", - "commonHTMLEntity", - "countedArray", - "cppStyleComment", - "dblQuotedString", - "dblSlashComment", - "delimitedList", - "dictOf", - "htmlComment", - "javaStyleComment", - "lineEnd", - "lineStart", - "makeHTMLTags", - "makeXMLTags", - "matchOnlyAtCol", - "matchPreviousExpr", - "matchPreviousLiteral", - "nestedExpr", - "nullDebugAction", - "oneOf", - "opAssoc", - "pythonStyleComment", - "quotedString", - "removeQuotes", - "replaceHTMLEntity", - "replaceWith", - "restOfLine", - "sglQuotedString", - "stringEnd", - "stringStart", - "traceParseAction", - "unicodeString", - "withAttribute", - "indentedBlock", - "originalTextFor", - "infixNotation", - "locatedExpr", - "withClass", - "tokenMap", - "conditionAsParseAction", - "autoname_elements", -] diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/export/flatten.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/export/flatten.py deleted file mode 100644 index f5ba4297567d650f147eebeed361e9d62fab899d..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/export/flatten.py +++ /dev/null @@ -1,330 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import collections -from dataclasses import dataclass -from typing import Callable, List, Optional, Tuple -import torch -from torch import nn - -from detectron2.structures import Boxes, Instances, ROIMasks -from detectron2.utils.registry import _convert_target_to_string, locate - -from .torchscript_patch import patch_builtin_len - - -@dataclass -class Schema: - """ - A Schema defines how to flatten a possibly hierarchical object into tuple of - primitive objects, so it can be used as inputs/outputs of PyTorch's tracing. - - PyTorch does not support tracing a function that produces rich output - structures (e.g. dict, Instances, Boxes). To trace such a function, we - flatten the rich object into tuple of tensors, and return this tuple of tensors - instead. 
Meanwhile, we also need to know how to "rebuild" the original object - from the flattened results, so we can evaluate the flattened results. - A Schema defines how to flatten an object, and while flattening it, it records - necessary schemas so that the object can be rebuilt using the flattened outputs. - - The flattened object and the schema object is returned by ``.flatten`` classmethod. - Then the original object can be rebuilt with the ``__call__`` method of schema. - - A Schema is a dataclass that can be serialized easily. - """ - - # inspired by FetchMapper in tensorflow/python/client/session.py - - @classmethod - def flatten(cls, obj): - raise NotImplementedError - - def __call__(self, values): - raise NotImplementedError - - @staticmethod - def _concat(values): - ret = () - sizes = [] - for v in values: - assert isinstance(v, tuple), "Flattened results must be a tuple" - ret = ret + v - sizes.append(len(v)) - return ret, sizes - - @staticmethod - def _split(values, sizes): - if len(sizes): - expected_len = sum(sizes) - assert ( - len(values) == expected_len - ), f"Values has length {len(values)} but expect length {expected_len}." - ret = [] - for k in range(len(sizes)): - begin, end = sum(sizes[:k]), sum(sizes[: k + 1]) - ret.append(values[begin:end]) - return ret - - -@dataclass -class ListSchema(Schema): - schemas: List[Schema] # the schemas that define how to flatten each element in the list - sizes: List[int] # the flattened length of each element - - def __call__(self, values): - values = self._split(values, self.sizes) - if len(values) != len(self.schemas): - raise ValueError( - f"Values has length {len(values)} but schemas " f"has length {len(self.schemas)}!" - ) - values = [m(v) for m, v in zip(self.schemas, values)] - return list(values) - - @classmethod - def flatten(cls, obj): - res = [flatten_to_tuple(k) for k in obj] - values, sizes = cls._concat([k[0] for k in res]) - return values, cls([k[1] for k in res], sizes) - - -@dataclass -class TupleSchema(ListSchema): - def __call__(self, values): - return tuple(super().__call__(values)) - - -@dataclass -class IdentitySchema(Schema): - def __call__(self, values): - return values[0] - - @classmethod - def flatten(cls, obj): - return (obj,), cls() - - -@dataclass -class DictSchema(ListSchema): - keys: List[str] - - def __call__(self, values): - values = super().__call__(values) - return dict(zip(self.keys, values)) - - @classmethod - def flatten(cls, obj): - for k in obj.keys(): - if not isinstance(k, str): - raise KeyError("Only support flattening dictionaries if keys are str.") - keys = sorted(obj.keys()) - values = [obj[k] for k in keys] - ret, schema = ListSchema.flatten(values) - return ret, cls(schema.schemas, schema.sizes, keys) - - -@dataclass -class InstancesSchema(DictSchema): - def __call__(self, values): - image_size, fields = values[-1], values[:-1] - fields = super().__call__(fields) - return Instances(image_size, **fields) - - @classmethod - def flatten(cls, obj): - ret, schema = super().flatten(obj.get_fields()) - size = obj.image_size - if not isinstance(size, torch.Tensor): - size = torch.tensor(size) - return ret + (size,), schema - - -@dataclass -class TensorWrapSchema(Schema): - """ - For classes that are simple wrapper of tensors, e.g. 
- Boxes, RotatedBoxes, BitMasks - """ - - class_name: str - - def __call__(self, values): - return locate(self.class_name)(values[0]) - - @classmethod - def flatten(cls, obj): - return (obj.tensor,), cls(_convert_target_to_string(type(obj))) - - -# if more custom structures needed in the future, can allow -# passing in extra schemas for custom types -def flatten_to_tuple(obj): - """ - Flatten an object so it can be used for PyTorch tracing. - Also returns how to rebuild the original object from the flattened outputs. - - Returns: - res (tuple): the flattened results that can be used as tracing outputs - schema: an object with a ``__call__`` method such that ``schema(res) == obj``. - It is a pure dataclass that can be serialized. - """ - schemas = [ - ((str, bytes), IdentitySchema), - (list, ListSchema), - (tuple, TupleSchema), - (collections.abc.Mapping, DictSchema), - (Instances, InstancesSchema), - ((Boxes, ROIMasks), TensorWrapSchema), - ] - for klass, schema in schemas: - if isinstance(obj, klass): - F = schema - break - else: - F = IdentitySchema - - return F.flatten(obj) - - -class TracingAdapter(nn.Module): - """ - A model may take rich input/output format (e.g. dict or custom classes), - but `torch.jit.trace` requires tuple of tensors as input/output. - This adapter flattens input/output format of a model so it becomes traceable. - - It also records the necessary schema to rebuild model's inputs/outputs from flattened - inputs/outputs. - - Example: - :: - outputs = model(inputs) # inputs/outputs may be rich structure - adapter = TracingAdapter(model, inputs) - - # can now trace the model, with adapter.flattened_inputs, or another - # tuple of tensors with the same length and meaning - traced = torch.jit.trace(adapter, adapter.flattened_inputs) - - # traced model can only produce flattened outputs (tuple of tensors) - flattened_outputs = traced(*adapter.flattened_inputs) - # adapter knows the schema to convert it back (new_outputs == outputs) - new_outputs = adapter.outputs_schema(flattened_outputs) - """ - - flattened_inputs: Tuple[torch.Tensor] = None - """ - Flattened version of inputs given to this class's constructor. - """ - - inputs_schema: Schema = None - """ - Schema of the inputs given to this class's constructor. - """ - - outputs_schema: Schema = None - """ - Schema of the output produced by calling the given model with inputs. - """ - - def __init__( - self, - model: nn.Module, - inputs, - inference_func: Optional[Callable] = None, - allow_non_tensor: bool = False, - ): - """ - Args: - model: an nn.Module - inputs: An input argument or a tuple of input arguments used to call model. - After flattening, it has to only consist of tensors. - inference_func: a callable that takes (model, *inputs), calls the - model with inputs, and return outputs. By default it - is ``lambda model, *inputs: model(*inputs)``. Can be override - if you need to call the model differently. - allow_non_tensor: allow inputs/outputs to contain non-tensor objects. - This option will filter out non-tensor objects to make the - model traceable, but ``inputs_schema``/``outputs_schema`` cannot be - used anymore because inputs/outputs cannot be rebuilt from pure tensors. - This is useful when you're only interested in the single trace of - execution (e.g. for flop count), but not interested in - generalizing the traced graph to new inputs. 
- """ - super().__init__() - if isinstance(model, (nn.parallel.distributed.DistributedDataParallel, nn.DataParallel)): - model = model.module - self.model = model - if not isinstance(inputs, tuple): - inputs = (inputs,) - self.inputs = inputs - self.allow_non_tensor = allow_non_tensor - - if inference_func is None: - inference_func = lambda model, *inputs: model(*inputs) # noqa - self.inference_func = inference_func - - self.flattened_inputs, self.inputs_schema = flatten_to_tuple(inputs) - - if all(isinstance(x, torch.Tensor) for x in self.flattened_inputs): - return - if self.allow_non_tensor: - self.flattened_inputs = tuple( - [x for x in self.flattened_inputs if isinstance(x, torch.Tensor)] - ) - self.inputs_schema = None - else: - for input in self.flattened_inputs: - if not isinstance(input, torch.Tensor): - raise ValueError( - "Inputs for tracing must only contain tensors. " - f"Got a {type(input)} instead." - ) - - def forward(self, *args: torch.Tensor): - with torch.no_grad(), patch_builtin_len(): - if self.inputs_schema is not None: - inputs_orig_format = self.inputs_schema(args) - else: - if len(args) != len(self.flattened_inputs) or any( - x is not y for x, y in zip(args, self.flattened_inputs) - ): - raise ValueError( - "TracingAdapter does not contain valid inputs_schema." - " So it cannot generalize to other inputs and must be" - " traced with `.flattened_inputs`." - ) - inputs_orig_format = self.inputs - - outputs = self.inference_func(self.model, *inputs_orig_format) - flattened_outputs, schema = flatten_to_tuple(outputs) - - flattened_output_tensors = tuple( - [x for x in flattened_outputs if isinstance(x, torch.Tensor)] - ) - if len(flattened_output_tensors) < len(flattened_outputs): - if self.allow_non_tensor: - flattened_outputs = flattened_output_tensors - self.outputs_schema = None - else: - raise ValueError( - "Model cannot be traced because some model outputs " - "cannot flatten to tensors." - ) - else: # schema is valid - if self.outputs_schema is None: - self.outputs_schema = schema - else: - assert self.outputs_schema == schema, ( - "Model should always return outputs with the same " - "structure so it can be traced!" - ) - return flattened_outputs - - def _create_wrapper(self, traced_model): - """ - Return a function that has an input/output interface the same as the - original model, but it calls the given traced model under the hood. 
- """ - - def forward(*args): - flattened_inputs, _ = flatten_to_tuple(args) - flattened_outputs = traced_model(*flattened_inputs) - return self.outputs_schema(flattened_outputs) - - return forward diff --git a/spaces/Ayaka2022/anime-aesthetic-predict/app.py b/spaces/Ayaka2022/anime-aesthetic-predict/app.py deleted file mode 100644 index 6f0cd457993cc220641a974f27509b94fcace949..0000000000000000000000000000000000000000 --- a/spaces/Ayaka2022/anime-aesthetic-predict/app.py +++ /dev/null @@ -1,28 +0,0 @@ -import cv2 -import numpy as np -import gradio as gr -import onnxruntime as rt -from huggingface_hub import hf_hub_download - - -def predict(img): - img = img.astype(np.float32) / 255 - s = 768 - h, w = img.shape[:-1] - h, w = (s, int(s * w / h)) if h > w else (int(s * h / w), s) - ph, pw = s - h, s - w - img_input = np.zeros([s, s, 3], dtype=np.float32) - img_input[ph // 2:ph // 2 + h, pw // 2:pw // 2 + w] = cv2.resize(img, (w, h)) - img_input = np.transpose(img_input, (2, 0, 1)) - img_input = img_input[np.newaxis, :] - pred = model.run(None, {"img": img_input})[0].item() - return pred - - -if __name__ == "__main__": - model_path = hf_hub_download(repo_id="skytnt/anime-aesthetic", filename="model.onnx") - model = rt.InferenceSession(model_path, providers=['CUDAExecutionProvider', 'CPUExecutionProvider']) - examples = [[f"examples/{x:02d}.jpg"] for x in range(0, 2)] - app = gr.Interface(predict, gr.Image(label="input image"), gr.Number(label="score"),title="Anime Aesthetic Predict", - allow_flagging="never", examples=examples, cache_examples=False) - app.launch() diff --git a/spaces/Bart92/RVC_HF/lib/uvr5_pack/lib_v5/layers_537238KB.py b/spaces/Bart92/RVC_HF/lib/uvr5_pack/lib_v5/layers_537238KB.py deleted file mode 100644 index a38b7bb3ae3136b07eadfc2db445fef4c2de186b..0000000000000000000000000000000000000000 --- a/spaces/Bart92/RVC_HF/lib/uvr5_pack/lib_v5/layers_537238KB.py +++ /dev/null @@ -1,126 +0,0 @@ -import torch -from torch import nn -import torch.nn.functional as F - -from . 
import spec_utils - - -class Conv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(Conv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nout, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - bias=False, - ), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class SeperableConv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(SeperableConv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nin, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - groups=nin, - bias=False, - ), - nn.Conv2d(nin, nout, kernel_size=1, bias=False), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class Encoder(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU): - super(Encoder, self).__init__() - self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - self.conv2 = Conv2DBNActiv(nout, nout, ksize, stride, pad, activ=activ) - - def __call__(self, x): - skip = self.conv1(x) - h = self.conv2(skip) - - return h, skip - - -class Decoder(nn.Module): - def __init__( - self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False - ): - super(Decoder, self).__init__() - self.conv = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - self.dropout = nn.Dropout2d(0.1) if dropout else None - - def __call__(self, x, skip=None): - x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True) - if skip is not None: - skip = spec_utils.crop_center(skip, x) - x = torch.cat([x, skip], dim=1) - h = self.conv(x) - - if self.dropout is not None: - h = self.dropout(h) - - return h - - -class ASPPModule(nn.Module): - def __init__(self, nin, nout, dilations=(4, 8, 16, 32, 64), activ=nn.ReLU): - super(ASPPModule, self).__init__() - self.conv1 = nn.Sequential( - nn.AdaptiveAvgPool2d((1, None)), - Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ), - ) - self.conv2 = Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ) - self.conv3 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[0], dilations[0], activ=activ - ) - self.conv4 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[1], dilations[1], activ=activ - ) - self.conv5 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.conv6 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.conv7 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.bottleneck = nn.Sequential( - Conv2DBNActiv(nin * 7, nout, 1, 1, 0, activ=activ), nn.Dropout2d(0.1) - ) - - def forward(self, x): - _, _, h, w = x.size() - feat1 = F.interpolate( - self.conv1(x), size=(h, w), mode="bilinear", align_corners=True - ) - feat2 = self.conv2(x) - feat3 = self.conv3(x) - feat4 = self.conv4(x) - feat5 = self.conv5(x) - feat6 = self.conv6(x) - feat7 = self.conv7(x) - out = torch.cat((feat1, feat2, feat3, feat4, feat5, feat6, feat7), dim=1) - bottle = self.bottleneck(out) - return bottle diff --git a/spaces/Benson/text-generation/Examples/Cheto Hack 8bp Apk Descargar 5.4 5.md b/spaces/Benson/text-generation/Examples/Cheto Hack 8bp Apk Descargar 5.4 5.md deleted file mode 100644 index 5d0259864eb8fddee9bf7d2bb3ea0a81f121f4d2..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Cheto 
Hack 8bp Apk Descargar 5.4 5.md +++ /dev/null @@ -1,71 +0,0 @@ - -

Cheto Hack 8BP APK Download 5.4 5: Everything You Need to Know

-

If you are a fan of 8 Ball Pool, you may have heard of Cheto Hack 8BP, a tool that can help you improve your game and win more matches. But what exactly is Cheto Hack 8BP, and how can you download and use it? In this article, we answer these questions and more, so you can decide whether or not Cheto Hack 8BP is worth trying.

-

cheto hack 8bp apk download 5.4 5


DOWNLOAD ··· https://bltlly.com/2v6LbK



-

What is Cheto Hack 8BP?

-

Cheto Hack 8BP is a hacking tool for 8 Ball Pool that uses AI image recognition to extend the aiming guideline, support cushion shots, and draw the ball trajectory and shot state. It can also predict the outcome of the game and play automatically for you. Unlike some other hacking tools, Cheto Hack 8BP does not require root access or modifications to the game files. It runs on Gameloop PC, an emulator that lets you play Android games on your computer.

-

Features of Cheto Hack 8BP

-

Some of the features that Cheto Hack 8BP offers are:

-
    -
  • Auto-extend guideline: you can see the full length of the aiming guideline, even beyond the table, to help you aim better.
  • -
  • Support for cushion shots: you can see the guideline for cushion shots, which are shots that bounce off the rails before hitting the target ball.
  • -
  • Draw the ball trajectory: you can see the path of the ball after you hit it, including any spin or curve.
  • -
  • Draw the shot state: you can see the power, angle, and spin of your shot, as well as the position and direction of the cue ball.
  • -
  • Prediction: you can see the probability of winning or losing the game based on the current situation.
  • -
  • Auto-play: you can let the hacking tool play for you automatically, using the best possible moves.
  • -
-

How to Download and Install Cheto Hack 8BP APK

-

To download and install Cheto Hack 8BP APK, follow these steps:

-
    -
  1. Download Gameloop PC from its official website and install it on your computer.
  2. - -
  3. Open Gameloop PC and launch 8 Ball Pool from its game center.
  4. -
  5. Open Cheto Hack 8BP APK and enter the password (autoplay or cheto).
  6. -
  7. Select the features you want to use and click Start.
  8. -
  9. Enjoy playing 8 Ball Pool with Cheto Hack 8BP!
  10. -
-

Why Use Cheto Hack 8BP?

-

You may be wondering why you should use Cheto Hack 8BP instead of playing normally. Here are some reasons why you might want to try it:

-

Benefits of Cheto Hack 8BP

-
    -
  • You can improve your skills and learn new tricks by watching how the hacking tool plays.
  • -
  • You can win more matches and earn more coins and rewards by using the tool's features.
  • -
  • You can have more fun and challenge yourself by playing against stronger opponents or using different modes.
  • -
  • You can save time and effort by letting the tool play for you automatically.
  • -
-

Risks of Cheto Hack 8BP

-
  • You can get banned or reported by other players or the game's developers for using the hacking tool.
  • -
  • You can lose the fun and satisfaction of playing the game fairly and honestly.
  • -
  • You can damage your device or compromise your security by downloading a fake or malicious APK file.
  • -
-

Alternatives to Cheto Hack 8BP

-

If you are not convinced by Cheto Hack 8BP, or if you want to try something different, there are some alternatives you can use to hack 8 Ball Pool. Here are two of them:

-

-

Aim Pool - Guideline 8BP

-

Aim Pool - Guideline 8BP is a hacking tool that extends the guideline and shows the ball trajectory for 8 Ball Pool. It works on both Android and iOS devices, and it does not require root or jailbreak. It also has a simple, easy-to-use interface and supports several languages. You can download Aim Pool - Guideline 8BP from its official website or from the Google Play Store.

-

Game Guardian

- -

Conclusion

-

In this article, we have covered everything you need to know about Cheto Hack 8BP APK Download 5.4 5, a hacking tool for 8 Ball Pool that uses AI image recognition to improve your game. We have explained what it is, how it works, how to download and install it, why you might use it, and what some of the alternatives are. We hope you found this article useful and informative.

-

Article Summary

-
    -
  • Cheto Hack 8BP is a hacking tool for 8 Ball Pool that uses AI image recognition to extend the guideline, support cushion shots, draw the ball trajectory and shot state, predict the outcome, and play automatically.
  • -
  • It runs on Gameloop PC, an emulator that lets you play Android games on your computer.
  • -
  • It has many features and benefits, but also some risks and drawbacks.
  • -
  • There are some alternatives to Cheto Hack 8BP, such as Aim Pool - Guideline 8BP and Game Guardian.
  • -
-

Frequently Asked Questions

-
    -
  1. Is Cheto Hack 8BP free?
  2. -

    No, Cheto Hack 8BP is not free. You need to pay a subscription fee to use it. The fee varies depending on the duration and the features you choose.

    -
  3. Is Cheto Hack 8BP safe?
  4. -

    Cheto Hack 8BP is safe if you download it from its official website or from a trusted source. However, there is always a risk of being banned or reported by other players or the game's developers for using a hacking tool.

    -
  5. Is Cheto Hack 8BP legal?
  6. -

    Cheto Hack 8BP is not legal in some countries or regions where hacking is prohibited or regulated by law. You should check your local laws before using it.

    -
  7. Does Cheto Hack 8BP work on mobile devices?
  8. -

    No, Cheto Hack 8BP does not work on mobile devices. It only works on Gameloop PC, an emulator that lets you play Android games on your computer.

    -
  9. Can I use Cheto Hack 8BP with other hacking tools?
  10. - -

-
-
\ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/pyparsing/actions.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/pyparsing/actions.py deleted file mode 100644 index f72c66e743146c7a5b70a5440e9ab5459f10245b..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/pyparsing/actions.py +++ /dev/null @@ -1,207 +0,0 @@ -# actions.py - -from .exceptions import ParseException -from .util import col - - -class OnlyOnce: - """ - Wrapper for parse actions, to ensure they are only called once. - """ - - def __init__(self, method_call): - from .core import _trim_arity - - self.callable = _trim_arity(method_call) - self.called = False - - def __call__(self, s, l, t): - if not self.called: - results = self.callable(s, l, t) - self.called = True - return results - raise ParseException(s, l, "OnlyOnce obj called multiple times w/out reset") - - def reset(self): - """ - Allow the associated parse action to be called once more. - """ - - self.called = False - - -def match_only_at_col(n): - """ - Helper method for defining parse actions that require matching at - a specific column in the input text. - """ - - def verify_col(strg, locn, toks): - if col(locn, strg) != n: - raise ParseException(strg, locn, "matched token not at column {}".format(n)) - - return verify_col - - -def replace_with(repl_str): - """ - Helper method for common parse actions that simply return - a literal value. Especially useful when used with - :class:`transform_string` (). - - Example:: - - num = Word(nums).set_parse_action(lambda toks: int(toks[0])) - na = one_of("N/A NA").set_parse_action(replace_with(math.nan)) - term = na | num - - term[1, ...].parse_string("324 234 N/A 234") # -> [324, 234, nan, 234] - """ - return lambda s, l, t: [repl_str] - - -def remove_quotes(s, l, t): - """ - Helper parse action for removing quotation marks from parsed - quoted strings. - - Example:: - - # by default, quotation marks are included in parsed results - quoted_string.parse_string("'Now is the Winter of our Discontent'") # -> ["'Now is the Winter of our Discontent'"] - - # use remove_quotes to strip quotation marks from parsed results - quoted_string.set_parse_action(remove_quotes) - quoted_string.parse_string("'Now is the Winter of our Discontent'") # -> ["Now is the Winter of our Discontent"] - """ - return t[0][1:-1] - - -def with_attribute(*args, **attr_dict): - """ - Helper to create a validating parse action to be used with start - tags created with :class:`make_xml_tags` or - :class:`make_html_tags`. Use ``with_attribute`` to qualify - a starting tag with a required attribute value, to avoid false - matches on common tags such as ``

Have you ever wondered what it would be like to travel around the world and see different places? Well, now you can with Guess the Place, a geography game that lets you explore the world from your computer or phone.

-

Guess the Place is a game that drops you somewhere in the world in a street view panorama and challenges you to guess your location on the world map. You can choose from different maps and modes, such as worldwide, USA, Europe, monuments, streaks, challenges, and more.

-

guess the place


Download Zip ⚹⚹⚹ https://jinyurl.com/2uNN8E



-

Guess the Place is not only fun but also educational. It helps you learn about different cultures and places, improve your memory and spatial awareness, and challenge yourself with different levels of difficulty.

-

In this article, we'll show you how to play Guess the Place, give you some tips and tricks for guessing better, and tell you about some of the benefits of playing this game.

How to Play Guess the Place

Choose a Location or Difficulty

To start playing Guess the Place, you need to choose a map from the available options. You can select a location-based map, such as worldwide, USA, Europe, Japan, etc., or a theme-based map, such as monuments, landmarks, stadiums, etc.

-

You can also choose a difficulty level for each map, ranging from easy to hard. The difficulty level affects how many clues you get in each panorama and how precise your guess needs to be.

Explore the Street View Panorama

Once you choose a map and a difficulty level, you'll be dropped somewhere in that map in a street view panorama. You can use your mouse or keyboard to look around and find clues that can help you identify your location.

-

guess the place game online
-guess the place by street view
-guess the place quiz with answers
-guess the place from the picture
-guess the place in the world
-guess the place name from emoji
-guess the place based on clues
-guess the place of origin
-guess the place by sound
-guess the place from description
-guess the place trivia
-guess the place app
-guess the place challenge
-guess the place from landmarks
-guess the place from coordinates
-guess the place from google maps
-guess the place from flags
-guess the place from food
-guess the place from culture
-guess the place from celebrities
-guess the place from history
-guess the place from language
-guess the place from currency
-guess the place from animals
-guess the place from sports
-guess the place from music
-guess the place from movies
-guess the place from books
-guess the place from art
-guess the place from architecture
-guess the place from festivals
-guess the place from clothing
-guess the place from weather
-guess the place from population
-guess the place from religion
-guess the place from geography
-guess the place from capital city
-guess the place from airport code
-guess the place from license plate
-guess the place from phone number
-guess the place from zip code
-guess the place from area code
-guess the place from time zone
-guess the place from domain name
-guess the place from slogan
-guess the place from motto
-guess the place from anthem
-guess the place from flower
-guess the place from bird

-

Some of the clues you can look for are signs, flags, landmarks, buildings, cars, people, vegetation, etc. You can also zoom in or out to see more details or get a wider view.

Make Your Guess on the World Map

When you think you have enough clues, you can make your guess on the world map. You can drag and drop the marker on the map to the location where you think you are. You can zoom in or out on the map to see more details or get a wider view.

-

Once you place the marker, you can confirm your guess by clicking on the guess button. You can also skip the panorama if you have no idea where you are or if you want to try a different one.

See Your Score and Compare with Others

After you confirm your guess, you'll see your score and how far you were from the actual location. You'll also see a leaderboard with other players' scores and distances. You can compare your performance with others and see who's the best at guessing places.

-

You'll also see a summary of your points and streaks for each map and mode. You can earn more points by guessing closer to the actual location, by guessing faster, and by playing harder maps and modes. You can also earn streaks by guessing correctly multiple times in a row.

Tips and Tricks for Guessing Better

Look for Signs, Flags, and Landmarks

One of the easiest ways to guess better is to look for signs, flags, and landmarks that can give you clues about the country, region, city, or place where you are. For example, if you see a sign in French, you can narrow down your location to France or a French-speaking country. If you see a flag with stars and stripes, you can narrow down your location to the USA or a country with a similar flag. If you see a landmark like the Eiffel Tower, you can narrow down your location to Paris.

Use Google Search or Wikipedia

Another way to guess better is to use Google Search or Wikipedia to find more information about a place. For example, if you see a sign with a name of a place that you don't recognize, you can search it on Google or Wikipedia and see what it is and where it is located. You can also use Google Translate to translate signs or words that are in a different language.

Practice with Different Maps and Modes

A final way to guess better is to practice with different maps and modes that can challenge your skills and knowledge. For example, you can play with maps that cover different regions or themes, such as Asia, Africa, islands, capitals, etc. You can also play with modes that have different rules or goals, such as streaks, challenges, time limit, etc.

Benefits of Playing Guess the Place

Learn About Different Cultures and Places

One of the main benefits of playing Guess the Place is that it helps you learn about different cultures and places around the world. You can discover new things about the history, geography, culture, language, cuisine, architecture, nature, etc., of different countries and regions. You can also see how people live in different parts of the world and what they do for fun.

Improve Your Memory and Spatial Awareness

Another benefit of playing Guess the Place is that it helps you improve your memory and spatial awareness. You can remember facts and locations better by associating them with visual clues and images. You can also improve your sense of direction and orientation by navigating through different maps and panoramas.

Have Fun and Challenge Yourself

A final benefit of playing Guess the Place is that it helps you have fun and challenge yourself. You can enjoy the game as a hobby or as a way to relax and unwind. You can also challenge yourself by playing harder maps and modes, by competing with other players, or by setting your own goals and records.

Conclusion


Guess the Place is a fun and educational geography game that lets you explore the world from your computer or phone. You can choose from different maps and modes, such as worldwide, USA, Europe, monuments, streaks, challenges, and more. You can also look for clues in the street view panoramas, make your guesses on the world map, see your score and compare with others, and learn more about different cultures and places.

-

Playing Guess the Place can help you improve your memory and spatial awareness, as well as have fun and challenge yourself. It's a great way to learn geography and discover new things about the world.

-

If you're interested in playing Guess the Place, you can find it online at https://www.geoguessr.com/ or download it from the App Store or Google Play. It's free to play, but you can also upgrade to a premium membership for more features and benefits.

-

So what are you waiting for? Start playing Guess the Place today and see how well you know the world!

FAQs

Here are some of the frequently asked questions about Guess the Place:

-
    -
  • What is Guess the Place?
    Guess the Place is a geography game that drops you somewhere in the world in a street view panorama and challenges you to guess your location on the world map.
  • -
  • How do I play Guess the Place?
    To play Guess the Place, you need to choose a map and a difficulty level, explore the street view panorama, make your guess on the world map, and see your score and compare with others.
  • -
  • Where can I find Guess the Place?
    You can find Guess the Place online at https://www.geoguessr.com/ or download it from the App Store or Google Play.
  • -
  • How much does Guess the Place cost?
    Guess the Place is free to play, but you can also upgrade to a premium membership for $2.99 per month or $23.99 per year. The premium membership gives you access to more maps and modes, unlimited games, no ads, and more.
  • -
  • What are the benefits of playing Guess the Place?
    Playing Guess the Place can help you learn about different cultures and places, improve your memory and spatial awareness, and have fun and challenge yourself.
  • -
<TD>`` or ``<DIV>
``. - - Call ``with_attribute`` with a series of attribute names and - values. Specify the list of filter attributes names and values as: - - - keyword arguments, as in ``(align="right")``, or - - as an explicit dict with ``**`` operator, when an attribute - name is also a Python reserved word, as in ``**{"class":"Customer", "align":"right"}`` - - a list of name-value tuples, as in ``(("ns1:class", "Customer"), ("ns2:align", "right"))`` - - For attribute names with a namespace prefix, you must use the second - form. Attribute names are matched insensitive to upper/lower case. - - If just testing for ``class`` (with or without a namespace), use - :class:`with_class`. - - To verify that the attribute exists, but without specifying a value, - pass ``with_attribute.ANY_VALUE`` as the value. - - Example:: - - html = ''' -
-            <div>
-            Some text
-            <div type="grid">1 4 0 1 0</div>
-            <div type="graph">1,3 2,3 1,1</div>
-            <div>this has no type</div>
-            </div>
- - ''' - div,div_end = make_html_tags("div") - - # only match div tag having a type attribute with value "grid" - div_grid = div().set_parse_action(with_attribute(type="grid")) - grid_expr = div_grid + SkipTo(div | div_end)("body") - for grid_header in grid_expr.search_string(html): - print(grid_header.body) - - # construct a match with any div tag having a type attribute, regardless of the value - div_any_type = div().set_parse_action(with_attribute(type=with_attribute.ANY_VALUE)) - div_expr = div_any_type + SkipTo(div | div_end)("body") - for div_header in div_expr.search_string(html): - print(div_header.body) - - prints:: - - 1 4 0 1 0 - - 1 4 0 1 0 - 1,3 2,3 1,1 - """ - if args: - attrs = args[:] - else: - attrs = attr_dict.items() - attrs = [(k, v) for k, v in attrs] - - def pa(s, l, tokens): - for attrName, attrValue in attrs: - if attrName not in tokens: - raise ParseException(s, l, "no matching attribute " + attrName) - if attrValue != with_attribute.ANY_VALUE and tokens[attrName] != attrValue: - raise ParseException( - s, - l, - "attribute {!r} has value {!r}, must be {!r}".format( - attrName, tokens[attrName], attrValue - ), - ) - - return pa - - -with_attribute.ANY_VALUE = object() - - -def with_class(classname, namespace=""): - """ - Simplified version of :class:`with_attribute` when - matching on a div class - made difficult because ``class`` is - a reserved word in Python. - - Example:: - - html = ''' -
-            <div>
-            Some text
-            <div class="grid">1 4 0 1 0</div>
-            <div class="graph">1,3 2,3 1,1</div>
-            <div>this &lt;div&gt; has no class</div>
-            </div>
- - ''' - div,div_end = make_html_tags("div") - div_grid = div().set_parse_action(with_class("grid")) - - grid_expr = div_grid + SkipTo(div | div_end)("body") - for grid_header in grid_expr.search_string(html): - print(grid_header.body) - - div_any_type = div().set_parse_action(with_class(withAttribute.ANY_VALUE)) - div_expr = div_any_type + SkipTo(div | div_end)("body") - for div_header in div_expr.search_string(html): - print(div_header.body) - - prints:: - - 1 4 0 1 0 - - 1 4 0 1 0 - 1,3 2,3 1,1 - """ - classattr = "{}:class".format(namespace) if namespace else "class" - return with_attribute(**{classattr: classname}) - - -# pre-PEP8 compatibility symbols -replaceWith = replace_with -removeQuotes = remove_quotes -withAttribute = with_attribute -withClass = with_class -matchOnlyAtCol = match_only_at_col diff --git a/spaces/BilalSardar/Black-N-White-To-Color/README.md b/spaces/BilalSardar/Black-N-White-To-Color/README.md deleted file mode 100644 index b7bd0b61bf25b3eb09bd53a01b9234eb603f3e79..0000000000000000000000000000000000000000 --- a/spaces/BilalSardar/Black-N-White-To-Color/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Black N White To Color -emoji: 🦀 -colorFrom: pink -colorTo: yellow -sdk: gradio -sdk_version: 3.20.1 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/execution_policy.h b/spaces/CVPR/LIVE/thrust/thrust/system/cuda/execution_policy.h deleted file mode 100644 index 39bbb7927efd9fc1037f3a050429d0769e328ad5..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/execution_policy.h +++ /dev/null @@ -1,84 +0,0 @@ -/****************************************************************************** - * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions are met: - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * * Neither the name of the NVIDIA CORPORATION nor the - * names of its contributors may be used to endorse or promote products - * derived from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" - * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE - * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE - * ARE DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY - * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES - * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; - * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND - * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS - * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- * - ******************************************************************************/ -#pragma once - -// histogram -// sort (radix-sort, merge-sort) - -#include -#include -#include - -// pass -// ---------------- -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -// fail -// ---------------- -// fails with mixed types -#include - -// mixed types are not compiling, commented in testing/scan.cu -#include - -// stubs passed -// ---------------- -#include -#include -#include -#include -#include - -// work in progress - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/sort.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/sort.h deleted file mode 100644 index 9d4ac199810cd7e8dcc815c8f90c43f36cb84d61..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/sort.h +++ /dev/null @@ -1,154 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include -#include - -namespace thrust -{ -namespace system -{ -namespace detail -{ -namespace generic -{ - - -template -__host__ __device__ - void sort(thrust::execution_policy &exec, - RandomAccessIterator first, - RandomAccessIterator last); - - -template -__host__ __device__ - void sort(thrust::execution_policy &exec, - RandomAccessIterator first, - RandomAccessIterator last, - StrictWeakOrdering comp); - - -template -__host__ __device__ - void sort_by_key(thrust::execution_policy &exec, - RandomAccessIterator1 keys_first, - RandomAccessIterator1 keys_last, - RandomAccessIterator2 values_first); - - -template -__host__ __device__ - void sort_by_key(thrust::execution_policy &exec, - RandomAccessIterator1 keys_first, - RandomAccessIterator1 keys_last, - RandomAccessIterator2 values_first, - StrictWeakOrdering comp); - - -template -__host__ __device__ - void stable_sort(thrust::execution_policy &exec, - RandomAccessIterator first, - RandomAccessIterator last); - - -// XXX it is an error to call this function; it has no implementation -template -__host__ __device__ - void stable_sort(thrust::execution_policy &exec, - RandomAccessIterator first, - RandomAccessIterator last, - StrictWeakOrdering comp); - - -template -__host__ __device__ - void stable_sort_by_key(thrust::execution_policy &exec, - RandomAccessIterator1 keys_first, - RandomAccessIterator1 keys_last, - RandomAccessIterator2 values_first); - - -// XXX it is an error to call this function; it has no implementation -template -__host__ __device__ - void stable_sort_by_key(thrust::execution_policy &exec, - RandomAccessIterator1 keys_first, - RandomAccessIterator1 keys_last, - RandomAccessIterator2 values_first, - StrictWeakOrdering comp); - - -template -__host__ __device__ - bool is_sorted(thrust::execution_policy &exec, - 
ForwardIterator first, - ForwardIterator last); - - -template -__host__ __device__ - bool is_sorted(thrust::execution_policy &exec, - ForwardIterator first, - ForwardIterator last, - Compare comp); - - -template -__host__ __device__ - ForwardIterator is_sorted_until(thrust::execution_policy &exec, - ForwardIterator first, - ForwardIterator last); - - -template -__host__ __device__ - ForwardIterator is_sorted_until(thrust::execution_policy &exec, - ForwardIterator first, - ForwardIterator last, - Compare comp); - - -} // end generic -} // end detail -} // end system -} // end thrust - -#include - diff --git a/spaces/CVPR/WALT/mmdet/datasets/pipelines/loading.py b/spaces/CVPR/WALT/mmdet/datasets/pipelines/loading.py deleted file mode 100644 index 8c1d11f364e29707069b881fdca6f99dc1a52680..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/datasets/pipelines/loading.py +++ /dev/null @@ -1,470 +0,0 @@ -import os.path as osp - -import mmcv -import numpy as np -import pycocotools.mask as maskUtils - -from mmdet.core import BitmapMasks, PolygonMasks -from ..builder import PIPELINES - - -@PIPELINES.register_module() -class LoadImageFromFile(object): - """Load an image from file. - - Required keys are "img_prefix" and "img_info" (a dict that must contain the - key "filename"). Added or updated keys are "filename", "img", "img_shape", - "ori_shape" (same as `img_shape`), "pad_shape" (same as `img_shape`), - "scale_factor" (1.0) and "img_norm_cfg" (means=0 and stds=1). - - Args: - to_float32 (bool): Whether to convert the loaded image to a float32 - numpy array. If set to False, the loaded image is an uint8 array. - Defaults to False. - color_type (str): The flag argument for :func:`mmcv.imfrombytes`. - Defaults to 'color'. - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. - Defaults to ``dict(backend='disk')``. - """ - - def __init__(self, - to_float32=False, - color_type='color', - file_client_args=dict(backend='disk')): - self.to_float32 = to_float32 - self.color_type = color_type - self.file_client_args = file_client_args.copy() - self.file_client = None - - def __call__(self, results): - """Call functions to load image and get image meta information. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded image and meta information. - """ - - if self.file_client is None: - self.file_client = mmcv.FileClient(**self.file_client_args) - - if results['img_prefix'] is not None: - filename = osp.join(results['img_prefix'], - results['img_info']['filename']) - else: - filename = results['img_info']['filename'] - - img_bytes = self.file_client.get(filename) - img = mmcv.imfrombytes(img_bytes, flag=self.color_type) - if self.to_float32: - img = img.astype(np.float32) - - results['filename'] = filename - results['ori_filename'] = results['img_info']['filename'] - results['img'] = img - results['img_shape'] = img.shape - results['ori_shape'] = img.shape - results['img_fields'] = ['img'] - return results - - def __repr__(self): - repr_str = (f'{self.__class__.__name__}(' - f'to_float32={self.to_float32}, ' - f"color_type='{self.color_type}', " - f'file_client_args={self.file_client_args})') - return repr_str - - -@PIPELINES.register_module() -class LoadImageFromWebcam(LoadImageFromFile): - """Load an image from webcam. - - Similar with :obj:`LoadImageFromFile`, but the image read from webcam is in - ``results['img']``. 
- """ - - def __call__(self, results): - """Call functions to add image meta information. - - Args: - results (dict): Result dict with Webcam read image in - ``results['img']``. - - Returns: - dict: The dict contains loaded image and meta information. - """ - - img = results['img'] - if self.to_float32: - img = img.astype(np.float32) - - results['filename'] = None - results['ori_filename'] = None - results['img'] = img - results['img_shape'] = img.shape - results['ori_shape'] = img.shape - results['img_fields'] = ['img'] - return results - - -@PIPELINES.register_module() -class LoadMultiChannelImageFromFiles(object): - """Load multi-channel images from a list of separate channel files. - - Required keys are "img_prefix" and "img_info" (a dict that must contain the - key "filename", which is expected to be a list of filenames). - Added or updated keys are "filename", "img", "img_shape", - "ori_shape" (same as `img_shape`), "pad_shape" (same as `img_shape`), - "scale_factor" (1.0) and "img_norm_cfg" (means=0 and stds=1). - - Args: - to_float32 (bool): Whether to convert the loaded image to a float32 - numpy array. If set to False, the loaded image is an uint8 array. - Defaults to False. - color_type (str): The flag argument for :func:`mmcv.imfrombytes`. - Defaults to 'color'. - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. - Defaults to ``dict(backend='disk')``. - """ - - def __init__(self, - to_float32=False, - color_type='unchanged', - file_client_args=dict(backend='disk')): - self.to_float32 = to_float32 - self.color_type = color_type - self.file_client_args = file_client_args.copy() - self.file_client = None - - def __call__(self, results): - """Call functions to load multiple images and get images meta - information. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded images and meta information. - """ - - if self.file_client is None: - self.file_client = mmcv.FileClient(**self.file_client_args) - - if results['img_prefix'] is not None: - filename = [ - osp.join(results['img_prefix'], fname) - for fname in results['img_info']['filename'] - ] - else: - filename = results['img_info']['filename'] - - img = [] - for name in filename: - img_bytes = self.file_client.get(name) - img.append(mmcv.imfrombytes(img_bytes, flag=self.color_type)) - img = np.stack(img, axis=-1) - if self.to_float32: - img = img.astype(np.float32) - - results['filename'] = filename - results['ori_filename'] = results['img_info']['filename'] - results['img'] = img - results['img_shape'] = img.shape - results['ori_shape'] = img.shape - # Set initial values for default meta_keys - results['pad_shape'] = img.shape - results['scale_factor'] = 1.0 - num_channels = 1 if len(img.shape) < 3 else img.shape[2] - results['img_norm_cfg'] = dict( - mean=np.zeros(num_channels, dtype=np.float32), - std=np.ones(num_channels, dtype=np.float32), - to_rgb=False) - return results - - def __repr__(self): - repr_str = (f'{self.__class__.__name__}(' - f'to_float32={self.to_float32}, ' - f"color_type='{self.color_type}', " - f'file_client_args={self.file_client_args})') - return repr_str - - -@PIPELINES.register_module() -class LoadAnnotations(object): - """Load mutiple types of annotations. - - Args: - with_bbox (bool): Whether to parse and load the bbox annotation. - Default: True. - with_label (bool): Whether to parse and load the label annotation. - Default: True. 
- with_mask (bool): Whether to parse and load the mask annotation. - Default: False. - with_seg (bool): Whether to parse and load the semantic segmentation - annotation. Default: False. - poly2mask (bool): Whether to convert the instance masks from polygons - to bitmaps. Default: True. - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. - Defaults to ``dict(backend='disk')``. - """ - - def __init__(self, - with_bbox=True, - with_label=True, - with_mask=False, - with_seg=False, - poly2mask=True, - file_client_args=dict(backend='disk')): - self.with_bbox = with_bbox - self.with_label = with_label - self.with_mask = with_mask - self.with_seg = with_seg - self.poly2mask = poly2mask - self.file_client_args = file_client_args.copy() - self.file_client = None - - def _load_bboxes(self, results): - """Private function to load bounding box annotations. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded bounding box annotations. - """ - - ann_info = results['ann_info'] - results['gt_bboxes'] = ann_info['bboxes'].copy() - - gt_bboxes_ignore = ann_info.get('bboxes_ignore', None) - if gt_bboxes_ignore is not None: - results['gt_bboxes_ignore'] = gt_bboxes_ignore.copy() - results['bbox_fields'].append('gt_bboxes_ignore') - results['bbox_fields'].append('gt_bboxes') - return results - - def _load_labels(self, results): - """Private function to load label annotations. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded label annotations. - """ - - results['gt_labels'] = results['ann_info']['labels'].copy() - return results - - def _poly2mask(self, mask_ann, img_h, img_w): - """Private function to convert masks represented with polygon to - bitmaps. - - Args: - mask_ann (list | dict): Polygon mask annotation input. - img_h (int): The height of output mask. - img_w (int): The width of output mask. - - Returns: - numpy.ndarray: The decode bitmap mask of shape (img_h, img_w). - """ - - if isinstance(mask_ann, list): - # polygon -- a single object might consist of multiple parts - # we merge all parts into one mask rle code - rles = maskUtils.frPyObjects(mask_ann, img_h, img_w) - rle = maskUtils.merge(rles) - elif isinstance(mask_ann['counts'], list): - # uncompressed RLE - rle = maskUtils.frPyObjects(mask_ann, img_h, img_w) - else: - # rle - rle = mask_ann - mask = maskUtils.decode(rle) - return mask - - def process_polygons(self, polygons): - """Convert polygons to list of ndarray and filter invalid polygons. - - Args: - polygons (list[list]): Polygons of one instance. - - Returns: - list[numpy.ndarray]: Processed polygons. - """ - - polygons = [np.array(p) for p in polygons] - valid_polygons = [] - for polygon in polygons: - if len(polygon) % 2 == 0 and len(polygon) >= 6: - valid_polygons.append(polygon) - return valid_polygons - - def _load_masks(self, results): - """Private function to load mask annotations. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded mask annotations. - If ``self.poly2mask`` is set ``True``, `gt_mask` will contain - :obj:`PolygonMasks`. Otherwise, :obj:`BitmapMasks` is used. 
-        """
-
-        h, w = results['img_info']['height'], results['img_info']['width']
-        gt_masks = results['ann_info']['masks']
-        if self.poly2mask:
-            masks_all = []
-            for mask in gt_masks:
-                if 'full' in mask:
-                    full = self._poly2mask(mask['full'], h, w) * 2
-                    visible = self._poly2mask(mask['visible'], h, w)
-                    full[visible == 1] = 1
-                    masks_all.append(full)
-                else:
-                    # No amodal 'full' mask for this instance; fall back to
-                    # the visible mask only.
-                    visible = self._poly2mask(mask['visible'], h, w)
-                    masks_all.append(visible)
-
-            gt_masks = BitmapMasks(masks_all, h, w)
-        else:
-            gt_masks = PolygonMasks(
-                [self.process_polygons(polygons) for polygons in gt_masks], h,
-                w)
-        results['gt_masks'] = gt_masks
-        results['mask_fields'].append('gt_masks')
-        return results
-
-    def _load_semantic_seg(self, results):
-        """Private function to load semantic segmentation annotations.
-
-        Args:
-            results (dict): Result dict from :obj:`dataset`.
-
-        Returns:
-            dict: The dict contains loaded semantic segmentation annotations.
-        """
-
-        if self.file_client is None:
-            self.file_client = mmcv.FileClient(**self.file_client_args)
-
-        filename = osp.join(results['seg_prefix'],
-                            results['ann_info']['seg_map'])
-        img_bytes = self.file_client.get(filename)
-        results['gt_semantic_seg'] = mmcv.imfrombytes(
-            img_bytes, flag='unchanged').squeeze()
-        results['seg_fields'].append('gt_semantic_seg')
-        return results
-
-    def __call__(self, results):
-        """Call function to load multiple types of annotations.
-
-        Args:
-            results (dict): Result dict from :obj:`mmdet.CustomDataset`.
-
-        Returns:
-            dict: The dict contains loaded bounding box, label, mask and
-                semantic segmentation annotations.
-        """
-
-        if self.with_bbox:
-            results = self._load_bboxes(results)
-            if results is None:
-                return None
-        if self.with_label:
-            results = self._load_labels(results)
-        if self.with_mask:
-            results = self._load_masks(results)
-        if self.with_seg:
-            results = self._load_semantic_seg(results)
-        return results
-
-    def __repr__(self):
-        repr_str = self.__class__.__name__
-        repr_str += f'(with_bbox={self.with_bbox}, '
-        repr_str += f'with_label={self.with_label}, '
-        repr_str += f'with_mask={self.with_mask}, '
-        repr_str += f'with_seg={self.with_seg}, '
-        repr_str += f'poly2mask={self.poly2mask}, '
-        repr_str += f'file_client_args={self.file_client_args})'
-        return repr_str
-
-
-@PIPELINES.register_module()
-class LoadProposals(object):
-    """Load proposal pipeline.
-
-    Required key is "proposals". Updated keys are "proposals", "bbox_fields".
-
-    Args:
-        num_max_proposals (int, optional): Maximum number of proposals to load.
-            If not specified, all proposals will be loaded.
-    """
-
-    def __init__(self, num_max_proposals=None):
-        self.num_max_proposals = num_max_proposals
-
-    def __call__(self, results):
-        """Call function to load proposals from file.
-
-        Args:
-            results (dict): Result dict from :obj:`mmdet.CustomDataset`.
-
-        Returns:
-            dict: The dict contains loaded proposal annotations.
- """ - - proposals = results['proposals'] - if proposals.shape[1] not in (4, 5): - raise AssertionError( - 'proposals should have shapes (n, 4) or (n, 5), ' - f'but found {proposals.shape}') - proposals = proposals[:, :4] - - if self.num_max_proposals is not None: - proposals = proposals[:self.num_max_proposals] - - if len(proposals) == 0: - proposals = np.array([[0, 0, 0, 0]], dtype=np.float32) - results['proposals'] = proposals - results['bbox_fields'].append('proposals') - return results - - def __repr__(self): - return self.__class__.__name__ + \ - f'(num_max_proposals={self.num_max_proposals})' - - -@PIPELINES.register_module() -class FilterAnnotations(object): - """Filter invalid annotations. - - Args: - min_gt_bbox_wh (tuple[int]): Minimum width and height of ground truth - boxes. - """ - - def __init__(self, min_gt_bbox_wh): - # TODO: add more filter options - self.min_gt_bbox_wh = min_gt_bbox_wh - - def __call__(self, results): - assert 'gt_bboxes' in results - gt_bboxes = results['gt_bboxes'] - w = gt_bboxes[:, 2] - gt_bboxes[:, 0] - h = gt_bboxes[:, 3] - gt_bboxes[:, 1] - keep = (w > self.min_gt_bbox_wh[0]) & (h > self.min_gt_bbox_wh[1]) - if not keep.any(): - return None - else: - keys = ('gt_bboxes', 'gt_labels', 'gt_masks', 'gt_semantic_seg') - for key in keys: - if key in results: - results[key] = results[key][keep] - return results diff --git a/spaces/CVPR/transfiner/configs/new_baselines/mask_rcnn_R_101_FPN_400ep_LSJ.py b/spaces/CVPR/transfiner/configs/new_baselines/mask_rcnn_R_101_FPN_400ep_LSJ.py deleted file mode 100644 index 63c54ee9a5ce2368494b775cc90fada1439feaa5..0000000000000000000000000000000000000000 --- a/spaces/CVPR/transfiner/configs/new_baselines/mask_rcnn_R_101_FPN_400ep_LSJ.py +++ /dev/null @@ -1,14 +0,0 @@ -from .mask_rcnn_R_101_FPN_100ep_LSJ import ( - dataloader, - lr_multiplier, - model, - optimizer, - train, -) - -train.max_iter *= 4 # 100ep -> 400ep - -lr_multiplier.scheduler.milestones = [ - milestone * 4 for milestone in lr_multiplier.scheduler.milestones -] -lr_multiplier.scheduler.num_updates = train.max_iter diff --git a/spaces/Cletrason/Cletrason-toad-in-the-mario-movie/trainer_tf.py b/spaces/Cletrason/Cletrason-toad-in-the-mario-movie/trainer_tf.py deleted file mode 100644 index dbea9ed5079c3007b151420ad8dba50cb723e5cd..0000000000000000000000000000000000000000 --- a/spaces/Cletrason/Cletrason-toad-in-the-mario-movie/trainer_tf.py +++ /dev/null @@ -1,801 +0,0 @@ -# Copyright 2020 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-"""Tensorflow trainer class.""" - -import datetime -import math -import os -import warnings -from typing import Callable, Dict, Optional, Tuple - -from .utils import ENV_VARS_TRUE_VALUES - - -# Integrations must be imported before ML frameworks: -# isort: off -from .integrations import ( - is_comet_available, - is_wandb_available, -) - -# isort: on - -import numpy as np -import tensorflow as tf -from tensorflow.python.distribute.values import PerReplica - -from .modeling_tf_utils import TFPreTrainedModel -from .optimization_tf import GradientAccumulator, create_optimizer -from .trainer_utils import ( - PREFIX_CHECKPOINT_DIR, - EvalPrediction, - IntervalStrategy, - PredictionOutput, - enable_full_determinism, - set_seed, -) -from .training_args_tf import TFTrainingArguments -from .utils import logging - - -if is_wandb_available(): - import wandb - -if is_comet_available(): - import comet_ml - -logger = logging.get_logger(__name__) - - -class TFTrainer: - """ - TFTrainer is a simple but feature-complete training and eval loop for TensorFlow, optimized for 🤗 Transformers. - - Args: - model ([`TFPreTrainedModel`]): - The model to train, evaluate or use for predictions. - args ([`TFTrainingArguments`]): - The arguments to tweak training. - train_dataset ([`~tf.data.Dataset`], *optional*): - The dataset to use for training. The dataset should yield tuples of `(features, labels)` where `features` - is a dict of input features and `labels` is the labels. If `labels` is a tensor, the loss is calculated by - the model by calling `model(features, labels=labels)`. If `labels` is a dict, such as when using a - QuestionAnswering head model with multiple targets, the loss is instead calculated by calling - `model(features, **labels)`. - eval_dataset ([`~tf.data.Dataset`], *optional*): - The dataset to use for evaluation. The dataset should yield tuples of `(features, labels)` where `features` - is a dict of input features and `labels` is the labels. If `labels` is a tensor, the loss is calculated by - the model by calling `model(features, labels=labels)`. If `labels` is a dict, such as when using a - QuestionAnswering head model with multiple targets, the loss is instead calculated by calling - `model(features, **labels)`. - compute_metrics (`Callable[[EvalPrediction], Dict]`, *optional*): - The function that will be used to compute metrics at evaluation. Must take a [`EvalPrediction`] and return - a dictionary string to metric values. - tb_writer (`tf.summary.SummaryWriter`, *optional*): - Object to write to TensorBoard. - optimizers (`Tuple[tf.keras.optimizers.Optimizer, tf.keras.optimizers.schedules.LearningRateSchedule]`, *optional*): - A tuple containing the optimizer and the scheduler to use. The optimizer default to an instance of - [`tf.keras.optimizers.Adam`] if `args.weight_decay_rate` is 0 else an instance of [`AdamWeightDecay`]. The - scheduler will default to an instance of [`tf.keras.optimizers.schedules.PolynomialDecay`] if - `args.num_warmup_steps` is 0 else an instance of [`WarmUp`]. 
- """ - - def __init__( - self, - model: TFPreTrainedModel, - args: TFTrainingArguments, - train_dataset: Optional[tf.data.Dataset] = None, - eval_dataset: Optional[tf.data.Dataset] = None, - compute_metrics: Optional[Callable[[EvalPrediction], Dict]] = None, - tb_writer: Optional[tf.summary.SummaryWriter] = None, - optimizers: Tuple[tf.keras.optimizers.Optimizer, tf.keras.optimizers.schedules.LearningRateSchedule] = ( - None, - None, - ), - ): - self.model = model - self.args = args - self.train_dataset = train_dataset - self.eval_dataset = eval_dataset - self.compute_metrics = compute_metrics - self.optimizer, self.lr_scheduler = optimizers - self.gradient_accumulator = GradientAccumulator() - self.global_step = 0 - self.epoch_logging = 0 - self.eval_loss = tf.keras.metrics.Sum() - - warnings.warn( - "The class `TFTrainer` is deprecated and will be removed in version 5 of Transformers. " - "We recommend using native Keras instead, by calling methods like `fit()` and `predict()` " - "directly on the model object. Detailed examples of the Keras style can be found in our " - "examples at https://github.com/huggingface/transformers/tree/main/examples/tensorflow", - FutureWarning, - ) - - if tb_writer is not None: - self.tb_writer = tb_writer - else: - self.tb_writer = tf.summary.create_file_writer(self.args.logging_dir) - - if is_wandb_available(): - self.setup_wandb() - elif os.getenv("WANDB_DISABLED", "").upper() not in ENV_VARS_TRUE_VALUES: - logger.info( - "You are instantiating a Trainer but W&B is not installed. To use wandb logging, " - "run `pip install wandb && wandb login` see https://docs.wandb.com/huggingface." - ) - - if is_comet_available(): - self.setup_comet() - elif os.environ.get("COMET_MODE") != "DISABLED": - logger.info( - "To use comet_ml logging, run `pip/conda install comet_ml` " - "see https://www.comet.ml/docs/python-sdk/huggingface/" - ) - - enable_full_determinism(self.args.seed) if self.args.full_determinism else set_seed(self.args.seed) - - def get_train_tfdataset(self) -> tf.data.Dataset: - """ - Returns the training [`~tf.data.Dataset`]. - - Subclass and override this method if you want to inject some custom behavior. - """ - if self.train_dataset is None: - raise ValueError("Trainer: training requires a train_dataset.") - - self.total_train_batch_size = self.args.train_batch_size * self.args.gradient_accumulation_steps - self.num_train_examples = self.train_dataset.cardinality().numpy() - - if self.num_train_examples < 0: - raise ValueError("The training dataset must have an asserted cardinality") - - ds = ( - self.train_dataset.repeat() - .shuffle(self.num_train_examples, seed=self.args.seed) - .batch(self.total_train_batch_size, drop_remainder=self.args.dataloader_drop_last) - .prefetch(tf.data.experimental.AUTOTUNE) - ) - - return self.args.strategy.experimental_distribute_dataset(ds) - - def get_eval_tfdataset(self, eval_dataset: Optional[tf.data.Dataset] = None) -> tf.data.Dataset: - """ - Returns the evaluation [`~tf.data.Dataset`]. - - Args: - eval_dataset ([`~tf.data.Dataset`], *optional*): - If provided, will override *self.eval_dataset*. The dataset should yield tuples of `(features, labels)` - where `features` is a dict of input features and `labels` is the labels. If `labels` is a tensor, the - loss is calculated by the model by calling `model(features, labels=labels)`. If `labels` is a dict, - such as when using a QuestionAnswering head model with multiple targets, the loss is instead calculated - by calling `model(features, **labels)`. 
- - Subclass and override this method if you want to inject some custom behavior. - """ - if eval_dataset is None and self.eval_dataset is None: - raise ValueError("Trainer: evaluation requires an eval_dataset.") - - eval_dataset = eval_dataset if eval_dataset is not None else self.eval_dataset - num_examples = eval_dataset.cardinality().numpy() - - if num_examples < 0: - raise ValueError("The training dataset must have an asserted cardinality") - - approx = math.floor if self.args.dataloader_drop_last else math.ceil - steps = approx(num_examples / self.args.eval_batch_size) - ds = ( - eval_dataset.repeat() - .batch(self.args.eval_batch_size, drop_remainder=self.args.dataloader_drop_last) - .prefetch(tf.data.experimental.AUTOTUNE) - ) - - return self.args.strategy.experimental_distribute_dataset(ds), steps, num_examples - - def get_test_tfdataset(self, test_dataset: tf.data.Dataset) -> tf.data.Dataset: - """ - Returns a test [`~tf.data.Dataset`]. - - Args: - test_dataset ([`~tf.data.Dataset`]): - The dataset to use. The dataset should yield tuples of `(features, labels)` where `features` is a dict - of input features and `labels` is the labels. If `labels` is a tensor, the loss is calculated by the - model by calling `model(features, labels=labels)`. If `labels` is a dict, such as when using a - QuestionAnswering head model with multiple targets, the loss is instead calculated by calling - `model(features, **labels)`. - - Subclass and override this method if you want to inject some custom behavior. - """ - - num_examples = test_dataset.cardinality().numpy() - - if num_examples < 0: - raise ValueError("The training dataset must have an asserted cardinality") - - steps = math.ceil(num_examples / self.args.eval_batch_size) - ds = test_dataset.batch(self.args.eval_batch_size).prefetch(tf.data.experimental.AUTOTUNE) - - return self.args.strategy.experimental_distribute_dataset(ds), steps, num_examples - - def create_optimizer_and_scheduler(self, num_training_steps: int): - """ - Setup the optimizer and the learning rate scheduler. - - We provide a reasonable default that works well. If you want to use something else, you can pass a tuple in the - TFTrainer's init through `optimizers`, or subclass and override this method. - """ - if not self.optimizer and not self.lr_scheduler: - warmup_steps = ( - self.args.warmup_steps - if self.args.warmup_steps > 0 - else math.ceil(num_training_steps * self.args.warmup_ratio) - ) - - self.optimizer, self.lr_scheduler = create_optimizer( - self.args.learning_rate, - num_training_steps, - warmup_steps, - adam_beta1=self.args.adam_beta1, - adam_beta2=self.args.adam_beta2, - adam_epsilon=self.args.adam_epsilon, - weight_decay_rate=self.args.weight_decay, - power=self.args.poly_power, - ) - - def setup_wandb(self): - """ - Setup the optional Weights & Biases (`wandb`) integration. - - One can subclass and override this method to customize the setup if needed. Find more information `here - `__. You can also override the following environment variables: - - Environment: - WANDB_PROJECT: - (Optional): str - "huggingface" by default, set this to a custom string to store results in a different - project. - WANDB_DISABLED: - (Optional): boolean - defaults to false, set to "true" to disable wandb entirely. 
- """ - - logger.info('Automatic Weights & Biases logging enabled, to disable set os.environ["WANDB_DISABLED"] = "true"') - combined_dict = {**self.model.config.to_dict(), **self.args.to_sanitized_dict()} - wandb.init(project=os.getenv("WANDB_PROJECT", "huggingface"), config=combined_dict, name=self.args.run_name) - - def setup_comet(self): - """ - Setup the optional Comet.ml integration. - - Environment: - COMET_MODE: - (Optional): str - "OFFLINE", "ONLINE", or "DISABLED" - COMET_PROJECT_NAME: - (Optional): str - Comet.ml project name for experiments - COMET_OFFLINE_DIRECTORY: - (Optional): str - folder to use for saving offline experiments when `COMET_MODE` is "OFFLINE" - - For a number of configurable items in the environment, see `here - `__ - """ - comet_mode = os.getenv("COMET_MODE", "ONLINE").upper() - args = {"project_name": os.getenv("COMET_PROJECT_NAME", "huggingface")} - experiment = None - if comet_mode == "ONLINE": - experiment = comet_ml.Experiment(**args) - logger.info("Automatic Comet.ml online logging enabled") - elif comet_mode == "OFFLINE": - args["offline_directory"] = os.getenv("COMET_OFFLINE_DIRECTORY", "./") - experiment = comet_ml.OfflineExperiment(**args) - logger.info("Automatic Comet.ml offline logging enabled; use `comet upload` when finished") - if experiment is not None: - experiment._set_model_graph(self.model, framework="transformers") - experiment._log_parameters(self.args, prefix="args/", framework="transformers") - experiment._log_parameters(self.model.config, prefix="config/", framework="transformers") - - def prediction_loop( - self, - dataset: tf.data.Dataset, - steps: int, - num_examples: int, - description: str, - prediction_loss_only: Optional[bool] = None, - ) -> PredictionOutput: - """ - Prediction/evaluation loop, shared by [`~TFTrainer.evaluate`] and [`~TFTrainer.predict`]. - - Works both with or without labels. - """ - - prediction_loss_only = ( - prediction_loss_only if prediction_loss_only is not None else self.args.prediction_loss_only - ) - - logger.info(f"***** Running {description} *****") - logger.info(f" Num examples in dataset = {num_examples}") - if description == "Evaluation": - logger.info(f" Num examples in used in evaluation = {self.args.eval_batch_size * steps}") - logger.info(f" Batch size = {self.args.eval_batch_size}") - - label_ids: np.ndarray = None - preds: np.ndarray = None - self.eval_loss.reset_states() - - # Reset the past mems state at the beginning of the evaluation if necessary. 
- if self.args.past_index >= 0: - self._past = None - - for step, batch in enumerate(dataset): - logits = self.distributed_prediction_steps(batch) - _, labels = batch - - if not prediction_loss_only: - if isinstance(logits, tuple): - logits = logits[0] - - if isinstance(labels, tuple): - labels = labels[0] - - if self.args.n_replicas > 1: - for val in logits.values: - if preds is None: - preds = val.numpy() - else: - preds = np.append(preds, val.numpy(), axis=0) - - for val in labels.values: - if label_ids is None: - label_ids = val.numpy() - else: - label_ids = np.append(label_ids, val.numpy(), axis=0) - else: - if preds is None: - preds = logits.numpy() - else: - preds = np.append(preds, logits.numpy(), axis=0) - - if label_ids is None: - label_ids = labels.numpy() - else: - label_ids = np.append(label_ids, labels.numpy(), axis=0) - - if step == steps - 1: - break - - if self.compute_metrics is not None and preds is not None and label_ids is not None: - metrics = self.compute_metrics(EvalPrediction(predictions=preds, label_ids=label_ids)) - else: - metrics = {} - - metrics["eval_loss"] = self.eval_loss.result().numpy() / steps - - for key in list(metrics.keys()): - if not key.startswith("eval_"): - metrics[f"eval_{key}"] = metrics.pop(key) - - if self.args.past_index and hasattr(self, "_past"): - # Clean the state at the end of training - delattr(self, "_past") - - return PredictionOutput(predictions=preds, label_ids=label_ids, metrics=metrics) - - def log(self, logs: Dict[str, float]) -> None: - """ - Log `logs` on the various objects watching training. - - Subclass and override this method to inject custom behavior. - - Args: - logs (`Dict[str, float]`): - The values to log. - """ - logs["epoch"] = self.epoch_logging - - if self.tb_writer: - with self.tb_writer.as_default(): - for k, v in logs.items(): - tf.summary.scalar(k, v, step=self.global_step) - self.tb_writer.flush() - - if is_wandb_available(): - wandb.log(logs, step=self.global_step) - - if is_comet_available(): - experiment = comet_ml.config.get_global_experiment() - if experiment is not None: - experiment._log_metrics( - logs, step=self.global_step, epoch=self.epoch_logging, framework="transformers" - ) - - output = {**logs, **{"step": self.global_step}} - - logger.info(output) - - def evaluate(self, eval_dataset: Optional[tf.data.Dataset] = None) -> Dict[str, float]: - """ - Run evaluation and returns metrics. - - The calling script will be responsible for providing a method to compute metrics, as they are task-dependent - (pass it to the init `compute_metrics` argument). - - Args: - eval_dataset ([`~tf.data.Dataset`], *optional*): - Pass a dataset if you wish to override `self.eval_dataset`. The dataset should yield tuples of - `(features, labels)` where `features` is a dict of input features and `labels` is the labels. If - `labels` is a tensor, the loss is calculated by the model by calling `model(features, labels=labels)`. - If `labels` is a dict, such as when using a QuestionAnswering head model with multiple targets, the - loss is instead calculated by calling `model(features, **labels)`. - - Returns: - A dictionary containing the evaluation loss and the potential metrics computed from the predictions. 
- """ - eval_ds, steps, num_examples = self.get_eval_tfdataset(eval_dataset) - - output = self.prediction_loop(eval_ds, steps, num_examples, description="Evaluation") - logs = {**output.metrics} - logs["epoch"] = self.epoch_logging - - self.log(logs) - - return output.metrics - - def prediction_step( - self, features: tf.Tensor, labels: tf.Tensor, nb_instances_in_global_batch: tf.Tensor - ) -> tf.Tensor: - """ - Compute the prediction on features and update the loss with labels. - - Subclass and override to inject some custom behavior. - """ - per_example_loss, logits = self.run_model(features, labels, False) - scaled_loss = per_example_loss / tf.cast(nb_instances_in_global_batch, dtype=per_example_loss.dtype) - - self.eval_loss.update_state(scaled_loss) - - return logits - - @tf.function - def distributed_prediction_steps(self, batch): - nb_instances_in_batch = self._compute_nb_instances(batch) - inputs = self._get_step_inputs(batch, nb_instances_in_batch) - - logits = self.args.strategy.run(self.prediction_step, inputs) - - return logits - - def train(self) -> None: - """ - Train method to train the model. - """ - train_ds = self.get_train_tfdataset() - - if self.args.debug: - tf.summary.trace_on(graph=True, profiler=True) - - self.gradient_accumulator.reset() - - num_update_steps_per_epoch = self.num_train_examples / self.total_train_batch_size - - # In fact, ``self.args.dataloader_drop_last`` has no effect in `trainer_tf.py`, because - # the dataset is repeated before being batched. - # It has the effect only when TPU is used which requires explicit tensor shape in order to make - # the gradient accumulation implementation work. - approx = math.floor if self.args.dataloader_drop_last else math.ceil - num_update_steps_per_epoch = approx(num_update_steps_per_epoch) - - # At least one update for each epoch. - num_update_steps_per_epoch = max(num_update_steps_per_epoch, 1) - self.steps_per_epoch = num_update_steps_per_epoch - - if self.args.max_steps > 0: - t_total = self.args.max_steps - epochs = (self.args.max_steps // self.steps_per_epoch) + int( - self.args.max_steps % self.steps_per_epoch > 0 - ) - else: - t_total = self.steps_per_epoch * self.args.num_train_epochs - epochs = self.args.num_train_epochs - - # Since ``self.args.num_train_epochs`` can be `float`, we make ``epochs`` be a `float` always. 
- epochs = float(epochs) - - with self.args.strategy.scope(): - self.create_optimizer_and_scheduler(num_training_steps=t_total) - folder = os.path.join(self.args.output_dir, PREFIX_CHECKPOINT_DIR) - ckpt = tf.train.Checkpoint(optimizer=self.optimizer, model=self.model) - self.model.ckpt_manager = tf.train.CheckpointManager(ckpt, folder, max_to_keep=self.args.save_total_limit) - - iterations = self.optimizer.iterations - epochs_trained = 0 - steps_trained_in_current_epoch = 0 - if self.model.ckpt_manager.latest_checkpoint: - logger.info( - f"Checkpoint file {self.model.ckpt_manager.latest_checkpoint} found and restoring from checkpoint" - ) - ckpt.restore(self.model.ckpt_manager.latest_checkpoint).expect_partial() - - self.global_step = iterations.numpy() - - epochs_trained = self.global_step // self.steps_per_epoch - steps_trained_in_current_epoch = self.global_step % self.steps_per_epoch - - logger.info(" Continuing training from checkpoint, will skip to saved global_step") - logger.info(f" Continuing training from epoch {epochs_trained}") - logger.info(f" Continuing training from global step {self.global_step}") - logger.info(f" Will skip the first {steps_trained_in_current_epoch} steps in the first epoch") - - tf.summary.experimental.set_step(self.global_step) - - with self.tb_writer.as_default(): - tf.summary.text("args", self.args.to_json_string()) - - self.tb_writer.flush() - - logger.info("***** Running training *****") - logger.info(f" Num examples = {self.num_train_examples}") - # TODO: We might want to print a more precise ``epochs`` if self.args.max_steps > 0 ? - logger.info(f" Num Epochs = {epochs}") - logger.info(f" Instantaneous batch size per device = {self.args.per_device_train_batch_size}") - logger.info( - f" Total train batch size (w. parallel, distributed & accumulation) = {self.total_train_batch_size}" - ) - logger.info(f" Gradient Accumulation steps = {self.args.gradient_accumulation_steps}") - logger.info(f" Steps per epoch = {self.steps_per_epoch}") - logger.info(f" Total optimization steps = {t_total}") - - self.train_loss = tf.keras.metrics.Sum() - start_time = datetime.datetime.now() - - for epoch_iter in range(epochs_trained, int(epochs)): - # Reset the past mems state at the beginning of each epoch if necessary. 
- if self.args.past_index >= 0: - self._past = None - - for step, batch in enumerate(train_ds): - # Skip past any already trained steps if resuming training - if steps_trained_in_current_epoch > 0: - steps_trained_in_current_epoch -= 1 - continue - - self.distributed_training_steps(batch) - - self.global_step = iterations.numpy() - self.epoch_logging = epoch_iter + (step + 1) / self.steps_per_epoch - - training_loss = self.train_loss.result() / (step + 1) - - if self.args.debug: - logs = {} - logs["loss"] = training_loss.numpy() - logs["epoch"] = self.epoch_logging - - self.log(logs) - - if self.global_step == 1 and self.args.debug: - with self.tb_writer.as_default(): - tf.summary.trace_export( - name="training", step=self.global_step, profiler_outdir=self.args.logging_dir - ) - - if ( - self.args.eval_steps > 0 - and self.args.evaluation_strategy == IntervalStrategy.STEPS - and self.global_step % self.args.eval_steps == 0 - ): - self.evaluate() - - if (self.args.logging_steps > 0 and self.global_step % self.args.logging_steps == 0) or ( - self.global_step == 1 and self.args.logging_first_step - ): - logs = {} - logs["loss"] = training_loss.numpy() - logs["learning_rate"] = self.lr_scheduler(self.global_step).numpy() - logs["epoch"] = self.epoch_logging - - self.log(logs) - - if self.args.save_steps > 0 and self.global_step % self.args.save_steps == 0: - ckpt_save_path = self.model.ckpt_manager.save() - - logger.info(f"Saving checkpoint for step {self.global_step} at {ckpt_save_path}") - - if self.args.max_steps > 0 and self.global_step >= t_total: - break - - if self.global_step % self.steps_per_epoch == 0: - break - - self.train_loss.reset_states() - - if self.args.max_steps > 0 and self.global_step >= self.args.max_steps: - break - - end_time = datetime.datetime.now() - - logger.info(f"Training took: {str(end_time - start_time)}") - - if self.args.past_index and hasattr(self, "_past"): - # Clean the state at the end of training - delattr(self, "_past") - - def training_step(self, features, labels, nb_instances_in_global_batch): - """ - Perform a training step on features and labels. - - Subclass and override to inject some custom behavior. 
- """ - per_example_loss, _ = self.run_model(features, labels, True) - scaled_loss = per_example_loss / tf.cast(nb_instances_in_global_batch, dtype=per_example_loss.dtype) - gradients = tf.gradients(scaled_loss, self.model.trainable_variables) - gradients = [ - g if g is not None else tf.zeros_like(v) for g, v in zip(gradients, self.model.trainable_variables) - ] - - if self.args.gradient_accumulation_steps > 1: - self.gradient_accumulator(gradients) - - self.train_loss.update_state(scaled_loss) - - if self.args.gradient_accumulation_steps == 1: - return gradients - - def apply_gradients(self, features, labels, nb_instances_in_global_batch): - if self.args.gradient_accumulation_steps == 1: - gradients = self.training_step(features, labels, nb_instances_in_global_batch) - - self.optimizer.apply_gradients(list(zip(gradients, self.model.trainable_variables))) - else: - for _ in tf.range(self.args.gradient_accumulation_steps): - reduced_features = { - k: ft[: self.args.train_batch_size // self.args.n_replicas] for k, ft in features.items() - } - - if tf.is_tensor(labels): - reduced_labels = labels[: self.args.train_batch_size // self.args.n_replicas] - elif isinstance(labels, dict): - reduced_labels = { - k: lbl[: self.args.train_batch_size // self.args.n_replicas] for k, lbl in labels.items() - } - else: - raise ValueError("The labels must be either a tf.Tensor or a dict.") - - self.training_step(reduced_features, reduced_labels, nb_instances_in_global_batch) - - features = { - k: tf.concat( - [ft[self.args.train_batch_size // self.args.n_replicas :], reduced_features[k]], - axis=0, - ) - for k, ft in features.items() - } - - if tf.is_tensor(labels): - labels = tf.concat( - [labels[self.args.train_batch_size // self.args.n_replicas :], reduced_labels], axis=0 - ) - elif isinstance(labels, dict): - labels = { - k: tf.concat( - [lbl[self.args.train_batch_size // self.args.n_replicas :], reduced_labels[k]], - axis=0, - ) - for k, lbl in labels.items() - } - else: - raise ValueError("The labels must be either a tf.Tensor or a dict.") - - gradients = self.gradient_accumulator.gradients - gradients = [ - (tf.clip_by_value(grad, -self.args.max_grad_norm, self.args.max_grad_norm)) for grad in gradients - ] - - self.optimizer.apply_gradients(list(zip(gradients, self.model.trainable_variables))) - self.gradient_accumulator.reset() - - @tf.function - def distributed_training_steps(self, batch): - with self.args.strategy.scope(): - nb_instances_in_batch = self._compute_nb_instances(batch) - inputs = self._get_step_inputs(batch, nb_instances_in_batch) - - self.args.strategy.run(self.apply_gradients, inputs) - - @staticmethod - def _compute_nb_instances(batch): - labels = batch[-1] - if isinstance(labels, PerReplica): - labels = tf.concat(labels.values, axis=0) - - nb_instances = tf.reduce_sum(tf.cast(labels != -100, dtype=tf.int32)) - - return nb_instances - - @staticmethod - def _get_step_inputs(batch, nb_instances): - features, labels = batch - - if isinstance(labels, PerReplica): - # need to make a `PerReplica` objects for ``nb_instances`` - nb_instances = PerReplica([nb_instances] * len(labels.values)) - - step_inputs = (features, labels, nb_instances) - - return step_inputs - - def run_model(self, features, labels, training): - """ - Computes the loss of the given features and labels pair. - - Subclass and override this method if you want to inject some custom behavior. - - Args: - features (`tf.Tensor`): A batch of input features. - labels (`tf.Tensor`): A batch of labels. 
- training (`bool`): Whether or not to run the model in training mode. - - Returns: - A tuple of two `tf.Tensor`: The loss and logits. - """ - - if self.args.past_index >= 0 and getattr(self, "_past", None) is not None: - features["mems"] = self._past - - if isinstance(labels, (dict)): - outputs = self.model(features, training=training, **labels)[:2] - else: - outputs = self.model(features, labels=labels, training=training)[:2] - - loss, logits = outputs[:2] - - if self.args.past_index >= 0: - self._past = outputs[self.args.past_index] - - return loss, logits - - def predict(self, test_dataset: tf.data.Dataset) -> PredictionOutput: - """ - Run prediction and returns predictions and potential metrics. - - Depending on the dataset and your use case, your test dataset may contain labels. In that case, this method - will also return metrics, like in `evaluate()`. - - Args: - test_dataset ([`~tf.data.Dataset`]): - Dataset to run the predictions on. The dataset should yield tuples of `(features, labels)` where - `features` is a dict of input features and `labels` is the labels. If `labels` is a tensor, the loss is - calculated by the model by calling `model(features, labels=labels)`. If `labels` is a dict, such as - when using a QuestionAnswering head model with multiple targets, the loss is instead calculated by - calling `model(features, **labels)` - - Returns: *NamedTuple* A namedtuple with the following keys: - - - predictions (`np.ndarray`): The predictions on `test_dataset`. - - label_ids (`np.ndarray`, *optional*): The labels (if the dataset contained some). - - metrics (`Dict[str, float]`, *optional*): The potential dictionary of metrics (if the dataset contained - labels). - """ - test_ds, steps, num_examples = self.get_test_tfdataset(test_dataset) - - return self.prediction_loop(test_ds, steps, num_examples, description="Prediction") - - def save_model(self, output_dir: Optional[str] = None): - """ - Will save the model, so you can reload it using `from_pretrained()`. 
- """ - output_dir = output_dir if output_dir is not None else self.args.output_dir - - logger.info(f"Saving model in {output_dir}") - - if not isinstance(self.model, TFPreTrainedModel): - raise ValueError("Trainer.model appears to not be a PreTrainedModel") - - self.model.save_pretrained(output_dir) \ No newline at end of file diff --git a/spaces/CoffeeBrewer/CompVis-stable-diffusion-v1-4/app.py b/spaces/CoffeeBrewer/CompVis-stable-diffusion-v1-4/app.py deleted file mode 100644 index e1e1025c8f06010197c50917ac9dd1ddeaf7e5aa..0000000000000000000000000000000000000000 --- a/spaces/CoffeeBrewer/CompVis-stable-diffusion-v1-4/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/CompVis/stable-diffusion-v1-4").launch() \ No newline at end of file diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/util/test.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/util/test.py deleted file mode 100644 index ae99b2778f346df88890a0f3e2c1d0b730a5309d..0000000000000000000000000000000000000000 --- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/util/test.py +++ /dev/null @@ -1,7 +0,0 @@ -#encoding = utf-8 -import numpy as np - -assert_true = np.testing.assert_ -assert_equal = np.testing.assert_equal -assert_array_equal = np.testing.assert_array_equal -assert_almost_equal = np.testing.assert_almost_equal diff --git a/spaces/Cyril666/my_abi/app.py b/spaces/Cyril666/my_abi/app.py deleted file mode 100644 index 36e4bca6c60b2aa7eecb1d978ef035ebb2e60a62..0000000000000000000000000000000000000000 --- a/spaces/Cyril666/my_abi/app.py +++ /dev/null @@ -1,34 +0,0 @@ -import os -os.system('pip install --upgrade gdown') -import gdown -gdown.download(id='1mYM_26qHUom_5NU7iutHneB_KHlLjL5y', output='workdir.zip') -os.system('unzip workdir.zip') - -import glob -import gradio as gr -from demo import get_model, preprocess, postprocess, load -from utils import Config, Logger, CharsetMapper - -def process_image(image): - config = Config('configs/train_abinet.yaml') - config.model_vision_checkpoint = None - model = get_model(config) - model = load(model, 'workdir/train-abinet/best-train-abinet.pth') - charset = CharsetMapper(filename=config.dataset_charset_path, max_length=config.dataset_max_length + 1) - - img = image.convert('RGB') - img = preprocess(img, config.dataset_image_width, config.dataset_image_height) - res = model(img) - return postprocess(res, charset, 'alignment')[0][0] - -title = "张博强毕设中期展示(文本识别部分)" -description = "西北工业大学航海学院张博强毕设,目前识别部分进度为复现abinet,本网页为abinet复现的可视化web端展示" -#article = "
Read Like Humans: Autonomous, Bidirectional and Iterative Language Modeling for Scene Text Recognition | Github Repo
" - -iface = gr.Interface(fn=process_image, - inputs=[gr.inputs.Image(type="pil")], - outputs=[gr.outputs.Textbox()], - title=title, - description=description,) - #examples=glob.glob('figs/test/*.png')) -iface.launch(enable_queue=True) \ No newline at end of file diff --git a/spaces/Cyril666/my_abi/modules/model_abinet.py b/spaces/Cyril666/my_abi/modules/model_abinet.py deleted file mode 100644 index 34c37b64ac4814b868483e3027d6ecf88b62c1bb..0000000000000000000000000000000000000000 --- a/spaces/Cyril666/my_abi/modules/model_abinet.py +++ /dev/null @@ -1,30 +0,0 @@ -import torch -import torch.nn as nn -from fastai.vision import * - -from .model_vision import BaseVision -from .model_language import BCNLanguage -from .model_alignment import BaseAlignment - - -class ABINetModel(nn.Module): - def __init__(self, config): - super().__init__() - self.use_alignment = ifnone(config.model_use_alignment, True) - self.max_length = config.dataset_max_length + 1 # additional stop token - self.vision = BaseVision(config) - self.language = BCNLanguage(config) - if self.use_alignment: self.alignment = BaseAlignment(config) - - def forward(self, images, *args): - v_res = self.vision(images) - v_tokens = torch.softmax(v_res['logits'], dim=-1) - v_lengths = v_res['pt_lengths'].clamp_(2, self.max_length) # TODO:move to langauge model - - l_res = self.language(v_tokens, v_lengths) - if not self.use_alignment: - return l_res, v_res - l_feature, v_feature = l_res['feature'], v_res['feature'] - - a_res = self.alignment(l_feature, v_feature) - return a_res, l_res, v_res diff --git a/spaces/DEEMOSTECH/ChatAvatar/static/css/main.2a935a7f.css b/spaces/DEEMOSTECH/ChatAvatar/static/css/main.2a935a7f.css deleted file mode 100644 index 02847b41795c16c81949b6cedcdfa5e5ea9d11f7..0000000000000000000000000000000000000000 --- a/spaces/DEEMOSTECH/ChatAvatar/static/css/main.2a935a7f.css +++ /dev/null @@ -1,2 +0,0 @@ -html{overflow-x:hidden;overflow-y:overlay}body{-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale;box-sizing:border-box;color:#cfcfcf;font-family:-apple-system,BlinkMacSystemFont,Segoe UI,Roboto,Oxygen,Ubuntu,Cantarell,Fira Sans,Droid Sans,Helvetica Neue,sans-serif;margin:0}code{font-family:source-code-pro,Menlo,Monaco,Consolas,Courier New,monospace}.root{display:flex;justify-content:center;width:100%}.container{height:100vh;width:100%}.\!container{width:100%!important}@media (min-width:640px){.container{max-width:640px}.\!container{max-width:640px!important}}@media (min-width:768px){.container{max-width:768px}.\!container{max-width:768px!important}}@media (min-width:1024px){.container{max-width:1024px}.\!container{max-width:1024px!important}}@media (min-width:1280px){.container{max-width:1280px}.\!container{max-width:1280px!important}}@media (min-width:1536px){.container{max-width:1536px}.\!container{max-width:1536px!important}}.App{--theme-color:#4a00e0;--font-dark-color:#434343;--font-gray-color:#aaa;--font-light-color:#cfcfcf;--bg-light-color:#fff;--bg-gray0-color:#f8f8f8;--bg-gray1-color:#ececec;--bg-gray2-color:#7c7c7c;--bg-gray3-color:#373737;--bg-theme-color:#e7e3f1;--bg-dark-color:#121317;--side-gap:5rem;--radius:0.5rem;--shadow:-10px 0px 12px 1px hsla(0,0%,53%,.16);display:flex;justify-content:space-between;padding:16px;text-align:center}.App *{box-sizing:border-box;transition:all .3s}.App ::-webkit-scrollbar-thumb{background-color:rgba(0,0,0,.2)}textarea{-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale;border:1px solid 
transparent;color:var(--font-dark-color);font-family:-apple-system,BlinkMacSystemFont,Segoe UI,Roboto,Oxygen,Ubuntu,Cantarell,Fira Sans,Droid Sans,Helvetica Neue,sans-serif;font-size:1rem;line-height:1.5rem;outline:none;padding:0;resize:none}textarea:focus{border-color:var(--theme-color)}img{-webkit-user-drag:none;-webkit-user-select:none;user-select:none}.gallery_con__Y2mej{align-items:flex-start;display:flex;justify-content:center;margin-top:3rem;padding:0 1.25rem;width:100%}.gallery_menuCon__fVdFJ{margin-right:2rem;width:-webkit-max-content;width:max-content}.gallery_menu__U2btD{align-items:center;background-color:initial;border:2px solid transparent;border-radius:1.5rem;cursor:pointer;display:flex;height:3rem;justify-content:center;line-height:1rem;margin-bottom:1rem;text-align:center;width:6rem}.gallery_menu__U2btD.gallery_selected__T2qcs,.gallery_menu__U2btD:hover{background-color:var(--bg-gray3-color);color:#fff}.gallery_menu__U2btD.gallery_selected__T2qcs{border-color:#fff}.gallery_cardsCon__wAfcp{align-items:flex-start;display:flex;flex-grow:1;flex-shrink:1;flex-wrap:wrap;justify-content:space-between;max-height:100vh;max-width:calc(1600px + 9rem)}.gallery_cardsCon__wAfcp::-webkit-scrollbar-thumb{background-color:hsla(0,0%,100%,.2);border:5px solid #121317;border-radius:8px}.gallery_card__noUoL{background-color:var(--bg-gray3-color);border-radius:var(--radius);cursor:pointer;font-size:.75rem;height:260px;margin-bottom:1rem;overflow:hidden;position:relative;width:200px}.gallery_coverImg__BYj-o,.gallery_coverImg__BYj-o img{height:100%;width:100%}.gallery_prompt__9PEmb{background-color:#f8f8f880;border-radius:var(--radius);bottom:1rem;color:var(--font-dark-color);height:0;left:1rem;overflow:hidden;padding:0 .5rem;position:absolute;right:1rem;text-align:left;white-space:pre-wrap;word-break:break-all}.gallery_prompt__9PEmb.gallery_show__c2k50{height:-webkit-fit-content;height:-moz-fit-content;height:fit-content;padding:.5rem}.gallery_infoCon__E8oLy{align-items:center;bottom:1rem;color:var(--font-dark-color);display:flex;justify-content:flex-start;left:1rem;position:absolute;right:1rem}.gallery_avatar__KWBmI,.gallery_avatar__KWBmI img{border-radius:12px;height:24px;overflow:hidden;width:24px}.gallery_avatar__KWBmI{margin-right:1rem}.gallery_spaceholder__xJwYU{flex-grow:1;flex-shrink:1}.header_con__M\+u1W{align-items:center;display:flex;justify-content:center;padding:0 var(--side-gap);width:100vw}.header_header__Y7CqP{align-items:center;border-bottom:1px solid hsla(0,0%,100%,.1);display:flex;justify-content:space-between;padding:1rem 0;width:100%}.header_logoCon__MIdGL{align-items:flex-start;display:flex;height:3rem;justify-content:center}.header_logo__90zuC{height:3rem;margin-right:1rem}.header_logoCon__MIdGL>div{font-size:2rem;font-weight:700;line-height:2rem;margin-top:5px}.header_avatar__B3zXB{background:var(--bg-gray2-color);border-radius:50%;overflow:hidden}.header_avatar__B3zXB,.header_avatar__B3zXB img{height:3rem;width:3rem}.result_con__gHOU1{align-items:center;color:var(--font-dark-color);justify-content:center;width:50%;z-index:999}.result_con__gHOU1 *{flex-shrink:0}.result_board__PCvVJ{background-color:var(--bg-light-color);border-radius:var(--radius);display:flex;flex-flow:column;height:100%;width:100%}.result_colHead__k0Mk-{background:#f9fafb;border:0 solid #e5e7eb;border-radius:8px;flex:0 1 auto;padding:8px}.result_colInner__9FccK{background:#fff;border:1px solid #e5e7eb;border-radius:8px;box-shadow:0 1px 2px 0 
rgba(0,0,0,.05);flex-wrap:wrap;gap:1px;margin-bottom:1rem;overflow:hidden;padding:10px 12px}.result_colDetail__jggqg,.result_colInner__9FccK{align-items:center;flex-direction:column;justify-content:flex-start}.result_colDetail__jggqg{background:#f9fafb;border:0 solid #e5e7eb;border-radius:8px;display:flex;flex:1 1 auto;margin-top:1rem;padding:8px 8px 24px}.result_colContent__FYZno{background:#fff;border:1px solid #e5e7eb;border-radius:8px;height:100%;width:100%}.result_colTitle__R8k\+A{align-items:flex-end;color:#6b7280;display:flex;font-size:.875rem;justify-content:space-between;line-height:1.2rem;margin-bottom:8px;width:100%}.result_passwordCon__OjFSI{border-top:1px solid #e5e7eb;padding:8px 12px 2px}.result_emailCon__eEqXk{padding-bottom:10px;padding-left:12px;padding-right:12px}.result_colTitle__R8k\+A>div{margin-bottom:.5rem}.result_colTitle__R8k\+A>div.result_restart__fLq8E{border-radius:5px;cursor:pointer;font-size:1rem;font-weight:400;margin-bottom:0;margin-left:1rem;padding:.5rem;-webkit-user-select:none;user-select:none}.result_restart__fLq8E:hover{background-color:var(--bg-gray0-color);color:var(--font-dark-color)}.result_spaceholder__GAxGZ{flex-grow:1;flex-shrink:1}.result_lang__85-De{cursor:pointer;font-weight:400;margin-right:1rem;-webkit-user-select:none;user-select:none}.result_lang__85-De.result_en__n-Jo7{margin-left:1rem;margin-right:0;width:4rem}.result_lang__85-De:hover{font-weight:700}.result_lang__85-De.result_selected__kDzD1{color:var(--font-dark-color);font-weight:700}.result_regene__yKazF{color:var(--theme-color);cursor:pointer;font-weight:400;-webkit-user-select:none;user-select:none}.result_chatCon__Hm\+zJ{background-color:var(--bg-gray0-color);border-radius:var(--radius);height:calc(100% - 4rem);padding:1rem}.result_chatCon__Hm\+zJ,.result_chatMsgCon__x8UTP{align-items:center;display:flex;flex-direction:column;flex-grow:1;flex-shrink:1;justify-content:flex-start;width:100%}.result_chatMsgCon__x8UTP{overflow-y:overlay;text-align:left}.result_chatMsgCon__x8UTP::-webkit-scrollbar-thumb{border:none;border-radius:3px}.result_chatMsgCon__x8UTP::-webkit-scrollbar{width:6px}.result_chatMsgRow__dr9Qg{align-items:flex-start;display:flex;flex-direction:row;justify-content:flex-start;margin-bottom:1rem;width:100%}.result_chatMsgRow__dr9Qg.result_user__bUuRg{flex-direction:row-reverse}.result_avatar__B2zOp{background:var(--bg-gray2-color);border-radius:1.5rem;margin-left:0;margin-right:1rem;overflow:hidden}.result_avatar__B2zOp,.result_avatar__B2zOp img{height:3rem;width:3rem}.result_user__bUuRg .result_avatar__B2zOp{margin-left:1rem;margin-right:0}.result_bubble__GexXm{background:var(--bg-theme-color);border-radius:var(--radius);flex-shrink:1;line-height:1.5rem;padding:.75rem 1rem;white-space:pre-wrap;word-break:break-all}.result_bubble__GexXm.result_unactive__zyVF2{background:var(--bg-gray1-color)}.result_user__bUuRg 
.result_bubble__GexXm{background:var(--bg-light-color)}.result_chatIptCon__LXDF-{align-items:center;display:flex;flex-direction:column;justify-content:flex-start;width:100%}.result_chatTipsCon__w4uUf{align-items:flex-end;display:flex;flex-direction:row;justify-content:flex-start;margin-top:1rem;max-width:100%;overflow-x:auto;overflow-y:hidden;width:100%}.result_chatTipsCon__w4uUf::-webkit-scrollbar-thumb{border-color:var(--bg-gray0-color)}.result_chatTips__6b9zJ{background:var(--bg-light-color);border-radius:var(--radius);cursor:pointer;margin-right:1rem;padding:1rem;text-align:left;white-space:pre-wrap;width:15.5rem;word-break:break-all}.result_chatTips__6b9zJ:last-child{margin-right:0}.result_chatRowCon__jLGk3{align-items:flex-start;display:flex;flex-direction:row;justify-content:space-between;margin-top:1rem;width:100%}.result_iptLineCon__nLuWa{flex-grow:1;flex-shrink:1;line-height:1.5rem;margin-right:1rem;position:relative;text-align:left}.result_iptSpaceholder__hAkD5{border:1px solid transparent;max-height:calc(9rem + 2px);visibility:hidden}.result_iptSpaceholder__hAkD5,.result_ipt__tA\+g4{padding:.75rem 1rem;white-space:pre-wrap;word-break:break-all}.result_ipt__tA\+g4{background:var(--bg-light-color);border-radius:var(--radius);bottom:0;left:0;overflow-y:auto;position:absolute;right:0;top:0}.result_ipt__tA\+g4::-webkit-scrollbar-thumb{border-color:var(--bg-light-color)}.result_btn__h5tQr{align-items:center;background-color:var(--theme-color);border:1px solid var(--theme-color);border-radius:1.5rem;color:#fff;cursor:pointer;display:flex;font-weight:700;height:calc(3rem - 2px);justify-content:center;line-height:1rem;padding:0 1.5rem;-webkit-user-select:none;user-select:none}.result_con__gHOU1 .result_btn__h5tQr.result_disabled__lB61-{background:var(--bg-gray2-color);border-color:var(--bg-gray2-color);color:var(--font-light-color);cursor:not-allowed}.result_iptArea__23TZc{background:#fff;border:1px solid #e5e7eb;border-radius:8px;box-shadow:0 0 0 3px transparent,inset 0 2px 4px 0 rgba(0,0,0,.05);color:#1f2937;display:block;font-size:14px;height:42px;line-height:1.4;outline:none!important;padding:10px;position:relative;width:100%}.result_iptArea__23TZc:focus{border-color:#93c5fd;box-shadow:0 0 0 3px #dfedfe,inset 0 2px 4px 0 transparent}.result_iptArea__23TZc::-webkit-scrollbar-thumb{border-color:var(--bg-gray0-color)}.result_clearBtn__r6e0y{background:linear-gradient(to bottom right,#f3f4f6,#e5e7eb);border:1px solid #e5e7eb;border-radius:8px;color:#374151;cursor:pointer;font-size:16px;font-weight:600;height:42px;min-width:max(160px,48%);padding:8px 16px}.result_clearBtn__r6e0y:hover{background:linear-gradient(to bottom right,#f3f4f6,#f3f4f6);border:1px solid #e5e7eb}.result_clearBtnLogin__LOsgV{background:linear-gradient(to bottom right,#f3f4f6,#e5e7eb);border:1px solid #e5e7eb;border-radius:8px;color:#374151;cursor:pointer;font-size:16px;font-weight:700;height:42px;min-width:max(160px,48%);padding:8px 16px}.result_inputError__qtPTq{border-color:#f56565;box-shadow:0 0 0 3px #fed7d7,inset 0 2px 4px 0 transparent}.result_clearBtnLogin__LOsgV:hover{background:linear-gradient(to bottom right,#f3f4f6,#f3f4f6);border:1px solid #e5e7eb}.result_btnCon__LEoi5{display:flex;justify-content:space-between}.result_generateBtn__UGmBG{background:linear-gradient(to bottom right,#ffedd5,#fdba74);border:1px solid #fed7aa;border-radius:8px;color:#ea580c;cursor:pointer;font-size:16px;font-weight:600;height:42px;min-width:max(160px,48%);padding:8px 
16px}.result_generateBtn__UGmBG:hover{background:linear-gradient(to bottom right,#ffecd3,#fed7ab);border:1px solid #ffd8b4}.result_generateBtnLogin__nkLOj{background:linear-gradient(to bottom right,#ffedd5,#fdba74);border:1px solid #fed7aa;border-radius:8px;color:#ea580c;cursor:pointer;font-size:16px;font-weight:700;height:42px;min-width:max(160px,48%);padding:8px 16px}.result_generateBtnLogin__nkLOj:hover{background:linear-gradient(to bottom right,#ffecd3,#fed7ab);border:1px solid #ffd8b4}.result_candidateCon__x9kyB{align-items:flex-start;background-color:var(--bg-gray0-color);border-radius:var(--radius);display:flex;flex-direction:row;flex-grow:1;flex-shrink:1;height:100%;justify-content:space-between;max-height:45rem;overflow-y:auto;padding:1rem;position:relative;width:100%}.result_candidateCon__x9kyB::-webkit-scrollbar-thumb{border-color:var(--bg-gray0-color)}.result_candidateCol__eoHna{margin-right:1rem;position:relative;width:calc(33.33333% - .66667rem)}.result_candidateCol__eoHna:last-child{margin-right:0}.result_candidateCol__eoHna img{border-radius:var(--radius);cursor:pointer;margin-bottom:.5rem}.result_creatorCon__tIm3e{align-items:flex-end;color:var(--font-gray-color);display:flex;font-size:1.2rem;font-weight:700;justify-content:flex-start;line-height:1.2rem;margin-bottom:1rem;width:100%}.result_creatorInfoCon__pET8h{text-align:left}.result_creatorName__VLTXL{color:var(--font-dark-color);font-size:1.2rem;font-weight:700;line-height:1.8rem}.result_creatorInfo__CkbWU{color:var(--font-gray-color);font-size:1rem;line-height:1.2rem}.result_modelView__Y25w5{background:var(--bg-gray0-color);border-radius:var(--radius);flex-grow:1;flex-shrink:1;height:100%;overflow:hidden;width:100%}.result_modelInfoCon__bXw5O{align-items:center;display:flex;flex-direction:column;justify-content:flex-end;text-align:left}.result_progressInfo__g9iwR{margin-bottom:.5rem;width:100%}.result_progressTrack__I6zDn{background:var(--bg-light-color);border-radius:2px;height:4px;position:relative;width:100%}.result_progressThumb__mbBQj{background-color:var(--theme-color);border-radius:2px;height:4px;left:0;position:absolute;top:0}.result_modelPrompt__DzUbD{background:var(--bg-light-color);border-radius:var(--radius);margin-top:1rem;min-height:3rem;padding:1rem;width:100%}.result_loadingCon__XVvXD,.result_progressCon__O57XA{font-size:14px;position:absolute;top:55%}.result_loadingCon__XVvXD{z-index:-111}.result_icon__dFKnM{height:20px;position:absolute;top:55%}.result_hideModel__3phD0{display:none}.result_descriptionLogin__xi7Yx{text-align:start}.login_con__\+RJgQ{background:#000;box-shadow:-5px 0 20px 0 hsla(0,0%,100%,.2);height:100vh;padding:var(--side-gap);position:fixed;right:0;top:0;z-index:9}.login_close__JulM-{cursor:pointer;-webkit-user-select:none;user-select:none}.welcome_con__o1kmf{align-items:center;background:#121317;border-radius:.5rem;display:flex;flex-direction:column;justify-content:flex-start;padding-bottom:1rem;padding-top:2rem;position:relative;width:45%}.welcome_con__o1kmf>img{position:absolute;top:0;width:100%}.welcome_mainCon__H1gv\+{margin-top:.5rem;z-index:999}.welcome_title__Gd8m4{color:#fff;font-family:Courier New;font-size:5rem;font-weight:700;line-height:5rem}.welcome_ioCon__PQZXU{background-color:#fff;border-radius:1rem;border-style:solid;margin-left:8rem;margin-right:8rem;margin-top:24rem;padding:2rem;width:calc(100% - 
16rem)}.welcome_iptCon__KpWEL{align-items:center;background:#ededf2;border-radius:1rem;display:flex;height:4rem;justify-content:space-between;margin-bottom:2rem;width:100%}.welcome_iptCon__KpWEL>img{height:2rem;margin-right:1rem;position:static;width:2rem}.welcome_ipt__ayi9Z{background:#ededf2;border:none;border-radius:1rem;color:var(--font-dark-color);flex-grow:1;font-size:1rem;height:100%;outline:none;padding:0 2rem}.welcome_ipt__ayi9Z::-webkit-input-placeholder{font-size:1rem}.welcome_ipt__ayi9Z::placeholder{font-size:1rem}.welcome_btnCon__Mx-ta,.welcome_btn__jCuoG{align-items:center;display:flex;justify-content:center}.welcome_btn__jCuoG{border:1px solid #8f8f8f;border-radius:1rem;cursor:pointer;height:3rem;line-height:1rem;-webkit-user-select:none;user-select:none;width:100%}.welcome_btn__jCuoG:last-child{background:#4a00e0;border:none;font-weight:700}.welcome_btn__jCuoG.welcome_disabled__pcSzv{cursor:not-allowed}.welcome_btn__jCuoG:hover{color:#fff} -/*# sourceMappingURL=main.2a935a7f.css.map*/ \ No newline at end of file diff --git a/spaces/DKDohare/Chat-GPT4-MAX/README.md b/spaces/DKDohare/Chat-GPT4-MAX/README.md deleted file mode 100644 index 7938de14e5355209aaae713f289ca469181bbb17..0000000000000000000000000000000000000000 --- a/spaces/DKDohare/Chat-GPT4-MAX/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Chat-with-GPT4 -emoji: 🚀 -colorFrom: red -colorTo: indigo -sdk: gradio -sdk_version: 3.21.0 -app_file: app.py -pinned: false -license: mit -duplicated_from: ysharma/ChatGPT4 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/_webhooks_payload.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/_webhooks_payload.py deleted file mode 100644 index bc9aa7be046065e0521a8654290d520fa3f917dc..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/_webhooks_payload.py +++ /dev/null @@ -1,117 +0,0 @@ -# coding=utf-8 -# Copyright 2023-present, the HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""Contains data structures to parse the webhooks payload.""" -from typing import List, Optional - -from pydantic import BaseModel - -from .utils._typing import Literal - - -# This is an adaptation of the ReportV3 interface implemented in moon-landing. V0, V1 and V2 have been ignored as they -# are not in used anymore. To keep in sync when format is updated in -# https://github.com/huggingface/moon-landing/blob/main/server/lib/HFWebhooks.ts (internal link). 
- - -WebhookEvent_T = Literal[ - "create", - "delete", - "move", - "update", -] -RepoChangeEvent_T = Literal[ - "add", - "move", - "remove", - "update", -] -RepoType_T = Literal[ - "dataset", - "model", - "space", -] -DiscussionStatus_T = Literal[ - "closed", - "draft", - "open", - "merged", -] -SupportedWebhookVersion = Literal[3] - - -class ObjectId(BaseModel): - id: str - - -class WebhookPayloadUrl(BaseModel): - web: str - api: Optional[str] - - -class WebhookPayloadMovedTo(BaseModel): - name: str - owner: ObjectId - - -class WebhookPayloadWebhook(ObjectId): - version: SupportedWebhookVersion - - -class WebhookPayloadEvent(BaseModel): - action: WebhookEvent_T - scope: str - - -class WebhookPayloadDiscussionChanges(BaseModel): - base: str - mergeCommitId: Optional[str] - - -class WebhookPayloadComment(ObjectId): - author: ObjectId - hidden: bool - content: Optional[str] - url: WebhookPayloadUrl - - -class WebhookPayloadDiscussion(ObjectId): - num: int - author: ObjectId - url: WebhookPayloadUrl - title: str - isPullRequest: bool - status: DiscussionStatus_T - changes: Optional[WebhookPayloadDiscussionChanges] - pinned: Optional[bool] - - -class WebhookPayloadRepo(ObjectId): - owner: ObjectId - head_sha: Optional[str] - name: str - private: bool - subdomain: Optional[str] - tags: Optional[List[str]] - type: Literal["dataset", "model", "space"] - url: WebhookPayloadUrl - - -class WebhookPayload(BaseModel): - event: WebhookPayloadEvent - repo: WebhookPayloadRepo - discussion: Optional[WebhookPayloadDiscussion] - comment: Optional[WebhookPayloadComment] - webhook: WebhookPayloadWebhook - movedTo: Optional[WebhookPayloadMovedTo] diff --git a/spaces/DaFujaTyping/hf-Chat-ui/Dockerfile b/spaces/DaFujaTyping/hf-Chat-ui/Dockerfile deleted file mode 100644 index 8276e4aaacb2c140bc099342e178cd480f711c14..0000000000000000000000000000000000000000 --- a/spaces/DaFujaTyping/hf-Chat-ui/Dockerfile +++ /dev/null @@ -1,16 +0,0 @@ -# read the doc: https://huggingface.co/docs/hub/spaces-sdks-docker -# you will also find guides on how best to write your Dockerfile - -FROM node:19 - -RUN npm install -g pm2 - -WORKDIR /app - -COPY --link --chown=1000 . . - -RUN npm i - -RUN --mount=type=secret,id=DOTENV_LOCAL,dst=.env.local npm run build - -CMD pm2 start build/index.js -i $CPU_CORES --no-daemon diff --git a/spaces/Dauzy/whisper-webui/src/modelCache.py b/spaces/Dauzy/whisper-webui/src/modelCache.py deleted file mode 100644 index 680a4b386fc37e17ed2353e72d04a646ece2c4a6..0000000000000000000000000000000000000000 --- a/spaces/Dauzy/whisper-webui/src/modelCache.py +++ /dev/null @@ -1,17 +0,0 @@ -class ModelCache: - def __init__(self): - self._cache = dict() - - def get(self, model_key: str, model_factory): - result = self._cache.get(model_key) - - if result is None: - result = model_factory() - self._cache[model_key] = result - return result - - def clear(self): - self._cache.clear() - -# A global cache of models. This is mainly used by the daemon processes to avoid loading the same model multiple times. 
-GLOBAL_MODEL_CACHE = ModelCache() \ No newline at end of file diff --git a/spaces/DragGan/DragGan-Inversion/stylegan_human/pti/training/projectors/__init__.py b/spaces/DragGan/DragGan-Inversion/stylegan_human/pti/training/projectors/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Duckymalone/dreamlike-art-dreamlike-diffusion-1.0/README.md b/spaces/Duckymalone/dreamlike-art-dreamlike-diffusion-1.0/README.md deleted file mode 100644 index 12b8fe4e6e7de0da86fd3ae4c05f3d39d6b681a6..0000000000000000000000000000000000000000 --- a/spaces/Duckymalone/dreamlike-art-dreamlike-diffusion-1.0/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Dreamlike Art Dreamlike Diffusion 1.0 -emoji: 🚀 -colorFrom: green -colorTo: blue -sdk: gradio -sdk_version: 3.13.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Duskfallcrew/EpicMix_Realism_WebUi/README.md b/spaces/Duskfallcrew/EpicMix_Realism_WebUi/README.md deleted file mode 100644 index b95f0d606dc1c299b2a5138f0c5ce4b9fdc89baf..0000000000000000000000000000000000000000 --- a/spaces/Duskfallcrew/EpicMix_Realism_WebUi/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: EpicMix Realism WebUi -emoji: 🌖 -colorFrom: indigo -colorTo: green -sdk: gradio -sdk_version: 3.10.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - diff --git a/spaces/ECCV2022/bytetrack/exps/example/mot/yolox_x_mix_det.py b/spaces/ECCV2022/bytetrack/exps/example/mot/yolox_x_mix_det.py deleted file mode 100644 index 8013d94558c9e01cfe454778c4bd25231dbec7d8..0000000000000000000000000000000000000000 --- a/spaces/ECCV2022/bytetrack/exps/example/mot/yolox_x_mix_det.py +++ /dev/null @@ -1,138 +0,0 @@ -# encoding: utf-8 -import os -import random -import torch -import torch.nn as nn -import torch.distributed as dist - -from yolox.exp import Exp as MyExp -from yolox.data import get_yolox_datadir - -class Exp(MyExp): - def __init__(self): - super(Exp, self).__init__() - self.num_classes = 1 - self.depth = 1.33 - self.width = 1.25 - self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0] - self.train_ann = "train.json" - self.val_ann = "test.json" # change to train.json when running on training set - self.input_size = (800, 1440) - self.test_size = (800, 1440) - self.random_size = (18, 32) - self.max_epoch = 80 - self.print_interval = 20 - self.eval_interval = 5 - self.test_conf = 0.001 - self.nmsthre = 0.7 - self.no_aug_epochs = 10 - self.basic_lr_per_img = 0.001 / 64.0 - self.warmup_epochs = 1 - - def get_data_loader(self, batch_size, is_distributed, no_aug=False): - from yolox.data import ( - MOTDataset, - TrainTransform, - YoloBatchSampler, - DataLoader, - InfiniteSampler, - MosaicDetection, - ) - - dataset = MOTDataset( - data_dir=os.path.join(get_yolox_datadir(), "mix_det"), - json_file=self.train_ann, - name='', - img_size=self.input_size, - preproc=TrainTransform( - rgb_means=(0.485, 0.456, 0.406), - std=(0.229, 0.224, 0.225), - max_labels=500, - ), - ) - - dataset = MosaicDetection( - dataset, - mosaic=not no_aug, - img_size=self.input_size, - preproc=TrainTransform( - rgb_means=(0.485, 0.456, 0.406), - std=(0.229, 0.224, 0.225), - max_labels=1000, - ), - degrees=self.degrees, - translate=self.translate, - scale=self.scale, - shear=self.shear, - perspective=self.perspective, - 
enable_mixup=self.enable_mixup, - ) - - self.dataset = dataset - - if is_distributed: - batch_size = batch_size // dist.get_world_size() - - sampler = InfiniteSampler( - len(self.dataset), seed=self.seed if self.seed else 0 - ) - - batch_sampler = YoloBatchSampler( - sampler=sampler, - batch_size=batch_size, - drop_last=False, - input_dimension=self.input_size, - mosaic=not no_aug, - ) - - dataloader_kwargs = {"num_workers": self.data_num_workers, "pin_memory": True} - dataloader_kwargs["batch_sampler"] = batch_sampler - train_loader = DataLoader(self.dataset, **dataloader_kwargs) - - return train_loader - - def get_eval_loader(self, batch_size, is_distributed, testdev=False): - from yolox.data import MOTDataset, ValTransform - - valdataset = MOTDataset( - data_dir=os.path.join(get_yolox_datadir(), "mot"), - json_file=self.val_ann, - img_size=self.test_size, - name='test', # change to train when running on training set - preproc=ValTransform( - rgb_means=(0.485, 0.456, 0.406), - std=(0.229, 0.224, 0.225), - ), - ) - - if is_distributed: - batch_size = batch_size // dist.get_world_size() - sampler = torch.utils.data.distributed.DistributedSampler( - valdataset, shuffle=False - ) - else: - sampler = torch.utils.data.SequentialSampler(valdataset) - - dataloader_kwargs = { - "num_workers": self.data_num_workers, - "pin_memory": True, - "sampler": sampler, - } - dataloader_kwargs["batch_size"] = batch_size - val_loader = torch.utils.data.DataLoader(valdataset, **dataloader_kwargs) - - return val_loader - - def get_evaluator(self, batch_size, is_distributed, testdev=False): - from yolox.evaluators import COCOEvaluator - - val_loader = self.get_eval_loader(batch_size, is_distributed, testdev=testdev) - evaluator = COCOEvaluator( - dataloader=val_loader, - img_size=self.test_size, - confthre=self.test_conf, - nmsthre=self.nmsthre, - num_classes=self.num_classes, - testdev=testdev, - ) - return evaluator diff --git a/spaces/ECCV2022/bytetrack/yolox/models/network_blocks.py b/spaces/ECCV2022/bytetrack/yolox/models/network_blocks.py deleted file mode 100644 index 4bdb2ca731a07aa9e5e6b68c652467f28fe96079..0000000000000000000000000000000000000000 --- a/spaces/ECCV2022/bytetrack/yolox/models/network_blocks.py +++ /dev/null @@ -1,210 +0,0 @@ -#!/usr/bin/env python -# -*- encoding: utf-8 -*- -# Copyright (c) 2014-2021 Megvii Inc. All rights reserved. 
- -import torch -import torch.nn as nn - - -class SiLU(nn.Module): - """export-friendly version of nn.SiLU()""" - - @staticmethod - def forward(x): - return x * torch.sigmoid(x) - - -def get_activation(name="silu", inplace=True): - if name == "silu": - module = nn.SiLU(inplace=inplace) - elif name == "relu": - module = nn.ReLU(inplace=inplace) - elif name == "lrelu": - module = nn.LeakyReLU(0.1, inplace=inplace) - else: - raise AttributeError("Unsupported act type: {}".format(name)) - return module - - -class BaseConv(nn.Module): - """A Conv2d -> Batchnorm -> silu/leaky relu block""" - - def __init__( - self, in_channels, out_channels, ksize, stride, groups=1, bias=False, act="silu" - ): - super().__init__() - # same padding - pad = (ksize - 1) // 2 - self.conv = nn.Conv2d( - in_channels, - out_channels, - kernel_size=ksize, - stride=stride, - padding=pad, - groups=groups, - bias=bias, - ) - self.bn = nn.BatchNorm2d(out_channels) - self.act = get_activation(act, inplace=True) - - def forward(self, x): - return self.act(self.bn(self.conv(x))) - - def fuseforward(self, x): - return self.act(self.conv(x)) - - -class DWConv(nn.Module): - """Depthwise Conv + Conv""" - - def __init__(self, in_channels, out_channels, ksize, stride=1, act="silu"): - super().__init__() - self.dconv = BaseConv( - in_channels, - in_channels, - ksize=ksize, - stride=stride, - groups=in_channels, - act=act, - ) - self.pconv = BaseConv( - in_channels, out_channels, ksize=1, stride=1, groups=1, act=act - ) - - def forward(self, x): - x = self.dconv(x) - return self.pconv(x) - - -class Bottleneck(nn.Module): - # Standard bottleneck - def __init__( - self, - in_channels, - out_channels, - shortcut=True, - expansion=0.5, - depthwise=False, - act="silu", - ): - super().__init__() - hidden_channels = int(out_channels * expansion) - Conv = DWConv if depthwise else BaseConv - self.conv1 = BaseConv(in_channels, hidden_channels, 1, stride=1, act=act) - self.conv2 = Conv(hidden_channels, out_channels, 3, stride=1, act=act) - self.use_add = shortcut and in_channels == out_channels - - def forward(self, x): - y = self.conv2(self.conv1(x)) - if self.use_add: - y = y + x - return y - - -class ResLayer(nn.Module): - "Residual layer with `in_channels` inputs." 
- - def __init__(self, in_channels: int): - super().__init__() - mid_channels = in_channels // 2 - self.layer1 = BaseConv( - in_channels, mid_channels, ksize=1, stride=1, act="lrelu" - ) - self.layer2 = BaseConv( - mid_channels, in_channels, ksize=3, stride=1, act="lrelu" - ) - - def forward(self, x): - out = self.layer2(self.layer1(x)) - return x + out - - -class SPPBottleneck(nn.Module): - """Spatial pyramid pooling layer used in YOLOv3-SPP""" - - def __init__( - self, in_channels, out_channels, kernel_sizes=(5, 9, 13), activation="silu" - ): - super().__init__() - hidden_channels = in_channels // 2 - self.conv1 = BaseConv(in_channels, hidden_channels, 1, stride=1, act=activation) - self.m = nn.ModuleList( - [ - nn.MaxPool2d(kernel_size=ks, stride=1, padding=ks // 2) - for ks in kernel_sizes - ] - ) - conv2_channels = hidden_channels * (len(kernel_sizes) + 1) - self.conv2 = BaseConv(conv2_channels, out_channels, 1, stride=1, act=activation) - - def forward(self, x): - x = self.conv1(x) - x = torch.cat([x] + [m(x) for m in self.m], dim=1) - x = self.conv2(x) - return x - - -class CSPLayer(nn.Module): - """C3 in yolov5, CSP Bottleneck with 3 convolutions""" - - def __init__( - self, - in_channels, - out_channels, - n=1, - shortcut=True, - expansion=0.5, - depthwise=False, - act="silu", - ): - """ - Args: - in_channels (int): input channels. - out_channels (int): output channels. - n (int): number of Bottlenecks. Default value: 1. - """ - # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__() - hidden_channels = int(out_channels * expansion) # hidden channels - self.conv1 = BaseConv(in_channels, hidden_channels, 1, stride=1, act=act) - self.conv2 = BaseConv(in_channels, hidden_channels, 1, stride=1, act=act) - self.conv3 = BaseConv(2 * hidden_channels, out_channels, 1, stride=1, act=act) - module_list = [ - Bottleneck( - hidden_channels, hidden_channels, shortcut, 1.0, depthwise, act=act - ) - for _ in range(n) - ] - self.m = nn.Sequential(*module_list) - - def forward(self, x): - x_1 = self.conv1(x) - x_2 = self.conv2(x) - x_1 = self.m(x_1) - x = torch.cat((x_1, x_2), dim=1) - return self.conv3(x) - - -class Focus(nn.Module): - """Focus width and height information into channel space.""" - - def __init__(self, in_channels, out_channels, ksize=1, stride=1, act="silu"): - super().__init__() - self.conv = BaseConv(in_channels * 4, out_channels, ksize, stride, act=act) - - def forward(self, x): - # shape of x (b,c,w,h) -> y(b,4c,w/2,h/2) - patch_top_left = x[..., ::2, ::2] - patch_top_right = x[..., ::2, 1::2] - patch_bot_left = x[..., 1::2, ::2] - patch_bot_right = x[..., 1::2, 1::2] - x = torch.cat( - ( - patch_top_left, - patch_bot_left, - patch_top_right, - patch_bot_right, - ), - dim=1, - ) - return self.conv(x) diff --git a/spaces/EDGAhab/Paimon-Talking/models.py b/spaces/EDGAhab/Paimon-Talking/models.py deleted file mode 100644 index f5acdeb2bedd47897348407c0ae55c9a160da881..0000000000000000000000000000000000000000 --- a/spaces/EDGAhab/Paimon-Talking/models.py +++ /dev/null @@ -1,534 +0,0 @@ -import copy -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from commons import init_weights, get_padding - - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, 
p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2]) - logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_2 = 
modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - - self.emb = nn.Embedding(n_vocab, hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths): - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + 
torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)), - k, u, padding=(k-u)//2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel//(2**(i+1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i*self.num_kernels+j](x) - else: - xs += self.resblocks[i*self.num_kernels+j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 
64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=0, - gin_channels=0, - use_sdp=True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - - self.use_sdp = use_sdp - - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - if use_sdp: - self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - else: - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers > 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid=None): - - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - 
z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - if self.use_sdp: - l_length = self.dp(x, x_mask, w, g=g) - l_length = l_length / torch.sum(x_mask) - else: - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None): - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - if self.use_sdp: - logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) - else: - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:,:,:max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) - - def voice_conversion(self, y, y_lengths, sid_src, sid_tgt): - assert self.n_speakers > 0, "n_speakers have to be larger than 0." 
- g_src = self.emb_g(sid_src).unsqueeze(-1) - g_tgt = self.emb_g(sid_tgt).unsqueeze(-1) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src) - z_p = self.flow(z, y_mask, g=g_src) - z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True) - o_hat = self.dec(z_hat * y_mask, g=g_tgt) - return o_hat, y_mask, (z, z_p, z_hat) - diff --git a/spaces/Ekimetrics/Biomap/biomap/dataset_generator/__init__.py b/spaces/Ekimetrics/Biomap/biomap/dataset_generator/__init__.py deleted file mode 100644 index c2a235adcb65666fad8e16b7466c1f2d170a3b5a..0000000000000000000000000000000000000000 --- a/spaces/Ekimetrics/Biomap/biomap/dataset_generator/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -from .data_loader import DataLoader - - -__all__ = [ - 'DataLoader', -] \ No newline at end of file diff --git a/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/taming/lr_scheduler.py b/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/taming/lr_scheduler.py deleted file mode 100644 index e598ed120159c53da6820a55ad86b89f5c70c82d..0000000000000000000000000000000000000000 --- a/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/taming/lr_scheduler.py +++ /dev/null @@ -1,34 +0,0 @@ -import numpy as np - - -class LambdaWarmUpCosineScheduler: - """ - note: use with a base_lr of 1.0 - """ - def __init__(self, warm_up_steps, lr_min, lr_max, lr_start, max_decay_steps, verbosity_interval=0): - self.lr_warm_up_steps = warm_up_steps - self.lr_start = lr_start - self.lr_min = lr_min - self.lr_max = lr_max - self.lr_max_decay_steps = max_decay_steps - self.last_lr = 0. - self.verbosity_interval = verbosity_interval - - def schedule(self, n): - if self.verbosity_interval > 0: - if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_lr}") - if n < self.lr_warm_up_steps: - lr = (self.lr_max - self.lr_start) / self.lr_warm_up_steps * n + self.lr_start - self.last_lr = lr - return lr - else: - t = (n - self.lr_warm_up_steps) / (self.lr_max_decay_steps - self.lr_warm_up_steps) - t = min(t, 1.0) - lr = self.lr_min + 0.5 * (self.lr_max - self.lr_min) * ( - 1 + np.cos(t * np.pi)) - self.last_lr = lr - return lr - - def __call__(self, n): - return self.schedule(n) - diff --git a/spaces/EricLam/yamatohome/app.py b/spaces/EricLam/yamatohome/app.py deleted file mode 100644 index 4f17aff3fca834a6a52c77fb9281e03486f0c35e..0000000000000000000000000000000000000000 --- a/spaces/EricLam/yamatohome/app.py +++ /dev/null @@ -1,49 +0,0 @@ -#libraries -import gradio as gr -from gradio.mix import Parallel - -#variables, functions and parameters -model1 = gr.Interface.load("huggingface/gpt2") -model2 = gr.Interface.load("huggingface/EleutherAI/gpt-j-6B") -model3 = gr.Interface.load("huggingface/EleutherAI/gpt-neo-1.3B") - -#functions, parameters and variables -gr.Parallel(model1, model2, model3).launch() - -import gradio as gr -from gradio import inputs -from gradio.inputs import Textbox -from gradio import outputs -from transformers import pipeline - -title = "Next Sentence Generator" -description = "Try this text generator!" 
-examples = [ - ["Zoe Kwan is a 20-year old singer and songwriter who has taken Hong Kong’s music scene by storm."], - ["Zoe’s big break came when the godfather of Cantopop Sam Hui stumbled upon a YouTube video of Zoe singing."] -] -generator2 = gr.Interface.load("huggingface/EleutherAI/gpt-neo-2.7B") -generator3 = gr.Interface.load("huggingface/EleutherAI/gpt-j-6B") -generator1 = gr.Interface.load("huggingface/gpt2-large") - -gr.Parallel(generator1, generator2, generator3, inputs=gr.inputs.Textbox(lines=5, label="Enter a sentence to get another sentence."), title=title, description=description, examples=examples).launch(share=False, enable_queue=True) - -import gradio as gr - -api = gr.Interface.load("huggingface/EleutherAI/gpt-j-6B") - - -def complete_with_gpt(text): - # Use the last 50 characters of the text as context - return text[:-50] + api(text[-50:]) - - -with gr.Blocks() as demo: - with gr.Row(): - textbox = gr.Textbox(placeholder="Type here and press enter...", lines=8) - with gr.Column(): - btn = gr.Button("Generate") - - btn.click(complete_with_gpt, textbox, textbox) - -demo.launch() diff --git a/spaces/Felix123456/bingo/src/lib/isomorphic/index.ts b/spaces/Felix123456/bingo/src/lib/isomorphic/index.ts deleted file mode 100644 index 738dc92f74079ab762d584fb7422a8c8c3b61547..0000000000000000000000000000000000000000 --- a/spaces/Felix123456/bingo/src/lib/isomorphic/index.ts +++ /dev/null @@ -1,17 +0,0 @@ -'use client' - -import Default from './browser' - -let exportsModel: any = {} - -if (process.browser) { - Object.assign(exportsModel, require('./browser').default) -} else { - Object.assign(exportsModel, require('./node').default) -} - -export default exportsModel! as typeof Default - -export const fetch: typeof Default.fetch = exportsModel!.fetch -export const WebSocket: typeof Default.WebSocket = exportsModel!.WebSocket -export const debug: typeof Default.debug = exportsModel!.debug diff --git a/spaces/FelixLuoX/codeformer/CodeFormer/basicsr/ops/fused_act/__init__.py b/spaces/FelixLuoX/codeformer/CodeFormer/basicsr/ops/fused_act/__init__.py deleted file mode 100644 index 241dc0754fae7d88dbbd9a02e665ca30a73c7422..0000000000000000000000000000000000000000 --- a/spaces/FelixLuoX/codeformer/CodeFormer/basicsr/ops/fused_act/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -from .fused_act import FusedLeakyReLU, fused_leaky_relu - -__all__ = ['FusedLeakyReLU', 'fused_leaky_relu'] diff --git a/spaces/FelixLuoX/stable_diffusion_test/footer.html b/spaces/FelixLuoX/stable_diffusion_test/footer.html deleted file mode 100644 index b58ca8b79cc930a56952881f4922bda406fd3581..0000000000000000000000000000000000000000 --- a/spaces/FelixLuoX/stable_diffusion_test/footer.html +++ /dev/null @@ -1,18 +0,0 @@ - - - diff --git a/spaces/Ferion/image-matting-app/ppmatting/models/modnet.py b/spaces/Ferion/image-matting-app/ppmatting/models/modnet.py deleted file mode 100644 index ecadfdd1a1710980e36a23bc82717e3081ad64e9..0000000000000000000000000000000000000000 --- a/spaces/Ferion/image-matting-app/ppmatting/models/modnet.py +++ /dev/null @@ -1,494 +0,0 @@ -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-# See the License for the specific language governing permissions and -# limitations under the License. - -from collections import defaultdict - -import paddle -import paddle.nn as nn -import paddle.nn.functional as F -import numpy as np -import scipy -import paddleseg -from paddleseg.models import layers, losses -from paddleseg import utils -from paddleseg.cvlibs import manager, param_init - - -@manager.MODELS.add_component -class MODNet(nn.Layer): - """ - The MODNet implementation based on PaddlePaddle. - - The original article refers to - Zhanghan Ke, et, al. "Is a Green Screen Really Necessary for Real-Time Portrait Matting?" - (https://arxiv.org/pdf/2011.11961.pdf). - - Args: - backbone: backbone model. - hr(int, optional): The channels of high resolutions branch. Defautl: None. - pretrained(str, optional): The path of pretrianed model. Defautl: None. - - """ - - def __init__(self, backbone, hr_channels=32, pretrained=None): - super().__init__() - self.backbone = backbone - self.pretrained = pretrained - self.head = MODNetHead( - hr_channels=hr_channels, backbone_channels=backbone.feat_channels) - self.init_weight() - self.blurer = GaussianBlurLayer(1, 3) - self.loss_func_dict = None - - def forward(self, inputs): - """ - If training, return a dict. - If evaluation, return the final alpha prediction. - """ - x = inputs['img'] - feat_list = self.backbone(x) - y = self.head(inputs=inputs, feat_list=feat_list) - if self.training: - loss = self.loss(y, inputs) - return y, loss - else: - return y - - def loss(self, logit_dict, label_dict, loss_func_dict=None): - if loss_func_dict is None: - if self.loss_func_dict is None: - self.loss_func_dict = defaultdict(list) - self.loss_func_dict['semantic'].append(paddleseg.models.MSELoss( - )) - self.loss_func_dict['detail'].append(paddleseg.models.L1Loss()) - self.loss_func_dict['fusion'].append(paddleseg.models.L1Loss()) - self.loss_func_dict['fusion'].append(paddleseg.models.L1Loss()) - else: - self.loss_func_dict = loss_func_dict - - loss = {} - # semantic loss - semantic_gt = F.interpolate( - label_dict['alpha'], - scale_factor=1 / 16, - mode='bilinear', - align_corners=False) - semantic_gt = self.blurer(semantic_gt) - # semantic_gt.stop_gradient=True - loss['semantic'] = self.loss_func_dict['semantic'][0]( - logit_dict['semantic'], semantic_gt) - - # detail loss - trimap = label_dict['trimap'] - mask = (trimap == 128).astype('float32') - logit_detail = logit_dict['detail'] * mask - label_detail = label_dict['alpha'] * mask - loss_detail = self.loss_func_dict['detail'][0](logit_detail, - label_detail) - loss_detail = loss_detail / (mask.mean() + 1e-6) - loss['detail'] = 10 * loss_detail - - # fusion loss - matte = logit_dict['matte'] - alpha = label_dict['alpha'] - transition_mask = label_dict['trimap'] == 128 - matte_boundary = paddle.where(transition_mask, matte, alpha) - # l1 loss - loss_fusion_l1 = self.loss_func_dict['fusion'][0]( - matte, alpha) + 4 * self.loss_func_dict['fusion'][0](matte_boundary, - alpha) - # composition loss - loss_fusion_comp = self.loss_func_dict['fusion'][1]( - matte * label_dict['img'], alpha * - label_dict['img']) + 4 * self.loss_func_dict['fusion'][1]( - matte_boundary * label_dict['img'], alpha * label_dict['img']) - # consisten loss with semantic - transition_mask = F.interpolate( - label_dict['trimap'], - scale_factor=1 / 16, - mode='nearest', - align_corners=False) - transition_mask = transition_mask == 128 - matte_con_sem = F.interpolate( - matte, scale_factor=1 / 16, mode='bilinear', align_corners=False) 
- matte_con_sem = self.blurer(matte_con_sem) - logit_semantic = logit_dict['semantic'].clone() - logit_semantic.stop_gradient = True - matte_con_sem = paddle.where(transition_mask, logit_semantic, - matte_con_sem) - if False: - import cv2 - matte_con_sem_num = matte_con_sem.numpy() - matte_con_sem_num = matte_con_sem_num[0].squeeze() - matte_con_sem_num = (matte_con_sem_num * 255).astype('uint8') - semantic = logit_dict['semantic'].numpy() - semantic = semantic[0].squeeze() - semantic = (semantic * 255).astype('uint8') - transition_mask = transition_mask.astype('uint8') - transition_mask = transition_mask.numpy() - transition_mask = (transition_mask[0].squeeze()) * 255 - cv2.imwrite('matte_con.png', matte_con_sem_num) - cv2.imwrite('semantic.png', semantic) - cv2.imwrite('transition.png', transition_mask) - mse_loss = paddleseg.models.MSELoss() - loss_fusion_con_sem = mse_loss(matte_con_sem, logit_dict['semantic']) - loss_fusion = loss_fusion_l1 + loss_fusion_comp + loss_fusion_con_sem - loss['fusion'] = loss_fusion - loss['fusion_l1'] = loss_fusion_l1 - loss['fusion_comp'] = loss_fusion_comp - loss['fusion_con_sem'] = loss_fusion_con_sem - - loss['all'] = loss['semantic'] + loss['detail'] + loss['fusion'] - - return loss - - def init_weight(self): - if self.pretrained is not None: - utils.load_entire_model(self, self.pretrained) - - -class MODNetHead(nn.Layer): - def __init__(self, hr_channels, backbone_channels): - super().__init__() - - self.lr_branch = LRBranch(backbone_channels) - self.hr_branch = HRBranch(hr_channels, backbone_channels) - self.f_branch = FusionBranch(hr_channels, backbone_channels) - self.init_weight() - - def forward(self, inputs, feat_list): - pred_semantic, lr8x, [enc2x, enc4x] = self.lr_branch(feat_list) - pred_detail, hr2x = self.hr_branch(inputs['img'], enc2x, enc4x, lr8x) - pred_matte = self.f_branch(inputs['img'], lr8x, hr2x) - - if self.training: - logit_dict = { - 'semantic': pred_semantic, - 'detail': pred_detail, - 'matte': pred_matte - } - return logit_dict - else: - return pred_matte - - def init_weight(self): - for layer in self.sublayers(): - if isinstance(layer, nn.Conv2D): - param_init.kaiming_uniform(layer.weight) - - -class FusionBranch(nn.Layer): - def __init__(self, hr_channels, enc_channels): - super().__init__() - self.conv_lr4x = Conv2dIBNormRelu( - enc_channels[2], hr_channels, 5, stride=1, padding=2) - - self.conv_f2x = Conv2dIBNormRelu( - 2 * hr_channels, hr_channels, 3, stride=1, padding=1) - self.conv_f = nn.Sequential( - Conv2dIBNormRelu( - hr_channels + 3, int(hr_channels / 2), 3, stride=1, padding=1), - Conv2dIBNormRelu( - int(hr_channels / 2), - 1, - 1, - stride=1, - padding=0, - with_ibn=False, - with_relu=False)) - - def forward(self, img, lr8x, hr2x): - lr4x = F.interpolate( - lr8x, scale_factor=2, mode='bilinear', align_corners=False) - lr4x = self.conv_lr4x(lr4x) - lr2x = F.interpolate( - lr4x, scale_factor=2, mode='bilinear', align_corners=False) - - f2x = self.conv_f2x(paddle.concat((lr2x, hr2x), axis=1)) - f = F.interpolate( - f2x, scale_factor=2, mode='bilinear', align_corners=False) - f = self.conv_f(paddle.concat((f, img), axis=1)) - pred_matte = F.sigmoid(f) - - return pred_matte - - -class HRBranch(nn.Layer): - """ - High Resolution Branch of MODNet - """ - - def __init__(self, hr_channels, enc_channels): - super().__init__() - - self.tohr_enc2x = Conv2dIBNormRelu( - enc_channels[0], hr_channels, 1, stride=1, padding=0) - self.conv_enc2x = Conv2dIBNormRelu( - hr_channels + 3, hr_channels, 3, stride=2, padding=1) - - 
self.tohr_enc4x = Conv2dIBNormRelu( - enc_channels[1], hr_channels, 1, stride=1, padding=0) - self.conv_enc4x = Conv2dIBNormRelu( - 2 * hr_channels, 2 * hr_channels, 3, stride=1, padding=1) - - self.conv_hr4x = nn.Sequential( - Conv2dIBNormRelu( - 2 * hr_channels + enc_channels[2] + 3, - 2 * hr_channels, - 3, - stride=1, - padding=1), - Conv2dIBNormRelu( - 2 * hr_channels, 2 * hr_channels, 3, stride=1, padding=1), - Conv2dIBNormRelu( - 2 * hr_channels, hr_channels, 3, stride=1, padding=1)) - - self.conv_hr2x = nn.Sequential( - Conv2dIBNormRelu( - 2 * hr_channels, 2 * hr_channels, 3, stride=1, padding=1), - Conv2dIBNormRelu( - 2 * hr_channels, hr_channels, 3, stride=1, padding=1), - Conv2dIBNormRelu( - hr_channels, hr_channels, 3, stride=1, padding=1), - Conv2dIBNormRelu( - hr_channels, hr_channels, 3, stride=1, padding=1)) - - self.conv_hr = nn.Sequential( - Conv2dIBNormRelu( - hr_channels + 3, hr_channels, 3, stride=1, padding=1), - Conv2dIBNormRelu( - hr_channels, - 1, - 1, - stride=1, - padding=0, - with_ibn=False, - with_relu=False)) - - def forward(self, img, enc2x, enc4x, lr8x): - img2x = F.interpolate( - img, scale_factor=1 / 2, mode='bilinear', align_corners=False) - img4x = F.interpolate( - img, scale_factor=1 / 4, mode='bilinear', align_corners=False) - - enc2x = self.tohr_enc2x(enc2x) - hr4x = self.conv_enc2x(paddle.concat((img2x, enc2x), axis=1)) - - enc4x = self.tohr_enc4x(enc4x) - hr4x = self.conv_enc4x(paddle.concat((hr4x, enc4x), axis=1)) - - lr4x = F.interpolate( - lr8x, scale_factor=2, mode='bilinear', align_corners=False) - hr4x = self.conv_hr4x(paddle.concat((hr4x, lr4x, img4x), axis=1)) - - hr2x = F.interpolate( - hr4x, scale_factor=2, mode='bilinear', align_corners=False) - hr2x = self.conv_hr2x(paddle.concat((hr2x, enc2x), axis=1)) - - pred_detail = None - if self.training: - hr = F.interpolate( - hr2x, scale_factor=2, mode='bilinear', align_corners=False) - hr = self.conv_hr(paddle.concat((hr, img), axis=1)) - pred_detail = F.sigmoid(hr) - - return pred_detail, hr2x - - -class LRBranch(nn.Layer): - def __init__(self, backbone_channels): - super().__init__() - self.se_block = SEBlock(backbone_channels[4], reduction=4) - self.conv_lr16x = Conv2dIBNormRelu( - backbone_channels[4], backbone_channels[3], 5, stride=1, padding=2) - self.conv_lr8x = Conv2dIBNormRelu( - backbone_channels[3], backbone_channels[2], 5, stride=1, padding=2) - self.conv_lr = Conv2dIBNormRelu( - backbone_channels[2], - 1, - 3, - stride=2, - padding=1, - with_ibn=False, - with_relu=False) - - def forward(self, feat_list): - enc2x, enc4x, enc32x = feat_list[0], feat_list[1], feat_list[4] - - enc32x = self.se_block(enc32x) - lr16x = F.interpolate( - enc32x, scale_factor=2, mode='bilinear', align_corners=False) - lr16x = self.conv_lr16x(lr16x) - lr8x = F.interpolate( - lr16x, scale_factor=2, mode='bilinear', align_corners=False) - lr8x = self.conv_lr8x(lr8x) - - pred_semantic = None - if self.training: - lr = self.conv_lr(lr8x) - pred_semantic = F.sigmoid(lr) - - return pred_semantic, lr8x, [enc2x, enc4x] - - -class IBNorm(nn.Layer): - """ - Combine Instance Norm and Batch Norm into One Layer - """ - - def __init__(self, in_channels): - super().__init__() - self.bnorm_channels = in_channels // 2 - self.inorm_channels = in_channels - self.bnorm_channels - - self.bnorm = nn.BatchNorm2D(self.bnorm_channels) - self.inorm = nn.InstanceNorm2D(self.inorm_channels) - - def forward(self, x): - bn_x = self.bnorm(x[:, :self.bnorm_channels, :, :]) - in_x = self.inorm(x[:, self.bnorm_channels:, :, :]) - - return 
paddle.concat((bn_x, in_x), 1) - - -class Conv2dIBNormRelu(nn.Layer): - """ - Convolution + IBNorm + Relu - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - bias_attr=None, - with_ibn=True, - with_relu=True): - - super().__init__() - - layers = [ - nn.Conv2D( - in_channels, - out_channels, - kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - groups=groups, - bias_attr=bias_attr) - ] - - if with_ibn: - layers.append(IBNorm(out_channels)) - - if with_relu: - layers.append(nn.ReLU()) - - self.layers = nn.Sequential(*layers) - - def forward(self, x): - return self.layers(x) - - -class SEBlock(nn.Layer): - """ - SE Block Proposed in https://arxiv.org/pdf/1709.01507.pdf - """ - - def __init__(self, num_channels, reduction=1): - super().__init__() - self.pool = nn.AdaptiveAvgPool2D(1) - self.conv = nn.Sequential( - nn.Conv2D( - num_channels, - int(num_channels // reduction), - 1, - bias_attr=False), - nn.ReLU(), - nn.Conv2D( - int(num_channels // reduction), - num_channels, - 1, - bias_attr=False), - nn.Sigmoid()) - - def forward(self, x): - w = self.pool(x) - w = self.conv(w) - return w * x - - -class GaussianBlurLayer(nn.Layer): - """ Add Gaussian Blur to a 4D tensors - This layer takes a 4D tensor of {N, C, H, W} as input. - The Gaussian blur will be performed in given channel number (C) splitly. - """ - - def __init__(self, channels, kernel_size): - """ - Args: - channels (int): Channel for input tensor - kernel_size (int): Size of the kernel used in blurring - """ - - super(GaussianBlurLayer, self).__init__() - self.channels = channels - self.kernel_size = kernel_size - assert self.kernel_size % 2 != 0 - - self.op = nn.Sequential( - nn.Pad2D( - int(self.kernel_size / 2), mode='reflect'), - nn.Conv2D( - channels, - channels, - self.kernel_size, - stride=1, - padding=0, - bias_attr=False, - groups=channels)) - - self._init_kernel() - self.op[1].weight.stop_gradient = True - - def forward(self, x): - """ - Args: - x (paddle.Tensor): input 4D tensor - Returns: - paddle.Tensor: Blurred version of the input - """ - - if not len(list(x.shape)) == 4: - print('\'GaussianBlurLayer\' requires a 4D tensor as input\n') - exit() - elif not x.shape[1] == self.channels: - print('In \'GaussianBlurLayer\', the required channel ({0}) is' - 'not the same as input ({1})\n'.format(self.channels, x.shape[ - 1])) - exit() - - return self.op(x) - - def _init_kernel(self): - sigma = 0.3 * ((self.kernel_size - 1) * 0.5 - 1) + 0.8 - - n = np.zeros((self.kernel_size, self.kernel_size)) - i = int(self.kernel_size / 2) - n[i, i] = 1 - kernel = scipy.ndimage.gaussian_filter(n, sigma) - kernel = kernel.astype('float32') - kernel = kernel[np.newaxis, np.newaxis, :, :] - paddle.assign(kernel, self.op[1].weight) diff --git a/spaces/FrankZxShen/vits-fast-finetuning-umamusume/app.py b/spaces/FrankZxShen/vits-fast-finetuning-umamusume/app.py deleted file mode 100644 index 5e07f2a38901623cec2f5d76c800af21c16d4d31..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/vits-fast-finetuning-umamusume/app.py +++ /dev/null @@ -1,291 +0,0 @@ -import re -import os -import numpy as np -import torch -from torch import no_grad, LongTensor -import argparse -import commons -from mel_processing import spectrogram_torch -import utils -from models import SynthesizerTrn -import gradio as gr -import librosa -import webbrowser - -from text import text_to_sequence, _clean_text -device = "cuda:0" if torch.cuda.is_available() else "cpu" 
-language_marks = { - "Japanese": "", - "日本語": "[JA]", - "简体中文": "[ZH]", - "English": "[EN]", - "Mix": "", -} - - -def get_text(text, hps, is_symbol): - text_norm = text_to_sequence( - text, hps.symbols, [] if is_symbol else hps.data.text_cleaners) - if hps.data.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = LongTensor(text_norm) - return text_norm - - -def create_tts_fn(model, hps, speaker_ids): - def tts_fn(text, speaker, language, ns, nsw, speed, is_symbol): - if language is not None: - text = language_marks[language] + text + language_marks[language] - speaker_id = speaker_ids[speaker] - stn_tst = get_text(text, hps, is_symbol) - with no_grad(): - x_tst = stn_tst.unsqueeze(0).to(device) - x_tst_lengths = LongTensor([stn_tst.size(0)]).to(device) - sid = LongTensor([speaker_id]).to(device) - audio = model.infer(x_tst, x_tst_lengths, sid=sid, noise_scale=ns, noise_scale_w=nsw, - length_scale=1.0 / speed)[0][0, 0].data.cpu().float().numpy() - del stn_tst, x_tst, x_tst_lengths, sid - return "Success", (hps.data.sampling_rate, audio) - - return tts_fn - - -def create_vc_fn(model, hps, speaker_ids): - def vc_fn(original_speaker, target_speaker, record_audio, upload_audio): - input_audio = record_audio if record_audio is not None else upload_audio - if input_audio is None: - return "You need to record or upload an audio", None - sampling_rate, audio = input_audio - original_speaker_id = speaker_ids[original_speaker] - target_speaker_id = speaker_ids[target_speaker] - - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != hps.data.sampling_rate: - audio = librosa.resample( - audio, orig_sr=sampling_rate, target_sr=hps.data.sampling_rate) - with no_grad(): - y = torch.FloatTensor(audio) - y = y / max(-y.min(), y.max()) / 0.99 - y = y.to(device) - y = y.unsqueeze(0) - spec = spectrogram_torch(y, hps.data.filter_length, - hps.data.sampling_rate, hps.data.hop_length, hps.data.win_length, - center=False).to(device) - spec_lengths = LongTensor([spec.size(-1)]).to(device) - sid_src = LongTensor([original_speaker_id]).to(device) - sid_tgt = LongTensor([target_speaker_id]).to(device) - audio = model.voice_conversion(spec, spec_lengths, sid_src=sid_src, sid_tgt=sid_tgt)[0][ - 0, 0].data.cpu().float().numpy() - del y, spec, spec_lengths, sid_src, sid_tgt - return "Success", (hps.data.sampling_rate, audio) - - return vc_fn - -def search_speaker(search_value): - for s in speakers: - if search_value == s: - return s - for s in speakers: - if search_value in s: - return s - -def get_text(text, hps, is_symbol): - text_norm = text_to_sequence( - text, hps.symbols, [] if is_symbol else hps.data.text_cleaners) - if hps.data.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = LongTensor(text_norm) - return text_norm - - -def create_to_symbol_fn(hps): - def to_symbol_fn(is_symbol_input, input_text, temp_text): - return (_clean_text(input_text, hps.data.text_cleaners), input_text) if is_symbol_input \ - else (temp_text, temp_text) - - return to_symbol_fn - - -models_info = [ - { - "languages": ['日本語', '简体中文', 'English', 'Mix'], - "description": """ - 这个模型包含赛马娘的116名角色,能合成中日英三语。\n\n - 若需要在同一个句子中混合多种语言,使用相应的语言标记包裹句子。 (日语用[JA], 中文用[ZH], 英文用[EN]),参考Examples中的示例。 - """, - "model_path": "./models/G_15800.pth", - "config_path": "./configs/modified_finetune_speaker.json", - "examples": [['私、必ず強くなりますっ。', '特别周', '日本語', 1, False], - ['私も自信を持ってこの走りを貫けます。', '无声铃鹿', '日本語', 1, 
False], - ['无论做什么事情都要全力以赴!', '大和赤骥', '简体中文', 1, False], - ['Can you tell me how much the shirt is?', - '目白麦昆', 'English', 1, False], - ['[EN]Excuse me?[EN][JA]お帰りなさい,お兄様![JA]', '草上飞', 'Mix', 1, False]], - } -] - -models_tts = [] -models_vc = [] -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--share", action="store_true", - default=False, help="share gradio app") - args = parser.parse_args() - categories = ["Umamusume"] - others = { - "Princess Connect! Re:Dive": "https://huggingface.co/spaces/FrankZxShen/vits-fast-finetuning-pcr", - "Blue Archive": "https://huggingface.co/spaces/FrankZxShen/vits-fast-fineturning-models-ba", - } - for info in models_info: - lang = info['languages'] - examples = info['examples'] - config_path = info['config_path'] - model_path = info['model_path'] - description = info['description'] - hps = utils.get_hparams_from_file(config_path) - - net_g = SynthesizerTrn( - len(hps.symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - **hps.model).to(device) - _ = net_g.eval() - - _ = utils.load_checkpoint(model_path, net_g, None) - speaker_ids = hps.speakers - speakers = list(hps.speakers.keys()) - models_tts.append((description, speakers, lang, examples, - hps.symbols, create_tts_fn(net_g, hps, speaker_ids), - create_to_symbol_fn(hps))) - models_vc.append( - (description, speakers, create_vc_fn(net_g, hps, speaker_ids))) - - app = gr.Blocks() - with app: - gr.Markdown( - "#
vits-fast-fineturning-models-umamusume\n" - "##
Please do not generate content that could infringe upon the rights of, or cause harm to, individuals or organizations.\n" - "##
请不要生成会对个人以及组织造成侵害的内容\n\n" - "[![image](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1pn1xnFfdLK63gVXDwV4zCXfVeo8c-I-0?usp=sharing)\n\n" - "[![Duplicate this Space](https://huggingface.co/datasets/huggingface/badges/raw/main/duplicate-this-space-sm-dark.svg)](https://huggingface.co/spaces/FrankZxShen/vits-fast-finetuning-umamusume?duplicate=true)\n\n" - "[![Finetune your own model](https://badgen.net/badge/icon/github?icon=github&label=Finetune%20your%20own%20model)](https://github.com/Plachtaa/VITS-fast-fine-tuning)" - ) - gr.Markdown("# TTS&Voice Conversion for Umamusume\n\n" - ) - with gr.Tabs(): - for category in categories: - with gr.TabItem(category): - with gr.Tab("TTS"): - for i, (description, speakers, lang, example, symbols, tts_fn, to_symbol_fn) in enumerate( - models_tts): - gr.Markdown(description) - with gr.Row(): - with gr.Column(): - textbox = gr.TextArea(label="Text", - placeholder="Type your sentence here ", - value="よーし、私もがんばらないと!", elem_id=f"tts-input") - with gr.Accordion(label="Phoneme Input", open=False): - temp_text_var = gr.Variable() - symbol_input = gr.Checkbox( - value=False, label="Symbol input") - symbol_list = gr.Dataset(label="Symbol list", components=[textbox], - samples=[[x] - for x in symbols], - elem_id=f"symbol-list") - symbol_list_json = gr.Json( - value=symbols, visible=False) - symbol_input.change(to_symbol_fn, - [symbol_input, textbox, - temp_text_var], - [textbox, temp_text_var]) - symbol_list.click(None, [symbol_list, symbol_list_json], textbox, - _js=f""" - (i, symbols, text) => {{ - let root = document.querySelector("body > gradio-app"); - if (root.shadowRoot != null) - root = root.shadowRoot; - let text_input = root.querySelector("#tts-input").querySelector("textarea"); - let startPos = text_input.selectionStart; - let endPos = text_input.selectionEnd; - let oldTxt = text_input.value; - let result = oldTxt.substring(0, startPos) + symbols[i] + oldTxt.substring(endPos); - text_input.value = result; - let x = window.scrollX, y = window.scrollY; - text_input.focus(); - text_input.selectionStart = startPos + symbols[i].length; - text_input.selectionEnd = startPos + symbols[i].length; - text_input.blur(); - window.scrollTo(x, y); - text = text_input.value; - return text; - }}""") - # select character - # with gr.Row(): - # search = gr.Textbox(label="Search Speaker", lines=1) - # btn2 = gr.Button(value="Search") - # btn2.click(search_speaker, inputs=[search], outputs=[char_dropdown]) - char_dropdown = gr.Dropdown( - choices=speakers, value=speakers[0], label='character') - language_dropdown = gr.Dropdown( - choices=lang, value=lang[0], label='language') - ns = gr.Slider( - label="noise_scale", minimum=0.1, maximum=1.0, step=0.1, value=0.6, interactive=True) - nsw = gr.Slider(label="noise_scale_w", minimum=0.1, - maximum=1.0, step=0.1, value=0.668, interactive=True) - duration_slider = gr.Slider(minimum=0.1, maximum=5, value=1, step=0.1, - label='速度 Speed') - with gr.Column(): - text_output = gr.Textbox(label="Message") - audio_output = gr.Audio( - label="Output Audio", elem_id="tts-audio") - btn = gr.Button("Generate!") - btn.click(tts_fn, - inputs=[textbox, char_dropdown, language_dropdown, ns, nsw, duration_slider, - symbol_input], - outputs=[text_output, audio_output]) - gr.Examples( - examples=example, - inputs=[textbox, char_dropdown, language_dropdown, - duration_slider, symbol_input], - outputs=[text_output, audio_output], - fn=tts_fn - ) - with gr.Tab("Voice Conversion"): - for i, 
(description, speakers, vc_fn) in enumerate( - models_vc): - gr.Markdown(""" - 录制或上传声音,并选择要转换的音色。 - """) - with gr.Column(): - record_audio = gr.Audio( - label="record your voice", source="microphone") - upload_audio = gr.Audio( - label="or upload audio here", source="upload") - source_speaker = gr.Dropdown( - choices=speakers, value=speakers[0], label="source speaker") - target_speaker = gr.Dropdown( - choices=speakers, value=speakers[0], label="target speaker") - with gr.Column(): - message_box = gr.Textbox(label="Message") - converted_audio = gr.Audio( - label='converted audio') - btn = gr.Button("Convert!") - btn.click(vc_fn, inputs=[source_speaker, target_speaker, record_audio, upload_audio], - outputs=[message_box, converted_audio]) - for category, link in others.items(): - with gr.TabItem(category): - gr.Markdown( - f''' -
- [HTML markup stripped during extraction; only the link text "Click to Go" survives]
- ''' - ) - - app.queue(concurrency_count=3).launch(show_api=False, share=args.share) diff --git a/spaces/FridaZuley/RVC_HFKawaii/diffq/base.py b/spaces/FridaZuley/RVC_HFKawaii/diffq/base.py deleted file mode 100644 index 9bd5276b51fbed3d4b898a45b93479ff19e62a7b..0000000000000000000000000000000000000000 --- a/spaces/FridaZuley/RVC_HFKawaii/diffq/base.py +++ /dev/null @@ -1,262 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from dataclasses import dataclass -from concurrent import futures -from fnmatch import fnmatch -from functools import partial -import io -import math -from multiprocessing import cpu_count -import typing as tp -import zlib - -import torch - - -class BaseQuantizer: - @dataclass - class _QuantizedParam: - name: str - param: torch.nn.Parameter - module: torch.nn.Module - # If a Parameter is used multiple times, `other` can be used - # to share state between the different Quantizers - other: tp.Optional[tp.Any] - - def __init__(self, model: torch.nn.Module, min_size: float = 0.01, float16: bool = False, - exclude: tp.Optional[tp.List[str]] = [], detect_bound: bool = True): - self.model = model - self.min_size = min_size - self.float16 = float16 - self.exclude = exclude - self.detect_bound = detect_bound - self._quantized = False - self._pre_handle = self.model.register_forward_pre_hook(self._forward_pre_hook) - self._post_handle = self.model.register_forward_hook(self._forward_hook) - - self._quantized_state = None - self._qparams = [] - self._float16 = [] - self._others = [] - self._rnns = [] - - self._saved = [] - - self._find_params() - - def _find_params(self): - min_params = self.min_size * 2**20 // 4 - previous = {} - for module_name, module in self.model.named_modules(): - if isinstance(module, torch.nn.RNNBase): - self._rnns.append(module) - for name, param in list(module.named_parameters(recurse=False)): - full_name = f"{module_name}.{name}" - matched = False - for pattern in self.exclude: - if fnmatch(full_name, pattern) or fnmatch(name, pattern): - matched = True - break - - if param.numel() <= min_params or matched: - if id(param) in previous: - continue - if self.detect_bound: - previous[id(param)] = None - if self.float16: - self._float16.append(param) - else: - self._others.append(param) - else: - qparam = self._register_param(name, param, module, previous.get(id(param))) - if self.detect_bound: - previous[id(param)] = qparam - self._qparams.append(qparam) - - def _register_param(self, name, param, module, other): - return self.__class__._QuantizedParam(name, param, module, other) - - def _forward_pre_hook(self, module, input): - if self.model.training: - self._quantized_state = None - if self._quantized: - self.unquantize() - if self._pre_forward_train(): - self._fix_rnns() - else: - self.quantize() - - def _forward_hook(self, module, input, output): - if self.model.training: - if self._post_forward_train(): - self._fix_rnns(flatten=False) # Hacky, next forward will flatten - - def quantize(self, save=True): - """ - Immediately apply quantization to the model parameters. - If `save` is True, save a copy of the unquantized parameters, that can be - restored with `unquantize()`. 
- """ - if self._quantized: - return - if save: - self._saved = [qp.param.data.to('cpu', copy=True) - for qp in self._qparams if qp.other is None] - self.restore_quantized_state(self.get_quantized_state()) - self._quantized = True - self._fix_rnns() - - def unquantize(self): - """ - Revert a previous call to `quantize()`. - """ - if not self._quantized: - raise RuntimeError("Can only be called on a quantized model.") - if not self._saved: - raise RuntimeError("Nothing to restore.") - for qparam in self._qparams: - if qparam.other is None: - qparam.param.data[:] = self._saved.pop(0) - assert len(self._saved) == 0 - self._quantized = False - self._fix_rnns() - - def _pre_forward_train(self) -> bool: - """ - Called once before each forward for continuous quantization. - Should return True if parameters were changed. - """ - return False - - def _post_forward_train(self) -> bool: - """ - Called once after each forward (to restore state for instance). - Should return True if parameters were changed. - """ - return False - - def _fix_rnns(self, flatten=True): - """ - To be called after quantization happened to fix RNNs. - """ - for rnn in self._rnns: - rnn._flat_weights = [ - (lambda wn: getattr(rnn, wn) if hasattr(rnn, wn) else None)(wn) - for wn in rnn._flat_weights_names] - if flatten: - rnn.flatten_parameters() - - def get_quantized_state(self): - """ - Returns sufficient quantized information to rebuild the model state. - - ..Note:: - To achieve maximum compression, you should compress this with - gzip or other, as quantized weights are not optimally coded! - """ - if self._quantized_state is None: - self._quantized_state = self._get_quantized_state() - return self._quantized_state - - def _get_quantized_state(self): - """ - Actual implementation for `get_quantized_state`. - """ - float16_params = [] - for p in self._float16: - q = p.data.half() - float16_params.append(q) - - return { - "quantized": [self._quantize_param(qparam) for qparam in self._qparams - if qparam.other is None], - "float16": float16_params, - "others": [p.data.clone() for p in self._others], - } - - def _quantize_param(self, qparam: _QuantizedParam) -> tp.Any: - """ - To be overriden. - """ - raise NotImplementedError() - - def _unquantize_param(self, qparam: _QuantizedParam, quantized: tp.Any) -> torch.Tensor: - """ - To be overriden. - """ - raise NotImplementedError() - - def restore_quantized_state(self, state) -> None: - """ - Restore the state of the model from the quantized state. - """ - for p, q in zip(self._float16, state["float16"]): - p.data[:] = q.to(p) - - for p, q in zip(self._others, state["others"]): - p.data[:] = q - - remaining = list(state["quantized"]) - for qparam in self._qparams: - if qparam.other is not None: - # Only unquantize first appearance of nn.Parameter. - continue - quantized = remaining.pop(0) - qparam.param.data[:] = self._unquantize_param(qparam, quantized) - self._fix_rnns() - - def detach(self) -> None: - """ - Detach from the model, removes hooks and anything else. - """ - self._pre_handle.remove() - self._post_handle.remove() - - def model_size(self) -> torch.Tensor: - """ - Returns an estimate of the quantized model size. - """ - total = torch.tensor(0.) - for p in self._float16: - total += 16 * p.numel() - for p in self._others: - total += 32 * p.numel() - return total / 2**20 / 8 # bits to MegaBytes - - def true_model_size(self) -> float: - """ - Return the true quantized model size, in MB, without extra - compression. 
- """ - return self.model_size().item() - - def compressed_model_size(self, compress_level=-1, num_workers=8) -> float: - """ - Return the compressed quantized model size, in MB. - - Args: - compress_level (int): compression level used with zlib, - see `zlib.compress` for details. - num_workers (int): will split the final big byte representation in that - many chunks processed in parallels. - """ - out = io.BytesIO() - torch.save(self.get_quantized_state(), out) - ms = _parallel_compress_len(out.getvalue(), compress_level, num_workers) - return ms / 2 ** 20 - - -def _compress_len(data, compress_level): - return len(zlib.compress(data, level=compress_level)) - - -def _parallel_compress_len(data, compress_level, num_workers): - num_workers = min(cpu_count(), num_workers) - chunk_size = int(math.ceil(len(data) / num_workers)) - chunks = [data[offset:offset + chunk_size] for offset in range(0, len(data), chunk_size)] - with futures.ProcessPoolExecutor(num_workers) as pool: - return sum(pool.map(partial(_compress_len, compress_level=compress_level), chunks)) diff --git a/spaces/FriendlyUser/YoutubeDownloaderSubber/app.py b/spaces/FriendlyUser/YoutubeDownloaderSubber/app.py deleted file mode 100644 index be73f2f43ab153e17cc412a197b8c28060ba6c9a..0000000000000000000000000000000000000000 --- a/spaces/FriendlyUser/YoutubeDownloaderSubber/app.py +++ /dev/null @@ -1,174 +0,0 @@ -import whisper -import gradio as gr -import ffmpeg -from yt_dlp import YoutubeDL -import os -import sys -from subprocess import PIPE, run - -youtube_livestream_codes = [ - 91, - 92, - 93, - 94, - 95, - 96, - 300, - 301, -] -youtube_mp4_codes = [ - 298, - 18, - 22, - 140, - 133, - 134 -] - -def second_to_timecode(x: float) -> str: - hour, x = divmod(x, 3600) - minute, x = divmod(x, 60) - second, x = divmod(x, 1) - millisecond = int(x * 1000.) - - return '%.2d:%.2d:%.2d,%.3d' % (hour, minute, second, millisecond) - -def get_video_metadata(video_url: str = "https://www.youtube.com/watch?v=21X5lGlDOfg&ab_channel=NASA")-> dict: - with YoutubeDL({'outtmpl': '%(id)s.%(ext)s'}) as ydl: - info_dict = ydl.extract_info(video_url, download=False) - video_title = info_dict.get('title', None) - uploader_id = info_dict.get('uploader_id', None) - print(f"[youtube] {video_title}: {uploader_id}") - return info_dict - - -def parse_metadata(metadata) -> dict: - """ - Parse metadata and send to discord. - After a video is done recording, - it will have both the livestream format and the mp4 format. - """ - # send metadata to discord - formats = metadata.get("formats", []) - # filter for ext = mp4 - mp4_formats = [f for f in formats if f.get("ext", "") == "mp4"] - try: - format_ids = [int(f.get("format_id", 0)) for f in mp4_formats] - video_entries = sorted(set(format_ids).intersection(youtube_mp4_codes)) - - is_livestream = True - if len(video_entries) > 0: - # use video format id over livestream id if available - selected_id = video_entries[0] - is_livestream = False - except Exception as e: - print(e) - selected_id = mp4_formats[0].get("format_id") - is_livestream = False - - - return { - "selected_id": selected_id, - "is_livestream": is_livestream, - } - -def get_video(url: str, config: dict): - """ - Get video from start time. 
- """ - # result = subprocess.run() - # could delay start time by a few seconds to just sync up and capture the full video length - # but would need to time how long it takes to fetch the video using youtube-dl and other adjustments and start a bit before - filename = config.get("filename", "livestream01.mp4") - end = config.get("end", "00:15:00") - overlay_file = ffmpeg.input(filename) - ( - ffmpeg - .input(url, t=end) - .output(filename) - .run() - ) - -def get_all_files(url: str, end: str = "00:15:00"): - metadata = get_video_metadata(url) - temp_dict = parse_metadata(metadata) - selected_id = temp_dict.get("selected_id", 0) - formats = metadata.get("formats", []) - selected_format = [f for f in formats if f.get("format_id", "") == str(selected_id)][0] - format_url = selected_format.get("url", "") - filename = "temp.mp4" - get_video(format_url, {"filename": filename, "end": end}) - return filename - -def get_text_from_mp3_whisper(inputType:str, mp3_file: str, url_path: str, taskName: str, srcLanguage: str)->str: - # remove the file if it exists - if os.path.exists("transcript.srt"): - os.remove("transcript.srt") - - if os.path.exists("temp.mp4"): - os.remove("temp.mp4") - - if os.path.exists("subtitled.mp4"): - os.remove("subtitled.mp4") - - model = whisper.load_model("medium") - # options = whisper.DecodingOptions(language="en", without_timestamps=True) - options = dict(language=srcLanguage) - transcribe_options = dict(task=taskName, **options) - # return if url_path is not set, taskName is not set, srcLanguage is not set - if inputType == "url": - filename = get_all_files(url_path) - print("Retrieved the file") - result = model.transcribe(filename, **transcribe_options) - print("transcribing the file") - else: - result = model.transcribe(mp3_file, **transcribe_options) - # adjust for spacy mode - html_text = "" - lines = [] - for count, segment in enumerate(result.get("segments")): - # print(segment) - start = segment.get("start") - end = segment.get("end") - lines.append(f"{count}") - lines.append(f"{second_to_timecode(start)} --> {second_to_timecode(end)}") - lines.append(segment.get("text", "").strip()) - lines.append('') - words = '\n'.join(lines) - # save to transcript.srt - with open("transcript.srt", "w") as f: - f.write(words) - print("done transcribing") - - input_file = 'temp.mp4' - subtitles_file = 'transcript.srt' - output_file = 'subtitled.mp4' - try: - print("attempt to output file") - video = ffmpeg.input(input_file) - audio = video.audio - ffmpeg.concat(video.filter("subtitles", subtitles_file), audio, v=1, a=1).output(output_file).run() - except Exception as e: - print("failed to output file") - print(e) - output_file = "temp.mp4" - # return temp.mp4 - - return result.get("segments"), words, output_file - -gr.Interface( - title = 'Download Video From url and extract text from audio', - fn=get_text_from_mp3_whisper, - inputs=[ - gr.Dropdown(["url", "file"], value="url"), - gr.inputs.Audio(type="filepath"), - gr.inputs.Textbox(), - gr.Dropdown(["translate", "transcribe"], value="translate"), - gr.Dropdown(["Japanese", "English"], value="Japanese") - ], - button_text="Go!", - button_color="#333333", - outputs=[ - "json", "text", "file" - ], - live=True).launch() \ No newline at end of file diff --git a/spaces/GIZ/SDSN-demo/utils/semantic_search.py b/spaces/GIZ/SDSN-demo/utils/semantic_search.py deleted file mode 100644 index a2e8ed720cf083f79687adc5d1aa68b640618884..0000000000000000000000000000000000000000 --- a/spaces/GIZ/SDSN-demo/utils/semantic_search.py +++ /dev/null 
@@ -1,582 +0,0 @@ -from haystack.nodes import TransformersQueryClassifier, Docs2Answers -from haystack.nodes import EmbeddingRetriever, FARMReader -from haystack.nodes.base import BaseComponent -from haystack.document_stores import InMemoryDocumentStore -from markdown import markdown -from annotated_text import annotation -from haystack.schema import Document -from typing import List, Text, Union -from typing_extensions import Literal -from utils.preprocessing import processingpipeline -from utils.streamlitcheck import check_streamlit -from haystack.pipelines import Pipeline -import pandas as pd -import logging -try: - from termcolor import colored -except: - pass -try: - import streamlit as st -except ImportError: - logging.info("Streamlit not installed") - - -@st.cache(allow_output_mutation=True) -def loadQueryClassifier(): - """ - retuns the haystack query classifier model - model = shahrukhx01/bert-mini-finetune-question-detection - - """ - query_classifier = TransformersQueryClassifier(model_name_or_path= - "shahrukhx01/bert-mini-finetune-question-detection") - return query_classifier - -class QueryCheck(BaseComponent): - """ - Uses Query Classifier from Haystack, process the query based on query type. - Ability to determine the statements is not so good, therefore the chances - statement also get modified. Ex: "List water related issues" will be - identified by the model as keywords, and therefore it be processed as "what - are the 'list all water related issues' related issues and discussions?". - This is one shortcoming but is igonred for now, as semantic search will not - get affected a lot, by this. If you want to pass keywords list and want to - do batch processing use. run_batch. Example: if you want to find relevant - passages for water, food security, poverty then querylist = ["water", "food - security","poverty"] and then execute QueryCheck.run_batch(queries = querylist) - - 1. https://docs.haystack.deepset.ai/docs/query_classifier - - """ - - outgoing_edges = 1 - - def run(self, query:str): - """ - mandatory method to use the custom node. Determines the query type, if - if the query is of type keyword/statement will modify it to make it more - useful for sentence transoformers. - - Params - -------- - query: query/statement/keywords in form of string - - Return - ------ - output: dictionary, with key as identifier and value could be anything - we need to return. In this case the output contain key = 'query'. - - output_1: As there is only one outgoing edge, we pass 'output_1' string - - """ - query_classifier = loadQueryClassifier() - result = query_classifier.run(query=query) - - if result[1] == "output_1": - output = {"query":query, - "query_type": 'question/statement'} - else: - output = {"query": "what are the {} related issues and \ - discussions?".format(query), - "query_type": 'statements/keyword'} - logging.info(output) - return output, "output_1" - - def run_batch(self, queries:List[str]): - """ - running multiple queries in one go, howeevr need the queries to be passed - as list of string. Example: if you want to find relevant passages for - water, food security, poverty then querylist = ["water", "food security", - "poverty"] and then execute QueryCheck.run_batch(queries = querylist) - - Params - -------- - queries: queries/statements/keywords in form of string encapsulated - within List - - Return - ------ - output: dictionary, with key as identifier and value could be anything - we need to return. In this case the output contain key = 'queries'. 
- - output_1: As there is only one outgoing edge, we pass 'output_1' string - """ - query_classifier = loadQueryClassifier() - query_list = [] - for query in queries: - result = query_classifier.run(query=query) - if result[1] == "output_1": - query_list.append(query) - else: - query_list.append("what are the {} related issues and \ - discussions?".format(query)) - output = {'queries':query_list} - logging.info(output) - return output, "output_1" - - -@st.cache(allow_output_mutation=True) -def runSemanticPreprocessingPipeline(file_path:str, file_name:str, - split_by: Literal["sentence", "word"] = 'sentence', - split_length:int = 2, split_overlap:int = 0, - split_respect_sentence_boundary:bool = False, - remove_punc:bool = False)->List[Document]: - """ - creates the pipeline and runs the preprocessing pipeline. - - Params - ------------ - - file_name: filename, in case of streamlit application use - st.session_state['filename'] - file_path: filepath, in case of streamlit application use - st.session_state['filepath'] - split_by: document splitting strategy either as word or sentence - split_length: when synthetically creating the paragrpahs from document, - it defines the length of paragraph. - split_overlap: Number of words or sentences that overlap when creating the - paragraphs. This is done as one sentence or 'some words' make sense - when read in together with others. Therefore the overlap is used. - split_respect_sentence_boundary: Used when using 'word' strategy for - splititng of text. - remove_punc: to remove all Punctuation including ',' and '.' or not - - Return - -------------- - List[Document]: When preprocessing pipeline is run, the output dictionary - has four objects. For the Haysatck implementation of semantic search we, - need to use the List of Haystack Document, which can be fetched by - key = 'documents' on output. - - """ - - semantic_processing_pipeline = processingpipeline() - - output_semantic_pre = semantic_processing_pipeline.run(file_paths = file_path, - params= {"FileConverter": {"file_path": file_path, \ - "file_name": file_name}, - "UdfPreProcessor": {"remove_punc": remove_punc, \ - "split_by": split_by, \ - "split_length":split_length,\ - "split_overlap": split_overlap, - "split_respect_sentence_boundary":split_respect_sentence_boundary}}) - - return output_semantic_pre - - -@st.cache(hash_funcs={"builtins.SwigPyObject": lambda _: None}, - allow_output_mutation=True) -def loadRetriever(embedding_model:Text=None, embedding_model_format:Text = None, - embedding_layer:int = None, retriever_top_k:int = 10, - max_seq_len:int=512, document_store:InMemoryDocumentStore=None): - """ - Returns the Retriever model based on params provided. - 1. https://docs.haystack.deepset.ai/docs/retriever#embedding-retrieval-recommended - 2. https://www.sbert.net/examples/applications/semantic-search/README.html - 3. https://github.com/deepset-ai/haystack/blob/main/haystack/nodes/retriever/dense.py - - - Params - --------- - embedding_model: Name of the model to be used for embedding. Check the links - provided in documentation - embedding_model_format: check the github link of Haystack provided in - documentation embedding_layer: check the github link of Haystack - provided in documentation retriever_top_k: Number of Top results to - be returned by - retriever max_seq_len: everymodel has max seq len it can handle, check in - model card. Needed to hanlde the edge cases. 
- document_store: InMemoryDocumentStore, write haystack Document list to - DocumentStore and pass the same to function call. Can be done using - createDocumentStore from utils. - - Return - ------- - retriever: embedding model - """ - logging.info("loading retriever") - if document_store is None: - logging.warning("Retriever initialization requires the DocumentStore") - return - - retriever = EmbeddingRetriever( - embedding_model=embedding_model,top_k = retriever_top_k, - document_store = document_store, - emb_extraction_layer=embedding_layer, scale_score =True, - model_format=embedding_model_format, use_gpu = True, - max_seq_len = max_seq_len ) - if check_streamlit: - st.session_state['retriever'] = retriever - return retriever - -@st.cache(hash_funcs={"builtins.SwigPyObject": lambda _: None}, - allow_output_mutation=True) -def createDocumentStore(documents:List[Document], similarity:str = 'dot_product', - embedding_dim:int = 768): - """ - Creates the InMemory Document Store from haystack list of Documents. - It is mandatory component for Retriever to work in Haystack frame work. - - Params - ------- - documents: List of haystack document. If using the preprocessing pipeline, - can be fetched key = 'documents; on output of preprocessing pipeline. - similarity: scoring function, can be either 'cosine' or 'dot_product' - embedding_dim: Document store has default value of embedding size = 768, and - update_embeddings method of Docstore cannot infer the embedding size of - retiever automatically, therefore set this value as per the model card. - - Return - ------- - document_store: InMemory Document Store object type. - - """ - document_store = InMemoryDocumentStore(similarity = similarity, - embedding_dim = embedding_dim ) - document_store.write_documents(documents) - - return document_store - - -@st.cache(hash_funcs={"builtins.SwigPyObject": lambda _: None}, - allow_output_mutation=True) -def semanticSearchPipeline(documents:List[Document], embedding_model:Text = None, - embedding_model_format:Text = None,embedding_layer:int = None, - embedding_dim:int = 768,retriever_top_k:int = 10, - reader_model:str = None, reader_top_k:int = 10, - max_seq_len:int =512,useQueryCheck = True, - top_k_per_candidate:int = 1): - """ - creates the semantic search pipeline and document Store object from the - list of haystack documents. The top_k for the Reader and Retirever are kept - same, so that all the results returned by Retriever are used, however the - context is extracted by Reader for each retrieved result. The querycheck is - added as node to process the query. This pipeline is suited for keyword search, - and to some extent extractive QA purpose. The purpose of Reader is strictly to - highlight the context for retrieved result and not for QA, however as stated - it can work for QA too in limited sense. - There are 4 variants of pipeline it can return - 1.QueryCheck > Retriever > Reader - 2.Retriever > Reader - 3.QueryCheck > Retriever > Docs2Answers : If reader is None, - then Doc2answer is used to keep the output of pipeline structurally same. - 4.Retriever > Docs2Answers - - Links - - 1. https://docs.haystack.deepset.ai/docs/retriever#embedding-retrieval-recommended - 2. https://www.sbert.net/examples/applications/semantic-search/README.html - 3. https://github.com/deepset-ai/haystack/blob/main/haystack/nodes/retriever/dense.py - 4. https://docs.haystack.deepset.ai/docs/reader - - - Params - ---------- - documents: list of Haystack Documents, returned by preprocessig pipeline. 
- embedding_model: Name of the model to be used for embedding. Check the links - provided in documentation - embedding_model_format: check the github link of Haystack provided in - documentation - embedding_layer: check the github link of Haystack provided in documentation - embedding_dim: Document store has default value of embedding size = 768, and - update_embeddings method of Docstore cannot infer the embedding size of - retiever automatically, therefore set this value as per the model card. - retriever_top_k: Number of Top results to be returned by retriever - reader_model: Name of the model to be used for Reader node in hasyatck - Pipeline. Check the links provided in documentation - reader_top_k: Reader will use retrieved results to further find better matches. - As purpose here is to use reader to extract context, the value is - same as retriever_top_k. - max_seq_len:everymodel has max seq len it can handle, check in model card. - Needed to hanlde the edge cases - useQueryCheck: Whether to use the querycheck which modifies the query or not. - top_k_per_candidate:How many answers to extract for each candidate doc - that is coming from the retriever - - Return - --------- - semanticsearch_pipeline: Haystack Pipeline object, with all the necessary - nodes [QueryCheck, Retriever, Reader/Docs2Answer]. If reader is None, - then Doc2answer is used to keep the output of pipeline structurally - same. - - document_store: As retriever can work only with Haystack Document Store, the - list of document returned by preprocessing pipeline are fed into to - get InMemmoryDocumentStore object type, with retriever updating the - embeddings of each paragraph in document store. - - """ - document_store = createDocumentStore(documents=documents, - embedding_dim=embedding_dim) - retriever = loadRetriever(embedding_model = embedding_model, - embedding_model_format=embedding_model_format, - embedding_layer=embedding_layer, - retriever_top_k= retriever_top_k, - document_store = document_store, - max_seq_len=max_seq_len) - document_store.update_embeddings(retriever) - semantic_search_pipeline = Pipeline() - if useQueryCheck and reader_model: - querycheck = QueryCheck() - reader = FARMReader(model_name_or_path=reader_model, - top_k = reader_top_k, use_gpu=True, - top_k_per_candidate = top_k_per_candidate) - semantic_search_pipeline.add_node(component = querycheck, - name = "QueryCheck",inputs = ["Query"]) - semantic_search_pipeline.add_node(component = retriever, - name = "EmbeddingRetriever",inputs = ["QueryCheck.output_1"]) - semantic_search_pipeline.add_node(component = reader, name = "FARMReader", - inputs= ["EmbeddingRetriever"]) - - elif reader_model : - reader = FARMReader(model_name_or_path=reader_model, - top_k = reader_top_k, use_gpu=True, - top_k_per_candidate = top_k_per_candidate) - semantic_search_pipeline.add_node(component = retriever, - name = "EmbeddingRetriever",inputs = ["Query"]) - semantic_search_pipeline.add_node(component = reader, - name = "FARMReader",inputs= ["EmbeddingRetriever"]) - elif useQueryCheck and not reader_model: - querycheck = QueryCheck() - docs2answers = Docs2Answers() - semantic_search_pipeline.add_node(component = querycheck, - name = "QueryCheck",inputs = ["Query"]) - semantic_search_pipeline.add_node(component = retriever, - name = "EmbeddingRetriever",inputs = ["QueryCheck.output_1"]) - semantic_search_pipeline.add_node(component = docs2answers, - name = "Docs2Answers",inputs= ["EmbeddingRetriever"]) - elif not useQueryCheck and not reader_model: - docs2answers = 
Docs2Answers() - semantic_search_pipeline.add_node(component = retriever, - name = "EmbeddingRetriever",inputs = ["Query"]) - semantic_search_pipeline.add_node(component = docs2answers, - name = "Docs2Answers",inputs= ["EmbeddingRetriever"]) - - logging.info(semantic_search_pipeline.components) - return semantic_search_pipeline, document_store - -def runSemanticPipeline(pipeline:Pipeline, queries:Union[list,str])->dict: - """ - will use the haystack run or run_batch based on if single query is passed - as string or multiple queries as List[str] - - Params - ------- - pipeline: haystack pipeline, this is same as returned by semanticSearchPipeline - from utils.semanticsearch - - queries: Either a single query or list of queries. - - Return - ------- - results: Dict containing answers and documents as key and their respective - values - - """ - - if type(queries) == list: - results = pipeline.run_batch(queries=queries) - elif type(queries) == str: - results = pipeline.run(query=queries) - else: - logging.info("Please check the input type for the queries") - return - - return results - -def process_query_output(results:dict)->pd.DataFrame: - """ - Returns the dataframe with necessary information like including - ['query','answer','answer_offset','context_offset','context','content', - 'reader_score','retriever_score','id',]. This is designed for output given - by semantic search pipeline with single query and final node as reader. - The output of pipeline having Docs2Answers as final node or multiple queries - need to be handled separately. In these other cases, use process_semantic_output - from utils.semantic_search which uses this function internally to make one - combined dataframe. - - Params - --------- - results: this dictionary should have key,values with - keys = [query,answers,documents], however answers is optional. - in case of [Doc2Answers as final node], process_semantic_output - doesnt return answers thereby setting all values contained in - answers to 'None' - - Return - -------- - df: dataframe with all the columns mentioned in function description. 
- - """ - query_text = results['query'] - if 'answers' in results.keys(): - answer_dict = {} - - for answer in results['answers']: - answer_dict[answer.document_id] = answer.to_dict() - else: - answer_dict = {} - docs = results['documents'] - df = pd.DataFrame(columns=['query','answer','answer_offset','context_offset', - 'context','content','reader_score','retriever_score', - 'id']) - for doc in docs: - row_list = {} - row_list['query'] = query_text - row_list['retriever_score'] = doc.score - row_list['id'] = doc.id - row_list['content'] = doc.content - if doc.id in answer_dict.keys(): - row_list['answer'] = answer_dict[doc.id]['answer'] - row_list['context'] = answer_dict[doc.id]['context'] - row_list['reader_score'] = answer_dict[doc.id]['score'] - answer_offset = answer_dict[doc.id]['offsets_in_document'][0] - row_list['answer_offset'] = [answer_offset['start'],answer_offset['end']] - start_idx = doc.content.find(row_list['context']) - end_idx = start_idx + len(row_list['context']) - row_list['context_offset'] = [start_idx, end_idx] - else: - row_list['answer'] = None - row_list['context'] = None - row_list['reader_score'] = None - row_list['answer_offset'] = None - row_list['context_offset'] = None - df_dictionary = pd.DataFrame([row_list]) - df = pd.concat([df, df_dictionary], ignore_index=True) - - return df - -def process_semantic_output(results): - """ - Returns the dataframe with necessary information like including - ['query','answer','answer_offset','context_offset','context','content', - 'reader_score','retriever_score','id',]. Distingushes if its single query or - multi queries by reading the pipeline output dictionary keys. - Uses the process_query_output to get the dataframe for each query and create - one concataneted dataframe. In case of Docs2Answers as final node, deletes - the answers part. See documentations of process_query_output. - - Params - --------- - results: raw output of runSemanticPipeline. - - Return - -------- - df: dataframe with all the columns mentioned in function description. - - """ - output = {} - if 'query' in results.keys(): - output['query'] = results['query'] - output['documents'] = results['documents'] - if results['node_id'] == 'Docs2Answers': - pass - else: - output['answers'] = results['answers'] - df = process_query_output(output) - return df - if 'queries' in results.keys(): - df = pd.DataFrame(columns=['query','answer','answer_offset', - 'context_offset','context','content', - 'reader_score','retriever_score','id']) - for query,answers,documents in zip(results['queries'], - results['answers'],results['documents']): - output = {} - output['query'] = query - output['documents'] = documents - if results['node_id'] == 'Docs2Answers': - pass - else: - output['answers'] = answers - - temp = process_query_output(output) - df = pd.concat([df, temp], ignore_index=True) - - - return df - -def semanticsearchAnnotator(matches:List[List[int]], document:Text): - """ - Annotates the text in the document defined by list of [start index, end index] - Example: "How are you today", if document type is text, matches = [[0,3]] - will give answer = "How", however in case we used the spacy matcher then the - matches = [[0,3]] will give answer = "How are you". However if spacy is used - to find "How" then the matches = [[0,1]] for the string defined above. 
- - """ - start = 0 - annotated_text = "" - for match in matches: - start_idx = match[0] - end_idx = match[1] - if check_streamlit(): - annotated_text = (annotated_text + document[start:start_idx] - + str(annotation(body=document[start_idx:end_idx], - label="Context", background="#964448", color='#ffffff'))) - else: - annotated_text = (annotated_text + document[start:start_idx] - + colored(document[start_idx:end_idx], - "green", attrs = ['bold'])) - start = end_idx - - annotated_text = annotated_text + document[end_idx:] - - if check_streamlit(): - - st.write( - markdown(annotated_text), - unsafe_allow_html=True, - ) - else: - print(annotated_text) - - -def semantic_keywordsearch(query:Text,documents:List[Document], - embedding_model:Text, - embedding_model_format:Text, - embedding_layer:int, reader_model:str, - retriever_top_k:int = 10, reader_top_k:int = 10, - return_results:bool = False, embedding_dim:int = 768, - max_seq_len:int = 512,top_k_per_candidate:int =1, - sort_by:Literal["retriever", "reader"] = 'retriever'): - """ - Performs the Semantic search on the List of haystack documents which is - returned by preprocessing Pipeline. - - Params - ------- - query: Keywords that need to be searche in documents. - documents: List fo Haystack documents returned by preprocessing pipeline. - - """ - semanticsearch_pipeline, doc_store = semanticSearchPipeline(documents = documents, - embedding_model= embedding_model, - embedding_layer= embedding_layer, - embedding_model_format= embedding_model_format, - reader_model= reader_model, retriever_top_k= retriever_top_k, - reader_top_k= reader_top_k, embedding_dim=embedding_dim, - max_seq_len=max_seq_len, - top_k_per_candidate=top_k_per_candidate) - - raw_output = runSemanticPipeline(semanticsearch_pipeline,query) - results_df = process_semantic_output(raw_output) - if sort_by == 'retriever': - results_df = results_df.sort_values(by=['retriever_score'], ascending=False) - else: - results_df = results_df.sort_values(by=['reader_score'], ascending=False) - - if return_results: - return results_df - else: - if check_streamlit: - st.markdown("##### Top few semantic search results #####") - else: - print("Top few semantic search results") - for i in range(len(results_df)): - if check_streamlit: - st.write("Result {}".format(i+1)) - else: - print("Result {}".format(i+1)) - semanticsearchAnnotator([results_df.loc[i]['context_offset']], - results_df.loc[i]['content'] ) \ No newline at end of file diff --git a/spaces/GaenKoki/voicevox/build_util/modify_pyinstaller.bash b/spaces/GaenKoki/voicevox/build_util/modify_pyinstaller.bash deleted file mode 100644 index de4815fd2c85c4b0a01f4035f48a40cbca91db3d..0000000000000000000000000000000000000000 --- a/spaces/GaenKoki/voicevox/build_util/modify_pyinstaller.bash +++ /dev/null @@ -1,25 +0,0 @@ -#!/bin/bash - -# PyInstallerをカスタマイズしてから再インストールする -# 良いGPUが自動的に選択されるようにしている -# https://github.com/VOICEVOX/voicevox_engine/issues/502 - -set -eux - -pyinstaller_version=$(pyinstaller -v) -tempdir=$(mktemp -dt modify_pyinstaller.XXXXXXXX) -trap 'rm -rf "$tempdir"' EXIT -git clone https://github.com/pyinstaller/pyinstaller.git "$tempdir" -b "v$pyinstaller_version" --depth 1 -cat > "$tempdir/bootloader/src/symbols.c" << EOF -#ifdef _WIN32 -#include - -// https://docs.nvidia.com/gameworks/content/technologies/desktop/optimus.htm -__declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001; - -// https://gpuopen.com/learn/amdpowerxpressrequesthighperformance/ -__declspec(dllexport) DWORD AmdPowerXpressRequestHighPerformance = 
0x00000001; -#endif -EOF -(cd "$tempdir/bootloader" && python ./waf all) -pip install -U "$tempdir" diff --git a/spaces/GipAdonimus/Real-Time-Voice-Cloning/encoder/audio.py b/spaces/GipAdonimus/Real-Time-Voice-Cloning/encoder/audio.py deleted file mode 100644 index 799aa835499ce8b839290f28b2c8ffb629f37565..0000000000000000000000000000000000000000 --- a/spaces/GipAdonimus/Real-Time-Voice-Cloning/encoder/audio.py +++ /dev/null @@ -1,117 +0,0 @@ -from scipy.ndimage.morphology import binary_dilation -from encoder.params_data import * -from pathlib import Path -from typing import Optional, Union -from warnings import warn -import numpy as np -import librosa -import struct - -try: - import webrtcvad -except: - warn("Unable to import 'webrtcvad'. This package enables noise removal and is recommended.") - webrtcvad=None - -int16_max = (2 ** 15) - 1 - - -def preprocess_wav(fpath_or_wav: Union[str, Path, np.ndarray], - source_sr: Optional[int] = None, - normalize: Optional[bool] = True, - trim_silence: Optional[bool] = True): - """ - Applies the preprocessing operations used in training the Speaker Encoder to a waveform - either on disk or in memory. The waveform will be resampled to match the data hyperparameters. - - :param fpath_or_wav: either a filepath to an audio file (many extensions are supported, not - just .wav), either the waveform as a numpy array of floats. - :param source_sr: if passing an audio waveform, the sampling rate of the waveform before - preprocessing. After preprocessing, the waveform's sampling rate will match the data - hyperparameters. If passing a filepath, the sampling rate will be automatically detected and - this argument will be ignored. - """ - # Load the wav from disk if needed - if isinstance(fpath_or_wav, str) or isinstance(fpath_or_wav, Path): - wav, source_sr = librosa.load(str(fpath_or_wav), sr=None) - else: - wav = fpath_or_wav - - # Resample the wav if needed - if source_sr is not None and source_sr != sampling_rate: - wav = librosa.resample(wav, source_sr, sampling_rate) - - # Apply the preprocessing: normalize volume and shorten long silences - if normalize: - wav = normalize_volume(wav, audio_norm_target_dBFS, increase_only=True) - if webrtcvad and trim_silence: - wav = trim_long_silences(wav) - - return wav - - -def wav_to_mel_spectrogram(wav): - """ - Derives a mel spectrogram ready to be used by the encoder from a preprocessed audio waveform. - Note: this not a log-mel spectrogram. - """ - frames = librosa.feature.melspectrogram( - wav, - sampling_rate, - n_fft=int(sampling_rate * mel_window_length / 1000), - hop_length=int(sampling_rate * mel_window_step / 1000), - n_mels=mel_n_channels - ) - return frames.astype(np.float32).T - - -def trim_long_silences(wav): - """ - Ensures that segments without voice in the waveform remain no longer than a - threshold determined by the VAD parameters in params.py. 
- - :param wav: the raw waveform as a numpy array of floats - :return: the same waveform with silences trimmed away (length <= original wav length) - """ - # Compute the voice detection window size - samples_per_window = (vad_window_length * sampling_rate) // 1000 - - # Trim the end of the audio to have a multiple of the window size - wav = wav[:len(wav) - (len(wav) % samples_per_window)] - - # Convert the float waveform to 16-bit mono PCM - pcm_wave = struct.pack("%dh" % len(wav), *(np.round(wav * int16_max)).astype(np.int16)) - - # Perform voice activation detection - voice_flags = [] - vad = webrtcvad.Vad(mode=3) - for window_start in range(0, len(wav), samples_per_window): - window_end = window_start + samples_per_window - voice_flags.append(vad.is_speech(pcm_wave[window_start * 2:window_end * 2], - sample_rate=sampling_rate)) - voice_flags = np.array(voice_flags) - - # Smooth the voice detection with a moving average - def moving_average(array, width): - array_padded = np.concatenate((np.zeros((width - 1) // 2), array, np.zeros(width // 2))) - ret = np.cumsum(array_padded, dtype=float) - ret[width:] = ret[width:] - ret[:-width] - return ret[width - 1:] / width - - audio_mask = moving_average(voice_flags, vad_moving_average_width) - audio_mask = np.round(audio_mask).astype(np.bool) - - # Dilate the voiced regions - audio_mask = binary_dilation(audio_mask, np.ones(vad_max_silence_length + 1)) - audio_mask = np.repeat(audio_mask, samples_per_window) - - return wav[audio_mask == True] - - -def normalize_volume(wav, target_dBFS, increase_only=False, decrease_only=False): - if increase_only and decrease_only: - raise ValueError("Both increase only and decrease only are set") - dBFS_change = target_dBFS - 10 * np.log10(np.mean(wav ** 2)) - if (dBFS_change < 0 and increase_only) or (dBFS_change > 0 and decrease_only): - return wav - return wav * (10 ** (dBFS_change / 20)) diff --git a/spaces/Gladiator/Sartorius-Cell-Segmentation/README.md b/spaces/Gladiator/Sartorius-Cell-Segmentation/README.md deleted file mode 100644 index bf62e0f6faeb09231b4a9ddae93989c91a7ed983..0000000000000000000000000000000000000000 --- a/spaces/Gladiator/Sartorius-Cell-Segmentation/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Sartorius Cell Segmentation -emoji: 📊 -colorFrom: red -colorTo: red -sdk: streamlit -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
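For context on the `encoder/audio.py` module deleted above: its `preprocess_wav` and `wav_to_mel_spectrogram` helpers are meant to be chained, first normalizing volume and trimming long silences, then producing the mel frames consumed by the speaker encoder. The sketch below is a minimal, non-authoritative usage example: it assumes the module (and the constants it pulls from `encoder/params_data.py`) is importable, and the input path `speaker.wav` is a hypothetical placeholder.

```python
# Minimal usage sketch for the deleted encoder/audio.py helpers.
# Assumes `encoder.audio` and its params_data constants are importable;
# "speaker.wav" is a hypothetical input file.
from encoder.audio import preprocess_wav, wav_to_mel_spectrogram

# Load the file, resample to the configured rate, normalize volume,
# and trim long silences with the VAD-based heuristic.
wav = preprocess_wav("speaker.wav", normalize=True, trim_silence=True)

# Mel spectrogram with one row per frame and mel_n_channels columns (float32).
frames = wav_to_mel_spectrogram(wav)
print(wav.shape, frames.shape)
```

Note that if `webrtcvad` is not installed, the module only warns and skips silence trimming, so the call above still succeeds with the untrimmed waveform.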
diff --git a/spaces/Gmq-x/gpt-academic/config.py b/spaces/Gmq-x/gpt-academic/config.py deleted file mode 100644 index 16a3ef1e17370369673f15febf371c059738b3e8..0000000000000000000000000000000000000000 --- a/spaces/Gmq-x/gpt-academic/config.py +++ /dev/null @@ -1,58 +0,0 @@ -# [step 1]>> 例如: API_KEY = "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" (此key无效) -API_KEY = "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" # 可同时填写多个API-KEY,用英文逗号分割,例如API_KEY = "sk-openaikey1,sk-openaikey2,fkxxxx-api2dkey1,fkxxxx-api2dkey2" - -# [step 2]>> 改为True应用代理,如果直接在海外服务器部署,此处不修改 -USE_PROXY = False -if USE_PROXY: - # 填写格式是 [协议]:// [地址] :[端口],填写之前不要忘记把USE_PROXY改成True,如果直接在海外服务器部署,此处不修改 - # 例如 "socks5h://localhost:11284" - # [协议] 常见协议无非socks5h/http; 例如 v2**y 和 ss* 的默认本地协议是socks5h; 而cl**h 的默认本地协议是http - # [地址] 懂的都懂,不懂就填localhost或者127.0.0.1肯定错不了(localhost意思是代理软件安装在本机上) - # [端口] 在代理软件的设置里找。虽然不同的代理软件界面不一样,但端口号都应该在最显眼的位置上 - - # 代理网络的地址,打开你的科学上网软件查看代理的协议(socks5/http)、地址(localhost)和端口(11284) - proxies = { - # [协议]:// [地址] :[端口] - "http": "socks5h://localhost:11284", - "https": "socks5h://localhost:11284", - } -else: - proxies = None - -# [step 3]>> 多线程函数插件中,默认允许多少路线程同时访问OpenAI。Free trial users的限制是每分钟3次,Pay-as-you-go users的限制是每分钟3500次 -# 一言以蔽之:免费用户填3,OpenAI绑了信用卡的用户可以填 16 或者更高。提高限制请查询:https://platform.openai.com/docs/guides/rate-limits/overview -DEFAULT_WORKER_NUM = 3 - - -# [step 4]>> 以下配置可以优化体验,但大部分场合下并不需要修改 -# 对话窗的高度 -CHATBOT_HEIGHT = 1115 - -# 代码高亮 -CODE_HIGHLIGHT = True - -# 窗口布局 -LAYOUT = "LEFT-RIGHT" # "LEFT-RIGHT"(左右布局) # "TOP-DOWN"(上下布局) - -# 发送请求到OpenAI后,等待多久判定为超时 -TIMEOUT_SECONDS = 30 - -# 网页的端口, -1代表随机端口 -WEB_PORT = -1 - -# 如果OpenAI不响应(网络卡顿、代理失败、KEY失效),重试的次数限制 -MAX_RETRY = 2 - -# OpenAI模型选择是(gpt4现在只对申请成功的人开放) -LLM_MODEL = "gpt-3.5-turbo" # 可选 "chatglm" -AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "api2d-gpt-3.5-turbo"] - -# 本地LLM模型如ChatGLM的执行方式 CPU/GPU -LOCAL_MODEL_DEVICE = "cpu" # 可选 "cuda" - -# 设置gradio的并行线程数(不需要修改) -CONCURRENT_COUNT = 100 - -# 设置用户名和密码(不需要修改)(相关功能不稳定,与gradio版本和网络都相关,如果本地使用不建议加这个) -# [("username", "password"), ("username2", "password2"), ...] 
-AUTHENTICATION = [] diff --git a/spaces/Gmq-x/gpt-academic/crazy_functions/crazy_utils.py b/spaces/Gmq-x/gpt-academic/crazy_functions/crazy_utils.py deleted file mode 100644 index 4e0eba499e6f2fa94b1a962421b3c4bfef7a2f26..0000000000000000000000000000000000000000 --- a/spaces/Gmq-x/gpt-academic/crazy_functions/crazy_utils.py +++ /dev/null @@ -1,566 +0,0 @@ -import traceback -from toolbox import update_ui, get_conf - -def input_clipping(inputs, history, max_token_limit): - import numpy as np - from request_llm.bridge_all import model_info - enc = model_info["gpt-3.5-turbo"]['tokenizer'] - def get_token_num(txt): return len(enc.encode(txt, disallowed_special=())) - - mode = 'input-and-history' - # 当 输入部分的token占比 小于 全文的一半时,只裁剪历史 - input_token_num = get_token_num(inputs) - if input_token_num < max_token_limit//2: - mode = 'only-history' - max_token_limit = max_token_limit - input_token_num - - everything = [inputs] if mode == 'input-and-history' else [''] - everything.extend(history) - n_token = get_token_num('\n'.join(everything)) - everything_token = [get_token_num(e) for e in everything] - delta = max(everything_token) // 16 # 截断时的颗粒度 - - while n_token > max_token_limit: - where = np.argmax(everything_token) - encoded = enc.encode(everything[where], disallowed_special=()) - clipped_encoded = encoded[:len(encoded)-delta] - everything[where] = enc.decode(clipped_encoded)[:-1] # -1 to remove the may-be illegal char - everything_token[where] = get_token_num(everything[where]) - n_token = get_token_num('\n'.join(everything)) - - if mode == 'input-and-history': - inputs = everything[0] - else: - pass - history = everything[1:] - return inputs, history - -def request_gpt_model_in_new_thread_with_ui_alive( - inputs, inputs_show_user, llm_kwargs, - chatbot, history, sys_prompt, refresh_interval=0.2, - handle_token_exceed=True, - retry_times_at_unknown_error=2, - ): - """ - Request GPT model,请求GPT模型同时维持用户界面活跃。 - - 输入参数 Args (以_array结尾的输入变量都是列表,列表长度为子任务的数量,执行时,会把列表拆解,放到每个子线程中分别执行): - inputs (string): List of inputs (输入) - inputs_show_user (string): List of inputs to show user(展现在报告中的输入,借助此参数,在汇总报告中隐藏啰嗦的真实输入,增强报告的可读性) - top_p (float): Top p value for sampling from model distribution (GPT参数,浮点数) - temperature (float): Temperature value for sampling from model distribution(GPT参数,浮点数) - chatbot: chatbot inputs and outputs (用户界面对话窗口句柄,用于数据流可视化) - history (list): List of chat history (历史,对话历史列表) - sys_prompt (string): List of system prompts (系统输入,列表,用于输入给GPT的前提提示,比如你是翻译官怎样怎样) - refresh_interval (float, optional): Refresh interval for UI (default: 0.2) (刷新时间间隔频率,建议低于1,不可高于3,仅仅服务于视觉效果) - handle_token_exceed:是否自动处理token溢出的情况,如果选择自动处理,则会在溢出时暴力截断,默认开启 - retry_times_at_unknown_error:失败时的重试次数 - - 输出 Returns: - future: 输出,GPT返回的结果 - """ - import time - from concurrent.futures import ThreadPoolExecutor - from request_llm.bridge_all import predict_no_ui_long_connection - # 用户反馈 - chatbot.append([inputs_show_user, ""]) - yield from update_ui(chatbot=chatbot, history=[]) # 刷新界面 - executor = ThreadPoolExecutor(max_workers=16) - mutable = ["", time.time(), ""] - def _req_gpt(inputs, history, sys_prompt): - retry_op = retry_times_at_unknown_error - exceeded_cnt = 0 - while True: - # watchdog error - if len(mutable) >= 2 and (time.time()-mutable[1]) > 5: - raise RuntimeError("检测到程序终止。") - try: - # 【第一种情况】:顺利完成 - result = predict_no_ui_long_connection( - inputs=inputs, llm_kwargs=llm_kwargs, - history=history, sys_prompt=sys_prompt, observe_window=mutable) - return result - except ConnectionAbortedError as 
token_exceeded_error: - # 【第二种情况】:Token溢出 - if handle_token_exceed: - exceeded_cnt += 1 - # 【选择处理】 尝试计算比例,尽可能多地保留文本 - from toolbox import get_reduce_token_percent - p_ratio, n_exceed = get_reduce_token_percent(str(token_exceeded_error)) - MAX_TOKEN = 4096 - EXCEED_ALLO = 512 + 512 * exceeded_cnt - inputs, history = input_clipping(inputs, history, max_token_limit=MAX_TOKEN-EXCEED_ALLO) - mutable[0] += f'[Local Message] 警告,文本过长将进行截断,Token溢出数:{n_exceed}。\n\n' - continue # 返回重试 - else: - # 【选择放弃】 - tb_str = '```\n' + traceback.format_exc() + '```' - mutable[0] += f"[Local Message] 警告,在执行过程中遭遇问题, Traceback:\n\n{tb_str}\n\n" - return mutable[0] # 放弃 - except: - # 【第三种情况】:其他错误:重试几次 - tb_str = '```\n' + traceback.format_exc() + '```' - print(tb_str) - mutable[0] += f"[Local Message] 警告,在执行过程中遭遇问题, Traceback:\n\n{tb_str}\n\n" - if retry_op > 0: - retry_op -= 1 - mutable[0] += f"[Local Message] 重试中,请稍等 {retry_times_at_unknown_error-retry_op}/{retry_times_at_unknown_error}:\n\n" - if ("Rate limit reached" in tb_str) or ("Too Many Requests" in tb_str): - time.sleep(30) - time.sleep(5) - continue # 返回重试 - else: - time.sleep(5) - return mutable[0] # 放弃 - - # 提交任务 - future = executor.submit(_req_gpt, inputs, history, sys_prompt) - while True: - # yield一次以刷新前端页面 - time.sleep(refresh_interval) - # “喂狗”(看门狗) - mutable[1] = time.time() - if future.done(): - break - chatbot[-1] = [chatbot[-1][0], mutable[0]] - yield from update_ui(chatbot=chatbot, history=[]) # 刷新界面 - - final_result = future.result() - chatbot[-1] = [chatbot[-1][0], final_result] - yield from update_ui(chatbot=chatbot, history=[]) # 如果最后成功了,则删除报错信息 - return final_result - - -def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency( - inputs_array, inputs_show_user_array, llm_kwargs, - chatbot, history_array, sys_prompt_array, - refresh_interval=0.2, max_workers=-1, scroller_max_len=30, - handle_token_exceed=True, show_user_at_complete=False, - retry_times_at_unknown_error=2, - ): - """ - Request GPT model using multiple threads with UI and high efficiency - 请求GPT模型的[多线程]版。 - 具备以下功能: - 实时在UI上反馈远程数据流 - 使用线程池,可调节线程池的大小避免openai的流量限制错误 - 处理中途中止的情况 - 网络等出问题时,会把traceback和已经接收的数据转入输出 - - 输入参数 Args (以_array结尾的输入变量都是列表,列表长度为子任务的数量,执行时,会把列表拆解,放到每个子线程中分别执行): - inputs_array (list): List of inputs (每个子任务的输入) - inputs_show_user_array (list): List of inputs to show user(每个子任务展现在报告中的输入,借助此参数,在汇总报告中隐藏啰嗦的真实输入,增强报告的可读性) - llm_kwargs: llm_kwargs参数 - chatbot: chatbot (用户界面对话窗口句柄,用于数据流可视化) - history_array (list): List of chat history (历史对话输入,双层列表,第一层列表是子任务分解,第二层列表是对话历史) - sys_prompt_array (list): List of system prompts (系统输入,列表,用于输入给GPT的前提提示,比如你是翻译官怎样怎样) - refresh_interval (float, optional): Refresh interval for UI (default: 0.2) (刷新时间间隔频率,建议低于1,不可高于3,仅仅服务于视觉效果) - max_workers (int, optional): Maximum number of threads (default: see config.py) (最大线程数,如果子任务非常多,需要用此选项防止高频地请求openai导致错误) - scroller_max_len (int, optional): Maximum length for scroller (default: 30)(数据流的显示最后收到的多少个字符,仅仅服务于视觉效果) - handle_token_exceed (bool, optional): (是否在输入过长时,自动缩减文本) - handle_token_exceed:是否自动处理token溢出的情况,如果选择自动处理,则会在溢出时暴力截断,默认开启 - show_user_at_complete (bool, optional): (在结束时,把完整输入-输出结果显示在聊天框) - retry_times_at_unknown_error:子任务失败时的重试次数 - - 输出 Returns: - list: List of GPT model responses (每个子任务的输出汇总,如果某个子任务出错,response中会携带traceback报错信息,方便调试和定位问题。) - """ - import time, random - from concurrent.futures import ThreadPoolExecutor - from request_llm.bridge_all import predict_no_ui_long_connection - assert len(inputs_array) == len(history_array) - assert len(inputs_array) == 
len(sys_prompt_array) - if max_workers == -1: # 读取配置文件 - try: max_workers, = get_conf('DEFAULT_WORKER_NUM') - except: max_workers = 8 - if max_workers <= 0 or max_workers >= 20: max_workers = 8 - # 屏蔽掉 chatglm的多线程,可能会导致严重卡顿 - if not (llm_kwargs['llm_model'].startswith('gpt-') or llm_kwargs['llm_model'].startswith('api2d-')): - max_workers = 1 - - executor = ThreadPoolExecutor(max_workers=max_workers) - n_frag = len(inputs_array) - # 用户反馈 - chatbot.append(["请开始多线程操作。", ""]) - yield from update_ui(chatbot=chatbot, history=[]) # 刷新界面 - # 跨线程传递 - mutable = [["", time.time(), "等待中"] for _ in range(n_frag)] - - # 子线程任务 - def _req_gpt(index, inputs, history, sys_prompt): - gpt_say = "" - retry_op = retry_times_at_unknown_error - exceeded_cnt = 0 - mutable[index][2] = "执行中" - while True: - # watchdog error - if len(mutable[index]) >= 2 and (time.time()-mutable[index][1]) > 5: - raise RuntimeError("检测到程序终止。") - try: - # 【第一种情况】:顺利完成 - # time.sleep(10); raise RuntimeError("测试") - gpt_say = predict_no_ui_long_connection( - inputs=inputs, llm_kwargs=llm_kwargs, history=history, - sys_prompt=sys_prompt, observe_window=mutable[index], console_slience=True - ) - mutable[index][2] = "已成功" - return gpt_say - except ConnectionAbortedError as token_exceeded_error: - # 【第二种情况】:Token溢出, - if handle_token_exceed: - exceeded_cnt += 1 - # 【选择处理】 尝试计算比例,尽可能多地保留文本 - from toolbox import get_reduce_token_percent - p_ratio, n_exceed = get_reduce_token_percent(str(token_exceeded_error)) - MAX_TOKEN = 4096 - EXCEED_ALLO = 512 + 512 * exceeded_cnt - inputs, history = input_clipping(inputs, history, max_token_limit=MAX_TOKEN-EXCEED_ALLO) - gpt_say += f'[Local Message] 警告,文本过长将进行截断,Token溢出数:{n_exceed}。\n\n' - mutable[index][2] = f"截断重试" - continue # 返回重试 - else: - # 【选择放弃】 - tb_str = '```\n' + traceback.format_exc() + '```' - gpt_say += f"[Local Message] 警告,线程{index}在执行过程中遭遇问题, Traceback:\n\n{tb_str}\n\n" - if len(mutable[index][0]) > 0: gpt_say += "此线程失败前收到的回答:\n\n" + mutable[index][0] - mutable[index][2] = "输入过长已放弃" - return gpt_say # 放弃 - except: - # 【第三种情况】:其他错误 - tb_str = '```\n' + traceback.format_exc() + '```' - print(tb_str) - gpt_say += f"[Local Message] 警告,线程{index}在执行过程中遭遇问题, Traceback:\n\n{tb_str}\n\n" - if len(mutable[index][0]) > 0: gpt_say += "此线程失败前收到的回答:\n\n" + mutable[index][0] - if retry_op > 0: - retry_op -= 1 - wait = random.randint(5, 20) - if ("Rate limit reached" in tb_str) or ("Too Many Requests" in tb_str): - wait = wait * 3 - fail_info = "OpenAI绑定信用卡可解除频率限制 " - else: - fail_info = "" - # 也许等待十几秒后,情况会好转 - for i in range(wait): - mutable[index][2] = f"{fail_info}等待重试 {wait-i}"; time.sleep(1) - # 开始重试 - mutable[index][2] = f"重试中 {retry_times_at_unknown_error-retry_op}/{retry_times_at_unknown_error}" - continue # 返回重试 - else: - mutable[index][2] = "已失败" - wait = 5 - time.sleep(5) - return gpt_say # 放弃 - - # 异步任务开始 - futures = [executor.submit(_req_gpt, index, inputs, history, sys_prompt) for index, inputs, history, sys_prompt in zip( - range(len(inputs_array)), inputs_array, history_array, sys_prompt_array)] - cnt = 0 - while True: - # yield一次以刷新前端页面 - time.sleep(refresh_interval) - cnt += 1 - worker_done = [h.done() for h in futures] - if all(worker_done): - executor.shutdown() - break - # 更好的UI视觉效果 - observe_win = [] - # 每个线程都要“喂狗”(看门狗) - for thread_index, _ in enumerate(worker_done): - mutable[thread_index][1] = time.time() - # 在前端打印些好玩的东西 - for thread_index, _ in enumerate(worker_done): - print_something_really_funny = "[ ...`"+mutable[thread_index][0][-scroller_max_len:].\ - replace('\n', 
'').replace('```', '...').replace( - ' ', '.').replace('
', '.....').replace('$', '.')+"`... ]" - observe_win.append(print_something_really_funny) - # 在前端打印些好玩的东西 - stat_str = ''.join([f'`{mutable[thread_index][2]}`: {obs}\n\n' - if not done else f'`{mutable[thread_index][2]}`\n\n' - for thread_index, done, obs in zip(range(len(worker_done)), worker_done, observe_win)]) - # 在前端打印些好玩的东西 - chatbot[-1] = [chatbot[-1][0], f'多线程操作已经开始,完成情况: \n\n{stat_str}' + ''.join(['.']*(cnt % 10+1))] - yield from update_ui(chatbot=chatbot, history=[]) # 刷新界面 - - # 异步任务结束 - gpt_response_collection = [] - for inputs_show_user, f in zip(inputs_show_user_array, futures): - gpt_res = f.result() - gpt_response_collection.extend([inputs_show_user, gpt_res]) - - # 是否在结束时,在界面上显示结果 - if show_user_at_complete: - for inputs_show_user, f in zip(inputs_show_user_array, futures): - gpt_res = f.result() - chatbot.append([inputs_show_user, gpt_res]) - yield from update_ui(chatbot=chatbot, history=[]) # 刷新界面 - time.sleep(0.3) - return gpt_response_collection - - -def breakdown_txt_to_satisfy_token_limit(txt, get_token_fn, limit): - def cut(txt_tocut, must_break_at_empty_line): # 递归 - if get_token_fn(txt_tocut) <= limit: - return [txt_tocut] - else: - lines = txt_tocut.split('\n') - estimated_line_cut = limit / get_token_fn(txt_tocut) * len(lines) - estimated_line_cut = int(estimated_line_cut) - for cnt in reversed(range(estimated_line_cut)): - if must_break_at_empty_line: - if lines[cnt] != "": - continue - print(cnt) - prev = "\n".join(lines[:cnt]) - post = "\n".join(lines[cnt:]) - if get_token_fn(prev) < limit: - break - if cnt == 0: - raise RuntimeError("存在一行极长的文本!") - # print(len(post)) - # 列表递归接龙 - result = [prev] - result.extend(cut(post, must_break_at_empty_line)) - return result - try: - return cut(txt, must_break_at_empty_line=True) - except RuntimeError: - return cut(txt, must_break_at_empty_line=False) - - -def force_breakdown(txt, limit, get_token_fn): - """ - 当无法用标点、空行分割时,我们用最暴力的方法切割 - """ - for i in reversed(range(len(txt))): - if get_token_fn(txt[:i]) < limit: - return txt[:i], txt[i:] - return "Tiktoken未知错误", "Tiktoken未知错误" - -def breakdown_txt_to_satisfy_token_limit_for_pdf(txt, get_token_fn, limit): - # 递归 - def cut(txt_tocut, must_break_at_empty_line, break_anyway=False): - if get_token_fn(txt_tocut) <= limit: - return [txt_tocut] - else: - lines = txt_tocut.split('\n') - estimated_line_cut = limit / get_token_fn(txt_tocut) * len(lines) - estimated_line_cut = int(estimated_line_cut) - cnt = 0 - for cnt in reversed(range(estimated_line_cut)): - if must_break_at_empty_line: - if lines[cnt] != "": - continue - prev = "\n".join(lines[:cnt]) - post = "\n".join(lines[cnt:]) - if get_token_fn(prev) < limit: - break - if cnt == 0: - if break_anyway: - prev, post = force_breakdown(txt_tocut, limit, get_token_fn) - else: - raise RuntimeError(f"存在一行极长的文本!{txt_tocut}") - # print(len(post)) - # 列表递归接龙 - result = [prev] - result.extend(cut(post, must_break_at_empty_line, break_anyway=break_anyway)) - return result - try: - # 第1次尝试,将双空行(\n\n)作为切分点 - return cut(txt, must_break_at_empty_line=True) - except RuntimeError: - try: - # 第2次尝试,将单空行(\n)作为切分点 - return cut(txt, must_break_at_empty_line=False) - except RuntimeError: - try: - # 第3次尝试,将英文句号(.)作为切分点 - res = cut(txt.replace('.', '。\n'), must_break_at_empty_line=False) # 这个中文的句号是故意的,作为一个标识而存在 - return [r.replace('。\n', '.') for r in res] - except RuntimeError as e: - try: - # 第4次尝试,将中文句号(。)作为切分点 - res = cut(txt.replace('。', '。。\n'), must_break_at_empty_line=False) - return [r.replace('。。\n', '。') for r in res] - except 
RuntimeError as e: - # 第5次尝试,没办法了,随便切一下敷衍吧 - return cut(txt, must_break_at_empty_line=False, break_anyway=True) - - - -def read_and_clean_pdf_text(fp): - """ - 这个函数用于分割pdf,用了很多trick,逻辑较乱,效果奇好 - - **输入参数说明** - - `fp`:需要读取和清理文本的pdf文件路径 - - **输出参数说明** - - `meta_txt`:清理后的文本内容字符串 - - `page_one_meta`:第一页清理后的文本内容列表 - - **函数功能** - 读取pdf文件并清理其中的文本内容,清理规则包括: - - 提取所有块元的文本信息,并合并为一个字符串 - - 去除短块(字符数小于100)并替换为回车符 - - 清理多余的空行 - - 合并小写字母开头的段落块并替换为空格 - - 清除重复的换行 - - 将每个换行符替换为两个换行符,使每个段落之间有两个换行符分隔 - """ - import fitz, copy - import re - import numpy as np - from colorful import print亮黄, print亮绿 - fc = 0 # Index 0 文本 - fs = 1 # Index 1 字体 - fb = 2 # Index 2 框框 - REMOVE_FOOT_NOTE = True # 是否丢弃掉 不是正文的内容 (比正文字体小,如参考文献、脚注、图注等) - REMOVE_FOOT_FFSIZE_PERCENT = 0.95 # 小于正文的?时,判定为不是正文(有些文章的正文部分字体大小不是100%统一的,有肉眼不可见的小变化) - def primary_ffsize(l): - """ - 提取文本块主字体 - """ - fsize_statiscs = {} - for wtf in l['spans']: - if wtf['size'] not in fsize_statiscs: fsize_statiscs[wtf['size']] = 0 - fsize_statiscs[wtf['size']] += len(wtf['text']) - return max(fsize_statiscs, key=fsize_statiscs.get) - - def ffsize_same(a,b): - """ - 提取字体大小是否近似相等 - """ - return abs((a-b)/max(a,b)) < 0.02 - - with fitz.open(fp) as doc: - meta_txt = [] - meta_font = [] - - meta_line = [] - meta_span = [] - ############################## <第 1 步,搜集初始信息> ################################## - for index, page in enumerate(doc): - # file_content += page.get_text() - text_areas = page.get_text("dict") # 获取页面上的文本信息 - for t in text_areas['blocks']: - if 'lines' in t: - pf = 998 - for l in t['lines']: - txt_line = "".join([wtf['text'] for wtf in l['spans']]) - if len(txt_line) == 0: continue - pf = primary_ffsize(l) - meta_line.append([txt_line, pf, l['bbox'], l]) - for wtf in l['spans']: # for l in t['lines']: - meta_span.append([wtf['text'], wtf['size'], len(wtf['text'])]) - # meta_line.append(["NEW_BLOCK", pf]) - # 块元提取 for each word segment with in line for each line cross-line words for each block - meta_txt.extend([" ".join(["".join([wtf['text'] for wtf in l['spans']]) for l in t['lines']]).replace( - '- ', '') for t in text_areas['blocks'] if 'lines' in t]) - meta_font.extend([np.mean([np.mean([wtf['size'] for wtf in l['spans']]) - for l in t['lines']]) for t in text_areas['blocks'] if 'lines' in t]) - if index == 0: - page_one_meta = [" ".join(["".join([wtf['text'] for wtf in l['spans']]) for l in t['lines']]).replace( - '- ', '') for t in text_areas['blocks'] if 'lines' in t] - - ############################## <第 2 步,获取正文主字体> ################################## - fsize_statiscs = {} - for span in meta_span: - if span[1] not in fsize_statiscs: fsize_statiscs[span[1]] = 0 - fsize_statiscs[span[1]] += span[2] - main_fsize = max(fsize_statiscs, key=fsize_statiscs.get) - if REMOVE_FOOT_NOTE: - give_up_fize_threshold = main_fsize * REMOVE_FOOT_FFSIZE_PERCENT - - ############################## <第 3 步,切分和重新整合> ################################## - mega_sec = [] - sec = [] - for index, line in enumerate(meta_line): - if index == 0: - sec.append(line[fc]) - continue - if REMOVE_FOOT_NOTE: - if meta_line[index][fs] <= give_up_fize_threshold: - continue - if ffsize_same(meta_line[index][fs], meta_line[index-1][fs]): - # 尝试识别段落 - if meta_line[index][fc].endswith('.') and\ - (meta_line[index-1][fc] != 'NEW_BLOCK') and \ - (meta_line[index][fb][2] - meta_line[index][fb][0]) < (meta_line[index-1][fb][2] - meta_line[index-1][fb][0]) * 0.7: - sec[-1] += line[fc] - sec[-1] += "\n\n" - else: - sec[-1] += " " - sec[-1] += line[fc] - else: - if (index+1 < len(meta_line)) and \ 
- meta_line[index][fs] > main_fsize: - # 单行 + 字体大 - mega_sec.append(copy.deepcopy(sec)) - sec = [] - sec.append("# " + line[fc]) - else: - # 尝试识别section - if meta_line[index-1][fs] > meta_line[index][fs]: - sec.append("\n" + line[fc]) - else: - sec.append(line[fc]) - mega_sec.append(copy.deepcopy(sec)) - - finals = [] - for ms in mega_sec: - final = " ".join(ms) - final = final.replace('- ', ' ') - finals.append(final) - meta_txt = finals - - ############################## <第 4 步,乱七八糟的后处理> ################################## - def 把字符太少的块清除为回车(meta_txt): - for index, block_txt in enumerate(meta_txt): - if len(block_txt) < 100: - meta_txt[index] = '\n' - return meta_txt - meta_txt = 把字符太少的块清除为回车(meta_txt) - - def 清理多余的空行(meta_txt): - for index in reversed(range(1, len(meta_txt))): - if meta_txt[index] == '\n' and meta_txt[index-1] == '\n': - meta_txt.pop(index) - return meta_txt - meta_txt = 清理多余的空行(meta_txt) - - def 合并小写开头的段落块(meta_txt): - def starts_with_lowercase_word(s): - pattern = r"^[a-z]+" - match = re.match(pattern, s) - if match: - return True - else: - return False - for _ in range(100): - for index, block_txt in enumerate(meta_txt): - if starts_with_lowercase_word(block_txt): - if meta_txt[index-1] != '\n': - meta_txt[index-1] += ' ' - else: - meta_txt[index-1] = '' - meta_txt[index-1] += meta_txt[index] - meta_txt[index] = '\n' - return meta_txt - meta_txt = 合并小写开头的段落块(meta_txt) - meta_txt = 清理多余的空行(meta_txt) - - meta_txt = '\n'.join(meta_txt) - # 清除重复的换行 - for _ in range(5): - meta_txt = meta_txt.replace('\n\n', '\n') - - # 换行 -> 双换行 - meta_txt = meta_txt.replace('\n', '\n\n') - - ############################## <第 5 步,展示分割效果> ################################## - # for f in finals: - # print亮黄(f) - # print亮绿('***************************') - - return meta_txt, page_one_meta diff --git a/spaces/Gradio-Blocks/anime-colorization/pixel_guide_diffusion/gaussian_diffusion.py b/spaces/Gradio-Blocks/anime-colorization/pixel_guide_diffusion/gaussian_diffusion.py deleted file mode 100644 index 403d474f3bc3486dff7618d262f6437b2ab43e5c..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/anime-colorization/pixel_guide_diffusion/gaussian_diffusion.py +++ /dev/null @@ -1,841 +0,0 @@ -""" -This code started out as a PyTorch port of Ho et al's diffusion models: -https://github.com/hojonathanho/diffusion/blob/1e0dceb3b3495bbe19116a5e1b3596cd0706c543/diffusion_tf/diffusion_utils_2.py - -Docstrings have been added, as well as DDIM sampling and a new collection of beta schedules. -""" - -import enum -import math - -import numpy as np -import torch as th - -from .nn import mean_flat -from .losses import normal_kl, discretized_gaussian_log_likelihood - - -def get_named_beta_schedule(schedule_name, num_diffusion_timesteps): - """ - Get a pre-defined beta schedule for the given name. - - The beta schedule library consists of beta schedules which remain similar - in the limit of num_diffusion_timesteps. - Beta schedules may be added, but should not be removed or changed once - they are committed to maintain backwards compatibility. - """ - if schedule_name == "linear": - # Linear schedule from Ho et al, extended to work for any number of - # diffusion steps. 
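        # Worked example: with the default num_diffusion_timesteps = 1000 the
        # scale factor below is 1, recovering the original Ho et al. endpoints
        # beta_start = 1e-4 and beta_end = 0.02; for other step counts the
        # endpoints are rescaled by 1000 / T so the schedule stays comparable.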
- scale = 1000 / num_diffusion_timesteps - beta_start = scale * 0.0001 - beta_end = scale * 0.02 - return np.linspace( - beta_start, beta_end, num_diffusion_timesteps, dtype=np.float64 - ) - elif schedule_name == "cosine": - return betas_for_alpha_bar( - num_diffusion_timesteps, - lambda t: math.cos((t + 0.008) / 1.008 * math.pi / 2) ** 2, - ) - else: - raise NotImplementedError(f"unknown beta schedule: {schedule_name}") - - -def betas_for_alpha_bar(num_diffusion_timesteps, alpha_bar, max_beta=0.999): - """ - Create a beta schedule that discretizes the given alpha_t_bar function, - which defines the cumulative product of (1-beta) over time from t = [0,1]. - - :param num_diffusion_timesteps: the number of betas to produce. - :param alpha_bar: a lambda that takes an argument t from 0 to 1 and - produces the cumulative product of (1-beta) up to that - part of the diffusion process. - :param max_beta: the maximum beta to use; use values lower than 1 to - prevent singularities. - """ - betas = [] - for i in range(num_diffusion_timesteps): - t1 = i / num_diffusion_timesteps - t2 = (i + 1) / num_diffusion_timesteps - betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta)) - return np.array(betas) - - -class ModelMeanType(enum.Enum): - """ - Which type of output the model predicts. - """ - - PREVIOUS_X = enum.auto() # the model predicts x_{t-1} - START_X = enum.auto() # the model predicts x_0 - EPSILON = enum.auto() # the model predicts epsilon - - -class ModelVarType(enum.Enum): - """ - What is used as the model's output variance. - - The LEARNED_RANGE option has been added to allow the model to predict - values between FIXED_SMALL and FIXED_LARGE, making its job easier. - """ - - LEARNED = enum.auto() - FIXED_SMALL = enum.auto() - FIXED_LARGE = enum.auto() - LEARNED_RANGE = enum.auto() - - -class LossType(enum.Enum): - MSE = enum.auto() # use raw MSE loss (and KL when learning variances) - RESCALED_MSE = ( - enum.auto() - ) # use raw MSE loss (with RESCALED_KL when learning variances) - KL = enum.auto() # use the variational lower-bound - RESCALED_KL = enum.auto() # like KL, but rescale to estimate the full VLB - - def is_vb(self): - return self == LossType.KL or self == LossType.RESCALED_KL - - -class GaussianDiffusion: - """ - Utilities for training and sampling diffusion models. - - Ported directly from here, and then adapted over time to further experimentation. - https://github.com/hojonathanho/diffusion/blob/1e0dceb3b3495bbe19116a5e1b3596cd0706c543/diffusion_tf/diffusion_utils_2.py#L42 - - :param betas: a 1-D numpy array of betas for each diffusion timestep, - starting at T and going to 1. - :param model_mean_type: a ModelMeanType determining what the model outputs. - :param model_var_type: a ModelVarType determining how variance is output. - :param loss_type: a LossType determining the loss function to use. - :param rescale_timesteps: if True, pass floating point timesteps into the - model so that they are always scaled like in the - original paper (0 to 1000). - """ - - def __init__( - self, - *, - betas, - model_mean_type, - model_var_type, - loss_type, - rescale_timesteps=False, - ): - self.model_mean_type = model_mean_type - self.model_var_type = model_var_type - self.loss_type = loss_type - self.rescale_timesteps = rescale_timesteps - - # Use float64 for accuracy. 
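        # From betas the constructor precomputes length-T numpy arrays such as
        # alphas_cumprod[t] = prod_{s<=t} (1 - beta_s), together with their
        # square roots and reciprocals, so q_sample and the posterior formulas
        # below never have to recompute them per step.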
- betas = np.array(betas, dtype=np.float64) - self.betas = betas - assert len(betas.shape) == 1, "betas must be 1-D" - assert (betas > 0).all() and (betas <= 1).all() - - self.num_timesteps = int(betas.shape[0]) - - alphas = 1.0 - betas - self.alphas_cumprod = np.cumprod(alphas, axis=0) - self.alphas_cumprod_prev = np.append(1.0, self.alphas_cumprod[:-1]) - self.alphas_cumprod_next = np.append(self.alphas_cumprod[1:], 0.0) - assert self.alphas_cumprod_prev.shape == (self.num_timesteps,) - - # calculations for diffusion q(x_t | x_{t-1}) and others - self.sqrt_alphas_cumprod = np.sqrt(self.alphas_cumprod) - self.sqrt_one_minus_alphas_cumprod = np.sqrt(1.0 - self.alphas_cumprod) - self.log_one_minus_alphas_cumprod = np.log(1.0 - self.alphas_cumprod) - self.sqrt_recip_alphas_cumprod = np.sqrt(1.0 / self.alphas_cumprod) - self.sqrt_recipm1_alphas_cumprod = np.sqrt(1.0 / self.alphas_cumprod - 1) - - # calculations for posterior q(x_{t-1} | x_t, x_0) - self.posterior_variance = ( - betas * (1.0 - self.alphas_cumprod_prev) / (1.0 - self.alphas_cumprod) - ) - # log calculation clipped because the posterior variance is 0 at the - # beginning of the diffusion chain. - self.posterior_log_variance_clipped = np.log( - np.append(self.posterior_variance[1], self.posterior_variance[1:]) - ) - self.posterior_mean_coef1 = ( - betas * np.sqrt(self.alphas_cumprod_prev) / (1.0 - self.alphas_cumprod) - ) - self.posterior_mean_coef2 = ( - (1.0 - self.alphas_cumprod_prev) - * np.sqrt(alphas) - / (1.0 - self.alphas_cumprod) - ) - - def q_mean_variance(self, x_start, t): - """ - Get the distribution q(x_t | x_0). - - :param x_start: the [N x C x ...] tensor of noiseless inputs. - :param t: the number of diffusion steps (minus 1). Here, 0 means one step. - :return: A tuple (mean, variance, log_variance), all of x_start's shape. - """ - mean = ( - _extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start - ) - variance = _extract_into_tensor(1.0 - self.alphas_cumprod, t, x_start.shape) - log_variance = _extract_into_tensor( - self.log_one_minus_alphas_cumprod, t, x_start.shape - ) - return mean, variance, log_variance - - def q_sample(self, x_start, t, noise=None): - """ - Diffuse the data for a given number of diffusion steps. - - In other words, sample from q(x_t | x_0). - - :param x_start: the initial data batch. - :param t: the number of diffusion steps (minus 1). Here, 0 means one step. - :param noise: if specified, the split-out normal noise. - :return: A noisy version of x_start. 
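
        In closed form the returned sample is
            x_t = sqrt(alphas_cumprod[t]) * x_start
                  + sqrt(1 - alphas_cumprod[t]) * noise,  noise ~ N(0, I),
        which is exactly what the tensor expression below evaluates.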
- """ - if noise is None: - noise = th.randn_like(x_start) - assert noise.shape == x_start.shape - return ( - _extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start - + _extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) - * noise - ) - - def q_posterior_mean_variance(self, x_start, x_t, t): - """ - Compute the mean and variance of the diffusion posterior: - - q(x_{t-1} | x_t, x_0) - - """ - assert x_start.shape == x_t.shape - posterior_mean = ( - _extract_into_tensor(self.posterior_mean_coef1, t, x_t.shape) * x_start - + _extract_into_tensor(self.posterior_mean_coef2, t, x_t.shape) * x_t - ) - posterior_variance = _extract_into_tensor(self.posterior_variance, t, x_t.shape) - posterior_log_variance_clipped = _extract_into_tensor( - self.posterior_log_variance_clipped, t, x_t.shape - ) - assert ( - posterior_mean.shape[0] - == posterior_variance.shape[0] - == posterior_log_variance_clipped.shape[0] - == x_start.shape[0] - ) - return posterior_mean, posterior_variance, posterior_log_variance_clipped - - def p_mean_variance( - self, model, x, t, clip_denoised=True, denoised_fn=None, model_kwargs=None - ): - """ - Apply the model to get p(x_{t-1} | x_t), as well as a prediction of - the initial x, x_0. - - :param model: the model, which takes a signal and a batch of timesteps - as input. - :param x: the [N x C x ...] tensor at time t. - :param t: a 1-D Tensor of timesteps. - :param clip_denoised: if True, clip the denoised signal into [-1, 1]. - :param denoised_fn: if not None, a function which applies to the - x_start prediction before it is used to sample. Applies before - clip_denoised. - :param model_kwargs: if not None, a dict of extra keyword arguments to - pass to the model. This can be used for conditioning. - :return: a dict with the following keys: - - 'mean': the model mean output. - - 'variance': the model variance output. - - 'log_variance': the log of 'variance'. - - 'pred_xstart': the prediction for x_0. - """ - if model_kwargs is None: - model_kwargs = {} - - B, C = x.shape[:2] - assert t.shape == (B,) - model_output = model(x, self._scale_timesteps(t), **model_kwargs) - - if self.model_var_type in [ModelVarType.LEARNED, ModelVarType.LEARNED_RANGE]: - assert model_output.shape == (B, C * 2, *x.shape[2:]) - model_output, model_var_values = th.split(model_output, C, dim=1) - if self.model_var_type == ModelVarType.LEARNED: - model_log_variance = model_var_values - model_variance = th.exp(model_log_variance) - else: - min_log = _extract_into_tensor( - self.posterior_log_variance_clipped, t, x.shape - ) - max_log = _extract_into_tensor(np.log(self.betas), t, x.shape) - # The model_var_values is [-1, 1] for [min_var, max_var]. - frac = (model_var_values + 1) / 2 - model_log_variance = frac * max_log + (1 - frac) * min_log - model_variance = th.exp(model_log_variance) - else: - model_variance, model_log_variance = { - # for fixedlarge, we set the initial (log-)variance like so - # to get a better decoder log likelihood. 
- ModelVarType.FIXED_LARGE: ( - np.append(self.posterior_variance[1], self.betas[1:]), - np.log(np.append(self.posterior_variance[1], self.betas[1:])), - ), - ModelVarType.FIXED_SMALL: ( - self.posterior_variance, - self.posterior_log_variance_clipped, - ), - }[self.model_var_type] - model_variance = _extract_into_tensor(model_variance, t, x.shape) - model_log_variance = _extract_into_tensor(model_log_variance, t, x.shape) - - def process_xstart(x): - if denoised_fn is not None: - x = denoised_fn(x) - if clip_denoised: - return x.clamp(-1, 1) - return x - - if self.model_mean_type == ModelMeanType.PREVIOUS_X: - pred_xstart = process_xstart( - self._predict_xstart_from_xprev(x_t=x, t=t, xprev=model_output) - ) - model_mean = model_output - elif self.model_mean_type in [ModelMeanType.START_X, ModelMeanType.EPSILON]: - if self.model_mean_type == ModelMeanType.START_X: - pred_xstart = process_xstart(model_output) - else: - pred_xstart = process_xstart( - self._predict_xstart_from_eps(x_t=x, t=t, eps=model_output) - ) - model_mean, _, _ = self.q_posterior_mean_variance( - x_start=pred_xstart, x_t=x, t=t - ) - else: - raise NotImplementedError(self.model_mean_type) - - assert ( - model_mean.shape == model_log_variance.shape == pred_xstart.shape == x.shape - ) - return { - "mean": model_mean, - "variance": model_variance, - "log_variance": model_log_variance, - "pred_xstart": pred_xstart, - } - - def _predict_xstart_from_eps(self, x_t, t, eps): - assert x_t.shape == eps.shape - return ( - _extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - - _extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * eps - ) - - def _predict_xstart_from_xprev(self, x_t, t, xprev): - assert x_t.shape == xprev.shape - return ( # (xprev - coef2*x_t) / coef1 - _extract_into_tensor(1.0 / self.posterior_mean_coef1, t, x_t.shape) * xprev - - _extract_into_tensor( - self.posterior_mean_coef2 / self.posterior_mean_coef1, t, x_t.shape - ) - * x_t - ) - - def _predict_eps_from_xstart(self, x_t, t, pred_xstart): - return ( - _extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - - pred_xstart - ) / _extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) - - def _scale_timesteps(self, t): - if self.rescale_timesteps: - return t.float() * (1000.0 / self.num_timesteps) - return t - - def p_sample( - self, model, x, t, clip_denoised=True, denoised_fn=None, model_kwargs=None - ): - """ - Sample x_{t-1} from the model at the given timestep. - - :param model: the model to sample from. - :param x: the current tensor at x_{t-1}. - :param t: the value of t, starting at 0 for the first diffusion step. - :param clip_denoised: if True, clip the x_start prediction to [-1, 1]. - :param denoised_fn: if not None, a function which applies to the - x_start prediction before it is used to sample. - :param model_kwargs: if not None, a dict of extra keyword arguments to - pass to the model. This can be used for conditioning. - :return: a dict containing the following keys: - - 'sample': a random sample from the model. - - 'pred_xstart': a prediction of x_0. 
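
        The sample is formed as mean + exp(0.5 * log_variance) * z with
        z ~ N(0, I); at t == 0 the noise term is masked out and the mean
        is returned directly.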
- """ - out = self.p_mean_variance( - model, - x, - t, - clip_denoised=clip_denoised, - denoised_fn=denoised_fn, - model_kwargs=model_kwargs, - ) - noise = th.randn_like(x) - nonzero_mask = ( - (t != 0).float().view(-1, *([1] * (len(x.shape) - 1))) - ) # no noise when t == 0 - sample = out["mean"] + nonzero_mask * th.exp(0.5 * out["log_variance"]) * noise - return {"sample": sample, "pred_xstart": out["pred_xstart"]} - - def p_sample_loop( - self, - model, - shape, - noise=None, - clip_denoised=True, - denoised_fn=None, - model_kwargs=None, - device=None, - progress=False, - ): - """ - Generate samples from the model. - - :param model: the model module. - :param shape: the shape of the samples, (N, C, H, W). - :param noise: if specified, the noise from the encoder to sample. - Should be of the same shape as `shape`. - :param clip_denoised: if True, clip x_start predictions to [-1, 1]. - :param denoised_fn: if not None, a function which applies to the - x_start prediction before it is used to sample. - :param model_kwargs: if not None, a dict of extra keyword arguments to - pass to the model. This can be used for conditioning. - :param device: if specified, the device to create the samples on. - If not specified, use a model parameter's device. - :param progress: if True, show a tqdm progress bar. - :return: a non-differentiable batch of samples. - """ - final = None - for sample in self.p_sample_loop_progressive( - model, - shape, - noise=noise, - clip_denoised=clip_denoised, - denoised_fn=denoised_fn, - model_kwargs=model_kwargs, - device=device, - progress=progress, - ): - final = sample - return final["sample"] - - def p_sample_loop_progressive( - self, - model, - shape, - noise=None, - clip_denoised=True, - denoised_fn=None, - model_kwargs=None, - device=None, - progress=False, - ): - """ - Generate samples from the model and yield intermediate samples from - each timestep of diffusion. - - Arguments are the same as p_sample_loop(). - Returns a generator over dicts, where each dict is the return value of - p_sample(). - """ - if device is None: - device = next(model.parameters()).device - assert isinstance(shape, (tuple, list)) - if noise is not None: - img = noise - else: - img = th.randn(*shape, device=device) - indices = list(range(self.num_timesteps))[::-1] - - if progress: - # Lazy import so that we don't depend on tqdm. - from tqdm.auto import tqdm - - indices = tqdm(indices) - - for i in indices: - t = th.tensor([i] * shape[0], device=device) - with th.no_grad(): - out = self.p_sample( - model, - img, - t, - clip_denoised=clip_denoised, - denoised_fn=denoised_fn, - model_kwargs=model_kwargs, - ) - yield out - img = out["sample"] - - def ddim_sample( - self, - model, - x, - t, - clip_denoised=True, - denoised_fn=None, - model_kwargs=None, - eta=0.0, - ): - """ - Sample x_{t-1} from the model using DDIM. - - Same usage as p_sample(). - """ - out = self.p_mean_variance( - model, - x, - t, - clip_denoised=clip_denoised, - denoised_fn=denoised_fn, - model_kwargs=model_kwargs, - ) - # Usually our model outputs epsilon, but we re-derive it - # in case we used x_start or x_prev prediction. - eps = self._predict_eps_from_xstart(x, t, out["pred_xstart"]) - alpha_bar = _extract_into_tensor(self.alphas_cumprod, t, x.shape) - alpha_bar_prev = _extract_into_tensor(self.alphas_cumprod_prev, t, x.shape) - sigma = ( - eta - * th.sqrt((1 - alpha_bar_prev) / (1 - alpha_bar)) - * th.sqrt(1 - alpha_bar / alpha_bar_prev) - ) - # Equation 12. 
- noise = th.randn_like(x) - mean_pred = ( - out["pred_xstart"] * th.sqrt(alpha_bar_prev) - + th.sqrt(1 - alpha_bar_prev - sigma ** 2) * eps - ) - nonzero_mask = ( - (t != 0).float().view(-1, *([1] * (len(x.shape) - 1))) - ) # no noise when t == 0 - sample = mean_pred + nonzero_mask * sigma * noise - return {"sample": sample, "pred_xstart": out["pred_xstart"]} - - def ddim_reverse_sample( - self, - model, - x, - t, - clip_denoised=True, - denoised_fn=None, - model_kwargs=None, - eta=0.0, - ): - """ - Sample x_{t+1} from the model using DDIM reverse ODE. - """ - assert eta == 0.0, "Reverse ODE only for deterministic path" - out = self.p_mean_variance( - model, - x, - t, - clip_denoised=clip_denoised, - denoised_fn=denoised_fn, - model_kwargs=model_kwargs, - ) - # Usually our model outputs epsilon, but we re-derive it - # in case we used x_start or x_prev prediction. - eps = ( - _extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x.shape) * x - - out["pred_xstart"] - ) / _extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x.shape) - alpha_bar_next = _extract_into_tensor(self.alphas_cumprod_next, t, x.shape) - - # Equation 12. reversed - mean_pred = ( - out["pred_xstart"] * th.sqrt(alpha_bar_next) - + th.sqrt(1 - alpha_bar_next) * eps - ) - - return {"sample": mean_pred, "pred_xstart": out["pred_xstart"]} - - def ddim_sample_loop( - self, - model, - shape, - noise=None, - clip_denoised=True, - denoised_fn=None, - model_kwargs=None, - device=None, - progress=False, - eta=0.0, - ): - """ - Generate samples from the model using DDIM. - - Same usage as p_sample_loop(). - """ - final = None - for sample in self.ddim_sample_loop_progressive( - model, - shape, - noise=noise, - clip_denoised=clip_denoised, - denoised_fn=denoised_fn, - model_kwargs=model_kwargs, - device=device, - progress=progress, - eta=eta, - ): - final = sample - return final["sample"] - - def ddim_sample_loop_progressive( - self, - model, - shape, - noise=None, - clip_denoised=True, - denoised_fn=None, - model_kwargs=None, - device=None, - progress=False, - eta=0.0, - ): - """ - Use DDIM to sample from the model and yield intermediate samples from - each timestep of DDIM. - - Same usage as p_sample_loop_progressive(). - """ - if device is None: - device = next(model.parameters()).device - assert isinstance(shape, (tuple, list)) - if noise is not None: - img = noise - else: - img = th.randn(*shape, device=device) - indices = list(range(self.num_timesteps))[::-1] - - if progress: - # Lazy import so that we don't depend on tqdm. - from tqdm.auto import tqdm - - indices = tqdm(indices) - - for i in indices: - t = th.tensor([i] * shape[0], device=device) - with th.no_grad(): - out = self.ddim_sample( - model, - img, - t, - clip_denoised=clip_denoised, - denoised_fn=denoised_fn, - model_kwargs=model_kwargs, - eta=eta, - ) - yield out - img = out["sample"] - - def _vb_terms_bpd( - self, model, x_start, x_t, t, clip_denoised=True, model_kwargs=None - ): - """ - Get a term for the variational lower-bound. - - The resulting units are bits (rather than nats, as one might expect). - This allows for comparison to other papers. - - :return: a dict with the following keys: - - 'output': a shape [N] tensor of NLLs or KLs. - - 'pred_xstart': the x_0 predictions. 
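
        Both the KL term and the decoder NLL are divided by log(2) to convert
        nats to bits; the decoder NLL is only used where t == 0.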
- """ - true_mean, _, true_log_variance_clipped = self.q_posterior_mean_variance( - x_start=x_start, x_t=x_t, t=t - ) - out = self.p_mean_variance( - model, x_t, t, clip_denoised=clip_denoised, model_kwargs=model_kwargs - ) - kl = normal_kl( - true_mean, true_log_variance_clipped, out["mean"], out["log_variance"] - ) - kl = mean_flat(kl) / np.log(2.0) - - decoder_nll = -discretized_gaussian_log_likelihood( - x_start, means=out["mean"], log_scales=0.5 * out["log_variance"] - ) - assert decoder_nll.shape == x_start.shape - decoder_nll = mean_flat(decoder_nll) / np.log(2.0) - - # At the first timestep return the decoder NLL, - # otherwise return KL(q(x_{t-1}|x_t,x_0) || p(x_{t-1}|x_t)) - output = th.where((t == 0), decoder_nll, kl) - return {"output": output, "pred_xstart": out["pred_xstart"]} - - def training_losses(self, model, x_start, t, model_kwargs=None, noise=None): - """ - Compute training losses for a single timestep. - - :param model: the model to evaluate loss on. - :param x_start: the [N x C x ...] tensor of inputs. - :param t: a batch of timestep indices. - :param model_kwargs: if not None, a dict of extra keyword arguments to - pass to the model. This can be used for conditioning. - :param noise: if specified, the specific Gaussian noise to try to remove. - :return: a dict with the key "loss" containing a tensor of shape [N]. - Some mean or variance settings may also have other keys. - """ - if model_kwargs is None: - model_kwargs = {} - if noise is None: - noise = th.randn_like(x_start) - x_t = self.q_sample(x_start, t, noise=noise) - - terms = {} - - if self.loss_type == LossType.KL or self.loss_type == LossType.RESCALED_KL: - terms["loss"] = self._vb_terms_bpd( - model=model, - x_start=x_start, - x_t=x_t, - t=t, - clip_denoised=False, - model_kwargs=model_kwargs, - )["output"] - if self.loss_type == LossType.RESCALED_KL: - terms["loss"] *= self.num_timesteps - elif self.loss_type == LossType.MSE or self.loss_type == LossType.RESCALED_MSE: - model_output = model(x_t, self._scale_timesteps(t), **model_kwargs) - - if self.model_var_type in [ - ModelVarType.LEARNED, - ModelVarType.LEARNED_RANGE, - ]: - B, C = x_t.shape[:2] - assert model_output.shape == (B, C * 2, *x_t.shape[2:]) - model_output, model_var_values = th.split(model_output, C, dim=1) - # Learn the variance using the variational bound, but don't let - # it affect our mean prediction. - frozen_out = th.cat([model_output.detach(), model_var_values], dim=1) - terms["vb"] = self._vb_terms_bpd( - model=lambda *args, r=frozen_out: r, - x_start=x_start, - x_t=x_t, - t=t, - clip_denoised=False, - )["output"] - if self.loss_type == LossType.RESCALED_MSE: - # Divide by 1000 for equivalence with initial implementation. - # Without a factor of 1/1000, the VB term hurts the MSE term. - terms["vb"] *= self.num_timesteps / 1000.0 - - target = { - ModelMeanType.PREVIOUS_X: self.q_posterior_mean_variance( - x_start=x_start, x_t=x_t, t=t - )[0], - ModelMeanType.START_X: x_start, - ModelMeanType.EPSILON: noise, - }[self.model_mean_type] - assert model_output.shape == target.shape == x_start.shape - terms["mse"] = mean_flat((target - model_output) ** 2) - if "vb" in terms: - terms["loss"] = terms["mse"] + terms["vb"] - else: - terms["loss"] = terms["mse"] - else: - raise NotImplementedError(self.loss_type) - - return terms - - def _prior_bpd(self, x_start): - """ - Get the prior KL term for the variational lower-bound, measured in - bits-per-dim. - - This term can't be optimized, as it only depends on the encoder. 
- - :param x_start: the [N x C x ...] tensor of inputs. - :return: a batch of [N] KL values (in bits), one per batch element. - """ - batch_size = x_start.shape[0] - t = th.tensor([self.num_timesteps - 1] * batch_size, device=x_start.device) - qt_mean, _, qt_log_variance = self.q_mean_variance(x_start, t) - kl_prior = normal_kl( - mean1=qt_mean, logvar1=qt_log_variance, mean2=0.0, logvar2=0.0 - ) - return mean_flat(kl_prior) / np.log(2.0) - - def calc_bpd_loop(self, model, x_start, clip_denoised=True, model_kwargs=None): - """ - Compute the entire variational lower-bound, measured in bits-per-dim, - as well as other related quantities. - - :param model: the model to evaluate loss on. - :param x_start: the [N x C x ...] tensor of inputs. - :param clip_denoised: if True, clip denoised samples. - :param model_kwargs: if not None, a dict of extra keyword arguments to - pass to the model. This can be used for conditioning. - - :return: a dict containing the following keys: - - total_bpd: the total variational lower-bound, per batch element. - - prior_bpd: the prior term in the lower-bound. - - vb: an [N x T] tensor of terms in the lower-bound. - - xstart_mse: an [N x T] tensor of x_0 MSEs for each timestep. - - mse: an [N x T] tensor of epsilon MSEs for each timestep. - """ - device = x_start.device - batch_size = x_start.shape[0] - - vb = [] - xstart_mse = [] - mse = [] - for t in list(range(self.num_timesteps))[::-1]: - t_batch = th.tensor([t] * batch_size, device=device) - noise = th.randn_like(x_start) - x_t = self.q_sample(x_start=x_start, t=t_batch, noise=noise) - # Calculate VLB term at the current timestep - with th.no_grad(): - out = self._vb_terms_bpd( - model, - x_start=x_start, - x_t=x_t, - t=t_batch, - clip_denoised=clip_denoised, - model_kwargs=model_kwargs, - ) - vb.append(out["output"]) - xstart_mse.append(mean_flat((out["pred_xstart"] - x_start) ** 2)) - eps = self._predict_eps_from_xstart(x_t, t_batch, out["pred_xstart"]) - mse.append(mean_flat((eps - noise) ** 2)) - - vb = th.stack(vb, dim=1) - xstart_mse = th.stack(xstart_mse, dim=1) - mse = th.stack(mse, dim=1) - - prior_bpd = self._prior_bpd(x_start) - total_bpd = vb.sum(dim=1) + prior_bpd - return { - "total_bpd": total_bpd, - "prior_bpd": prior_bpd, - "vb": vb, - "xstart_mse": xstart_mse, - "mse": mse, - } - - -def _extract_into_tensor(arr, timesteps, broadcast_shape): - """ - Extract values from a 1-D numpy array for a batch of indices. - - :param arr: the 1-D numpy array. - :param timesteps: a tensor of indices into the array to extract. - :param broadcast_shape: a larger shape of K dimensions with the batch - dimension equal to the length of timesteps. - :return: a tensor of shape [batch_size, 1, ...] where the shape has K dims. - """ - res = th.from_numpy(arr).to(device=timesteps.device)[timesteps].float() - while len(res.shape) < len(broadcast_shape): - res = res[..., None] - return res.expand(broadcast_shape) diff --git a/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/relax/cleanup_test.py b/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/relax/cleanup_test.py deleted file mode 100644 index 7061b292953fe9512bd7243031ac1cb4611e4556..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/relax/cleanup_test.py +++ /dev/null @@ -1,137 +0,0 @@ -# Copyright 2021 DeepMind Technologies Limited -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Tests for relax.cleanup.""" -import io - -from absl.testing import absltest -from alphafold.relax import cleanup -from simtk.openmm.app.internal import pdbstructure - - -def _pdb_to_structure(pdb_str): - handle = io.StringIO(pdb_str) - return pdbstructure.PdbStructure(handle) - - -def _lines_to_structure(pdb_lines): - return _pdb_to_structure('\n'.join(pdb_lines)) - - -class CleanupTest(absltest.TestCase): - - def test_missing_residues(self): - pdb_lines = ['SEQRES 1 C 3 CYS GLY LEU', - 'ATOM 1 N CYS C 1 -12.262 20.115 60.959 1.00 ' - '19.08 N', - 'ATOM 2 CA CYS C 1 -11.065 20.934 60.773 1.00 ' - '17.23 C', - 'ATOM 3 C CYS C 1 -10.002 20.742 61.844 1.00 ' - '15.38 C', - 'ATOM 4 O CYS C 1 -10.284 20.225 62.929 1.00 ' - '16.04 O', - 'ATOM 5 N LEU C 3 -7.688 18.700 62.045 1.00 ' - '14.75 N', - 'ATOM 6 CA LEU C 3 -7.256 17.320 62.234 1.00 ' - '16.81 C', - 'ATOM 7 C LEU C 3 -6.380 16.864 61.070 1.00 ' - '16.95 C', - 'ATOM 8 O LEU C 3 -6.551 17.332 59.947 1.00 ' - '16.97 O'] - input_handle = io.StringIO('\n'.join(pdb_lines)) - alterations = {} - result = cleanup.fix_pdb(input_handle, alterations) - structure = _pdb_to_structure(result) - residue_names = [r.get_name() for r in structure.iter_residues()] - self.assertCountEqual(residue_names, ['CYS', 'GLY', 'LEU']) - self.assertCountEqual(alterations['missing_residues'].values(), [['GLY']]) - - def test_missing_atoms(self): - pdb_lines = ['SEQRES 1 A 1 PRO', - 'ATOM 1 CA PRO A 1 1.000 1.000 1.000 1.00 ' - ' 0.00 C'] - input_handle = io.StringIO('\n'.join(pdb_lines)) - alterations = {} - result = cleanup.fix_pdb(input_handle, alterations) - structure = _pdb_to_structure(result) - atom_names = [a.get_name() for a in structure.iter_atoms()] - self.assertCountEqual(atom_names, ['N', 'CD', 'HD2', 'HD3', 'CG', 'HG2', - 'HG3', 'CB', 'HB2', 'HB3', 'CA', 'HA', - 'C', 'O', 'H2', 'H3', 'OXT']) - missing_atoms_by_residue = list(alterations['missing_heavy_atoms'].values()) - self.assertLen(missing_atoms_by_residue, 1) - atoms_added = [a.name for a in missing_atoms_by_residue[0]] - self.assertCountEqual(atoms_added, ['N', 'CD', 'CG', 'CB', 'C', 'O']) - missing_terminals_by_residue = alterations['missing_terminals'] - self.assertLen(missing_terminals_by_residue, 1) - has_missing_terminal = [r.name for r in missing_terminals_by_residue.keys()] - self.assertCountEqual(has_missing_terminal, ['PRO']) - self.assertCountEqual([t for t in missing_terminals_by_residue.values()], - [['OXT']]) - - def test_remove_heterogens(self): - pdb_lines = ['SEQRES 1 A 1 GLY', - 'ATOM 1 CA GLY A 1 0.000 0.000 0.000 1.00 ' - ' 0.00 C', - 'ATOM 2 O HOH A 2 0.000 0.000 0.000 1.00 ' - ' 0.00 O'] - input_handle = io.StringIO('\n'.join(pdb_lines)) - alterations = {} - result = cleanup.fix_pdb(input_handle, alterations) - structure = _pdb_to_structure(result) - self.assertCountEqual([res.get_name() for res in structure.iter_residues()], - ['GLY']) - self.assertEqual(alterations['removed_heterogens'], set(['HOH'])) - - def test_fix_nonstandard_residues(self): - pdb_lines = ['SEQRES 1 A 1 DAL', - 'ATOM 1 CA DAL A 1 0.000 0.000 0.000 1.00 ' - ' 0.00 C'] - input_handle = 
io.StringIO('\n'.join(pdb_lines)) - alterations = {} - result = cleanup.fix_pdb(input_handle, alterations) - structure = _pdb_to_structure(result) - residue_names = [res.get_name() for res in structure.iter_residues()] - self.assertCountEqual(residue_names, ['ALA']) - self.assertLen(alterations['nonstandard_residues'], 1) - original_res, new_name = alterations['nonstandard_residues'][0] - self.assertEqual(original_res.id, '1') - self.assertEqual(new_name, 'ALA') - - def test_replace_met_se(self): - pdb_lines = ['SEQRES 1 A 1 MET', - 'ATOM 1 SD MET A 1 0.000 0.000 0.000 1.00 ' - ' 0.00 Se'] - structure = _lines_to_structure(pdb_lines) - alterations = {} - cleanup._replace_met_se(structure, alterations) - sd = [a for a in structure.iter_atoms() if a.get_name() == 'SD'] - self.assertLen(sd, 1) - self.assertEqual(sd[0].element_symbol, 'S') - self.assertCountEqual(alterations['Se_in_MET'], [sd[0].residue_number]) - - def test_remove_chains_of_length_one(self): - pdb_lines = ['SEQRES 1 A 1 GLY', - 'ATOM 1 CA GLY A 1 0.000 0.000 0.000 1.00 ' - ' 0.00 C'] - structure = _lines_to_structure(pdb_lines) - alterations = {} - cleanup._remove_chains_of_length_one(structure, alterations) - chains = list(structure.iter_chains()) - self.assertEmpty(chains) - self.assertCountEqual(alterations['removed_chains'].values(), [['A']]) - - -if __name__ == '__main__': - absltest.main() diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/mobilenet_v2/deeplabv3_m-v2-d8_512x1024_80k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/mobilenet_v2/deeplabv3_m-v2-d8_512x1024_80k_cityscapes.py deleted file mode 100644 index 267483d88ff25d75dc18c5c2d37375cd77c9639c..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/mobilenet_v2/deeplabv3_m-v2-d8_512x1024_80k_cityscapes.py +++ /dev/null @@ -1,12 +0,0 @@ -_base_ = '../deeplabv3/deeplabv3_r101-d8_512x1024_80k_cityscapes.py' -model = dict( - pretrained='mmcls://mobilenet_v2', - backbone=dict( - _delete_=True, - type='MobileNetV2', - widen_factor=1., - strides=(1, 2, 2, 1, 1, 1, 1), - dilations=(1, 1, 1, 2, 2, 4, 4), - out_indices=(1, 2, 4, 6)), - decode_head=dict(in_channels=320), - auxiliary_head=dict(in_channels=96)) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/mobilenet_v2/fcn_m-v2-d8_512x512_160k_ade20k.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/mobilenet_v2/fcn_m-v2-d8_512x512_160k_ade20k.py deleted file mode 100644 index c5f6ab0d62e269e44dac016eb5ac58f49c1fa292..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/mobilenet_v2/fcn_m-v2-d8_512x512_160k_ade20k.py +++ /dev/null @@ -1,12 +0,0 @@ -_base_ = '../fcn/fcn_r101-d8_512x512_160k_ade20k.py' -model = dict( - pretrained='mmcls://mobilenet_v2', - backbone=dict( - _delete_=True, - type='MobileNetV2', - widen_factor=1., - strides=(1, 2, 2, 1, 1, 1, 1), - dilations=(1, 1, 1, 2, 2, 4, 4), - out_indices=(1, 2, 4, 6)), - decode_head=dict(in_channels=320), - auxiliary_head=dict(in_channels=96)) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/pspnet/pspnet_r101-d8_480x480_40k_pascal_context.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/pspnet/pspnet_r101-d8_480x480_40k_pascal_context.py deleted file mode 100644 index 0b5a990604a77238375cb6d2b8298a382a457dd6..0000000000000000000000000000000000000000 --- 
a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/pspnet/pspnet_r101-d8_480x480_40k_pascal_context.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './pspnet_r50-d8_480x480_40k_pascal_context.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/GrandaddyShmax/MusicGen_Plus/tests/modules/test_seanet.py b/spaces/GrandaddyShmax/MusicGen_Plus/tests/modules/test_seanet.py deleted file mode 100644 index e5c51b340a2f94fb2828b14daf83d5fad645073d..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/MusicGen_Plus/tests/modules/test_seanet.py +++ /dev/null @@ -1,115 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from itertools import product - -import pytest -import torch - -from audiocraft.modules.seanet import SEANetEncoder, SEANetDecoder, SEANetResnetBlock -from audiocraft.modules import StreamableConv1d, StreamableConvTranspose1d - - -class TestSEANetModel: - - def test_base(self): - encoder = SEANetEncoder() - decoder = SEANetDecoder() - - x = torch.randn(1, 1, 24000) - z = encoder(x) - assert list(z.shape) == [1, 128, 75], z.shape - y = decoder(z) - assert y.shape == x.shape, (x.shape, y.shape) - - def test_causal(self): - encoder = SEANetEncoder(causal=True) - decoder = SEANetDecoder(causal=True) - x = torch.randn(1, 1, 24000) - - z = encoder(x) - assert list(z.shape) == [1, 128, 75], z.shape - y = decoder(z) - assert y.shape == x.shape, (x.shape, y.shape) - - def test_conv_skip_connection(self): - encoder = SEANetEncoder(true_skip=False) - decoder = SEANetDecoder(true_skip=False) - - x = torch.randn(1, 1, 24000) - z = encoder(x) - assert list(z.shape) == [1, 128, 75], z.shape - y = decoder(z) - assert y.shape == x.shape, (x.shape, y.shape) - - def test_seanet_encoder_decoder_final_act(self): - encoder = SEANetEncoder(true_skip=False) - decoder = SEANetDecoder(true_skip=False, final_activation='Tanh') - - x = torch.randn(1, 1, 24000) - z = encoder(x) - assert list(z.shape) == [1, 128, 75], z.shape - y = decoder(z) - assert y.shape == x.shape, (x.shape, y.shape) - - def _check_encoder_blocks_norm(self, encoder: SEANetEncoder, n_disable_blocks: int, norm: str): - n_blocks = 0 - for layer in encoder.model: - if isinstance(layer, StreamableConv1d): - n_blocks += 1 - assert layer.conv.norm_type == 'none' if n_blocks <= n_disable_blocks else norm - elif isinstance(layer, SEANetResnetBlock): - for resnet_layer in layer.block: - if isinstance(resnet_layer, StreamableConv1d): - # here we add + 1 to n_blocks as we increment n_blocks just after the block - assert resnet_layer.conv.norm_type == 'none' if (n_blocks + 1) <= n_disable_blocks else norm - - def test_encoder_disable_norm(self): - n_residuals = [0, 1, 3] - disable_blocks = [0, 1, 2, 3, 4, 5, 6] - norms = ['weight_norm', 'none'] - for n_res, disable_blocks, norm in product(n_residuals, disable_blocks, norms): - encoder = SEANetEncoder(n_residual_layers=n_res, norm=norm, - disable_norm_outer_blocks=disable_blocks) - self._check_encoder_blocks_norm(encoder, disable_blocks, norm) - - def _check_decoder_blocks_norm(self, decoder: SEANetDecoder, n_disable_blocks: int, norm: str): - n_blocks = 0 - for layer in decoder.model: - if isinstance(layer, StreamableConv1d): - n_blocks += 1 - assert layer.conv.norm_type == 'none' if (decoder.n_blocks - n_blocks) < n_disable_blocks else norm - elif isinstance(layer, 
StreamableConvTranspose1d): - n_blocks += 1 - assert layer.convtr.norm_type == 'none' if (decoder.n_blocks - n_blocks) < n_disable_blocks else norm - elif isinstance(layer, SEANetResnetBlock): - for resnet_layer in layer.block: - if isinstance(resnet_layer, StreamableConv1d): - assert resnet_layer.conv.norm_type == 'none' \ - if (decoder.n_blocks - n_blocks) < n_disable_blocks else norm - - def test_decoder_disable_norm(self): - n_residuals = [0, 1, 3] - disable_blocks = [0, 1, 2, 3, 4, 5, 6] - norms = ['weight_norm', 'none'] - for n_res, disable_blocks, norm in product(n_residuals, disable_blocks, norms): - decoder = SEANetDecoder(n_residual_layers=n_res, norm=norm, - disable_norm_outer_blocks=disable_blocks) - self._check_decoder_blocks_norm(decoder, disable_blocks, norm) - - def test_disable_norm_raises_exception(self): - # Invalid disable_norm_outer_blocks values raise exceptions - with pytest.raises(AssertionError): - SEANetEncoder(disable_norm_outer_blocks=-1) - - with pytest.raises(AssertionError): - SEANetEncoder(ratios=[1, 1, 2, 2], disable_norm_outer_blocks=7) - - with pytest.raises(AssertionError): - SEANetDecoder(disable_norm_outer_blocks=-1) - - with pytest.raises(AssertionError): - SEANetDecoder(ratios=[1, 1, 2, 2], disable_norm_outer_blocks=7) diff --git a/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/audiocraft/models/builders.py b/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/audiocraft/models/builders.py deleted file mode 100644 index 77ee5f96fea2e3c9e475fe961bc1a5ee473ed8eb..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/audiocraft/models/builders.py +++ /dev/null @@ -1,218 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -All the functions to build the relevant models and modules -from the Hydra config. -""" - -import typing as tp -import warnings - -import audiocraft -import omegaconf -import torch - -from .encodec import CompressionModel, EncodecModel, FlattenedCompressionModel # noqa -from .lm import LMModel -from ..modules.codebooks_patterns import ( - CodebooksPatternProvider, - DelayedPatternProvider, - ParallelPatternProvider, - UnrolledPatternProvider, - VALLEPattern, - MusicLMPattern, -) -from ..modules.conditioners import ( - BaseConditioner, - ConditioningProvider, - LUTConditioner, - T5Conditioner, - ConditionFuser, - ChromaStemConditioner, -) -from .. 
import quantization as qt -from ..utils.utils import dict_from_config - - -def get_quantizer(quantizer: str, cfg: omegaconf.DictConfig, dimension: int) -> qt.BaseQuantizer: - klass = { - 'no_quant': qt.DummyQuantizer, - 'rvq': qt.ResidualVectorQuantizer - }[quantizer] - kwargs = dict_from_config(getattr(cfg, quantizer)) - if quantizer != 'no_quant': - kwargs['dimension'] = dimension - return klass(**kwargs) - - -def get_encodec_autoencoder(encoder_name: str, cfg: omegaconf.DictConfig): - if encoder_name == 'seanet': - kwargs = dict_from_config(getattr(cfg, 'seanet')) - encoder_override_kwargs = kwargs.pop('encoder') - decoder_override_kwargs = kwargs.pop('decoder') - encoder_kwargs = {**kwargs, **encoder_override_kwargs} - decoder_kwargs = {**kwargs, **decoder_override_kwargs} - encoder = audiocraft.modules.SEANetEncoder(**encoder_kwargs) - decoder = audiocraft.modules.SEANetDecoder(**decoder_kwargs) - return encoder, decoder - else: - raise KeyError(f'Unexpected compression model {cfg.compression_model}') - - -def get_compression_model(cfg: omegaconf.DictConfig) -> CompressionModel: - """Instantiate a compression model. - """ - if cfg.compression_model == 'encodec': - kwargs = dict_from_config(getattr(cfg, 'encodec')) - encoder_name = kwargs.pop('autoencoder') - quantizer_name = kwargs.pop('quantizer') - encoder, decoder = get_encodec_autoencoder(encoder_name, cfg) - quantizer = get_quantizer(quantizer_name, cfg, encoder.dimension) - frame_rate = kwargs['sample_rate'] // encoder.hop_length - renormalize = kwargs.pop('renormalize', None) - renorm = kwargs.pop('renorm') - if renormalize is None: - renormalize = renorm is not None - warnings.warn("You are using a deprecated EnCodec model. Please migrate to new renormalization.") - return EncodecModel(encoder, decoder, quantizer, - frame_rate=frame_rate, renormalize=renormalize, **kwargs).to(cfg.device) - else: - raise KeyError(f'Unexpected compression model {cfg.compression_model}') - - -def get_lm_model(cfg: omegaconf.DictConfig) -> LMModel: - """Instantiate a transformer LM. 
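
    The returned LMModel wires together a codebooks pattern provider, a
    conditioning provider and a condition fuser, all built from the same
    config, and is moved to cfg.device.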
- """ - if cfg.lm_model == 'transformer_lm': - kwargs = dict_from_config(getattr(cfg, 'transformer_lm')) - n_q = kwargs['n_q'] - q_modeling = kwargs.pop('q_modeling', None) - codebooks_pattern_cfg = getattr(cfg, 'codebooks_pattern') - attribute_dropout = dict_from_config(getattr(cfg, 'attribute_dropout')) - cls_free_guidance = dict_from_config(getattr(cfg, 'classifier_free_guidance')) - cfg_prob, cfg_coef = cls_free_guidance["training_dropout"], cls_free_guidance["inference_coef"] - fuser = get_condition_fuser(cfg) - condition_provider = get_conditioner_provider(kwargs["dim"], cfg).to(cfg.device) - if len(fuser.fuse2cond['cross']) > 0: # enforce cross-att programatically - kwargs['cross_attention'] = True - if codebooks_pattern_cfg.modeling is None: - assert q_modeling is not None, \ - 'LM model should either have a codebook pattern defined or transformer_lm.q_modeling' - codebooks_pattern_cfg = omegaconf.OmegaConf.create( - {'modeling': q_modeling, 'delay': {'delays': list(range(n_q))}} - ) - pattern_provider = get_codebooks_pattern_provider(n_q, codebooks_pattern_cfg) - return LMModel( - pattern_provider=pattern_provider, - condition_provider=condition_provider, - fuser=fuser, - cfg_dropout=cfg_prob, - cfg_coef=cfg_coef, - attribute_dropout=attribute_dropout, - dtype=getattr(torch, cfg.dtype), - device=cfg.device, - **kwargs - ).to(cfg.device) - else: - raise KeyError(f'Unexpected LM model {cfg.lm_model}') - - -def get_conditioner_provider(output_dim: int, cfg: omegaconf.DictConfig) -> ConditioningProvider: - """Instantiate a conditioning model. - """ - device = cfg.device - duration = cfg.dataset.segment_duration - cfg = getattr(cfg, "conditioners") - cfg = omegaconf.OmegaConf.create({}) if cfg is None else cfg - conditioners: tp.Dict[str, BaseConditioner] = {} - with omegaconf.open_dict(cfg): - condition_provider_args = cfg.pop('args', {}) - for cond, cond_cfg in cfg.items(): - model_type = cond_cfg["model"] - model_args = cond_cfg[model_type] - if model_type == "t5": - conditioners[str(cond)] = T5Conditioner(output_dim=output_dim, device=device, **model_args) - elif model_type == "lut": - conditioners[str(cond)] = LUTConditioner(output_dim=output_dim, **model_args) - elif model_type == "chroma_stem": - model_args.pop('cache_path', None) - conditioners[str(cond)] = ChromaStemConditioner( - output_dim=output_dim, - duration=duration, - device=device, - **model_args - ) - else: - raise ValueError(f"unrecognized conditioning model: {model_type}") - conditioner = ConditioningProvider(conditioners, device=device, **condition_provider_args) - return conditioner - - -def get_condition_fuser(cfg: omegaconf.DictConfig) -> ConditionFuser: - """Instantiate a condition fuser object. - """ - fuser_cfg = getattr(cfg, "fuser") - fuser_methods = ["sum", "cross", "prepend", "input_interpolate"] - fuse2cond = {k: fuser_cfg[k] for k in fuser_methods} - kwargs = {k: v for k, v in fuser_cfg.items() if k not in fuser_methods} - fuser = ConditionFuser(fuse2cond=fuse2cond, **kwargs) - return fuser - - -def get_codebooks_pattern_provider(n_q: int, cfg: omegaconf.DictConfig) -> CodebooksPatternProvider: - """Instantiate a codebooks pattern provider object. 
- """ - pattern_providers = { - 'parallel': ParallelPatternProvider, - 'delay': DelayedPatternProvider, - 'unroll': UnrolledPatternProvider, - 'valle': VALLEPattern, - 'musiclm': MusicLMPattern, - } - name = cfg.modeling - kwargs = dict_from_config(cfg.get(name)) if hasattr(cfg, name) else {} - klass = pattern_providers[name] - return klass(n_q, **kwargs) - - -def get_debug_compression_model(device='cpu'): - """Instantiate a debug compression model to be used for unit tests. - """ - seanet_kwargs = { - 'n_filters': 4, - 'n_residual_layers': 1, - 'dimension': 32, - 'ratios': [10, 8, 16] # 25 Hz at 32kHz - } - encoder = audiocraft.modules.SEANetEncoder(**seanet_kwargs) - decoder = audiocraft.modules.SEANetDecoder(**seanet_kwargs) - quantizer = qt.ResidualVectorQuantizer(dimension=32, bins=400, n_q=4) - init_x = torch.randn(8, 32, 128) - quantizer(init_x, 1) # initialize kmeans etc. - compression_model = EncodecModel( - encoder, decoder, quantizer, - frame_rate=25, sample_rate=32000, channels=1).to(device) - return compression_model.eval() - - -def get_debug_lm_model(device='cpu'): - """Instantiate a debug LM to be used for unit tests. - """ - pattern = DelayedPatternProvider(n_q=4) - dim = 16 - providers = { - 'description': LUTConditioner(n_bins=128, dim=dim, output_dim=dim, tokenizer="whitespace"), - } - condition_provider = ConditioningProvider(providers) - fuser = ConditionFuser( - {'cross': ['description'], 'prepend': [], - 'sum': [], 'input_interpolate': []}) - lm = LMModel( - pattern, condition_provider, fuser, - n_q=4, card=400, dim=dim, num_heads=4, custom=True, num_layers=2, - cross_attention=True, causal=True) - return lm.to(device).eval() diff --git a/spaces/GroveStreet/GTA_SOVITS/modules/F0Predictor/DioF0Predictor.py b/spaces/GroveStreet/GTA_SOVITS/modules/F0Predictor/DioF0Predictor.py deleted file mode 100644 index 4ab27de23cae4dbc282e30f84501afebd1a37518..0000000000000000000000000000000000000000 --- a/spaces/GroveStreet/GTA_SOVITS/modules/F0Predictor/DioF0Predictor.py +++ /dev/null @@ -1,85 +0,0 @@ -from modules.F0Predictor.F0Predictor import F0Predictor -import pyworld -import numpy as np - -class DioF0Predictor(F0Predictor): - def __init__(self,hop_length=512,f0_min=50,f0_max=1100,sampling_rate=44100): - self.hop_length = hop_length - self.f0_min = f0_min - self.f0_max = f0_max - self.sampling_rate = sampling_rate - - def interpolate_f0(self,f0): - ''' - 对F0进行插值处理 - ''' - - data = np.reshape(f0, (f0.size, 1)) - - vuv_vector = np.zeros((data.size, 1), dtype=np.float32) - vuv_vector[data > 0.0] = 1.0 - vuv_vector[data <= 0.0] = 0.0 - - ip_data = data - - frame_number = data.size - last_value = 0.0 - for i in range(frame_number): - if data[i] <= 0.0: - j = i + 1 - for j in range(i + 1, frame_number): - if data[j] > 0.0: - break - if j < frame_number - 1: - if last_value > 0.0: - step = (data[j] - data[i - 1]) / float(j - i) - for k in range(i, j): - ip_data[k] = data[i - 1] + step * (k - i + 1) - else: - for k in range(i, j): - ip_data[k] = data[j] - else: - for k in range(i, frame_number): - ip_data[k] = last_value - else: - ip_data[i] = data[i] #这里可能存在一个没有必要的拷贝 - last_value = data[i] - - return ip_data[:,0], vuv_vector[:,0] - - def resize_f0(self,x, target_len): - source = np.array(x) - source[source<0.001] = np.nan - target = np.interp(np.arange(0, len(source)*target_len, len(source))/ target_len, np.arange(0, len(source)), source) - res = np.nan_to_num(target) - return res - - def compute_f0(self,wav,p_len=None): - if p_len is None: - p_len = wav.shape[0]//self.hop_length - 
f0, t = pyworld.dio( - wav.astype(np.double), - fs=self.sampling_rate, - f0_floor=self.f0_min, - f0_ceil=self.f0_max, - frame_period=1000 * self.hop_length / self.sampling_rate, - ) - f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate) - for index, pitch in enumerate(f0): - f0[index] = round(pitch, 1) - return self.interpolate_f0(self.resize_f0(f0, p_len))[0] - - def compute_f0_uv(self,wav,p_len=None): - if p_len is None: - p_len = wav.shape[0]//self.hop_length - f0, t = pyworld.dio( - wav.astype(np.double), - fs=self.sampling_rate, - f0_floor=self.f0_min, - f0_ceil=self.f0_max, - frame_period=1000 * self.hop_length / self.sampling_rate, - ) - f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate) - for index, pitch in enumerate(f0): - f0[index] = round(pitch, 1) - return self.interpolate_f0(self.resize_f0(f0, p_len)) diff --git a/spaces/HaHaBill/LandShapes-Antarctica/app.py b/spaces/HaHaBill/LandShapes-Antarctica/app.py deleted file mode 100644 index 0ec2d2bb45bbbeca2ef777b121820ccdb1d252ed..0000000000000000000000000000000000000000 --- a/spaces/HaHaBill/LandShapes-Antarctica/app.py +++ /dev/null @@ -1,302 +0,0 @@ -from ipywidgets import fixed -import gradio as gr -from skimage import img_as_ubyte -from config import Config -from decomposition import get_or_compute -from models import get_instrumented_model -import imageio -from PIL import Image -import ipywidgets as widgets -import numpy as np -import PIL -import torch -from IPython.utils import io -import nltk -nltk.download('wordnet') - -# @title Load Model -selected_model = 'landshapes-v2' - -# Load model - -# Speed up computation -torch.autograd.set_grad_enabled(False) -torch.backends.cudnn.benchmark = True - -# Specify model to use -config = Config( - model='StyleGAN2', - layer='style', - output_class=selected_model, - components=80, - use_w=True, - batch_size=5_000, # style layer quite small -) -device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") - -inst = get_instrumented_model(config.model, config.output_class, - config.layer, torch.device(device), use_w=config.use_w) - -path_to_components = get_or_compute(config, inst) - -model = inst.model - -comps = np.load(path_to_components) -lst = comps.files -latent_dirs = [] -latent_stdevs = [] - -load_activations = False - -for item in lst: - if load_activations: - if item == 'act_comp': - for i in range(comps[item].shape[0]): - latent_dirs.append(comps[item][i]) - if item == 'act_stdev': - for i in range(comps[item].shape[0]): - latent_stdevs.append(comps[item][i]) - else: - if item == 'lat_comp': - for i in range(comps[item].shape[0]): - latent_dirs.append(comps[item][i]) - if item == 'lat_stdev': - for i in range(comps[item].shape[0]): - latent_stdevs.append(comps[item][i]) - - -# @title Define functions - - -# Taken from https://github.com/alexanderkuk/log-progress -def log_progress(sequence, every=1, size=None, name='Items'): - from ipywidgets import IntProgress, HTML, VBox - from IPython.display import display - - is_iterator = False - if size is None: - try: - size = len(sequence) - except TypeError: - is_iterator = True - if size is not None: - if every is None: - if size <= 200: - every = 1 - else: - every = int(size / 200) # every 0.5% - else: - assert every is not None, 'sequence is iterator, set every' - - if is_iterator: - progress = IntProgress(min=0, max=1, value=1) - progress.bar_style = 'info' - else: - progress = IntProgress(min=0, max=size, value=0) - label = HTML() - box = VBox(children=[label, progress]) - 
display(box) - - index = 0 - try: - for index, record in enumerate(sequence, 1): - if index == 1 or index % every == 0: - if is_iterator: - label.value = '{name}: {index} / ?'.format( - name=name, - index=index - ) - else: - progress.value = index - label.value = u'{name}: {index} / {size}'.format( - name=name, - index=index, - size=size - ) - yield record - except: - progress.bar_style = 'danger' - raise - else: - progress.bar_style = 'success' - progress.value = index - label.value = "{name}: {index}".format( - name=name, - index=str(index or '?') - ) - - -def name_direction(sender): - if not text.value: - print('Please name the direction before saving') - return - - if num in named_directions.values(): - target_key = list(named_directions.keys())[ - list(named_directions.values()).index(num)] - print(f'Direction already named: {target_key}') - print(f'Overwriting... ') - del(named_directions[target_key]) - named_directions[text.value] = [num, start_layer.value, end_layer.value] - save_direction(random_dir, text.value) - for item in named_directions: - print(item, named_directions[item]) - - -def save_direction(direction, filename): - filename += ".npy" - np.save(filename, direction, allow_pickle=True, fix_imports=True) - print(f'Latent direction saved as {filename}') - - -def mix_w(w1, w2, content, style): - for i in range(0, 5): - w2[i] = w1[i] * (1 - content) + w2[i] * content - - for i in range(5, 16): - w2[i] = w1[i] * (1 - style) + w2[i] * style - - return w2 - - -def display_sample_pytorch(seed, truncation, directions, distances, scale, start, end, w=None, disp=True, save=None, noise_spec=None): - # blockPrint() - model.truncation = truncation - if w is None: - w = model.sample_latent(1, seed=seed).detach().cpu().numpy() - w = [w]*model.get_max_latents() # one per layer - else: - w = [np.expand_dims(x, 0) for x in w] - - for l in range(start, end): - for i in range(len(directions)): - w[l] = w[l] + directions[i] * distances[i] * scale - - torch.cuda.empty_cache() - # save image and display - out = model.sample_np(w) - final_im = Image.fromarray( - (out * 255).astype(np.uint8)).resize((500, 500), Image.LANCZOS) - - if save is not None: - if disp == False: - print(save) - final_im.save(f'out/{seed}_{save:05}.png') - if disp: - display(final_im) - - return final_im - - -def generate_mov(seed, truncation, direction_vec, scale, layers, n_frames, out_name='out', noise_spec=None, loop=True): - """Generates a mov moving back and forth along the chosen direction vector""" - # Example of reading a generated set of images, and storing as MP4. 
- movieName = f'{out_name}.mp4' - offset = -10 - step = 20 / n_frames - imgs = [] - for i in log_progress(range(n_frames), name="Generating frames"): - print(f'\r{i} / {n_frames}', end='') - w = model.sample_latent(1, seed=seed).cpu().numpy() - - model.truncation = truncation - w = [w]*model.get_max_latents() # one per layer - for l in layers: - if l <= model.get_max_latents(): - w[l] = w[l] + direction_vec * offset * scale - - # save image and display - out = model.sample_np(w) - final_im = Image.fromarray((out * 255).astype(np.uint8)) - imgs.append(out) - # increase offset - offset += step - if loop: - imgs += imgs[::-1] - with imageio.get_writer(movieName, mode='I') as writer: - for image in log_progress(list(imgs), name="Creating animation"): - writer.append_data(img_as_ubyte(image)) - - -# @title Demo UI - - -def generate_image(seed1, seed2, content, style, truncation, c0, c1, c2, c3, c4, c5, c6, start_layer, end_layer): - seed1 = int(seed1) - seed2 = int(seed2) - - scale = 1 - params = {'c0': c0, - 'c1': c1, - 'c2': c2, - 'c3': c3, - 'c4': c4, - 'c5': c5, - 'c6': c6} - - param_indexes = {'c0': 0, - 'c1': 1, - 'c2': 2, - 'c3': 3, - 'c4': 4, - 'c5': 5, - 'c6': 6} - - directions = [] - distances = [] - for k, v in params.items(): - directions.append(latent_dirs[param_indexes[k]]) - distances.append(v) - - w1 = model.sample_latent(1, seed=seed1).detach().cpu().numpy() - w1 = [w1]*model.get_max_latents() # one per layer - im1 = model.sample_np(w1) - - w2 = model.sample_latent(1, seed=seed2).detach().cpu().numpy() - w2 = [w2]*model.get_max_latents() # one per layer - im2 = model.sample_np(w2) - combined_im = np.concatenate([im1, im2], axis=1) - input_im = Image.fromarray((combined_im * 255).astype(np.uint8)) - - mixed_w = mix_w(w1, w2, content, style) - return display_sample_pytorch(seed1, truncation, directions, distances, scale, int(start_layer), int(end_layer), w=mixed_w, disp=False) - - -truncation = gr.inputs.Slider( - minimum=0, maximum=1, default=0.5, label="Truncation") -start_layer = gr.inputs.Number(default=3, label="Start Layer") -end_layer = gr.inputs.Number(default=14, label="End Layer") -seed1 = gr.inputs.Number(default=0, label="Seed 1") -seed2 = gr.inputs.Number(default=0, label="Seed 2") -content = gr.inputs.Slider( - label="Structure", minimum=0, maximum=1, default=0.5) -style = gr.inputs.Slider(label="Style", minimum=0, maximum=1, default=0.5) - -slider_max_val = 20 -slider_min_val = -20 -slider_step = 1 - -c0 = gr.inputs.Slider(label="Component 1", - minimum=slider_min_val, maximum=slider_max_val, default=0) -c1 = gr.inputs.Slider(label="Component 2", - minimum=slider_min_val, maximum=slider_max_val, default=0) -c2 = gr.inputs.Slider( - label="Component 3", minimum=slider_min_val, maximum=slider_max_val, default=0) -c3 = gr.inputs.Slider(label="Component 4", minimum=slider_min_val, - maximum=slider_max_val, default=0) -c4 = gr.inputs.Slider(label="Component 5", minimum=slider_min_val, - maximum=slider_max_val, default=0) -c5 = gr.inputs.Slider(label="Component 6", minimum=slider_min_val, - maximum=slider_max_val, default=0) -c6 = gr.inputs.Slider(label="Component 7", - minimum=slider_min_val, maximum=slider_max_val, default=0) - - -scale = 1 - -inputs = [seed1, seed2, content, style, truncation, - c0, c1, c2, c3, c4, c5, c6, start_layer, end_layer] -description = "Change the seed number to generate different parent design" - -gr.Interface(generate_image, inputs, [ - "image"], description=description, live=True, title="Landshapes Online").launch() diff --git 
a/spaces/Haokko/AronaTTS/text/__init__.py b/spaces/Haokko/AronaTTS/text/__init__.py deleted file mode 100644 index 4e69c354dd24e3243980236eca962cd5945a92fc..0000000000000000000000000000000000000000 --- a/spaces/Haokko/AronaTTS/text/__init__.py +++ /dev/null @@ -1,32 +0,0 @@ -""" from https://github.com/keithito/tacotron """ -from text import cleaners - - -def text_to_sequence(text, symbols, cleaner_names): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - Returns: - List of integers corresponding to the symbols in the text - ''' - _symbol_to_id = {s: i for i, s in enumerate(symbols)} - - sequence = [] - - clean_text = _clean_text(text, cleaner_names) - for symbol in clean_text: - if symbol not in _symbol_to_id.keys(): - continue - symbol_id = _symbol_to_id[symbol] - sequence += [symbol_id] - return sequence - - -def _clean_text(text, cleaner_names): - for name in cleaner_names: - cleaner = getattr(cleaners, name) - if not cleaner: - raise Exception('Unknown cleaner: %s' % name) - text = cleaner(text) - return text diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/latent_depth/latent_depth_src/modules/latent_layers.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/latent_depth/latent_depth_src/modules/latent_layers.py deleted file mode 100644 index 2be05d5535cb05b16f61603a7356df2326bf2e23..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/latent_depth/latent_depth_src/modules/latent_layers.py +++ /dev/null @@ -1,75 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.nn as nn - - -class LayerSelect(nn.Module): - """Compute samples (from a Gumbel-Sigmoid distribution) which is used as - either (soft) weighting or (hard) selection of residual connection. - https://arxiv.org/abs/2009.13102 - """ - def __init__(self, num_layers, num_logits, soft_select=False, sampling_tau=5.): - super(LayerSelect, self).__init__() - self.layer_logits = torch.nn.Parameter( - torch.Tensor(num_logits, num_layers), - requires_grad=True, - ) - self.hard_select = not soft_select - self.tau = sampling_tau - self.detach_grad = False - self.layer_samples = [None] * num_logits - - def sample(self, logit_idx): - """To leverage the efficiency of distributed training, samples for all - layers are computed at once for each logit_idx. Logits are parameters - learnt independent of each other. - - Args: - logit_idx: The index of logit parameters used for sampling. 
- """ - assert logit_idx is not None - self.samples = self._gumbel_sigmoid( - self.layer_logits[logit_idx, :].detach() - if self.detach_grad - else self.layer_logits[logit_idx, :], - dim=-1, - tau=self.tau, - hard=self.hard_select, - ) - self.layer_samples[logit_idx] = self.samples - - def forward(self, i): - sample = self.samples[i] - return sample - - def _gumbel_sigmoid( - self, logits, tau=1, hard=False, eps=1e-10, dim=-1, threshold=0.5 - ): - # ~Gumbel(0,1) - gumbels1 = ( - -torch.empty_like(logits, memory_format=torch.legacy_contiguous_format) - .exponential_() - .log() - ) - gumbels2 = ( - -torch.empty_like(logits, memory_format=torch.legacy_contiguous_format) - .exponential_() - .log() - ) - # Difference of two gumbels because we apply a sigmoid - gumbels1 = (logits + gumbels1 - gumbels2) / tau - y_soft = gumbels1.sigmoid() - if hard: - # Straight through. - y_hard = torch.zeros_like( - logits, memory_format=torch.legacy_contiguous_format - ).masked_fill(y_soft > threshold, 1.0) - ret = y_hard - y_soft.detach() + y_soft - else: - # Reparametrization trick. - ret = y_soft - return ret diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/encoders/gpt2_bpe_utils.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/encoders/gpt2_bpe_utils.py deleted file mode 100644 index 688d4e36e358df2dcc432d37d3e57bd81e2f1ed1..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/encoders/gpt2_bpe_utils.py +++ /dev/null @@ -1,140 +0,0 @@ -""" -Byte pair encoding utilities from GPT-2. - -Original source: https://github.com/openai/gpt-2/blob/master/src/encoder.py -Original license: MIT -""" - -import json -from functools import lru_cache - - -@lru_cache() -def bytes_to_unicode(): - """ - Returns list of utf-8 byte and a corresponding list of unicode strings. - The reversible bpe codes work on unicode strings. - This means you need a large # of unicode characters in your vocab if you want to avoid UNKs. - When you're at something like a 10B token dataset you end up needing around 5K for decent coverage. - This is a signficant percentage of your normal, say, 32K bpe vocab. - To avoid that, we want lookup tables between utf-8 bytes and unicode strings. - And avoids mapping to whitespace/control characters the bpe code barfs on. - """ - bs = ( - list(range(ord("!"), ord("~") + 1)) - + list(range(ord("¡"), ord("¬") + 1)) - + list(range(ord("®"), ord("ÿ") + 1)) - ) - cs = bs[:] - n = 0 - for b in range(2 ** 8): - if b not in bs: - bs.append(b) - cs.append(2 ** 8 + n) - n += 1 - cs = [chr(n) for n in cs] - return dict(zip(bs, cs)) - - -def get_pairs(word): - """Return set of symbol pairs in a word. - Word is represented as tuple of symbols (symbols being variable-length strings). 
- """ - pairs = set() - prev_char = word[0] - for char in word[1:]: - pairs.add((prev_char, char)) - prev_char = char - return pairs - - -class Encoder: - def __init__(self, encoder, bpe_merges, errors="replace"): - self.encoder = encoder - self.decoder = {v: k for k, v in self.encoder.items()} - self.errors = errors # how to handle errors in decoding - self.byte_encoder = bytes_to_unicode() - self.byte_decoder = {v: k for k, v in self.byte_encoder.items()} - self.bpe_ranks = dict(zip(bpe_merges, range(len(bpe_merges)))) - self.cache = {} - - try: - import regex as re - - self.re = re - except ImportError: - raise ImportError("Please install regex with: pip install regex") - - # Should haved added re.IGNORECASE so BPE merges can happen for capitalized versions of contractions - self.pat = self.re.compile( - r"""'s|'t|'re|'ve|'m|'ll|'d| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+""" - ) - - def bpe(self, token): - if token in self.cache: - return self.cache[token] - word = tuple(token) - pairs = get_pairs(word) - - if not pairs: - return token - - while True: - bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float("inf"))) - if bigram not in self.bpe_ranks: - break - first, second = bigram - new_word = [] - i = 0 - while i < len(word): - try: - j = word.index(first, i) - new_word.extend(word[i:j]) - i = j - except: - new_word.extend(word[i:]) - break - - if word[i] == first and i < len(word) - 1 and word[i + 1] == second: - new_word.append(first + second) - i += 2 - else: - new_word.append(word[i]) - i += 1 - new_word = tuple(new_word) - word = new_word - if len(word) == 1: - break - else: - pairs = get_pairs(word) - word = " ".join(word) - self.cache[token] = word - return word - - def encode(self, text): - bpe_tokens = [] - for token in self.re.findall(self.pat, text): - token = "".join(self.byte_encoder[b] for b in token.encode("utf-8")) - bpe_tokens.extend( - self.encoder[bpe_token] for bpe_token in self.bpe(token).split(" ") - ) - return bpe_tokens - - def decode(self, tokens): - text = "".join([self.decoder.get(token, token) for token in tokens]) - text = bytearray([self.byte_decoder[c] for c in text]).decode( - "utf-8", errors=self.errors - ) - return text - - -def get_encoder(encoder_json_path, vocab_bpe_path): - with open(encoder_json_path, "r") as f: - encoder = json.load(f) - with open(vocab_bpe_path, "r", encoding="utf-8") as f: - bpe_data = f.read() - bpe_merges = [tuple(merge_str.split()) for merge_str in bpe_data.split("\n")[1:-1]] - return Encoder( - encoder=encoder, - bpe_merges=bpe_merges, - ) diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/lightconv.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/lightconv.py deleted file mode 100644 index 4edfe359379bc2445c1ae1ada04bd34ca4a32798..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/lightconv.py +++ /dev/null @@ -1,1019 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import math - -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq import utils -from fairseq.models import ( - FairseqEncoder, - FairseqEncoderDecoderModel, - FairseqIncrementalDecoder, - register_model, - register_model_architecture, -) -from fairseq.modules import ( - AdaptiveSoftmax, - DynamicConv, - FairseqDropout, - LayerNorm, - LightweightConv, - MultiheadAttention, - PositionalEmbedding, -) -from fairseq.utils import safe_hasattr - - -@register_model("lightconv") -class LightConvModel(FairseqEncoderDecoderModel): - """ - LightConv and DynamicConv model from `"Pay Less Attention with Lightweight and Dynamic Convolutions" (Wu, et al, 2019) - `_. - To use LightConv please set ``--encoder-conv-type lightweight --decoder-conv-type lightweight`` - To use DynamicConv please set ``--encoder-conv-type dynamic --decoder-conv-type dynamic`` - - Args: - encoder (LightConvEncoder): the encoder - decoder (LightConvDecoder): the decoder - - The LightConv model provides the following named architectures and - command-line arguments: - - .. argparse:: - :ref: fairseq.models.lightconv_parser - :prog: - """ - - @classmethod - def hub_models(cls): - # fmt: off - - def moses_subword(path): - return { - 'path': path, - 'tokenizer': 'moses', - 'bpe': 'subword_nmt', - } - - return { - 'lightconv.no_glu.iwslt14.de-en': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/iwslt14.de-en.lightconv.tar.gz'), - 'dynamicconv.no_glu.iwslt14.de-en': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/iwslt14.de-en.dynamicconv.tar.gz'), - 'lightconv.no_glu.wmt16.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.lightconv.tar.gz'), - 'dynamicconv.no_glu.wmt16.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.dynamicconv.tar.gz'), - 'lightconv.glu.wmt16.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.lightconv-glu.tar.gz'), - 'dynamicconv.glu.wmt16.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.dynamicconv-glu.tar.gz'), - 'lightconv.glu.wmt17.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.lightconv-glu.tar.gz'), - 'dynamicconv.glu.wmt17.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.dynamicconv-glu.tar.gz'), - 'lightconv.glu.wmt14.en-fr': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt14.en-fr.joined-dict.lightconv-glu.tar.gz'), - 'dynamicconv.glu.wmt14.en-fr': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt14.en-fr.joined-dict.dynamicconv-glu.tar.gz'), - 'lightconv.glu.wmt17.zh-en': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt17.zh-en.lightconv-glu.tar.gz'), - 'dynamicconv.glu.wmt17.zh-en': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt17.zh-en.dynamicconv-glu.tar.gz'), - } - # fmt: on - - def __init__(self, encoder, decoder): - super().__init__(encoder, decoder) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - parser.add_argument( - "--dropout", type=float, metavar="D", help="dropout probability" - ) - parser.add_argument( - "--attention-dropout", - type=float, - metavar="D", - help="dropout probability for attention weights", - ) - 
parser.add_argument( - "--relu-dropout", - type=float, - metavar="D", - help="dropout probability after ReLU in FFN", - ) - parser.add_argument( - "--input-dropout", - type=float, - metavar="D", - help="dropout probability of the inputs", - ) - parser.add_argument( - "--encoder-embed-path", - type=str, - metavar="STR", - help="path to pre-trained encoder embedding", - ) - parser.add_argument( - "--encoder-embed-dim", - type=int, - metavar="N", - help="encoder embedding dimension", - ) - parser.add_argument( - "--encoder-conv-dim", - type=int, - metavar="N", - help="encoder embedding dimension", - ) - parser.add_argument( - "--encoder-ffn-embed-dim", - type=int, - metavar="N", - help="encoder embedding dimension for FFN", - ) - parser.add_argument( - "--encoder-layers", type=int, metavar="N", help="num encoder layers" - ) - parser.add_argument( - "--encoder-attention-heads", - type=int, - metavar="N", - help="num encoder attention heads or LightConv/DynamicConv heads", - ) - parser.add_argument( - "--encoder-normalize-before", - action="store_true", - help="apply layernorm before each encoder block", - ) - parser.add_argument( - "--encoder-learned-pos", - action="store_true", - help="use learned positional embeddings in the encoder", - ) - parser.add_argument( - "--decoder-embed-path", - type=str, - metavar="STR", - help="path to pre-trained decoder embedding", - ) - parser.add_argument( - "--decoder-embed-dim", - type=int, - metavar="N", - help="decoder embedding dimension", - ) - parser.add_argument( - "--decoder-conv-dim", - type=int, - metavar="N", - help="decoder embedding dimension", - ) - parser.add_argument( - "--decoder-ffn-embed-dim", - type=int, - metavar="N", - help="decoder embedding dimension for FFN", - ) - parser.add_argument( - "--decoder-layers", type=int, metavar="N", help="num decoder layers" - ) - parser.add_argument( - "--decoder-attention-heads", - type=int, - metavar="N", - help="num decoder attention heads or LightConv/DynamicConv heads", - ) - parser.add_argument( - "--decoder-learned-pos", - action="store_true", - help="use learned positional embeddings in the decoder", - ) - parser.add_argument( - "--decoder-normalize-before", - action="store_true", - help="apply layernorm before each decoder block", - ) - parser.add_argument( - "--share-decoder-input-output-embed", - action="store_true", - help="share decoder input and output embeddings", - ) - parser.add_argument( - "--share-all-embeddings", - action="store_true", - help="share encoder, decoder and output embeddings" - " (requires shared dictionary and embed dim)", - ) - parser.add_argument( - "--adaptive-softmax-cutoff", - metavar="EXPR", - help="comma separated list of adaptive softmax cutoff points. 
" - "Must be used with adaptive_loss criterion", - ), - parser.add_argument( - "--adaptive-softmax-dropout", - type=float, - metavar="D", - help="sets adaptive softmax dropout for the tail projections", - ) - - """LightConv and DynamicConv arguments""" - parser.add_argument( - "--encoder-kernel-size-list", - type=lambda x: utils.eval_str_list(x, int), - help='list of kernel size (default: "[3,7,15,31,31,31,31]")', - ) - parser.add_argument( - "--decoder-kernel-size-list", - type=lambda x: utils.eval_str_list(x, int), - help='list of kernel size (default: "[3,7,15,31,31,31]")', - ) - parser.add_argument( - "--encoder-glu", type=utils.eval_bool, help="glu after in proj" - ) - parser.add_argument( - "--decoder-glu", type=utils.eval_bool, help="glu after in proj" - ) - parser.add_argument( - "--encoder-conv-type", - default="dynamic", - type=str, - choices=["dynamic", "lightweight"], - help="type of convolution", - ) - parser.add_argument( - "--decoder-conv-type", - default="dynamic", - type=str, - choices=["dynamic", "lightweight"], - help="type of convolution", - ) - parser.add_argument("--weight-softmax", default=True, type=utils.eval_bool) - parser.add_argument( - "--weight-dropout", - type=float, - metavar="D", - help="dropout probability for conv weights", - ) - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - - # make sure all arguments are present in older models - base_architecture(args) - - if not safe_hasattr(args, "max_source_positions"): - args.max_source_positions = 1024 - if not safe_hasattr(args, "max_target_positions"): - args.max_target_positions = 1024 - - src_dict, tgt_dict = task.source_dictionary, task.target_dictionary - - def build_embedding(dictionary, embed_dim, path=None): - num_embeddings = len(dictionary) - padding_idx = dictionary.pad() - emb = Embedding(num_embeddings, embed_dim, padding_idx) - # if provided, load from preloaded dictionaries - if path: - embed_dict = utils.parse_embedding(path) - utils.load_embedding(embed_dict, dictionary, emb) - return emb - - if args.share_all_embeddings: - if src_dict != tgt_dict: - raise RuntimeError( - "--share-all-embeddings requires a joined dictionary" - ) - if args.encoder_embed_dim != args.decoder_embed_dim: - raise RuntimeError( - "--share-all-embeddings requires --encoder-embed-dim to match --decoder-embed-dim" - ) - if args.decoder_embed_path and ( - args.decoder_embed_path != args.encoder_embed_path - ): - raise RuntimeError( - "--share-all-embeddings not compatible with --decoder-embed-path" - ) - encoder_embed_tokens = build_embedding( - src_dict, args.encoder_embed_dim, args.encoder_embed_path - ) - decoder_embed_tokens = encoder_embed_tokens - args.share_decoder_input_output_embed = True - else: - encoder_embed_tokens = build_embedding( - src_dict, args.encoder_embed_dim, args.encoder_embed_path - ) - decoder_embed_tokens = build_embedding( - tgt_dict, args.decoder_embed_dim, args.decoder_embed_path - ) - - encoder = LightConvEncoder(args, src_dict, encoder_embed_tokens) - decoder = LightConvDecoder(args, tgt_dict, decoder_embed_tokens) - return LightConvModel(encoder, decoder) - - -class LightConvEncoder(FairseqEncoder): - """ - LightConv encoder consisting of *args.encoder_layers* layers. Each layer - is a :class:`LightConvEncoderLayer`. 
- - Args: - args (argparse.Namespace): parsed command-line arguments - dictionary (~fairseq.data.Dictionary): encoding dictionary - embed_tokens (torch.nn.Embedding): input embedding - """ - - def __init__(self, args, dictionary, embed_tokens): - super().__init__(dictionary) - self.dropout_module = FairseqDropout( - args.dropout, module_name=self.__class__.__name__ - ) - - embed_dim = embed_tokens.embedding_dim - self.padding_idx = embed_tokens.padding_idx - self.max_source_positions = args.max_source_positions - - self.embed_tokens = embed_tokens - self.embed_scale = math.sqrt(embed_dim) - self.embed_positions = ( - PositionalEmbedding( - args.max_source_positions, - embed_dim, - self.padding_idx, - learned=args.encoder_learned_pos, - ) - if not args.no_token_positional_embeddings - else None - ) - - self.layers = nn.ModuleList([]) - self.layers.extend( - [ - LightConvEncoderLayer( - args, kernel_size=args.encoder_kernel_size_list[i] - ) - for i in range(args.encoder_layers) - ] - ) - self.register_buffer("version", torch.Tensor([2])) - self.normalize = args.encoder_normalize_before - if self.normalize: - self.layer_norm = LayerNorm(embed_dim) - - def forward(self, src_tokens, **unused): - """ - Args: - src_tokens (LongTensor): tokens in the source language of shape - `(batch, src_len)` - - Returns: - dict: - - **encoder_out** (Tensor): the last encoder layer's output of - shape `(src_len, batch, embed_dim)` - - **encoder_padding_mask** (ByteTensor): the positions of - padding elements of shape `(batch, src_len)` - """ - # embed tokens and positions - x = self.embed_scale * self.embed_tokens(src_tokens) - if self.embed_positions is not None: - x += self.embed_positions(src_tokens) - x = self.dropout_module(x) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - # compute padding mask - encoder_padding_mask = src_tokens.eq(self.padding_idx) - if not encoder_padding_mask.any(): - encoder_padding_mask = None - - # encoder layers - for layer in self.layers: - x = layer(x, encoder_padding_mask) - - if self.normalize: - x = self.layer_norm(x) - - return { - "encoder_out": x, # T x B x C - "encoder_padding_mask": encoder_padding_mask, # B x T - } - - def reorder_encoder_out(self, encoder_out, new_order): - """ - Reorder encoder output according to *new_order*. - - Args: - encoder_out: output from the ``forward()`` method - new_order (LongTensor): desired order - - Returns: - *encoder_out* rearranged according to *new_order* - """ - if encoder_out["encoder_out"] is not None: - encoder_out["encoder_out"] = encoder_out["encoder_out"].index_select( - 1, new_order - ) - if encoder_out["encoder_padding_mask"] is not None: - encoder_out["encoder_padding_mask"] = encoder_out[ - "encoder_padding_mask" - ].index_select(0, new_order) - return encoder_out - - def max_positions(self): - """Maximum input length supported by the encoder.""" - if self.embed_positions is None: - return self.max_source_positions - return min(self.max_source_positions, self.embed_positions.max_positions) - - -class LightConvDecoder(FairseqIncrementalDecoder): - """ - LightConv decoder consisting of *args.decoder_layers* layers. Each layer - is a :class:`LightConvDecoderLayer`. - - Args: - args (argparse.Namespace): parsed command-line arguments - dictionary (~fairseq.data.Dictionary): decoding dictionary - embed_tokens (torch.nn.Embedding): output embedding - no_encoder_attn (bool, optional): whether to attend to encoder outputs. 
- Default: ``False`` - """ - - def __init__( - self, args, dictionary, embed_tokens, no_encoder_attn=False, final_norm=True - ): - super().__init__(dictionary) - self.dropout_module = FairseqDropout( - args.dropout, module_name=self.__class__.__name__ - ) - self.share_input_output_embed = args.share_decoder_input_output_embed - - input_embed_dim = embed_tokens.embedding_dim - embed_dim = args.decoder_embed_dim - output_embed_dim = args.decoder_output_dim - - padding_idx = embed_tokens.padding_idx - self.max_target_positions = args.max_target_positions - - self.embed_tokens = embed_tokens - self.embed_scale = math.sqrt(embed_dim) # todo: try with input_embed_dim - - self.project_in_dim = ( - Linear(input_embed_dim, embed_dim, bias=False) - if embed_dim != input_embed_dim - else None - ) - - self.embed_positions = ( - PositionalEmbedding( - args.max_target_positions, - embed_dim, - padding_idx, - learned=args.decoder_learned_pos, - ) - if not args.no_token_positional_embeddings - else None - ) - - self.layers = nn.ModuleList([]) - self.layers.extend( - [ - LightConvDecoderLayer( - args, no_encoder_attn, kernel_size=args.decoder_kernel_size_list[i] - ) - for i in range(args.decoder_layers) - ] - ) - - self.adaptive_softmax = None - - self.project_out_dim = ( - Linear(embed_dim, output_embed_dim, bias=False) - if embed_dim != output_embed_dim and not args.tie_adaptive_weights - else None - ) - - if args.adaptive_softmax_cutoff is not None: - self.adaptive_softmax = AdaptiveSoftmax( - len(dictionary), - output_embed_dim, - utils.eval_str_list(args.adaptive_softmax_cutoff, type=int), - dropout=args.adaptive_softmax_dropout, - adaptive_inputs=embed_tokens if args.tie_adaptive_weights else None, - factor=args.adaptive_softmax_factor, - tie_proj=args.tie_adaptive_proj, - ) - elif not self.share_input_output_embed: - self.embed_out = nn.Parameter( - torch.Tensor(len(dictionary), output_embed_dim) - ) - nn.init.normal_(self.embed_out, mean=0, std=output_embed_dim ** -0.5) - self.register_buffer("version", torch.Tensor([2])) - self.normalize = args.decoder_normalize_before and final_norm - if self.normalize: - self.layer_norm = LayerNorm(embed_dim) - - def forward( - self, prev_output_tokens, encoder_out=None, incremental_state=None, **kwargs - ): - """ - Args: - prev_output_tokens (LongTensor): previous decoder outputs of shape - `(batch, tgt_len)`, for teacher forcing - encoder_out (Tensor, optional): output from the encoder, used for - encoder-side attention - incremental_state (dict): dictionary used for storing state during - :ref:`Incremental decoding` - - Returns: - tuple: - - the last decoder layer's output of shape `(batch, tgt_len, - vocab)` - - the last decoder layer's attention weights of shape `(batch, - tgt_len, src_len)` - """ - # embed positions - positions = ( - self.embed_positions( - prev_output_tokens, - incremental_state=incremental_state, - ) - if self.embed_positions is not None - else None - ) - - if incremental_state is not None: - prev_output_tokens = prev_output_tokens[:, -1:] - if positions is not None: - positions = positions[:, -1:] - - # embed tokens and positions - x = self.embed_scale * self.embed_tokens(prev_output_tokens) - - if self.project_in_dim is not None: - x = self.project_in_dim(x) - - if positions is not None: - x += positions - x = self.dropout_module(x) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - attn = None - - inner_states = [x] - - # decoder layers - for layer in self.layers: - x, attn = layer( - x, - encoder_out["encoder_out"] if encoder_out 
is not None else None, - encoder_out["encoder_padding_mask"] - if encoder_out is not None - else None, - incremental_state, - ) - inner_states.append(x) - - if self.normalize: - x = self.layer_norm(x) - - # T x B x C -> B x T x C - x = x.transpose(0, 1) - - if self.project_out_dim is not None: - x = self.project_out_dim(x) - - if self.adaptive_softmax is None: - # project back to size of vocabulary - if self.share_input_output_embed: - x = F.linear(x, self.embed_tokens.weight) - else: - x = F.linear(x, self.embed_out) - - return x, {"attn": attn, "inner_states": inner_states} - - def max_positions(self): - """Maximum output length supported by the decoder.""" - if self.embed_positions is None: - return self.max_target_positions - return min(self.max_target_positions, self.embed_positions.max_positions) - - def buffered_future_mask(self, tensor): - dim = tensor.size(0) - if ( - not hasattr(self, "_future_mask") - or self._future_mask is None - or self._future_mask.device != tensor.device - ): - self._future_mask = torch.triu( - utils.fill_with_neg_inf(tensor.new(dim, dim)), 1 - ) - if self._future_mask.size(0) < dim: - self._future_mask = torch.triu( - utils.fill_with_neg_inf(self._future_mask.resize_(dim, dim)), 1 - ) - return self._future_mask[:dim, :dim] - - -class LightConvEncoderLayer(nn.Module): - """Encoder layer block. - - Args: - args (argparse.Namespace): parsed command-line arguments - kernel_size: kernel size of the convolution - """ - - def __init__(self, args, kernel_size=0): - super().__init__() - self.embed_dim = args.encoder_embed_dim - self.conv_dim = args.encoder_conv_dim - padding_l = ( - kernel_size // 2 - if kernel_size % 2 == 1 - else ((kernel_size - 1) // 2, kernel_size // 2) - ) - - if args.encoder_glu: - self.linear1 = Linear(self.embed_dim, 2 * self.conv_dim) - self.act = nn.GLU() - else: - self.linear1 = Linear(self.embed_dim, self.conv_dim) - self.act = None - if args.encoder_conv_type == "lightweight": - self.conv = LightweightConv( - self.conv_dim, - kernel_size, - padding_l=padding_l, - weight_softmax=args.weight_softmax, - num_heads=args.encoder_attention_heads, - weight_dropout=args.weight_dropout, - ) - elif args.encoder_conv_type == "dynamic": - self.conv = DynamicConv( - self.conv_dim, - kernel_size, - padding_l=padding_l, - weight_softmax=args.weight_softmax, - num_heads=args.encoder_attention_heads, - weight_dropout=args.weight_dropout, - ) - else: - raise NotImplementedError - self.linear2 = Linear(self.conv_dim, self.embed_dim) - - self.dropout_module = FairseqDropout( - args.dropout, module_name=self.__class__.__name__ - ) - self.relu_dropout_module = FairseqDropout( - args.relu_dropout, module_name=self.__class__.__name__ - ) - self.input_dropout_module = FairseqDropout( - args.input_dropout, module_name=self.__class__.__name__ - ) - self.normalize_before = args.encoder_normalize_before - self.fc1 = Linear(self.embed_dim, args.encoder_ffn_embed_dim) - self.fc2 = Linear(args.encoder_ffn_embed_dim, self.embed_dim) - self.layer_norms = nn.ModuleList([LayerNorm(self.embed_dim) for _ in range(2)]) - - def forward(self, x, encoder_padding_mask): - """ - Args: - x (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)` - encoder_padding_mask (ByteTensor): binary ByteTensor of shape - `(batch, src_len)` where padding elements are indicated by ``1``. 
- - Returns: - encoded output of shape `(batch, src_len, embed_dim)` - """ - residual = x - x = self.maybe_layer_norm(0, x, before=True) - x = self.input_dropout_module(x) - x = self.linear1(x) - if self.act is not None: - x = self.act(x) - if encoder_padding_mask is not None: - x = x.masked_fill(encoder_padding_mask.transpose(0, 1).unsqueeze(2), 0) - x = self.conv(x) - x = self.linear2(x) - x = self.dropout_module(x) - x = residual + x - x = self.maybe_layer_norm(0, x, after=True) - - residual = x - x = self.maybe_layer_norm(1, x, before=True) - x = F.relu(self.fc1(x)) - x = self.relu_dropout_module(x) - x = self.fc2(x) - x = self.dropout_module(x) - x = residual + x - x = self.maybe_layer_norm(1, x, after=True) - return x - - def maybe_layer_norm(self, i, x, before=False, after=False): - assert before ^ after - if after ^ self.normalize_before: - return self.layer_norms[i](x) - else: - return x - - def extra_repr(self): - return ( - "dropout={}, relu_dropout={}, input_dropout={}, normalize_before={}".format( - self.dropout_module.p, - self.relu_dropout_module.p, - self.input_dropout_module.p, - self.normalize_before, - ) - ) - - -class LightConvDecoderLayer(nn.Module): - """Decoder layer block. - - Args: - args (argparse.Namespace): parsed command-line arguments - no_encoder_attn (bool, optional): whether to attend to encoder outputs. - Default: ``False`` - kernel_size: kernel size of the convolution - """ - - def __init__(self, args, no_encoder_attn=False, kernel_size=0): - super().__init__() - self.embed_dim = args.decoder_embed_dim - self.conv_dim = args.decoder_conv_dim - if args.decoder_glu: - self.linear1 = Linear(self.embed_dim, 2 * self.conv_dim) - self.act = nn.GLU() - else: - self.linear1 = Linear(self.embed_dim, self.conv_dim) - self.act = None - if args.decoder_conv_type == "lightweight": - self.conv = LightweightConv( - self.conv_dim, - kernel_size, - padding_l=kernel_size - 1, - weight_softmax=args.weight_softmax, - num_heads=args.decoder_attention_heads, - weight_dropout=args.weight_dropout, - ) - elif args.decoder_conv_type == "dynamic": - self.conv = DynamicConv( - self.conv_dim, - kernel_size, - padding_l=kernel_size - 1, - weight_softmax=args.weight_softmax, - num_heads=args.decoder_attention_heads, - weight_dropout=args.weight_dropout, - ) - else: - raise NotImplementedError - self.linear2 = Linear(self.conv_dim, self.embed_dim) - - self.dropout_module = FairseqDropout( - args.dropout, module_name=self.__class__.__name__ - ) - self.relu_dropout_module = FairseqDropout( - args.relu_dropout, module_name=self.__class__.__name__ - ) - self.input_dropout_module = FairseqDropout( - args.input_dropout, module_name=self.__class__.__name__ - ) - self.normalize_before = args.decoder_normalize_before - - self.conv_layer_norm = LayerNorm(self.embed_dim) - - if no_encoder_attn: - self.encoder_attn = None - self.encoder_attn_layer_norm = None - else: - self.encoder_attn = MultiheadAttention( - self.embed_dim, - args.decoder_attention_heads, - dropout=args.attention_dropout, - encoder_decoder_attention=True, - ) - self.encoder_attn_layer_norm = LayerNorm(self.embed_dim) - - self.fc1 = Linear(self.embed_dim, args.decoder_ffn_embed_dim) - self.fc2 = Linear(args.decoder_ffn_embed_dim, self.embed_dim) - - self.final_layer_norm = LayerNorm(self.embed_dim) - self.need_attn = True - - def forward( - self, - x, - encoder_out, - encoder_padding_mask, - incremental_state, - prev_conv_state=None, - prev_attn_state=None, - conv_mask=None, - conv_padding_mask=None, - ): - """ - Args: - x 
(Tensor): input to the layer of shape `(seq_len, batch, embed_dim)` - encoder_padding_mask (ByteTensor): binary ByteTensor of shape - `(batch, src_len)` where padding elements are indicated by ``1``. - - Returns: - encoded output of shape `(batch, src_len, embed_dim)` - """ - residual = x - x = self.maybe_layer_norm(self.conv_layer_norm, x, before=True) - if prev_conv_state is not None: - if incremental_state is None: - incremental_state = {} - self.conv._set_input_buffer(incremental_state, prev_conv_state) - x = self.input_dropout_module(x) - x = self.linear1(x) - if self.act is not None: - x = self.act(x) - x = self.conv(x, incremental_state=incremental_state) - x = self.linear2(x) - x = self.dropout_module(x) - x = residual + x - x = self.maybe_layer_norm(self.conv_layer_norm, x, after=True) - - attn = None - if self.encoder_attn is not None: - residual = x - x = self.maybe_layer_norm(self.encoder_attn_layer_norm, x, before=True) - if prev_attn_state is not None: - if incremental_state is None: - incremental_state = {} - prev_key, prev_value = prev_attn_state - saved_state = {"prev_key": prev_key, "prev_value": prev_value} - self.encoder_attn._set_input_buffer(incremental_state, saved_state) - x, attn = self.encoder_attn( - query=x, - key=encoder_out, - value=encoder_out, - key_padding_mask=encoder_padding_mask, - incremental_state=incremental_state, - static_kv=True, - need_weights=(not self.training and self.need_attn), - ) - x = self.dropout_module(x) - x = residual + x - x = self.maybe_layer_norm(self.encoder_attn_layer_norm, x, after=True) - - residual = x - x = self.maybe_layer_norm(self.final_layer_norm, x, before=True) - x = F.relu(self.fc1(x)) - x = self.relu_dropout_module(x) - x = self.fc2(x) - x = self.dropout_module(x) - x = residual + x - x = self.maybe_layer_norm(self.final_layer_norm, x, after=True) - return x, attn - - def maybe_layer_norm(self, layer_norm, x, before=False, after=False): - assert before ^ after - if after ^ self.normalize_before: - return layer_norm(x) - else: - return x - - def make_generation_fast_(self, need_attn=False, **kwargs): - self.need_attn = need_attn - - def extra_repr(self): - return ( - "dropout={}, relu_dropout={}, input_dropout={}, normalize_before={}".format( - self.dropout_module.p, - self.relu_dropout_module.p, - self.input_dropout_module.p, - self.normalize_before, - ) - ) - - -def Embedding(num_embeddings, embedding_dim, padding_idx): - m = nn.Embedding(num_embeddings, embedding_dim, padding_idx=padding_idx) - nn.init.normal_(m.weight, mean=0, std=embedding_dim ** -0.5) - nn.init.constant_(m.weight[padding_idx], 0) - return m - - -def Linear(in_features, out_features, bias=True): - m = nn.Linear(in_features, out_features, bias) - nn.init.xavier_uniform_(m.weight) - if bias: - nn.init.constant_(m.bias, 0.0) - return m - - -@register_model_architecture("lightconv", "lightconv") -def base_architecture(args): - args.encoder_embed_path = getattr(args, "encoder_embed_path", None) - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 2048) - args.encoder_layers = getattr(args, "encoder_layers", 7) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 8) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.encoder_learned_pos = getattr(args, "encoder_learned_pos", False) - args.decoder_embed_path = getattr(args, "decoder_embed_path", None) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 
args.encoder_embed_dim) - args.decoder_ffn_embed_dim = getattr( - args, "decoder_ffn_embed_dim", args.encoder_ffn_embed_dim - ) - args.decoder_layers = getattr(args, "decoder_layers", 6) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 8) - args.decoder_normalize_before = getattr(args, "decoder_normalize_before", False) - args.decoder_learned_pos = getattr(args, "decoder_learned_pos", False) - args.attention_dropout = getattr(args, "attention_dropout", 0.0) - args.relu_dropout = getattr(args, "relu_dropout", 0.0) - args.dropout = getattr(args, "dropout", 0.1) - args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None) - args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0) - args.share_decoder_input_output_embed = getattr( - args, "share_decoder_input_output_embed", False - ) - args.share_all_embeddings = getattr(args, "share_all_embeddings", False) - args.no_token_positional_embeddings = getattr( - args, "no_token_positional_embeddings", False - ) - - args.decoder_output_dim = getattr( - args, "decoder_output_dim", args.decoder_embed_dim - ) - args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim) - - args.encoder_conv_dim = getattr(args, "encoder_conv_dim", args.encoder_embed_dim) - args.decoder_conv_dim = getattr(args, "decoder_conv_dim", args.decoder_embed_dim) - - args.encoder_kernel_size_list = getattr( - args, "encoder_kernel_size_list", [3, 7, 15, 31, 31, 31, 31] - ) - args.decoder_kernel_size_list = getattr( - args, "decoder_kernel_size_list", [3, 7, 15, 31, 31, 31] - ) - if len(args.encoder_kernel_size_list) == 1: - args.encoder_kernel_size_list = ( - args.encoder_kernel_size_list * args.encoder_layers - ) - if len(args.decoder_kernel_size_list) == 1: - args.decoder_kernel_size_list = ( - args.decoder_kernel_size_list * args.decoder_layers - ) - assert ( - len(args.encoder_kernel_size_list) == args.encoder_layers - ), "encoder_kernel_size_list doesn't match encoder_layers" - assert ( - len(args.decoder_kernel_size_list) == args.decoder_layers - ), "decoder_kernel_size_list doesn't match decoder_layers" - args.encoder_glu = getattr(args, "encoder_glu", True) - args.decoder_glu = getattr(args, "decoder_glu", True) - args.input_dropout = getattr(args, "input_dropout", 0.1) - args.weight_dropout = getattr(args, "weight_dropout", args.attention_dropout) - - -@register_model_architecture("lightconv", "lightconv_iwslt_de_en") -def lightconv_iwslt_de_en(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 1024) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 4) - args.encoder_layers = getattr(args, "encoder_layers", 7) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 1024) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 4) - args.decoder_layers = getattr(args, "decoder_layers", 6) - args.attention_dropout = getattr(args, "attention_dropout", 0.1) - args.weight_dropout = getattr(args, "weight_dropout", 0.1) - args.encoder_glu = getattr(args, "encoder_glu", False) - args.decoder_glu = getattr(args, "decoder_glu", False) - args.input_dropout = getattr(args, "input_dropout", 0.0) - base_architecture(args) - - -@register_model_architecture("lightconv", "lightconv_wmt_en_de") -def lightconv_wmt_en_de(args): - base_architecture(args) - - 
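# Editor's note: a hedged, illustrative sketch, not part of the original file or
# of the recorded diff. Every @register_model_architecture function in this
# module follows the same pattern: override a handful of hyperparameters with
# getattr() defaults so that explicit command-line values still win, then
# delegate to base_architecture() for everything else. The variant name
# "lightconv_small" and the sizes below are hypothetical, chosen only to show
# the pattern.
@register_model_architecture("lightconv", "lightconv_small")
def lightconv_small(args):
    # Shrink the model; anything not set here inherits base_architecture defaults.
    args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 256)
    args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 256)
    args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 512)
    args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 512)
    args.encoder_layers = getattr(args, "encoder_layers", 4)
    args.decoder_layers = getattr(args, "decoder_layers", 4)
    # base_architecture asserts len(kernel_size_list) == number of layers; a
    # single-element list is broadcast to every layer, so keep these in sync.
    args.encoder_kernel_size_list = getattr(args, "encoder_kernel_size_list", [31])
    args.decoder_kernel_size_list = getattr(args, "decoder_kernel_size_list", [31])
    base_architecture(args)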
-@register_model_architecture("lightconv", "lightconv_wmt_en_de_big") -def lightconv_wmt_en_de_big(args): - args.attention_dropout = getattr(args, "attention_dropout", 0.1) - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4096) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 1024) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 4096) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 16) - args.dropout = getattr(args, "dropout", 0.3) - base_architecture(args) - - -@register_model_architecture("lightconv", "lightconv_wmt_en_fr_big") -def lightconv_wmt_en_fr_big(args): - args.dropout = getattr(args, "dropout", 0.1) - lightconv_wmt_en_de_big(args) - - -@register_model_architecture("lightconv", "lightconv_wmt_zh_en_big") -def lightconv_wmt_zh_en_big(args): - args.dropout = getattr(args, "dropout", 0.2) - args.attention_dropout = getattr(args, "attention_dropout", 0.2) - args.weight_dropout = getattr(args, "weight_dropout", 0.2) - lightconv_wmt_en_de_big(args) diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/kmeans_attention.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/kmeans_attention.py deleted file mode 100644 index 11a7debcf2ac025fb02ba5e672987f87dbbc49a4..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/kmeans_attention.py +++ /dev/null @@ -1,609 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -import math -from inspect import isfunction -from operator import mul -from functools import reduce, wraps - -from aml.multimodal_video.utils.einops.lib import rearrange, repeat -from aml.multimodal_video.utils.einops.lib.layers.torch import Rearrange - -from fairseq.modules.local_attention import LocalAttention - -# constants - -TOKEN_SELF_ATTN_VALUE = -5e4 -KMEAN_INIT_ITERS = 10 - -# helper functions - - -def exists(val): - return val is not None - - -def identity(x, *args, **kwargs): - return x - - -def default(x, d): - if not exists(x): - return d if not isfunction(d) else d() - return x - - -def cast_tuple(x): - return x if isinstance(x, tuple) else (x,) - - -def cache_fn(f): - cache = None - - @wraps(f) - def cached_fn(*args, **kwargs): - nonlocal cache - if exists(cache): - return cache - cache = f(*args, **kwargs) - return cache - return cached_fn - - -def to(t): - return {'device': t.device, 'dtype': t.dtype} - - -def find_modules(nn_module, type): - return [module for module in nn_module.modules() if isinstance(module, type)] - - -def is_empty(t): - return t.nelement() == 0 - - -def max_neg_value(tensor): - return -torch.finfo(tensor.dtype).max - - -def batched_index_select(values, indices): - last_dim = values.shape[-1] - return values.gather(2, expand_dim(indices, -1, last_dim)) - - -def merge_dims(ind_from, ind_to, tensor): - shape = list(tensor.shape) - arr_slice = slice(ind_from, ind_to + 1) - shape[arr_slice] = [reduce(mul, shape[arr_slice])] - return tensor.reshape(*shape) - - -def expand_dim(t, dim, k): - t = t.unsqueeze(dim) - expand_shape = [-1] * len(t.shape) - expand_shape[dim] = k - return t.expand(*expand_shape) - - -def scatter_mean(src, t, index, dim, eps=1e-5): - numer = src.scatter_add(dim, index, t) - denom = 
src.scatter_add(dim, index, torch.ones_like(t)) - return numer / (denom + eps) - - -def split_at_index(dim, index, t): - pre_slices = (slice(None),) * dim - l = (*pre_slices, slice(None, index)) - r = (*pre_slices, slice(index, None)) - return t[l], t[r] - - -def reshape_dim(t, dim, split_dims): - shape = list(t.shape) - num_dims = len(shape) - dim = (dim + num_dims) % num_dims - shape[dim:dim+1] = split_dims - return t.reshape(shape) - - -def ema(old, new, decay): - if not exists(old): - return new - return old * decay + new * (1 - decay) - - -def ema_inplace(moving_avg, new, decay): - if is_empty(moving_avg): - moving_avg.data.copy_(new) - return - moving_avg.data.mul_(decay).add_(new, alpha=(1 - decay)) - -# helper classes - - -def map_first_tuple_or_el(x, fn): - if isinstance(x, tuple): - return (fn(x[0]),) + x[1:] - return fn(x) - - -class Chunk(nn.Module): - def __init__(self, chunks, fn, along_dim=-1): - super().__init__() - self.dim = along_dim - self.chunks = chunks - self.fn = fn - - def forward(self, x, **kwargs): - if self.chunks <= 1: - return self.fn(x, **kwargs) - chunks = x.chunk(self.chunks, dim=self.dim) - return torch.cat([self.fn(c, **kwargs) for c in chunks], dim=self.dim) - - -class PreNorm(nn.ModuleList): - def __init__(self, norm_class, dim, fn): - super().__init__() - self.norm = norm_class(dim) - self.fn = fn - - def forward(self, x, **kwargs): - x = self.norm(x) - return self.fn(x, **kwargs) - - -class ReZero(nn.Module): - def __init__(self, fn): - super().__init__() - self.residual_weight = nn.Parameter(torch.zeros(1)) - self.fn = fn - - def forward(self, x, **kwargs): - x = self.fn(x, **kwargs) - return map_first_tuple_or_el(x, lambda t: t * self.residual_weight) - - -class ScaleNorm(nn.Module): - def __init__(self, dim, eps=1e-5): - super().__init__() - self.g = nn.Parameter(torch.ones(1)) - self.eps = eps - - def forward(self, x): - def norm(t): - n = torch.norm(t, dim=-1, keepdim=True).clamp(min=self.eps) - return t / n * self.g - return map_first_tuple_or_el(x, norm) - - -class ProjectInOut(nn.Module): - def __init__(self, fn, dim_in, dim_out, project_out=True): - super().__init__() - self.fn = fn - self.project_in = nn.Linear(dim_in, dim_out) - self.project_out = nn.Linear(dim_out, dim_in) if project_out else identity - - def forward(self, x, **kwargs): - x = self.project_in(x) - x, loss = self.fn(x, **kwargs) - x = self.project_out(x) - return x, loss - - -class MatrixMultiply(nn.Module): - def __init__(self, tensor, transpose=False): - super().__init__() - self.tensor = tensor - self.transpose = transpose - - def forward(self, x): - tensor = self.tensor - if self.transpose: - tensor = tensor.t() - return x @ tensor - -# positional embeddings - - -class DepthWiseConv1d(nn.Module): - def __init__(self, dim_in, dim_out, kernel_size, stride=1, bias=True, causal=False): - super().__init__() - self.padding = ((kernel_size - 1), 0) if causal else (kernel_size // 2, kernel_size // 2) - - self.net = nn.Sequential( - nn.Conv1d(dim_in, dim_in, kernel_size=kernel_size, groups=dim_in, stride=stride, bias=bias), - nn.Conv1d(dim_in, dim_out, 1, bias=bias) - ) - - def forward(self, x): - x = F.pad(x, self.padding, value=0.) - return self.net(x) - - -class FixedPositionalEmbedding(nn.Module): - def __init__(self, dim, max_seq_len): - super().__init__() - inv_freq = 1. 
/ (10000 ** (torch.arange(0, dim, 2).float() / dim)) - position = torch.arange(0, max_seq_len, dtype=torch.float) - sinusoid_inp = torch.einsum("i,j->ij", position, inv_freq) - emb = torch.cat((sinusoid_inp.sin(), sinusoid_inp.cos()), dim=-1) - self.register_buffer('emb', emb) - - def forward(self, x): - return self.emb[None, :x.shape[1], :].to(x) - - -def rotate_every_two(x): - x = rearrange(x, '... (d j) -> ... d j', j=2) - x1, x2 = x.unbind(dim=-1) - x = torch.stack((-x2, x1), dim=-1) - return rearrange(x, '... d j -> ... (d j)') - - -def apply_rotary_pos_emb(q, k, sinu_pos): - sinu_pos = rearrange(sinu_pos, '() n (j d) -> n j d', j=2) - sin, cos = sinu_pos.unbind(dim=-2) - sin, cos = map(lambda t: repeat(t, 'b n -> b (n j)', j=2), (sin, cos)) - q, k = map(lambda t: (t * cos) + (rotate_every_two(t) * sin), (q, k)) - return q, k - -# kmeans related function and class - - -def update_kmeans_on_backwards(module): - module.kmean_modules = find_modules(module, Kmeans) - - def hook(_, grad_in, grad_out): - for m in module.kmean_modules: - m.update() - - return module.register_backward_hook(hook) - - -def similarity(x, means): - return torch.einsum('bhld,hcd->bhlc', x, means) - - -def dists_and_buckets(x, means): - dists = similarity(x, means) - _, buckets = torch.max(dists, dim=-1) - return dists, buckets - - -def batched_bincount(index, num_classes, dim=-1): - shape = list(index.shape) - shape[dim] = num_classes - out = index.new_zeros(shape) - out.scatter_add_(dim, index, torch.ones_like(index, dtype=index.dtype)) - return out - - -def kmeans_iter(x, means, buckets=None): - b, h, _, d, dtype, num_clusters = *x.shape, x.dtype, means.shape[1] - - if not exists(buckets): - _, buckets = dists_and_buckets(x, means) - - bins = batched_bincount(buckets, num_clusters).sum(0, keepdim=True) - zero_mask = bins.long() == 0 - - means_ = buckets.new_zeros(b, h, num_clusters, d, dtype=dtype) - means_.scatter_add_(-2, expand_dim(buckets, -1, d), x) - means_ = F.normalize(means_.sum(0, keepdim=True), dim=-1).type(dtype) - - means = torch.where(zero_mask.unsqueeze(-1), means, means_) - means = means.squeeze(0) - return means - - -def distribution(dists, window_size): - _, topk_indices = dists.topk(k=window_size, dim=-2) - indices = topk_indices.transpose(-2, -1) - return indices.reshape(*indices.size()[:2], -1) - - -class Kmeans(nn.Module): - def __init__(self, num_heads, head_dim, num_clusters, ema_decay=0.999, commitment=1e-4): - super().__init__() - self.commitment = commitment - self.ema_decay = ema_decay - - self.register_buffer('means', torch.randn(num_heads, num_clusters, head_dim)) - self.register_buffer('initted', torch.tensor(False)) - self.num_new_means = 0 - self.new_means = None - - @torch.no_grad() - def init(self, x): - if self.initted: - return - _, h, _, d, device, _ = *x.shape, x.device, x.dtype - - num_clusters = self.means.shape[1] - - means = x.transpose(0, 1).contiguous().view(h, -1, d) - num_samples = means.shape[1] - - if num_samples >= num_clusters: - indices = torch.randperm(num_samples, device=device)[:num_clusters] - else: - indices = torch.randint(0, num_samples, (num_clusters,), device=device) - - means = means[:, indices] - - for _ in range(KMEAN_INIT_ITERS): - means = kmeans_iter(x, means) - - self.num_new_means = 0 - self.means.data.copy_(means) - self.initted.data.copy_(torch.tensor(True)) - - @torch.no_grad() - def update(self, new_means=None): - new_means = default(new_means, self.new_means) - assert exists(new_means), 'new kmeans has not been supplied' - 
ema_inplace(self.means, new_means, self.ema_decay) - - del self.new_means - self.new_means = None - self.num_new_means = 0 - - def forward(self, x, update_means=False): - self.init(x) - - b, dtype = x.shape[0], x.dtype - means = self.means.type(dtype) - x = F.normalize(x, 2, dim=-1).type(dtype) - - with torch.no_grad(): - dists, buckets = dists_and_buckets(x, means) - - routed_means = batched_index_select(expand_dim(means, 0, b), buckets) - loss = F.mse_loss(x, routed_means) * self.commitment - - if update_means: - with torch.no_grad(): - means = kmeans_iter(x, means, buckets) - self.new_means = ema(self.new_means, means, self.num_new_means / (self.num_new_means + 1)) - self.num_new_means += 1 - - return dists, loss - -# kmeans attention class - - -class KmeansAttention(nn.Module): - def __init__(self, num_clusters, window_size, num_heads, head_dim, causal=False, dropout=0., ema_decay=0.999, commitment=1e-4, context_window_size=None, receives_context=False, num_mem_kv=0, shared_qk=False): - super().__init__() - self.num_heads = num_heads - self.num_clusters = num_clusters - self.head_dim = head_dim - - self.window_size = window_size - self.context_window_size = default(context_window_size, window_size) - self.causal = causal - - self.shared_qk = shared_qk - self.receives_context = receives_context - self.kmeans = Kmeans(num_heads, head_dim, num_clusters, ema_decay, commitment) - self.dropout = nn.Dropout(dropout) - - self.num_mem_kv = max(num_mem_kv, 1 if causal and not shared_qk else 0) - self.mem_key = nn.Parameter(torch.randn(num_heads, num_clusters, self.num_mem_kv, head_dim)) - self.mem_value = nn.Parameter(torch.randn(num_heads, num_clusters, self.num_mem_kv, head_dim)) - - def forward(self, q, k, v, query_mask=None, key_mask=None, **kwargs): - b, h, t, d, kv_t, wsz, c_wsz, nc, device, dtype = *q.shape, k.shape[2], self.window_size, self.context_window_size, self.num_clusters, q.device, q.dtype - is_reverse = kwargs.pop('_reverse', False) - - out = torch.zeros_like(q, dtype=dtype) - - update_kmeans = self.training and not is_reverse - - key_mask = default(key_mask, query_mask) if not self.receives_context else key_mask - kv_wsz = wsz if not self.receives_context else c_wsz - - wsz = min(wsz, t) - kv_wsz = min(kv_wsz, kv_t) - - if not self.shared_qk or self.receives_context: - dists, aux_loss = self.kmeans(torch.cat((q, k), dim=2), update_kmeans) - q_dists, k_dists = split_at_index(2, t, dists) - indices = distribution(q_dists, wsz) - kv_indices = distribution(k_dists, kv_wsz) - else: - dists, aux_loss = self.kmeans(q, update_kmeans) - k = F.normalize(k, dim=-1).to(q) - indices = distribution(dists, wsz) - kv_indices = indices - - q = batched_index_select(q, indices) - k = batched_index_select(k, kv_indices) - v = batched_index_select(v, kv_indices) - - reshape_with_window = lambda x: x.reshape(b, h, nc, -1, d) - q, k, v = map(reshape_with_window, (q, k, v)) - - m_k, m_v = map(lambda x: expand_dim(x, 0, b).to(q), (self.mem_key, self.mem_value)) - k, v = map(lambda x: torch.cat(x, dim=3), ((m_k, k), (m_v, v))) - - dots = torch.einsum('bhnid,bhnjd->bhnij', q, k) * (d ** -0.5) - - mask_value = max_neg_value(dots) - - if exists(query_mask) or exists(key_mask): - query_mask = default(query_mask, lambda: torch.ones((b, t), device=device).bool()) - key_mask = default(key_mask, lambda: torch.ones((b, kv_t), device=device).bool()) - - q_mask = expand_dim(query_mask, 1, h).gather(2, indices) - kv_mask = expand_dim(key_mask, 1, h).gather(2, kv_indices) - q_mask, kv_mask = map(lambda t: 
t.reshape(b, h, nc, -1), (q_mask, kv_mask)) - mask = q_mask[:, :, :, :, None] * kv_mask[:, :, :, None, :] - mask = F.pad(mask, (self.num_mem_kv, 0), value=1) - dots.masked_fill_(~mask, mask_value) - del mask - - if self.causal: - q_mask, kv_mask = map(lambda t: t.reshape(b, h, nc, -1), (indices, kv_indices)) - mask = q_mask[:, :, :, :, None] >= kv_mask[:, :, :, None, :] - mask = F.pad(mask, (self.num_mem_kv, 0), value=1) - dots.masked_fill_(~mask, mask_value) - del mask - - if self.shared_qk: - q_mask, kv_mask = map(lambda t: t.reshape(b, h, nc, -1), (indices, kv_indices)) - mask = q_mask[:, :, :, :, None] == kv_mask[:, :, :, None, :] - mask = F.pad(mask, (self.num_mem_kv, 0), value=0) - dots.masked_fill_(mask, TOKEN_SELF_ATTN_VALUE) - del mask - - dots = dots.softmax(dim=-1) - dots = self.dropout(dots) - - bo = torch.einsum('bhcij,bhcjd->bhcid', dots, v) - so = torch.reshape(bo, (b, h, -1, bo.shape[-1])).type(dtype) - out = scatter_mean(out, so, indices.unsqueeze(-1).expand_as(so), -2) - return out, aux_loss - -# feedforward - - -class GELU_(nn.Module): - def forward(self, x): - return 0.5 * x * (1 + torch.tanh(math.sqrt(2 / math.pi) * (x + 0.044715 * torch.pow(x, 3)))) - - -GELU = nn.GELU if hasattr(nn, 'GELU') else GELU_ - - -class FeedForward(nn.Module): - def __init__(self, dim, mult=4, dropout=0., activation=None, glu=False): - super().__init__() - activation = default(activation, GELU) - - self.glu = glu - self.w1 = nn.Linear(dim, dim * mult * (2 if glu else 1)) - self.act = activation() - self.dropout = nn.Dropout(dropout) - self.w2 = nn.Linear(dim * mult, dim) - - def forward(self, x, **kwargs): - if not self.glu: - x = self.w1(x) - x = self.act(x) - else: - x, v = self.w1(x).chunk(2, dim=-1) - x = self.act(x) * v - - x = self.dropout(x) - x = self.w2(x) - return x - -# self attention - - -class SelfAttention(nn.Module): - def __init__(self, dim, max_seq_len, heads, local_attn_heads, window_size, dim_head=None, local_attn_window_size=None, local_attn_radius_blocks=1, causal=False, attn_dropout=0., dropout=0., kmeans_ema_decay=0.999, commitment_factor=1e-4, receives_context=False, context_window_size=None, rel_pos_emb=True, num_mem_kv=0, shared_qk=False, conv_query_kernel=9): - super().__init__() - assert dim_head or (dim % heads) == 0, 'hidden dimension must be divisible by number of heads' - assert (max_seq_len % window_size) == 0, 'maximum sequence length must be divisible by the target window size' - assert local_attn_heads <= heads, 'number of local attention heads must be less than total heads' - assert not (receives_context and local_attn_heads > 0), 'local attention cannot be used for self attention with context' - assert not (receives_context and causal), 'contextual attention layer cannot be causal' - - local_attn_window_size = default(local_attn_window_size, window_size) - context_window_size = default(context_window_size, window_size) - - self.shared_qk = shared_qk - self.receives_context = receives_context - self.heads = heads - self.local_attn_heads = local_attn_heads - self.global_attn_heads = heads - local_attn_heads - - self.causal = causal - self.window_size = window_size - - dim_head = default(dim_head, dim // heads) - dim_heads = dim_head * heads - self.dim_head = dim_head - - num_clusters = max_seq_len // window_size - - # local - - local_dim_heads = dim_head * self.local_attn_heads - - if self.local_attn_heads > 0: - rel_pos_emb_config = (dim_head, local_attn_heads) if rel_pos_emb else None - self.local_attn = LocalAttention(dim=dim_head, 
window_size=local_attn_window_size, causal=causal, dropout=attn_dropout, rel_pos_emb_config=rel_pos_emb_config, look_backward=local_attn_radius_blocks, look_forward=0 if causal else local_attn_radius_blocks) - self.local_to_qkv = nn.Linear(dim, 3 * local_dim_heads) - - # global - - global_dim_heads = dim_head * self.global_attn_heads - - if self.global_attn_heads > 0: - self.global_attn = KmeansAttention(num_clusters, window_size, self.global_attn_heads, dim_head, causal=causal, dropout=attn_dropout, ema_decay=kmeans_ema_decay, commitment=commitment_factor, receives_context=receives_context, num_mem_kv=num_mem_kv, shared_qk=shared_qk) - - self.to_q = nn.Sequential( - Rearrange('b n c -> b c n'), - DepthWiseConv1d(dim, global_dim_heads, conv_query_kernel, causal=causal), - Rearrange('b c n -> b n c') - ) - - self.to_v = nn.Linear(dim, global_dim_heads, bias=False) - - if not self.shared_qk: - self.to_k = nn.Linear(dim, global_dim_heads, bias=False) - - # out - - self.to_out = nn.Linear(dim_heads, dim, bias=False) - self.dropout = nn.Dropout(dropout) - - def forward(self, query, key, value, context=None, key_padding_mask=None, context_mask=None, pos_emb=None, **kwargs): - assert not (self.receives_context and not exists(context)), 'context must be passed if self attention is set to receive context' - input_mask = key_padding_mask - x = query.transpose(0, 1) - b, t, _, h, dh = *x.shape, self.heads, self.dim_head - has_local, has_global = map(lambda x: x > 0, (self.local_attn_heads, self.global_attn_heads)) - - split_heads = lambda v: reshape_dim(v, -1, (-1, dh)).transpose(1, 2).contiguous() - - if has_local: - local_qkv = self.local_to_qkv(x).chunk(3, dim=-1) - lq, lk, lv = map(split_heads, local_qkv) - - if has_global: - kv_input = x if not self.receives_context else context - - q, v = self.to_q(x), self.to_v(kv_input) - - if not self.shared_qk: - k = self.to_k(kv_input) - else: - k = self.to_q(kv_input) if self.receives_context else q - - q, k, v = map(split_heads, (q, k, v)) - - out = [] - total_loss = torch.tensor(0., requires_grad=True, **to(x)) - - if has_local: - local_out = self.local_attn(lq, lk, lv, input_mask=input_mask) - out.append(local_out) - - if has_global: - if not self.receives_context and exists(pos_emb): - q, k = apply_rotary_pos_emb(q, k, pos_emb) - - global_out, loss = self.global_attn(q, k, v, query_mask=input_mask, key_mask=context_mask) - total_loss = total_loss + loss - - out.append(global_out) - - out = torch.cat(out, dim=1) - out = out.reshape(b, h, t, -1).transpose(1, 2).reshape(b, t, -1) - out = self.dropout(out.transpose(0, 1)) - # out = self.to_out(out) - return out, total_loss diff --git a/spaces/Harsimran19/SegmentationGAN/app.py b/spaces/Harsimran19/SegmentationGAN/app.py deleted file mode 100644 index 90d62a81770176ea865721ba0eac938fdedea7ba..0000000000000000000000000000000000000000 --- a/spaces/Harsimran19/SegmentationGAN/app.py +++ /dev/null @@ -1,49 +0,0 @@ -import gradio as gr -import torch -import numpy as np -import os -from model import gen_model -from torchvision import transforms -MEAN = (0.5, 0.5, 0.5,) -STD = (0.5, 0.5, 0.5,) -# Model -gen,transform_gen=gen_model() -# print(gen) -# to_img=T.ToPILImage() -# examples=["examples/input_0.png","examples/input_9.png"] -example_list = [["examples/" + example] for example in os.listdir("examples")] -# example_list=['1.jpg','2.jpg'] -# def de_norm(img): -# img_ = img.mul(torch.FloatTensor(STD).view(3, 1, 1)) -# img_ = img_.add(torch.FloatTensor(MEAN).view(3, 1, 1)).detach().numpy() -# img_ = 
np.transpose(img_, (1, 2, 0)) -# return img_ -inverse_transform = transforms.Compose([ transforms.Normalize(mean=[-1.0, -1.0, -1.0], std=[2.0, 2.0, 2.0]), - transforms.ToPILImage() -]) -def predict(img): - # Apply Transformations - img = transform_gen(img).unsqueeze(0) - - # Predict - gen.eval() - with torch.inference_mode(): - y_gen = gen(img) - y_gen = y_gen[0] - y_gen = inverse_transform(y_gen) - - return y_gen - - - -# Gradio App -title="Image Segmentation GAN" -description="This segments a Normal Image" - -demo=gr.Interface(fn=predict, - inputs=gr.Image(type='pil'), - outputs=gr.Image(type='pil'), - title=title , - examples=example_list, - description=description) -demo.launch(debug=False) \ No newline at end of file diff --git a/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/tts_infer/__init__.py b/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/tts_infer/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Harveenchadha/en_to_indic_translation/legacy/env.sh b/spaces/Harveenchadha/en_to_indic_translation/legacy/env.sh deleted file mode 100644 index 9c9611b0d11e821bdb17b612b64c3d14e208cc74..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/en_to_indic_translation/legacy/env.sh +++ /dev/null @@ -1,17 +0,0 @@ - -export SRC='' - -## Python env directory where fairseq is installed -export PYTHON_ENV='' - -export SUBWORD_NMT_DIR='' -export INDIC_RESOURCES_PATH='' -export INDIC_NLP_HOME='' - -export CUDA_HOME='' - -export PATH=$CUDA_HOME/bin:$INDIC_NLP_HOME:$PATH -export LD_LIBRARY_PATH=$CUDA_HOME/lib64 - -# set environment variable to control GPUS visible to the application -#export CUDA_VISIBLE_DEVICES="' diff --git a/spaces/Hazem/roop/roop/processors/frame/face_swapper.py b/spaces/Hazem/roop/roop/processors/frame/face_swapper.py deleted file mode 100644 index c53b5b86d7e87870191c01855652088d43726142..0000000000000000000000000000000000000000 --- a/spaces/Hazem/roop/roop/processors/frame/face_swapper.py +++ /dev/null @@ -1,88 +0,0 @@ -from typing import Any, List, Callable -import cv2 -import insightface -import threading - -import roop.globals -import roop.processors.frame.core -from roop.core import update_status -from roop.face_analyser import get_one_face, get_many_faces -from roop.typing import Face, Frame -from roop.utilities import conditional_download, resolve_relative_path, is_image, is_video - -FACE_SWAPPER = None -THREAD_LOCK = threading.Lock() -NAME = 'ROOP.FACE-SWAPPER' - - -def get_face_swapper() -> Any: - global FACE_SWAPPER - - with THREAD_LOCK: - if FACE_SWAPPER is None: - model_path = resolve_relative_path('../models/inswapper_128.onnx') - FACE_SWAPPER = insightface.model_zoo.get_model(model_path, providers=roop.globals.execution_providers) - return FACE_SWAPPER - - -def pre_check() -> bool: - download_directory_path = resolve_relative_path('../models') - conditional_download(download_directory_path, ['https://huggingface.co/henryruhs/roop/resolve/main/inswapper_128.onnx']) - return True - - -def pre_start() -> bool: - if not is_image(roop.globals.source_path): - update_status('Select an image for source path.', NAME) - return False - elif not get_one_face(cv2.imread(roop.globals.source_path)): - update_status('No face in source path detected.', NAME) - return False - if not is_image(roop.globals.target_path) and not is_video(roop.globals.target_path): - update_status('Select an image or video for target path.', NAME) - return False - return True - - -def post_process() 
-> None: - global FACE_SWAPPER - - FACE_SWAPPER = None - - -def swap_face(source_face: Face, target_face: Face, temp_frame: Frame) -> Frame: - return get_face_swapper().get(temp_frame, target_face, source_face, paste_back=True) - - -def process_frame(source_face: Face, temp_frame: Frame) -> Frame: - if roop.globals.many_faces: - many_faces = get_many_faces(temp_frame) - if many_faces: - for target_face in many_faces: - temp_frame = swap_face(source_face, target_face, temp_frame) - else: - target_face = get_one_face(temp_frame) - if target_face: - temp_frame = swap_face(source_face, target_face, temp_frame) - return temp_frame - - -def process_frames(source_path: str, temp_frame_paths: List[str], update: Callable[[], None]) -> None: - source_face = get_one_face(cv2.imread(source_path)) - for temp_frame_path in temp_frame_paths: - temp_frame = cv2.imread(temp_frame_path) - result = process_frame(source_face, temp_frame) - cv2.imwrite(temp_frame_path, result) - if update: - update() - - -def process_image(source_path: str, target_path: str, output_path: str) -> None: - source_face = get_one_face(cv2.imread(source_path)) - target_frame = cv2.imread(target_path) - result = process_frame(source_face, target_frame) - cv2.imwrite(output_path, result) - - -def process_video(source_path: str, temp_frame_paths: List[str]) -> None: - roop.processors.frame.core.process_video(source_path, temp_frame_paths, process_frames) diff --git a/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/utils/dataset_utils.py b/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/utils/dataset_utils.py deleted file mode 100644 index 4bcbcf6f855783e632bffa367d0bb54f092db4e0..0000000000000000000000000000000000000000 --- a/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/utils/dataset_utils.py +++ /dev/null @@ -1,424 +0,0 @@ -# Copyright 2021 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import json -import os -import pandas as pd -import plotly -import pyarrow.feather as feather -import utils -from dataclasses import asdict -from datasets import Dataset, get_dataset_infos, load_dataset, load_from_disk, \ - NamedSplit -from dotenv import load_dotenv -from huggingface_hub import Repository, list_datasets -from json2html import * -from os import getenv -from os.path import exists, isdir, join as pjoin -from pathlib import Path - -# treating inf values as NaN as well -pd.set_option("use_inf_as_na", True) - -## String names used in Hugging Face dataset configs. -HF_FEATURE_FIELD = "features" -HF_LABEL_FIELD = "label" -HF_DESC_FIELD = "description" - -CACHE_DIR = "cache_dir" -## String names we are using within this code. -# These are not coming from the stored dataset nor HF config, -# but rather used as identifiers in our dicts and dataframes. 
-TEXT_FIELD = "text" -PERPLEXITY_FIELD = "perplexity" -TOKENIZED_FIELD = "tokenized_text" -EMBEDDING_FIELD = "embedding" -LENGTH_FIELD = "length" -VOCAB = "vocab" -WORD = "word" -CNT = "count" -PROP = "proportion" -TEXT_NAN_CNT = "text_nan_count" -TXT_LEN = "text lengths" -TOT_WORDS = "total words" -TOT_OPEN_WORDS = "total open words" - -_DATASET_LIST = [ - "c4", - "squad", - "squad_v2", - "hate_speech18", - "hate_speech_offensive", - "glue", - "super_glue", - "wikitext", - "imdb", - "HuggingFaceM4/OBELICS", -] - -_STREAMABLE_DATASET_LIST = [ - "c4", - "wikitext", - "HuggingFaceM4/OBELICS", -] - -_MAX_ROWS = 2000 - -logs = utils.prepare_logging(__file__) - -def _load_dotenv_for_cache_on_hub(): - """ - This function loads and returns the organization name that you've set up on the - hub for storing your data measurements cache on the hub. It also loads the associated - access token. It expects you to have HUB_CACHE_ORGANIZATION= - and HF_TOKEN= on separate lines in a file named .env at the root of this repo. - - Returns: - tuple of strings: hub_cache_organization, hf_token - """ - if Path(".env").is_file(): - load_dotenv(".env") - hf_token = getenv("HF_TOKEN") - hub_cache_organization = getenv("HUB_CACHE_ORGANIZATION") - return hub_cache_organization, hf_token - -def get_cache_dir_naming(out_dir, dataset, config, split, feature): - feature_text = hyphenated(feature) - dataset_cache_name = f"{dataset}_{config}_{split}_{feature_text}" - local_dataset_cache_dir = out_dir + "/" + dataset_cache_name - return dataset_cache_name, local_dataset_cache_dir - -def initialize_cache_hub_repo(local_cache_dir, dataset_cache_name): - """ - This function tries to initialize a dataset cache on the huggingface hub. The - function expects you to have HUB_CACHE_ORGANIZATION= - and HF_TOKEN= on separate lines in a file named .env at the root of this repo. - - Args: - local_cache_dir (string): - The path to the local dataset cache. - dataset_cache_name (string): - The name of the dataset repo on the huggingface hub that you want. - """ - - hub_cache_organization, hf_token = _load_dotenv_for_cache_on_hub() - clone_source = pjoin(hub_cache_organization, dataset_cache_name) - repo = Repository(local_dir=local_cache_dir, - clone_from=clone_source, - repo_type="dataset", use_auth_token=hf_token) - repo.lfs_track(["*.feather"]) - return repo - -def pull_cache_from_hub(cache_path, dataset_cache_dir): - """ - This function tries to pull a datasets cache from the huggingface hub if a - cache for the dataset does not already exist locally. The function expects you - to have you HUB_CACHE_ORGANIZATION= - and HF_TOKEN= on separate lines in a file named .env at the root of this repo. - - Args: - cache_path (string): - The path to the local dataset cache that you want. - dataset_cache_dir (string): - The name of the dataset repo on the huggingface hub. 
- - """ - - hub_cache_organization, hf_token = _load_dotenv_for_cache_on_hub() - clone_source = pjoin(hub_cache_organization, dataset_cache_dir) - - if isdir(cache_path): - logs.warning("Already a local cache for the dataset, so not pulling from the hub.") - else: - # Here, dataset_info.id is of the form: / - if dataset_cache_dir in [ - dataset_info.id.split("/")[-1] for dataset_info in - list_datasets(author=hub_cache_organization, - use_auth_token=hf_token)]: - Repository(local_dir=cache_path, - clone_from=clone_source, - repo_type="dataset", use_auth_token=hf_token) - logs.info("Pulled cache from hub!") - else: - logs.warning("Asking to pull cache from hub but cannot find cached repo on the hub.") - - -def load_truncated_dataset( - dataset_name, - config_name, - split_name, - num_rows=_MAX_ROWS, - use_cache=True, - cache_dir=CACHE_DIR, - use_streaming=True, - save=True, -): - """ - This function loads the first `num_rows` items of a dataset for a - given `config_name` and `split_name`. - If `use_cache` and `cache_name` exists, the truncated dataset is loaded from - `cache_name`. - Otherwise, a new truncated dataset is created and immediately saved - to `cache_name`. - When the dataset is streamable, we iterate through the first - `num_rows` examples in streaming mode, write them to a jsonl file, - then create a new dataset from the json. - This is the most direct way to make a Dataset from an IterableDataset - as of datasets version 1.6.1. - Otherwise, we download the full dataset and select the first - `num_rows` items - Args: - dataset_name (string): - dataset id in the dataset library - config_name (string): - dataset configuration - split_name (string): - split name - num_rows (int) [optional]: - number of rows to truncate the dataset to - cache_dir (string): - name of the cache directory - use_cache (bool): - whether to load from the cache if it exists - use_streaming (bool): - whether to use streaming when the dataset supports it - save (bool): - whether to save the dataset locally - Returns: - Dataset: the (truncated if specified) dataset as a Dataset object - """ - logs.info("Loading or preparing dataset saved in %s " % cache_dir) - if use_cache and exists(cache_dir): - dataset = load_from_disk(cache_dir) - else: - if use_streaming and dataset_name in _STREAMABLE_DATASET_LIST: - iterable_dataset = load_dataset( - dataset_name, - name=config_name, - split=split_name, - streaming=True, - ).take(num_rows) - rows = list(iterable_dataset) - def gen(): - yield from rows - dataset = Dataset.from_generator(gen, features=iterable_dataset.features) - dataset._split = NamedSplit(split_name) - # f = open("temp.jsonl", "w", encoding="utf-8") - # for row in rows: - # _ = f.write(json.dumps(row) + "\n") - # f.close() - # dataset = Dataset.from_json( - # "temp.jsonl", features=iterable_dataset.features, split=NamedSplit(split_name) - # ) - else: - full_dataset = load_dataset( - dataset_name, - name=config_name, - split=split_name, - ) - if len(full_dataset) >= num_rows: - dataset = full_dataset.select(range(num_rows)) - # Make the directory name clear that it's not the full dataset. 
- cache_dir = pjoin(cache_dir, ("_%s" % num_rows)) - else: - dataset = full_dataset - if save: - dataset.save_to_disk(cache_dir) - return dataset - -def hyphenated(features): - """When multiple features are asked for, hyphenate them together when they're used for filenames or titles""" - return '-'.join(features) - -def get_typed_features(features, ftype="string", parents=None): - """ - Recursively get a list of all features of a certain dtype - :param features: - :param ftype: - :param parents: - :return: a list of tuples > e.g. ('A', 'B', 'C') for feature example['A']['B']['C'] - """ - if parents is None: - parents = [] - typed_features = [] - for name, feat in features.items(): - if isinstance(feat, dict): - if feat.get("dtype", None) == ftype or feat.get("feature", {}).get( - ("dtype", None) == ftype - ): - typed_features += [tuple(parents + [name])] - elif "feature" in feat: - if feat["feature"].get("dtype", None) == ftype: - typed_features += [tuple(parents + [name])] - elif isinstance(feat["feature"], dict): - typed_features += get_typed_features( - feat["feature"], ftype, parents + [name] - ) - else: - for k, v in feat.items(): - if isinstance(v, dict): - typed_features += get_typed_features( - v, ftype, parents + [name, k] - ) - elif name == "dtype" and feat == ftype: - typed_features += [tuple(parents)] - return typed_features - - -def get_label_features(features, parents=None): - """ - Recursively get a list of all features that are ClassLabels - :param features: - :param parents: - :return: pairs of tuples as above and the list of class names - """ - if parents is None: - parents = [] - label_features = [] - for name, feat in features.items(): - if isinstance(feat, dict): - if "names" in feat: - label_features += [(tuple(parents + [name]), feat["names"])] - elif "feature" in feat: - if "names" in feat: - label_features += [ - (tuple(parents + [name]), feat["feature"]["names"]) - ] - elif isinstance(feat["feature"], dict): - label_features += get_label_features( - feat["feature"], parents + [name] - ) - else: - for k, v in feat.items(): - if isinstance(v, dict): - label_features += get_label_features(v, parents + [name, k]) - elif name == "names": - label_features += [(tuple(parents), feat)] - return label_features - - -# get the info we need for the app sidebar in dict format -def dictionarize_info(dset_info): - info_dict = asdict(dset_info) - res = { - "config_name": info_dict["config_name"], - "splits": { - spl: 100 - for spl, spl_info in info_dict["splits"].items() - }, - "features": { - "string": get_typed_features(info_dict["features"], "string"), - "int32": get_typed_features(info_dict["features"], "int32"), - "float32": get_typed_features(info_dict["features"], "float32"), - "label": get_label_features(info_dict["features"]), - }, - "description": dset_info.description, - } - return res - -def get_dataset_info_dicts(dataset_id=None): - """ - Creates a dict from dataset configs. 
- Uses the datasets lib's get_dataset_infos - :return: Dictionary mapping dataset names to their configurations - """ - if dataset_id is not None: - ds_name_to_conf_dict = { - dataset_id: { - config_name: dictionarize_info(config_info) - for config_name, config_info in get_dataset_infos(dataset_id).items() - } - } - else: - ds_name_to_conf_dict = { - ds_id: { - config_name: dictionarize_info(config_info) - for config_name, config_info in get_dataset_infos(ds_id).items() - } - for ds_id in _DATASET_LIST - } - return ds_name_to_conf_dict - - -# get all instances of a specific field in a dataset -def extract_field(examples, field_path, new_field_name=None): - if new_field_name is None: - new_field_name = "_".join(field_path) - field_list = [] - # TODO: Breaks the CLI if this isn't checked. - if isinstance(field_path, str): - field_path = [field_path] - item_list = examples[field_path[0]] - for field_name in field_path[1:]: - item_list = [ - next_item - for item in item_list - for next_item in ( - item[field_name] - if isinstance(item[field_name], list) - else [item[field_name]] - ) - ] - field_list += [ - field - for item in item_list - for field in (item if isinstance(item, list) else [item]) - ] - return {new_field_name: field_list} - -def make_path(path): - os.makedirs(path, exist_ok=True) - -def counter_dict_to_df(dict_input, key_as_column=False): - df_output = pd.DataFrame(dict_input, index=[0]).T - if key_as_column: - df_output.reset_index(inplace=True) - df_output.columns = ["instance", "count"] - else: - df_output.columns = ["count"] - return df_output.sort_values(by="count", ascending=False) - -def write_plotly(fig, fid): - write_json(plotly.io.to_json(fig), fid) - -def read_plotly(fid): - fig = plotly.io.from_json(json.load(open(fid, encoding="utf-8"))) - return fig - -def write_json_as_html(input_json, html_fid): - html_dict = json2html.convert(json=input_json) - with open(html_fid, "w+") as f: - f.write(html_dict) - -def df_to_write_html(input_df, html_fid): - """Writes a dataframe to an HTML file""" - input_df.to_HTML(html_fid) - -def read_df(df_fid): - return pd.DataFrame.from_dict(read_json(df_fid), orient="index") - -def write_df(df, df_fid): - """In order to preserve the index of our dataframes, we can't - use the compressed pandas dataframe file format .feather. - There's a preference for json amongst HF devs, so we use that here.""" - df_dict = df.to_dict('index') - write_json(df_dict, df_fid) - -def write_json(json_dict, json_fid): - with open(json_fid, "w", encoding="utf-8") as f: - json.dump(json_dict, f) - -def read_json(json_fid): - json_dict = json.load(open(json_fid, encoding="utf-8")) - return json_dict \ No newline at end of file diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/data/huffman/huffman_mmap_indexed_dataset.py b/spaces/ICML2022/OFA/fairseq/fairseq/data/huffman/huffman_mmap_indexed_dataset.py deleted file mode 100644 index 3279dae89a8bca95178bbe1285d3cb334890b12f..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/data/huffman/huffman_mmap_indexed_dataset.py +++ /dev/null @@ -1,287 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import mmap -import os -import shutil -import struct -import typing as tp -from functools import lru_cache - -import numpy as np -import torch -from fairseq.data import indexed_dataset -from fairseq.data.huffman import HuffmanCoder -from fairseq.file_io import PathManager - - -class HuffmanMMapIndex: - """ - keep an index of the offsets in the huffman binary file. - First a header, then the list of sizes (num tokens) for each instance and finally - the addresses of each instance. - """ - - _HDR_MAGIC = b"HUFFIDX\x00\x00" - _VERSION = 1 - - @classmethod - def writer(cls, path: str, data_len: int): - class _Writer: - def __enter__(self): - self._file = open(path, "wb") - - # write header (magic + version) - self._file.write(cls._HDR_MAGIC) - self._file.write(struct.pack(" None: - self._path_prefix = path_prefix - self._coder = coder - self._sizes = [] - self._ptrs = [] - self._data_len = 0 - - def open(self): - self._coder.to_file(vocab_file_path(self._path_prefix)) - self._data_file = open(indexed_dataset.data_file_path(self._path_prefix), "wb") - - def __enter__(self) -> "HuffmanMMapIndexedDatasetBuilder": - self.open() - return self - - def add_item(self, tokens: tp.List[str]) -> None: - """ - add a list of tokens to the dataset, they will compressed with the - provided coder before being written to file. - """ - encoded = self._coder.encode(tokens) - code_len = len(encoded) - last_ptr = 0 - if len(self._ptrs) > 0: - last_ptr = self._ptrs[-1] - self._sizes.append(len(tokens)) - self._ptrs.append(last_ptr + code_len) - self._data_len += code_len - self._data_file.write(encoded) - - def append(self, other_dataset_path_prefix: str) -> None: - """ - append an existing dataset. - Beware, if it wasn't built with the same coder, you are in trouble. - """ - other_index = HuffmanMMapIndex( - indexed_dataset.index_file_path(other_dataset_path_prefix) - ) - for (ptr, size) in other_index: - self._ptrs.append(ptr + self._data_len) - self._sizes.append(size) - - # Concatenate data - with open(indexed_dataset.data_file_path(other_dataset_path_prefix), "rb") as f: - shutil.copyfileobj(f, self._data_file) - - self._data_len += other_index.data_len - - def close(self): - self._data_file.close() - with HuffmanMMapIndex.writer( - indexed_dataset.index_file_path(self._path_prefix), self._data_len - ) as index: - index.write(self._sizes, self._ptrs) - - def __exit__(self, exc_type, exc_val, exc_tb) -> None: - self.close() diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/data/multilingual/sampling_method.py b/spaces/ICML2022/OFA/fairseq/fairseq/data/multilingual/sampling_method.py deleted file mode 100644 index 140c68f01d60e902ef88f11f30f8813dc15fc681..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/data/multilingual/sampling_method.py +++ /dev/null @@ -1,78 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import logging -from typing import List - - -logger = logging.getLogger(__name__) - - -def uniform(dataset_sizes: List[int]): - return [1.0] * len(dataset_sizes) - - -def temperature_sampling(dataset_sizes, temp): - total_size = sum(dataset_sizes) - return [(size / total_size) ** (1.0 / temp) for size in dataset_sizes] - - -def make_temperature_sampling(temp=1.0): - def sampling_func(dataset_sizes): - return temperature_sampling(dataset_sizes, temp) - - return sampling_func - - -def make_ratio_sampling(ratios): - def sampling_func(dataset_sizes): - return ratios - - return sampling_func - - -class SamplingMethod: - @staticmethod - def add_arguments(parser): - parser.add_argument( - "--sampling-method", - choices=[ - "uniform", - "temperature", - "concat", - "RoundRobin", - ], - type=str, - default="concat", - help="The method to sample data per language pairs", - ) - parser.add_argument( - "--sampling-temperature", - default=1.5, - type=float, - help="only work with --sampling-method temperature", - ) - - @staticmethod - def build_sampler(args, task): - return SamplingMethod(args, task) - - def __init__(self, args, task): - self.args = args - self.task = task - - def is_adaptive(self): - return False - - def sampling_method_selector(self): - args = self.args - logger.info(f"selected sampler: {args.sampling_method}") - if args.sampling_method == "uniform": - return uniform - elif args.sampling_method == "temperature" or self.is_adaptive(): - return make_temperature_sampling(float(args.sampling_temperature)) - else: - # default to concating all data set together - return None diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/models/speech_to_text/utils.py b/spaces/ICML2022/OFA/fairseq/fairseq/models/speech_to_text/utils.py deleted file mode 100644 index 168b8bf13b0e734eee3f6989ff0f28a016a09c2b..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/models/speech_to_text/utils.py +++ /dev/null @@ -1,563 +0,0 @@ -# Copyright (c) 2017-present, Facebook, Inc. -# All rights reserved. -# -# This source code is licensed under the license found in the LICENSE file in -# the root directory of this source tree. An additional grant of patent rights -# can be found in the PATENTS file in the same directory. 
- - -import logging -from collections.abc import Iterable -from itertools import repeat -from typing import List, Optional, Tuple - -import torch -from torch import Tensor - - -# ------------------------------------------------------------------------------ -# assert_equal() -# ------------------------------------------------------------------------------ - - -def assert_equal(value1, value2, name1=None, name2=None): - """Asserts two values are equal otherwise raise an error.""" - - str_name1 = "" if name1 is None else "{} ".format(name1) - str_name2 = "" if name2 is None else "{} ".format(name2) - if value1 != value2: - str_value1 = "{}" if name1 is None else "({})" - str_value1 = str_value1.format(value1) - str_value2 = "{}" if name2 is None else "({})" - str_value2 = str_value2.format(value2) - raise ValueError( - "Expected {}{} == {}{}".format(str_name1, str_value1, str_name2, str_value2) - ) - - -def fill_config(config, key, value): - if value is not None: - if key not in config or config[key] is None: - config[key] = value - assert_equal(value, config[key], "value", f'config["{key}"]') - - -# ------------------------------------------------------------------------------ -# check_and_return_expected() -# ------------------------------------------------------------------------------ - - -def check_and_return_expected(value, undefined_value, expected_value, name=None): - """ - Return the expected value while checking if the given value is undefined or - equal to the expected value. - """ - if (undefined_value is None and value is None) or (undefined_value == value): - return expected_value - if value != expected_value: - str_name = "" if name is None else "{} ".format(name) - str_value = "{}" if name is None else "({})" - str_value = str_value.format(value) - raise ValueError( - "Expected {}{} == {}".format(str_name, str_value, expected_value) - ) - return expected_value - - -# ------------------------------------------------------------------------------ -# get_time_axis() -# ------------------------------------------------------------------------------ - - -def get_time_axis(layout): - """ - Extract the time axis from the layout, for example for breaking sequence into - segments. - """ - if layout in ["TB", "TBD"]: - return 0 - if layout in ["BT", "BTD"]: - return 1 - if layout in ["BCTD"]: - return 2 - raise ValueError("Unsupported layout = {}".format(layout)) - - -# ------------------------------------------------------------------------------ -# get_batch_axis() -# ------------------------------------------------------------------------------ - - -def get_batch_axis(layout): - """ - Extract the batch axis from the layout - """ - if layout in ["TB", "TBD"]: - return 1 - if layout in ["BT", "BTD", "BCTD"]: - return 0 - raise ValueError("Unsupported layout = {}".format(layout)) - - -# ------------------------------------------------------------------------------ -# monotonically_increasing_and_bounded() -# ------------------------------------------------------------------------------ - - -def monotonically_increasing_and_bounded(iterable, min=None, max=None): - """ - Check if the elements in the given iterable are monotonically increasing and - bounded by upper/lower bounds. 
- """ - if not isinstance(iterable, Iterable): - raise TypeError( - "Expected iterable to be of type Iterable, got ({})".format( - iterable.__class__.__name__ - ) - ) - for i in range(len(iterable)): - if min is not None and iterable[i] < min: - return False - if max is not None and iterable[i] > max: - return False - if i > 0 and iterable[i] <= iterable[i - 1]: - return False - return True - - -# ------------------------------------------------------------------------------ -# to_pair() -# ------------------------------------------------------------------------------ - - -def to_pair(value, name): - """Make a pair (of type tuple) of given value.""" - if isinstance(value, Iterable): - if len(value) != 2: - raise ValueError( - "Expected `{}` to have exactly 2 elements, got: ({})".format( - name, value - ) - ) - return value - return tuple(repeat(value, 2)) - - -# ------------------------------------------------------------------------------ -# infer_conv_output_attrs() -# ------------------------------------------------------------------------------ - - -# TODO(cfyeh): figure out if we can get `output_dim` without calling the module. -def infer_conv_output_attrs( - module, input_channels, input_dim, batch_size=1, max_length=8 -): - """Get output attributes of a module with input.""" - input = torch.randn(batch_size, input_channels, max_length, input_dim) - output = module(input) - output_channels = output.shape[1] - output_dim = output.shape[-1] - return output_channels, output_dim - - -# ------------------------------------------------------------------------------ -# NoOp -# ------------------------------------------------------------------------------ - - -class NoOp(torch.nn.Module): - """ - NoOp simply passes the input as the output. - """ - - def __init__(self): - super().__init__() - - def forward(self, input: Tensor) -> Tensor: - return input - - -# ------------------------------------------------------------------------------ -# Permute: a torch.nn.Module applies permutation on the input tensor. -# ------------------------------------------------------------------------------ - - -class Permute(torch.nn.Module): - def __init__(self, dims): - super().__init__() - self.dims = dims - - def forward(self, input: Tensor) -> Tensor: - return input.permute(self.dims).contiguous() - - -# ------------------------------------------------------------------------------ -# lengths_to_padding_mask() -# ------------------------------------------------------------------------------ - - -def lengths_to_padding_mask(lengths: Tensor) -> Tensor: - """Convert lengths of shape (B, ) to padding mask.""" - batch_size = lengths.shape[0] - max_length = int(torch.max(lengths).item()) - padding_mask = torch.arange( # [0, ..., T-1] - max_length, device=lengths.device, dtype=lengths.dtype - ).expand(batch_size, max_length) >= lengths.unsqueeze(1) - - return padding_mask - - -# ------------------------------------------------------------------------------ -# lengths_to_attention_mask() -# ------------------------------------------------------------------------------ - - -def lengths_to_attention_mask( - lengths: Tensor, - left_context: Optional[int] = None, - right_context: Optional[int] = None, -) -> Optional[Tensor]: - """ - Generate attention mask based on (lengths, left_context, right_context). - left_context is None means unlimited left context. - right_context is None means unlimited right context. 
- """ - - if left_context is None and right_context is None: - return None - - max_length = int(torch.max(lengths).item()) - - # For example, with `max_length` == 5, - # indices = tensor([ - # [ 0, 1, 2, 3, 4, 5], - # [-1, 0, 1, 2, 3, 4], - # [-2, -1, 0, 1, 2, 3], - # [-3, -2, -1, 0, 1, 2], - # [-4, -3, -2, -1, 0, 1], - # [-5, -4, -3, -2, -1, 0], - # ]) - - # In some cases the second torch.arange is created on cpu which causes a - # failure. Adding the device option to guard against it. - indices = torch.arange( - max_length, device=lengths.device, dtype=lengths.dtype - ).expand(max_length, max_length) - torch.arange( - max_length, device=lengths.device - ).view( - max_length, -1 - ) - - # For example, with `max_length` == 5, - # bool_mask = tensor([ - # [True, True, True, True, True], - # [True, True, True, True, True], - # [True, True, True, True, True], - # [True, True, True, True, True], - # [True, True, True, True, True], - # ]) - bool_mask = ( - torch.tensor([True]).to(device=lengths.device).expand(max_length, max_length) - ) - - # For example, with `max_length` == 5, left_context == 2 - # left_mask = tensor([ - # [ True, True, True, True, True], - # [ True, True, True, True, True], - # [ True, True, True, True, True], - # [False, True, True, True, True], - # [False, False, True, True, True], - # ]) - if left_context is not None: - left_mask = indices >= -left_context - bool_mask = bool_mask & left_mask - - # For example, with `max_length` == 5, right_context == 1 - # right_mask = tensor([ - # [True, True, False, False, False], - # [True, True, True, False, False], - # [True, True, True, True, False], - # [True, True, True, True, True], - # [True, True, True, True, True], - # ]) - if right_context is not None: - right_mask = indices <= right_context - bool_mask = bool_mask & right_mask - - bool_mask = (~bool_mask).to(device=lengths.device) - return bool_mask - - -# ------------------------------------------------------------------------------ -# infer_output_norm() -# ------------------------------------------------------------------------------ - - -def infer_output_norm(module, output_norm=None): - """ - Infer the output norm (string and module) needed on the module gvien desired - output normalization. - """ - if output_norm == module.output_norm(): - # output_norm already matches module.output_norm(). 
- return (None, NoOp()) - - if output_norm is None and module.output_norm() is not None: - logger = logging.getLogger("infer_output_norm()") - logger.warning( - "trying to set output_norm ({}) ".format(output_norm) - + "but got module.output_norm() ({}), ".format(module.output_norm()) - + "the combined output_norm() will be ({})".format(module.output_norm()) - ) - return (None, NoOp()) - - if output_norm == "log_softmax": - if module.output_norm() is not None: - raise ValueError( - "incompatible output_norm ({}) ".format(output_norm) - + "and module.output_norm() ({})".format(module.output_norm()) - ) - else: - return ("log_softmax", torch.nn.LogSoftmax(dim=-1)) - - if output_norm == "softmax": - if module.output_norm() is not None: - raise ValueError( - "incompatible output_norm ({}) ".format(output_norm) - + "and module.output_norm() ({})".format(module.output_norm()) - ) - else: - return ("softmax", torch.nn.Softmax(dim=-1)) - - raise ValueError( - "output_norm ({}) not in ".format(output_norm) - + "supported list = [None, softmax, log_softmax]" - ) - - -# ------------------------------------------------------------------------------ -# infer_channels_from_layout() -# ------------------------------------------------------------------------------ - - -def infer_channels_from_layout(layout, channels): - """Extract the number of channels from the layout.""" - if layout in ("TBD", "BTD"): - if channels is not None and channels != 1: - raise ValueError( - "Expected channels ({}) to be 1 for layout = {}".format( - channels, layout - ) - ) - if channels is None: - return 1 - return channels - - -# ------------------------------------------------------------------------------ -# pad_sequence() -# ------------------------------------------------------------------------------ - - -@torch.jit.export -def pad_sequence( - sequence: Tensor, - time_axis: int, - extra_left_context: int = 0, - extra_right_context: int = 0, -) -> Tensor: - """Pad extra left/right contexts to the sequence.""" - - if extra_left_context == 0 and extra_right_context == 0: - return sequence - - tensors_to_concat = [] - - if extra_left_context: - size = (extra_left_context,) - fill_value = 0 - indices = torch.full( - size=size, - fill_value=fill_value, - dtype=torch.long, - device=sequence.device, - ) - left_padding = torch.index_select(sequence, time_axis, indices) - tensors_to_concat.append(left_padding) - - tensors_to_concat.append(sequence) - - # NOTE(cfyeh): for efficiency reason we pad 0 instead of the last frame for - # extra right contexts. 
- if extra_right_context: - size = list(sequence.shape) - size[time_axis] = extra_right_context - right_padding = torch.zeros(size, dtype=sequence.dtype, device=sequence.device) - tensors_to_concat.append(right_padding) - - padded_sequence = torch.cat(tensors_to_concat, dim=time_axis) - return padded_sequence - - -# ------------------------------------------------------------------------------ -# sequence_to_segments() -# ------------------------------------------------------------------------------ - - -@torch.jit.export -def sequence_to_segments( - sequence: Tensor, - time_axis: int, - lengths: Tensor, - segment_size: Optional[int] = None, - extra_left_context: int = 0, - extra_right_context: int = 0, -) -> List[Tuple[Tensor, Tensor]]: - """Breaks sequence into segments.""" - - sequence = pad_sequence( - sequence=sequence, - time_axis=time_axis, - extra_left_context=extra_left_context, - extra_right_context=extra_right_context, - ) - - lengths = lengths + extra_left_context + extra_right_context - - segments: List[Tuple[Tensor, Tensor]] = [] - - if segment_size is None: - segments.append((sequence, lengths)) - return segments - - offset = 0 - end = sequence.shape[time_axis] - step = segment_size - size = extra_left_context + segment_size + extra_right_context - - while offset + extra_left_context + extra_right_context < end: - clamped_size = min(size, end - offset) - segment_lengths = torch.clamp(lengths - offset, min=0, max=clamped_size) - indices = torch.arange( - start=offset, - end=(offset + clamped_size), - step=1, - dtype=torch.long, - device=sequence.device, - ) - segment_tensor = torch.index_select(sequence, time_axis, indices) - segments.append((segment_tensor, segment_lengths)) - offset = offset + step - - return segments - - -# ------------------------------------------------------------------------------ -# segments_to_sequence() -# ------------------------------------------------------------------------------ - - -@torch.jit.export -def segments_to_sequence( - segments: List[Tuple[Tensor, Tensor]], time_axis: int -) -> Tuple[Tensor, Tensor]: - """Concatenate segments into a full sequence.""" - if len(segments) == 1: - return segments[0] - - tensors_to_concat: List[Tensor] = [] - lengths_to_stack: List[Tensor] = [] - - for tensor, lengths in segments: - tensors_to_concat.append(tensor) - lengths_to_stack.append(lengths) - - sequence = torch.cat(tensors_to_concat, dim=time_axis) - lengths = torch.stack(lengths_to_stack, dim=0) - lengths = torch.sum(lengths, dim=0) - - return sequence, lengths - - -def lengths_to_encoder_padding_mask(lengths, batch_first: bool = False): - """ - convert lengths (a 1-D Long/Int tensor) to 2-D binary tensor - - Args: - lengths: a (B, )-shaped tensor - batch_first: whether to return a (B, T) tensor - - Return: - max_length: maximum length of B sequences - encoder_padding_mask: a (max_length, B) binary mask, where - [t, b] = False for t < lengths[b] and True otherwise - - TODO: - kernelize this function if benchmarking shows this function is slow - """ - max_lengths = torch.max(lengths).item() - bsz = lengths.size(0) - encoder_padding_mask = torch.arange( - max_lengths - ).to( # a (T, ) tensor with [0, ..., T-1] - lengths.device - ).view( # move to the right device - 1, max_lengths - ).expand( # reshape to (1, T)-shaped tensor - bsz, -1 - ) > lengths.view( # expand to (B, T)-shaped tensor - bsz, 1 - ).expand( - -1, max_lengths - ) - if not batch_first: - return encoder_padding_mask.t(), max_lengths - else: - return encoder_padding_mask, max_lengths 
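# --------------------------------------------------------------------------
# Illustrative usage sketch (added for clarity; not part of the original
# fairseq module being removed in this diff). It assumes the helpers defined
# above, lengths_to_padding_mask() and lengths_to_encoder_padding_mask(), are
# in scope; the function name and the length values below are made up purely
# for illustration.
# --------------------------------------------------------------------------
def _example_padding_masks():
    example_lengths = torch.tensor([5, 3, 1])

    # (B, T) boolean mask where True marks padded positions.
    pad_mask = lengths_to_padding_mask(example_lengths)
    assert pad_mask.shape == (3, 5)
    assert bool(pad_mask[1, 4])      # position 4 lies beyond length 3, so it is padding
    assert not bool(pad_mask[0, 3])  # position 3 lies within length 5, so it is a real token

    # Same idea, but this variant also returns the maximum length in the batch.
    enc_mask, max_len = lengths_to_encoder_padding_mask(example_lengths, batch_first=True)
    assert max_len == 5
    assert enc_mask.shape == pad_mask.shape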
- - -# ------------------------------------------------------------------------------ -# attention suppression -# ------------------------------------------------------------------------------ - - -def attention_suppression(attention_weights: Tensor, scale: float): - # B, H, qlen, klen -> B, H, qlen, 1 - attention_prob = torch.nn.functional.softmax(attention_weights.float(), dim=-1) - attention_nozeros = attention_prob.to(torch.bool) - nozeros_sum = torch.sum(attention_nozeros.to(torch.float), dim=-1, keepdim=True) - - # For very sparse situation, we need get round about 0s - key_sum = torch.sum(attention_prob, dim=-1, keepdim=True) - - # nozeros_sum should > 1 - key_mean = key_sum / (nozeros_sum + 1e-8) - - # std calculation - dis = (attention_prob - key_mean) * (attention_prob - key_mean) - - # if attention_prob[i] < threshold, then dis_masked[i] = 0; for all i - dis_masked = torch.where( - attention_nozeros, dis, attention_prob.new_zeros(attention_prob.size()) - ) - - key_var = torch.sum(dis_masked, dim=-1, keepdim=True) - key_var = key_var / (nozeros_sum - 1.0 + 1e-8) - key_std = torch.sqrt(key_var) - key_thread = key_mean - scale * key_std - - # if attention_prob[i] >= key_thread, then attention_prob[i] - # , otherwise "-inf" - inf_tensor = attention_prob.new_zeros(attention_prob.size()).detach() - inf_tensor[:] = float("-inf") - attention_weights_float = torch.where( - attention_prob < key_thread, - inf_tensor, - attention_weights.float(), - ) - - return attention_weights_float.type_as(attention_weights) - - -def layer_norm_backward_hook(module, grad_input, grad_output, clamp_value): - return tuple(torch.clamp(v, min=-clamp_value, max=clamp_value) for v in grad_input) diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/optim/lr_scheduler/reduce_lr_on_plateau.py b/spaces/ICML2022/OFA/fairseq/fairseq/optim/lr_scheduler/reduce_lr_on_plateau.py deleted file mode 100644 index 5ee9c1be4a59ad3d072412827ab4e9b62dc7434e..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/optim/lr_scheduler/reduce_lr_on_plateau.py +++ /dev/null @@ -1,143 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -from dataclasses import dataclass, field -from typing import List - -import torch.optim.lr_scheduler -from omegaconf import II - -from fairseq.dataclass import FairseqDataclass -from fairseq.optim.lr_scheduler import FairseqLRScheduler, register_lr_scheduler - - -@dataclass -class ReduceLROnPlateauLRScheduleConfig(FairseqDataclass): - lr_shrink: float = field( - default=0.1, metadata={"help": "shrink factor for annealing"} - ) - lr_threshold: float = field( - default=1e-4, - metadata={ - "help": ( - "threshold for measuring the new optimum, to only focus on " - "significant changes" - ) - }, - ) - lr_patience: int = field( - default=0, - metadata={ - "help": ( - "number of epochs with no improvement after which learning rate will " - "be reduced" - ) - }, - ) - warmup_updates: int = field( - default=0, - metadata={"help": "warmup the learning rate linearly for the first N updates"}, - ) - warmup_init_lr: float = field( - default=-1, - metadata={ - "help": "initial learning rate during warmup phase; default is cfg.lr" - }, - ) - lr: List[float] = II("optimization.lr") - maximize_best_checkpoint_metric: bool = II( - "checkpoint.maximize_best_checkpoint_metric" - ) - - -@register_lr_scheduler( - "reduce_lr_on_plateau", dataclass=ReduceLROnPlateauLRScheduleConfig -) -class ReduceLROnPlateauLRSchedule(FairseqLRScheduler): - """ - Decay the LR by a factor every time the validation loss plateaus. - Also comes with optional warmup phase, where we linearly increase - the learning rate from some initial learning rate - (``--warmup-init-lr``) until the configured learning rate - (``--lr``). Thereafter the lr is adjusted according to original - reduce_on_plateau scheme. - - During warmup:: - - lrs = torch.linspace( - cfg.warmup_init_lr, cfg.lr, cfg.warmup_updates - ) - lr = lrs[update_num] - """ - - def __init__(self, cfg: ReduceLROnPlateauLRScheduleConfig, optimizer): - super().__init__(cfg, optimizer) - if len(cfg.lr) > 1: - raise ValueError( - "Cannot use a fixed learning rate schedule with reduce_lr_on_plateau." - " Consider --lr-scheduler=fixed instead." 
- ) - self.lr_scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau( - self.optimizer.optimizer, - patience=cfg.lr_patience, - factor=cfg.lr_shrink, - mode="max" if cfg.maximize_best_checkpoint_metric else "min", - threshold=cfg.lr_threshold, - ) - warmup_end_lr = cfg.lr[0] - # if no warm up, sets initial lr to be cfg.lr[0] - if cfg.warmup_init_lr < 0: - cfg.warmup_init_lr = 0 if cfg.warmup_updates > 0 else warmup_end_lr - - # linearly warmup for the first cfg.warmup_updates - if cfg.warmup_updates > 0: - self.lr_step = (warmup_end_lr - cfg.warmup_init_lr) / cfg.warmup_updates - - # this flag is either set from arg when no warm up, or set by - # step_update() when warmup finishes - self.warmup_end = True if cfg.warmup_updates <= 0 else False - - # initial learning rate - # this self.lr is used only during init and/or warm up period - self.lr = warmup_end_lr if self.warmup_end else cfg.warmup_init_lr - self.optimizer.set_lr(self.lr) - - def state_dict(self): - """Return the LR scheduler state dict.""" - return { - "best": self.lr_scheduler.best, - "last_epoch": self.lr_scheduler.last_epoch, - } - - def load_state_dict(self, state_dict): - """Load an LR scheduler state dict.""" - self.lr_scheduler.best = state_dict["best"] - if "last_epoch" in state_dict: - self.lr_scheduler.last_epoch = state_dict["last_epoch"] - - def step(self, epoch, val_loss=None): - """ - Update the learning rate at the end of the given epoch if warmup - finishes otherwise no update of lr on epoch boundaries - """ - if val_loss is not None and self.warmup_end is True: - self.lr_scheduler.step(val_loss) - else: - self.lr_scheduler.last_epoch = epoch - return self.optimizer.get_lr() - - def step_update(self, num_updates): - """ - Update the learning rate after each update.""" - # if there is warmup - if self.cfg.warmup_updates > 0: - if num_updates <= self.cfg.warmup_updates: - self.lr = self.cfg.warmup_init_lr + num_updates * self.lr_step - self.optimizer.set_lr(self.lr) - else: - if self.warmup_end is False: - self.warmup_end = True - # else do nothing - return self.optimizer.get_lr() diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/fetch_data/places_standard_test_val_prepare.sh b/spaces/InpaintAI/Inpaint-Anything/third_party/lama/fetch_data/places_standard_test_val_prepare.sh deleted file mode 100644 index c0aa15008463c9fb881e0255c45994394a515806..0000000000000000000000000000000000000000 --- a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/fetch_data/places_standard_test_val_prepare.sh +++ /dev/null @@ -1,5 +0,0 @@ -mkdir -p places_standard_dataset/original/test/ -tar -xvf test_large.tar -C places_standard_dataset/original/test/ - -mkdir -p places_standard_dataset/original/val/ -tar -xvf val_large.tar -C places_standard_dataset/original/val/ diff --git a/spaces/JMalott/ai_architecture/page/intro.py b/spaces/JMalott/ai_architecture/page/intro.py deleted file mode 100644 index 0d54ddce23a771eedcf1e7276876a1138b0a99c0..0000000000000000000000000000000000000000 --- a/spaces/JMalott/ai_architecture/page/intro.py +++ /dev/null @@ -1,139 +0,0 @@ -import collections -from numpy.core.defchararray import lower -import streamlit as st -import numpy as np -import pandas as pd -import zipfile -import io -import os -from streamlit.elements.image import image_to_url -import gzip -import requests -from io import BytesIO -from PIL import Image, ImageDraw -import base64 -import datetime -import random, os, time -import threading - - -#List of files to use for each image object, 40 images total -files = [None]*40 - - 
- -def randomFile(ix): - - path = r"exampleImages" - - dd = list(os.listdir(path)) - random.shuffle(dd) - - #Parse through each file in directory - for file in dd: - - #If file is not in files list, use it for next image - if file not in files: - - files[ix] = file - return "exampleImages/"+file - -def gen(_p): - - - if(_p is not False): - st.session_state.prompt = _p - st.session_state.page = 0 - return - - _1 = ["A modern ","A post-modern ","A classical ", "A contemporary ", "A minimalist "] - _2 = ["museum architecture","home architecture","interior design"] - _3 = [""," in the style of I.M. Pei"," in the style of Frank Gehry"," in the style of John Lautner"," in the style of Frank Lloyd Wright"] - _4 = [" photograph",", watercolor painting",", oil painting", ", digital art"] - - st.session_state.prompt = str(random.choice(_1)+random.choice(_2)+random.choice(_3)+random.choice(_4)) - st.session_state.page = 0 - - -def app(): - - #Array of image objects - images = [] - - for i in range(30): - files.append( randomFile(i) ) - - placeholder = st.empty() - - with placeholder.container(): - - columns = [col1, col2, col3, col4, col5] = st.columns(5) - - ix = 0 - for column in columns: - with column: - for i in range(2): - images.append( st.empty() ) - - with images[ix].container(): - st.image("exampleImages/"+files[ix],width=None) - - ix += 1 - - - st.title('AI-Generated Architecture') - - prompt = st.text_input(label="Describe the architecture you want to see",value="") - - c1,c2,c3 = st.columns(3) - - with c1: - if st.button("Generate Architecture"): - if prompt: - gen(prompt) - elif prompt == "": - gen(False) - return - - - with c2: - if st.button("Random Prompt"): - gen(False) - return - - st.text("") - - - columns2 = [col1, col2, col3, col4, col5] = st.columns(5) - - - for column in columns2: - with column: - for i in range(4): - - images.append( st.empty() ) - - with images[ix].container(): - st.image("exampleImages/"+files[ix]) - - ix += 1 - - last = -1 - - while(True): - ch = random.randrange(30) - with images[ch].container(): - st.image(randomFile(ch)) - time.sleep(0.33) - - - - #download_thread = threading.Thread(target=background, name="Downloader") - #download_thread.start() - - - - - - - diff --git a/spaces/Jackflack09/diffuse-custom/diffusers/configuration_utils.py b/spaces/Jackflack09/diffuse-custom/diffusers/configuration_utils.py deleted file mode 100644 index ecf23010c3c15f0fd7608888cb22f19e0045daf4..0000000000000000000000000000000000000000 --- a/spaces/Jackflack09/diffuse-custom/diffusers/configuration_utils.py +++ /dev/null @@ -1,613 +0,0 @@ -# coding=utf-8 -# Copyright 2022 The HuggingFace Inc. team. -# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-""" ConfigMixin base class and utilities.""" -import dataclasses -import functools -import importlib -import inspect -import json -import os -import re -from collections import OrderedDict -from typing import Any, Dict, Tuple, Union - -import numpy as np - -from huggingface_hub import hf_hub_download -from huggingface_hub.utils import EntryNotFoundError, RepositoryNotFoundError, RevisionNotFoundError -from requests import HTTPError - -from . import __version__ -from .utils import DIFFUSERS_CACHE, HUGGINGFACE_CO_RESOLVE_ENDPOINT, DummyObject, deprecate, logging - - -logger = logging.get_logger(__name__) - -_re_configuration_file = re.compile(r"config\.(.*)\.json") - - -class FrozenDict(OrderedDict): - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - - for key, value in self.items(): - setattr(self, key, value) - - self.__frozen = True - - def __delitem__(self, *args, **kwargs): - raise Exception(f"You cannot use ``__delitem__`` on a {self.__class__.__name__} instance.") - - def setdefault(self, *args, **kwargs): - raise Exception(f"You cannot use ``setdefault`` on a {self.__class__.__name__} instance.") - - def pop(self, *args, **kwargs): - raise Exception(f"You cannot use ``pop`` on a {self.__class__.__name__} instance.") - - def update(self, *args, **kwargs): - raise Exception(f"You cannot use ``update`` on a {self.__class__.__name__} instance.") - - def __setattr__(self, name, value): - if hasattr(self, "__frozen") and self.__frozen: - raise Exception(f"You cannot use ``__setattr__`` on a {self.__class__.__name__} instance.") - super().__setattr__(name, value) - - def __setitem__(self, name, value): - if hasattr(self, "__frozen") and self.__frozen: - raise Exception(f"You cannot use ``__setattr__`` on a {self.__class__.__name__} instance.") - super().__setitem__(name, value) - - -class ConfigMixin: - r""" - Base class for all configuration classes. Stores all configuration parameters under `self.config` Also handles all - methods for loading/downloading/saving classes inheriting from [`ConfigMixin`] with - - [`~ConfigMixin.from_config`] - - [`~ConfigMixin.save_config`] - - Class attributes: - - **config_name** (`str`) -- A filename under which the config should stored when calling - [`~ConfigMixin.save_config`] (should be overridden by parent class). - - **ignore_for_config** (`List[str]`) -- A list of attributes that should not be saved in the config (should be - overridden by subclass). - - **has_compatibles** (`bool`) -- Whether the class has compatible classes (should be overridden by subclass). - - **_deprecated_kwargs** (`List[str]`) -- Keyword arguments that are deprecated. Note that the init function - should only have a `kwargs` argument if at least one argument is deprecated (should be overridden by - subclass). - """ - config_name = None - ignore_for_config = [] - has_compatibles = False - - _deprecated_kwargs = [] - - def register_to_config(self, **kwargs): - if self.config_name is None: - raise NotImplementedError(f"Make sure that {self.__class__} has defined a class name `config_name`") - # Special case for `kwargs` used in deprecation warning added to schedulers - # TODO: remove this when we remove the deprecation warning, and the `kwargs` argument, - # or solve in a more general way. 
- kwargs.pop("kwargs", None) - for key, value in kwargs.items(): - try: - setattr(self, key, value) - except AttributeError as err: - logger.error(f"Can't set {key} with value {value} for {self}") - raise err - - if not hasattr(self, "_internal_dict"): - internal_dict = kwargs - else: - previous_dict = dict(self._internal_dict) - internal_dict = {**self._internal_dict, **kwargs} - logger.debug(f"Updating config from {previous_dict} to {internal_dict}") - - self._internal_dict = FrozenDict(internal_dict) - - def save_config(self, save_directory: Union[str, os.PathLike], push_to_hub: bool = False, **kwargs): - """ - Save a configuration object to the directory `save_directory`, so that it can be re-loaded using the - [`~ConfigMixin.from_config`] class method. - - Args: - save_directory (`str` or `os.PathLike`): - Directory where the configuration JSON file will be saved (will be created if it does not exist). - """ - if os.path.isfile(save_directory): - raise AssertionError(f"Provided path ({save_directory}) should be a directory, not a file") - - os.makedirs(save_directory, exist_ok=True) - - # If we save using the predefined names, we can load using `from_config` - output_config_file = os.path.join(save_directory, self.config_name) - - self.to_json_file(output_config_file) - logger.info(f"Configuration saved in {output_config_file}") - - @classmethod - def from_config(cls, config: Union[FrozenDict, Dict[str, Any]] = None, return_unused_kwargs=False, **kwargs): - r""" - Instantiate a Python class from a config dictionary - - Parameters: - config (`Dict[str, Any]`): - A config dictionary from which the Python class will be instantiated. Make sure to only load - configuration files of compatible classes. - return_unused_kwargs (`bool`, *optional*, defaults to `False`): - Whether kwargs that are not consumed by the Python class should be returned or not. - - kwargs (remaining dictionary of keyword arguments, *optional*): - Can be used to update the configuration object (after it being loaded) and initiate the Python class. - `**kwargs` will be directly passed to the underlying scheduler/model's `__init__` method and eventually - overwrite same named arguments of `config`. - - Examples: - - ```python - >>> from diffusers import DDPMScheduler, DDIMScheduler, PNDMScheduler - - >>> # Download scheduler from huggingface.co and cache. - >>> scheduler = DDPMScheduler.from_pretrained("google/ddpm-cifar10-32") - - >>> # Instantiate DDIM scheduler class with same config as DDPM - >>> scheduler = DDIMScheduler.from_config(scheduler.config) - - >>> # Instantiate PNDM scheduler class with same config as DDPM - >>> scheduler = PNDMScheduler.from_config(scheduler.config) - ``` - """ - # <===== TO BE REMOVED WITH DEPRECATION - # TODO(Patrick) - make sure to remove the following lines when config=="model_path" is deprecated - if "pretrained_model_name_or_path" in kwargs: - config = kwargs.pop("pretrained_model_name_or_path") - - if config is None: - raise ValueError("Please make sure to provide a config as the first positional argument.") - # ======> - - if not isinstance(config, dict): - deprecation_message = "It is deprecated to pass a pretrained model name or path to `from_config`." - if "Scheduler" in cls.__name__: - deprecation_message += ( - f"If you were trying to load a scheduler, please use {cls}.from_pretrained(...) instead." - " Otherwise, please make sure to pass a configuration dictionary instead. This functionality will" - " be removed in v1.0.0." 
- ) - elif "Model" in cls.__name__: - deprecation_message += ( - f"If you were trying to load a model, please use {cls}.load_config(...) followed by" - f" {cls}.from_config(...) instead. Otherwise, please make sure to pass a configuration dictionary" - " instead. This functionality will be removed in v1.0.0." - ) - deprecate("config-passed-as-path", "1.0.0", deprecation_message, standard_warn=False) - config, kwargs = cls.load_config(pretrained_model_name_or_path=config, return_unused_kwargs=True, **kwargs) - - init_dict, unused_kwargs, hidden_dict = cls.extract_init_dict(config, **kwargs) - - # Allow dtype to be specified on initialization - if "dtype" in unused_kwargs: - init_dict["dtype"] = unused_kwargs.pop("dtype") - - # add possible deprecated kwargs - for deprecated_kwarg in cls._deprecated_kwargs: - if deprecated_kwarg in unused_kwargs: - init_dict[deprecated_kwarg] = unused_kwargs.pop(deprecated_kwarg) - - # Return model and optionally state and/or unused_kwargs - model = cls(**init_dict) - - # make sure to also save config parameters that might be used for compatible classes - model.register_to_config(**hidden_dict) - - # add hidden kwargs of compatible classes to unused_kwargs - unused_kwargs = {**unused_kwargs, **hidden_dict} - - if return_unused_kwargs: - return (model, unused_kwargs) - else: - return model - - @classmethod - def get_config_dict(cls, *args, **kwargs): - deprecation_message = ( - f" The function get_config_dict is deprecated. Please use {cls}.load_config instead. This function will be" - " removed in version v1.0.0" - ) - deprecate("get_config_dict", "1.0.0", deprecation_message, standard_warn=False) - return cls.load_config(*args, **kwargs) - - @classmethod - def load_config( - cls, pretrained_model_name_or_path: Union[str, os.PathLike], return_unused_kwargs=False, **kwargs - ) -> Tuple[Dict[str, Any], Dict[str, Any]]: - r""" - Instantiate a Python class from a config dictionary - - Parameters: - pretrained_model_name_or_path (`str` or `os.PathLike`, *optional*): - Can be either: - - - A string, the *model id* of a model repo on huggingface.co. Valid model ids should have an - organization name, like `google/ddpm-celebahq-256`. - - A path to a *directory* containing model weights saved using [`~ConfigMixin.save_config`], e.g., - `./my_model_directory/`. - - cache_dir (`Union[str, os.PathLike]`, *optional*): - Path to a directory in which a downloaded pretrained model configuration should be cached if the - standard cache should not be used. - force_download (`bool`, *optional*, defaults to `False`): - Whether or not to force the (re-)download of the model weights and configuration files, overriding the - cached versions if they exist. - resume_download (`bool`, *optional*, defaults to `False`): - Whether or not to delete incompletely received files. Will attempt to resume the download if such a - file exists. - proxies (`Dict[str, str]`, *optional*): - A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', - 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request. - output_loading_info(`bool`, *optional*, defaults to `False`): - Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. - local_files_only(`bool`, *optional*, defaults to `False`): - Whether or not to only look at local files (i.e., do not try to download the model). - use_auth_token (`str` or *bool*, *optional*): - The token to use as HTTP bearer authorization for remote files. 
If `True`, will use the token generated - when running `transformers-cli login` (stored in `~/.huggingface`). - revision (`str`, *optional*, defaults to `"main"`): - The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a - git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any - identifier allowed by git. - subfolder (`str`, *optional*, defaults to `""`): - In case the relevant files are located inside a subfolder of the model repo (either remote in - huggingface.co or downloaded locally), you can specify the folder name here. - - - - It is required to be logged in (`huggingface-cli login`) when you want to use private or [gated - models](https://huggingface.co/docs/hub/models-gated#gated-models). - - - - - - Activate the special ["offline-mode"](https://huggingface.co/transformers/installation.html#offline-mode) to - use this method in a firewalled environment. - - - """ - cache_dir = kwargs.pop("cache_dir", DIFFUSERS_CACHE) - force_download = kwargs.pop("force_download", False) - resume_download = kwargs.pop("resume_download", False) - proxies = kwargs.pop("proxies", None) - use_auth_token = kwargs.pop("use_auth_token", None) - local_files_only = kwargs.pop("local_files_only", False) - revision = kwargs.pop("revision", None) - _ = kwargs.pop("mirror", None) - subfolder = kwargs.pop("subfolder", None) - - user_agent = {"file_type": "config"} - - pretrained_model_name_or_path = str(pretrained_model_name_or_path) - - if cls.config_name is None: - raise ValueError( - "`self.config_name` is not defined. Note that one should not load a config from " - "`ConfigMixin`. Please make sure to define `config_name` in a class inheriting from `ConfigMixin`" - ) - - if os.path.isfile(pretrained_model_name_or_path): - config_file = pretrained_model_name_or_path - elif os.path.isdir(pretrained_model_name_or_path): - if os.path.isfile(os.path.join(pretrained_model_name_or_path, cls.config_name)): - # Load from a PyTorch checkpoint - config_file = os.path.join(pretrained_model_name_or_path, cls.config_name) - elif subfolder is not None and os.path.isfile( - os.path.join(pretrained_model_name_or_path, subfolder, cls.config_name) - ): - config_file = os.path.join(pretrained_model_name_or_path, subfolder, cls.config_name) - else: - raise EnvironmentError( - f"Error no file named {cls.config_name} found in directory {pretrained_model_name_or_path}." - ) - else: - try: - # Load from URL or cache if already cached - config_file = hf_hub_download( - pretrained_model_name_or_path, - filename=cls.config_name, - cache_dir=cache_dir, - force_download=force_download, - proxies=proxies, - resume_download=resume_download, - local_files_only=local_files_only, - use_auth_token=use_auth_token, - user_agent=user_agent, - subfolder=subfolder, - revision=revision, - ) - - except RepositoryNotFoundError: - raise EnvironmentError( - f"{pretrained_model_name_or_path} is not a local folder and is not a valid model identifier" - " listed on 'https://huggingface.co/models'\nIf this is a private repository, make sure to pass a" - " token having permission to this repo with `use_auth_token` or log in with `huggingface-cli" - " login`." - ) - except RevisionNotFoundError: - raise EnvironmentError( - f"{revision} is not a valid git identifier (branch name, tag name or commit id) that exists for" - " this model name. Check the model page at" - f" 'https://huggingface.co/{pretrained_model_name_or_path}' for available revisions." 
- ) - except EntryNotFoundError: - raise EnvironmentError( - f"{pretrained_model_name_or_path} does not appear to have a file named {cls.config_name}." - ) - except HTTPError as err: - raise EnvironmentError( - "There was a specific connection error when trying to load" - f" {pretrained_model_name_or_path}:\n{err}" - ) - except ValueError: - raise EnvironmentError( - f"We couldn't connect to '{HUGGINGFACE_CO_RESOLVE_ENDPOINT}' to load this model, couldn't find it" - f" in the cached files and it looks like {pretrained_model_name_or_path} is not the path to a" - f" directory containing a {cls.config_name} file.\nCheckout your internet connection or see how to" - " run the library in offline mode at" - " 'https://huggingface.co/docs/diffusers/installation#offline-mode'." - ) - except EnvironmentError: - raise EnvironmentError( - f"Can't load config for '{pretrained_model_name_or_path}'. If you were trying to load it from " - "'https://huggingface.co/models', make sure you don't have a local directory with the same name. " - f"Otherwise, make sure '{pretrained_model_name_or_path}' is the correct path to a directory " - f"containing a {cls.config_name} file" - ) - - try: - # Load config dict - config_dict = cls._dict_from_json_file(config_file) - except (json.JSONDecodeError, UnicodeDecodeError): - raise EnvironmentError(f"It looks like the config file at '{config_file}' is not a valid JSON file.") - - if return_unused_kwargs: - return config_dict, kwargs - - return config_dict - - @staticmethod - def _get_init_keys(cls): - return set(dict(inspect.signature(cls.__init__).parameters).keys()) - - @classmethod - def extract_init_dict(cls, config_dict, **kwargs): - # 0. Copy origin config dict - original_dict = {k: v for k, v in config_dict.items()} - - # 1. Retrieve expected config attributes from __init__ signature - expected_keys = cls._get_init_keys(cls) - expected_keys.remove("self") - # remove general kwargs if present in dict - if "kwargs" in expected_keys: - expected_keys.remove("kwargs") - # remove flax internal keys - if hasattr(cls, "_flax_internal_args"): - for arg in cls._flax_internal_args: - expected_keys.remove(arg) - - # 2. 
Remove attributes that cannot be expected from expected config attributes - # remove keys to be ignored - if len(cls.ignore_for_config) > 0: - expected_keys = expected_keys - set(cls.ignore_for_config) - - # load diffusers library to import compatible and original scheduler - diffusers_library = importlib.import_module(__name__.split(".")[0]) - - if cls.has_compatibles: - compatible_classes = [c for c in cls._get_compatibles() if not isinstance(c, DummyObject)] - else: - compatible_classes = [] - - expected_keys_comp_cls = set() - for c in compatible_classes: - expected_keys_c = cls._get_init_keys(c) - expected_keys_comp_cls = expected_keys_comp_cls.union(expected_keys_c) - expected_keys_comp_cls = expected_keys_comp_cls - cls._get_init_keys(cls) - config_dict = {k: v for k, v in config_dict.items() if k not in expected_keys_comp_cls} - - # remove attributes from orig class that cannot be expected - orig_cls_name = config_dict.pop("_class_name", cls.__name__) - if orig_cls_name != cls.__name__ and hasattr(diffusers_library, orig_cls_name): - orig_cls = getattr(diffusers_library, orig_cls_name) - unexpected_keys_from_orig = cls._get_init_keys(orig_cls) - expected_keys - config_dict = {k: v for k, v in config_dict.items() if k not in unexpected_keys_from_orig} - - # remove private attributes - config_dict = {k: v for k, v in config_dict.items() if not k.startswith("_")} - - # 3. Create keyword arguments that will be passed to __init__ from expected keyword arguments - init_dict = {} - for key in expected_keys: - # if config param is passed to kwarg and is present in config dict - # it should overwrite existing config dict key - if key in kwargs and key in config_dict: - config_dict[key] = kwargs.pop(key) - - if key in kwargs: - # overwrite key - init_dict[key] = kwargs.pop(key) - elif key in config_dict: - # use value from config dict - init_dict[key] = config_dict.pop(key) - - # 4. Give nice warning if unexpected values have been passed - if len(config_dict) > 0: - logger.warning( - f"The config attributes {config_dict} were passed to {cls.__name__}, " - "but are not expected and will be ignored. Please verify your " - f"{cls.config_name} configuration file." - ) - - # 5. Give nice info if config attributes are initiliazed to default because they have not been passed - passed_keys = set(init_dict.keys()) - if len(expected_keys - passed_keys) > 0: - logger.info( - f"{expected_keys - passed_keys} was not found in config. Values will be initialized to default values." - ) - - # 6. Define unused keyword arguments - unused_kwargs = {**config_dict, **kwargs} - - # 7. Define "hidden" config parameters that were saved for compatible classes - hidden_config_dict = {k: v for k, v in original_dict.items() if k not in init_dict} - - return init_dict, unused_kwargs, hidden_config_dict - - @classmethod - def _dict_from_json_file(cls, json_file: Union[str, os.PathLike]): - with open(json_file, "r", encoding="utf-8") as reader: - text = reader.read() - return json.loads(text) - - def __repr__(self): - return f"{self.__class__.__name__} {self.to_json_string()}" - - @property - def config(self) -> Dict[str, Any]: - """ - Returns the config of the class as a frozen dictionary - - Returns: - `Dict[str, Any]`: Config of the class. - """ - return self._internal_dict - - def to_json_string(self) -> str: - """ - Serializes this instance to a JSON string. - - Returns: - `str`: String containing all the attributes that make up this configuration instance in JSON format. 
- """ - config_dict = self._internal_dict if hasattr(self, "_internal_dict") else {} - config_dict["_class_name"] = self.__class__.__name__ - config_dict["_diffusers_version"] = __version__ - - def to_json_saveable(value): - if isinstance(value, np.ndarray): - value = value.tolist() - return value - - config_dict = {k: to_json_saveable(v) for k, v in config_dict.items()} - return json.dumps(config_dict, indent=2, sort_keys=True) + "\n" - - def to_json_file(self, json_file_path: Union[str, os.PathLike]): - """ - Save this instance to a JSON file. - - Args: - json_file_path (`str` or `os.PathLike`): - Path to the JSON file in which this configuration instance's parameters will be saved. - """ - with open(json_file_path, "w", encoding="utf-8") as writer: - writer.write(self.to_json_string()) - - -def register_to_config(init): - r""" - Decorator to apply on the init of classes inheriting from [`ConfigMixin`] so that all the arguments are - automatically sent to `self.register_for_config`. To ignore a specific argument accepted by the init but that - shouldn't be registered in the config, use the `ignore_for_config` class variable - - Warning: Once decorated, all private arguments (beginning with an underscore) are trashed and not sent to the init! - """ - - @functools.wraps(init) - def inner_init(self, *args, **kwargs): - # Ignore private kwargs in the init. - init_kwargs = {k: v for k, v in kwargs.items() if not k.startswith("_")} - config_init_kwargs = {k: v for k, v in kwargs.items() if k.startswith("_")} - if not isinstance(self, ConfigMixin): - raise RuntimeError( - f"`@register_for_config` was applied to {self.__class__.__name__} init method, but this class does " - "not inherit from `ConfigMixin`." - ) - - ignore = getattr(self, "ignore_for_config", []) - # Get positional arguments aligned with kwargs - new_kwargs = {} - signature = inspect.signature(init) - parameters = { - name: p.default for i, (name, p) in enumerate(signature.parameters.items()) if i > 0 and name not in ignore - } - for arg, name in zip(args, parameters.keys()): - new_kwargs[name] = arg - - # Then add all kwargs - new_kwargs.update( - { - k: init_kwargs.get(k, default) - for k, default in parameters.items() - if k not in ignore and k not in new_kwargs - } - ) - new_kwargs = {**config_init_kwargs, **new_kwargs} - getattr(self, "register_to_config")(**new_kwargs) - init(self, *args, **init_kwargs) - - return inner_init - - -def flax_register_to_config(cls): - original_init = cls.__init__ - - @functools.wraps(original_init) - def init(self, *args, **kwargs): - if not isinstance(self, ConfigMixin): - raise RuntimeError( - f"`@register_for_config` was applied to {self.__class__.__name__} init method, but this class does " - "not inherit from `ConfigMixin`." - ) - - # Ignore private kwargs in the init. 
Retrieve all passed attributes - init_kwargs = {k: v for k, v in kwargs.items()} - - # Retrieve default values - fields = dataclasses.fields(self) - default_kwargs = {} - for field in fields: - # ignore flax specific attributes - if field.name in self._flax_internal_args: - continue - if type(field.default) == dataclasses._MISSING_TYPE: - default_kwargs[field.name] = None - else: - default_kwargs[field.name] = getattr(self, field.name) - - # Make sure init_kwargs override default kwargs - new_kwargs = {**default_kwargs, **init_kwargs} - # dtype should be part of `init_kwargs`, but not `new_kwargs` - if "dtype" in new_kwargs: - new_kwargs.pop("dtype") - - # Get positional arguments aligned with kwargs - for i, arg in enumerate(args): - name = fields[i].name - new_kwargs[name] = arg - - getattr(self, "register_to_config")(**new_kwargs) - original_init(self, *args, **kwargs) - - cls.__init__ = init - return cls diff --git a/spaces/Jacks2003/3D_Photo_Inpainting/boostmonodepth_utils.py b/spaces/Jacks2003/3D_Photo_Inpainting/boostmonodepth_utils.py deleted file mode 100644 index 5f752b0caf9b8c9a64d9113e10d8b1fb2fa782b0..0000000000000000000000000000000000000000 --- a/spaces/Jacks2003/3D_Photo_Inpainting/boostmonodepth_utils.py +++ /dev/null @@ -1,68 +0,0 @@ -import os -import cv2 -import glob -import numpy as np -import imageio -from MiDaS.MiDaS_utils import write_depth - -BOOST_BASE = 'BoostingMonocularDepth' - -BOOST_INPUTS = 'inputs' -BOOST_OUTPUTS = 'outputs' - -def run_boostmonodepth(img_names, src_folder, depth_folder): - - if not isinstance(img_names, list): - img_names = [img_names] - - # remove irrelevant files first - clean_folder(os.path.join(BOOST_BASE, BOOST_INPUTS)) - clean_folder(os.path.join(BOOST_BASE, BOOST_OUTPUTS)) - - tgt_names = [] - for img_name in img_names: - base_name = os.path.basename(img_name) - tgt_name = os.path.join(BOOST_BASE, BOOST_INPUTS, base_name) - os.system(f'cp {img_name} {tgt_name}') - - # keep only the file name here. - # they save all depth as .png file - tgt_names.append(os.path.basename(tgt_name).replace('.jpg', '.png')) - - os.system(f'cd {BOOST_BASE} && python run.py --Final --data_dir {BOOST_INPUTS}/ --output_dir {BOOST_OUTPUTS} --depthNet 0') - - for i, (img_name, tgt_name) in enumerate(zip(img_names, tgt_names)): - img = imageio.imread(img_name) - H, W = img.shape[:2] - scale = 640. / max(H, W) - - # resize and save depth - target_height, target_width = int(round(H * scale)), int(round(W * scale)) - depth = imageio.imread(os.path.join(BOOST_BASE, BOOST_OUTPUTS, tgt_name)) - depth = np.array(depth).astype(np.float32) - depth = resize_depth(depth, target_width, target_height) - np.save(os.path.join(depth_folder, tgt_name.replace('.png', '.npy')), depth / 32768. - 1.) 
- write_depth(os.path.join(depth_folder, tgt_name.replace('.png', '')), depth) - -def clean_folder(folder, img_exts=['.png', '.jpg', '.npy']): - - for img_ext in img_exts: - paths_to_check = os.path.join(folder, f'*{img_ext}') - if len(glob.glob(paths_to_check)) == 0: - continue - print(paths_to_check) - os.system(f'rm {paths_to_check}') - -def resize_depth(depth, width, height): - """Resize numpy (or image read by imageio) depth map - - Args: - depth (numpy): depth - width (int): image width - height (int): image height - - Returns: - array: processed depth - """ - depth = cv2.blur(depth, (3, 3)) - return cv2.resize(depth, (width, height), interpolation=cv2.INTER_AREA) diff --git a/spaces/Jo0xFF/4xArText/app.py b/spaces/Jo0xFF/4xArText/app.py deleted file mode 100644 index 6b965d472c1aa11d743d39c992b46111fac16473..0000000000000000000000000000000000000000 --- a/spaces/Jo0xFF/4xArText/app.py +++ /dev/null @@ -1,58 +0,0 @@ -import gradio as gr -from PIL import Image -import os -import upscale -from pathlib import Path - -root_dir = "ESRGAN" -output_dir = "output" -input_dir = "input" -# Define function to process image -def process_image(input_image): - # print(input_image) - # print(os.path.join(root_dir, output_dir)) - # print("The type of image: ", type(input_image)) - - # Save image to directory - input_image.save(os.path.join(input_dir, "image.png"), format="PNG") - - # Run instance from upscale - upscale.main(model=str("models/4xArabicText.pth"), - cpu=True, - input=Path("input"), - output=Path("output"), - reverse=False, - skip_existing=False, - delete_input=False, - seamless=None, - fp16=False, - device_id=0, - cache_max_split_depth=False, - binary_alpha=False, - ternary_alpha=False, - alpha_threshold=0.5, - alpha_boundary_offset=0.2, - alpha_mode=None, - verbose=False) - - # Open image from dir - img = Image.open(os.path.join(output_dir, "image.png")) - - return img - -# Create gradio app content -title = "ArText Upscaling | ArabicText 0.0.1a" -description = "

This app upscales images to 4x their original size. Please be patient; the process may take up to 5 minutes.

PS: This project is still in alpha release!

" -footer = "
This app made by Yousif. Check my social media @Twitter, @Github" - -gr_app = gr.Interface(fn=process_image, - inputs=gr.Image(type="pil"), - outputs=gr.Image(type="pil", shape=(720, 1280)), - title=title, - description=description, - article=footer, - theme=gr.themes.Soft(), - allow_flagging="never") - - -gr_app.queue(max_size=16).launch(debug=True) \ No newline at end of file diff --git a/spaces/Kangarroar/ApplioRVC-Inference/lib/uvr5_pack/utils.py b/spaces/Kangarroar/ApplioRVC-Inference/lib/uvr5_pack/utils.py deleted file mode 100644 index 0fafe8793b0d539fa58dd024342250b24b6187a9..0000000000000000000000000000000000000000 --- a/spaces/Kangarroar/ApplioRVC-Inference/lib/uvr5_pack/utils.py +++ /dev/null @@ -1,120 +0,0 @@ -import torch -import numpy as np -from tqdm import tqdm -import json - - -def load_data(file_name: str = "./lib/uvr5_pack/name_params.json") -> dict: - with open(file_name, "r") as f: - data = json.load(f) - - return data - - -def make_padding(width, cropsize, offset): - left = offset - roi_size = cropsize - left * 2 - if roi_size == 0: - roi_size = cropsize - right = roi_size - (width % roi_size) + left - - return left, right, roi_size - - -def inference(X_spec, device, model, aggressiveness, data): - """ - data : dic configs - """ - - def _execute( - X_mag_pad, roi_size, n_window, device, model, aggressiveness, is_half=True - ): - model.eval() - with torch.no_grad(): - preds = [] - - iterations = [n_window] - - total_iterations = sum(iterations) - for i in tqdm(range(n_window)): - start = i * roi_size - X_mag_window = X_mag_pad[ - None, :, :, start : start + data["window_size"] - ] - X_mag_window = torch.from_numpy(X_mag_window) - if is_half: - X_mag_window = X_mag_window.half() - X_mag_window = X_mag_window.to(device) - - pred = model.predict(X_mag_window, aggressiveness) - - pred = pred.detach().cpu().numpy() - preds.append(pred[0]) - - pred = np.concatenate(preds, axis=2) - return pred - - def preprocess(X_spec): - X_mag = np.abs(X_spec) - X_phase = np.angle(X_spec) - - return X_mag, X_phase - - X_mag, X_phase = preprocess(X_spec) - - coef = X_mag.max() - X_mag_pre = X_mag / coef - - n_frame = X_mag_pre.shape[2] - pad_l, pad_r, roi_size = make_padding(n_frame, data["window_size"], model.offset) - n_window = int(np.ceil(n_frame / roi_size)) - - X_mag_pad = np.pad(X_mag_pre, ((0, 0), (0, 0), (pad_l, pad_r)), mode="constant") - - if list(model.state_dict().values())[0].dtype == torch.float16: - is_half = True - else: - is_half = False - pred = _execute( - X_mag_pad, roi_size, n_window, device, model, aggressiveness, is_half - ) - pred = pred[:, :, :n_frame] - - if data["tta"]: - pad_l += roi_size // 2 - pad_r += roi_size // 2 - n_window += 1 - - X_mag_pad = np.pad(X_mag_pre, ((0, 0), (0, 0), (pad_l, pad_r)), mode="constant") - - pred_tta = _execute( - X_mag_pad, roi_size, n_window, device, model, aggressiveness, is_half - ) - pred_tta = pred_tta[:, :, roi_size // 2 :] - pred_tta = pred_tta[:, :, :n_frame] - - return (pred + pred_tta) * 0.5 * coef, X_mag, np.exp(1.0j * X_phase) - else: - return pred * coef, X_mag, np.exp(1.0j * X_phase) - - -def _get_name_params(model_path, model_hash): - data = load_data() - flag = False - ModelName = model_path - for type in list(data): - for model in list(data[type][0]): - for i in range(len(data[type][0][model])): - if str(data[type][0][model][i]["hash_name"]) == model_hash: - flag = True - elif str(data[type][0][model][i]["hash_name"]) in ModelName: - flag = True - - if flag: - model_params_auto = 
data[type][0][model][i]["model_params"] - param_name_auto = data[type][0][model][i]["param_name"] - if type == "equivalent": - return param_name_auto, model_params_auto - else: - flag = False - return param_name_auto, model_params_auto diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg_extractor/encoder/vgg.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg_extractor/encoder/vgg.py deleted file mode 100644 index 5ca1c6551eb6ad238838011a2c98d965138fd770..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg_extractor/encoder/vgg.py +++ /dev/null @@ -1,77 +0,0 @@ -"""VGG2L definition for transformer-transducer.""" - -import torch - - -class VGG2L(torch.nn.Module): - """VGG2L module for transformer-transducer encoder.""" - - def __init__(self, idim, odim): - """Construct a VGG2L object. - - Args: - idim (int): dimension of inputs - odim (int): dimension of outputs - - """ - super(VGG2L, self).__init__() - - self.vgg2l = torch.nn.Sequential( - torch.nn.Conv2d(1, 64, 3, stride=1, padding=1), - torch.nn.ReLU(), - torch.nn.Conv2d(64, 64, 3, stride=1, padding=1), - torch.nn.ReLU(), - torch.nn.MaxPool2d((3, 2)), - torch.nn.Conv2d(64, 128, 3, stride=1, padding=1), - torch.nn.ReLU(), - torch.nn.Conv2d(128, 128, 3, stride=1, padding=1), - torch.nn.ReLU(), - torch.nn.MaxPool2d((2, 2)), - ) - - self.output = torch.nn.Linear(128 * ((idim // 2) // 2), odim) - - def forward(self, x, x_mask): - """VGG2L forward for x. - - Args: - x (torch.Tensor): input torch (B, T, idim) - x_mask (torch.Tensor): (B, 1, T) - - Returns: - x (torch.Tensor): input torch (B, sub(T), attention_dim) - x_mask (torch.Tensor): (B, 1, sub(T)) - - """ - x = x.unsqueeze(1) - x = self.vgg2l(x) - - b, c, t, f = x.size() - - x = self.output(x.transpose(1, 2).contiguous().view(b, t, c * f)) - - if x_mask is None: - return x, None - else: - x_mask = self.create_new_mask(x_mask, x) - - return x, x_mask - - def create_new_mask(self, x_mask, x): - """Create a subsampled version of x_mask. 
- - Args: - x_mask (torch.Tensor): (B, 1, T) - x (torch.Tensor): (B, sub(T), attention_dim) - - Returns: - x_mask (torch.Tensor): (B, 1, sub(T)) - - """ - x_t1 = x_mask.size(2) - (x_mask.size(2) % 3) - x_mask = x_mask[:, :, :x_t1][:, :, ::3] - - x_t2 = x_mask.size(2) - (x_mask.size(2) % 2) - x_mask = x_mask[:, :, :x_t2][:, :, ::2] - - return x_mask diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/utils/audio_utils.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/utils/audio_utils.py deleted file mode 100644 index 1dbeddbc65d2048fd90b348db6ff15a420a70f2b..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/utils/audio_utils.py +++ /dev/null @@ -1,60 +0,0 @@ - -import torch -import torch.utils.data -from scipy.io.wavfile import read -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def load_wav(full_path): - sampling_rate, data = read(full_path) - return data, sampling_rate - -def _dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def _spectral_normalize_torch(magnitudes): - output = _dynamic_range_compression_torch(magnitudes) - return output - -mel_basis = {} -hann_window = {} - -def mel_spectrogram( - y, - n_fft, - num_mels, - sampling_rate, - hop_size, - win_size, - fmin, - fmax, - center=False, - output_energy=False, -): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - if fmax not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[str(fmax)+'_'+str(y.device)] = torch.from_numpy(mel).float().to(y.device) - hann_window[str(y.device)] = torch.hann_window(win_size).to(y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[str(y.device)], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - spec = torch.sqrt(spec.pow(2).sum(-1)+(1e-9)) - mel_spec = torch.matmul(mel_basis[str(fmax)+'_'+str(y.device)], spec) - mel_spec = _spectral_normalize_torch(mel_spec) - if output_energy: - energy = torch.norm(spec, dim=1) - return mel_spec, energy - else: - return mel_spec diff --git a/spaces/Kevin676/Clone-Your-Voice/README.md b/spaces/Kevin676/Clone-Your-Voice/README.md deleted file mode 100644 index 96318e9af0f67c1567eaf0b889c328f4548a2228..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/Clone-Your-Voice/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Clone Your Voice -emoji: 📚 -colorFrom: blue -colorTo: yellow -python_version: 3.8.4 -sdk: gradio -sdk_version: 3.0.4 -app_file: app.py -pinned: false -duplicated_from: ruslanmv/Clone-Your-Voice ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Kreaols/ChuanhuChatGPT/modules/models/configuration_moss.py b/spaces/Kreaols/ChuanhuChatGPT/modules/models/configuration_moss.py deleted file mode 100644 index 9bad4396ecea6578c1628732d0ef077d8964d45d..0000000000000000000000000000000000000000 --- a/spaces/Kreaols/ChuanhuChatGPT/modules/models/configuration_moss.py +++ /dev/null @@ -1,118 +0,0 @@ -""" Moss model configuration""" - -from transformers.utils import logging -from transformers.configuration_utils import 
PretrainedConfig - - -logger = logging.get_logger(__name__) - - -class MossConfig(PretrainedConfig): - r""" - This is the configuration class to store the configuration of a [`MossModel`]. It is used to instantiate a - Moss model according to the specified arguments, defining the model architecture. Instantiating a configuration - with the defaults will yield a similar configuration to that of the Moss - [fnlp/moss-moon-003-base](https://huggingface.co/fnlp/moss-moon-003-base) architecture. Configuration objects - inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from - [`PretrainedConfig`] for more information. - - Args: - vocab_size (`int`, *optional*, defaults to 107008): - Vocabulary size of the Moss model. Defines the number of different tokens that can be represented by the - `inputs_ids` passed when calling [`MossModel`]. - n_positions (`int`, *optional*, defaults to 2048): - The maximum sequence length that this model might ever be used with. Typically set this to something large - just in case (e.g., 512 or 1024 or 2048). - n_embd (`int`, *optional*, defaults to 4096): - Dimensionality of the embeddings and hidden states. - n_layer (`int`, *optional*, defaults to 28): - Number of hidden layers in the Transformer encoder. - n_head (`int`, *optional*, defaults to 16): - Number of attention heads for each attention layer in the Transformer encoder. - rotary_dim (`int`, *optional*, defaults to 64): - Number of dimensions in the embedding that Rotary Position Embedding is applied to. - n_inner (`int`, *optional*, defaults to None): - Dimensionality of the inner feed-forward layers. `None` will set it to 4 times n_embd - activation_function (`str`, *optional*, defaults to `"gelu_new"`): - Activation function, to be selected in the list `["relu", "silu", "gelu", "tanh", "gelu_new"]`. - resid_pdrop (`float`, *optional*, defaults to 0.1): - The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. - embd_pdrop (`int`, *optional*, defaults to 0.1): - The dropout ratio for the embeddings. - attn_pdrop (`float`, *optional*, defaults to 0.1): - The dropout ratio for the attention. - layer_norm_epsilon (`float`, *optional*, defaults to 1e-5): - The epsilon to use in the layer normalization layers. - initializer_range (`float`, *optional*, defaults to 0.02): - The standard deviation of the truncated_normal_initializer for initializing all weight matrices. - use_cache (`bool`, *optional*, defaults to `True`): - Whether or not the model should return the last key/values attentions (not used by all models). 
- - Example: - - ```python - >>> from modeling_moss import MossModel - >>> from configuration_moss import MossConfig - - >>> # Initializing a moss-moon-003-base configuration - >>> configuration = MossConfig() - - >>> # Initializing a model (with random weights) from the configuration - >>> model = MossModel(configuration) - - >>> # Accessing the model configuration - >>> configuration = model.config - ```""" - - model_type = "moss" - attribute_map = { - "max_position_embeddings": "n_positions", - "hidden_size": "n_embd", - "num_attention_heads": "n_head", - "num_hidden_layers": "n_layer", - } - - def __init__( - self, - vocab_size=107008, - n_positions=2048, - n_ctx=2048, - n_embd=4096, - n_layer=28, - n_head=16, - rotary_dim=64, - n_inner=None, - activation_function="gelu_new", - resid_pdrop=0.0, - embd_pdrop=0.0, - attn_pdrop=0.0, - layer_norm_epsilon=1e-5, - initializer_range=0.02, - use_cache=True, - bos_token_id=106028, - eos_token_id=106068, - tie_word_embeddings=False, - **kwargs, - ): - self.vocab_size = vocab_size - self.n_ctx = n_ctx - self.n_positions = n_positions - self.n_embd = n_embd - self.n_layer = n_layer - self.n_head = n_head - self.n_inner = n_inner - self.rotary_dim = rotary_dim - self.activation_function = activation_function - self.resid_pdrop = resid_pdrop - self.embd_pdrop = embd_pdrop - self.attn_pdrop = attn_pdrop - self.layer_norm_epsilon = layer_norm_epsilon - self.initializer_range = initializer_range - self.use_cache = use_cache - - self.bos_token_id = bos_token_id - self.eos_token_id = eos_token_id - - super().__init__( - bos_token_id=bos_token_id, eos_token_id=eos_token_id, tie_word_embeddings=tie_word_embeddings, **kwargs - ) diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/detectors/yolo.py b/spaces/KyanChen/RSPrompter/mmdet/models/detectors/yolo.py deleted file mode 100644 index 5cb9a9cd250a2c26af22032b1ed4bb5a7a8af605..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/detectors/yolo.py +++ /dev/null @@ -1,45 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Copyright (c) 2019 Western Digital Corporation or its affiliates. - -from mmdet.registry import MODELS -from mmdet.utils import ConfigType, OptConfigType, OptMultiConfig -from .single_stage import SingleStageDetector - - -@MODELS.register_module() -class YOLOV3(SingleStageDetector): - r"""Implementation of `Yolov3: An incremental improvement - `_ - - Args: - backbone (:obj:`ConfigDict` or dict): The backbone module. - neck (:obj:`ConfigDict` or dict): The neck module. - bbox_head (:obj:`ConfigDict` or dict): The bbox head module. - train_cfg (:obj:`ConfigDict` or dict, optional): The training config - of YOLOX. Default: None. - test_cfg (:obj:`ConfigDict` or dict, optional): The testing config - of YOLOX. Default: None. - data_preprocessor (:obj:`ConfigDict` or dict, optional): - Model preprocessing config for processing the input data. - it usually includes ``to_rgb``, ``pad_size_divisor``, - ``pad_value``, ``mean`` and ``std``. Defaults to None. - init_cfg (:obj:`ConfigDict` or dict, optional): the config to control - the initialization. Defaults to None. 
- """ - - def __init__(self, - backbone: ConfigType, - neck: ConfigType, - bbox_head: ConfigType, - train_cfg: OptConfigType = None, - test_cfg: OptConfigType = None, - data_preprocessor: OptConfigType = None, - init_cfg: OptMultiConfig = None) -> None: - super().__init__( - backbone=backbone, - neck=neck, - bbox_head=bbox_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - data_preprocessor=data_preprocessor, - init_cfg=init_cfg) diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/layers/__init__.py b/spaces/KyanChen/RSPrompter/mmdet/models/layers/__init__.py deleted file mode 100644 index c8fc99df1ce51e4e5e9cce67d58530be4d945791..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/layers/__init__.py +++ /dev/null @@ -1,61 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .activations import SiLU -from .bbox_nms import fast_nms, multiclass_nms -from .brick_wrappers import AdaptiveAvgPool2d, adaptive_avg_pool2d -from .conv_upsample import ConvUpsample -from .csp_layer import CSPLayer -from .dropblock import DropBlock -from .ema import ExpMomentumEMA -from .inverted_residual import InvertedResidual -from .matrix_nms import mask_matrix_nms -from .msdeformattn_pixel_decoder import MSDeformAttnPixelDecoder -from .normed_predictor import NormedConv2d, NormedLinear -from .pixel_decoder import PixelDecoder, TransformerEncoderPixelDecoder -from .positional_encoding import (LearnedPositionalEncoding, - SinePositionalEncoding) -from .res_layer import ResLayer, SimplifiedBasicBlock -from .se_layer import ChannelAttention, DyReLU, SELayer -# yapf: disable -from .transformer import (MLP, AdaptivePadding, CdnQueryGenerator, - ConditionalAttention, - ConditionalDetrTransformerDecoder, - ConditionalDetrTransformerDecoderLayer, - DABDetrTransformerDecoder, - DABDetrTransformerDecoderLayer, - DABDetrTransformerEncoder, - DeformableDetrTransformerDecoder, - DeformableDetrTransformerDecoderLayer, - DeformableDetrTransformerEncoder, - DeformableDetrTransformerEncoderLayer, - DetrTransformerDecoder, DetrTransformerDecoderLayer, - DetrTransformerEncoder, DetrTransformerEncoderLayer, - DinoTransformerDecoder, DynamicConv, - Mask2FormerTransformerDecoder, - Mask2FormerTransformerDecoderLayer, - Mask2FormerTransformerEncoder, PatchEmbed, - PatchMerging, coordinate_to_encoding, - inverse_sigmoid, nchw_to_nlc, nlc_to_nchw) - -# yapf: enable - -__all__ = [ - 'fast_nms', 'multiclass_nms', 'mask_matrix_nms', 'DropBlock', - 'PixelDecoder', 'TransformerEncoderPixelDecoder', - 'MSDeformAttnPixelDecoder', 'ResLayer', 'PatchMerging', - 'SinePositionalEncoding', 'LearnedPositionalEncoding', 'DynamicConv', - 'SimplifiedBasicBlock', 'NormedLinear', 'NormedConv2d', 'InvertedResidual', - 'SELayer', 'ConvUpsample', 'CSPLayer', 'adaptive_avg_pool2d', - 'AdaptiveAvgPool2d', 'PatchEmbed', 'nchw_to_nlc', 'nlc_to_nchw', 'DyReLU', - 'ExpMomentumEMA', 'inverse_sigmoid', 'ChannelAttention', 'SiLU', 'MLP', - 'DetrTransformerEncoderLayer', 'DetrTransformerDecoderLayer', - 'DetrTransformerEncoder', 'DetrTransformerDecoder', - 'DeformableDetrTransformerEncoder', 'DeformableDetrTransformerDecoder', - 'DeformableDetrTransformerEncoderLayer', - 'DeformableDetrTransformerDecoderLayer', 'AdaptivePadding', - 'coordinate_to_encoding', 'ConditionalAttention', - 'DABDetrTransformerDecoderLayer', 'DABDetrTransformerDecoder', - 'DABDetrTransformerEncoder', 'ConditionalDetrTransformerDecoder', - 'ConditionalDetrTransformerDecoderLayer', 'DinoTransformerDecoder', - 'CdnQueryGenerator', 
'Mask2FormerTransformerEncoder', - 'Mask2FormerTransformerDecoderLayer', 'Mask2FormerTransformerDecoder' -] diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/layers/transformer/dab_detr_layers.py b/spaces/KyanChen/RSPrompter/mmdet/models/layers/transformer/dab_detr_layers.py deleted file mode 100644 index b8a6e7724a1b1ca18f26dd10455f3e3a4d696460..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/layers/transformer/dab_detr_layers.py +++ /dev/null @@ -1,298 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import List - -import torch -import torch.nn as nn -from mmcv.cnn import build_norm_layer -from mmcv.cnn.bricks.transformer import FFN -from mmengine.model import ModuleList -from torch import Tensor - -from .detr_layers import (DetrTransformerDecoder, DetrTransformerDecoderLayer, - DetrTransformerEncoder, DetrTransformerEncoderLayer) -from .utils import (MLP, ConditionalAttention, coordinate_to_encoding, - inverse_sigmoid) - - -class DABDetrTransformerDecoderLayer(DetrTransformerDecoderLayer): - """Implements decoder layer in DAB-DETR transformer.""" - - def _init_layers(self): - """Initialize self-attention, cross-attention, FFN, normalization and - others.""" - self.self_attn = ConditionalAttention(**self.self_attn_cfg) - self.cross_attn = ConditionalAttention(**self.cross_attn_cfg) - self.embed_dims = self.self_attn.embed_dims - self.ffn = FFN(**self.ffn_cfg) - norms_list = [ - build_norm_layer(self.norm_cfg, self.embed_dims)[1] - for _ in range(3) - ] - self.norms = ModuleList(norms_list) - self.keep_query_pos = self.cross_attn.keep_query_pos - - def forward(self, - query: Tensor, - key: Tensor, - query_pos: Tensor, - key_pos: Tensor, - ref_sine_embed: Tensor = None, - self_attn_masks: Tensor = None, - cross_attn_masks: Tensor = None, - key_padding_mask: Tensor = None, - is_first: bool = False, - **kwargs) -> Tensor: - """ - Args: - query (Tensor): The input query with shape [bs, num_queries, - dim]. - key (Tensor): The key tensor with shape [bs, num_keys, - dim]. - query_pos (Tensor): The positional encoding for query in self - attention, with the same shape as `x`. - key_pos (Tensor): The positional encoding for `key`, with the - same shape as `key`. - ref_sine_embed (Tensor): The positional encoding for query in - cross attention, with the same shape as `x`. - Defaults to None. - self_attn_masks (Tensor): ByteTensor mask with shape [num_queries, - num_keys]. Same in `nn.MultiheadAttention.forward`. - Defaults to None. - cross_attn_masks (Tensor): ByteTensor mask with shape [num_queries, - num_keys]. Same in `nn.MultiheadAttention.forward`. - Defaults to None. - key_padding_mask (Tensor): ByteTensor with shape [bs, num_keys]. - Defaults to None. - is_first (bool): A indicator to tell whether the current layer - is the first layer of the decoder. - Defaults to False. - - Returns: - Tensor: forwarded results with shape - [bs, num_queries, dim]. 
- """ - - query = self.self_attn( - query=query, - key=query, - query_pos=query_pos, - key_pos=query_pos, - attn_mask=self_attn_masks, - **kwargs) - query = self.norms[0](query) - query = self.cross_attn( - query=query, - key=key, - query_pos=query_pos, - key_pos=key_pos, - ref_sine_embed=ref_sine_embed, - attn_mask=cross_attn_masks, - key_padding_mask=key_padding_mask, - is_first=is_first, - **kwargs) - query = self.norms[1](query) - query = self.ffn(query) - query = self.norms[2](query) - - return query - - -class DABDetrTransformerDecoder(DetrTransformerDecoder): - """Decoder of DAB-DETR. - - Args: - query_dim (int): The last dimension of query pos, - 4 for anchor format, 2 for point format. - Defaults to 4. - query_scale_type (str): Type of transformation applied - to content query. Defaults to `cond_elewise`. - with_modulated_hw_attn (bool): Whether to inject h&w info - during cross conditional attention. Defaults to True. - """ - - def __init__(self, - *args, - query_dim: int = 4, - query_scale_type: str = 'cond_elewise', - with_modulated_hw_attn: bool = True, - **kwargs): - - self.query_dim = query_dim - self.query_scale_type = query_scale_type - self.with_modulated_hw_attn = with_modulated_hw_attn - - super().__init__(*args, **kwargs) - - def _init_layers(self): - """Initialize decoder layers and other layers.""" - assert self.query_dim in [2, 4], \ - f'{"dab-detr only supports anchor prior or reference point prior"}' - assert self.query_scale_type in [ - 'cond_elewise', 'cond_scalar', 'fix_elewise' - ] - - self.layers = ModuleList([ - DABDetrTransformerDecoderLayer(**self.layer_cfg) - for _ in range(self.num_layers) - ]) - - embed_dims = self.layers[0].embed_dims - self.embed_dims = embed_dims - - self.post_norm = build_norm_layer(self.post_norm_cfg, embed_dims)[1] - if self.query_scale_type == 'cond_elewise': - self.query_scale = MLP(embed_dims, embed_dims, embed_dims, 2) - elif self.query_scale_type == 'cond_scalar': - self.query_scale = MLP(embed_dims, embed_dims, 1, 2) - elif self.query_scale_type == 'fix_elewise': - self.query_scale = nn.Embedding(self.num_layers, embed_dims) - else: - raise NotImplementedError('Unknown query_scale_type: {}'.format( - self.query_scale_type)) - - self.ref_point_head = MLP(self.query_dim // 2 * embed_dims, embed_dims, - embed_dims, 2) - - if self.with_modulated_hw_attn and self.query_dim == 4: - self.ref_anchor_head = MLP(embed_dims, embed_dims, 2, 2) - - self.keep_query_pos = self.layers[0].keep_query_pos - if not self.keep_query_pos: - for layer_id in range(self.num_layers - 1): - self.layers[layer_id + 1].cross_attn.qpos_proj = None - - def forward(self, - query: Tensor, - key: Tensor, - query_pos: Tensor, - key_pos: Tensor, - reg_branches: nn.Module, - key_padding_mask: Tensor = None, - **kwargs) -> List[Tensor]: - """Forward function of decoder. - - Args: - query (Tensor): The input query with shape (bs, num_queries, dim). - key (Tensor): The input key with shape (bs, num_keys, dim). - query_pos (Tensor): The positional encoding for `query`, with the - same shape as `query`. - key_pos (Tensor): The positional encoding for `key`, with the - same shape as `key`. - reg_branches (nn.Module): The regression branch for dynamically - updating references in each layer. - key_padding_mask (Tensor): ByteTensor with shape (bs, num_keys). - Defaults to `None`. - - Returns: - List[Tensor]: forwarded results with shape (num_decoder_layers, - bs, num_queries, dim) if `return_intermediate` is True, otherwise - with shape (1, bs, num_queries, dim). 
references with shape - (num_decoder_layers, bs, num_queries, 2/4). - """ - output = query - unsigmoid_references = query_pos - - reference_points = unsigmoid_references.sigmoid() - intermediate_reference_points = [reference_points] - - intermediate = [] - for layer_id, layer in enumerate(self.layers): - obj_center = reference_points[..., :self.query_dim] - ref_sine_embed = coordinate_to_encoding( - coord_tensor=obj_center, num_feats=self.embed_dims // 2) - query_pos = self.ref_point_head( - ref_sine_embed) # [bs, nq, 2c] -> [bs, nq, c] - # For the first decoder layer, do not apply transformation - if self.query_scale_type != 'fix_elewise': - if layer_id == 0: - pos_transformation = 1 - else: - pos_transformation = self.query_scale(output) - else: - pos_transformation = self.query_scale.weight[layer_id] - # apply transformation - ref_sine_embed = ref_sine_embed[ - ..., :self.embed_dims] * pos_transformation - # modulated height and weight attention - if self.with_modulated_hw_attn: - assert obj_center.size(-1) == 4 - ref_hw = self.ref_anchor_head(output).sigmoid() - ref_sine_embed[..., self.embed_dims // 2:] *= \ - (ref_hw[..., 0] / obj_center[..., 2]).unsqueeze(-1) - ref_sine_embed[..., : self.embed_dims // 2] *= \ - (ref_hw[..., 1] / obj_center[..., 3]).unsqueeze(-1) - - output = layer( - output, - key, - query_pos=query_pos, - ref_sine_embed=ref_sine_embed, - key_pos=key_pos, - key_padding_mask=key_padding_mask, - is_first=(layer_id == 0), - **kwargs) - # iter update - tmp_reg_preds = reg_branches(output) - tmp_reg_preds[..., :self.query_dim] += inverse_sigmoid( - reference_points) - new_reference_points = tmp_reg_preds[ - ..., :self.query_dim].sigmoid() - if layer_id != self.num_layers - 1: - intermediate_reference_points.append(new_reference_points) - reference_points = new_reference_points.detach() - - if self.return_intermediate: - intermediate.append(self.post_norm(output)) - - output = self.post_norm(output) - - if self.return_intermediate: - return [ - torch.stack(intermediate), - torch.stack(intermediate_reference_points), - ] - else: - return [ - output.unsqueeze(0), - torch.stack(intermediate_reference_points) - ] - - -class DABDetrTransformerEncoder(DetrTransformerEncoder): - """Encoder of DAB-DETR.""" - - def _init_layers(self): - """Initialize encoder layers.""" - self.layers = ModuleList([ - DetrTransformerEncoderLayer(**self.layer_cfg) - for _ in range(self.num_layers) - ]) - embed_dims = self.layers[0].embed_dims - self.embed_dims = embed_dims - self.query_scale = MLP(embed_dims, embed_dims, embed_dims, 2) - - def forward(self, query: Tensor, query_pos: Tensor, - key_padding_mask: Tensor, **kwargs): - """Forward function of encoder. - - Args: - query (Tensor): Input queries of encoder, has shape - (bs, num_queries, dim). - query_pos (Tensor): The positional embeddings of the queries, has - shape (bs, num_feat_points, dim). - key_padding_mask (Tensor): ByteTensor, the key padding mask - of the queries, has shape (bs, num_feat_points). - - Returns: - Tensor: With shape (num_queries, bs, dim). 
- """ - - for layer in self.layers: - pos_scales = self.query_scale(query) - query = layer( - query, - query_pos=query_pos * pos_scales, - key_padding_mask=key_padding_mask, - **kwargs) - - return query diff --git a/spaces/KyanChen/RSPrompter/mmpl/engine/hooks/yolox_mode_switch_hook.py b/spaces/KyanChen/RSPrompter/mmpl/engine/hooks/yolox_mode_switch_hook.py deleted file mode 100644 index 27711768c3f89b26410ae1373bc920d0bfded603..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmpl/engine/hooks/yolox_mode_switch_hook.py +++ /dev/null @@ -1,54 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -from typing import Sequence - -from mmengine.hooks import Hook -from mmengine.model import is_model_wrapper -from mmengine.runner import Runner - -from mmyolo.registry import HOOKS - - -@HOOKS.register_module() -class YOLOXModeSwitchHook(Hook): - """Switch the mode of YOLOX during training. - - This hook turns off the mosaic and mixup data augmentation and switches - to use L1 loss in bbox_head. - - Args: - num_last_epochs (int): The number of latter epochs in the end of the - training to close the data augmentation and switch to L1 loss. - Defaults to 15. - """ - - def __init__(self, - num_last_epochs: int = 15, - new_train_pipeline: Sequence[dict] = None): - self.num_last_epochs = num_last_epochs - self.new_train_pipeline_cfg = new_train_pipeline - - def before_train_epoch(self, runner: Runner): - """Close mosaic and mixup augmentation and switches to use L1 loss.""" - epoch = runner.epoch - model = runner.model - if is_model_wrapper(model): - model = model.module - - if (epoch + 1) == runner.max_epochs - self.num_last_epochs: - runner.logger.info(f'New Pipeline: {self.new_train_pipeline_cfg}') - - train_dataloader_cfg = copy.deepcopy(runner.cfg.train_dataloader) - train_dataloader_cfg.dataset.pipeline = self.new_train_pipeline_cfg - # Note: Why rebuild the dataset? - # When build_dataloader will make a deep copy of the dataset, - # it will lead to potential risks, such as the global instance - # object FileClient data is disordered. - # This problem needs to be solved in the future. - new_train_dataloader = Runner.build_dataloader( - train_dataloader_cfg) - runner.train_loop.dataloader = new_train_dataloader - - runner.logger.info('recreate the dataloader!') - runner.logger.info('Add additional bbox reg loss now!') - model.bbox_head.use_bbox_aux = True diff --git a/spaces/Laihiujin/OneFormer/oneformer/evaluation/evaluator.py b/spaces/Laihiujin/OneFormer/oneformer/evaluation/evaluator.py deleted file mode 100644 index 7d0848c7ec511f7000f4230c914a8b32f690dee0..0000000000000000000000000000000000000000 --- a/spaces/Laihiujin/OneFormer/oneformer/evaluation/evaluator.py +++ /dev/null @@ -1,228 +0,0 @@ -# ------------------------------------------------------------------------------ -# Reference: https://github.com/facebookresearch/detectron2/blob/main/detectron2/evaluation/evaluator.py -# Modified by Jitesh Jain (https://github.com/praeclarumjj3) -# ------------------------------------------------------------------------------ - -import datetime -import logging -import time -from collections import OrderedDict, abc -from contextlib import ExitStack, contextmanager -from typing import List, Union -import torch -from torch import nn - -from detectron2.utils.comm import get_world_size, is_main_process -from detectron2.utils.logger import log_every_n_seconds - - -class DatasetEvaluator: - """ - Base class for a dataset evaluator. 
- - The function :func:`inference_on_dataset` runs the model over - all samples in the dataset, and have a DatasetEvaluator to process the inputs/outputs. - - This class will accumulate information of the inputs/outputs (by :meth:`process`), - and produce evaluation results in the end (by :meth:`evaluate`). - """ - - def reset(self): - """ - Preparation for a new round of evaluation. - Should be called before starting a round of evaluation. - """ - pass - - def process(self, inputs, outputs): - """ - Process the pair of inputs and outputs. - If they contain batches, the pairs can be consumed one-by-one using `zip`: - - .. code-block:: python - - for input_, output in zip(inputs, outputs): - # do evaluation on single input/output pair - ... - - Args: - inputs (list): the inputs that's used to call the model. - outputs (list): the return value of `model(inputs)` - """ - pass - - def evaluate(self): - """ - Evaluate/summarize the performance, after processing all input/output pairs. - - Returns: - dict: - A new evaluator class can return a dict of arbitrary format - as long as the user can process the results. - In our train_net.py, we expect the following format: - - * key: the name of the task (e.g., bbox) - * value: a dict of {metric name: score}, e.g.: {"AP50": 80} - """ - pass - - -class DatasetEvaluators(DatasetEvaluator): - """ - Wrapper class to combine multiple :class:`DatasetEvaluator` instances. - - This class dispatches every evaluation call to - all of its :class:`DatasetEvaluator`. - """ - - def __init__(self, evaluators): - """ - Args: - evaluators (list): the evaluators to combine. - """ - super().__init__() - self._evaluators = evaluators - - def reset(self): - for evaluator in self._evaluators: - evaluator.reset() - - def process(self, inputs, outputs): - for evaluator in self._evaluators: - evaluator.process(inputs, outputs) - - def evaluate(self): - results = OrderedDict() - for evaluator in self._evaluators: - result = evaluator.evaluate() - if is_main_process() and result is not None: - for k, v in result.items(): - assert ( - k not in results - ), "Different evaluators produce results with the same key {}".format(k) - results[k] = v - return results - - -def inference_on_dataset( - model, data_loader, evaluator: Union[DatasetEvaluator, List[DatasetEvaluator], None] -): - """ - Run model on the data_loader and evaluate the metrics with evaluator. - Also benchmark the inference speed of `model.__call__` accurately. - The model will be used in eval mode. - - Args: - model (callable): a callable which takes an object from - `data_loader` and returns some outputs. - - If it's an nn.Module, it will be temporarily set to `eval` mode. - If you wish to evaluate a model in `training` mode instead, you can - wrap the given model and override its behavior of `.eval()` and `.train()`. - data_loader: an iterable object with a length. - The elements it generates will be the inputs to the model. - evaluator: the evaluator(s) to run. Use `None` if you only want to benchmark, - but don't want to do any evaluation. 
- - Returns: - The return value of `evaluator.evaluate()` - """ - num_devices = get_world_size() - logger = logging.getLogger(__name__) - logger.info("Start inference on {} batches".format(len(data_loader))) - - total = len(data_loader) # inference data loader must have a fixed length - if evaluator is None: - # create a no-op evaluator - evaluator = DatasetEvaluators([]) - if isinstance(evaluator, abc.MutableSequence): - evaluator = DatasetEvaluators(evaluator) - evaluator.reset() - - num_warmup = min(5, total - 1) - start_time = time.perf_counter() - total_data_time = 0 - total_compute_time = 0 - total_eval_time = 0 - with ExitStack() as stack: - if isinstance(model, nn.Module): - stack.enter_context(inference_context(model)) - stack.enter_context(torch.no_grad()) - - start_data_time = time.perf_counter() - for idx, inputs in enumerate(data_loader): - total_data_time += time.perf_counter() - start_data_time - if idx == num_warmup: - start_time = time.perf_counter() - total_data_time = 0 - total_compute_time = 0 - total_eval_time = 0 - - start_compute_time = time.perf_counter() - outputs = model(inputs) - if torch.cuda.is_available(): - torch.cuda.synchronize() - total_compute_time += time.perf_counter() - start_compute_time - - start_eval_time = time.perf_counter() - evaluator.process(inputs, outputs) - total_eval_time += time.perf_counter() - start_eval_time - - iters_after_start = idx + 1 - num_warmup * int(idx >= num_warmup) - data_seconds_per_iter = total_data_time / iters_after_start - compute_seconds_per_iter = total_compute_time / iters_after_start - eval_seconds_per_iter = total_eval_time / iters_after_start - total_seconds_per_iter = (time.perf_counter() - start_time) / iters_after_start - if idx >= num_warmup * 2 or compute_seconds_per_iter > 5: - eta = datetime.timedelta(seconds=int(total_seconds_per_iter * (total - idx - 1))) - log_every_n_seconds( - logging.INFO, - ( - f"Inference done {idx + 1}/{total}. " - f"Dataloading: {data_seconds_per_iter:.4f} s/iter. " - f"Inference: {compute_seconds_per_iter:.4f} s/iter. " - f"Eval: {eval_seconds_per_iter:.4f} s/iter. " - f"Total: {total_seconds_per_iter:.4f} s/iter. " - f"ETA={eta}" - ), - n=5, - ) - start_data_time = time.perf_counter() - - # Measure the time only for this worker (before the synchronization barrier) - total_time = time.perf_counter() - start_time - total_time_str = str(datetime.timedelta(seconds=total_time)) - # NOTE this format is parsed by grep - logger.info( - "Total inference time: {} ({:.6f} s / iter per device, on {} devices)".format( - total_time_str, total_time / (total - num_warmup), num_devices - ) - ) - total_compute_time_str = str(datetime.timedelta(seconds=int(total_compute_time))) - logger.info( - "Total inference pure compute time: {} ({:.6f} s / iter per device, on {} devices)".format( - total_compute_time_str, total_compute_time / (total - num_warmup), num_devices - ) - ) - - results = evaluator.evaluate() - # An evaluator may return None when not in main process. - # Replace it by an empty dict instead to make it easier for downstream code to handle - if results is None: - results = {} - return results - - -@contextmanager -def inference_context(model): - """ - A context where the model is temporarily changed to eval mode, - and restored to previous mode afterwards. 
- - Args: - model: a torch Module - """ - training_mode = model.training - model.eval() - yield - model.train(training_mode) diff --git a/spaces/LandonBurlingham/07-Seq2Seq/qasrl_model_pipeline.py b/spaces/LandonBurlingham/07-Seq2Seq/qasrl_model_pipeline.py deleted file mode 100644 index 50135f76849bc8537fcae83b72532da661487da6..0000000000000000000000000000000000000000 --- a/spaces/LandonBurlingham/07-Seq2Seq/qasrl_model_pipeline.py +++ /dev/null @@ -1,183 +0,0 @@ -from typing import Optional -import json -from argparse import Namespace -from pathlib import Path -from transformers import Text2TextGenerationPipeline, AutoModelForSeq2SeqLM, AutoTokenizer - -def get_markers_for_model(is_t5_model: bool) -> Namespace: - special_tokens_constants = Namespace() - if is_t5_model: - # T5 model have 100 special tokens by default - special_tokens_constants.separator_input_question_predicate = "" - special_tokens_constants.separator_output_answers = "" - special_tokens_constants.separator_output_questions = "" # if using only questions - special_tokens_constants.separator_output_question_answer = "" - special_tokens_constants.separator_output_pairs = "" - special_tokens_constants.predicate_generic_marker = "" - special_tokens_constants.predicate_verb_marker = "" - special_tokens_constants.predicate_nominalization_marker = "" - - else: - special_tokens_constants.separator_input_question_predicate = "" - special_tokens_constants.separator_output_answers = "" - special_tokens_constants.separator_output_questions = "" # if using only questions - special_tokens_constants.separator_output_question_answer = "" - special_tokens_constants.separator_output_pairs = "" - special_tokens_constants.predicate_generic_marker = "" - special_tokens_constants.predicate_verb_marker = "" - special_tokens_constants.predicate_nominalization_marker = "" - return special_tokens_constants - -def load_trained_model(name_or_path): - import huggingface_hub as HFhub - tokenizer = AutoTokenizer.from_pretrained(name_or_path) - model = AutoModelForSeq2SeqLM.from_pretrained(name_or_path) - # load preprocessing_kwargs from the model repo on HF hub, or from the local model directory - kwargs_filename = None - if name_or_path.startswith("kleinay/"): # and 'preprocessing_kwargs.json' in HFhub.list_repo_files(name_or_path): # the supported version of HFhub doesn't support list_repo_files - kwargs_filename = HFhub.hf_hub_download(repo_id=name_or_path, filename="preprocessing_kwargs.json") - elif Path(name_or_path).is_dir() and (Path(name_or_path) / "experiment_kwargs.json").exists(): - kwargs_filename = Path(name_or_path) / "experiment_kwargs.json" - - if kwargs_filename: - preprocessing_kwargs = json.load(open(kwargs_filename)) - # integrate into model.config (for decoding args, e.g. 
"num_beams"), and save also as standalone object for preprocessing - model.config.preprocessing_kwargs = Namespace(**preprocessing_kwargs) - model.config.update(preprocessing_kwargs) - return model, tokenizer - - -class QASRL_Pipeline(Text2TextGenerationPipeline): - def __init__(self, model_repo: str, **kwargs): - model, tokenizer = load_trained_model(model_repo) - super().__init__(model, tokenizer, framework="pt") - self.is_t5_model = "t5" in model.config.model_type - self.special_tokens = get_markers_for_model(self.is_t5_model) - self.data_args = model.config.preprocessing_kwargs - # backward compatibility - default keyword values implemeted in `run_summarization`, thus not saved in `preprocessing_kwargs` - if "predicate_marker_type" not in vars(self.data_args): - self.data_args.predicate_marker_type = "generic" - if "use_bilateral_predicate_marker" not in vars(self.data_args): - self.data_args.use_bilateral_predicate_marker = True - if "append_verb_form" not in vars(self.data_args): - self.data_args.append_verb_form = True - self._update_config(**kwargs) - - def _update_config(self, **kwargs): - " Update self.model.config with initialization parameters and necessary defaults. " - # set default values that will always override model.config, but can overriden by __init__ kwargs - kwargs["max_length"] = kwargs.get("max_length", 80) - # override model.config with kwargs - for k,v in kwargs.items(): - self.model.config.__dict__[k] = v - - def _sanitize_parameters(self, **kwargs): - preprocess_kwargs, forward_kwargs, postprocess_kwargs = {}, {}, {} - if "predicate_marker" in kwargs: - preprocess_kwargs["predicate_marker"] = kwargs["predicate_marker"] - if "predicate_type" in kwargs: - preprocess_kwargs["predicate_type"] = kwargs["predicate_type"] - if "verb_form" in kwargs: - preprocess_kwargs["verb_form"] = kwargs["verb_form"] - return preprocess_kwargs, forward_kwargs, postprocess_kwargs - - def preprocess(self, inputs, predicate_marker="", predicate_type=None, verb_form=None): - # Here, inputs is string or list of strings; apply string postprocessing - if isinstance(inputs, str): - processed_inputs = self._preprocess_string(inputs, predicate_marker, predicate_type, verb_form) - elif hasattr(inputs, "__iter__"): - processed_inputs = [self._preprocess_string(s, predicate_marker, predicate_type, verb_form) for s in inputs] - else: - raise ValueError("inputs must be str or Iterable[str]") - # Now pass to super.preprocess for tokenization - return super().preprocess(processed_inputs) - - def _preprocess_string(self, seq: str, predicate_marker: str, predicate_type: Optional[str], verb_form: Optional[str]) -> str: - sent_tokens = seq.split(" ") - assert predicate_marker in sent_tokens, f"Input sentence must include a predicate-marker token ('{predicate_marker}') before the target predicate word" - predicate_idx = sent_tokens.index(predicate_marker) - sent_tokens.remove(predicate_marker) - sentence_before_predicate = " ".join([sent_tokens[i] for i in range(predicate_idx)]) - predicate = sent_tokens[predicate_idx] - sentence_after_predicate = " ".join([sent_tokens[i] for i in range(predicate_idx+1, len(sent_tokens))]) - - if self.data_args.predicate_marker_type == "generic": - predicate_marker = self.special_tokens.predicate_generic_marker - # In case we want special marker for each predicate type: """ - elif self.data_args.predicate_marker_type == "pred_type": - assert predicate_type is not None, "For this model, you must provide the `predicate_type` either when initializing QASRL_Pipeline(...) 
or when applying __call__(...) on it" - assert predicate_type in ("verbal", "nominal"), f"`predicate_type` must be either 'verbal' or 'nominal'; got '{predicate_type}'" - predicate_marker = {"verbal": self.special_tokens.predicate_verb_marker , - "nominal": self.special_tokens.predicate_nominalization_marker - }[predicate_type] - - if self.data_args.use_bilateral_predicate_marker: - seq = f"{sentence_before_predicate} {predicate_marker} {predicate} {predicate_marker} {sentence_after_predicate}" - else: - seq = f"{sentence_before_predicate} {predicate_marker} {predicate} {sentence_after_predicate}" - - # embed also verb_form - if self.data_args.append_verb_form and verb_form is None: - raise ValueError(f"For this model, you must provide the `verb_form` of the predicate when applying __call__(...)") - elif self.data_args.append_verb_form: - seq = f"{seq} {self.special_tokens.separator_input_question_predicate} {verb_form} " - else: - seq = f"{seq} " - - # append source prefix (for t5 models) - prefix = self._get_source_prefix(predicate_type) - - return prefix + seq - - def _get_source_prefix(self, predicate_type: Optional[str]): - if not self.is_t5_model or self.data_args.source_prefix is None: - return '' - if not self.data_args.source_prefix.startswith("<"): # Regular prefix - not dependent on input row x - return self.data_args.source_prefix - if self.data_args.source_prefix == "": - if predicate_type is None: - raise ValueError("source_prefix is '' but input no `predicate_type`.") - else: - return f"Generate QAs for {predicate_type} QASRL: " - - def _forward(self, *args, **kwargs): - outputs = super()._forward(*args, **kwargs) - return outputs - - - def postprocess(self, model_outputs): - output_seq = self.tokenizer.decode( - model_outputs["output_ids"].squeeze(), - skip_special_tokens=False, - clean_up_tokenization_spaces=False, - ) - output_seq = output_seq.strip(self.tokenizer.pad_token).strip(self.tokenizer.eos_token).strip() - qa_subseqs = output_seq.split(self.special_tokens.separator_output_pairs) - qas = [self._postrocess_qa(qa_subseq) for qa_subseq in qa_subseqs] - return {"generated_text": output_seq, - "QAs": qas} - - def _postrocess_qa(self, seq: str) -> str: - # split question and answers - if self.special_tokens.separator_output_question_answer in seq: - question, answer = seq.split(self.special_tokens.separator_output_question_answer)[:2] - else: - print("invalid format: no separator between question and answer found...") - return None - # question, answer = seq, '' # Or: backoff to only question - # skip "_" slots in questions - question = ' '.join(t for t in question.split(' ') if t != '_') - answers = [a.strip() for a in answer.split(self.special_tokens.separator_output_answers)] - return {"question": question, "answers": answers} - - -if __name__ == "__main__": - pipe = QASRL_Pipeline("kleinay/qanom-seq2seq-model-baseline") - res1 = pipe("The student was interested in Luke 's research about sea animals .", verb_form="research", predicate_type="nominal") - res2 = pipe(["The doctor was interested in Luke 's treatment .", - "The Veterinary student was interested in Luke 's treatment of sea animals ."], verb_form="treat", predicate_type="nominal", num_beams=10) - res3 = pipe("A number of professions have developed that specialize in the treatment of mental disorders .", verb_form="develop", predicate_type="verbal") - print(res1) - print(res2) - print(res3) - \ No newline at end of file diff --git a/spaces/LanguageBind/LanguageBind/languagebind/depth/processing_depth.py 
b/spaces/LanguageBind/LanguageBind/languagebind/depth/processing_depth.py deleted file mode 100644 index 1019e0cb45c8be4bc7424c4d8f9d091dac5dab0b..0000000000000000000000000000000000000000 --- a/spaces/LanguageBind/LanguageBind/languagebind/depth/processing_depth.py +++ /dev/null @@ -1,108 +0,0 @@ -import cv2 -import torch -from PIL import Image -from torch import nn -from torchvision import transforms -from transformers import ProcessorMixin, BatchEncoding -from transformers.image_processing_utils import BatchFeature - -OPENAI_DATASET_MEAN = (0.48145466, 0.4578275, 0.40821073) -OPENAI_DATASET_STD = (0.26862954, 0.26130258, 0.27577711) - -def make_list_of_images(x): - if not isinstance(x, list): - return [x] - return x - -def opencv_loader(path): - return cv2.imread(path, cv2.IMREAD_UNCHANGED).astype('float32') - - -class DepthNorm(nn.Module): - def __init__( - self, - max_depth=0, - min_depth=0.01, - ): - super().__init__() - self.max_depth = max_depth - self.min_depth = min_depth - self.scale = 1000.0 # nyuv2 abs.depth - - def forward(self, image): - # image = np.array(image) - depth_img = image / self.scale # (H, W) in meters - depth_img = depth_img.clip(min=self.min_depth) - if self.max_depth != 0: - depth_img = depth_img.clip(max=self.max_depth) - depth_img /= self.max_depth # 0-1 - else: - depth_img /= depth_img.max() - depth_img = torch.from_numpy(depth_img).unsqueeze(0).repeat(3, 1, 1) # assume image - return depth_img.to(torch.get_default_dtype()) - -def get_depth_transform(config): - config = config.vision_config - transform = transforms.Compose( - [ - DepthNorm(max_depth=config.max_depth), - transforms.Resize(224, interpolation=transforms.InterpolationMode.BICUBIC), - transforms.CenterCrop(224), - transforms.Normalize(OPENAI_DATASET_MEAN, OPENAI_DATASET_STD), # assume image - # transforms.Normalize((0.5, ), (0.5, )) # 0-1 to norm distribution - # transforms.Normalize((0.0418, ), (0.0295, )) # sun rgb-d imagebind - # transforms.Normalize((0.02, ), (0.00295, )) # nyuv2 - ] - ) - return transform - -def load_and_transform_depth(depth_path, transform): - depth = opencv_loader(depth_path) - depth_outputs = transform(depth) - return depth_outputs - -class LanguageBindDepthProcessor(ProcessorMixin): - attributes = [] - tokenizer_class = ("LanguageBindDepthTokenizer") - - def __init__(self, config, tokenizer=None, **kwargs): - super().__init__(**kwargs) - self.config = config - self.transform = get_depth_transform(config) - self.image_processor = load_and_transform_depth - self.tokenizer = tokenizer - - def __call__(self, images=None, text=None, context_length=77, return_tensors=None, **kwargs): - if text is None and images is None: - raise ValueError("You have to specify either text or images. Both cannot be none.") - - if text is not None: - encoding = self.tokenizer(text, max_length=context_length, padding='max_length', - truncation=True, return_tensors=return_tensors, **kwargs) - - if images is not None: - images = make_list_of_images(images) - image_features = [self.image_processor(image, self.transform) for image in images] - image_features = torch.stack(image_features) - - if text is not None and images is not None: - encoding["pixel_values"] = image_features - return encoding - elif text is not None: - return encoding - else: - return {"pixel_values": image_features} - - def batch_decode(self, skip_special_tokens=True, *args, **kwargs): - """ - This method forwards all its arguments to CLIPTokenizerFast's [`~PreTrainedTokenizer.batch_decode`]. 
Please - refer to the docstring of this method for more information. - """ - return self.tokenizer.batch_decode(*args, skip_special_tokens=skip_special_tokens, **kwargs) - - def decode(self, skip_special_tokens=True, *args, **kwargs): - """ - This method forwards all its arguments to CLIPTokenizerFast's [`~PreTrainedTokenizer.decode`]. Please refer to - the docstring of this method for more information. - """ - return self.tokenizer.decode(*args, skip_special_tokens=skip_special_tokens, **kwargs) diff --git a/spaces/LinkSoul/Chinese-LLaVa/static/css/bulma-carousel.min.css b/spaces/LinkSoul/Chinese-LLaVa/static/css/bulma-carousel.min.css deleted file mode 100644 index 4d4b7d103e0013f64e4dedd2ad0b2947cc0d11a5..0000000000000000000000000000000000000000 --- a/spaces/LinkSoul/Chinese-LLaVa/static/css/bulma-carousel.min.css +++ /dev/null @@ -1 +0,0 @@ -@-webkit-keyframes spinAround{from{-webkit-transform:rotate(0);transform:rotate(0)}to{-webkit-transform:rotate(359deg);transform:rotate(359deg)}}@keyframes spinAround{from{-webkit-transform:rotate(0);transform:rotate(0)}to{-webkit-transform:rotate(359deg);transform:rotate(359deg)}}.slider{position:relative;width:100%}.slider-container{display:flex;flex-wrap:nowrap;flex-direction:row;overflow:hidden;-webkit-transform:translate3d(0,0,0);transform:translate3d(0,0,0);min-height:100%}.slider-container.is-vertical{flex-direction:column}.slider-container .slider-item{flex:none}.slider-container .slider-item .image.is-covered img{-o-object-fit:cover;object-fit:cover;-o-object-position:center center;object-position:center center;height:100%;width:100%}.slider-container .slider-item .video-container{height:0;padding-bottom:0;padding-top:56.25%;margin:0;position:relative}.slider-container .slider-item .video-container.is-1by1,.slider-container .slider-item .video-container.is-square{padding-top:100%}.slider-container .slider-item .video-container.is-4by3{padding-top:75%}.slider-container .slider-item .video-container.is-21by9{padding-top:42.857143%}.slider-container .slider-item .video-container embed,.slider-container .slider-item .video-container iframe,.slider-container .slider-item .video-container object{position:absolute;top:0;left:0;width:100%!important;height:100%!important}.slider-navigation-next,.slider-navigation-previous{display:flex;justify-content:center;align-items:center;position:absolute;width:42px;height:42px;background:#fff center center no-repeat;background-size:20px 20px;border:1px solid #fff;border-radius:25091983px;box-shadow:0 2px 5px #3232321a;top:50%;margin-top:-20px;left:0;cursor:pointer;transition:opacity .3s,-webkit-transform .3s;transition:transform .3s,opacity .3s;transition:transform .3s,opacity .3s,-webkit-transform .3s}.slider-navigation-next:hover,.slider-navigation-previous:hover{-webkit-transform:scale(1.2);transform:scale(1.2)}.slider-navigation-next.is-hidden,.slider-navigation-previous.is-hidden{display:none;opacity:0}.slider-navigation-next svg,.slider-navigation-previous svg{width:25%}.slider-navigation-next{left:auto;right:0;background:#fff center center no-repeat;background-size:20px 20px}.slider-pagination{display:none;justify-content:center;align-items:center;position:absolute;bottom:0;left:0;right:0;padding:.5rem 1rem;text-align:center}.slider-pagination .slider-page{background:#fff;width:10px;height:10px;border-radius:25091983px;display:inline-block;margin:0 3px;box-shadow:0 2px 5px #3232321a;transition:-webkit-transform .3s;transition:transform .3s;transition:transform .3s,-webkit-transform 
.3s;cursor:pointer}.slider-pagination .slider-page.is-active,.slider-pagination .slider-page:hover{-webkit-transform:scale(1.4);transform:scale(1.4)}@media screen and (min-width:800px){.slider-pagination{display:flex}}.hero.has-carousel{position:relative}.hero.has-carousel+.hero-body,.hero.has-carousel+.hero-footer,.hero.has-carousel+.hero-head{z-index:10;overflow:hidden}.hero.has-carousel .hero-carousel{position:absolute;top:0;left:0;bottom:0;right:0;height:auto;border:none;margin:auto;padding:0;z-index:0}.hero.has-carousel .hero-carousel .slider{width:100%;max-width:100%;overflow:hidden;height:100%!important;max-height:100%;z-index:0}.hero.has-carousel .hero-carousel .slider .has-background{max-height:100%}.hero.has-carousel .hero-carousel .slider .has-background .is-background{-o-object-fit:cover;object-fit:cover;-o-object-position:center center;object-position:center center;height:100%;width:100%}.hero.has-carousel .hero-body{margin:0 3rem;z-index:10} \ No newline at end of file diff --git a/spaces/Lngo/paragon-AI-blip2-image-to-text/app.py b/spaces/Lngo/paragon-AI-blip2-image-to-text/app.py deleted file mode 100644 index bb5d0fe73dad7570030314af98ff52eff15f6f8e..0000000000000000000000000000000000000000 --- a/spaces/Lngo/paragon-AI-blip2-image-to-text/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/paragon-AI/blip2-image-to-text").launch() \ No newline at end of file diff --git a/spaces/LovnishVermaPRINCE/chatai/app.py b/spaces/LovnishVermaPRINCE/chatai/app.py deleted file mode 100644 index dc3fb935809dc0c4f66ae84c67dbfe8a17baace7..0000000000000000000000000000000000000000 --- a/spaces/LovnishVermaPRINCE/chatai/app.py +++ /dev/null @@ -1,30 +0,0 @@ -import streamlit as st -from transformers import AutoModelForCausalLM, AutoTokenizer - -# Load the GPT-2 model and tokenizer -model_name = "gpt2" -model = AutoModelForCausalLM.from_pretrained(model_name) -tokenizer = AutoTokenizer.from_pretrained(model_name) - -# Streamlit app -st.title("Prompt Engineering with GPT-2") - -# User input -user_prompt = st.text_input("Enter a prompt:") - -if user_prompt: - # Generate text based on the user's prompt - input_ids = tokenizer(user_prompt, return_tensors="pt").input_ids - output = model.generate( - input_ids, - max_length=150, - num_return_sequences=1, - no_repeat_ngram_size=2, - temperature=0.6, # Adjust temperature - top_k=50 # Adjust top-k -) - generated_text = tokenizer.decode(output[0], skip_special_tokens=True) - - # Display generated text - st.subheader("Generated Text:") - st.write(generated_text) diff --git a/spaces/LuxOAI/zenFace-Recognition-SDK/Dockerfile b/spaces/LuxOAI/zenFace-Recognition-SDK/Dockerfile deleted file mode 100644 index 6c6c3d6f2b55c4b0c83cd74ef8ec94be4ba6539a..0000000000000000000000000000000000000000 --- a/spaces/LuxOAI/zenFace-Recognition-SDK/Dockerfile +++ /dev/null @@ -1,19 +0,0 @@ -FROM ubuntu:20.04 -RUN ln -snf /usr/share/zoneinfo/$CONTAINER_TIMEZONE /etc/localtime && echo $CONTAINER_TIMEZONE > /etc/timezone -RUN apt-get update -y -RUN apt-get install -y python3 python3-pip python3-opencv -RUN apt-get install -y libcurl4-openssl-dev libssl-dev -RUN mkdir -p /home/FaceOnLive_v6 -RUN mkdir -p /home/FaceOnLive_v6/facewrapper -WORKDIR /home/FaceOnLive_v6 -COPY ./facewrapper ./facewrapper -COPY ./facewrapper/libs/libimutils.so /usr/lib -COPY ./gradio ./gradio -COPY ./openvino /usr/lib -COPY ./app.py ./app.py -COPY ./run.sh . 
-COPY ./requirements.txt ./requirements.txt -RUN pip3 install -r requirements.txt -RUN chmod a+x run.sh -CMD ["./run.sh"] -EXPOSE 8000 \ No newline at end of file diff --git a/spaces/MSLAB/PaperGPT/src/app.py b/spaces/MSLAB/PaperGPT/src/app.py deleted file mode 100644 index a5086e564e903f70acf56e832c45985589994aa4..0000000000000000000000000000000000000000 --- a/spaces/MSLAB/PaperGPT/src/app.py +++ /dev/null @@ -1,83 +0,0 @@ -import os -import gradio as gr -from suggest import Suggest -from edit import Editor -from config import configure_logging -from utils import diff_texts - - -configure_logging() - - -with gr.Blocks() as demo: - - title = gr.Button("PaperGPT", interactive=True) - key = gr.Textbox(label="openai_key", value=os.environ.get('OPENAI_API_KEY')) - - with gr.Row(): - with gr.Tab("Edit"): - - handler = Editor() - txt_in = gr.Textbox(label="Input", lines=11, max_lines=11, value=handler.sample_content) - btn = gr.Button("Edit") - txt_out = gr.Textbox(label="Output", lines=11, max_lines=11, value="GPT will serve as your editor and modify the paragraph for you.") - btn.click(handler.generate, inputs=[txt_in, key], outputs=[txt_out]) - - with gr.Tab("Suggest"): - - max_ideas = 5 - handler = Suggest(max_ideas) - - def select(name: str): - for i in handler.idea_list: - if i['title'] == name: - return [ - gr.Textbox.update(value=i["thought"], label="thought", visible=True), - gr.Textbox.update(value=i["action"], label="action", visible=True), - gr.Textbox.update(value=i["original"], label="original", visible=True, max_lines=5, lines=5), - gr.Textbox.update(value=i["improved"], label="improved", visible=True, max_lines=5, lines=5), - gr.HighlightedText.update(value=diff_texts(i["original"], i["improved"]), visible=True) - ] - - with gr.Row().style(equal_height=True): - with gr.Column(scale=0.95): - txt_in = gr.Textbox(label="Input", lines=11, max_lines=11, value=handler.sample_content[2048+2048+256-45:]) - with gr.Column(scale=0.05): - upload = gr.File(file_count="single", file_types=["tex", ".pdf"]) - btn = gr.Button("Analyze") - upload.change(handler.read_file, inputs=upload, outputs=txt_in) - - textboxes = [] - sug = gr.Textbox("GPT will give suggestions and help you improve the paper quality.", interactive=False, show_label=False, lines=11).style(text_align="center") - with gr.Row(): - with gr.Column(scale=0.4): - for i in range(max_ideas): - t = gr.Button("", visible=False) - textboxes.append(t) - with gr.Column(scale=0.6): - thought = gr.Textbox(label="thought", visible=False, interactive=False) - action = gr.Textbox(label="action", visible=False, interactive=False) - original = gr.Textbox(label="original", visible=False, max_lines=5, lines=5, interactive=False) - improved = gr.Textbox(label="improved", visible=False, max_lines=5, lines=5, interactive=False) - diff = gr.HighlightedText( - label="Diff", - combine_adjacent=True, - show_legend=True, - visible=False, - max_lines=5, - lines=5, - interactive=False - ).style(color_map={"+": "green", "-": "red"}) - - btn.click(handler.generate, inputs=[txt_in, key], outputs=[sug, btn, thought, action, original, improved] + textboxes) - for i in textboxes: - i.click(select, inputs=[i], outputs=[thought, action, original, improved, diff]) - - with gr.Row(): - with gr.Tab("Issue"): - gr.Textbox(show_label=False, value="https://github.com/j40903272/PaperGPT/issues", interactive=False) - with gr.Tab("Author"): - gr.JSON(show_label=False, value={'author': 'YDTsai', 'email': 'bb04902103@gmail.com', 'source': 
'https://github.com/j40903272/PaperGPT'}) - - # demo.launch(server_name="0.0.0.0", server_port=7653, share=True, enable_queue=True) - demo.launch(enable_queue=True) diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/model/syncbn/modules/__init__.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/model/syncbn/modules/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Makiing/coolb-in-gtest/src/components/ui/sheet.tsx b/spaces/Makiing/coolb-in-gtest/src/components/ui/sheet.tsx deleted file mode 100644 index c9f5ce0f81a91067bb013e988a07eb1e6bf6953b..0000000000000000000000000000000000000000 --- a/spaces/Makiing/coolb-in-gtest/src/components/ui/sheet.tsx +++ /dev/null @@ -1,122 +0,0 @@ -'use client' - -import * as React from 'react' -import * as SheetPrimitive from '@radix-ui/react-dialog' - -import { cn } from '@/lib/utils' -import { IconClose } from '@/components/ui/icons' - -const Sheet = SheetPrimitive.Root - -const SheetTrigger = SheetPrimitive.Trigger - -const SheetClose = SheetPrimitive.Close - -const SheetPortal = ({ - className, - children, - ...props -}: SheetPrimitive.DialogPortalProps) => ( - - {children} - -) -SheetPortal.displayName = SheetPrimitive.Portal.displayName - -const SheetOverlay = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - -)) -SheetOverlay.displayName = SheetPrimitive.Overlay.displayName - -const SheetContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - - {children} - - - Close - - - -)) -SheetContent.displayName = SheetPrimitive.Content.displayName - -const SheetHeader = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
-) -SheetHeader.displayName = 'SheetHeader' - -const SheetFooter = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
-) -SheetFooter.displayName = 'SheetFooter' - -const SheetTitle = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SheetTitle.displayName = SheetPrimitive.Title.displayName - -const SheetDescription = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SheetDescription.displayName = SheetPrimitive.Description.displayName - -export { - Sheet, - SheetTrigger, - SheetClose, - SheetContent, - SheetHeader, - SheetFooter, - SheetTitle, - SheetDescription -} diff --git a/spaces/Malmika/Physics-AI/README.md b/spaces/Malmika/Physics-AI/README.md deleted file mode 100644 index 99add31a38f261f1cb1342e1b4694c0cf38225f0..0000000000000000000000000000000000000000 --- a/spaces/Malmika/Physics-AI/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Physics AI -emoji: 🐠 -colorFrom: purple -colorTo: purple -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/MashiroSA/sovits-emu-voice-transform/onnx_export.py b/spaces/MashiroSA/sovits-emu-voice-transform/onnx_export.py deleted file mode 100644 index a70a912cc1b6dd908ff6496bbc6fa8dd576e233b..0000000000000000000000000000000000000000 --- a/spaces/MashiroSA/sovits-emu-voice-transform/onnx_export.py +++ /dev/null @@ -1,54 +0,0 @@ -import torch -from onnxexport.model_onnx import SynthesizerTrn -import utils - -def main(NetExport): - path = "SoVits4.0" - if NetExport: - device = torch.device("cpu") - hps = utils.get_hparams_from_file(f"checkpoints/{path}/config.json") - SVCVITS = SynthesizerTrn( - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - **hps.model) - _ = utils.load_checkpoint(f"checkpoints/{path}/model.pth", SVCVITS, None) - _ = SVCVITS.eval().to(device) - for i in SVCVITS.parameters(): - i.requires_grad = False - - n_frame = 10 - test_hidden_unit = torch.rand(1, n_frame, 256) - test_pitch = torch.rand(1, n_frame) - test_mel2ph = torch.arange(0, n_frame, dtype=torch.int64)[None] # torch.LongTensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]).unsqueeze(0) - test_uv = torch.ones(1, n_frame, dtype=torch.float32) - test_noise = torch.randn(1, 192, n_frame) - test_sid = torch.LongTensor([0]) - input_names = ["c", "f0", "mel2ph", "uv", "noise", "sid"] - output_names = ["audio", ] - - torch.onnx.export(SVCVITS, - ( - test_hidden_unit.to(device), - test_pitch.to(device), - test_mel2ph.to(device), - test_uv.to(device), - test_noise.to(device), - test_sid.to(device) - ), - f"checkpoints/{path}/model.onnx", - dynamic_axes={ - "c": [0, 1], - "f0": [1], - "mel2ph": [1], - "uv": [1], - "noise": [2], - }, - do_constant_folding=False, - opset_version=16, - verbose=False, - input_names=input_names, - output_names=output_names) - - -if __name__ == '__main__': - main(True) diff --git a/spaces/MetaDans/AIBOT/README.md b/spaces/MetaDans/AIBOT/README.md deleted file mode 100644 index 460da30608dfa47e40d3e3d221cf651bdf5b9ae6..0000000000000000000000000000000000000000 --- a/spaces/MetaDans/AIBOT/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: AIBOT -emoji: 🦀 -colorFrom: blue -colorTo: red -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Mountchicken/MAERec-Gradio/configs/textrecog/aster/_base_aster.py b/spaces/Mountchicken/MAERec-Gradio/configs/textrecog/aster/_base_aster.py deleted 
file mode 100644 index 5f011522ca9858484d1633e67fc14c4f91fdaf9f..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/configs/textrecog/aster/_base_aster.py +++ /dev/null @@ -1,104 +0,0 @@ -dictionary = dict( - type='Dictionary', - dict_file='{{ fileDirname }}/../../../dicts/english_digits_symbols.txt', - with_padding=True, - with_unknown=True, - same_start_end=True, - with_start=True, - with_end=True) - -model = dict( - type='ASTER', - preprocessor=dict( - type='STN', - in_channels=3, - resized_image_size=(32, 64), - output_image_size=(32, 100), - num_control_points=20), - backbone=dict( - type='ResNet', - in_channels=3, - stem_channels=[32], - block_cfgs=dict(type='BasicBlock', use_conv1x1='True'), - arch_layers=[3, 4, 6, 6, 3], - arch_channels=[32, 64, 128, 256, 512], - strides=[(2, 2), (2, 2), (2, 1), (2, 1), (2, 1)], - init_cfg=[ - dict(type='Kaiming', layer='Conv2d'), - dict(type='Constant', val=1, layer='BatchNorm2d'), - ]), - encoder=dict(type='ASTEREncoder', in_channels=512), - decoder=dict( - type='ASTERDecoder', - max_seq_len=25, - in_channels=512, - emb_dims=512, - attn_dims=512, - hidden_size=512, - postprocessor=dict(type='AttentionPostprocessor'), - module_loss=dict( - type='CEModuleLoss', flatten=True, ignore_first_char=True), - dictionary=dictionary, - ), - data_preprocessor=dict( - type='TextRecogDataPreprocessor', - mean=[127.5, 127.5, 127.5], - std=[127.5, 127.5, 127.5])) - -train_pipeline = [ - dict(type='LoadImageFromFile', ignore_empty=True, min_size=0), - dict(type='LoadOCRAnnotations', with_text=True), - dict(type='Resize', scale=(256, 64)), - dict( - type='PackTextRecogInputs', - meta_keys=('img_path', 'ori_shape', 'img_shape', 'valid_ratio')) -] - -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='Resize', scale=(256, 64)), - dict(type='LoadOCRAnnotations', with_text=True), - dict( - type='PackTextRecogInputs', - meta_keys=('img_path', 'ori_shape', 'img_shape', 'valid_ratio', - 'instances')) -] - -tta_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='TestTimeAug', - transforms=[[ - dict( - type='ConditionApply', - true_transforms=[ - dict( - type='ImgAugWrapper', - args=[dict(cls='Rot90', k=0, keep_size=False)]) - ], - condition="results['img_shape'][1] 1: - images = mmengine.track_parallel_progress( - load_img_info, files, nproc=nproc) - else: - images = mmengine.track_progress(load_img_info, files) - - return images - - -def load_img_info(files): - """Load the information of one image. - - Args: - files (tuple): The tuple of (img_file, groundtruth_file) - - Returns: - img_info (dict): The dict of the img and annotation information - """ - assert isinstance(files, tuple) - - img_file, gt_file = files - assert osp.basename(gt_file).split('.')[0] == osp.basename(img_file).split( - '.')[0] - # read imgs while ignoring orientations - img = mmcv.imread(img_file, 'unchanged') - - try: - img_info = dict( - file_name=osp.join(osp.basename(img_file)), - height=img.shape[0], - width=img.shape[1], - segm_file=osp.join(osp.basename(gt_file))) - except AttributeError: - print(f'Skip broken img {img_file}') - return None - - if osp.splitext(gt_file)[1] == '.xml': - img_info = load_xml_info(gt_file, img_info) - else: - raise NotImplementedError - - return img_info - - -def load_xml_info(gt_file, img_info): - """Collect the annotation information. - - The annotation format is as the following: - - ... 
- - SMT - Unspecified - 0 - 0 - - 157 - 294 - 237 - 357 - - - - Args: - gt_file (str): The path to ground-truth - img_info (dict): The dict of the img and annotation information - - Returns: - img_info (dict): The dict of the img and annotation information - """ - obj = ET.parse(gt_file) - root = obj.getroot() - anno_info = [] - for object in root.iter('object'): - word = object.find('name').text - x1 = int(object.find('bndbox').find('xmin').text) - y1 = int(object.find('bndbox').find('ymin').text) - x2 = int(object.find('bndbox').find('xmax').text) - y2 = int(object.find('bndbox').find('ymax').text) - - x = max(0, min(x1, x2)) - y = max(0, min(y1, y2)) - w, h = abs(x2 - x1), abs(y2 - y1) - bbox = [x, y, x + w, y, x + w, y + h, x, y + h] - anno = dict(bbox=bbox, word=word) - anno_info.append(anno) - - img_info.update(anno_info=anno_info) - - return img_info - - -def split_train_val_list(full_list, val_ratio): - """Split list by val_ratio. - - Args: - full_list (list): List to be splited - val_ratio (float): Split ratio for val set - - return: - list(list, list): Train_list and val_list - """ - n_total = len(full_list) - offset = int(n_total * val_ratio) - if n_total == 0 or offset < 1: - return [], full_list - val_list = full_list[:offset] - train_list = full_list[offset:] - return [train_list, val_list] - - -def generate_ann(root_path, image_infos, preserve_vertical, val_ratio): - """Generate cropped annotations and label txt file. - - Args: - root_path (str): The root path of the dataset - split (str): The split of dataset. Namely: training or test - image_infos (list[dict]): A list of dicts of the img and - annotation information - preserve_vertical (bool): Whether to preserve vertical texts - val_ratio (float): Split ratio for val set - """ - - assert val_ratio <= 1. 
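    # Editor's note (illustrative sketch, not part of the original file): with
    # val_ratio=0.2 and 10 annotated images, split_train_val_list above puts the
    # first int(10 * 0.2) = 2 images into the val split and the remaining 8 into
    # training; a val_ratio of 0 keeps every image in the 'training' split.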
- - if val_ratio: - image_infos = split_train_val_list(image_infos, val_ratio) - splits = ['training', 'val'] - - else: - image_infos = [image_infos] - splits = ['training'] - - for i, split in enumerate(splits): - dst_image_root = osp.join(root_path, 'crops', split) - ignore_image_root = osp.join(root_path, 'ignores', split) - dst_label_file = osp.join(root_path, f'{split}_label.json') - os.makedirs(dst_image_root, exist_ok=True) - - img_info = [] - for image_info in image_infos[i]: - index = 1 - src_img_path = osp.join(root_path, 'imgs', image_info['file_name']) - image = mmcv.imread(src_img_path) - src_img_root = image_info['file_name'].split('.')[0] - - for anno in image_info['anno_info']: - word = anno['word'] - dst_img = crop_img(image, anno['bbox'], 0, 0) - h, w, _ = dst_img.shape - - dst_img_name = f'{src_img_root}_{index}.png' - index += 1 - # Skip invalid annotations - if min(dst_img.shape) == 0: - continue - # Skip vertical texts - if not preserve_vertical and h / w > 2 and split == 'training': - dst_img_path = osp.join(ignore_image_root, dst_img_name) - mmcv.imwrite(dst_img, dst_img_path) - continue - - dst_img_path = osp.join(dst_image_root, dst_img_name) - mmcv.imwrite(dst_img, dst_img_path) - img_info.append({ - 'file_name': dst_img_name, - 'anno_info': [{ - 'text': word - }] - }) - - ensure_ascii = dict(ensure_ascii=False) - dump_ocr_data(img_info, dst_label_file, 'textrecog', **ensure_ascii) - - -def parse_args(): - parser = argparse.ArgumentParser( - description='Generate training and val set of ILST ') - parser.add_argument('root_path', help='Root dir path of ILST') - parser.add_argument( - '--preserve-vertical', - help='Preserve samples containing vertical texts', - action='store_true') - parser.add_argument( - '--val-ratio', help='Split ratio for val set', default=0., type=float) - parser.add_argument( - '--nproc', default=1, type=int, help='Number of processes') - args = parser.parse_args(['data/IIIT-ILST']) - return args - - -def main(): - args = parse_args() - root_path = args.root_path - with mmengine.Timer(print_tmpl='It takes {}s to convert ILST annotation'): - files = collect_files( - osp.join(root_path, 'imgs'), osp.join(root_path, 'annotations')) - image_infos = collect_annotations(files, nproc=args.nproc) - # filter broken images - image_infos = list(filter(None, image_infos)) - generate_ann(root_path, image_infos, args.preserve_vertical, - args.val_ratio) - - -if __name__ == '__main__': - main() diff --git a/spaces/MuGeminorum/insecta/khandy/image/translate.py b/spaces/MuGeminorum/insecta/khandy/image/translate.py deleted file mode 100644 index 1e05ce066bdd01dad40246bf53d64af21db0fa15..0000000000000000000000000000000000000000 --- a/spaces/MuGeminorum/insecta/khandy/image/translate.py +++ /dev/null @@ -1,57 +0,0 @@ -import numbers - -import khandy - - -def translate_image(image, x_shift, y_shift, border_value=0): - """Translate an image. - - Args: - image (ndarray): Image to be translated with format (h, w) or (h, w, c). - x_shift (int): The offset used for translate in horizontal - direction. right is the positive direction. - y_shift (int): The offset used for translate in vertical - direction. down is the positive direction. - border_value (int | tuple[int]): Value used in case of a - constant border. - - Returns: - ndarray: The translated image. 
- - See Also: - crop_or_pad - """ - assert khandy.is_numpy_image(image) - assert isinstance(x_shift, numbers.Integral) - assert isinstance(y_shift, numbers.Integral) - image_height, image_width = image.shape[:2] - channels = 1 if image.ndim == 2 else image.shape[2] - - if isinstance(border_value, (tuple, list)): - assert len(border_value) == channels, \ - 'Expected the num of elements in tuple equals the channels ' \ - 'of input image. Found {} vs {}'.format( - len(border_value), channels) - else: - border_value = (border_value,) * channels - dst_image = khandy.create_solid_color_image( - image_height, image_width, border_value, dtype=image.dtype) - - if (abs(x_shift) >= image_width) or (abs(y_shift) >= image_height): - return dst_image - - src_x_begin = max(-x_shift, 0) - src_x_end = min(image_width - x_shift, image_width) - dst_x_begin = max(x_shift, 0) - dst_x_end = min(image_width + x_shift, image_width) - - src_y_begin = max(-y_shift, 0) - src_y_end = min(image_height - y_shift, image_height) - dst_y_begin = max(y_shift, 0) - dst_y_end = min(image_height + y_shift, image_height) - - dst_image[dst_y_begin:dst_y_end, dst_x_begin:dst_x_end] = \ - image[src_y_begin:src_y_end, src_x_begin:src_x_end] - return dst_image - - \ No newline at end of file diff --git a/spaces/Mysterykey/test/greeting.md b/spaces/Mysterykey/test/greeting.md deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/NCTCMumbai/NCTC/models/official/recommendation/neumf_model.py b/spaces/NCTCMumbai/NCTC/models/official/recommendation/neumf_model.py deleted file mode 100644 index 48b09293af065a19db2dbfb1d44023439c2b9765..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/recommendation/neumf_model.py +++ /dev/null @@ -1,431 +0,0 @@ -# Copyright 2018 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== -"""Defines NeuMF model for NCF framework. - -Some abbreviations used in the code base: -NeuMF: Neural Matrix Factorization -NCF: Neural Collaborative Filtering -GMF: Generalized Matrix Factorization -MLP: Multi-Layer Perceptron - -GMF applies a linear kernel to model the latent feature interactions, and MLP -uses a nonlinear kernel to learn the interaction function from data. NeuMF model -is a fused model of GMF and MLP to better model the complex user-item -interactions, and unifies the strengths of linearity of MF and non-linearity of -MLP for modeling the user-item latent structures. - -In NeuMF model, it allows GMF and MLP to learn separate embeddings, and combine -the two models by concatenating their last hidden layer. 
-""" -from __future__ import absolute_import -from __future__ import division -# from __future__ import google_type_annotations -from __future__ import print_function - -import sys - -from six.moves import xrange # pylint: disable=redefined-builtin -import tensorflow as tf -from typing import Any, Dict, Text - -from official.recommendation import constants as rconst -from official.recommendation import movielens -from official.recommendation import ncf_common -from official.recommendation import stat_utils - - -def sparse_to_dense_grads(grads_and_vars): - """Convert sparse gradients to dense gradients. - - All sparse gradients, which are represented as instances of tf.IndexedSlices, - are converted to dense Tensors. Dense gradients, which are represents as - Tensors, are unchanged. - - The purpose of this conversion is that for small embeddings, which are used by - this model, applying dense gradients with the AdamOptimizer is faster than - applying sparse gradients. - - Args - grads_and_vars: A list of (gradient, variable) tuples. Each gradient can - be a Tensor or an IndexedSlices. Tensors are unchanged, and IndexedSlices - are converted to dense Tensors. - Returns: - The same list of (gradient, variable) as `grads_and_vars`, except each - IndexedSlices gradient is converted to a Tensor. - """ - - # Calling convert_to_tensor changes IndexedSlices into Tensors, and leaves - # Tensors unchanged. - return [(tf.convert_to_tensor(g), v) for g, v in grads_and_vars] - - -def neumf_model_fn(features, labels, mode, params): - """Model Function for NeuMF estimator.""" - if params.get("use_seed"): - tf.set_random_seed(stat_utils.random_int32()) - - users = features[movielens.USER_COLUMN] - items = features[movielens.ITEM_COLUMN] - - user_input = tf.keras.layers.Input(tensor=users) - item_input = tf.keras.layers.Input(tensor=items) - logits = construct_model(user_input, item_input, params).output - - # Softmax with the first column of zeros is equivalent to sigmoid. 
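  # Editor's note (illustrative sketch, not part of the original file): the
  # equivalence holds because softmax([0, x]) = [1 / (1 + e^x), e^x / (1 + e^x)],
  # and e^x / (1 + e^x) = 1 / (1 + e^-x) = sigmoid(x). Left-appending a zeros
  # column therefore lets sparse_softmax_cross_entropy below reproduce the
  # binary sigmoid cross-entropy on the original logits.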
- softmax_logits = ncf_common.convert_to_softmax_logits(logits) - - if mode == tf.estimator.ModeKeys.EVAL: - duplicate_mask = tf.cast(features[rconst.DUPLICATE_MASK], tf.float32) - return _get_estimator_spec_with_metrics( - logits, - softmax_logits, - duplicate_mask, - params["num_neg"], - params["match_mlperf"], - use_tpu_spec=params["use_tpu"]) - - elif mode == tf.estimator.ModeKeys.TRAIN: - labels = tf.cast(labels, tf.int32) - valid_pt_mask = features[rconst.VALID_POINT_MASK] - - optimizer = tf.compat.v1.train.AdamOptimizer( - learning_rate=params["learning_rate"], - beta1=params["beta1"], - beta2=params["beta2"], - epsilon=params["epsilon"]) - if params["use_tpu"]: - optimizer = tf.compat.v1.tpu.CrossShardOptimizer(optimizer) - - loss = tf.compat.v1.losses.sparse_softmax_cross_entropy( - labels=labels, - logits=softmax_logits, - weights=tf.cast(valid_pt_mask, tf.float32) - ) - - tf.identity(loss, name="cross_entropy") - - global_step = tf.compat.v1.train.get_global_step() - tvars = tf.compat.v1.trainable_variables() - gradients = optimizer.compute_gradients( - loss, tvars, colocate_gradients_with_ops=True) - gradients = sparse_to_dense_grads(gradients) - minimize_op = optimizer.apply_gradients( - gradients, global_step=global_step, name="train") - update_ops = tf.compat.v1.get_collection(tf.compat.v1.GraphKeys.UPDATE_OPS) - train_op = tf.group(minimize_op, update_ops) - - return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op) - - else: - raise NotImplementedError - - -def _strip_first_and_last_dimension(x, batch_size): - return tf.reshape(x[0, :], (batch_size,)) - - -def construct_model(user_input: tf.Tensor, item_input: tf.Tensor, - params: Dict[Text, Any]) -> tf.keras.Model: - """Initialize NeuMF model. - - Args: - user_input: keras input layer for users - item_input: keras input layer for items - params: Dict of hyperparameters. - - Raises: - ValueError: if the first model layer is not even. - Returns: - model: a keras Model for computing the logits - """ - num_users = params["num_users"] - num_items = params["num_items"] - - model_layers = params["model_layers"] - - mf_regularization = params["mf_regularization"] - mlp_reg_layers = params["mlp_reg_layers"] - - mf_dim = params["mf_dim"] - - if model_layers[0] % 2 != 0: - raise ValueError("The first layer size should be multiple of 2!") - - # Initializer for embedding layers - embedding_initializer = "glorot_uniform" - - def mf_slice_fn(x): - x = tf.squeeze(x, [1]) - return x[:, :mf_dim] - - def mlp_slice_fn(x): - x = tf.squeeze(x, [1]) - return x[:, mf_dim:] - - # It turns out to be significantly more effecient to store the MF and MLP - # embedding portions in the same table, and then slice as needed. 
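  # Editor's note (illustrative sketch, not part of the original file): with,
  # say, mf_dim=16 and model_layers[0]=64, each row of the shared table holds
  # 16 + 32 = 48 values; mf_slice_fn above reads columns [0, 16) for the GMF
  # branch and mlp_slice_fn reads columns [16, 48) for the MLP branch, so a
  # single embedding lookup per user/item feeds both branches.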
- embedding_user = tf.keras.layers.Embedding( - num_users, - mf_dim + model_layers[0] // 2, - embeddings_initializer=embedding_initializer, - embeddings_regularizer=tf.keras.regularizers.l2(mf_regularization), - input_length=1, - name="embedding_user")( - user_input) - - embedding_item = tf.keras.layers.Embedding( - num_items, - mf_dim + model_layers[0] // 2, - embeddings_initializer=embedding_initializer, - embeddings_regularizer=tf.keras.regularizers.l2(mf_regularization), - input_length=1, - name="embedding_item")( - item_input) - - # GMF part - mf_user_latent = tf.keras.layers.Lambda( - mf_slice_fn, name="embedding_user_mf")(embedding_user) - mf_item_latent = tf.keras.layers.Lambda( - mf_slice_fn, name="embedding_item_mf")(embedding_item) - - # MLP part - mlp_user_latent = tf.keras.layers.Lambda( - mlp_slice_fn, name="embedding_user_mlp")(embedding_user) - mlp_item_latent = tf.keras.layers.Lambda( - mlp_slice_fn, name="embedding_item_mlp")(embedding_item) - - # Element-wise multiply - mf_vector = tf.keras.layers.multiply([mf_user_latent, mf_item_latent]) - - # Concatenation of two latent features - mlp_vector = tf.keras.layers.concatenate([mlp_user_latent, mlp_item_latent]) - - num_layer = len(model_layers) # Number of layers in the MLP - for layer in xrange(1, num_layer): - model_layer = tf.keras.layers.Dense( - model_layers[layer], - kernel_regularizer=tf.keras.regularizers.l2(mlp_reg_layers[layer]), - activation="relu") - mlp_vector = model_layer(mlp_vector) - - # Concatenate GMF and MLP parts - predict_vector = tf.keras.layers.concatenate([mf_vector, mlp_vector]) - - # Final prediction layer - logits = tf.keras.layers.Dense( - 1, activation=None, kernel_initializer="lecun_uniform", - name=movielens.RATING_COLUMN)(predict_vector) - - # Print model topology. - model = tf.keras.models.Model([user_input, item_input], logits) - model.summary() - sys.stdout.flush() - - return model - - -def _get_estimator_spec_with_metrics(logits: tf.Tensor, - softmax_logits: tf.Tensor, - duplicate_mask: tf.Tensor, - num_training_neg: int, - match_mlperf: bool = False, - use_tpu_spec: bool = False): - """Returns a EstimatorSpec that includes the metrics.""" - cross_entropy, \ - metric_fn, \ - in_top_k, \ - ndcg, \ - metric_weights = compute_eval_loss_and_metrics_helper( - logits, - softmax_logits, - duplicate_mask, - num_training_neg, - match_mlperf) - - if use_tpu_spec: - return tf.estimator.tpu.TPUEstimatorSpec( - mode=tf.estimator.ModeKeys.EVAL, - loss=cross_entropy, - eval_metrics=(metric_fn, [in_top_k, ndcg, metric_weights])) - - return tf.estimator.EstimatorSpec( - mode=tf.estimator.ModeKeys.EVAL, - loss=cross_entropy, - eval_metric_ops=metric_fn(in_top_k, ndcg, metric_weights) - ) - - -def compute_eval_loss_and_metrics_helper(logits: tf.Tensor, - softmax_logits: tf.Tensor, - duplicate_mask: tf.Tensor, - num_training_neg: int, - match_mlperf: bool = False): - """Model evaluation with HR and NDCG metrics. - - The evaluation protocol is to rank the test interacted item (truth items) - among the randomly chosen 999 items that are not interacted by the user. - The performance of the ranked list is judged by Hit Ratio (HR) and Normalized - Discounted Cumulative Gain (NDCG). - - For evaluation, the ranked list is truncated at 10 for both metrics. As such, - the HR intuitively measures whether the test item is present on the top-10 - list, and the NDCG accounts for the position of the hit by assigning higher - scores to hits at top ranks. 
Both metrics are calculated for each test user, - and the average scores are reported. - - If `match_mlperf` is True, then the HR and NDCG computations are done in a - slightly unusual way to match the MLPerf reference implementation. - Specifically, if the evaluation negatives contain duplicate items, it will be - treated as if the item only appeared once. Effectively, for duplicate items in - a row, the predicted score for all but one of the items will be set to - -infinity - - For example, suppose we have that following inputs: - logits_by_user: [[ 2, 3, 3], - [ 5, 4, 4]] - - items_by_user: [[10, 20, 20], - [30, 40, 40]] - - # Note: items_by_user is not explicitly present. Instead the relevant \ - information is contained within `duplicate_mask` - - top_k: 2 - - Then with match_mlperf=True, the HR would be 2/2 = 1.0. With - match_mlperf=False, the HR would be 1/2 = 0.5. This is because each user has - predicted scores for only 2 unique items: 10 and 20 for the first user, and 30 - and 40 for the second. Therefore, with match_mlperf=True, it's guaranteed the - first item's score is in the top 2. With match_mlperf=False, this function - would compute the first user's first item is not in the top 2, because item 20 - has a higher score, and item 20 occurs twice. - - Args: - logits: A tensor containing the predicted logits for each user. The shape of - logits is (num_users_per_batch * (1 + NUM_EVAL_NEGATIVES),) Logits for a - user are grouped, and the last element of the group is the true element. - softmax_logits: The same tensor, but with zeros left-appended. - duplicate_mask: A vector with the same shape as logits, with a value of 1 if - the item corresponding to the logit at that position has already appeared - for that user. - num_training_neg: The number of negatives per positive during training. - match_mlperf: Use the MLPerf reference convention for computing rank. - - Returns: - cross_entropy: the loss - metric_fn: the metrics function - in_top_k: hit rate metric - ndcg: ndcg metric - metric_weights: metric weights - """ - in_top_k, ndcg, metric_weights, logits_by_user = compute_top_k_and_ndcg( - logits, duplicate_mask, match_mlperf) - - # Examples are provided by the eval Dataset in a structured format, so eval - # labels can be reconstructed on the fly. - eval_labels = tf.reshape(shape=(-1,), tensor=tf.one_hot( - tf.zeros(shape=(logits_by_user.shape[0],), dtype=tf.int32) + - rconst.NUM_EVAL_NEGATIVES, logits_by_user.shape[1], dtype=tf.int32)) - - eval_labels_float = tf.cast(eval_labels, tf.float32) - - # During evaluation, the ratio of negatives to positives is much higher - # than during training. (Typically 999 to 1 vs. 4 to 1) By adjusting the - # weights for the negative examples we compute a loss which is consistent with - # the training data. 
(And provides apples-to-apples comparison) - negative_scale_factor = num_training_neg / rconst.NUM_EVAL_NEGATIVES - example_weights = ( - (eval_labels_float + (1 - eval_labels_float) * negative_scale_factor) * - (1 + rconst.NUM_EVAL_NEGATIVES) / (1 + num_training_neg)) - - # Tile metric weights back to logit dimensions - expanded_metric_weights = tf.reshape(tf.tile( - metric_weights[:, tf.newaxis], (1, rconst.NUM_EVAL_NEGATIVES + 1)), (-1,)) - - # ignore padded examples - example_weights *= tf.cast(expanded_metric_weights, tf.float32) - - cross_entropy = tf.compat.v1.losses.sparse_softmax_cross_entropy( - logits=softmax_logits, labels=eval_labels, weights=example_weights) - - def metric_fn(top_k_tensor, ndcg_tensor, weight_tensor): - return { - rconst.HR_KEY: tf.compat.v1.metrics.mean(top_k_tensor, - weights=weight_tensor, - name=rconst.HR_METRIC_NAME), - rconst.NDCG_KEY: tf.compat.v1.metrics.mean(ndcg_tensor, - weights=weight_tensor, - name=rconst.NDCG_METRIC_NAME) - } - - return cross_entropy, metric_fn, in_top_k, ndcg, metric_weights - - -def compute_top_k_and_ndcg(logits: tf.Tensor, - duplicate_mask: tf.Tensor, - match_mlperf: bool = False): - """Compute inputs of metric calculation. - - Args: - logits: A tensor containing the predicted logits for each user. The shape of - logits is (num_users_per_batch * (1 + NUM_EVAL_NEGATIVES),) Logits for a - user are grouped, and the first element of the group is the true element. - duplicate_mask: A vector with the same shape as logits, with a value of 1 if - the item corresponding to the logit at that position has already appeared - for that user. - match_mlperf: Use the MLPerf reference convention for computing rank. - - Returns: - is_top_k, ndcg and weights, all of which has size (num_users_in_batch,), and - logits_by_user which has size - (num_users_in_batch, (rconst.NUM_EVAL_NEGATIVES + 1)). - """ - logits_by_user = tf.reshape(logits, (-1, rconst.NUM_EVAL_NEGATIVES + 1)) - duplicate_mask_by_user = tf.cast( - tf.reshape(duplicate_mask, (-1, rconst.NUM_EVAL_NEGATIVES + 1)), - logits_by_user.dtype) - - if match_mlperf: - # Set duplicate logits to the min value for that dtype. The MLPerf - # reference dedupes during evaluation. - logits_by_user *= (1 - duplicate_mask_by_user) - logits_by_user += duplicate_mask_by_user * logits_by_user.dtype.min - - # Determine the location of the first element in each row after the elements - # are sorted. - sort_indices = tf.argsort( - logits_by_user, axis=1, direction="DESCENDING") - - # Use matrix multiplication to extract the position of the true item from the - # tensor of sorted indices. This approach is chosen because both GPUs and TPUs - # perform matrix multiplications very quickly. This is similar to np.argwhere. - # However this is a special case because the target will only appear in - # sort_indices once. - one_hot_position = tf.cast(tf.equal(sort_indices, rconst.NUM_EVAL_NEGATIVES), - tf.int32) - sparse_positions = tf.multiply( - one_hot_position, tf.range(logits_by_user.shape[1])[tf.newaxis, :]) - position_vector = tf.reduce_sum(sparse_positions, axis=1) - - in_top_k = tf.cast(tf.less(position_vector, rconst.TOP_K), tf.float32) - ndcg = tf.math.log(2.) / tf.math.log( - tf.cast(position_vector, tf.float32) + 2) - ndcg *= in_top_k - - # If a row is a padded row, all but the first element will be a duplicate. 
- metric_weights = tf.not_equal(tf.reduce_sum(duplicate_mask_by_user, axis=1), - rconst.NUM_EVAL_NEGATIVES) - - return in_top_k, ndcg, metric_weights, logits_by_user diff --git a/spaces/NCTCMumbai/NCTC/models/research/brain_coder/common/config_lib_test.py b/spaces/NCTCMumbai/NCTC/models/research/brain_coder/common/config_lib_test.py deleted file mode 100644 index cdc96f92d2428f06e780930979662fdfda92e3f5..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/research/brain_coder/common/config_lib_test.py +++ /dev/null @@ -1,425 +0,0 @@ -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -"""Tests for common.config_lib.""" - -import tensorflow as tf - -from common import config_lib # brain coder - - -class ConfigLibTest(tf.test.TestCase): - - def testConfig(self): - config = config_lib.Config(hello='world', foo='bar', num=123, f=56.7) - self.assertEqual('world', config.hello) - self.assertEqual('bar', config['foo']) - config.hello = 'everyone' - config['bar'] = 9000 - self.assertEqual('everyone', config['hello']) - self.assertEqual(9000, config.bar) - self.assertEqual(5, len(config)) - - def testConfigUpdate(self): - config = config_lib.Config(a=1, b=2, c=3) - config.update({'b': 10, 'd': 4}) - self.assertEqual({'a': 1, 'b': 10, 'c': 3, 'd': 4}, config) - - config = config_lib.Config(a=1, b=2, c=3) - config.update(b=10, d=4) - self.assertEqual({'a': 1, 'b': 10, 'c': 3, 'd': 4}, config) - - config = config_lib.Config(a=1, b=2, c=3) - config.update({'e': 5}, b=10, d=4) - self.assertEqual({'a': 1, 'b': 10, 'c': 3, 'd': 4, 'e': 5}, config) - - config = config_lib.Config( - a=1, - b=2, - x=config_lib.Config( - l='a', - y=config_lib.Config(m=1, n=2), - z=config_lib.Config( - q=config_lib.Config(a=10, b=20), - r=config_lib.Config(s=1, t=2)))) - config.update(x={'y': {'m': 10}, 'z': {'r': {'s': 5}}}) - self.assertEqual( - config_lib.Config( - a=1, b=2, - x=config_lib.Config( - l='a', - y=config_lib.Config(m=10, n=2), - z=config_lib.Config( - q=config_lib.Config(a=10, b=20), - r=config_lib.Config(s=5, t=2)))), - config) - - config = config_lib.Config( - foo='bar', - num=100, - x=config_lib.Config(a=1, b=2, c=config_lib.Config(h=10, i=20, j=30)), - y=config_lib.Config(qrs=5, tuv=10), - d={'a': 1, 'b': 2}, - l=[1, 2, 3]) - config.update( - config_lib.Config( - foo='hat', - num=50.5, - x={'a': 5, 'z': -10}, - y=config_lib.Config(wxyz=-1)), - d={'a': 10, 'c': 20}, - l=[3, 4, 5, 6]) - self.assertEqual( - config_lib.Config( - foo='hat', - num=50.5, - x=config_lib.Config(a=5, b=2, z=-10, - c=config_lib.Config(h=10, i=20, j=30)), - y=config_lib.Config(qrs=5, tuv=10, wxyz=-1), - d={'a': 10, 'c': 20}, - l=[3, 4, 5, 6]), - config) - self.assertTrue(isinstance(config.x, config_lib.Config)) - self.assertTrue(isinstance(config.x.c, config_lib.Config)) - self.assertTrue(isinstance(config.y, config_lib.Config)) - - config = config_lib.Config( - foo='bar', - num=100, - x=config_lib.Config(a=1, b=2, c=config_lib.Config(h=10, i=20, j=30)), - y=config_lib.Config(qrs=5, tuv=10), - d={'a': 1, 'b': 2}, - l=[1, 2, 3]) - config.update( - config_lib.Config( - foo=1234, - num='hello', - x={'a': 5, 'z': -10, 'c': {'h': -5, 'k': 40}}, - y=[1, 2, 3, 4], - d='stuff', - l={'a': 1, 'b': 2})) - self.assertEqual( - config_lib.Config( - foo=1234, - num='hello', - x=config_lib.Config(a=5, b=2, z=-10, - c=config_lib.Config(h=-5, i=20, j=30, k=40)), - y=[1, 2, 3, 4], - d='stuff', - l={'a': 1, 'b': 2}), - config) - self.assertTrue(isinstance(config.x, 
config_lib.Config)) - self.assertTrue(isinstance(config.x.c, config_lib.Config)) - self.assertTrue(isinstance(config.y, list)) - - def testConfigStrictUpdate(self): - config = config_lib.Config(a=1, b=2, c=3) - config.strict_update({'b': 10, 'c': 20}) - self.assertEqual({'a': 1, 'b': 10, 'c': 20}, config) - - config = config_lib.Config(a=1, b=2, c=3) - config.strict_update(b=10, c=20) - self.assertEqual({'a': 1, 'b': 10, 'c': 20}, config) - - config = config_lib.Config(a=1, b=2, c=3, d=4) - config.strict_update({'d': 100}, b=10, a=20) - self.assertEqual({'a': 20, 'b': 10, 'c': 3, 'd': 100}, config) - - config = config_lib.Config( - a=1, - b=2, - x=config_lib.Config( - l='a', - y=config_lib.Config(m=1, n=2), - z=config_lib.Config( - q=config_lib.Config(a=10, b=20), - r=config_lib.Config(s=1, t=2)))) - config.strict_update(x={'y': {'m': 10}, 'z': {'r': {'s': 5}}}) - self.assertEqual( - config_lib.Config( - a=1, b=2, - x=config_lib.Config( - l='a', - y=config_lib.Config(m=10, n=2), - z=config_lib.Config( - q=config_lib.Config(a=10, b=20), - r=config_lib.Config(s=5, t=2)))), - config) - - config = config_lib.Config( - foo='bar', - num=100, - x=config_lib.Config(a=1, b=2, c=config_lib.Config(h=10, i=20, j=30)), - y=config_lib.Config(qrs=5, tuv=10), - d={'a': 1, 'b': 2}, - l=[1, 2, 3]) - config.strict_update( - config_lib.Config( - foo='hat', - num=50, - x={'a': 5, 'c': {'h': 100}}, - y=config_lib.Config(tuv=-1)), - d={'a': 10, 'c': 20}, - l=[3, 4, 5, 6]) - self.assertEqual( - config_lib.Config( - foo='hat', - num=50, - x=config_lib.Config(a=5, b=2, - c=config_lib.Config(h=100, i=20, j=30)), - y=config_lib.Config(qrs=5, tuv=-1), - d={'a': 10, 'c': 20}, - l=[3, 4, 5, 6]), - config) - - def testConfigStrictUpdateFail(self): - config = config_lib.Config(a=1, b=2, c=3, x=config_lib.Config(a=1, b=2)) - with self.assertRaises(KeyError): - config.strict_update({'b': 10, 'c': 20, 'd': 50}) - with self.assertRaises(KeyError): - config.strict_update(b=10, d=50) - with self.assertRaises(KeyError): - config.strict_update(x={'c': 3}) - with self.assertRaises(TypeError): - config.strict_update(a='string') - with self.assertRaises(TypeError): - config.strict_update(x={'a': 'string'}) - with self.assertRaises(TypeError): - config.strict_update(x=[1, 2, 3]) - - def testConfigFromStr(self): - config = config_lib.Config.from_str("{'c': {'d': 5}, 'b': 2, 'a': 1}") - self.assertEqual( - {'c': {'d': 5}, 'b': 2, 'a': 1}, config) - self.assertTrue(isinstance(config, config_lib.Config)) - self.assertTrue(isinstance(config.c, config_lib.Config)) - - def testConfigParse(self): - config = config_lib.Config.parse( - 'hello="world",num=1234.5,lst=[10,20.5,True,"hi",("a","b","c")],' - 'dct={9:10,"stuff":"qwerty","subdict":{1:True,2:False}},' - 'subconfig=c(a=1,b=[1,2,[3,4]],c=c(f="f",g="g"))') - self.assertEqual( - {'hello': 'world', 'num': 1234.5, - 'lst': [10, 20.5, True, 'hi', ('a', 'b', 'c')], - 'dct': {9: 10, 'stuff': 'qwerty', 'subdict': {1: True, 2: False}}, - 'subconfig': {'a': 1, 'b': [1, 2, [3, 4]], 'c': {'f': 'f', 'g': 'g'}}}, - config) - self.assertTrue(isinstance(config, config_lib.Config)) - self.assertTrue(isinstance(config.subconfig, config_lib.Config)) - self.assertTrue(isinstance(config.subconfig.c, config_lib.Config)) - self.assertFalse(isinstance(config.dct, config_lib.Config)) - self.assertFalse(isinstance(config.dct['subdict'], config_lib.Config)) - self.assertTrue(isinstance(config.lst[4], tuple)) - - def testConfigParseErrors(self): - with self.assertRaises(SyntaxError): - 
config_lib.Config.parse('a=[1,2,b="hello"') - with self.assertRaises(SyntaxError): - config_lib.Config.parse('a=1,b=c(x="a",y="b"') - with self.assertRaises(SyntaxError): - config_lib.Config.parse('a=1,b=c(x="a")y="b"') - with self.assertRaises(SyntaxError): - config_lib.Config.parse('a=1,b=c(x="a"),y="b",') - - def testOneOf(self): - def make_config(): - return config_lib.Config( - data=config_lib.OneOf( - [config_lib.Config(task=1, a='hello'), - config_lib.Config(task=2, a='world', b='stuff'), - config_lib.Config(task=3, c=1234)], - task=2), - model=config_lib.Config(stuff=1)) - - config = make_config() - config.update(config_lib.Config.parse( - 'model=c(stuff=2),data=c(task=1,a="hi")')) - self.assertEqual( - config_lib.Config( - data=config_lib.Config(task=1, a='hi'), - model=config_lib.Config(stuff=2)), - config) - - config = make_config() - config.update(config_lib.Config.parse( - 'model=c(stuff=2),data=c(task=2,a="hi")')) - self.assertEqual( - config_lib.Config( - data=config_lib.Config(task=2, a='hi', b='stuff'), - model=config_lib.Config(stuff=2)), - config) - - config = make_config() - config.update(config_lib.Config.parse( - 'model=c(stuff=2),data=c(task=3)')) - self.assertEqual( - config_lib.Config( - data=config_lib.Config(task=3, c=1234), - model=config_lib.Config(stuff=2)), - config) - - config = make_config() - config.update(config_lib.Config.parse( - 'model=c(stuff=2)')) - self.assertEqual( - config_lib.Config( - data=config_lib.Config(task=2, a='world', b='stuff'), - model=config_lib.Config(stuff=2)), - config) - - config = make_config() - config.update(config_lib.Config.parse( - 'model=c(stuff=2),data=c(task=4,d=9999)')) - self.assertEqual( - config_lib.Config( - data=config_lib.Config(task=4, d=9999), - model=config_lib.Config(stuff=2)), - config) - - config = make_config() - config.update(config_lib.Config.parse( - 'model=c(stuff=2),data=5')) - self.assertEqual( - config_lib.Config( - data=5, - model=config_lib.Config(stuff=2)), - config) - - def testOneOfStrict(self): - def make_config(): - return config_lib.Config( - data=config_lib.OneOf( - [config_lib.Config(task=1, a='hello'), - config_lib.Config(task=2, a='world', b='stuff'), - config_lib.Config(task=3, c=1234)], - task=2), - model=config_lib.Config(stuff=1)) - - config = make_config() - config.strict_update(config_lib.Config.parse( - 'model=c(stuff=2),data=c(task=1,a="hi")')) - self.assertEqual( - config_lib.Config( - data=config_lib.Config(task=1, a='hi'), - model=config_lib.Config(stuff=2)), - config) - - config = make_config() - config.strict_update(config_lib.Config.parse( - 'model=c(stuff=2),data=c(task=2,a="hi")')) - self.assertEqual( - config_lib.Config( - data=config_lib.Config(task=2, a='hi', b='stuff'), - model=config_lib.Config(stuff=2)), - config) - - config = make_config() - config.strict_update(config_lib.Config.parse( - 'model=c(stuff=2),data=c(task=3)')) - self.assertEqual( - config_lib.Config( - data=config_lib.Config(task=3, c=1234), - model=config_lib.Config(stuff=2)), - config) - - config = make_config() - config.strict_update(config_lib.Config.parse( - 'model=c(stuff=2)')) - self.assertEqual( - config_lib.Config( - data=config_lib.Config(task=2, a='world', b='stuff'), - model=config_lib.Config(stuff=2)), - config) - - def testNestedOneOf(self): - def make_config(): - return config_lib.Config( - data=config_lib.OneOf( - [config_lib.Config(task=1, a='hello'), - config_lib.Config( - task=2, - a=config_lib.OneOf( - [config_lib.Config(x=1, y=2), - config_lib.Config(x=-1, y=1000, z=4)], - x=1)), - 
config_lib.Config(task=3, c=1234)], - task=2), - model=config_lib.Config(stuff=1)) - - config = make_config() - config.update(config_lib.Config.parse( - 'model=c(stuff=2),data=c(task=2,a=c(x=-1,z=8))')) - self.assertEqual( - config_lib.Config( - data=config_lib.Config( - task=2, - a=config_lib.Config(x=-1, y=1000, z=8)), - model=config_lib.Config(stuff=2)), - config) - - config = make_config() - config.strict_update(config_lib.Config.parse( - 'model=c(stuff=2),data=c(task=2,a=c(x=-1,z=8))')) - self.assertEqual( - config_lib.Config( - data=config_lib.Config( - task=2, - a=config_lib.Config(x=-1, y=1000, z=8)), - model=config_lib.Config(stuff=2)), - config) - - config = make_config() - config.update(config_lib.Config.parse('model=c(stuff=2)')) - self.assertEqual( - config_lib.Config( - data=config_lib.Config( - task=2, - a=config_lib.Config(x=1, y=2)), - model=config_lib.Config(stuff=2)), - config) - - config = make_config() - config.strict_update(config_lib.Config.parse('model=c(stuff=2)')) - self.assertEqual( - config_lib.Config( - data=config_lib.Config( - task=2, - a=config_lib.Config(x=1, y=2)), - model=config_lib.Config(stuff=2)), - config) - - def testOneOfStrictErrors(self): - def make_config(): - return config_lib.Config( - data=config_lib.OneOf( - [config_lib.Config(task=1, a='hello'), - config_lib.Config(task=2, a='world', b='stuff'), - config_lib.Config(task=3, c=1234)], - task=2), - model=config_lib.Config(stuff=1)) - - config = make_config() - with self.assertRaises(TypeError): - config.strict_update(config_lib.Config.parse( - 'model=c(stuff=2),data=[1,2,3]')) - - config = make_config() - with self.assertRaises(KeyError): - config.strict_update(config_lib.Config.parse( - 'model=c(stuff=2),data=c(task=3,c=5678,d=9999)')) - - config = make_config() - with self.assertRaises(ValueError): - config.strict_update(config_lib.Config.parse( - 'model=c(stuff=2),data=c(task=4,d=9999)')) - - config = make_config() - with self.assertRaises(TypeError): - config.strict_update(config_lib.Config.parse( - 'model=c(stuff=2),data=5')) - - -if __name__ == '__main__': - tf.test.main() diff --git a/spaces/NoCrypt/mikuTTS/mygit.sh b/spaces/NoCrypt/mikuTTS/mygit.sh deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/m2m_100/tokenizers/tokenize_indic.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/m2m_100/tokenizers/tokenize_indic.py deleted file mode 100644 index a44fad07f7c718f99cccd445f33c62b0e3c562f4..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/m2m_100/tokenizers/tokenize_indic.py +++ /dev/null @@ -1,23 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -# Use: echo {text} | python tokenize_indic.py {language} - -import sys - -from indicnlp.normalize.indic_normalize import IndicNormalizerFactory -from indicnlp.tokenize.indic_tokenize import trivial_tokenize - - -factory = IndicNormalizerFactory() -normalizer = factory.get_normalizer( - sys.argv[1], remove_nuktas=False, nasals_mode="do_nothing" -) - -for line in sys.stdin: - normalized_line = normalizer.normalize(line.strip()) - tokenized_line = " ".join(trivial_tokenize(normalized_line, sys.argv[1])) - print(tokenized_line) diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/latent_depth/latent_depth_src/models/latent_multilingual_transformer.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/latent_depth/latent_depth_src/models/latent_multilingual_transformer.py deleted file mode 100644 index 9e7b655feee0042d42ac2b13cec5f1d2a88e201e..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/latent_depth/latent_depth_src/models/latent_multilingual_transformer.py +++ /dev/null @@ -1,76 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from fairseq.models import register_model, register_model_architecture -from fairseq.models.multilingual_transformer import MultilingualTransformerModel -from fairseq.models.transformer import ( - TransformerDecoder, - TransformerEncoder, - base_architecture, -) -from fairseq.utils import safe_hasattr - -from .latent_transformer import LatentTransformerDecoder, LatentTransformerEncoder - - -@register_model("latent_multilingual_transformer") -class LatentMultilingualTransformerModel(MultilingualTransformerModel): - """A variant of standard multilingual Transformer models which encoder and/or - decoders supports latent depth, as is in "Deep Transformer with Latent Depth" - (https://arxiv.org/abs/2009.13102). 
-    """
-
-    @staticmethod
-    def add_args(parser):
-        """Add model-specific arguments to the parser."""
-        MultilingualTransformerModel.add_args(parser)
-        parser.add_argument(
-            '--soft-select',
-            action='store_true',
-            help='use soft samples in training and inference',
-        )
-        parser.add_argument(
-            '--sampling-tau',
-            type=float,
-            default=5.,
-            help='sampling temperature',
-        )
-
-    @classmethod
-    def _get_module_class(cls, is_encoder, args, lang_dict, embed_tokens, langs):
-        if is_encoder:
-            if safe_hasattr(args, "encoder_latent_layer") and args.encoder_latent_layer:
-                return LatentTransformerEncoder(
-                    args, lang_dict, embed_tokens, num_logits=len(langs)
-                )
-            else:
-                return TransformerEncoder(args, lang_dict, embed_tokens)
-        else:
-            if safe_hasattr(args, "decoder_latent_layer") and args.decoder_latent_layer:
-                return LatentTransformerDecoder(
-                    args, lang_dict, embed_tokens, num_logits=len(langs)
-                )
-            else:
-                return TransformerDecoder(args, lang_dict, embed_tokens)
-
-
-@register_model_architecture(
-    "latent_multilingual_transformer", "latent_multilingual_transformer"
-)
-def latent_multilingual_architecture(args):
-    args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512)
-    args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 1024)
-    args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 4)
-    args.encoder_layers = getattr(args, "encoder_layers", 12)
-    args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512)
-    args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 1024)
-    args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 4)
-    args.decoder_layers = getattr(args, "decoder_layers", 24)
-    args.share_encoders = getattr(args, "share_encoders", True)
-    args.share_decoders = getattr(args, "share_decoders", True)
-    args.share_encoder_embeddings = getattr(args, "share_encoder_embeddings", True)
-    args.share_decoder_embeddings = getattr(args, "share_decoder_embeddings", True)
-
-    base_architecture(args)
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/multilingual/train_multilingual_model.sh b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/multilingual/train_multilingual_model.sh
deleted file mode 100644
index cc050bd3f02de8a2f303737f187442d2eb80e4ef..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/multilingual/train_multilingual_model.sh
+++ /dev/null
@@ -1,28 +0,0 @@
-#!/bin/bash
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-path_2_data=$1   # path to the directory which contains binarized data for each direction
-lang_list=$2     # path to a file which contains the list of languages, one per line
-lang_pairs=$3    # a list of language pairs to train multilingual models, e.g.
"en-fr,en-cs,fr-en,cs-en" - -fairseq-train "$path_2_data" \ - --encoder-normalize-before --decoder-normalize-before \ - --arch transformer --layernorm-embedding \ - --task translation_multi_simple_epoch \ - --sampling-method "temperature" \ - --sampling-temperature 1.5 \ - --encoder-langtok "src" \ - --decoder-langtok \ - --lang-dict "$lang_list" \ - --lang-pairs "$lang_pairs" \ - --criterion label_smoothed_cross_entropy --label-smoothing 0.2 \ - --optimizer adam --adam-eps 1e-06 --adam-betas '(0.9, 0.98)' \ - --lr-scheduler inverse_sqrt --lr 3e-05 --warmup-updates 2500 --max-update 40000 \ - --dropout 0.3 --attention-dropout 0.1 --weight-decay 0.0 \ - --max-tokens 1024 --update-freq 2 \ - --save-interval 1 --save-interval-updates 5000 --keep-interval-updates 10 --no-epoch-checkpoints \ - --seed 222 --log-format simple --log-interval 2 diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/textless_nlp/gslm/speech2unit/clustering/__init__.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/textless_nlp/gslm/speech2unit/clustering/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/adaptive_span/truncated_bptt_lm_task.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/adaptive_span/truncated_bptt_lm_task.py deleted file mode 100644 index a92da3a298e21528b7007df3f8198bb3af94a485..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/adaptive_span/truncated_bptt_lm_task.py +++ /dev/null @@ -1 +0,0 @@ -../truncated_bptt/truncated_bptt_lm_task.py \ No newline at end of file diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/byte_level_bpe/get_bitext.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/byte_level_bpe/get_bitext.py deleted file mode 100644 index 6ac1eeec1e6167ec6bafd76b37173ee6987cae7e..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/byte_level_bpe/get_bitext.py +++ /dev/null @@ -1,254 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
-
-
-import argparse
-import os
-import os.path as op
-from collections import namedtuple
-from multiprocessing import cpu_count
-from typing import List, Optional
-
-import sentencepiece as sp
-from fairseq.data.encoders.byte_bpe import ByteBPE
-from fairseq.data.encoders.byte_utils import byte_encode
-from fairseq.data.encoders.bytes import Bytes
-from fairseq.data.encoders.characters import Characters
-from fairseq.data.encoders.moses_tokenizer import MosesTokenizer
-from fairseq.data.encoders.sentencepiece_bpe import SentencepieceBPE
-
-
-SPLITS = ["train", "valid", "test"]
-
-
-def _convert_xml(in_path: str, out_path: str):
-    with open(in_path) as f, open(out_path, "w") as f_o:
-        for s in f:
-            ss = s.strip()
-            if not ss.startswith("<seg"):
-                continue
-            ss = ss.replace("</seg>", "").split('">')
-            assert len(ss) == 2
-            f_o.write(ss[1].strip() + "\n")
-
-
-def _convert_train(in_path: str, out_path: str):
-    with open(in_path) as f, open(out_path, "w") as f_o:
-        for s in f:
-            ss = s.strip()
-            if ss.startswith("<"):
-                continue
-            f_o.write(ss.strip() + "\n")
-
-
-def _get_bytes(in_path: str, out_path: str):
-    with open(in_path) as f, open(out_path, "w") as f_o:
-        for s in f:
-            f_o.write(Bytes.encode(s.strip()) + "\n")
-
-
-def _get_chars(in_path: str, out_path: str):
-    with open(in_path) as f, open(out_path, "w") as f_o:
-        for s in f:
-            f_o.write(Characters.encode(s.strip()) + "\n")
-
-
-def pretokenize(in_path: str, out_path: str, src: str, tgt: str):
-    Args = namedtuple(
-        "Args",
-        [
-            "moses_source_lang",
-            "moses_target_lang",
-            "moses_no_dash_splits",
-            "moses_no_escape",
-        ],
-    )
-    args = Args(
-        moses_source_lang=src,
-        moses_target_lang=tgt,
-        moses_no_dash_splits=False,
-        moses_no_escape=False,
-    )
-    pretokenizer = MosesTokenizer(args)
-    with open(in_path) as f, open(out_path, "w") as f_o:
-        for s in f:
-            f_o.write(pretokenizer.encode(s.strip()) + "\n")
-
-
-def _convert_to_bchar(in_path_prefix: str, src: str, tgt: str, out_path: str):
-    with open(out_path, "w") as f_o:
-        for lang in [src, tgt]:
-            with open(f"{in_path_prefix}.{lang}") as f:
-                for s in f:
-                    f_o.write(byte_encode(s.strip()) + "\n")
-
-
-def _get_bpe(in_path: str, model_prefix: str, vocab_size: int):
-    arguments = [
-        f"--input={in_path}",
-        f"--model_prefix={model_prefix}",
-        f"--model_type=bpe",
-        f"--vocab_size={vocab_size}",
-        "--character_coverage=1.0",
-        "--normalization_rule_name=identity",
-        f"--num_threads={cpu_count()}",
-    ]
-    sp.SentencePieceTrainer.Train(" ".join(arguments))
-
-
-def _apply_bbpe(model_path: str, in_path: str, out_path: str):
-    Args = namedtuple("Args", ["sentencepiece_model_path"])
-    args = Args(sentencepiece_model_path=model_path)
-    tokenizer = ByteBPE(args)
-    with open(in_path) as f, open(out_path, "w") as f_o:
-        for s in f:
-            f_o.write(tokenizer.encode(s.strip()) + "\n")
-
-
-def _apply_bpe(model_path: str, in_path: str, out_path: str):
-    Args = namedtuple("Args", ["sentencepiece_model"])
-    args = Args(sentencepiece_model=model_path)
-    tokenizer = SentencepieceBPE(args)
-    with open(in_path) as f, open(out_path, "w") as f_o:
-        for s in f:
-            f_o.write(tokenizer.encode(s.strip()) + "\n")
-
-
-def _concat_files(in_paths: List[str], out_path: str):
-    with open(out_path, "w") as f_o:
-        for p in in_paths:
-            with open(p) as f:
-                for r in f:
-                    f_o.write(r)
-
-
-def preprocess_iwslt17(
-    root: str,
-    src: str,
-    tgt: str,
-    bpe_size: Optional[int],
-    need_chars: bool,
-    bbpe_size: Optional[int],
-    need_bytes: bool,
-):
-    # extract bitext
-    in_root = op.join(root, f"{src}-{tgt}")
-    for lang in [src, tgt]:
-        _convert_train(
op.join(in_root, f"train.tags.{src}-{tgt}.{lang}"), - op.join(root, f"train.{lang}"), - ) - _convert_xml( - op.join(in_root, f"IWSLT17.TED.dev2010.{src}-{tgt}.{lang}.xml"), - op.join(root, f"valid.{lang}"), - ) - _convert_xml( - op.join(in_root, f"IWSLT17.TED.tst2015.{src}-{tgt}.{lang}.xml"), - op.join(root, f"test.{lang}"), - ) - # pre-tokenize - for lang in [src, tgt]: - for split in SPLITS: - pretokenize( - op.join(root, f"{split}.{lang}"), - op.join(root, f"{split}.moses.{lang}"), - src, - tgt, - ) - # tokenize with BPE vocabulary - if bpe_size is not None: - # learn vocabulary - concated_train_path = op.join(root, "train.all") - _concat_files( - [op.join(root, "train.moses.fr"), op.join(root, "train.moses.en")], - concated_train_path, - ) - bpe_model_prefix = op.join(root, f"spm_bpe{bpe_size}") - _get_bpe(concated_train_path, bpe_model_prefix, bpe_size) - os.remove(concated_train_path) - # apply - for lang in [src, tgt]: - for split in SPLITS: - _apply_bpe( - bpe_model_prefix + ".model", - op.join(root, f"{split}.moses.{lang}"), - op.join(root, f"{split}.moses.bpe{bpe_size}.{lang}"), - ) - # tokenize with bytes vocabulary - if need_bytes: - for lang in [src, tgt]: - for split in SPLITS: - _get_bytes( - op.join(root, f"{split}.moses.{lang}"), - op.join(root, f"{split}.moses.bytes.{lang}"), - ) - # tokenize with characters vocabulary - if need_chars: - for lang in [src, tgt]: - for split in SPLITS: - _get_chars( - op.join(root, f"{split}.moses.{lang}"), - op.join(root, f"{split}.moses.chars.{lang}"), - ) - # tokenize with byte-level BPE vocabulary - if bbpe_size is not None: - # learn vocabulary - bchar_path = op.join(root, "train.bchar") - _convert_to_bchar(op.join(root, "train.moses"), src, tgt, bchar_path) - bbpe_model_prefix = op.join(root, f"spm_bbpe{bbpe_size}") - _get_bpe(bchar_path, bbpe_model_prefix, bbpe_size) - os.remove(bchar_path) - # apply - for lang in [src, tgt]: - for split in SPLITS: - _apply_bbpe( - bbpe_model_prefix + ".model", - op.join(root, f"{split}.moses.{lang}"), - op.join(root, f"{split}.moses.bbpe{bbpe_size}.{lang}"), - ) - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("--root", type=str, default="data") - parser.add_argument( - "--bpe-vocab", - default=None, - type=int, - help="Generate tokenized bitext with BPE of size K." - "Default to None (disabled).", - ) - parser.add_argument( - "--bbpe-vocab", - default=None, - type=int, - help="Generate tokenized bitext with BBPE of size K." - "Default to None (disabled).", - ) - parser.add_argument( - "--byte-vocab", - action="store_true", - help="Generate tokenized bitext with bytes vocabulary", - ) - parser.add_argument( - "--char-vocab", - action="store_true", - help="Generate tokenized bitext with chars vocabulary", - ) - args = parser.parse_args() - - preprocess_iwslt17( - args.root, - "fr", - "en", - args.bpe_vocab, - args.char_vocab, - args.bbpe_vocab, - args.byte_vocab, - ) - - -if __name__ == "__main__": - main() diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/translation/prepare-wmt14en2de.sh b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/translation/prepare-wmt14en2de.sh deleted file mode 100644 index 6702c88b568c9e680b525593ff0c9fb0a474825d..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/translation/prepare-wmt14en2de.sh +++ /dev/null @@ -1,142 +0,0 @@ -#!/bin/bash -# Adapted from https://github.com/facebookresearch/MIXER/blob/master/prepareData.sh - -echo 'Cloning Moses github repository (for tokenization scripts)...' 
-git clone https://github.com/moses-smt/mosesdecoder.git
-
-echo 'Cloning Subword NMT repository (for BPE pre-processing)...'
-git clone https://github.com/rsennrich/subword-nmt.git
-
-SCRIPTS=mosesdecoder/scripts
-TOKENIZER=$SCRIPTS/tokenizer/tokenizer.perl
-CLEAN=$SCRIPTS/training/clean-corpus-n.perl
-NORM_PUNC=$SCRIPTS/tokenizer/normalize-punctuation.perl
-REM_NON_PRINT_CHAR=$SCRIPTS/tokenizer/remove-non-printing-char.perl
-BPEROOT=subword-nmt/subword_nmt
-BPE_TOKENS=40000
-
-URLS=(
-    "http://statmt.org/wmt13/training-parallel-europarl-v7.tgz"
-    "http://statmt.org/wmt13/training-parallel-commoncrawl.tgz"
-    "http://data.statmt.org/wmt17/translation-task/training-parallel-nc-v12.tgz"
-    "http://data.statmt.org/wmt17/translation-task/dev.tgz"
-    "http://statmt.org/wmt14/test-full.tgz"
-)
-FILES=(
-    "training-parallel-europarl-v7.tgz"
-    "training-parallel-commoncrawl.tgz"
-    "training-parallel-nc-v12.tgz"
-    "dev.tgz"
-    "test-full.tgz"
-)
-CORPORA=(
-    "training/europarl-v7.de-en"
-    "commoncrawl.de-en"
-    "training/news-commentary-v12.de-en"
-)
-
-# This will make the dataset compatible to the one used in "Convolutional Sequence to Sequence Learning"
-# https://arxiv.org/abs/1705.03122
-if [ "$1" == "--icml17" ]; then
-    URLS[2]="http://statmt.org/wmt14/training-parallel-nc-v9.tgz"
-    FILES[2]="training-parallel-nc-v9.tgz"
-    CORPORA[2]="training/news-commentary-v9.de-en"
-    OUTDIR=wmt14_en_de
-else
-    OUTDIR=wmt17_en_de
-fi
-
-if [ ! -d "$SCRIPTS" ]; then
-    echo "Please set SCRIPTS variable correctly to point to Moses scripts."
-    exit
-fi
-
-src=en
-tgt=de
-lang=en-de
-prep=$OUTDIR
-tmp=$prep/tmp
-orig=orig
-dev=dev/newstest2013
-
-mkdir -p $orig $tmp $prep
-
-cd $orig
-
-for ((i=0;i<${#URLS[@]};++i)); do
-    file=${FILES[i]}
-    if [ -f $file ]; then
-        echo "$file already exists, skipping download"
-    else
-        url=${URLS[i]}
-        wget "$url"
-        if [ -f $file ]; then
-            echo "$url successfully downloaded."
-        else
-            echo "$url not successfully downloaded."
-            exit -1
-        fi
-        if [ ${file: -4} == ".tgz" ]; then
-            tar zxvf $file
-        elif [ ${file: -4} == ".tar" ]; then
-            tar xvf $file
-        fi
-    fi
-done
-cd ..
-
-echo "pre-processing train data..."
-for l in $src $tgt; do
-    rm $tmp/train.tags.$lang.tok.$l
-    for f in "${CORPORA[@]}"; do
-        cat $orig/$f.$l | \
-            perl $NORM_PUNC $l | \
-            perl $REM_NON_PRINT_CHAR | \
-            perl $TOKENIZER -threads 8 -a -l $l >> $tmp/train.tags.$lang.tok.$l
-    done
-done
-
-echo "pre-processing test data..."
-for l in $src $tgt; do
-    if [ "$l" == "$src" ]; then
-        t="src"
-    else
-        t="ref"
-    fi
-    grep '<seg id' $orig/test-full/newstest2014-deen-$t.$l.sgm | \
-        sed -e 's/<seg id="[0-9]*">\s*//g' | \
-        sed -e 's/\s*<\/seg>\s*//g' | \
-        sed -e "s/\’/\'/g" | \
-    perl $TOKENIZER -threads 8 -a -l $l > $tmp/test.$l
-    echo ""
-done
-
-echo "splitting train and valid..."
-for l in $src $tgt; do
-    awk '{if (NR%100 == 0)  print $0; }' $tmp/train.tags.$lang.tok.$l > $tmp/valid.$l
-    awk '{if (NR%100 != 0)  print $0; }' $tmp/train.tags.$lang.tok.$l > $tmp/train.$l
-done
-
-TRAIN=$tmp/train.de-en
-BPE_CODE=$prep/code
-rm -f $TRAIN
-for l in $src $tgt; do
-    cat $tmp/train.$l >> $TRAIN
-done
-
-echo "learn_bpe.py on ${TRAIN}..."
-python $BPEROOT/learn_bpe.py -s $BPE_TOKENS < $TRAIN > $BPE_CODE
-
-for L in $src $tgt; do
-    for f in train.$L valid.$L test.$L; do
-        echo "apply_bpe.py to ${f}..."
- python $BPEROOT/apply_bpe.py -c $BPE_CODE < $tmp/$f > $tmp/bpe.$f - done -done - -perl $CLEAN -ratio 1.5 $tmp/bpe.train $src $tgt $prep/train 1 250 -perl $CLEAN -ratio 1.5 $tmp/bpe.valid $src $tgt $prep/valid 1 250 - -for L in $src $tgt; do - cp $tmp/bpe.test.$L $prep/test.$L -done diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/audio/__init__.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/audio/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/colorize_dataset.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/colorize_dataset.py deleted file mode 100644 index 6ef097bff1a013f4944b1cb55e1e7e4e2480b3a6..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/colorize_dataset.py +++ /dev/null @@ -1,25 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch - -from . import BaseWrapperDataset - - -class ColorizeDataset(BaseWrapperDataset): - """ Adds 'colors' property to net input that is obtained from the provided color getter for use by models """ - - def __init__(self, dataset, color_getter): - super().__init__(dataset) - self.color_getter = color_getter - - def collater(self, samples): - base_collate = super().collater(samples) - if len(base_collate) > 0: - base_collate["net_input"]["colors"] = torch.tensor( - list(self.color_getter(self.dataset, s["id"]) for s in samples), - dtype=torch.long, - ) - return base_collate diff --git a/spaces/OpenDILabCommunity/DI-sheep/DI-sheep/ui/vite.config.ts b/spaces/OpenDILabCommunity/DI-sheep/DI-sheep/ui/vite.config.ts deleted file mode 100644 index 9e955ed31156c2da1af61dbf1285b311cd1b1f74..0000000000000000000000000000000000000000 --- a/spaces/OpenDILabCommunity/DI-sheep/DI-sheep/ui/vite.config.ts +++ /dev/null @@ -1,11 +0,0 @@ -import { defineConfig } from 'vite'; -import react from '@vitejs/plugin-react'; - -// https://vitejs.dev/config/ -export default defineConfig({ - plugins: [react()], - server: { - host: true, - port: 5555, - }, -}); diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/evaluation/sem_seg_evaluation.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/evaluation/sem_seg_evaluation.py deleted file mode 100644 index 7a19db71562ef47569dc7f77ec616af85447f0ec..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/evaluation/sem_seg_evaluation.py +++ /dev/null @@ -1,184 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import itertools -import json -import logging -import numpy as np -import os -from collections import OrderedDict -import PIL.Image as Image -import pycocotools.mask as mask_util -import torch - -from detectron2.data import DatasetCatalog, MetadataCatalog -from detectron2.utils.comm import all_gather, is_main_process, synchronize -from detectron2.utils.file_io import PathManager - -from .evaluator import DatasetEvaluator - - -class SemSegEvaluator(DatasetEvaluator): - """ - Evaluate semantic segmentation metrics. - """ - - def __init__( - self, - dataset_name, - distributed=True, - output_dir=None, - *, - num_classes=None, - ignore_label=None, - ): - """ - Args: - dataset_name (str): name of the dataset to be evaluated. 
- distributed (bool): if True, will collect results from all ranks for evaluation. - Otherwise, will evaluate the results in the current process. - output_dir (str): an output directory to dump results. - num_classes, ignore_label: deprecated argument - """ - self._logger = logging.getLogger(__name__) - if num_classes is not None: - self._logger.warn( - "SemSegEvaluator(num_classes) is deprecated! It should be obtained from metadata." - ) - if ignore_label is not None: - self._logger.warn( - "SemSegEvaluator(ignore_label) is deprecated! It should be obtained from metadata." - ) - self._dataset_name = dataset_name - self._distributed = distributed - self._output_dir = output_dir - - self._cpu_device = torch.device("cpu") - - self.input_file_to_gt_file = { - dataset_record["file_name"]: dataset_record["sem_seg_file_name"] - for dataset_record in DatasetCatalog.get(dataset_name) - } - - meta = MetadataCatalog.get(dataset_name) - # Dict that maps contiguous training ids to COCO category ids - try: - c2d = meta.stuff_dataset_id_to_contiguous_id - self._contiguous_id_to_dataset_id = {v: k for k, v in c2d.items()} - except AttributeError: - self._contiguous_id_to_dataset_id = None - self._class_names = meta.stuff_classes - self._num_classes = len(meta.stuff_classes) - if num_classes is not None: - assert self._num_classes == num_classes, f"{self._num_classes} != {num_classes}" - self._ignore_label = ignore_label if ignore_label is not None else meta.ignore_label - - def reset(self): - self._conf_matrix = np.zeros((self._num_classes + 1, self._num_classes + 1), dtype=np.int64) - self._predictions = [] - - def process(self, inputs, outputs): - """ - Args: - inputs: the inputs to a model. - It is a list of dicts. Each dict corresponds to an image and - contains keys like "height", "width", "file_name". - outputs: the outputs of a model. It is either list of semantic segmentation predictions - (Tensor [H, W]) or list of dicts with key "sem_seg" that contains semantic - segmentation prediction in the same format. 
- """ - for input, output in zip(inputs, outputs): - output = output["sem_seg"].argmax(dim=0).to(self._cpu_device) - pred = np.array(output, dtype=np.int) - with PathManager.open(self.input_file_to_gt_file[input["file_name"]], "rb") as f: - gt = np.array(Image.open(f), dtype=np.int) - - gt[gt == self._ignore_label] = self._num_classes - - self._conf_matrix += np.bincount( - (self._num_classes + 1) * pred.reshape(-1) + gt.reshape(-1), - minlength=self._conf_matrix.size, - ).reshape(self._conf_matrix.shape) - - self._predictions.extend(self.encode_json_sem_seg(pred, input["file_name"])) - - def evaluate(self): - """ - Evaluates standard semantic segmentation metrics (http://cocodataset.org/#stuff-eval): - - * Mean intersection-over-union averaged across classes (mIoU) - * Frequency Weighted IoU (fwIoU) - * Mean pixel accuracy averaged across classes (mACC) - * Pixel Accuracy (pACC) - """ - if self._distributed: - synchronize() - conf_matrix_list = all_gather(self._conf_matrix) - self._predictions = all_gather(self._predictions) - self._predictions = list(itertools.chain(*self._predictions)) - if not is_main_process(): - return - - self._conf_matrix = np.zeros_like(self._conf_matrix) - for conf_matrix in conf_matrix_list: - self._conf_matrix += conf_matrix - - if self._output_dir: - PathManager.mkdirs(self._output_dir) - file_path = os.path.join(self._output_dir, "sem_seg_predictions.json") - with PathManager.open(file_path, "w") as f: - f.write(json.dumps(self._predictions)) - - acc = np.full(self._num_classes, np.nan, dtype=np.float) - iou = np.full(self._num_classes, np.nan, dtype=np.float) - tp = self._conf_matrix.diagonal()[:-1].astype(np.float) - pos_gt = np.sum(self._conf_matrix[:-1, :-1], axis=0).astype(np.float) - class_weights = pos_gt / np.sum(pos_gt) - pos_pred = np.sum(self._conf_matrix[:-1, :-1], axis=1).astype(np.float) - acc_valid = pos_gt > 0 - acc[acc_valid] = tp[acc_valid] / pos_gt[acc_valid] - iou_valid = (pos_gt + pos_pred) > 0 - union = pos_gt + pos_pred - tp - iou[acc_valid] = tp[acc_valid] / union[acc_valid] - macc = np.sum(acc[acc_valid]) / np.sum(acc_valid) - miou = np.sum(iou[acc_valid]) / np.sum(iou_valid) - fiou = np.sum(iou[acc_valid] * class_weights[acc_valid]) - pacc = np.sum(tp) / np.sum(pos_gt) - - res = {} - res["mIoU"] = 100 * miou - res["fwIoU"] = 100 * fiou - for i, name in enumerate(self._class_names): - res["IoU-{}".format(name)] = 100 * iou[i] - res["mACC"] = 100 * macc - res["pACC"] = 100 * pacc - for i, name in enumerate(self._class_names): - res["ACC-{}".format(name)] = 100 * acc[i] - - if self._output_dir: - file_path = os.path.join(self._output_dir, "sem_seg_evaluation.pth") - with PathManager.open(file_path, "wb") as f: - torch.save(res, f) - results = OrderedDict({"sem_seg": res}) - self._logger.info(results) - return results - - def encode_json_sem_seg(self, sem_seg, input_file_name): - """ - Convert semantic segmentation to COCO stuff format with segments encoded as RLEs. 
- See http://cocodataset.org/#format-results - """ - json_list = [] - for label in np.unique(sem_seg): - if self._contiguous_id_to_dataset_id is not None: - assert ( - label in self._contiguous_id_to_dataset_id - ), "Label {} is not in the metadata info for {}".format(label, self._dataset_name) - dataset_id = self._contiguous_id_to_dataset_id[label] - else: - dataset_id = int(label) - mask = (sem_seg == label).astype(np.uint8) - mask_rle = mask_util.encode(np.array(mask[:, :, None], order="F"))[0] - mask_rle["counts"] = mask_rle["counts"].decode("utf-8") - json_list.append( - {"file_name": input_file_name, "category_id": dataset_id, "segmentation": mask_rle} - ) - return json_list diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/saicinpainting/training/data/__init__.py b/spaces/OpenGVLab/InternGPT/third-party/lama/bin/saicinpainting/training/data/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/OpenMotionLab/MotionGPT/mGPT/data/transforms/rots2joints/base.py b/spaces/OpenMotionLab/MotionGPT/mGPT/data/transforms/rots2joints/base.py deleted file mode 100644 index 524f830f071a61962163aa77e895be0090d7ba35..0000000000000000000000000000000000000000 --- a/spaces/OpenMotionLab/MotionGPT/mGPT/data/transforms/rots2joints/base.py +++ /dev/null @@ -1,56 +0,0 @@ -# -*- coding: utf-8 -*- - -# Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. (MPG) is -# holder of all proprietary rights on this computer program. -# You can only use this computer program if you have closed -# a license agreement with MPG or you get the right to use the computer -# program from someone who is authorized to grant you that right. -# Any use of the computer program without a valid license is prohibited and -# liable to prosecution. -# -# Copyright©2020 Max-Planck-Gesellschaft zur Förderung -# der Wissenschaften e.V. (MPG). acting on behalf of its Max Planck Institute -# for Intelligent Systems. All rights reserved. 
-# -# Contact: ps-license@tuebingen.mpg.de - -from typing import Optional - -import torch -from torch import Tensor, nn -from pathlib import Path -import os -# import hydra - -class Rots2Joints(nn.Module): - def __init__(self, path: Optional[str] = None, - normalization: bool = False, - eps: float = 1e-12, - **kwargs) -> None: - if normalization and path is None: - raise TypeError("You should provide a path if normalization is on.") - - super().__init__() - self.normalization = normalization - self.eps = eps - # workaround for cluster local/sync - if path is not None: - rel_p = path.split('/') - rel_p = rel_p[rel_p.index('deps'):] - rel_p = '/'.join(rel_p) - # path = hydra.utils.get_original_cwd() + '/' + rel_p - if normalization: - mean_path = Path(path) / "mean.pt" - std_path = Path(path) / "std.pt" - self.register_buffer('mean', torch.load(mean_path)) - self.register_buffer('std', torch.load(std_path)) - - def normalize(self, features: Tensor) -> Tensor: - if self.normalization: - features = (features - self.mean)/(self.std + self.eps) - return features - - def unnormalize(self, features: Tensor) -> Tensor: - if self.normalization: - features = features * self.std + self.mean - return features diff --git a/spaces/PSLD/PSLD/diffusion-posterior-sampling/bkse/data/util.py b/spaces/PSLD/PSLD/diffusion-posterior-sampling/bkse/data/util.py deleted file mode 100644 index c65d7cfd51356e92e0d64510e91e32bc2c538150..0000000000000000000000000000000000000000 --- a/spaces/PSLD/PSLD/diffusion-posterior-sampling/bkse/data/util.py +++ /dev/null @@ -1,574 +0,0 @@ -import glob -import math -import os -import pickle -import random - -import cv2 -import numpy as np -import torch - - -#################### -# Files & IO -#################### - -# get image path list -IMG_EXTENSIONS = [".jpg", ".JPG", ".jpeg", ".JPEG", ".png", ".PNG", ".ppm", ".PPM", ".bmp", ".BMP"] - - -def is_image_file(filename): - return any(filename.endswith(extension) for extension in IMG_EXTENSIONS) - - -def _get_paths_from_images(path): - """get image path list from image folder""" - assert os.path.isdir(path), "{:s} is not a valid directory".format(path) - images = [] - for dirpath, _, fnames in sorted(os.walk(path)): - for fname in sorted(fnames): - if is_image_file(fname): - img_path = os.path.join(dirpath, fname) - images.append(img_path) - assert images, "{:s} has no valid image file".format(path) - return images - - -def _get_paths_from_lmdb(dataroot): - """get image path list from lmdb meta info""" - meta_info = pickle.load(open(os.path.join(dataroot, "meta_info.pkl"), "rb")) - paths = meta_info["keys"] - sizes = meta_info["resolution"] - if len(sizes) == 1: - sizes = sizes * len(paths) - return paths, sizes - - -def get_image_paths(data_type, dataroot): - """get image path list - support lmdb or image files""" - paths, sizes = None, None - if dataroot is not None: - if data_type == "lmdb": - paths, sizes = _get_paths_from_lmdb(dataroot) - elif data_type == "img": - paths = sorted(_get_paths_from_images(dataroot)) - else: - raise NotImplementedError( - f"data_type {data_type} \ - is not recognized." 
- ) - return paths, sizes - - -def glob_file_list(root): - return sorted(glob.glob(os.path.join(root, "*"))) - - -# read images -def _read_img_lmdb(env, key, size): - """read image from lmdb with key (w/ and w/o fixed size) - size: (C, H, W) tuple""" - with env.begin(write=False) as txn: - buf = txn.get(key.encode("ascii")) - if buf is None: - print(key) - img_flat = np.frombuffer(buf, dtype=np.uint8) - C, H, W = size - img = img_flat.reshape(H, W, C) - return img - - -def read_img(env, path, size=None): - """read image by cv2 or from lmdb - return: Numpy float32, HWC, BGR, [0,1]""" - if env is None: # img - img = cv2.imread(path, cv2.IMREAD_UNCHANGED) - else: - img = _read_img_lmdb(env, path, size) - img = img.astype(np.float32) / 255.0 - if img.ndim == 2: - img = np.expand_dims(img, axis=2) - # some images have 4 channels - if img.shape[2] > 3: - img = img[:, :, :3] - return img - - -def read_img_gray(env, path, size=None): - """read image by cv2 or from lmdb - return: Numpy float32, HWC, BGR, [0,1]""" - img = _read_img_lmdb(env, path, size) - img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) - img = img.astype(np.float32) / 255.0 - img = img[:, :, np.newaxis] - return img - - -def read_img_seq(path): - """Read a sequence of images from a given folder path - Args: - path (list/str): list of image paths/image folder path - - Returns: - imgs (Tensor): size (T, C, H, W), RGB, [0, 1] - """ - if type(path) is list: - img_path_l = path - else: - img_path_l = sorted(glob.glob(os.path.join(path, "*"))) - img_l = [read_img(None, v) for v in img_path_l] - # stack to Torch tensor - imgs = np.stack(img_l, axis=0) - imgs = imgs[:, :, :, [2, 1, 0]] - imgs = torch.from_numpy(np.ascontiguousarray(np.transpose(imgs, (0, 3, 1, 2)))).float() - return imgs - - -def index_generation(crt_i, max_n, N, padding="reflection"): - """Generate an index list for reading N frames from a sequence of images - Args: - crt_i (int): current center index - max_n (int): max number of the sequence of images (calculated from 1) - N (int): reading N frames - padding (str): padding mode, one of - replicate | reflection | new_info | circle - Example: crt_i = 0, N = 5 - replicate: [0, 0, 0, 1, 2] - reflection: [2, 1, 0, 1, 2] - new_info: [4, 3, 0, 1, 2] - circle: [3, 4, 0, 1, 2] - - Returns: - return_l (list [int]): a list of indexes - """ - max_n = max_n - 1 - n_pad = N // 2 - return_l = [] - - for i in range(crt_i - n_pad, crt_i + n_pad + 1): - if i < 0: - if padding == "replicate": - add_idx = 0 - elif padding == "reflection": - add_idx = -i - elif padding == "new_info": - add_idx = (crt_i + n_pad) + (-i) - elif padding == "circle": - add_idx = N + i - else: - raise ValueError("Wrong padding mode") - elif i > max_n: - if padding == "replicate": - add_idx = max_n - elif padding == "reflection": - add_idx = max_n * 2 - i - elif padding == "new_info": - add_idx = (crt_i - n_pad) - (i - max_n) - elif padding == "circle": - add_idx = i - N - else: - raise ValueError("Wrong padding mode") - else: - add_idx = i - return_l.append(add_idx) - return return_l - - -#################### -# image processing -# process on numpy image -#################### - - -def augment(img_list, hflip=True, rot=True): - """horizontal flip OR rotate (0, 90, 180, 270 degrees)""" - hflip = hflip and random.random() < 0.5 - vflip = rot and random.random() < 0.5 - rot90 = rot and random.random() < 0.5 - - def _augment(img): - if hflip: - img = img[:, ::-1, :] - if vflip: - img = img[::-1, :, :] - if rot90: - img = img.transpose(1, 0, 2) - return img - - return 
[_augment(img) for img in img_list] - - -def augment_flow(img_list, flow_list, hflip=True, rot=True): - """horizontal flip OR rotate (0, 90, 180, 270 degrees) with flows""" - hflip = hflip and random.random() < 0.5 - vflip = rot and random.random() < 0.5 - rot90 = rot and random.random() < 0.5 - - def _augment(img): - if hflip: - img = img[:, ::-1, :] - if vflip: - img = img[::-1, :, :] - if rot90: - img = img.transpose(1, 0, 2) - return img - - def _augment_flow(flow): - if hflip: - flow = flow[:, ::-1, :] - flow[:, :, 0] *= -1 - if vflip: - flow = flow[::-1, :, :] - flow[:, :, 1] *= -1 - if rot90: - flow = flow.transpose(1, 0, 2) - flow = flow[:, :, [1, 0]] - return flow - - rlt_img_list = [_augment(img) for img in img_list] - rlt_flow_list = [_augment_flow(flow) for flow in flow_list] - - return rlt_img_list, rlt_flow_list - - -def channel_convert(in_c, tar_type, img_list): - """conversion among BGR, gray and y""" - if in_c == 3 and tar_type == "gray": # BGR to gray - gray_list = [cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) for img in img_list] - return [np.expand_dims(img, axis=2) for img in gray_list] - elif in_c == 3 and tar_type == "y": # BGR to y - y_list = [bgr2ycbcr(img, only_y=True) for img in img_list] - return [np.expand_dims(img, axis=2) for img in y_list] - elif in_c == 1 and tar_type == "RGB": # gray/y to BGR - return [cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) for img in img_list] - else: - return img_list - - -def rgb2ycbcr(img, only_y=True): - """same as matlab rgb2ycbcr - only_y: only return Y channel - Input: - uint8, [0, 255] - float, [0, 1] - """ - in_img_type = img.dtype - img.astype(np.float32) - if in_img_type != np.uint8: - img *= 255.0 - # convert - if only_y: - rlt = np.dot(img, [65.481, 128.553, 24.966]) / 255.0 + 16.0 - else: - rlt = np.matmul( - img, [[65.481, -37.797, 112.0], [128.553, -74.203, -93.786], [24.966, 112.0, -18.214]] - ) / 255.0 + [16, 128, 128] - if in_img_type == np.uint8: - rlt = rlt.round() - else: - rlt /= 255.0 - return rlt.astype(in_img_type) - - -def bgr2ycbcr(img, only_y=True): - """bgr version of rgb2ycbcr - only_y: only return Y channel - Input: - uint8, [0, 255] - float, [0, 1] - """ - in_img_type = img.dtype - img.astype(np.float32) - if in_img_type != np.uint8: - img *= 255.0 - # convert - if only_y: - rlt = np.dot(img, [24.966, 128.553, 65.481]) / 255.0 + 16.0 - else: - rlt = np.matmul( - img, [[24.966, 112.0, -18.214], [128.553, -74.203, -93.786], [65.481, -37.797, 112.0]] - ) / 255.0 + [16, 128, 128] - if in_img_type == np.uint8: - rlt = rlt.round() - else: - rlt /= 255.0 - return rlt.astype(in_img_type) - - -def ycbcr2rgb(img): - """same as matlab ycbcr2rgb - Input: - uint8, [0, 255] - float, [0, 1] - """ - in_img_type = img.dtype - img.astype(np.float32) - if in_img_type != np.uint8: - img *= 255.0 - # convert - rlt = np.matmul( - img, [[0.00456621, 0.00456621, 0.00456621], [0, -0.00153632, 0.00791071], [0.00625893, -0.00318811, 0]] - ) * 255.0 + [-222.921, 135.576, -276.836] - if in_img_type == np.uint8: - rlt = rlt.round() - else: - rlt /= 255.0 - return rlt.astype(in_img_type) - - -def modcrop(img_in, scale): - """img_in: Numpy, HWC or HW""" - img = np.copy(img_in) - if img.ndim == 2: - H, W = img.shape - H_r, W_r = H % scale, W % scale - img = img[: H - H_r, : W - W_r] - elif img.ndim == 3: - H, W, C = img.shape - H_r, W_r = H % scale, W % scale - img = img[: H - H_r, : W - W_r, :] - else: - raise ValueError("Wrong img ndim: [{:d}].".format(img.ndim)) - return img - - -#################### -# Functions -#################### - - -# 
matlab 'imresize' function, now only support 'bicubic' -def cubic(x): - absx = torch.abs(x) - absx2 = absx ** 2 - absx3 = absx ** 3 - return (1.5 * absx3 - 2.5 * absx2 + 1) * ((absx <= 1).type_as(absx)) + ( - -0.5 * absx3 + 2.5 * absx2 - 4 * absx + 2 - ) * (((absx > 1) * (absx <= 2)).type_as(absx)) - - -def calculate_weights_indices(in_length, out_length, scale, kernel, kernel_width, antialiasing): - if (scale < 1) and (antialiasing): - """ - Use a modified kernel to simultaneously interpolate - and antialias- larger kernel width - """ - kernel_width = kernel_width / scale - - # Output-space coordinates - x = torch.linspace(1, out_length, out_length) - - # Input-space coordinates. Calculate the inverse mapping such that 0.5 - # in output space maps to 0.5 in input space, and 0.5+scale in output - # space maps to 1.5 in input space. - u = x / scale + 0.5 * (1 - 1 / scale) - - # What is the left-most pixel that can be involved in the computation? - left = torch.floor(u - kernel_width / 2) - - # What is the maximum number of pixels that can be involved in the - # computation? Note: it's OK to use an extra pixel here; if the - # corresponding weights are all zero, it will be eliminated at the end - # of this function. - P = math.ceil(kernel_width) + 2 - - # The indices of the input pixels involved in computing the k-th output - # pixel are in row k of the indices matrix. - indices = left.view(out_length, 1).expand(out_length, P) + torch.linspace(0, P - 1, P).view(1, P).expand( - out_length, P - ) - - # The weights used to compute the k-th output pixel are in row k of the - # weights matrix. - distance_to_center = u.view(out_length, 1).expand(out_length, P) - indices - # apply cubic kernel - if (scale < 1) and (antialiasing): - weights = scale * cubic(distance_to_center * scale) - else: - weights = cubic(distance_to_center) - # Normalize the weights matrix so that each row sums to 1. - weights_sum = torch.sum(weights, 1).view(out_length, 1) - weights = weights / weights_sum.expand(out_length, P) - - # If a column in weights is all zero, get rid of it. - # Only consider the first and last column. - weights_zero_tmp = torch.sum((weights == 0), 0) - if not math.isclose(weights_zero_tmp[0], 0, rel_tol=1e-6): - indices = indices.narrow(1, 1, P - 2) - weights = weights.narrow(1, 1, P - 2) - if not math.isclose(weights_zero_tmp[-1], 0, rel_tol=1e-6): - indices = indices.narrow(1, 0, P - 2) - weights = weights.narrow(1, 0, P - 2) - weights = weights.contiguous() - indices = indices.contiguous() - sym_len_s = -indices.min() + 1 - sym_len_e = indices.max() - in_length - indices = indices + sym_len_s - 1 - return weights, indices, int(sym_len_s), int(sym_len_e) - - -def imresize(img, scale, antialiasing=True): - # Now the scale should be the same for H and W - # input: img: CHW RGB [0,1] - # output: CHW RGB [0,1] w/o round - - in_C, in_H, in_W = img.size() - _, out_H, out_W = in_C, math.ceil(in_H * scale), math.ceil(in_W * scale) - kernel_width = 4 - kernel = "cubic" - - # Return the desired dimension order for performing the resize. The - # strategy is to perform the resize first along the dimension with the - # smallest scale factor. - # Now we do not support this. 
- - # get weights and indices - weights_H, indices_H, sym_len_Hs, sym_len_He = calculate_weights_indices( - in_H, out_H, scale, kernel, kernel_width, antialiasing - ) - weights_W, indices_W, sym_len_Ws, sym_len_We = calculate_weights_indices( - in_W, out_W, scale, kernel, kernel_width, antialiasing - ) - # process H dimension - # symmetric copying - img_aug = torch.FloatTensor(in_C, in_H + sym_len_Hs + sym_len_He, in_W) - img_aug.narrow(1, sym_len_Hs, in_H).copy_(img) - - sym_patch = img[:, :sym_len_Hs, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - img_aug.narrow(1, 0, sym_len_Hs).copy_(sym_patch_inv) - - sym_patch = img[:, -sym_len_He:, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - img_aug.narrow(1, sym_len_Hs + in_H, sym_len_He).copy_(sym_patch_inv) - - out_1 = torch.FloatTensor(in_C, out_H, in_W) - kernel_width = weights_H.size(1) - for i in range(out_H): - idx = int(indices_H[i][0]) - out_1[0, i, :] = img_aug[0, idx : idx + kernel_width, :].transpose(0, 1).mv(weights_H[i]) - out_1[1, i, :] = img_aug[1, idx : idx + kernel_width, :].transpose(0, 1).mv(weights_H[i]) - out_1[2, i, :] = img_aug[2, idx : idx + kernel_width, :].transpose(0, 1).mv(weights_H[i]) - - # process W dimension - # symmetric copying - out_1_aug = torch.FloatTensor(in_C, out_H, in_W + sym_len_Ws + sym_len_We) - out_1_aug.narrow(2, sym_len_Ws, in_W).copy_(out_1) - - sym_patch = out_1[:, :, :sym_len_Ws] - inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(2, inv_idx) - out_1_aug.narrow(2, 0, sym_len_Ws).copy_(sym_patch_inv) - - sym_patch = out_1[:, :, -sym_len_We:] - inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(2, inv_idx) - out_1_aug.narrow(2, sym_len_Ws + in_W, sym_len_We).copy_(sym_patch_inv) - - out_2 = torch.FloatTensor(in_C, out_H, out_W) - kernel_width = weights_W.size(1) - for i in range(out_W): - idx = int(indices_W[i][0]) - out_2[0, :, i] = out_1_aug[0, :, idx : idx + kernel_width].mv(weights_W[i]) - out_2[1, :, i] = out_1_aug[1, :, idx : idx + kernel_width].mv(weights_W[i]) - out_2[2, :, i] = out_1_aug[2, :, idx : idx + kernel_width].mv(weights_W[i]) - - return out_2 - - -def imresize_np(img, scale, antialiasing=True): - # Now the scale should be the same for H and W - # input: img: Numpy, HWC BGR [0,1] - # output: HWC BGR [0,1] w/o round - img = torch.from_numpy(img) - - in_H, in_W, in_C = img.size() - _, out_H, out_W = in_C, math.ceil(in_H * scale), math.ceil(in_W * scale) - kernel_width = 4 - kernel = "cubic" - - # Return the desired dimension order for performing the resize. The - # strategy is to perform the resize first along the dimension with the - # smallest scale factor. - # Now we do not support this. 
- - # get weights and indices - weights_H, indices_H, sym_len_Hs, sym_len_He = calculate_weights_indices( - in_H, out_H, scale, kernel, kernel_width, antialiasing - ) - weights_W, indices_W, sym_len_Ws, sym_len_We = calculate_weights_indices( - in_W, out_W, scale, kernel, kernel_width, antialiasing - ) - # process H dimension - # symmetric copying - img_aug = torch.FloatTensor(in_H + sym_len_Hs + sym_len_He, in_W, in_C) - img_aug.narrow(0, sym_len_Hs, in_H).copy_(img) - - sym_patch = img[:sym_len_Hs, :, :] - inv_idx = torch.arange(sym_patch.size(0) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(0, inv_idx) - img_aug.narrow(0, 0, sym_len_Hs).copy_(sym_patch_inv) - - sym_patch = img[-sym_len_He:, :, :] - inv_idx = torch.arange(sym_patch.size(0) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(0, inv_idx) - img_aug.narrow(0, sym_len_Hs + in_H, sym_len_He).copy_(sym_patch_inv) - - out_1 = torch.FloatTensor(out_H, in_W, in_C) - kernel_width = weights_H.size(1) - for i in range(out_H): - idx = int(indices_H[i][0]) - out_1[i, :, 0] = img_aug[idx : idx + kernel_width, :, 0].transpose(0, 1).mv(weights_H[i]) - out_1[i, :, 1] = img_aug[idx : idx + kernel_width, :, 1].transpose(0, 1).mv(weights_H[i]) - out_1[i, :, 2] = img_aug[idx : idx + kernel_width, :, 2].transpose(0, 1).mv(weights_H[i]) - - # process W dimension - # symmetric copying - out_1_aug = torch.FloatTensor(out_H, in_W + sym_len_Ws + sym_len_We, in_C) - out_1_aug.narrow(1, sym_len_Ws, in_W).copy_(out_1) - - sym_patch = out_1[:, :sym_len_Ws, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - out_1_aug.narrow(1, 0, sym_len_Ws).copy_(sym_patch_inv) - - sym_patch = out_1[:, -sym_len_We:, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - out_1_aug.narrow(1, sym_len_Ws + in_W, sym_len_We).copy_(sym_patch_inv) - - out_2 = torch.FloatTensor(out_H, out_W, in_C) - kernel_width = weights_W.size(1) - for i in range(out_W): - idx = int(indices_W[i][0]) - out_2[:, i, 0] = out_1_aug[:, idx : idx + kernel_width, 0].mv(weights_W[i]) - out_2[:, i, 1] = out_1_aug[:, idx : idx + kernel_width, 1].mv(weights_W[i]) - out_2[:, i, 2] = out_1_aug[:, idx : idx + kernel_width, 2].mv(weights_W[i]) - - return out_2.numpy() - - -if __name__ == "__main__": - # test imresize function - # read images - img = cv2.imread("test.png") - img = img * 1.0 / 255 - img = torch.from_numpy(np.transpose(img[:, :, [2, 1, 0]], (2, 0, 1))).float() - # imresize - scale = 1 / 4 - import time - - total_time = 0 - for i in range(10): - start_time = time.time() - rlt = imresize(img, scale, antialiasing=True) - use_time = time.time() - start_time - total_time += use_time - print("average time: {}".format(total_time / 10)) - - import torchvision.utils - - torchvision.utils.save_image((rlt * 255).round() / 255, "rlt.png", nrow=1, padding=0, normalize=False) diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/srfi/srfi-71.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/srfi/srfi-71.go deleted file mode 100644 index e6aa03127a174175efe117c162663edf58036ad5..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/srfi/srfi-71.go and /dev/null differ diff --git a/spaces/PaulHilders/IEAI_CLIPGroundingExplainability/clip_grounding/utils/visualize.py 
b/spaces/PaulHilders/IEAI_CLIPGroundingExplainability/clip_grounding/utils/visualize.py deleted file mode 100644 index aaee90b5be63568dbcde91da84e9560a580c7f89..0000000000000000000000000000000000000000 --- a/spaces/PaulHilders/IEAI_CLIPGroundingExplainability/clip_grounding/utils/visualize.py +++ /dev/null @@ -1,183 +0,0 @@ -"""Helpers for visualization""" -import numpy as np -import matplotlib -import matplotlib.pyplot as plt -import cv2 -from PIL import Image - - -# define predominanat colors -COLORS = { - "pink": (242, 116, 223), - "cyan": (46, 242, 203), - "red": (255, 0, 0), - "green": (0, 255, 0), - "blue": (0, 0, 255), - "yellow": (255, 255, 0), -} - - -def show_single_image(image: np.ndarray, figsize: tuple = (8, 8), title: str = None, titlesize=18, cmap: str = None, ticks=False, save=False, save_path=None): - """Show a single image.""" - fig, ax = plt.subplots(1, 1, figsize=figsize) - - if isinstance(image, Image.Image): - image = np.asarray(image) - - ax.set_title(title, fontsize=titlesize) - ax.imshow(image, cmap=cmap) - - if not ticks: - ax.set_xticks([]) - ax.set_yticks([]) - - if save: - plt.savefig(save_path, bbox_inches='tight') - - plt.show() - - -def show_grid_of_images( - images: np.ndarray, n_cols: int = 4, figsize: tuple = (8, 8), - cmap=None, subtitles=None, title=None, subtitlesize=18, - save=False, save_path=None, titlesize=20, - ): - """Show a grid of images.""" - n_cols = min(n_cols, len(images)) - - copy_of_images = images.copy() - for i, image in enumerate(copy_of_images): - if isinstance(image, Image.Image): - image = np.asarray(image) - images[i] = image - - if subtitles is None: - subtitles = [None] * len(images) - - n_rows = int(np.ceil(len(images) / n_cols)) - fig, axes = plt.subplots(n_rows, n_cols, figsize=figsize) - for i, ax in enumerate(axes.flat): - if i < len(images): - if len(images[i].shape) == 2 and cmap is None: - cmap="gray" - ax.imshow(images[i], cmap=cmap) - ax.set_title(subtitles[i], fontsize=subtitlesize) - ax.axis('off') - fig.set_tight_layout(True) - plt.suptitle(title, y=0.8, fontsize=titlesize) - - if save: - plt.savefig(save_path, bbox_inches='tight') - plt.close() - else: - plt.show() - - -def show_keypoint_matches( - img1, kp1, img2, kp2, matches, - K=10, figsize=(10, 5), drawMatches_args=dict(matchesThickness=3, singlePointColor=(0, 0, 0)), - choose_matches="random", - ): - """Displays matches found in the pair of images""" - if choose_matches == "random": - selected_matches = np.random.choice(matches, K) - elif choose_matches == "all": - K = len(matches) - selected_matches = matches - elif choose_matches == "topk": - selected_matches = matches[:K] - else: - raise ValueError(f"Unknown value for choose_matches: {choose_matches}") - - # color each match with a different color - cmap = matplotlib.cm.get_cmap('gist_rainbow', K) - colors = [[int(x*255) for x in cmap(i)[:3]] for i in np.arange(0,K)] - drawMatches_args.update({"matchColor": -1, "singlePointColor": (100, 100, 100)}) - - img3 = cv2.drawMatches(img1, kp1, img2, kp2, selected_matches, outImg=None, **drawMatches_args) - show_single_image( - img3, - figsize=figsize, - title=f"[{choose_matches.upper()}] Selected K = {K} matches between the pair of images.", - ) - return img3 - - -def draw_kps_on_image(image: np.ndarray, kps: np.ndarray, color=COLORS["red"], radius=3, thickness=-1, return_as="numpy"): - """ - Draw keypoints on image. - - Args: - image: Image to draw keypoints on. - kps: Keypoints to draw. Note these should be in (x, y) format. 
- """ - if isinstance(image, Image.Image): - image = np.asarray(image) - - for kp in kps: - image = cv2.circle( - image, (int(kp[0]), int(kp[1])), radius=radius, color=color, thickness=thickness) - - if return_as == "PIL": - return Image.fromarray(image) - - return image - - -def get_concat_h(im1, im2): - """Concatenate two images horizontally""" - dst = Image.new('RGB', (im1.width + im2.width, im1.height)) - dst.paste(im1, (0, 0)) - dst.paste(im2, (im1.width, 0)) - return dst - - -def get_concat_v(im1, im2): - """Concatenate two images vertically""" - dst = Image.new('RGB', (im1.width, im1.height + im2.height)) - dst.paste(im1, (0, 0)) - dst.paste(im2, (0, im1.height)) - return dst - - -def show_images_with_keypoints(images: list, kps: list, radius=15, color=(0, 220, 220), figsize=(10, 8), return_images=False, save=False, save_path="sample.png"): - assert len(images) == len(kps) - - # generate - images_with_kps = [] - for i in range(len(images)): - img_with_kps = draw_kps_on_image(images[i], kps[i], radius=radius, color=color, return_as="PIL") - images_with_kps.append(img_with_kps) - - # show - show_grid_of_images(images_with_kps, n_cols=len(images), figsize=figsize, save=save, save_path=save_path) - - if return_images: - return images_with_kps - - -def set_latex_fonts(usetex=True, fontsize=14, show_sample=False, **kwargs): - try: - plt.rcParams.update({ - "text.usetex": usetex, - "font.family": "serif", - "font.serif": ["Computer Modern Roman"], - "font.size": fontsize, - **kwargs, - }) - if show_sample: - plt.figure() - plt.title("Sample $y = x^2$") - plt.plot(np.arange(0, 10), np.arange(0, 10)**2, "--o") - plt.grid() - plt.show() - except: - print("Failed to setup LaTeX fonts. Proceeding without.") - pass - - -def get_colors(num_colors, palette="jet"): - cmap = plt.get_cmap(palette) - colors = [cmap(i) for i in np.linspace(0, 1, num_colors)] - return colors - diff --git a/spaces/PeepDaSlan9/whisper-web/README.md b/spaces/PeepDaSlan9/whisper-web/README.md deleted file mode 100644 index 57a87130dd9523be27ea8fdff00cb033eef38368..0000000000000000000000000000000000000000 --- a/spaces/PeepDaSlan9/whisper-web/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Whisper Web -emoji: 👀 -colorFrom: indigo -colorTo: indigo -sdk: static -pinned: false -duplicated_from: Xenova/whisper-web ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/solver/lr_scheduler.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/solver/lr_scheduler.py deleted file mode 100644 index f73673386efc8e502c79da5237b6c073eca6536f..0000000000000000000000000000000000000000 --- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/solver/lr_scheduler.py +++ /dev/null @@ -1,164 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -from bisect import bisect_right - -import math -import torch - - -# FIXME ideally this would be achieved with a CombinedLRScheduler, -# separating MultiStepLR with WarmupLR -# but the current LRScheduler design doesn't allow it -class WarmupMultiStepLR(torch.optim.lr_scheduler._LRScheduler): - def __init__( - self, - optimizer, - milestones, - gamma=0.1, - warmup_factor=1.0 / 3, - warmup_iters=500, - warmup_method="linear", - last_epoch=-1, - ): - if not list(milestones) == sorted(milestones): - raise ValueError( - "Milestones should be a list of" " increasing integers. 
Got {}", - milestones, - ) - - if warmup_method not in ("constant", "linear"): - raise ValueError( - "Only 'constant' or 'linear' warmup_method accepted" - "got {}".format(warmup_method) - ) - self.milestones = milestones - self.gamma = gamma - self.warmup_factor = warmup_factor - self.warmup_iters = warmup_iters - self.warmup_method = warmup_method - super(WarmupMultiStepLR, self).__init__(optimizer, last_epoch) - - def get_lr(self): - warmup_factor = 1 - if self.last_epoch < self.warmup_iters: - if self.warmup_method == "constant": - warmup_factor = self.warmup_factor - elif self.warmup_method == "linear": - alpha = float(self.last_epoch) / self.warmup_iters - warmup_factor = self.warmup_factor * (1 - alpha) + alpha - return [ - base_lr - * warmup_factor - * self.gamma ** bisect_right(self.milestones, self.last_epoch) - for base_lr in self.base_lrs - ] - - -class WarmupCosineAnnealingLR(torch.optim.lr_scheduler._LRScheduler): - def __init__( - self, - optimizer, - max_iters, - gamma=0.1, - warmup_factor=1.0 / 3, - warmup_iters=500, - warmup_method="linear", - eta_min = 0, - last_epoch=-1, - ): - - if warmup_method not in ("constant", "linear"): - raise ValueError( - "Only 'constant' or 'linear' warmup_method accepted" - "got {}".format(warmup_method) - ) - self.max_iters = max_iters - self.gamma = gamma - self.warmup_factor = warmup_factor - self.warmup_iters = warmup_iters - self.warmup_method = warmup_method - self.eta_min = eta_min - super(WarmupCosineAnnealingLR, self).__init__(optimizer, last_epoch) - - def get_lr(self): - warmup_factor = 1 - - if self.last_epoch < self.warmup_iters: - if self.warmup_method == "constant": - warmup_factor = self.warmup_factor - elif self.warmup_method == "linear": - alpha = float(self.last_epoch) / self.warmup_iters - warmup_factor = self.warmup_factor * (1 - alpha) + alpha - return [ - base_lr - * warmup_factor - for base_lr in self.base_lrs - ] - else: - return [ - self.eta_min - + (base_lr - self.eta_min) - * (1 + math.cos(math.pi * (self.last_epoch - self.warmup_iters) / self.max_iters)) / 2 - for base_lr in self.base_lrs - ] - -class WarmupReduceLROnPlateau(torch.optim.lr_scheduler.ReduceLROnPlateau): - def __init__( - self, - optimizer, - max_iters, - gamma=0.1, - warmup_factor=1.0 / 3, - warmup_iters=500, - warmup_method="linear", - eta_min = 0, - last_epoch=-1, - patience = 5, - verbose = False, - ): - - if warmup_method not in ("constant", "linear"): - raise ValueError( - "Only 'constant' or 'linear' warmup_method accepted" - "got {}".format(warmup_method) - ) - self.warmup_factor = warmup_factor - self.warmup_iters = warmup_iters - self.warmup_method = warmup_method - self.eta_min = eta_min - - if last_epoch == -1: - for group in optimizer.param_groups: - group.setdefault('initial_lr', group['lr']) - else: - for i, group in enumerate(optimizer.param_groups): - if 'initial_lr' not in group: - raise KeyError("param 'initial_lr' is not specified " - "in param_groups[{}] when resuming an optimizer".format(i)) - self.base_lrs = list(map(lambda group: group['initial_lr'], optimizer.param_groups)) - super(WarmupReduceLROnPlateau, self).__init__(optimizer, factor=gamma, patience=patience, mode='max', min_lr=eta_min, verbose = verbose) - - def step(self, metrics=None): - warmup_factor = 1 - - if self.last_epoch < self.warmup_iters: - if self.warmup_method == "constant": - warmup_factor = self.warmup_factor - elif self.warmup_method == "linear": - alpha = float(self.last_epoch) / self.warmup_iters - warmup_factor = self.warmup_factor * (1 - alpha) + 
alpha - - if self.last_epoch >= self.warmup_iters-1: - warmup_factor = 1.0 - - warmup_lrs = [ - base_lr - * warmup_factor - for base_lr in self.base_lrs - ] - - for param_group, lr in zip(self.optimizer.param_groups, warmup_lrs): - param_group['lr'] = lr - - self.last_epoch += 1 - elif metrics: - super().step(metrics) \ No newline at end of file diff --git a/spaces/Pranjal2041/SemSup-XC/cleaned_code/wandb/run-20221030_084100-1hva11jp/files/code/cleaned_code/main.py b/spaces/Pranjal2041/SemSup-XC/cleaned_code/wandb/run-20221030_084100-1hva11jp/files/code/cleaned_code/main.py deleted file mode 100644 index 829f28918850a481e75e1342060a8838a1a4d34b..0000000000000000000000000000000000000000 --- a/spaces/Pranjal2041/SemSup-XC/cleaned_code/wandb/run-20221030_084100-1hva11jp/files/code/cleaned_code/main.py +++ /dev/null @@ -1,595 +0,0 @@ -#!/usr/bin/env python -# coding=utf-8 -# Copyright 2020 The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" Finetuning the library models for sequence classification on GLUE.""" -# You can also adapt this script on your own text classification task. Pointers for this are left as comments. -print('The script has began') -import itertools -import logging -import os -import random -import sys -import time -from dataclasses import dataclass, field -from typing import Optional -import shutil - -import datasets -import numpy as np -from datasets import load_dataset, load_metric - -import torch - -import transformers -from transformers import ( - AutoConfig, - AutoModelForSequenceClassification, - AutoTokenizer, - DataCollatorWithPadding, - EvalPrediction, - HfArgumentParser, - PretrainedConfig, - Trainer, - TrainingArguments, - default_data_collator, - set_seed, - TrainerCallback, -) -from transformers import BertForSequenceClassification -from transformers.trainer_utils import get_last_checkpoint -from transformers.utils import check_min_version -from transformers.utils.versions import require_version -from src import getTokenizedLabelDescriptions -from src import getLabelModel -from src import SemSupDataset -from src import AutoModelForMultiLabelClassification -from src import multilabel_metrics -from src import task_to_keys, task_to_label_keys, dataset_to_numlabels -from src import DataTrainingArguments, ModelArguments, CustomTrainingArguments -from src import dataset_classification_type -from src import BertForSemanticEmbedding -from src import read_yaml_config -from transformers import AdamW, get_linear_schedule_with_warmup -from torch.utils.data import DataLoader -import os - - -require_version("datasets>=1.8.0", "To fix: pip install -r examples/pytorch/text-classification/requirements.txt") - -logger = logging.getLogger(__name__) - - - - -def setup_logging(training_args): - # Setup logging - logging.basicConfig( - format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", - datefmt="%m/%d/%Y %H:%M:%S", - handlers=[logging.StreamHandler(sys.stdout)], - ) - - log_level = training_args.get_process_log_level() - 
logger.setLevel(log_level) - datasets.utils.logging.set_verbosity(log_level) - transformers.utils.logging.set_verbosity(log_level) - transformers.utils.logging.enable_default_handler() - transformers.utils.logging.enable_explicit_format() - - # Log on each process the small summary: - logger.warning( - f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}" - + f"distributed training: {bool(training_args.local_rank != -1)}, 16-bits training: {training_args.fp16}" - ) - logger.info(f"Training/evaluation parameters {training_args}") - -def get_last_check(training_args): - last_checkpoint = None - if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir: - last_checkpoint = get_last_checkpoint(training_args.output_dir) - if last_checkpoint is None and len(os.listdir(training_args.output_dir)) > 0: - raise ValueError( - f"Output directory ({training_args.output_dir}) already exists and is not empty. " - "Use --overwrite_output_dir to overcome." - ) - elif last_checkpoint is not None and training_args.resume_from_checkpoint is None: - logger.info( - f"Checkpoint detected, resuming training at {last_checkpoint}. To avoid this behavior, change " - "the `--output_dir` or add `--overwrite_output_dir` to train from scratch." - ) - return last_checkpoint - -def main(): - - TIME_LIMIT = 50 * 60 # 1 HOUR - start_time = time.time() - - - # See all possible arguments in src/transformers/training_args.py - # or by passing the --help flag to this script. - # We now keep distinct sets of args, for a cleaner separation of concerns. - print('Main Function is Called!!!', sys.argv) - parser = HfArgumentParser((ModelArguments, DataTrainingArguments, CustomTrainingArguments)) - - if len(sys.argv) > 1 and sys.argv[1].startswith('--local_rank'): - extra_args = {'local_rank' : sys.argv[1].split('=')[1]} - argv = sys.argv[0:1] + sys.argv[2:] - else: - argv = sys.argv - extra_args = {} - - print(len(argv) == 3 and argv[1].endswith(".yml")) - if len(argv) == 2 and argv[1].endswith(".json"): - # If we pass only one argument to the script and it's the path to a json file, - # let's parse it to get our arguments. - model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(argv[1])) - elif len(argv) == 3 and argv[1].endswith(".yml"): - print('Lets Go!!!') - model_args, data_args, training_args = parser.parse_dict(read_yaml_config(os.path.abspath(argv[1]), output_dir = argv[2], extra_args = extra_args)) - print('training args', training_args) - else: - model_args, data_args, training_args = parser.parse_args_into_dataclasses() - - setup_logging(training_args) - if training_args.seed == -1: - training_args.seed = np.random.randint(0, 100000000) - print(training_args.seed) - - - last_checkpoint = get_last_check(training_args) - - set_seed(training_args.seed) - - - if data_args.dataset_name is not None and not data_args.load_from_local: - # Downloading and loading a dataset from the hub. - raw_datasets = load_dataset( - data_args.dataset_name, - data_args.dataset_config_name, - cache_dir=model_args.cache_dir, - use_auth_token=True if model_args.use_auth_token else None, - ) - else: - # Loading a dataset from your local files. - # CSV/JSON training and evaluation files are needed. 
- data_files = {"train": data_args.train_file, "validation": data_args.validation_file} - - # Get the test dataset: you can provide your own CSV/JSON test file (see below) - # when you use `do_predict` without specifying a GLUE benchmark task. - if training_args.do_predict: - if data_args.test_file is not None: - data_files["test"] = data_args.test_file - else: - raise ValueError("Need a test file for `do_predict`.") - - for key in data_files.keys(): - logger.info(f"load a local file for {key}: {data_files[key]}") - - if data_args.train_file.endswith(".csv"): - # Loading a dataset from local csv files - raw_datasets = load_dataset( - "csv", - data_files=data_files, - cache_dir=model_args.cache_dir, - use_auth_token=True if model_args.use_auth_token else None, - ) - else: - # Loading a dataset from local json files - print('df are', data_files, model_args.cache_dir) - raw_datasets = load_dataset( - "json", - data_files=data_files, - cache_dir=model_args.cache_dir, - use_auth_token=True if model_args.use_auth_token else None, - ) - # See more about loading any type of standard or custom dataset at - # https://huggingface.co/docs/datasets/loading_datasets.html. - - - # Labels - if data_args.task_name is not None: - label_key = task_to_label_keys[data_args.task_name] - - - if training_args.scenario == 'unseen_labels': - label_list = [x.strip() for x in open(data_args.all_labels).readlines()] - train_labels = list(set([item for sublist in raw_datasets['train'][label_key] for item in sublist])) - if data_args.test_labels is not None: - test_labels = [x.strip() for x in open(data_args.test_labels).readlines()] - else: - test_labels = list(set([item for sublist in raw_datasets['validation'][label_key] for item in sublist])) - - else: - label_list = list(set(itertools.chain(*[ - [item for sublist in raw_datasets[split_key][label_key] for item in sublist] - for split_key in raw_datasets.keys()] - ))) - num_labels = len(label_list) - label_list.sort() # For consistency - print('Debugging: num_labels: ', num_labels) - print('Debugging: label_list[:50]: ', label_list[:50]) - else: - # Trying to have good defaults here, don't hesitate to tweak to your needs. - # A useful fast method: - # https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.unique - label_list = raw_datasets["train"].unique("label") - label_list.sort() # Let's sort it for determinism - num_labels = len(label_list) - - # Load pretrained model and tokenizer - # - # In distributed training, the .from_pretrained methods guarantee that only one local process can concurrently - # download model & vocab. 
- if model_args.semsup: - label_model, label_tokenizer = getLabelModel(data_args, model_args) - - config = AutoConfig.from_pretrained( - model_args.config_name if model_args.config_name else model_args.model_name_or_path, - # num_labels=num_labels, - finetuning_task=data_args.task_name, - cache_dir=model_args.cache_dir, - revision=model_args.model_revision, - use_auth_token=True if model_args.use_auth_token else None, - ) - - config.model_name_or_path = model_args.model_name_or_path - config.problem_type = dataset_classification_type[data_args.task_name] - config.negative_sampling = model_args.negative_sampling - config.semsup = model_args.semsup - config.encoder_model_type = model_args.encoder_model_type - config.arch_type = model_args.arch_type - config.coil = model_args.coil - config.token_dim = model_args.token_dim - config.colbert = model_args.colbert - - if config.semsup: - config.label_hidden_size = label_model.config.hidden_size - print('Label hidden size is ', label_model.config.hidden_size) - - config.cluster_labels_dim = data_args.num_clusters - temp_label_id = {v: i for i, v in enumerate(label_list)} - - tokenizer = AutoTokenizer.from_pretrained( - model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path, - cache_dir=model_args.cache_dir, - use_fast=True,#model_args.use_fast_tokenizer, - revision=model_args.model_revision, - use_auth_token=True if model_args.use_auth_token else None, - ) - - - # Preprocessing the raw_datasets - if data_args.task_name is not None: - sentence1_key, sentence2_key = task_to_keys[data_args.task_name] - else: - # Again, we try to have some nice defaults but don't hesitate to tweak to your use case. - non_label_column_names = [name for name in raw_datasets["train"].column_names if name != "label"] - if "sentence1" in non_label_column_names and "sentence2" in non_label_column_names: - sentence1_key, sentence2_key = "sentence1", "sentence2" - else: - if len(non_label_column_names) >= 2: - sentence1_key, sentence2_key = non_label_column_names[:2] - else: - sentence1_key, sentence2_key = non_label_column_names[0], None - - # Padding strategy - if data_args.pad_to_max_length: - padding = "max_length" - else: - # We will pad later, dynamically at batch creation, to the max sequence length in each batch - padding = False - - # Some models have set the order of the labels to use, so let's make sure we do use it. 
- def model_init(): - model = BertForSemanticEmbedding(config) - num_frozen_layers = model_args.num_frozen_layers - if num_frozen_layers > 0: - try: - for param in model.encoder.bert.embeddings.parameters(): - param.requires_grad = False - for param in model.encoder.bert.pooler.parameters(): - param.requires_grad = False - for layer in model.encoder.bert.encoder.layer[:num_frozen_layers]: - for param in layer.parameters(): - param.requires_grad = False - except: - for param in model.encoder.embeddings.parameters(): - param.requires_grad = False - for param in model.encoder.pooler.parameters(): - param.requires_grad = False - for layer in model.encoder.encoder.layer[:num_frozen_layers]: - for param in layer.parameters(): - param.requires_grad = False - - # Place the label model inside the main model - if model_args.semsup: - model.label_model = label_model - model.label_tokenizer = label_tokenizer - if model_args.tie_weights: - for i in range(9): - if num_frozen_layers >= 9: - try: - model.label_model.encoder.layer[i] = model.encoder.bert.encoder.layer[i] - except: - model.label_model.encoder.layer[i] = model.encoder.encoder.layer[i] - else: - for param in model.label_model.encoder.layer[i].parameters(): - param.requires_grad = False - for param in model.label_model.embeddings.parameters(): - param.requires_grad = False - for param in model.label_model.pooler.parameters(): - param.requires_grad = False - else: - label_frozen_layers = model_args.label_frozen_layers - if label_frozen_layers > 0: - print(model.label_model) - for param in model.label_model.embeddings.parameters(): - param.requires_grad = False - for param in model.label_model.pooler.parameters(): - param.requires_grad = False - for layer in model.label_model.encoder.layer[:label_frozen_layers]: - for param in layer.parameters(): - param.requires_grad = False - - model.config.label2id = {l: i for i, l in enumerate(label_list)} - model.config.id2label = {id: label for label, id in config.label2id.items()} - return model - model = model_init() - if model_args.pretrained_model_path != '': - model.load_state_dict(torch.load(model_args.pretrained_model_path, map_location = list(model.parameters())[0].device)) - if model_args.pretrained_label_model_path != '': - model.label_model.load_state_dict(torch.load(model_args.pretrained_label_model_path, map_location = list(model.parameters())[0].device)) - - id2label = model.config.id2label - label_to_id = model.config.label2id - - if data_args.max_seq_length > tokenizer.model_max_length: - logger.warning( - f"The max_seq_length passed ({data_args.max_seq_length}) is larger than the maximum length for the" - f"model ({tokenizer.model_max_length}). Using max_seq_length={tokenizer.model_max_length}." 
- ) - max_seq_length = min(data_args.max_seq_length, tokenizer.model_max_length) - - def preprocess_function(examples): - # Tokenize the texts - args = ( - (examples[sentence1_key],) if sentence2_key is None else (examples[sentence1_key], examples[sentence2_key]) - ) - result = tokenizer(*args, padding=padding, max_length=max_seq_length, truncation=True) - - # Map labels to IDs (not necessary for GLUE tasks) - if label_to_id is not None and label_key in examples: - # check if multi-label problem - if isinstance(examples[label_key][0], list): - # Multi-Label, create one-hot encoding - labels = [[label_to_id[l] for l in examples[label_key][i]] for i in range(len(examples[label_key]))] - result["label"] = [[1 if j in labels[i] else 0 for j in range(num_labels)] for i in range(len(labels))] - else: - result["label"] = [(label_to_id[l] if l != -1 else -1) for l in examples["label"]] - - - # Labels keyword should not be present(it may contain string) - try: del input['labels'] - except: ... - - return result - - try: - if data_args.test_descriptions_file == '': - data_args.test_descriptions_file = data_args.descriptions_file - except: data_args.test_descriptions_file = data_args.descriptions_file - - print('Running with_transform') - raw_datasets = raw_datasets.with_transform(preprocess_function) - - class_descs_tokenized = None - if model_args.semsup and data_args.large_dset and os.path.exists(data_args.tokenized_descs_file): - if data_args.tokenized_descs_file.endswith('npy'): - class_descs_tokenized = np.load(data_args.tokenized_descs_file, allow_pickle=True) - - - if training_args.do_train: - if "train" not in raw_datasets: - raise ValueError("--do_train requires a train dataset") - train_dataset = raw_datasets["train"] - if data_args.max_train_samples is not None: - max_train_samples = min(len(train_dataset), data_args.max_train_samples) - train_dataset = train_dataset.select(np.random.choice(len(train_dataset), max_train_samples)) - if model_args.semsup: - train_dataset = SemSupDataset(train_dataset, data_args, data_args.descriptions_file, label_to_id, id2label, label_tokenizer, return_desc_embeddings = True, sampleRandom = data_args.contrastive_learning_samples, cl_min_positive_descs= data_args.cl_min_positive_descs, seen_labels = None if training_args.scenario == 'seen' else train_labels, add_label_name = model_args.add_label_name, max_descs_per_label = data_args.max_descs_per_label, use_precomputed_embeddings = model_args.use_precomputed_embeddings, bm_short_file = data_args.bm_short_file, ignore_pos_labels_file = data_args.ignore_pos_labels_file, class_descs_tokenized = class_descs_tokenized) - else: - train_dataset = SemSupDataset(train_dataset, data_args, data_args.descriptions_file, label_to_id, id2label, None, useSemSup = False, add_label_name = model_args.add_label_name) - - if training_args.do_eval: - if "validation" not in raw_datasets and "validation_matched" not in raw_datasets: - raise ValueError("--do_eval requires a validation dataset") - eval_dataset = raw_datasets["validation_matched" if data_args.task_name == "mnli" else "validation"] - choice_indexes = None - - if data_args.max_eval_samples is not None: - max_eval_samples = min(len(eval_dataset), data_args.max_eval_samples) - if data_args.random_sample_seed != -1: - l = len(eval_dataset) - - np.random.seed(data_args.random_sample_seed) - choice_indexes = np.random.choice(l, max_eval_samples, replace = False).tolist() - choice_indexes = [x for x in choice_indexes] - import pickle - pickle.dump(choice_indexes, 
open('choice_indexes.pkl','wb')) - eval_dataset = eval_dataset.select(choice_indexes) - np.random.seed() - else: - choice_indexes = None - eval_dataset = eval_dataset.select(range(max_eval_samples)) - - if model_args.semsup: - eval_dataset = SemSupDataset(eval_dataset, data_args, data_args.test_descriptions_file, label_to_id, id2label, label_tokenizer, return_desc_embeddings=True, seen_labels = None if training_args.scenario == 'seen' else test_labels, add_label_name = model_args.add_label_name, max_descs_per_label = data_args.max_descs_per_label, use_precomputed_embeddings = model_args.use_precomputed_embeddings, class_descs_tokenized = class_descs_tokenized, isTrain = False, choice_indexes = choice_indexes) - - if training_args.do_predict: - if "test" not in raw_datasets and "test_matched" not in raw_datasets: - raise ValueError("--do_predict requires a test dataset") - predict_dataset = raw_datasets["test_matched" if data_args.task_name == "mnli" else "test"] - if data_args.max_predict_samples is not None: - max_predict_samples = min(len(predict_dataset), data_args.max_predict_samples) - predict_dataset = predict_dataset.select(range(max_predict_samples)) - - - compute_metrics = multilabel_metrics(data_args, model.config.id2label, model.config.label2id, {}, training_args) - - if data_args.pad_to_max_length: - data_collator = default_data_collator - elif training_args.fp16: - data_collator = DataCollatorWithPadding(tokenizer, pad_to_multiple_of=8) - else: - data_collator = None - - - # Initialize our Trainer - print('Initializing Optimizers') - from torch.optim import AdamW - # from transformers import AdamW - if model_args.use_custom_optimizer: - decay_cond = lambda x: x[0].lower().find('layernorm.weight')!=-1 or x[0].lower().find('bias')!=-1 - if model_args.semsup and not data_args.hyper_search: - main_decay_params = list(map(lambda x: x[1], filter(lambda x: x[1].requires_grad and decay_cond(x) and (x[0][12:], x[1]) not in model.label_model.named_parameters() , model.named_parameters()))) - main_no_decay_params = list(map(lambda x: x[1],filter(lambda x: x[1].requires_grad and not decay_cond(x) and (x[0][12:], x[1]) not in model.label_model.named_parameters(), model.named_parameters()))) - label_decay_params = list(map(lambda x: x[1], filter(lambda x: x[1].requires_grad and decay_cond(x) , model.label_model.named_parameters()))) - label_no_decay_params = list(map(lambda x: x[1],filter(lambda x: x[1].requires_grad and not decay_cond(x), model.label_model.named_parameters()))) - if model_args.tie_weights: - label_decay_params = list(set(label_decay_params).difference(main_decay_params)) - label_no_decay_params = list(set(label_no_decay_params).difference(main_no_decay_params)) - - optimizer = AdamW([ - {'params': main_decay_params, 'weight_decay': 1e-2}, - {'params': main_no_decay_params, 'weight_decay': 0}, - {'params': label_decay_params, 'weight_decay': 1e-2, 'lr' : training_args.output_learning_rate}, - {'params': label_no_decay_params, 'weight_decay': 0, 'lr' : training_args.output_learning_rate} - ], - lr = training_args.learning_rate, eps= 1e-6) - ... 
- else: - decay_params = list(map(lambda x: x[1], filter(lambda x: decay_cond(x), model.named_parameters()))) - no_decay_params = list(map(lambda x: x[1],filter(lambda x: not decay_cond(x), model.named_parameters()))) - optimizer = optim.AdamW([ - {'params': decay_params, 'weight_decay': 1e-2}, - {'params': no_decay_params, 'weight_decay': 0}], - lr = training_args.learning_rate, eps= 1e-6) - - - trainer = Trainer( - model=model, - model_init= None, - args=training_args, - train_dataset=train_dataset if training_args.do_train else None, - eval_dataset=eval_dataset if training_args.do_eval else None, - compute_metrics=compute_metrics, - tokenizer=tokenizer, - data_collator=data_collator, - optimizers = (optimizer if model_args.use_custom_optimizer else None, None) , - ) - - # Training - if training_args.do_train: - checkpoint = None - if training_args.resume_from_checkpoint is not None: - checkpoint = training_args.resume_from_checkpoint - elif last_checkpoint is not None: - checkpoint = last_checkpoint - train_result = trainer.train(resume_from_checkpoint=checkpoint) - metrics = train_result.metrics - max_train_samples = ( - data_args.max_train_samples if data_args.max_train_samples is not None else len(train_dataset) - ) - metrics["train_samples"] = min(max_train_samples, len(train_dataset)) - - trainer.save_model() # Saves the tokenizer too for easy upload - - trainer.log_metrics("train", metrics) - trainer.save_metrics("train", metrics) - trainer.save_state() - - # Evaluation - if training_args.do_eval: - logger.info("*** Evaluate ***") - - # Loop to handle MNLI double evaluation (matched, mis-matched) - tasks = [data_args.task_name] - eval_datasets = [eval_dataset] - if data_args.task_name == "mnli": - tasks.append("mnli-mm") - eval_datasets.append(raw_datasets["validation_mismatched"]) - combined = {} - - for eval_dataset, task in zip(eval_datasets, tasks): - metrics = trainer.evaluate(eval_dataset=eval_dataset) - - max_eval_samples = ( - data_args.max_eval_samples if data_args.max_eval_samples is not None else len(eval_dataset) - ) - metrics["eval_samples"] = min(max_eval_samples, len(eval_dataset)) - - if task == "mnli-mm": - metrics = {k + "_mm": v for k, v in metrics.items()} - if task is not None and "mnli" in task: - combined.update(metrics) - - trainer.save_metrics("eval", combined if task is not None and "mnli" in task else metrics) - trainer.log_metrics("eval", metrics) - - if training_args.do_predict: - logger.info("*** Predict ***") - - # Loop to handle MNLI double evaluation (matched, mis-matched) - tasks = [data_args.task_name] - predict_datasets = [predict_dataset] - if data_args.task_name == "mnli": - tasks.append("mnli-mm") - predict_datasets.append(raw_datasets["test_mismatched"]) - - for predict_dataset, task in zip(predict_datasets, tasks): - # Removing the `label` columns because it contains -1 and Trainer won't like that. 
- predict_dataset = predict_dataset.remove_columns("label") - predictions = trainer.predict(predict_dataset, metric_key_prefix="predict").predictions - predictions = np.argmax(predictions, axis=1) - - output_predict_file = os.path.join(training_args.output_dir, f"predict_results_{task}.txt") - if trainer.is_world_process_zero(): - with open(output_predict_file, "w") as writer: - logger.info(f"***** Predict results {task} *****") - writer.write("index\tprediction\n") - for index, item in enumerate(predictions): - item = label_list[item] - writer.write(f"{index}\t{item}\n") - - kwargs = {"finetuned_from": model_args.model_name_or_path, "tasks": "text-classification"} - if data_args.task_name is not None: - kwargs["language"] = "en" - kwargs["dataset_tags"] = "glue" - kwargs["dataset_args"] = data_args.task_name - kwargs["dataset"] = f"GLUE {data_args.task_name.upper()}" - - if training_args.push_to_hub: - trainer.push_to_hub(**kwargs) - else: - trainer.create_model_card(**kwargs) - - -def _mp_fn(index): - # For xla_spawn (TPUs) - main() - - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/tests/modules/test_rope.py b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/tests/modules/test_rope.py deleted file mode 100644 index 067c6f067acbf27fb0fef5c2b812c22474c4fcd0..0000000000000000000000000000000000000000 --- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/tests/modules/test_rope.py +++ /dev/null @@ -1,168 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch - -from audiocraft.modules.rope import RotaryEmbedding -from audiocraft.modules.transformer import StreamingTransformer, set_efficient_attention_backend - - -def test_rope(): - set_efficient_attention_backend('xformers') - B, T, H, C = 8, 75, 16, 128 - - rope = RotaryEmbedding(dim=C) - xq = torch.rand((B, T, H, C)) - xk = torch.rand((B, T, H, C)) - xq_out, xk_out = rope.rotate_qk(xq, xk, start=7) - - assert list(xq_out.shape) == [B, T, H, C] - assert list(xk_out.shape) == [B, T, H, C] - - -def test_rope_io_dtypes(): - set_efficient_attention_backend('xformers') - B, T, H, C = 8, 75, 16, 128 - - rope_32 = RotaryEmbedding(dim=C, dtype=torch.float32) - rope_64 = RotaryEmbedding(dim=C, dtype=torch.float64) - - # Test bfloat16 inputs w/ both 32 and 64 precision rope. - xq_16 = torch.rand((B, T, H, C)).to(torch.bfloat16) - xk_16 = torch.rand((B, T, H, C)).to(torch.bfloat16) - xq_out, xk_out = rope_32.rotate_qk(xq_16, xk_16) - assert xq_out.dtype == torch.bfloat16 - xq_out, xk_out = rope_64.rotate_qk(xq_16, xk_16) - assert xq_out.dtype == torch.bfloat16 - - # Test float32 inputs w/ both 32 and 64 precision rope. 
- xq_32 = torch.rand((B, T, H, C)).to(torch.float32) - xk_32 = torch.rand((B, T, H, C)).to(torch.float32) - xq_out, xk_out = rope_32.rotate_qk(xq_32, xk_32) - assert xq_out.dtype == torch.float32 - xq_out, xk_out = rope_64.rotate_qk(xq_32, xk_32) - assert xq_out.dtype == torch.float32 - - -def test_transformer_with_rope(): - set_efficient_attention_backend('xformers') - torch.manual_seed(1234) - for pos in ['rope', 'sin_rope']: - tr = StreamingTransformer( - 16, 4, 2, custom=True, dropout=0., layer_scale=0.1, - positional_embedding=pos) - tr.eval() - steps = 12 - x = torch.randn(3, steps, 16) - - out = tr(x) - assert list(out.shape) == list(x.shape) - - -@torch.no_grad() -def test_rope_streaming(): - set_efficient_attention_backend('xformers') - torch.manual_seed(1234) - tr = StreamingTransformer( - 16, 4, 2, causal=True, dropout=0., - custom=True, positional_embedding='rope') - tr.eval() - steps = 12 - x = torch.randn(3, steps, 16) - - ref = tr(x) - - with tr.streaming(): - outs = [] - frame_sizes = [1] * steps - - for frame_size in frame_sizes: - frame = x[:, :frame_size] - x = x[:, frame_size:] - outs.append(tr(frame)) - - out = torch.cat(outs, dim=1) - assert list(out.shape) == [3, steps, 16] - delta = torch.norm(out - ref) / torch.norm(out) - assert delta < 1e-6, delta - - -@torch.no_grad() -def test_rope_streaming_past_context(): - set_efficient_attention_backend('xformers') - torch.manual_seed(1234) - - for context in [None, 10]: - tr = StreamingTransformer( - 16, 4, 1 if context else 2, - causal=True, past_context=context, custom=True, - dropout=0., positional_embedding='rope') - tr.eval() - - steps = 20 - x = torch.randn(3, steps, 16) - ref = tr(x) - - with tr.streaming(): - outs = [] - frame_sizes = [1] * steps - - for frame_size in frame_sizes: - frame = x[:, :frame_size] - x = x[:, frame_size:] - outs.append(tr(frame)) - - out = torch.cat(outs, dim=1) - assert list(out.shape) == [3, steps, 16] - delta = torch.norm(out - ref) / torch.norm(out) - assert delta < 1e-6, delta - - -def test_rope_memory_efficient(): - set_efficient_attention_backend('xformers') - torch.manual_seed(1234) - tr = StreamingTransformer( - 16, 4, 2, custom=True, dropout=0., layer_scale=0.1, - positional_embedding='rope') - tr_mem_efficient = StreamingTransformer( - 16, 4, 2, dropout=0., memory_efficient=True, layer_scale=0.1, - positional_embedding='rope') - tr_mem_efficient.load_state_dict(tr.state_dict()) - tr.eval() - steps = 12 - x = torch.randn(3, steps, 16) - - with torch.no_grad(): - y = tr(x) - y2 = tr_mem_efficient(x) - # Check at float precision b/c this is the rope default. 
- assert torch.allclose(y, y2, atol=1e-7), (y - y2).norm() - - -def test_rope_with_xpos(): - set_efficient_attention_backend('xformers') - B, T, H, C = 8, 75, 16, 128 - - rope = RotaryEmbedding(dim=C, xpos=True) - xq = torch.rand((B, T, H, C)) - xk = torch.rand((B, T, H, C)) - xq_out, xk_out = rope.rotate_qk(xq, xk, start=7) - - assert list(xq_out.shape) == [B, T, H, C] - assert list(xk_out.shape) == [B, T, H, C] - - -def test_positional_scale(): - set_efficient_attention_backend('xformers') - B, T, H, C = 8, 75, 16, 128 - - rope = RotaryEmbedding(dim=C, xpos=True, scale=0.0) - xq = torch.rand((B, T, H, C)) - xk = torch.rand((B, T, H, C)) - xq_out, xk_out = rope.rotate_qk(xq, xk, start=7) - - assert torch.allclose(xq, xq_out) - assert torch.allclose(xk, xk_out) diff --git a/spaces/Promit/BrainSEG/app.py b/spaces/Promit/BrainSEG/app.py deleted file mode 100644 index 7ec9993e5b03387b7626cef735d5217df702a2ed..0000000000000000000000000000000000000000 --- a/spaces/Promit/BrainSEG/app.py +++ /dev/null @@ -1,66 +0,0 @@ -import numpy as np -import cv2 -import matplotlib.pyplot as plt -import tensorflow as tf -smooth=1. -#-----------------------------------------------------------------------------------------------------------------------------------------------------------# -'''Function for returning dice coefficient''' -def DICE_COEFF(y_true, y_pred): - y_true = K.flatten(y_true) - y_pred = K.flatten(y_pred) - intersection = K.sum(y_true * y_pred) - union = K.sum(y_true) + K.sum(y_pred) - return (2.0 * intersection + smooth) / (union + smooth) -'''Dice Coefficient Loss''' -def dice_coef_loss(y_true, y_pred): - return 1 - DICE_COEFF(y_true, y_pred) -#---------------------------------------------------------# -'''Function for combining binary cross entropy with dice coeffcients for loss function''' -def bce_dice_loss(y_true, y_pred): - bce = tf.keras.losses.BinaryCrossentropy() - return dice_coef_loss(y_true, y_pred) + bce(y_true, y_pred) - -#----------------------------------------------------------------------------------------------------------------------------------------------------------# - -'''Function for Jacards coefficient''' -def IOU_JACARD(y_true, y_pred): - y_true=K.flatten(y_true) - y_pred=K.flatten(y_pred) - intersection = K.sum(y_true * y_pred) - sum_jac = K.sum(y_true + y_pred) - jac = (intersection + smooth) / (sum_jac - intersection + smooth) - return jac -#----------------------------------------------------------------------------------------------------------------------------------------------------------# - -model = tf.keras.models.load_model('MRI_Attention_UNet_ResNet.hdf5',custom_objects={'bce_dice_loss':bce_dice_loss,'IOU_JACARD': IOU_JACARD,'DICE_COEFF':DICE_COEFF}) - - - -def segment(image): - - - - images = np.array(image) - - # Converting it to 'float32' - images = images.astype('float32') - - # Normalize the Numpy array (if desired) - images = images / 255.0 - - # Convert the Numpy array to a TensorFlow tensor - images = tf.convert_to_tensor(images) - images=tf.image.resize(images,[256,256]) - images=np.array(images) - images=tf.expand_dims(images,axis=0) - images=model.predict(images) - images=np.array(images) - images=images.reshape((256,256)) - print(images.shape) - return images - - -import gradio as gr - -# Write 1 line of Python to create a simple GUI -gr.Interface(fn=segment, inputs="image", outputs="image").launch(); \ No newline at end of file diff --git a/spaces/Q-b1t/Dog_Emotions_Vision_Classifier/README.md 
b/spaces/Q-b1t/Dog_Emotions_Vision_Classifier/README.md deleted file mode 100644 index 6d2c9e07b7162c9066979e91ec979032c49d9e80..0000000000000000000000000000000000000000 --- a/spaces/Q-b1t/Dog_Emotions_Vision_Classifier/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Dog Emotions Vision Classifier -emoji: 🐶 -colorFrom: gray -colorTo: blue -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/QinQiuFox/get_ppt/style.css b/spaces/QinQiuFox/get_ppt/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/QinQiuFox/get_ppt/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/RaidedCluster/Sniffusion_PomerAInian/app.py b/spaces/RaidedCluster/Sniffusion_PomerAInian/app.py deleted file mode 100644 index 6992c7a01488192e3e31973e442f5c4e78c19e22..0000000000000000000000000000000000000000 --- a/spaces/RaidedCluster/Sniffusion_PomerAInian/app.py +++ /dev/null @@ -1,35 +0,0 @@ -from tensorflow.keras.models import load_model -from huggingface_hub import from_pretrained_keras -import streamlit as st -import numpy as np -import cv2 -from PIL import Image - -st.markdown('Image', unsafe_allow_html=True) - -st.header("Sniffusion PomerAInian") -st.write("Human/AI Art Classifier.") -st.write("NOTE: PomerAInian is a small model and was only trained on LEXICA Stable Diffusion images, images generated by other models may not be classified correctly.") -upload= st.file_uploader('Insert image for detection:', type=['png','jpg']) -c1, c2= st.columns(2) -if upload is not None: - im= Image.open(upload) - img= np.asarray(im) - img = cv2.resize(img, (224, 224)) - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) - img = img / 255.0 - img = np.expand_dims(img, axis=0) - c1.header('Input Image') - c1.image(im) - c1.write(img.shape) - model = from_pretrained_keras("RaidedCluster/Sniffusion-PomerAInian") - prediction = model.predict(img) - hf=str(prediction[0][0]*100)+'% Human Factor' - c2.header('Output') - c2.subheader('Estimation:') - if prediction >=0.5: - est="Estimated to be Human Art." - else: - est="Estimated to be AI Art." 
- c2.write(est) - c2.write(hf) \ No newline at end of file diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/candidates.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/candidates.py deleted file mode 100644 index f5bc343b91be9cd7038a64cda0df08a37f932e61..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/candidates.py +++ /dev/null @@ -1,556 +0,0 @@ -import logging -import sys -from typing import TYPE_CHECKING, Any, FrozenSet, Iterable, Optional, Tuple, Union, cast - -from pip._vendor.packaging.utils import NormalizedName, canonicalize_name -from pip._vendor.packaging.version import Version - -from pip._internal.exceptions import ( - HashError, - InstallationSubprocessError, - MetadataInconsistent, -) -from pip._internal.metadata import BaseDistribution -from pip._internal.models.link import Link, links_equivalent -from pip._internal.models.wheel import Wheel -from pip._internal.req.constructors import ( - install_req_from_editable, - install_req_from_line, -) -from pip._internal.req.req_install import InstallRequirement -from pip._internal.utils.direct_url_helpers import direct_url_from_link -from pip._internal.utils.misc import normalize_version_info - -from .base import Candidate, CandidateVersion, Requirement, format_name - -if TYPE_CHECKING: - from .factory import Factory - -logger = logging.getLogger(__name__) - -BaseCandidate = Union[ - "AlreadyInstalledCandidate", - "EditableCandidate", - "LinkCandidate", -] - -# Avoid conflicting with the PyPI package "Python". -REQUIRES_PYTHON_IDENTIFIER = cast(NormalizedName, "") - - -def as_base_candidate(candidate: Candidate) -> Optional[BaseCandidate]: - """The runtime version of BaseCandidate.""" - base_candidate_classes = ( - AlreadyInstalledCandidate, - EditableCandidate, - LinkCandidate, - ) - if isinstance(candidate, base_candidate_classes): - return candidate - return None - - -def make_install_req_from_link( - link: Link, template: InstallRequirement -) -> InstallRequirement: - assert not template.editable, "template is editable" - if template.req: - line = str(template.req) - else: - line = link.url - ireq = install_req_from_line( - line, - user_supplied=template.user_supplied, - comes_from=template.comes_from, - use_pep517=template.use_pep517, - isolated=template.isolated, - constraint=template.constraint, - options=dict( - install_options=template.install_options, - global_options=template.global_options, - hashes=template.hash_options, - ), - config_settings=template.config_settings, - ) - ireq.original_link = template.original_link - ireq.link = link - return ireq - - -def make_install_req_from_editable( - link: Link, template: InstallRequirement -) -> InstallRequirement: - assert template.editable, "template not editable" - return install_req_from_editable( - link.url, - user_supplied=template.user_supplied, - comes_from=template.comes_from, - use_pep517=template.use_pep517, - isolated=template.isolated, - constraint=template.constraint, - permit_editable_wheels=template.permit_editable_wheels, - options=dict( - install_options=template.install_options, - global_options=template.global_options, - hashes=template.hash_options, - ), - config_settings=template.config_settings, - ) - - -def _make_install_req_from_dist( - dist: BaseDistribution, template: InstallRequirement -) -> InstallRequirement: - if template.req: - line = 
str(template.req) - elif template.link: - line = f"{dist.canonical_name} @ {template.link.url}" - else: - line = f"{dist.canonical_name}=={dist.version}" - ireq = install_req_from_line( - line, - user_supplied=template.user_supplied, - comes_from=template.comes_from, - use_pep517=template.use_pep517, - isolated=template.isolated, - constraint=template.constraint, - options=dict( - install_options=template.install_options, - global_options=template.global_options, - hashes=template.hash_options, - ), - config_settings=template.config_settings, - ) - ireq.satisfied_by = dist - return ireq - - -class _InstallRequirementBackedCandidate(Candidate): - """A candidate backed by an ``InstallRequirement``. - - This represents a package request with the target not being already - in the environment, and needs to be fetched and installed. The backing - ``InstallRequirement`` is responsible for most of the leg work; this - class exposes appropriate information to the resolver. - - :param link: The link passed to the ``InstallRequirement``. The backing - ``InstallRequirement`` will use this link to fetch the distribution. - :param source_link: The link this candidate "originates" from. This is - different from ``link`` when the link is found in the wheel cache. - ``link`` would point to the wheel cache, while this points to the - found remote link (e.g. from pypi.org). - """ - - dist: BaseDistribution - is_installed = False - - def __init__( - self, - link: Link, - source_link: Link, - ireq: InstallRequirement, - factory: "Factory", - name: Optional[NormalizedName] = None, - version: Optional[CandidateVersion] = None, - ) -> None: - self._link = link - self._source_link = source_link - self._factory = factory - self._ireq = ireq - self._name = name - self._version = version - self.dist = self._prepare() - - def __str__(self) -> str: - return f"{self.name} {self.version}" - - def __repr__(self) -> str: - return "{class_name}({link!r})".format( - class_name=self.__class__.__name__, - link=str(self._link), - ) - - def __hash__(self) -> int: - return hash((self.__class__, self._link)) - - def __eq__(self, other: Any) -> bool: - if isinstance(other, self.__class__): - return links_equivalent(self._link, other._link) - return False - - @property - def source_link(self) -> Optional[Link]: - return self._source_link - - @property - def project_name(self) -> NormalizedName: - """The normalised name of the project the candidate refers to""" - if self._name is None: - self._name = self.dist.canonical_name - return self._name - - @property - def name(self) -> str: - return self.project_name - - @property - def version(self) -> CandidateVersion: - if self._version is None: - self._version = self.dist.version - return self._version - - def format_for_error(self) -> str: - return "{} {} (from {})".format( - self.name, - self.version, - self._link.file_path if self._link.is_file else self._link, - ) - - def _prepare_distribution(self) -> BaseDistribution: - raise NotImplementedError("Override in subclass") - - def _check_metadata_consistency(self, dist: BaseDistribution) -> None: - """Check for consistency of project name and version of dist.""" - if self._name is not None and self._name != dist.canonical_name: - raise MetadataInconsistent( - self._ireq, - "name", - self._name, - dist.canonical_name, - ) - if self._version is not None and self._version != dist.version: - raise MetadataInconsistent( - self._ireq, - "version", - str(self._version), - str(dist.version), - ) - - def _prepare(self) -> BaseDistribution: - 
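"""Fetch and prepare the backing distribution, then check that its name and version match this candidate."""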
try: - dist = self._prepare_distribution() - except HashError as e: - # Provide HashError the underlying ireq that caused it. This - # provides context for the resulting error message to show the - # offending line to the user. - e.req = self._ireq - raise - except InstallationSubprocessError as exc: - # The output has been presented already, so don't duplicate it. - exc.context = "See above for output." - raise - - self._check_metadata_consistency(dist) - return dist - - def iter_dependencies(self, with_requires: bool) -> Iterable[Optional[Requirement]]: - requires = self.dist.iter_dependencies() if with_requires else () - for r in requires: - yield self._factory.make_requirement_from_spec(str(r), self._ireq) - yield self._factory.make_requires_python_requirement(self.dist.requires_python) - - def get_install_requirement(self) -> Optional[InstallRequirement]: - return self._ireq - - -class LinkCandidate(_InstallRequirementBackedCandidate): - is_editable = False - - def __init__( - self, - link: Link, - template: InstallRequirement, - factory: "Factory", - name: Optional[NormalizedName] = None, - version: Optional[CandidateVersion] = None, - ) -> None: - source_link = link - cache_entry = factory.get_wheel_cache_entry(link, name) - if cache_entry is not None: - logger.debug("Using cached wheel link: %s", cache_entry.link) - link = cache_entry.link - ireq = make_install_req_from_link(link, template) - assert ireq.link == link - if ireq.link.is_wheel and not ireq.link.is_file: - wheel = Wheel(ireq.link.filename) - wheel_name = canonicalize_name(wheel.name) - assert name == wheel_name, f"{name!r} != {wheel_name!r} for wheel" - # Version may not be present for PEP 508 direct URLs - if version is not None: - wheel_version = Version(wheel.version) - assert version == wheel_version, "{!r} != {!r} for wheel {}".format( - version, wheel_version, name - ) - - if cache_entry is not None: - if cache_entry.persistent and template.link is template.original_link: - ireq.original_link_is_in_wheel_cache = True - if cache_entry.origin is not None: - ireq.download_info = cache_entry.origin - else: - # Legacy cache entry that does not have origin.json. - # download_info may miss the archive_info.hash field. 
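- # Fall back to rebuilding download_info from the original source link.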
- ireq.download_info = direct_url_from_link( - source_link, link_is_in_wheel_cache=cache_entry.persistent - ) - - super().__init__( - link=link, - source_link=source_link, - ireq=ireq, - factory=factory, - name=name, - version=version, - ) - - def _prepare_distribution(self) -> BaseDistribution: - preparer = self._factory.preparer - return preparer.prepare_linked_requirement(self._ireq, parallel_builds=True) - - -class EditableCandidate(_InstallRequirementBackedCandidate): - is_editable = True - - def __init__( - self, - link: Link, - template: InstallRequirement, - factory: "Factory", - name: Optional[NormalizedName] = None, - version: Optional[CandidateVersion] = None, - ) -> None: - super().__init__( - link=link, - source_link=link, - ireq=make_install_req_from_editable(link, template), - factory=factory, - name=name, - version=version, - ) - - def _prepare_distribution(self) -> BaseDistribution: - return self._factory.preparer.prepare_editable_requirement(self._ireq) - - -class AlreadyInstalledCandidate(Candidate): - is_installed = True - source_link = None - - def __init__( - self, - dist: BaseDistribution, - template: InstallRequirement, - factory: "Factory", - ) -> None: - self.dist = dist - self._ireq = _make_install_req_from_dist(dist, template) - self._factory = factory - - # This is just logging some messages, so we can do it eagerly. - # The returned dist would be exactly the same as self.dist because we - # set satisfied_by in _make_install_req_from_dist. - # TODO: Supply reason based on force_reinstall and upgrade_strategy. - skip_reason = "already satisfied" - factory.preparer.prepare_installed_requirement(self._ireq, skip_reason) - - def __str__(self) -> str: - return str(self.dist) - - def __repr__(self) -> str: - return "{class_name}({distribution!r})".format( - class_name=self.__class__.__name__, - distribution=self.dist, - ) - - def __hash__(self) -> int: - return hash((self.__class__, self.name, self.version)) - - def __eq__(self, other: Any) -> bool: - if isinstance(other, self.__class__): - return self.name == other.name and self.version == other.version - return False - - @property - def project_name(self) -> NormalizedName: - return self.dist.canonical_name - - @property - def name(self) -> str: - return self.project_name - - @property - def version(self) -> CandidateVersion: - return self.dist.version - - @property - def is_editable(self) -> bool: - return self.dist.editable - - def format_for_error(self) -> str: - return f"{self.name} {self.version} (Installed)" - - def iter_dependencies(self, with_requires: bool) -> Iterable[Optional[Requirement]]: - if not with_requires: - return - for r in self.dist.iter_dependencies(): - yield self._factory.make_requirement_from_spec(str(r), self._ireq) - - def get_install_requirement(self) -> Optional[InstallRequirement]: - return None - - -class ExtrasCandidate(Candidate): - """A candidate that has 'extras', indicating additional dependencies. - - Requirements can be for a project with dependencies, something like - foo[extra]. The extras don't affect the project/version being installed - directly, but indicate that we need additional dependencies. We model that - by having an artificial ExtrasCandidate that wraps the "base" candidate. - - The ExtrasCandidate differs from the base in the following ways: - - 1. It has a unique name, of the form foo[extra]. This causes the resolver - to treat it as a separate node in the dependency graph. - 2. 
When we're getting the candidate's dependencies, - a) We specify that we want the extra dependencies as well. - b) We add a dependency on the base candidate. - See below for why this is needed. - 3. We return None for the underlying InstallRequirement, as the base - candidate will provide it, and we don't want to end up with duplicates. - - The dependency on the base candidate is needed so that the resolver can't - decide that it should recommend foo[extra1] version 1.0 and foo[extra2] - version 2.0. Having those candidates depend on foo=1.0 and foo=2.0 - respectively forces the resolver to recognise that this is a conflict. - """ - - def __init__( - self, - base: BaseCandidate, - extras: FrozenSet[str], - ) -> None: - self.base = base - self.extras = extras - - def __str__(self) -> str: - name, rest = str(self.base).split(" ", 1) - return "{}[{}] {}".format(name, ",".join(self.extras), rest) - - def __repr__(self) -> str: - return "{class_name}(base={base!r}, extras={extras!r})".format( - class_name=self.__class__.__name__, - base=self.base, - extras=self.extras, - ) - - def __hash__(self) -> int: - return hash((self.base, self.extras)) - - def __eq__(self, other: Any) -> bool: - if isinstance(other, self.__class__): - return self.base == other.base and self.extras == other.extras - return False - - @property - def project_name(self) -> NormalizedName: - return self.base.project_name - - @property - def name(self) -> str: - """The normalised name of the project the candidate refers to""" - return format_name(self.base.project_name, self.extras) - - @property - def version(self) -> CandidateVersion: - return self.base.version - - def format_for_error(self) -> str: - return "{} [{}]".format( - self.base.format_for_error(), ", ".join(sorted(self.extras)) - ) - - @property - def is_installed(self) -> bool: - return self.base.is_installed - - @property - def is_editable(self) -> bool: - return self.base.is_editable - - @property - def source_link(self) -> Optional[Link]: - return self.base.source_link - - def iter_dependencies(self, with_requires: bool) -> Iterable[Optional[Requirement]]: - factory = self.base._factory - - # Add a dependency on the exact base - # (See note 2b in the class docstring) - yield factory.make_requirement_from_candidate(self.base) - if not with_requires: - return - - # The user may have specified extras that the candidate doesn't - # support. We ignore any unsupported extras here. - valid_extras = self.extras.intersection(self.base.dist.iter_provided_extras()) - invalid_extras = self.extras.difference(self.base.dist.iter_provided_extras()) - for extra in sorted(invalid_extras): - logger.warning( - "%s %s does not provide the extra '%s'", - self.base.name, - self.version, - extra, - ) - - for r in self.base.dist.iter_dependencies(valid_extras): - requirement = factory.make_requirement_from_spec( - str(r), self.base._ireq, valid_extras - ) - if requirement: - yield requirement - - def get_install_requirement(self) -> Optional[InstallRequirement]: - # We don't return anything here, because we always - # depend on the base candidate, and we'll get the - # install requirement from that. 
- return None - - -class RequiresPythonCandidate(Candidate): - is_installed = False - source_link = None - - def __init__(self, py_version_info: Optional[Tuple[int, ...]]) -> None: - if py_version_info is not None: - version_info = normalize_version_info(py_version_info) - else: - version_info = sys.version_info[:3] - self._version = Version(".".join(str(c) for c in version_info)) - - # We don't need to implement __eq__() and __ne__() since there is always - # only one RequiresPythonCandidate in a resolution, i.e. the host Python. - # The built-in object.__eq__() and object.__ne__() do exactly what we want. - - def __str__(self) -> str: - return f"Python {self._version}" - - @property - def project_name(self) -> NormalizedName: - return REQUIRES_PYTHON_IDENTIFIER - - @property - def name(self) -> str: - return REQUIRES_PYTHON_IDENTIFIER - - @property - def version(self) -> CandidateVersion: - return self._version - - def format_for_error(self) -> str: - return f"Python {self.version}" - - def iter_dependencies(self, with_requires: bool) -> Iterable[Optional[Requirement]]: - return () - - def get_install_requirement(self) -> Optional[InstallRequirement]: - return None diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools_rust/clean.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools_rust/clean.py deleted file mode 100644 index 6a46f3ee57ec87b0a8cff44013e0b39480b94a86..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools_rust/clean.py +++ /dev/null @@ -1,30 +0,0 @@ -import subprocess -import sys - -from .command import RustCommand -from .extension import RustExtension - - -class clean_rust(RustCommand): - """Clean Rust extensions.""" - - description = "clean Rust extensions (compile/link to build directory)" - - def initialize_options(self) -> None: - super().initialize_options() - self.inplace = False - - def run_for_extension(self, ext: RustExtension) -> None: - # build cargo command - args = ["cargo", "clean", "--manifest-path", ext.path] - if ext.cargo_manifest_args: - args.extend(ext.cargo_manifest_args) - - if not ext.quiet: - print(" ".join(args), file=sys.stderr) - - # Execute cargo command - try: - subprocess.check_output(args) - except: - pass diff --git a/spaces/Realcat/image-matching-webui/third_party/ASpanFormer/src/ASpanFormer/aspan_module/attention.py b/spaces/Realcat/image-matching-webui/third_party/ASpanFormer/src/ASpanFormer/aspan_module/attention.py deleted file mode 100644 index 10049e3b5a4e39147a17ce3683f760afd8de73ae..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/ASpanFormer/src/ASpanFormer/aspan_module/attention.py +++ /dev/null @@ -1,315 +0,0 @@ -import torch -from torch.nn import Module -import torch.nn as nn -from itertools import product -from torch.nn import functional as F - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - - -class layernorm2d(nn.Module): - def __init__(self, dim): - super().__init__() - self.dim = dim - self.affine = nn.parameter.Parameter(torch.ones(dim), requires_grad=True) - self.bias = nn.parameter.Parameter(torch.zeros(dim), requires_grad=True) - - def forward(self, x): - # x: B*C*H*W - mean, std = x.mean(dim=1, keepdim=True), x.std(dim=1, keepdim=True) - return ( - self.affine[None, :, None, None] * (x - mean) / (std + 1e-6) - + self.bias[None, :, None, None] - ) - - -class HierachicalAttention(Module): - def __init__(self, d_model, nhead, nsample, 
radius_scale, nlevel=3): - super().__init__() - self.d_model = d_model - self.nhead = nhead - self.nsample = nsample - self.nlevel = nlevel - self.radius_scale = radius_scale - self.merge_head = nn.Sequential( - nn.Conv1d(d_model * 3, d_model, kernel_size=1, bias=False), - nn.ReLU(True), - nn.Conv1d(d_model, d_model, kernel_size=1, bias=False), - ) - self.fullattention = FullAttention(d_model, nhead) - self.temp = nn.parameter.Parameter(torch.tensor(1.0), requires_grad=True) - sample_offset = torch.tensor( - [ - [pos[0] - nsample[1] / 2 + 0.5, pos[1] - nsample[1] / 2 + 0.5] - for pos in product(range(nsample[1]), range(nsample[1])) - ] - ) # r^2*2 - self.sample_offset = nn.parameter.Parameter(sample_offset, requires_grad=False) - - def forward( - self, - query, - key, - value, - flow, - size_q, - size_kv, - mask0=None, - mask1=None, - ds0=[4, 4], - ds1=[4, 4], - ): - """ - Args: - q,k,v (torch.Tensor): [B, C, L] - mask (torch.Tensor): [B, L] - flow (torch.Tensor): [B, H, W, 4] - Return: - all_message (torch.Tensor): [B, C, H, W] - """ - - variance = flow[:, :, :, 2:] - offset = flow[:, :, :, :2] # B*H*W*2 - bs = query.shape[0] - h0, w0 = size_q[0], size_q[1] - h1, w1 = size_kv[0], size_kv[1] - variance = torch.exp(0.5 * variance) * self.radius_scale # b*h*w*2(pixel scale) - span_scale = torch.clamp((variance * 2 / self.nsample[1]), min=1) # b*h*w*2 - - sub_sample0, sub_sample1 = [ds0, 2, 1], [ds1, 2, 1] - q_list = [ - F.avg_pool2d( - query.view(bs, -1, h0, w0), kernel_size=sub_size, stride=sub_size - ) - for sub_size in sub_sample0 - ] - k_list = [ - F.avg_pool2d( - key.view(bs, -1, h1, w1), kernel_size=sub_size, stride=sub_size - ) - for sub_size in sub_sample1 - ] - v_list = [ - F.avg_pool2d( - value.view(bs, -1, h1, w1), kernel_size=sub_size, stride=sub_size - ) - for sub_size in sub_sample1 - ] # n_level - - offset_list = [ - F.avg_pool2d( - offset.permute(0, 3, 1, 2), - kernel_size=sub_size * self.nsample[0], - stride=sub_size * self.nsample[0], - ).permute(0, 2, 3, 1) - / sub_size - for sub_size in sub_sample0[1:] - ] # n_level-1 - span_list = [ - F.avg_pool2d( - span_scale.permute(0, 3, 1, 2), - kernel_size=sub_size * self.nsample[0], - stride=sub_size * self.nsample[0], - ).permute(0, 2, 3, 1) - for sub_size in sub_sample0[1:] - ] # n_level-1 - - if mask0 is not None: - mask0, mask1 = mask0.view(bs, 1, h0, w0), mask1.view(bs, 1, h1, w1) - mask0_list = [ - -F.max_pool2d(-mask0, kernel_size=sub_size, stride=sub_size) - for sub_size in sub_sample0 - ] - mask1_list = [ - -F.max_pool2d(-mask1, kernel_size=sub_size, stride=sub_size) - for sub_size in sub_sample1 - ] - else: - mask0_list = mask1_list = [None, None, None] - - message_list = [] - # full attention at coarse scale - mask0_flatten = mask0_list[0].view(bs, -1) if mask0 is not None else None - mask1_flatten = mask1_list[0].view(bs, -1) if mask1 is not None else None - message_list.append( - self.fullattention( - q_list[0], k_list[0], v_list[0], mask0_flatten, mask1_flatten, self.temp - ).view(bs, self.d_model, h0 // ds0[0], w0 // ds0[1]) - ) - - for index in range(1, self.nlevel): - q, k, v = q_list[index], k_list[index], v_list[index] - mask0, mask1 = mask0_list[index], mask1_list[index] - s, o = span_list[index - 1], offset_list[index - 1] # B*h*w(*2) - q, k, v, sample_pixel, mask_sample = self.partition_token( - q, k, v, o, s, mask0 - ) # B*Head*D*G*N(G*N=H*W for q) - message_list.append( - self.group_attention(q, k, v, 1, mask_sample).view( - bs, self.d_model, h0 // sub_sample0[index], w0 // sub_sample0[index] - ) - ) - # 
fuse - all_message = torch.cat( - [ - F.upsample( - message_list[idx], scale_factor=sub_sample0[idx], mode="nearest" - ) - for idx in range(self.nlevel) - ], - dim=1, - ).view( - bs, -1, h0 * w0 - ) # b*3d*H*W - - all_message = self.merge_head(all_message).view(bs, -1, h0, w0) # b*d*H*W - return all_message - - def partition_token(self, q, k, v, offset, span_scale, maskv): - # q,k,v: B*C*H*W - # o: B*H/2*W/2*2 - # span_scale:B*H*W - bs = q.shape[0] - h, w = q.shape[2], q.shape[3] - hk, wk = k.shape[2], k.shape[3] - offset = offset.view(bs, -1, 2) - span_scale = span_scale.view(bs, -1, 1, 2) - # B*G*2 - offset_sample = self.sample_offset[None, None] * span_scale - sample_pixel = offset[:, :, None] + offset_sample # B*G*r^2*2 - sample_norm = ( - sample_pixel / torch.tensor([wk / 2, hk / 2]).to(device)[None, None, None] - - 1 - ) - - q = ( - q.view( - bs, - -1, - h // self.nsample[0], - self.nsample[0], - w // self.nsample[0], - self.nsample[0], - ) - .permute(0, 1, 2, 4, 3, 5) - .contiguous() - .view(bs, self.nhead, self.d_model // self.nhead, -1, self.nsample[0] ** 2) - ) # B*head*D*G*N(G*N=H*W for q) - # sample token - k = F.grid_sample(k, grid=sample_norm).view( - bs, self.nhead, self.d_model // self.nhead, -1, self.nsample[1] ** 2 - ) # B*head*D*G*r^2 - v = F.grid_sample(v, grid=sample_norm).view( - bs, self.nhead, self.d_model // self.nhead, -1, self.nsample[1] ** 2 - ) # B*head*D*G*r^2 - # import pdb;pdb.set_trace() - if maskv is not None: - mask_sample = ( - F.grid_sample( - maskv.view(bs, -1, h, w).float(), grid=sample_norm, mode="nearest" - ) - == 1 - ) # B*1*G*r^2 - else: - mask_sample = None - return q, k, v, sample_pixel, mask_sample - - def group_attention(self, query, key, value, temp, mask_sample=None): - # q,k,v: B*Head*D*G*N(G*N=H*W for q) - bs = query.shape[0] - # import pdb;pdb.set_trace() - QK = torch.einsum("bhdgn,bhdgm->bhgnm", query, key) - if mask_sample is not None: - num_head, number_n = QK.shape[1], QK.shape[3] - QK.masked_fill_( - ~(mask_sample[:, :, :, None]) - .expand(-1, num_head, -1, number_n, -1) - .bool(), - float(-1e8), - ) - # Compute the attention and the weighted average - softmax_temp = temp / query.size(2) ** 0.5 # sqrt(D) - A = torch.softmax(softmax_temp * QK, dim=-1) - queried_values = ( - torch.einsum("bhgnm,bhdgm->bhdgn", A, value) - .contiguous() - .view(bs, self.d_model, -1) - ) - return queried_values - - -class FullAttention(Module): - def __init__(self, d_model, nhead): - super().__init__() - self.d_model = d_model - self.nhead = nhead - - def forward(self, q, k, v, mask0=None, mask1=None, temp=1): - """Multi-head scaled dot-product attention, a.k.a full attention. 
- Args: - q,k,v: [N, D, L] - mask: [N, L] - Returns: - msg: [N,L] - """ - bs = q.shape[0] - q, k, v = ( - q.view(bs, self.nhead, self.d_model // self.nhead, -1), - k.view(bs, self.nhead, self.d_model // self.nhead, -1), - v.view(bs, self.nhead, self.d_model // self.nhead, -1), - ) - # Compute the unnormalized attention and apply the masks - QK = torch.einsum("nhdl,nhds->nhls", q, k) - if mask0 is not None: - QK.masked_fill_( - ~(mask0[:, None, :, None] * mask1[:, None, None]).bool(), float(-1e8) - ) - # Compute the attention and the weighted average - softmax_temp = temp / q.size(2) ** 0.5 # sqrt(D) - A = torch.softmax(softmax_temp * QK, dim=-1) - queried_values = ( - torch.einsum("nhls,nhds->nhdl", A, v) - .contiguous() - .view(bs, self.d_model, -1) - ) - return queried_values - - -def elu_feature_map(x): - return F.elu(x) + 1 - - -class LinearAttention(Module): - def __init__(self, eps=1e-6): - super().__init__() - self.feature_map = elu_feature_map - self.eps = eps - - def forward(self, queries, keys, values, q_mask=None, kv_mask=None): - """Multi-Head linear attention proposed in "Transformers are RNNs" - Args: - queries: [N, L, H, D] - keys: [N, S, H, D] - values: [N, S, H, D] - q_mask: [N, L] - kv_mask: [N, S] - Returns: - queried_values: (N, L, H, D) - """ - Q = self.feature_map(queries) - K = self.feature_map(keys) - - # set padded position to zero - if q_mask is not None: - Q = Q * q_mask[:, :, None, None] - if kv_mask is not None: - K = K * kv_mask[:, :, None, None] - values = values * kv_mask[:, :, None, None] - - v_length = values.size(1) - values = values / v_length # prevent fp16 overflow - KV = torch.einsum("nshd,nshv->nhdv", K, values) # (S,D)' @ S,V - Z = 1 / (torch.einsum("nlhd,nhd->nlh", Q, K.sum(dim=1)) + self.eps) - queried_values = torch.einsum("nlhd,nhdv,nlh->nlhv", Q, KV, Z) * v_length - - return queried_values.contiguous() diff --git a/spaces/Realcat/image-matching-webui/third_party/DarkFeat/pose_estimation.py b/spaces/Realcat/image-matching-webui/third_party/DarkFeat/pose_estimation.py deleted file mode 100644 index d4ebe66700f895f0d1fac1b21d502b3a7de02325..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/DarkFeat/pose_estimation.py +++ /dev/null @@ -1,161 +0,0 @@ -import argparse -import cv2 -import numpy as np -import os -import math -import subprocess -from tqdm import tqdm - - -def compute_essential(matched_kp1, matched_kp2, K): - pts1 = cv2.undistortPoints( - matched_kp1, - cameraMatrix=K, - distCoeffs=(-0.117918271740560, 0.075246403574314, 0, 0), - ) - pts2 = cv2.undistortPoints( - matched_kp2, - cameraMatrix=K, - distCoeffs=(-0.117918271740560, 0.075246403574314, 0, 0), - ) - K_1 = np.eye(3) - # Estimate the homography between the matches using RANSAC - ransac_model, ransac_inliers = cv2.findEssentialMat( - pts1, pts2, K_1, method=cv2.RANSAC, prob=0.999, threshold=0.001, maxIters=10000 - ) - if ransac_inliers is None or ransac_model.shape != (3, 3): - ransac_inliers = np.array([]) - ransac_model = None - return ransac_model, ransac_inliers, pts1, pts2 - - -def compute_error(R_GT, t_GT, E, pts1_norm, pts2_norm, inliers): - """Compute the angular error between two rotation matrices and two translation vectors. 
- Keyword arguments: - R -- 2D numpy array containing an estimated rotation - gt_R -- 2D numpy array containing the corresponding ground truth rotation - t -- 2D numpy array containing an estimated translation as column - gt_t -- 2D numpy array containing the corresponding ground truth translation - """ - - inliers = inliers.ravel() - R = np.eye(3) - t = np.zeros((3, 1)) - sst = True - try: - _, R, t, _ = cv2.recoverPose(E, pts1_norm, pts2_norm, np.eye(3), inliers) - except: - sst = False - # calculate angle between provided rotations - # - if sst: - dR = np.matmul(R, np.transpose(R_GT)) - dR = cv2.Rodrigues(dR)[0] - dR = np.linalg.norm(dR) * 180 / math.pi - - # calculate angle between provided translations - dT = float(np.dot(t_GT.T, t)) - dT /= float(np.linalg.norm(t_GT)) - - if dT > 1 or dT < -1: - print("Domain warning! dT:", dT) - dT = max(-1, min(1, dT)) - dT = math.acos(dT) * 180 / math.pi - dT = np.minimum(dT, 180 - dT) # ambiguity of E estimation - else: - dR, dT = 180.0, 180.0 - return dR, dT - - -def pose_evaluation(result_base_dir, dark_name1, dark_name2, enhancer, K, R_GT, t_GT): - try: - m_kp1 = np.load(result_base_dir + enhancer + "/DarkFeat/POINT_1/" + dark_name1) - m_kp2 = np.load(result_base_dir + enhancer + "/DarkFeat/POINT_2/" + dark_name2) - except: - return 180.0, 180.0 - try: - E, inliers, pts1, pts2 = compute_essential(m_kp1, m_kp2, K) - except: - E, inliers, pts1, pts2 = np.zeros((3, 3)), np.array([]), None, None - dR, dT = compute_error(R_GT, t_GT, E, pts1, pts2, inliers) - return dR, dT - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--histeq", action="store_true") - parser.add_argument("--dataset_dir", type=str, default="/data/hyz/MID/") - opt = parser.parse_args() - - sizer = (960, 640) - focallength_x = 4.504986436499113e03 / (6744 / sizer[0]) - focallength_y = 4.513311442889859e03 / (4502 / sizer[1]) - K = np.eye(3) - K[0, 0] = focallength_x - K[1, 1] = focallength_y - K[0, 2] = 3.363322177533149e03 / (6744 / sizer[0]) - K[1, 2] = 2.291824660547715e03 / (4502 / sizer[1]) - Kinv = np.linalg.inv(K) - Kinvt = np.transpose(Kinv) - - PE_MT = np.zeros((6, 8)) - - enhancer = "None" if not opt.histeq else "HistEQ" - - for scene in ["Indoor", "Outdoor"]: - dir_base = opt.dataset_dir + "/" + scene + "/" - base_save = "result_errors/" + scene + "/" - pair_list = sorted(os.listdir(dir_base)) - - os.makedirs(base_save, exist_ok=True) - - for pair in tqdm(pair_list): - opention = 1 - if scene == "Outdoor": - pass - else: - if int(pair[4::]) <= 17: - opention = 0 - else: - pass - name = [] - files = sorted(os.listdir(dir_base + pair)) - for file_ in files: - if file_.endswith(".cr2"): - name.append(file_[0:9]) - ISO = [ - "00100", - "00200", - "00400", - "00800", - "01600", - "03200", - "06400", - "12800", - ] - if opention == 1: - Shutter_speed = ["0.005", "0.01", "0.025", "0.05", "0.17", "0.5"] - else: - Shutter_speed = ["0.01", "0.02", "0.05", "0.1", "0.3", "1"] - - E_GT = np.load(dir_base + pair + "/GT_Correspondence/" + "E_estimated.npy") - F_GT = np.dot(np.dot(Kinvt, E_GT), Kinv) - R_GT = np.load(dir_base + pair + "/GT_Correspondence/" + "R_GT.npy") - t_GT = np.load(dir_base + pair + "/GT_Correspondence/" + "T_GT.npy") - result_base_dir = "result/" + scene + "/" + pair + "/" - for iso in ISO: - for ex in Shutter_speed: - dark_name1 = name[0] + iso + "_" + ex + "_" + scene + ".npy" - dark_name2 = name[1] + iso + "_" + ex + "_" + scene + ".npy" - - dr, dt = pose_evaluation( - result_base_dir, dark_name1, dark_name2, enhancer, 
K, R_GT, t_GT - ) - PE_MT[Shutter_speed.index(ex), ISO.index(iso)] = max(dr, dt) - - subprocess.check_output( - ["mkdir", "-p", base_save + pair + f"/{enhancer}/"] - ) - np.save( - base_save + pair + f"/{enhancer}/Pose_error_DarkFeat.npy", PE_MT - ) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/dense_heads/ssd_head.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/dense_heads/ssd_head.py deleted file mode 100644 index 145622b64e3f0b3f7f518fc61a2a01348ebfa4f3..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/dense_heads/ssd_head.py +++ /dev/null @@ -1,265 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import xavier_init -from mmcv.runner import force_fp32 - -from mmdet.core import (build_anchor_generator, build_assigner, - build_bbox_coder, build_sampler, multi_apply) -from ..builder import HEADS -from ..losses import smooth_l1_loss -from .anchor_head import AnchorHead - - -# TODO: add loss evaluator for SSD -@HEADS.register_module() -class SSDHead(AnchorHead): - """SSD head used in https://arxiv.org/abs/1512.02325. - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - anchor_generator (dict): Config dict for anchor generator - bbox_coder (dict): Config of bounding box coder. - reg_decoded_bbox (bool): If true, the regression loss would be - applied directly on decoded bounding boxes, converting both - the predicted boxes and regression targets to absolute - coordinates format. Default False. It should be `True` when - using `IoULoss`, `GIoULoss`, or `DIoULoss` in the bbox head. - train_cfg (dict): Training config of anchor head. - test_cfg (dict): Testing config of anchor head. 
- """ # noqa: W605 - - def __init__(self, - num_classes=80, - in_channels=(512, 1024, 512, 256, 256, 256), - anchor_generator=dict( - type='SSDAnchorGenerator', - scale_major=False, - input_size=300, - strides=[8, 16, 32, 64, 100, 300], - ratios=([2], [2, 3], [2, 3], [2, 3], [2], [2]), - basesize_ratio_range=(0.1, 0.9)), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - clip_border=True, - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0], - ), - reg_decoded_bbox=False, - train_cfg=None, - test_cfg=None): - super(AnchorHead, self).__init__() - self.num_classes = num_classes - self.in_channels = in_channels - self.cls_out_channels = num_classes + 1 # add background class - self.anchor_generator = build_anchor_generator(anchor_generator) - num_anchors = self.anchor_generator.num_base_anchors - - reg_convs = [] - cls_convs = [] - for i in range(len(in_channels)): - reg_convs.append( - nn.Conv2d( - in_channels[i], - num_anchors[i] * 4, - kernel_size=3, - padding=1)) - cls_convs.append( - nn.Conv2d( - in_channels[i], - num_anchors[i] * (num_classes + 1), - kernel_size=3, - padding=1)) - self.reg_convs = nn.ModuleList(reg_convs) - self.cls_convs = nn.ModuleList(cls_convs) - - self.bbox_coder = build_bbox_coder(bbox_coder) - self.reg_decoded_bbox = reg_decoded_bbox - self.use_sigmoid_cls = False - self.cls_focal_loss = False - self.train_cfg = train_cfg - self.test_cfg = test_cfg - # set sampling=False for archor_target - self.sampling = False - if self.train_cfg: - self.assigner = build_assigner(self.train_cfg.assigner) - # SSD sampling=False so use PseudoSampler - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - self.fp16_enabled = False - - def init_weights(self): - """Initialize weights of the head.""" - for m in self.modules(): - if isinstance(m, nn.Conv2d): - xavier_init(m, distribution='uniform', bias=0) - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple: - cls_scores (list[Tensor]): Classification scores for all scale - levels, each is a 4D-tensor, the channels number is - num_anchors * num_classes. - bbox_preds (list[Tensor]): Box energies / deltas for all scale - levels, each is a 4D-tensor, the channels number is - num_anchors * 4. - """ - cls_scores = [] - bbox_preds = [] - for feat, reg_conv, cls_conv in zip(feats, self.reg_convs, - self.cls_convs): - cls_scores.append(cls_conv(feat)) - bbox_preds.append(reg_conv(feat)) - return cls_scores, bbox_preds - - def loss_single(self, cls_score, bbox_pred, anchor, labels, label_weights, - bbox_targets, bbox_weights, num_total_samples): - """Compute loss of a single image. - - Args: - cls_score (Tensor): Box scores for eachimage - Has shape (num_total_anchors, num_classes). - bbox_pred (Tensor): Box energies / deltas for each image - level with shape (num_total_anchors, 4). - anchors (Tensor): Box reference for each scale level with shape - (num_total_anchors, 4). - labels (Tensor): Labels of each anchors with shape - (num_total_anchors,). - label_weights (Tensor): Label weights of each anchor with shape - (num_total_anchors,) - bbox_targets (Tensor): BBox regression targets of each anchor wight - shape (num_total_anchors, 4). - bbox_weights (Tensor): BBox regression loss weights of each anchor - with shape (num_total_anchors, 4). 
- num_total_samples (int): If sampling, num total samples equal to - the number of total anchors; Otherwise, it is the number of - positive anchors. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - - loss_cls_all = F.cross_entropy( - cls_score, labels, reduction='none') * label_weights - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - pos_inds = ((labels >= 0) & - (labels < self.num_classes)).nonzero().reshape(-1) - neg_inds = (labels == self.num_classes).nonzero().view(-1) - - num_pos_samples = pos_inds.size(0) - num_neg_samples = self.train_cfg.neg_pos_ratio * num_pos_samples - if num_neg_samples > neg_inds.size(0): - num_neg_samples = neg_inds.size(0) - topk_loss_cls_neg, _ = loss_cls_all[neg_inds].topk(num_neg_samples) - loss_cls_pos = loss_cls_all[pos_inds].sum() - loss_cls_neg = topk_loss_cls_neg.sum() - loss_cls = (loss_cls_pos + loss_cls_neg) / num_total_samples - - if self.reg_decoded_bbox: - # When the regression loss (e.g. `IouLoss`, `GIouLoss`) - # is applied directly on the decoded bounding boxes, it - # decodes the already encoded coordinates to absolute format. - bbox_pred = self.bbox_coder.decode(anchor, bbox_pred) - - loss_bbox = smooth_l1_loss( - bbox_pred, - bbox_targets, - bbox_weights, - beta=self.train_cfg.smoothl1_beta, - avg_factor=num_total_samples) - return loss_cls[None], loss_bbox - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - gt_bboxes (list[Tensor]): each item are the truth boxes for each - image in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. 
- """ - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.anchor_generator.num_levels - - device = cls_scores[0].device - - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=1, - unmap_outputs=False) - if cls_reg_targets is None: - return None - (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list, - num_total_pos, num_total_neg) = cls_reg_targets - - num_images = len(img_metas) - all_cls_scores = torch.cat([ - s.permute(0, 2, 3, 1).reshape( - num_images, -1, self.cls_out_channels) for s in cls_scores - ], 1) - all_labels = torch.cat(labels_list, -1).view(num_images, -1) - all_label_weights = torch.cat(label_weights_list, - -1).view(num_images, -1) - all_bbox_preds = torch.cat([ - b.permute(0, 2, 3, 1).reshape(num_images, -1, 4) - for b in bbox_preds - ], -2) - all_bbox_targets = torch.cat(bbox_targets_list, - -2).view(num_images, -1, 4) - all_bbox_weights = torch.cat(bbox_weights_list, - -2).view(num_images, -1, 4) - - # concat all level anchors to a single tensor - all_anchors = [] - for i in range(num_images): - all_anchors.append(torch.cat(anchor_list[i])) - - # check NaN and Inf - assert torch.isfinite(all_cls_scores).all().item(), \ - 'classification scores become infinite or NaN!' - assert torch.isfinite(all_bbox_preds).all().item(), \ - 'bbox predications become infinite or NaN!' - - losses_cls, losses_bbox = multi_apply( - self.loss_single, - all_cls_scores, - all_bbox_preds, - all_anchors, - all_labels, - all_label_weights, - all_bbox_targets, - all_bbox_weights, - num_total_samples=num_total_pos) - return dict(loss_cls=losses_cls, loss_bbox=losses_bbox) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/detectors/atss.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/detectors/atss.py deleted file mode 100644 index db7139c6b4fcd7e83007cdb785520743ddae7066..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/detectors/atss.py +++ /dev/null @@ -1,17 +0,0 @@ -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class ATSS(SingleStageDetector): - """Implementation of `ATSS `_.""" - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(ATSS, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/datasets/pipelines/__init__.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/datasets/pipelines/__init__.py deleted file mode 100644 index c6f424debd1623e7511dd77da464a6639d816745..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/datasets/pipelines/__init__.py +++ /dev/null @@ -1,25 +0,0 @@ -from .auto_augment import (AutoAugment, BrightnessTransform, ColorTransform, - ContrastTransform, EqualizeTransform, Rotate, Shear, - Translate) -from .compose import Compose -from .formating import (Collect, DefaultFormatBundle, ImageToTensor, - ToDataContainer, ToTensor, Transpose, to_tensor) -from .instaboost import InstaBoost -from .loading import (LoadAnnotations, LoadImageFromFile, LoadImageFromWebcam, 
- LoadMultiChannelImageFromFiles, LoadProposals) -from .test_time_aug import MultiScaleFlipAug -from .transforms import (Albu, CutOut, Expand, MinIoURandomCrop, Normalize, - Pad, PhotoMetricDistortion, RandomCenterCropPad, - RandomCrop, RandomFlip, Resize, SegRescale) - -__all__ = [ - 'Compose', 'to_tensor', 'ToTensor', 'ImageToTensor', 'ToDataContainer', - 'Transpose', 'Collect', 'DefaultFormatBundle', 'LoadAnnotations', - 'LoadImageFromFile', 'LoadImageFromWebcam', - 'LoadMultiChannelImageFromFiles', 'LoadProposals', 'MultiScaleFlipAug', - 'Resize', 'RandomFlip', 'Pad', 'RandomCrop', 'Normalize', 'SegRescale', - 'MinIoURandomCrop', 'Expand', 'PhotoMetricDistortion', 'Albu', - 'InstaBoost', 'RandomCenterCropPad', 'AutoAugment', 'CutOut', 'Shear', - 'Rotate', 'ColorTransform', 'EqualizeTransform', 'BrightnessTransform', - 'ContrastTransform', 'Translate' -] diff --git a/spaces/Rongjiehuang/GenerSpeech/modules/parallel_wavegan/utils/utils.py b/spaces/Rongjiehuang/GenerSpeech/modules/parallel_wavegan/utils/utils.py deleted file mode 100644 index d48a5ed28e8555d4b8cfb15fdee86426bbb9e368..0000000000000000000000000000000000000000 --- a/spaces/Rongjiehuang/GenerSpeech/modules/parallel_wavegan/utils/utils.py +++ /dev/null @@ -1,169 +0,0 @@ -# -*- coding: utf-8 -*- - -# Copyright 2019 Tomoki Hayashi -# MIT License (https://opensource.org/licenses/MIT) - -"""Utility functions.""" - -import fnmatch -import logging -import os -import sys - -import h5py -import numpy as np - - -def find_files(root_dir, query="*.wav", include_root_dir=True): - """Find files recursively. - - Args: - root_dir (str): Root root_dir to find. - query (str): Query to find. - include_root_dir (bool): If False, root_dir name is not included. - - Returns: - list: List of found filenames. - - """ - files = [] - for root, dirnames, filenames in os.walk(root_dir, followlinks=True): - for filename in fnmatch.filter(filenames, query): - files.append(os.path.join(root, filename)) - if not include_root_dir: - files = [file_.replace(root_dir + "/", "") for file_ in files] - - return files - - -def read_hdf5(hdf5_name, hdf5_path): - """Read hdf5 dataset. - - Args: - hdf5_name (str): Filename of hdf5 file. - hdf5_path (str): Dataset name in hdf5 file. - - Return: - any: Dataset values. - - """ - if not os.path.exists(hdf5_name): - logging.error(f"There is no such a hdf5 file ({hdf5_name}).") - sys.exit(1) - - hdf5_file = h5py.File(hdf5_name, "r") - - if hdf5_path not in hdf5_file: - logging.error(f"There is no such a data in hdf5 file. ({hdf5_path})") - sys.exit(1) - - hdf5_data = hdf5_file[hdf5_path][()] - hdf5_file.close() - - return hdf5_data - - -def write_hdf5(hdf5_name, hdf5_path, write_data, is_overwrite=True): - """Write dataset to hdf5. - - Args: - hdf5_name (str): Hdf5 dataset filename. - hdf5_path (str): Dataset path in hdf5. - write_data (ndarray): Data to write. - is_overwrite (bool): Whether to overwrite dataset. - - """ - # convert to numpy array - write_data = np.array(write_data) - - # check folder existence - folder_name, _ = os.path.split(hdf5_name) - if not os.path.exists(folder_name) and len(folder_name) != 0: - os.makedirs(folder_name) - - # check hdf5 existence - if os.path.exists(hdf5_name): - # if already exists, open with r+ mode - hdf5_file = h5py.File(hdf5_name, "r+") - # check dataset existence - if hdf5_path in hdf5_file: - if is_overwrite: - logging.warning("Dataset in hdf5 file already exists. 
" - "recreate dataset in hdf5.") - hdf5_file.__delitem__(hdf5_path) - else: - logging.error("Dataset in hdf5 file already exists. " - "if you want to overwrite, please set is_overwrite = True.") - hdf5_file.close() - sys.exit(1) - else: - # if not exists, open with w mode - hdf5_file = h5py.File(hdf5_name, "w") - - # write data to hdf5 - hdf5_file.create_dataset(hdf5_path, data=write_data) - hdf5_file.flush() - hdf5_file.close() - - -class HDF5ScpLoader(object): - """Loader class for a fests.scp file of hdf5 file. - - Examples: - key1 /some/path/a.h5:feats - key2 /some/path/b.h5:feats - key3 /some/path/c.h5:feats - key4 /some/path/d.h5:feats - ... - >>> loader = HDF5ScpLoader("hdf5.scp") - >>> array = loader["key1"] - - key1 /some/path/a.h5 - key2 /some/path/b.h5 - key3 /some/path/c.h5 - key4 /some/path/d.h5 - ... - >>> loader = HDF5ScpLoader("hdf5.scp", "feats") - >>> array = loader["key1"] - - """ - - def __init__(self, feats_scp, default_hdf5_path="feats"): - """Initialize HDF5 scp loader. - - Args: - feats_scp (str): Kaldi-style feats.scp file with hdf5 format. - default_hdf5_path (str): Path in hdf5 file. If the scp contain the info, not used. - - """ - self.default_hdf5_path = default_hdf5_path - with open(feats_scp) as f: - lines = [line.replace("\n", "") for line in f.readlines()] - self.data = {} - for line in lines: - key, value = line.split() - self.data[key] = value - - def get_path(self, key): - """Get hdf5 file path for a given key.""" - return self.data[key] - - def __getitem__(self, key): - """Get ndarray for a given key.""" - p = self.data[key] - if ":" in p: - return read_hdf5(*p.split(":")) - else: - return read_hdf5(p, self.default_hdf5_path) - - def __len__(self): - """Return the length of the scp file.""" - return len(self.data) - - def __iter__(self): - """Return the iterator of the scp file.""" - return iter(self.data) - - def keys(self): - """Return the keys of the scp file.""" - return self.data.keys() diff --git a/spaces/Ryukijano/fastai_pet_classifier_resnet50/app.py b/spaces/Ryukijano/fastai_pet_classifier_resnet50/app.py deleted file mode 100644 index 091bd06e378ab1afac2c3741c128e39f6c4815eb..0000000000000000000000000000000000000000 --- a/spaces/Ryukijano/fastai_pet_classifier_resnet50/app.py +++ /dev/null @@ -1,20 +0,0 @@ -import gradio as gr -from fastai.vision.all import * -import PIL -import skimage - -learn = load_learner('export.pkl') - -labels = learn.dls.vocab -def predict(img): - img = PILImage.create(img) - pred,pred_idx,probs = learn.predict(img) - return {labels[i]: float(probs[i]) for i in range(len(labels))} - -title = "Pet Breed Classifier" -description = "A pet breed classifier trained on the Oxford Pets dataset with fastai. Created as a demo for Gradio and HuggingFace Spaces." 
-examples = ['siamese.jpg'] -interpretation='default' -enable_queue=True - -gr.Interface(fn=predict,inputs=gr.inputs.Image(shape=(512, 512)),outputs=gr.outputs.Label(num_top_classes=3),title=title,description=description,examples=examples,interpretation=interpretation,enable_queue=enable_queue).launch() \ No newline at end of file diff --git a/spaces/SERER/VITS-Umamusume-voice-synthesizer/ONNXVITS_transforms.py b/spaces/SERER/VITS-Umamusume-voice-synthesizer/ONNXVITS_transforms.py deleted file mode 100644 index 69b6d1c4b5724a3ef61f8bc3d64fc45c5e51e270..0000000000000000000000000000000000000000 --- a/spaces/SERER/VITS-Umamusume-voice-synthesizer/ONNXVITS_transforms.py +++ /dev/null @@ -1,196 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = { - 'tails': tails, - 'tail_bound': tail_bound - } - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum( - inputs[..., None] >= bin_locations, - dim=-1 - ) - 1 - - -def unconstrained_rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails='linear', - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == 'linear': - #unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - unnormalized_derivatives_ = torch.zeros((1, 1, unnormalized_derivatives.size(2), unnormalized_derivatives.size(3)+2)) - unnormalized_derivatives_[...,1:-1] = unnormalized_derivatives - unnormalized_derivatives = unnormalized_derivatives_ - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError('{} tails are not implemented.'.format(tails)) - - outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound, - 
min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative - ) - - return outputs, logabsdet - -def rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0., right=1., bottom=0., top=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError('Input to a transform is not within its domain') - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError('Minimal bin width too large for the number of bins') - if min_bin_height * num_bins > 1.0: - raise ValueError('Minimal bin height too large for the number of bins') - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (((inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta) - + input_heights * (input_delta - input_derivatives))) - b = (input_heights * input_derivatives - - (inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta)) - c = - input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * (input_delta * theta.pow(2) - + input_derivatives * theta_one_minus_theta) - denominator = input_delta + 
((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/SIGGRAPH2022/Text2Human/Text2Human/models/archs/unet_arch.py b/spaces/SIGGRAPH2022/Text2Human/Text2Human/models/archs/unet_arch.py deleted file mode 100644 index b110d6938a0a1565e07518bb98a04eb608fc3f14..0000000000000000000000000000000000000000 --- a/spaces/SIGGRAPH2022/Text2Human/Text2Human/models/archs/unet_arch.py +++ /dev/null @@ -1,693 +0,0 @@ -import torch -import torch.nn as nn -import torch.utils.checkpoint as cp -from mmcv.cnn import (UPSAMPLE_LAYERS, ConvModule, build_activation_layer, - build_norm_layer, build_upsample_layer, constant_init, - kaiming_init) -from mmcv.runner import load_checkpoint -from mmcv.utils.parrots_wrapper import _BatchNorm -from mmseg.utils import get_root_logger - - -class UpConvBlock(nn.Module): - """Upsample convolution block in decoder for UNet. - - This upsample convolution block consists of one upsample module - followed by one convolution block. The upsample module expands the - high-level low-resolution feature map and the convolution block fuses - the upsampled high-level low-resolution feature map and the low-level - high-resolution feature map from encoder. - - Args: - conv_block (nn.Sequential): Sequential of convolutional layers. - in_channels (int): Number of input channels of the high-level - skip_channels (int): Number of input channels of the low-level - high-resolution feature map from encoder. - out_channels (int): Number of output channels. - num_convs (int): Number of convolutional layers in the conv_block. - Default: 2. - stride (int): Stride of convolutional layer in conv_block. Default: 1. - dilation (int): Dilation rate of convolutional layer in conv_block. - Default: 1. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - conv_cfg (dict | None): Config dict for convolution layer. - Default: None. - norm_cfg (dict | None): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict | None): Config dict for activation layer in ConvModule. - Default: dict(type='ReLU'). - upsample_cfg (dict): The upsample config of the upsample module in - decoder. Default: dict(type='InterpConv'). If the size of - high-level feature map is the same as that of skip feature map - (low-level feature map from encoder), it does not need upsample the - high-level feature map and the upsample_cfg is None. - dcn (bool): Use deformable convoluton in convolutional layer or not. - Default: None. - plugins (dict): plugins for convolutional layers. Default: None. - """ - - def __init__(self, - conv_block, - in_channels, - skip_channels, - out_channels, - num_convs=2, - stride=1, - dilation=1, - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - upsample_cfg=dict(type='InterpConv'), - dcn=None, - plugins=None): - super(UpConvBlock, self).__init__() - assert dcn is None, 'Not implemented yet.' - assert plugins is None, 'Not implemented yet.' 
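- # The decoder conv block fuses the upsampled high-level features with the encoder skip features, so it takes 2 * skip_channels input channels.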
- - self.conv_block = conv_block( - in_channels=2 * skip_channels, - out_channels=out_channels, - num_convs=num_convs, - stride=stride, - dilation=dilation, - with_cp=with_cp, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - dcn=None, - plugins=None) - if upsample_cfg is not None: - self.upsample = build_upsample_layer( - cfg=upsample_cfg, - in_channels=in_channels, - out_channels=skip_channels, - with_cp=with_cp, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - else: - self.upsample = ConvModule( - in_channels, - skip_channels, - kernel_size=1, - stride=1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - def forward(self, skip, x): - """Forward function.""" - - x = self.upsample(x) - out = torch.cat([skip, x], dim=1) - out = self.conv_block(out) - - return out - - -class BasicConvBlock(nn.Module): - """Basic convolutional block for UNet. - - This module consists of several plain convolutional layers. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - num_convs (int): Number of convolutional layers. Default: 2. - stride (int): Whether use stride convolution to downsample - the input feature map. If stride=2, it only uses stride convolution - in the first convolutional layer to downsample the input feature - map. Options are 1 or 2. Default: 1. - dilation (int): Whether use dilated convolution to expand the - receptive field. Set dilation rate of each convolutional layer and - the dilation rate of the first convolutional layer is always 1. - Default: 1. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - conv_cfg (dict | None): Config dict for convolution layer. - Default: None. - norm_cfg (dict | None): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict | None): Config dict for activation layer in ConvModule. - Default: dict(type='ReLU'). - dcn (bool): Use deformable convoluton in convolutional layer or not. - Default: None. - plugins (dict): plugins for convolutional layers. Default: None. - """ - - def __init__(self, - in_channels, - out_channels, - num_convs=2, - stride=1, - dilation=1, - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - dcn=None, - plugins=None): - super(BasicConvBlock, self).__init__() - assert dcn is None, 'Not implemented yet.' - assert plugins is None, 'Not implemented yet.' - - self.with_cp = with_cp - convs = [] - for i in range(num_convs): - convs.append( - ConvModule( - in_channels=in_channels if i == 0 else out_channels, - out_channels=out_channels, - kernel_size=3, - stride=stride if i == 0 else 1, - dilation=1 if i == 0 else dilation, - padding=1 if i == 0 else dilation, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - - self.convs = nn.Sequential(*convs) - - def forward(self, x): - """Forward function.""" - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(self.convs, x) - else: - out = self.convs(x) - return out - - -class DeconvModule(nn.Module): - """Deconvolution upsample module in decoder for UNet (2X upsample). - - This module uses deconvolution to upsample feature map in the decoder - of UNet. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. 
- norm_cfg (dict | None): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict | None): Config dict for activation layer in ConvModule. - Default: dict(type='ReLU'). - kernel_size (int): Kernel size of the convolutional layer. Default: 4. - """ - - def __init__(self, - in_channels, - out_channels, - with_cp=False, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - *, - kernel_size=4, - scale_factor=2): - super(DeconvModule, self).__init__() - - assert (kernel_size - scale_factor >= 0) and\ - (kernel_size - scale_factor) % 2 == 0,\ - f'kernel_size should be greater than or equal to scale_factor '\ - f'and (kernel_size - scale_factor) should be even numbers, '\ - f'while the kernel size is {kernel_size} and scale_factor is '\ - f'{scale_factor}.' - - stride = scale_factor - padding = (kernel_size - scale_factor) // 2 - self.with_cp = with_cp - deconv = nn.ConvTranspose2d( - in_channels, - out_channels, - kernel_size=kernel_size, - stride=stride, - padding=padding) - - norm_name, norm = build_norm_layer(norm_cfg, out_channels) - activate = build_activation_layer(act_cfg) - self.deconv_upsamping = nn.Sequential(deconv, norm, activate) - - def forward(self, x): - """Forward function.""" - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(self.deconv_upsamping, x) - else: - out = self.deconv_upsamping(x) - return out - - -@UPSAMPLE_LAYERS.register_module() -class InterpConv(nn.Module): - """Interpolation upsample module in decoder for UNet. - - This module uses interpolation to upsample feature map in the decoder - of UNet. It consists of one interpolation upsample layer and one - convolutional layer. It can be one interpolation upsample layer followed - by one convolutional layer (conv_first=False) or one convolutional layer - followed by one interpolation upsample layer (conv_first=True). - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - norm_cfg (dict | None): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict | None): Config dict for activation layer in ConvModule. - Default: dict(type='ReLU'). - conv_cfg (dict | None): Config dict for convolution layer. - Default: None. - conv_first (bool): Whether convolutional layer or interpolation - upsample layer first. Default: False. It means interpolation - upsample layer followed by one convolutional layer. - kernel_size (int): Kernel size of the convolutional layer. Default: 1. - stride (int): Stride of the convolutional layer. Default: 1. - padding (int): Padding of the convolutional layer. Default: 1. - upsampe_cfg (dict): Interpolation config of the upsample layer. - Default: dict( - scale_factor=2, mode='bilinear', align_corners=False). 
- """ - - def __init__(self, - in_channels, - out_channels, - with_cp=False, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - *, - conv_cfg=None, - conv_first=False, - kernel_size=1, - stride=1, - padding=0, - upsampe_cfg=dict( - scale_factor=2, mode='bilinear', align_corners=False)): - super(InterpConv, self).__init__() - - self.with_cp = with_cp - conv = ConvModule( - in_channels, - out_channels, - kernel_size=kernel_size, - stride=stride, - padding=padding, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - upsample = nn.Upsample(**upsampe_cfg) - if conv_first: - self.interp_upsample = nn.Sequential(conv, upsample) - else: - self.interp_upsample = nn.Sequential(upsample, conv) - - def forward(self, x): - """Forward function.""" - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(self.interp_upsample, x) - else: - out = self.interp_upsample(x) - return out - - -class UNet(nn.Module): - """UNet backbone. - U-Net: Convolutional Networks for Biomedical Image Segmentation. - https://arxiv.org/pdf/1505.04597.pdf - - Args: - in_channels (int): Number of input image channels. Default" 3. - base_channels (int): Number of base channels of each stage. - The output channels of the first stage. Default: 64. - num_stages (int): Number of stages in encoder, normally 5. Default: 5. - strides (Sequence[int 1 | 2]): Strides of each stage in encoder. - len(strides) is equal to num_stages. Normally the stride of the - first stage in encoder is 1. If strides[i]=2, it uses stride - convolution to downsample in the correspondence encoder stage. - Default: (1, 1, 1, 1, 1). - enc_num_convs (Sequence[int]): Number of convolutional layers in the - convolution block of the correspondence encoder stage. - Default: (2, 2, 2, 2, 2). - dec_num_convs (Sequence[int]): Number of convolutional layers in the - convolution block of the correspondence decoder stage. - Default: (2, 2, 2, 2). - downsamples (Sequence[int]): Whether use MaxPool to downsample the - feature map after the first stage of encoder - (stages: [1, num_stages)). If the correspondence encoder stage use - stride convolution (strides[i]=2), it will never use MaxPool to - downsample, even downsamples[i-1]=True. - Default: (True, True, True, True). - enc_dilations (Sequence[int]): Dilation rate of each stage in encoder. - Default: (1, 1, 1, 1, 1). - dec_dilations (Sequence[int]): Dilation rate of each stage in decoder. - Default: (1, 1, 1, 1). - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - conv_cfg (dict | None): Config dict for convolution layer. - Default: None. - norm_cfg (dict | None): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict | None): Config dict for activation layer in ConvModule. - Default: dict(type='ReLU'). - upsample_cfg (dict): The upsample config of the upsample module in - decoder. Default: dict(type='InterpConv'). - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. Default: False. - dcn (bool): Use deformable convolution in convolutional layer or not. - Default: None. - plugins (dict): plugins for convolutional layers. Default: None. - - Notice: - The input image size should be devisible by the whole downsample rate - of the encoder. More detail of the whole downsample rate can be found - in UNet._check_input_devisible. 
- - """ - - def __init__(self, - in_channels=3, - base_channels=64, - num_stages=5, - strides=(1, 1, 1, 1, 1), - enc_num_convs=(2, 2, 2, 2, 2), - dec_num_convs=(2, 2, 2, 2), - downsamples=(True, True, True, True), - enc_dilations=(1, 1, 1, 1, 1), - dec_dilations=(1, 1, 1, 1), - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - upsample_cfg=dict(type='InterpConv'), - norm_eval=False, - dcn=None, - plugins=None): - super(UNet, self).__init__() - assert dcn is None, 'Not implemented yet.' - assert plugins is None, 'Not implemented yet.' - assert len(strides) == num_stages, \ - 'The length of strides should be equal to num_stages, '\ - f'while the strides is {strides}, the length of '\ - f'strides is {len(strides)}, and the num_stages is '\ - f'{num_stages}.' - assert len(enc_num_convs) == num_stages, \ - 'The length of enc_num_convs should be equal to num_stages, '\ - f'while the enc_num_convs is {enc_num_convs}, the length of '\ - f'enc_num_convs is {len(enc_num_convs)}, and the num_stages is '\ - f'{num_stages}.' - assert len(dec_num_convs) == (num_stages-1), \ - 'The length of dec_num_convs should be equal to (num_stages-1), '\ - f'while the dec_num_convs is {dec_num_convs}, the length of '\ - f'dec_num_convs is {len(dec_num_convs)}, and the num_stages is '\ - f'{num_stages}.' - assert len(downsamples) == (num_stages-1), \ - 'The length of downsamples should be equal to (num_stages-1), '\ - f'while the downsamples is {downsamples}, the length of '\ - f'downsamples is {len(downsamples)}, and the num_stages is '\ - f'{num_stages}.' - assert len(enc_dilations) == num_stages, \ - 'The length of enc_dilations should be equal to num_stages, '\ - f'while the enc_dilations is {enc_dilations}, the length of '\ - f'enc_dilations is {len(enc_dilations)}, and the num_stages is '\ - f'{num_stages}.' - assert len(dec_dilations) == (num_stages-1), \ - 'The length of dec_dilations should be equal to (num_stages-1), '\ - f'while the dec_dilations is {dec_dilations}, the length of '\ - f'dec_dilations is {len(dec_dilations)}, and the num_stages is '\ - f'{num_stages}.' 
- self.num_stages = num_stages - self.strides = strides - self.downsamples = downsamples - self.norm_eval = norm_eval - - self.encoder = nn.ModuleList() - self.decoder = nn.ModuleList() - - for i in range(num_stages): - enc_conv_block = [] - if i != 0: - if strides[i] == 1 and downsamples[i - 1]: - enc_conv_block.append(nn.MaxPool2d(kernel_size=2)) - upsample = (strides[i] != 1 or downsamples[i - 1]) - self.decoder.append( - UpConvBlock( - conv_block=BasicConvBlock, - in_channels=base_channels * 2**i, - skip_channels=base_channels * 2**(i - 1), - out_channels=base_channels * 2**(i - 1), - num_convs=dec_num_convs[i - 1], - stride=1, - dilation=dec_dilations[i - 1], - with_cp=with_cp, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - upsample_cfg=upsample_cfg if upsample else None, - dcn=None, - plugins=None)) - - enc_conv_block.append( - BasicConvBlock( - in_channels=in_channels, - out_channels=base_channels * 2**i, - num_convs=enc_num_convs[i], - stride=strides[i], - dilation=enc_dilations[i], - with_cp=with_cp, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - dcn=None, - plugins=None)) - self.encoder.append((nn.Sequential(*enc_conv_block))) - in_channels = base_channels * 2**i - - def forward(self, x): - enc_outs = [] - - for enc in self.encoder: - x = enc(x) - enc_outs.append(x) - dec_outs = [x] - for i in reversed(range(len(self.decoder))): - x = self.decoder[i](enc_outs[i], x) - dec_outs.append(x) - - return dec_outs - - def init_weights(self, pretrained=None): - """Initialize the weights in backbone. - - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. - """ - if isinstance(pretrained, str): - logger = get_root_logger() - load_checkpoint(self, pretrained, strict=False, logger=logger) - elif pretrained is None: - for m in self.modules(): - if isinstance(m, nn.Conv2d): - kaiming_init(m) - elif isinstance(m, (_BatchNorm, nn.GroupNorm)): - constant_init(m, 1) - else: - raise TypeError('pretrained must be a str or None') - - -class ShapeUNet(nn.Module): - """ShapeUNet backbone with small modifications. - U-Net: Convolutional Networks for Biomedical Image Segmentation. - https://arxiv.org/pdf/1505.04597.pdf - - Args: - in_channels (int): Number of input image channels. Default" 3. - base_channels (int): Number of base channels of each stage. - The output channels of the first stage. Default: 64. - num_stages (int): Number of stages in encoder, normally 5. Default: 5. - strides (Sequence[int 1 | 2]): Strides of each stage in encoder. - len(strides) is equal to num_stages. Normally the stride of the - first stage in encoder is 1. If strides[i]=2, it uses stride - convolution to downsample in the correspondance encoder stage. - Default: (1, 1, 1, 1, 1). - enc_num_convs (Sequence[int]): Number of convolutional layers in the - convolution block of the correspondance encoder stage. - Default: (2, 2, 2, 2, 2). - dec_num_convs (Sequence[int]): Number of convolutional layers in the - convolution block of the correspondance decoder stage. - Default: (2, 2, 2, 2). - downsamples (Sequence[int]): Whether use MaxPool to downsample the - feature map after the first stage of encoder - (stages: [1, num_stages)). If the correspondance encoder stage use - stride convolution (strides[i]=2), it will never use MaxPool to - downsample, even downsamples[i-1]=True. - Default: (True, True, True, True). - enc_dilations (Sequence[int]): Dilation rate of each stage in encoder. - Default: (1, 1, 1, 1, 1). 
- dec_dilations (Sequence[int]): Dilation rate of each stage in decoder. - Default: (1, 1, 1, 1). - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - conv_cfg (dict | None): Config dict for convolution layer. - Default: None. - norm_cfg (dict | None): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict | None): Config dict for activation layer in ConvModule. - Default: dict(type='ReLU'). - upsample_cfg (dict): The upsample config of the upsample module in - decoder. Default: dict(type='InterpConv'). - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. Default: False. - dcn (bool): Use deformable convoluton in convolutional layer or not. - Default: None. - plugins (dict): plugins for convolutional layers. Default: None. - - Notice: - The input image size should be devisible by the whole downsample rate - of the encoder. More detail of the whole downsample rate can be found - in UNet._check_input_devisible. - - """ - - def __init__(self, - in_channels=3, - base_channels=64, - num_stages=5, - attr_embedding=128, - strides=(1, 1, 1, 1, 1), - enc_num_convs=(2, 2, 2, 2, 2), - dec_num_convs=(2, 2, 2, 2), - downsamples=(True, True, True, True), - enc_dilations=(1, 1, 1, 1, 1), - dec_dilations=(1, 1, 1, 1), - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - upsample_cfg=dict(type='InterpConv'), - norm_eval=False, - dcn=None, - plugins=None): - super(ShapeUNet, self).__init__() - assert dcn is None, 'Not implemented yet.' - assert plugins is None, 'Not implemented yet.' - assert len(strides) == num_stages, \ - 'The length of strides should be equal to num_stages, '\ - f'while the strides is {strides}, the length of '\ - f'strides is {len(strides)}, and the num_stages is '\ - f'{num_stages}.' - assert len(enc_num_convs) == num_stages, \ - 'The length of enc_num_convs should be equal to num_stages, '\ - f'while the enc_num_convs is {enc_num_convs}, the length of '\ - f'enc_num_convs is {len(enc_num_convs)}, and the num_stages is '\ - f'{num_stages}.' - assert len(dec_num_convs) == (num_stages-1), \ - 'The length of dec_num_convs should be equal to (num_stages-1), '\ - f'while the dec_num_convs is {dec_num_convs}, the length of '\ - f'dec_num_convs is {len(dec_num_convs)}, and the num_stages is '\ - f'{num_stages}.' - assert len(downsamples) == (num_stages-1), \ - 'The length of downsamples should be equal to (num_stages-1), '\ - f'while the downsamples is {downsamples}, the length of '\ - f'downsamples is {len(downsamples)}, and the num_stages is '\ - f'{num_stages}.' - assert len(enc_dilations) == num_stages, \ - 'The length of enc_dilations should be equal to num_stages, '\ - f'while the enc_dilations is {enc_dilations}, the length of '\ - f'enc_dilations is {len(enc_dilations)}, and the num_stages is '\ - f'{num_stages}.' - assert len(dec_dilations) == (num_stages-1), \ - 'The length of dec_dilations should be equal to (num_stages-1), '\ - f'while the dec_dilations is {dec_dilations}, the length of '\ - f'dec_dilations is {len(dec_dilations)}, and the num_stages is '\ - f'{num_stages}.' 
- self.num_stages = num_stages - self.strides = strides - self.downsamples = downsamples - self.norm_eval = norm_eval - - self.encoder = nn.ModuleList() - self.decoder = nn.ModuleList() - - for i in range(num_stages): - enc_conv_block = [] - if i != 0: - if strides[i] == 1 and downsamples[i - 1]: - enc_conv_block.append(nn.MaxPool2d(kernel_size=2)) - upsample = (strides[i] != 1 or downsamples[i - 1]) - self.decoder.append( - UpConvBlock( - conv_block=BasicConvBlock, - in_channels=base_channels * 2**i, - skip_channels=base_channels * 2**(i - 1), - out_channels=base_channels * 2**(i - 1), - num_convs=dec_num_convs[i - 1], - stride=1, - dilation=dec_dilations[i - 1], - with_cp=with_cp, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - upsample_cfg=upsample_cfg if upsample else None, - dcn=None, - plugins=None)) - - enc_conv_block.append( - BasicConvBlock( - in_channels=in_channels + attr_embedding, - out_channels=base_channels * 2**i, - num_convs=enc_num_convs[i], - stride=strides[i], - dilation=enc_dilations[i], - with_cp=with_cp, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - dcn=None, - plugins=None)) - self.encoder.append((nn.Sequential(*enc_conv_block))) - in_channels = base_channels * 2**i - - def forward(self, x, attr_embedding): - enc_outs = [] - Be, Ce = attr_embedding.size() - for enc in self.encoder: - _, _, H, W = x.size() - x = enc( - torch.cat([ - x, - attr_embedding.view(Be, Ce, 1, 1).expand((Be, Ce, H, W)) - ], - dim=1)) - enc_outs.append(x) - dec_outs = [x] - for i in reversed(range(len(self.decoder))): - x = self.decoder[i](enc_outs[i], x) - dec_outs.append(x) - - return dec_outs - - def init_weights(self, pretrained=None): - """Initialize the weights in backbone. - - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. - """ - if isinstance(pretrained, str): - logger = get_root_logger() - load_checkpoint(self, pretrained, strict=False, logger=logger) - elif pretrained is None: - for m in self.modules(): - if isinstance(m, nn.Conv2d): - kaiming_init(m) - elif isinstance(m, (_BatchNorm, nn.GroupNorm)): - constant_init(m, 1) - else: - raise TypeError('pretrained must be a str or None') diff --git a/spaces/Salesforce/EDICT/my_diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py b/spaces/Salesforce/EDICT/my_diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py deleted file mode 100644 index f02fa114a8e1607136fd1c8247e3cabb763b4415..0000000000000000000000000000000000000000 --- a/spaces/Salesforce/EDICT/my_diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py +++ /dev/null @@ -1,279 +0,0 @@ -import inspect -import warnings -from typing import List, Optional, Union - -import torch - -from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer - -from ...models import AutoencoderKL, UNet2DConditionModel -from ...pipeline_utils import DiffusionPipeline -from ...schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler -from . import StableDiffusionPipelineOutput -from .safety_checker import StableDiffusionSafetyChecker - - -class StableDiffusionPipeline(DiffusionPipeline): - r""" - Pipeline for text-to-image generation using Stable Diffusion. - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
- - Args: - vae ([`AutoencoderKL`]): - Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. - text_encoder ([`CLIPTextModel`]): - Frozen text-encoder. Stable Diffusion uses the text portion of - [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically - the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. - tokenizer (`CLIPTokenizer`): - Tokenizer of class - [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). - unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latens. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. - safety_checker ([`StableDiffusionSafetyChecker`]): - Classification module that estimates whether generated images could be considered offsensive or harmful. - Please, refer to the [model card](https://huggingface.co/CompVis/stable-diffusion-v1-4) for details. - feature_extractor ([`CLIPFeatureExtractor`]): - Model that extracts features from generated images to be used as inputs for the `safety_checker`. - """ - - def __init__( - self, - vae: AutoencoderKL, - text_encoder: CLIPTextModel, - tokenizer: CLIPTokenizer, - unet: UNet2DConditionModel, - scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler], - safety_checker: StableDiffusionSafetyChecker, - feature_extractor: CLIPFeatureExtractor, - ): - super().__init__() - scheduler = scheduler.set_format("pt") - self.register_modules( - vae=vae, - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - ) - - def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"): - r""" - Enable sliced attention computation. - - When this option is enabled, the attention module will split the input tensor in slices, to compute attention - in several steps. This is useful to save some memory in exchange for a small speed decrease. - - Args: - slice_size (`str` or `int`, *optional*, defaults to `"auto"`): - When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If - a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case, - `attention_head_dim` must be a multiple of `slice_size`. - """ - if slice_size == "auto": - # half the attention head size is usually a good trade-off between - # speed and memory - slice_size = self.unet.config.attention_head_dim // 2 - self.unet.set_attention_slice(slice_size) - - def disable_attention_slicing(self): - r""" - Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go - back to computing attention in one step. 
- """ - # set slice_size = `None` to disable `attention slicing` - self.enable_attention_slicing(None) - - @torch.no_grad() - def __call__( - self, - prompt: Union[str, List[str]], - height: Optional[int] = 512, - width: Optional[int] = 512, - num_inference_steps: Optional[int] = 50, - guidance_scale: Optional[float] = 7.5, - eta: Optional[float] = 0.0, - generator: Optional[torch.Generator] = None, - latents: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - **kwargs, - ): - r""" - Function invoked when calling the pipeline for generation. - - Args: - prompt (`str` or `List[str]`): - The prompt or prompts to guide the image generation. - height (`int`, *optional*, defaults to 512): - The height in pixels of the generated image. - width (`int`, *optional*, defaults to 512): - The width in pixels of the generated image. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - guidance_scale (`float`, *optional*, defaults to 7.5): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to - [`schedulers.DDIMScheduler`], will be ignored for others. - generator (`torch.Generator`, *optional*): - A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation - deterministic. - latents (`torch.FloatTensor`, *optional*): - Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image - generation. Can be used to tweak the same generation with different prompts. If not provided, a latents - tensor will ge generated by sampling using the supplied random `generator`. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `nd.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a - plain tuple. - - Returns: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple. - When returning a tuple, the first element is a list with the generated images, and the second element is a - list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" - (nsfw) content, according to the `safety_checker`. - """ - - if "torch_device" in kwargs: - device = kwargs.pop("torch_device") - warnings.warn( - "`torch_device` is deprecated as an input argument to `__call__` and will be removed in v0.3.0." - " Consider using `pipe.to(torch_device)` instead." 
- ) - - # Set device as before (to be removed in 0.3.0) - if device is None: - device = "cuda" if torch.cuda.is_available() else "cpu" - self.to(device) - - if isinstance(prompt, str): - batch_size = 1 - elif isinstance(prompt, list): - batch_size = len(prompt) - else: - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if height % 8 != 0 or width % 8 != 0: - raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") - - # get prompt text embeddings - text_input = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - text_embeddings = self.text_encoder(text_input.input_ids.to(self.device))[0] - - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance: - max_length = text_input.input_ids.shape[-1] - uncond_input = self.tokenizer( - [""] * batch_size, padding="max_length", max_length=max_length, return_tensors="pt" - ) - uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0] - - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - text_embeddings = torch.cat([uncond_embeddings, text_embeddings]) - - # get the initial random noise unless the user supplied it - - # Unlike in other pipelines, latents need to be generated in the target device - # for 1-to-1 results reproducibility with the CompVis implementation. - # However this currently doesn't work in `mps`. - latents_device = "cpu" if self.device.type == "mps" else self.device - latents_shape = (batch_size, self.unet.in_channels, height // 8, width // 8) - if latents is None: - latents = torch.randn( - latents_shape, - generator=generator, - device=latents_device, - ) - else: - if latents.shape != latents_shape: - raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}") - latents = latents.to(self.device) - - # set timesteps - accepts_offset = "offset" in set(inspect.signature(self.scheduler.set_timesteps).parameters.keys()) - extra_set_kwargs = {} - if accepts_offset: - extra_set_kwargs["offset"] = 1 - - self.scheduler.set_timesteps(num_inference_steps, **extra_set_kwargs) - - # if we use LMSDiscreteScheduler, let's make sure latents are mulitplied by sigmas - if isinstance(self.scheduler, LMSDiscreteScheduler): - latents = latents * self.scheduler.sigmas[0] - - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. 
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - for i, t in enumerate(self.progress_bar(self.scheduler.timesteps)): - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents - if isinstance(self.scheduler, LMSDiscreteScheduler): - sigma = self.scheduler.sigmas[i] - # the model input needs to be scaled to match the continuous ODE formulation in K-LMS - latent_model_input = latent_model_input / ((sigma**2 + 1) ** 0.5) - - # predict the noise residual - noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - # compute the previous noisy sample x_t -> x_t-1 - if isinstance(self.scheduler, LMSDiscreteScheduler): - latents = self.scheduler.step(noise_pred, i, latents, **extra_step_kwargs).prev_sample - else: - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample - - # scale and decode the image latents with vae - latents = 1 / 0.18215 * latents - image = self.vae.decode(latents).sample - - image = (image / 2 + 0.5).clamp(0, 1) - image = image.cpu().permute(0, 2, 3, 1).numpy() - - # run safety checker - safety_cheker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(self.device) - image, has_nsfw_concept = self.safety_checker(images=image, clip_input=safety_cheker_input.pixel_values) - - if output_type == "pil": - image = self.numpy_to_pil(image) - - if not return_dict: - return (image, has_nsfw_concept) - - return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept) diff --git a/spaces/SeViLA/SeViLA/lavis/datasets/datasets/vg_vqa_datasets.py b/spaces/SeViLA/SeViLA/lavis/datasets/datasets/vg_vqa_datasets.py deleted file mode 100644 index 08bd909db553c49495d46ea60e8327d801a52bf5..0000000000000000000000000000000000000000 --- a/spaces/SeViLA/SeViLA/lavis/datasets/datasets/vg_vqa_datasets.py +++ /dev/null @@ -1,37 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. 
- SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -import os - -from PIL import Image - -from lavis.datasets.datasets.vqa_datasets import VQADataset - - -class VGVQADataset(VQADataset): - def __init__(self, vis_processor, text_processor, vis_root, ann_paths): - super().__init__(vis_processor, text_processor, vis_root, ann_paths) - - def __getitem__(self, index): - ann = self.annotation[index] - - image_path = os.path.join(self.vis_root, ann["image"]) - image = Image.open(image_path).convert("RGB") - - image = self.vis_processor(image) - question = self.text_processor(ann["question"]) - - answers = [ann["answer"]] - # TODO this should be configured better - weights = [0.2] - - return { - "image": image, - "text_input": question, - "answers": answers, - "weights": weights, - } diff --git a/spaces/ServerX/PorcoDiaz/tools/app.py b/spaces/ServerX/PorcoDiaz/tools/app.py deleted file mode 100644 index 602fbb71a49f2537295337cdcecf501abdd74153..0000000000000000000000000000000000000000 --- a/spaces/ServerX/PorcoDiaz/tools/app.py +++ /dev/null @@ -1,148 +0,0 @@ -import logging -import os - -# os.system("wget -P cvec/ https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/hubert_base.pt") -import gradio as gr -from dotenv import load_dotenv - -from configs.config import Config -from i18n import I18nAuto -from infer.modules.vc.pipeline import Pipeline -VC = Pipeline - -logging.getLogger("numba").setLevel(logging.WARNING) -logging.getLogger("markdown_it").setLevel(logging.WARNING) -logging.getLogger("urllib3").setLevel(logging.WARNING) -logging.getLogger("matplotlib").setLevel(logging.WARNING) -logger = logging.getLogger(__name__) - -i18n = I18nAuto() -#(i18n) - -load_dotenv() -config = Config() -vc = VC(config) - -weight_root = os.getenv("weight_root") -weight_uvr5_root = os.getenv("weight_uvr5_root") -index_root = os.getenv("index_root") -names = [] -hubert_model = None -for name in os.listdir(weight_root): - if name.endswith(".pth"): - names.append(name) -index_paths = [] -for root, dirs, files in os.walk(index_root, topdown=False): - for name in files: - if name.endswith(".index") and "trained" not in name: - index_paths.append("%s/%s" % (root, name)) - - -app = gr.Blocks() -with app: - with gr.Tabs(): - with gr.TabItem("在线demo"): - gr.Markdown( - value=""" - RVC 在线demo - """ - ) - sid = gr.Dropdown(label=i18n("推理音色"), choices=sorted(names)) - with gr.Column(): - spk_item = gr.Slider( - minimum=0, - maximum=2333, - step=1, - label=i18n("请选择说话人id"), - value=0, - visible=False, - interactive=True, - ) - sid.change(fn=vc.get_vc, inputs=[sid], outputs=[spk_item]) - gr.Markdown( - value=i18n("男转女推荐+12key, 女转男推荐-12key, 如果音域爆炸导致音色失真也可以自己调整到合适音域. 
") - ) - vc_input3 = gr.Audio(label="上传音频(长度小于90秒)") - vc_transform0 = gr.Number(label=i18n("变调(整数, 半音数量, 升八度12降八度-12)"), value=0) - f0method0 = gr.Radio( - label=i18n("选择音高提取算法,输入歌声可用pm提速,harvest低音好但巨慢无比,crepe效果好但吃GPU"), - choices=["pm", "harvest", "crepe", "rmvpe"], - value="pm", - interactive=True, - ) - filter_radius0 = gr.Slider( - minimum=0, - maximum=7, - label=i18n(">=3则使用对harvest音高识别的结果使用中值滤波,数值为滤波半径,使用可以削弱哑音"), - value=3, - step=1, - interactive=True, - ) - with gr.Column(): - file_index1 = gr.Textbox( - label=i18n("特征检索库文件路径,为空则使用下拉的选择结果"), - value="", - interactive=False, - visible=False, - ) - file_index2 = gr.Dropdown( - label=i18n("自动检测index路径,下拉式选择(dropdown)"), - choices=sorted(index_paths), - interactive=True, - ) - index_rate1 = gr.Slider( - minimum=0, - maximum=1, - label=i18n("检索特征占比"), - value=0.88, - interactive=True, - ) - resample_sr0 = gr.Slider( - minimum=0, - maximum=48000, - label=i18n("后处理重采样至最终采样率,0为不进行重采样"), - value=0, - step=1, - interactive=True, - ) - rms_mix_rate0 = gr.Slider( - minimum=0, - maximum=1, - label=i18n("输入源音量包络替换输出音量包络融合比例,越靠近1越使用输出包络"), - value=1, - interactive=True, - ) - protect0 = gr.Slider( - minimum=0, - maximum=0.5, - label=i18n("保护清辅音和呼吸声,防止电音撕裂等artifact,拉满0.5不开启,调低加大保护力度但可能降低索引效果"), - value=0.33, - step=0.01, - interactive=True, - ) - f0_file = gr.File(label=i18n("F0曲线文件, 可选, 一行一个音高, 代替默认F0及升降调")) - but0 = gr.Button(i18n("转换"), variant="primary") - vc_output1 = gr.Textbox(label=i18n("输出信息")) - vc_output2 = gr.Audio(label=i18n("输出音频(右下角三个点,点了可以下载)")) - but0.click( - vc.vc_single, - [ - spk_item, - vc_input3, - vc_transform0, - f0_file, - f0method0, - file_index1, - file_index2, - # file_big_npy1, - index_rate1, - filter_radius0, - resample_sr0, - rms_mix_rate0, - protect0, - ], - [vc_output1, vc_output2], - ) - - -app.launch() diff --git a/spaces/SpfIo/Whisper_TL_Streaming_API/app.py b/spaces/SpfIo/Whisper_TL_Streaming_API/app.py deleted file mode 100644 index 3655a1e850f40a55a87cfbcba0b34fb1621a120a..0000000000000000000000000000000000000000 --- a/spaces/SpfIo/Whisper_TL_Streaming_API/app.py +++ /dev/null @@ -1,222 +0,0 @@ -import os -import gradio as gr -import time -import requests - -list_stack=[] -api_url='http://34f4-34-90-84-31.ngrok.io/upload' - -def get_transcription_whissper_api(audio,api_url=api_url): - audio_file = open(audio, 'rb') - files = {'audio': ('audio.wav', audio_file)} - - response = requests.post(api_url, files=files) - json_response = response.json() - - if response.status_code == 200: - print("Audio file uploaded successfully.") - return(json_response['message']) - else: - return("Error uploading the audio file.") - -def empty_list(): - list_stack.clear() - -def inference_upload(audio,state=""): - state+= get_transcription_whissper_api(audio)+" " - return (state,state) - -def inference(audio,state=""): - state += get_transcription_whissper_api(audio) + " " - delimiter=" " - list_stack.append(state) - all_transcriptions=(delimiter.join(list_stack)) - return (state,all_transcriptions) - - -css = """ - .gradio-container { - font-family: 'IBM Plex Sans', sans-serif; - } - .gr-button { - color: white; - border-color: black; - background: black; - } - input[type='range'] { - accent-color: black; - } - .dark input[type='range'] { - accent-color: #dfdfdf; - } - .container { - max-width: 1030px; - margin: auto; - padding-top: 1.5rem; - } - - .details:hover { - text-decoration: underline; - } - .gr-button { - white-space: nowrap; - } - .gr-button:focus { - border-color: rgb(147 197 253 / var(--tw-border-opacity)); - 
outline: none; - box-shadow: var(--tw-ring-offset-shadow), var(--tw-ring-shadow), var(--tw-shadow, 0 0 #0000); - --tw-border-opacity: 1; - --tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color); - --tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(3px var(--tw-ring-offset-width)) var(--tw-ring-color); - --tw-ring-color: rgb(191 219 254 / var(--tw-ring-opacity)); - --tw-ring-opacity: .5; - } - .footer { - margin-bottom: 45px; - margin-top: 35px; - text-align: center; - border-bottom: 1px solid #e5e5e5; - } - .footer>p { - font-size: .8rem; - display: inline-block; - padding: 0 10px; - transform: translateY(10px); - background: white; - } - .dark .footer { - border-color: #303030; - } - .dark .footer>p { - background: #0b0f19; - } - .prompt h4{ - margin: 1.25em 0 .25em 0; - font-weight: bold; - font-size: 115%; - } - .animate-spin { - animation: spin 1s linear infinite; - } - @keyframes spin { - from { - transform: rotate(0deg); - } - to { - transform: rotate(360deg); - } - } - #share-btn-container { - display: flex; margin-top: 1.5rem !important; padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; width: 13rem; - } - #share-btn { - all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.25rem !important; padding-bottom: 0.25rem !important; - } - #share-btn * { - all: unset; - } -""" - -block = gr.Blocks(css=css) - -with block: - gr.HTML( - """ -
-          Experiment Whisper via API
-
-          This page can be used to try out Tagalog + English transcription. The model used is the smallest one, because this demo runs with limited compute; speed and accuracy can be improved with more powerful computing.
- - """ - ) - - with gr.Group(): - with gr.Box(): - with gr.Row().style(mobile_collapse=False, equal_height=True): - audio = gr.Audio( - label="Input voice", - source="microphone", - type="filepath", - streaming=True - # every=1 - - ) - audio_file_upload = gr.Audio( - label="Input From Example", - source="upload", - type="filepath", - ) - btn_clear = gr.Button("Clear") - btn_trnscribe = gr.Button("Transcribe") - text = gr.Textbox(label="Transcriptions Now", elem_id="result-textarea") - text_all = gr.Textbox(label="All Transcriptions", elem_id="result-textarea") - btn_clear.click(empty_list) - btn_trnscribe.click(inference_upload,inputs=audio_file_upload,outputs=[text,text_all],show_progress="minimal") - audio.stream(inference,inputs=audio,outputs=[text,text_all],show_progress="minimal") - gr.HTML(""" -

-          Tagalog audio

- """) - example_gr_bark = gr.Examples( - examples=[ - ["#1 How mature are you as a Christian Ptr Joey.mp3"], - ["#2 Masakit Pero May Dahilan.mp3"] - ], - inputs = audio_file_upload - ) - - gr.HTML(''' - - ''') - -if __name__ == "__main__": - block.launch(debug=True) diff --git a/spaces/Spyhack225/second-brain/app.py b/spaces/Spyhack225/second-brain/app.py deleted file mode 100644 index 39e0da8bbc5fd52acd14a4d386a977b8bce94521..0000000000000000000000000000000000000000 --- a/spaces/Spyhack225/second-brain/app.py +++ /dev/null @@ -1,28 +0,0 @@ -import streamlit as st -import whisper - -st.title("Audio Transcript with Whisper") - -audio_file = st.file_uploader("Upload Audio", type=["mp3", "wav", "m4a"]) - -if audio_file is not None: - with open(audio_file.name, "wb") as f: - f.write(audio_file.getbuffer()) - st.sidebar.success("File saved!") - model = whisper.load_model("large") - st.text("Whisper Model Loaded") - -if st.button("Transcribe Audio"): - if audio_file is not None: - st.success("Transcribing Audio file") - transcript = model.transcribe(audio_file.name) - st.success("Transcription Complete") - st.markdown(transcript["text"]) - else: - st.error("Please upload an audio file") - -st.sidebar.header("Play Audio file") -if audio_file is not None: - st.sidebar.audio(audio_file) -else: - st.sidebar.warning("No audio file uploaded") diff --git a/spaces/Sukhyun/MBTI_translator/load_data.py b/spaces/Sukhyun/MBTI_translator/load_data.py deleted file mode 100644 index 20e071f16acd7712526bdaea0c042f45b793bb91..0000000000000000000000000000000000000000 --- a/spaces/Sukhyun/MBTI_translator/load_data.py +++ /dev/null @@ -1,4 +0,0 @@ -import json - -with open("mbti_map.json", "r") as f: - keywords_en = json.load(f) diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/testing/plugin/pytest_ipdoctest.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/testing/plugin/pytest_ipdoctest.py deleted file mode 100644 index fd19ba4966590445dfdd7905726641d686d6564e..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/testing/plugin/pytest_ipdoctest.py +++ /dev/null @@ -1,859 +0,0 @@ -# Based on Pytest doctest.py -# Original license: -# The MIT License (MIT) -# -# Copyright (c) 2004-2021 Holger Krekel and others -"""Discover and run ipdoctests in modules and test files.""" -import builtins -import bdb -import inspect -import os -import platform -import sys -import traceback -import types -import warnings -from contextlib import contextmanager -from pathlib import Path -from typing import Any -from typing import Callable -from typing import Dict -from typing import Generator -from typing import Iterable -from typing import List -from typing import Optional -from typing import Pattern -from typing import Sequence -from typing import Tuple -from typing import Type -from typing import TYPE_CHECKING -from typing import Union - -import pytest -from _pytest import outcomes -from _pytest._code.code import ExceptionInfo -from _pytest._code.code import ReprFileLocation -from _pytest._code.code import TerminalRepr -from _pytest._io import TerminalWriter -from _pytest.compat import safe_getattr -from _pytest.config import Config -from _pytest.config.argparsing import Parser -from _pytest.fixtures import FixtureRequest -from _pytest.nodes import Collector -from _pytest.outcomes import OutcomeException -from _pytest.pathlib import fnmatch_ex -from _pytest.pathlib import import_path -from _pytest.python_api import approx -from 
_pytest.warning_types import PytestWarning - -if TYPE_CHECKING: - import doctest - -DOCTEST_REPORT_CHOICE_NONE = "none" -DOCTEST_REPORT_CHOICE_CDIFF = "cdiff" -DOCTEST_REPORT_CHOICE_NDIFF = "ndiff" -DOCTEST_REPORT_CHOICE_UDIFF = "udiff" -DOCTEST_REPORT_CHOICE_ONLY_FIRST_FAILURE = "only_first_failure" - -DOCTEST_REPORT_CHOICES = ( - DOCTEST_REPORT_CHOICE_NONE, - DOCTEST_REPORT_CHOICE_CDIFF, - DOCTEST_REPORT_CHOICE_NDIFF, - DOCTEST_REPORT_CHOICE_UDIFF, - DOCTEST_REPORT_CHOICE_ONLY_FIRST_FAILURE, -) - -# Lazy definition of runner class -RUNNER_CLASS = None -# Lazy definition of output checker class -CHECKER_CLASS: Optional[Type["IPDoctestOutputChecker"]] = None - - -def pytest_addoption(parser: Parser) -> None: - parser.addini( - "ipdoctest_optionflags", - "option flags for ipdoctests", - type="args", - default=["ELLIPSIS"], - ) - parser.addini( - "ipdoctest_encoding", "encoding used for ipdoctest files", default="utf-8" - ) - group = parser.getgroup("collect") - group.addoption( - "--ipdoctest-modules", - action="store_true", - default=False, - help="run ipdoctests in all .py modules", - dest="ipdoctestmodules", - ) - group.addoption( - "--ipdoctest-report", - type=str.lower, - default="udiff", - help="choose another output format for diffs on ipdoctest failure", - choices=DOCTEST_REPORT_CHOICES, - dest="ipdoctestreport", - ) - group.addoption( - "--ipdoctest-glob", - action="append", - default=[], - metavar="pat", - help="ipdoctests file matching pattern, default: test*.txt", - dest="ipdoctestglob", - ) - group.addoption( - "--ipdoctest-ignore-import-errors", - action="store_true", - default=False, - help="ignore ipdoctest ImportErrors", - dest="ipdoctest_ignore_import_errors", - ) - group.addoption( - "--ipdoctest-continue-on-failure", - action="store_true", - default=False, - help="for a given ipdoctest, continue to run after the first failure", - dest="ipdoctest_continue_on_failure", - ) - - -def pytest_unconfigure() -> None: - global RUNNER_CLASS - - RUNNER_CLASS = None - - -def pytest_collect_file( - file_path: Path, - parent: Collector, -) -> Optional[Union["IPDoctestModule", "IPDoctestTextfile"]]: - config = parent.config - if file_path.suffix == ".py": - if config.option.ipdoctestmodules and not any( - (_is_setup_py(file_path), _is_main_py(file_path)) - ): - mod: IPDoctestModule = IPDoctestModule.from_parent(parent, path=file_path) - return mod - elif _is_ipdoctest(config, file_path, parent): - txt: IPDoctestTextfile = IPDoctestTextfile.from_parent(parent, path=file_path) - return txt - return None - - -if int(pytest.__version__.split(".")[0]) < 7: - _collect_file = pytest_collect_file - - def pytest_collect_file( - path, - parent: Collector, - ) -> Optional[Union["IPDoctestModule", "IPDoctestTextfile"]]: - return _collect_file(Path(path), parent) - - _import_path = import_path - - def import_path(path, root): - import py.path - - return _import_path(py.path.local(path)) - - -def _is_setup_py(path: Path) -> bool: - if path.name != "setup.py": - return False - contents = path.read_bytes() - return b"setuptools" in contents or b"distutils" in contents - - -def _is_ipdoctest(config: Config, path: Path, parent: Collector) -> bool: - if path.suffix in (".txt", ".rst") and parent.session.isinitpath(path): - return True - globs = config.getoption("ipdoctestglob") or ["test*.txt"] - return any(fnmatch_ex(glob, path) for glob in globs) - - -def _is_main_py(path: Path) -> bool: - return path.name == "__main__.py" - - -class ReprFailDoctest(TerminalRepr): - def __init__( - self, 
reprlocation_lines: Sequence[Tuple[ReprFileLocation, Sequence[str]]] - ) -> None: - self.reprlocation_lines = reprlocation_lines - - def toterminal(self, tw: TerminalWriter) -> None: - for reprlocation, lines in self.reprlocation_lines: - for line in lines: - tw.line(line) - reprlocation.toterminal(tw) - - -class MultipleDoctestFailures(Exception): - def __init__(self, failures: Sequence["doctest.DocTestFailure"]) -> None: - super().__init__() - self.failures = failures - - -def _init_runner_class() -> Type["IPDocTestRunner"]: - import doctest - from .ipdoctest import IPDocTestRunner - - class PytestDoctestRunner(IPDocTestRunner): - """Runner to collect failures. - - Note that the out variable in this case is a list instead of a - stdout-like object. - """ - - def __init__( - self, - checker: Optional["IPDoctestOutputChecker"] = None, - verbose: Optional[bool] = None, - optionflags: int = 0, - continue_on_failure: bool = True, - ) -> None: - super().__init__(checker=checker, verbose=verbose, optionflags=optionflags) - self.continue_on_failure = continue_on_failure - - def report_failure( - self, - out, - test: "doctest.DocTest", - example: "doctest.Example", - got: str, - ) -> None: - failure = doctest.DocTestFailure(test, example, got) - if self.continue_on_failure: - out.append(failure) - else: - raise failure - - def report_unexpected_exception( - self, - out, - test: "doctest.DocTest", - example: "doctest.Example", - exc_info: Tuple[Type[BaseException], BaseException, types.TracebackType], - ) -> None: - if isinstance(exc_info[1], OutcomeException): - raise exc_info[1] - if isinstance(exc_info[1], bdb.BdbQuit): - outcomes.exit("Quitting debugger") - failure = doctest.UnexpectedException(test, example, exc_info) - if self.continue_on_failure: - out.append(failure) - else: - raise failure - - return PytestDoctestRunner - - -def _get_runner( - checker: Optional["IPDoctestOutputChecker"] = None, - verbose: Optional[bool] = None, - optionflags: int = 0, - continue_on_failure: bool = True, -) -> "IPDocTestRunner": - # We need this in order to do a lazy import on doctest - global RUNNER_CLASS - if RUNNER_CLASS is None: - RUNNER_CLASS = _init_runner_class() - # Type ignored because the continue_on_failure argument is only defined on - # PytestDoctestRunner, which is lazily defined so can't be used as a type. 
- return RUNNER_CLASS( # type: ignore - checker=checker, - verbose=verbose, - optionflags=optionflags, - continue_on_failure=continue_on_failure, - ) - - -class IPDoctestItem(pytest.Item): - def __init__( - self, - name: str, - parent: "Union[IPDoctestTextfile, IPDoctestModule]", - runner: Optional["IPDocTestRunner"] = None, - dtest: Optional["doctest.DocTest"] = None, - ) -> None: - super().__init__(name, parent) - self.runner = runner - self.dtest = dtest - self.obj = None - self.fixture_request: Optional[FixtureRequest] = None - - @classmethod - def from_parent( # type: ignore - cls, - parent: "Union[IPDoctestTextfile, IPDoctestModule]", - *, - name: str, - runner: "IPDocTestRunner", - dtest: "doctest.DocTest", - ): - # incompatible signature due to imposed limits on subclass - """The public named constructor.""" - return super().from_parent(name=name, parent=parent, runner=runner, dtest=dtest) - - def setup(self) -> None: - if self.dtest is not None: - self.fixture_request = _setup_fixtures(self) - globs = dict(getfixture=self.fixture_request.getfixturevalue) - for name, value in self.fixture_request.getfixturevalue( - "ipdoctest_namespace" - ).items(): - globs[name] = value - self.dtest.globs.update(globs) - - from .ipdoctest import IPExample - - if isinstance(self.dtest.examples[0], IPExample): - # for IPython examples *only*, we swap the globals with the ipython - # namespace, after updating it with the globals (which doctest - # fills with the necessary info from the module being tested). - self._user_ns_orig = {} - self._user_ns_orig.update(_ip.user_ns) - _ip.user_ns.update(self.dtest.globs) - # We must remove the _ key in the namespace, so that Python's - # doctest code sets it naturally - _ip.user_ns.pop("_", None) - _ip.user_ns["__builtins__"] = builtins - self.dtest.globs = _ip.user_ns - - def teardown(self) -> None: - from .ipdoctest import IPExample - - # Undo the test.globs reassignment we made - if isinstance(self.dtest.examples[0], IPExample): - self.dtest.globs = {} - _ip.user_ns.clear() - _ip.user_ns.update(self._user_ns_orig) - del self._user_ns_orig - - self.dtest.globs.clear() - - def runtest(self) -> None: - assert self.dtest is not None - assert self.runner is not None - _check_all_skipped(self.dtest) - self._disable_output_capturing_for_darwin() - failures: List["doctest.DocTestFailure"] = [] - - # exec(compile(..., "single", ...), ...) puts result in builtins._ - had_underscore_value = hasattr(builtins, "_") - underscore_original_value = getattr(builtins, "_", None) - - # Save our current directory and switch out to the one where the - # test was originally created, in case another doctest did a - # directory change. We'll restore this in the finally clause. - curdir = os.getcwd() - os.chdir(self.fspath.dirname) - try: - # Type ignored because we change the type of `out` from what - # ipdoctest expects. - self.runner.run(self.dtest, out=failures, clear_globs=False) # type: ignore[arg-type] - finally: - os.chdir(curdir) - if had_underscore_value: - setattr(builtins, "_", underscore_original_value) - elif hasattr(builtins, "_"): - delattr(builtins, "_") - - if failures: - raise MultipleDoctestFailures(failures) - - def _disable_output_capturing_for_darwin(self) -> None: - """Disable output capturing. 
Otherwise, stdout is lost to ipdoctest (pytest#985).""" - if platform.system() != "Darwin": - return - capman = self.config.pluginmanager.getplugin("capturemanager") - if capman: - capman.suspend_global_capture(in_=True) - out, err = capman.read_global_capture() - sys.stdout.write(out) - sys.stderr.write(err) - - # TODO: Type ignored -- breaks Liskov Substitution. - def repr_failure( # type: ignore[override] - self, - excinfo: ExceptionInfo[BaseException], - ) -> Union[str, TerminalRepr]: - import doctest - - failures: Optional[ - Sequence[Union[doctest.DocTestFailure, doctest.UnexpectedException]] - ] = None - if isinstance( - excinfo.value, (doctest.DocTestFailure, doctest.UnexpectedException) - ): - failures = [excinfo.value] - elif isinstance(excinfo.value, MultipleDoctestFailures): - failures = excinfo.value.failures - - if failures is None: - return super().repr_failure(excinfo) - - reprlocation_lines = [] - for failure in failures: - example = failure.example - test = failure.test - filename = test.filename - if test.lineno is None: - lineno = None - else: - lineno = test.lineno + example.lineno + 1 - message = type(failure).__name__ - # TODO: ReprFileLocation doesn't expect a None lineno. - reprlocation = ReprFileLocation(filename, lineno, message) # type: ignore[arg-type] - checker = _get_checker() - report_choice = _get_report_choice(self.config.getoption("ipdoctestreport")) - if lineno is not None: - assert failure.test.docstring is not None - lines = failure.test.docstring.splitlines(False) - # add line numbers to the left of the error message - assert test.lineno is not None - lines = [ - "%03d %s" % (i + test.lineno + 1, x) for (i, x) in enumerate(lines) - ] - # trim docstring error lines to 10 - lines = lines[max(example.lineno - 9, 0) : example.lineno + 1] - else: - lines = [ - "EXAMPLE LOCATION UNKNOWN, not showing all tests of that example" - ] - indent = ">>>" - for line in example.source.splitlines(): - lines.append(f"??? {indent} {line}") - indent = "..." 
- if isinstance(failure, doctest.DocTestFailure): - lines += checker.output_difference( - example, failure.got, report_choice - ).split("\n") - else: - inner_excinfo = ExceptionInfo.from_exc_info(failure.exc_info) - lines += ["UNEXPECTED EXCEPTION: %s" % repr(inner_excinfo.value)] - lines += [ - x.strip("\n") for x in traceback.format_exception(*failure.exc_info) - ] - reprlocation_lines.append((reprlocation, lines)) - return ReprFailDoctest(reprlocation_lines) - - def reportinfo(self) -> Tuple[Union["os.PathLike[str]", str], Optional[int], str]: - assert self.dtest is not None - return self.path, self.dtest.lineno, "[ipdoctest] %s" % self.name - - if int(pytest.__version__.split(".")[0]) < 7: - - @property - def path(self) -> Path: - return Path(self.fspath) - - -def _get_flag_lookup() -> Dict[str, int]: - import doctest - - return dict( - DONT_ACCEPT_TRUE_FOR_1=doctest.DONT_ACCEPT_TRUE_FOR_1, - DONT_ACCEPT_BLANKLINE=doctest.DONT_ACCEPT_BLANKLINE, - NORMALIZE_WHITESPACE=doctest.NORMALIZE_WHITESPACE, - ELLIPSIS=doctest.ELLIPSIS, - IGNORE_EXCEPTION_DETAIL=doctest.IGNORE_EXCEPTION_DETAIL, - COMPARISON_FLAGS=doctest.COMPARISON_FLAGS, - ALLOW_UNICODE=_get_allow_unicode_flag(), - ALLOW_BYTES=_get_allow_bytes_flag(), - NUMBER=_get_number_flag(), - ) - - -def get_optionflags(parent): - optionflags_str = parent.config.getini("ipdoctest_optionflags") - flag_lookup_table = _get_flag_lookup() - flag_acc = 0 - for flag in optionflags_str: - flag_acc |= flag_lookup_table[flag] - return flag_acc - - -def _get_continue_on_failure(config): - continue_on_failure = config.getvalue("ipdoctest_continue_on_failure") - if continue_on_failure: - # We need to turn off this if we use pdb since we should stop at - # the first failure. - if config.getvalue("usepdb"): - continue_on_failure = False - return continue_on_failure - - -class IPDoctestTextfile(pytest.Module): - obj = None - - def collect(self) -> Iterable[IPDoctestItem]: - import doctest - from .ipdoctest import IPDocTestParser - - # Inspired by doctest.testfile; ideally we would use it directly, - # but it doesn't support passing a custom checker. 
- encoding = self.config.getini("ipdoctest_encoding") - text = self.path.read_text(encoding) - filename = str(self.path) - name = self.path.name - globs = {"__name__": "__main__"} - - optionflags = get_optionflags(self) - - runner = _get_runner( - verbose=False, - optionflags=optionflags, - checker=_get_checker(), - continue_on_failure=_get_continue_on_failure(self.config), - ) - - parser = IPDocTestParser() - test = parser.get_doctest(text, globs, name, filename, 0) - if test.examples: - yield IPDoctestItem.from_parent( - self, name=test.name, runner=runner, dtest=test - ) - - if int(pytest.__version__.split(".")[0]) < 7: - - @property - def path(self) -> Path: - return Path(self.fspath) - - @classmethod - def from_parent( - cls, - parent, - *, - fspath=None, - path: Optional[Path] = None, - **kw, - ): - if path is not None: - import py.path - - fspath = py.path.local(path) - return super().from_parent(parent=parent, fspath=fspath, **kw) - - -def _check_all_skipped(test: "doctest.DocTest") -> None: - """Raise pytest.skip() if all examples in the given DocTest have the SKIP - option set.""" - import doctest - - all_skipped = all(x.options.get(doctest.SKIP, False) for x in test.examples) - if all_skipped: - pytest.skip("all docstests skipped by +SKIP option") - - -def _is_mocked(obj: object) -> bool: - """Return if an object is possibly a mock object by checking the - existence of a highly improbable attribute.""" - return ( - safe_getattr(obj, "pytest_mock_example_attribute_that_shouldnt_exist", None) - is not None - ) - - -@contextmanager -def _patch_unwrap_mock_aware() -> Generator[None, None, None]: - """Context manager which replaces ``inspect.unwrap`` with a version - that's aware of mock objects and doesn't recurse into them.""" - real_unwrap = inspect.unwrap - - def _mock_aware_unwrap( - func: Callable[..., Any], *, stop: Optional[Callable[[Any], Any]] = None - ) -> Any: - try: - if stop is None or stop is _is_mocked: - return real_unwrap(func, stop=_is_mocked) - _stop = stop - return real_unwrap(func, stop=lambda obj: _is_mocked(obj) or _stop(func)) - except Exception as e: - warnings.warn( - "Got %r when unwrapping %r. This is usually caused " - "by a violation of Python's object protocol; see e.g. " - "https://github.com/pytest-dev/pytest/issues/5080" % (e, func), - PytestWarning, - ) - raise - - inspect.unwrap = _mock_aware_unwrap - try: - yield - finally: - inspect.unwrap = real_unwrap - - -class IPDoctestModule(pytest.Module): - def collect(self) -> Iterable[IPDoctestItem]: - import doctest - from .ipdoctest import DocTestFinder, IPDocTestParser - - class MockAwareDocTestFinder(DocTestFinder): - """A hackish ipdoctest finder that overrides stdlib internals to fix a stdlib bug. - - https://github.com/pytest-dev/pytest/issues/3456 - https://bugs.python.org/issue25532 - """ - - def _find_lineno(self, obj, source_lines): - """Doctest code does not take into account `@property`, this - is a hackish way to fix it. https://bugs.python.org/issue17446 - - Wrapped Doctests will need to be unwrapped so the correct - line number is returned. This will be reported upstream. #8796 - """ - if isinstance(obj, property): - obj = getattr(obj, "fget", obj) - - if hasattr(obj, "__wrapped__"): - # Get the main obj in case of it being wrapped - obj = inspect.unwrap(obj) - - # Type ignored because this is a private function. 
- return super()._find_lineno( # type:ignore[misc] - obj, - source_lines, - ) - - def _find( - self, tests, obj, name, module, source_lines, globs, seen - ) -> None: - if _is_mocked(obj): - return - with _patch_unwrap_mock_aware(): - # Type ignored because this is a private function. - super()._find( # type:ignore[misc] - tests, obj, name, module, source_lines, globs, seen - ) - - if self.path.name == "conftest.py": - if int(pytest.__version__.split(".")[0]) < 7: - module = self.config.pluginmanager._importconftest( - self.path, - self.config.getoption("importmode"), - ) - else: - module = self.config.pluginmanager._importconftest( - self.path, - self.config.getoption("importmode"), - rootpath=self.config.rootpath, - ) - else: - try: - module = import_path(self.path, root=self.config.rootpath) - except ImportError: - if self.config.getvalue("ipdoctest_ignore_import_errors"): - pytest.skip("unable to import module %r" % self.path) - else: - raise - # Uses internal doctest module parsing mechanism. - finder = MockAwareDocTestFinder(parser=IPDocTestParser()) - optionflags = get_optionflags(self) - runner = _get_runner( - verbose=False, - optionflags=optionflags, - checker=_get_checker(), - continue_on_failure=_get_continue_on_failure(self.config), - ) - - for test in finder.find(module, module.__name__): - if test.examples: # skip empty ipdoctests - yield IPDoctestItem.from_parent( - self, name=test.name, runner=runner, dtest=test - ) - - if int(pytest.__version__.split(".")[0]) < 7: - - @property - def path(self) -> Path: - return Path(self.fspath) - - @classmethod - def from_parent( - cls, - parent, - *, - fspath=None, - path: Optional[Path] = None, - **kw, - ): - if path is not None: - import py.path - - fspath = py.path.local(path) - return super().from_parent(parent=parent, fspath=fspath, **kw) - - -def _setup_fixtures(doctest_item: IPDoctestItem) -> FixtureRequest: - """Used by IPDoctestTextfile and IPDoctestItem to setup fixture information.""" - - def func() -> None: - pass - - doctest_item.funcargs = {} # type: ignore[attr-defined] - fm = doctest_item.session._fixturemanager - doctest_item._fixtureinfo = fm.getfixtureinfo( # type: ignore[attr-defined] - node=doctest_item, func=func, cls=None, funcargs=False - ) - fixture_request = FixtureRequest(doctest_item, _ispytest=True) - fixture_request._fillfixtures() - return fixture_request - - -def _init_checker_class() -> Type["IPDoctestOutputChecker"]: - import doctest - import re - from .ipdoctest import IPDoctestOutputChecker - - class LiteralsOutputChecker(IPDoctestOutputChecker): - # Based on doctest_nose_plugin.py from the nltk project - # (https://github.com/nltk/nltk) and on the "numtest" doctest extension - # by Sebastien Boisgerault (https://github.com/boisgera/numtest). - - _unicode_literal_re = re.compile(r"(\W|^)[uU]([rR]?[\'\"])", re.UNICODE) - _bytes_literal_re = re.compile(r"(\W|^)[bB]([rR]?[\'\"])", re.UNICODE) - _number_re = re.compile( - r""" - (?P - (?P - (?P [+-]?\d*)\.(?P\d+) - | - (?P [+-]?\d+)\. - ) - (?: - [Ee] - (?P [+-]?\d+) - )? 
- | - (?P [+-]?\d+) - (?: - [Ee] - (?P [+-]?\d+) - ) - ) - """, - re.VERBOSE, - ) - - def check_output(self, want: str, got: str, optionflags: int) -> bool: - if super().check_output(want, got, optionflags): - return True - - allow_unicode = optionflags & _get_allow_unicode_flag() - allow_bytes = optionflags & _get_allow_bytes_flag() - allow_number = optionflags & _get_number_flag() - - if not allow_unicode and not allow_bytes and not allow_number: - return False - - def remove_prefixes(regex: Pattern[str], txt: str) -> str: - return re.sub(regex, r"\1\2", txt) - - if allow_unicode: - want = remove_prefixes(self._unicode_literal_re, want) - got = remove_prefixes(self._unicode_literal_re, got) - - if allow_bytes: - want = remove_prefixes(self._bytes_literal_re, want) - got = remove_prefixes(self._bytes_literal_re, got) - - if allow_number: - got = self._remove_unwanted_precision(want, got) - - return super().check_output(want, got, optionflags) - - def _remove_unwanted_precision(self, want: str, got: str) -> str: - wants = list(self._number_re.finditer(want)) - gots = list(self._number_re.finditer(got)) - if len(wants) != len(gots): - return got - offset = 0 - for w, g in zip(wants, gots): - fraction: Optional[str] = w.group("fraction") - exponent: Optional[str] = w.group("exponent1") - if exponent is None: - exponent = w.group("exponent2") - precision = 0 if fraction is None else len(fraction) - if exponent is not None: - precision -= int(exponent) - if float(w.group()) == approx(float(g.group()), abs=10**-precision): - # They're close enough. Replace the text we actually - # got with the text we want, so that it will match when we - # check the string literally. - got = ( - got[: g.start() + offset] + w.group() + got[g.end() + offset :] - ) - offset += w.end() - w.start() - (g.end() - g.start()) - return got - - return LiteralsOutputChecker - - -def _get_checker() -> "IPDoctestOutputChecker": - """Return a IPDoctestOutputChecker subclass that supports some - additional options: - - * ALLOW_UNICODE and ALLOW_BYTES options to ignore u'' and b'' - prefixes (respectively) in string literals. Useful when the same - ipdoctest should run in Python 2 and Python 3. - - * NUMBER to ignore floating-point differences smaller than the - precision of the literal number in the ipdoctest. - - An inner class is used to avoid importing "ipdoctest" at the module - level. - """ - global CHECKER_CLASS - if CHECKER_CLASS is None: - CHECKER_CLASS = _init_checker_class() - return CHECKER_CLASS() - - -def _get_allow_unicode_flag() -> int: - """Register and return the ALLOW_UNICODE flag.""" - import doctest - - return doctest.register_optionflag("ALLOW_UNICODE") - - -def _get_allow_bytes_flag() -> int: - """Register and return the ALLOW_BYTES flag.""" - import doctest - - return doctest.register_optionflag("ALLOW_BYTES") - - -def _get_number_flag() -> int: - """Register and return the NUMBER flag.""" - import doctest - - return doctest.register_optionflag("NUMBER") - - -def _get_report_choice(key: str) -> int: - """Return the actual `ipdoctest` module flag value. - - We want to do it as late as possible to avoid importing `ipdoctest` and all - its dependencies when parsing options, as it adds overhead and breaks tests. 
- """ - import doctest - - return { - DOCTEST_REPORT_CHOICE_UDIFF: doctest.REPORT_UDIFF, - DOCTEST_REPORT_CHOICE_CDIFF: doctest.REPORT_CDIFF, - DOCTEST_REPORT_CHOICE_NDIFF: doctest.REPORT_NDIFF, - DOCTEST_REPORT_CHOICE_ONLY_FIRST_FAILURE: doctest.REPORT_ONLY_FIRST_FAILURE, - DOCTEST_REPORT_CHOICE_NONE: 0, - }[key] - - -@pytest.fixture(scope="session") -def ipdoctest_namespace() -> Dict[str, Any]: - """Fixture that returns a :py:class:`dict` that will be injected into the - namespace of ipdoctests.""" - return dict() diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/anyio/_backends/_trio.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/anyio/_backends/_trio.py deleted file mode 100644 index cf2894350952e1169a6c77ea7c767e892f3efc1e..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/anyio/_backends/_trio.py +++ /dev/null @@ -1,996 +0,0 @@ -from __future__ import annotations - -import array -import math -import socket -from concurrent.futures import Future -from contextvars import copy_context -from dataclasses import dataclass -from functools import partial -from io import IOBase -from os import PathLike -from signal import Signals -from types import TracebackType -from typing import ( - IO, - TYPE_CHECKING, - Any, - AsyncGenerator, - AsyncIterator, - Awaitable, - Callable, - Collection, - Coroutine, - Generic, - Iterable, - Mapping, - NoReturn, - Sequence, - TypeVar, - cast, -) - -import sniffio -import trio.from_thread -from outcome import Error, Outcome, Value -from trio.socket import SocketType as TrioSocketType -from trio.to_thread import run_sync - -from .. import CapacityLimiterStatistics, EventStatistics, TaskInfo, abc -from .._core._compat import DeprecatedAsyncContextManager, DeprecatedAwaitable -from .._core._eventloop import claim_worker_thread -from .._core._exceptions import ( - BrokenResourceError, - BusyResourceError, - ClosedResourceError, - EndOfStream, -) -from .._core._exceptions import ExceptionGroup as BaseExceptionGroup -from .._core._sockets import convert_ipv6_sockaddr -from .._core._synchronization import CapacityLimiter as BaseCapacityLimiter -from .._core._synchronization import Event as BaseEvent -from .._core._synchronization import ResourceGuard -from .._core._tasks import CancelScope as BaseCancelScope -from ..abc import IPSockAddrType, UDPPacketType - -if TYPE_CHECKING: - from trio_typing import TaskStatus - -try: - from trio import lowlevel as trio_lowlevel -except ImportError: - from trio import hazmat as trio_lowlevel # type: ignore[no-redef] - from trio.hazmat import wait_readable, wait_writable -else: - from trio.lowlevel import wait_readable, wait_writable - -try: - trio_open_process = trio_lowlevel.open_process -except AttributeError: - # isort: off - from trio import ( # type: ignore[attr-defined, no-redef] - open_process as trio_open_process, - ) - -T_Retval = TypeVar("T_Retval") -T_SockAddr = TypeVar("T_SockAddr", str, IPSockAddrType) - - -# -# Event loop -# - -run = trio.run -current_token = trio.lowlevel.current_trio_token -RunVar = trio.lowlevel.RunVar - - -# -# Miscellaneous -# - -sleep = trio.sleep - - -# -# Timeouts and cancellation -# - - -class CancelScope(BaseCancelScope): - def __new__( - cls, original: trio.CancelScope | None = None, **kwargs: object - ) -> CancelScope: - return object.__new__(cls) - - def __init__(self, original: trio.CancelScope | None = None, **kwargs: Any) -> None: - self.__original = original or trio.CancelScope(**kwargs) - - def 
__enter__(self) -> CancelScope: - self.__original.__enter__() - return self - - def __exit__( - self, - exc_type: type[BaseException] | None, - exc_val: BaseException | None, - exc_tb: TracebackType | None, - ) -> bool | None: - # https://github.com/python-trio/trio-typing/pull/79 - return self.__original.__exit__( # type: ignore[func-returns-value] - exc_type, exc_val, exc_tb - ) - - def cancel(self) -> DeprecatedAwaitable: - self.__original.cancel() - return DeprecatedAwaitable(self.cancel) - - @property - def deadline(self) -> float: - return self.__original.deadline - - @deadline.setter - def deadline(self, value: float) -> None: - self.__original.deadline = value - - @property - def cancel_called(self) -> bool: - return self.__original.cancel_called - - @property - def shield(self) -> bool: - return self.__original.shield - - @shield.setter - def shield(self, value: bool) -> None: - self.__original.shield = value - - -CancelledError = trio.Cancelled -checkpoint = trio.lowlevel.checkpoint -checkpoint_if_cancelled = trio.lowlevel.checkpoint_if_cancelled -cancel_shielded_checkpoint = trio.lowlevel.cancel_shielded_checkpoint -current_effective_deadline = trio.current_effective_deadline -current_time = trio.current_time - - -# -# Task groups -# - - -class ExceptionGroup(BaseExceptionGroup, trio.MultiError): - pass - - -class TaskGroup(abc.TaskGroup): - def __init__(self) -> None: - self._active = False - self._nursery_manager = trio.open_nursery() - self.cancel_scope = None # type: ignore[assignment] - - async def __aenter__(self) -> TaskGroup: - self._active = True - self._nursery = await self._nursery_manager.__aenter__() - self.cancel_scope = CancelScope(self._nursery.cancel_scope) - return self - - async def __aexit__( - self, - exc_type: type[BaseException] | None, - exc_val: BaseException | None, - exc_tb: TracebackType | None, - ) -> bool | None: - try: - return await self._nursery_manager.__aexit__(exc_type, exc_val, exc_tb) - except trio.MultiError as exc: - raise ExceptionGroup(exc.exceptions) from None - finally: - self._active = False - - def start_soon( - self, func: Callable[..., Awaitable[Any]], *args: object, name: object = None - ) -> None: - if not self._active: - raise RuntimeError( - "This task group is not active; no new tasks can be started." - ) - - self._nursery.start_soon(func, *args, name=name) - - async def start( - self, func: Callable[..., Awaitable[Any]], *args: object, name: object = None - ) -> object: - if not self._active: - raise RuntimeError( - "This task group is not active; no new tasks can be started." 
- ) - - return await self._nursery.start(func, *args, name=name) - - -# -# Threads -# - - -async def run_sync_in_worker_thread( - func: Callable[..., T_Retval], - *args: object, - cancellable: bool = False, - limiter: trio.CapacityLimiter | None = None, -) -> T_Retval: - def wrapper() -> T_Retval: - with claim_worker_thread("trio"): - return func(*args) - - # TODO: remove explicit context copying when trio 0.20 is the minimum requirement - context = copy_context() - context.run(sniffio.current_async_library_cvar.set, None) - return await run_sync( - context.run, wrapper, cancellable=cancellable, limiter=limiter - ) - - -# TODO: remove this workaround when trio 0.20 is the minimum requirement -def run_async_from_thread( - fn: Callable[..., Awaitable[T_Retval]], *args: Any -) -> T_Retval: - async def wrapper() -> T_Retval: - retval: T_Retval - - async def inner() -> None: - nonlocal retval - __tracebackhide__ = True - retval = await fn(*args) - - async with trio.open_nursery() as n: - context.run(n.start_soon, inner) - - __tracebackhide__ = True - return retval # noqa: F821 - - context = copy_context() - context.run(sniffio.current_async_library_cvar.set, "trio") - return trio.from_thread.run(wrapper) - - -def run_sync_from_thread(fn: Callable[..., T_Retval], *args: Any) -> T_Retval: - # TODO: remove explicit context copying when trio 0.20 is the minimum requirement - retval = trio.from_thread.run_sync(copy_context().run, fn, *args) - return cast(T_Retval, retval) - - -class BlockingPortal(abc.BlockingPortal): - def __new__(cls) -> BlockingPortal: - return object.__new__(cls) - - def __init__(self) -> None: - super().__init__() - self._token = trio.lowlevel.current_trio_token() - - def _spawn_task_from_thread( - self, - func: Callable, - args: tuple, - kwargs: dict[str, Any], - name: object, - future: Future, - ) -> None: - context = copy_context() - context.run(sniffio.current_async_library_cvar.set, "trio") - trio.from_thread.run_sync( - context.run, - partial(self._task_group.start_soon, name=name), - self._call_func, - func, - args, - kwargs, - future, - trio_token=self._token, - ) - - -# -# Subprocesses -# - - -@dataclass(eq=False) -class ReceiveStreamWrapper(abc.ByteReceiveStream): - _stream: trio.abc.ReceiveStream - - async def receive(self, max_bytes: int | None = None) -> bytes: - try: - data = await self._stream.receive_some(max_bytes) - except trio.ClosedResourceError as exc: - raise ClosedResourceError from exc.__cause__ - except trio.BrokenResourceError as exc: - raise BrokenResourceError from exc.__cause__ - - if data: - return data - else: - raise EndOfStream - - async def aclose(self) -> None: - await self._stream.aclose() - - -@dataclass(eq=False) -class SendStreamWrapper(abc.ByteSendStream): - _stream: trio.abc.SendStream - - async def send(self, item: bytes) -> None: - try: - await self._stream.send_all(item) - except trio.ClosedResourceError as exc: - raise ClosedResourceError from exc.__cause__ - except trio.BrokenResourceError as exc: - raise BrokenResourceError from exc.__cause__ - - async def aclose(self) -> None: - await self._stream.aclose() - - -@dataclass(eq=False) -class Process(abc.Process): - _process: trio.Process - _stdin: abc.ByteSendStream | None - _stdout: abc.ByteReceiveStream | None - _stderr: abc.ByteReceiveStream | None - - async def aclose(self) -> None: - if self._stdin: - await self._stdin.aclose() - if self._stdout: - await self._stdout.aclose() - if self._stderr: - await self._stderr.aclose() - - await self.wait() - - async def wait(self) -> int: 
- return await self._process.wait() - - def terminate(self) -> None: - self._process.terminate() - - def kill(self) -> None: - self._process.kill() - - def send_signal(self, signal: Signals) -> None: - self._process.send_signal(signal) - - @property - def pid(self) -> int: - return self._process.pid - - @property - def returncode(self) -> int | None: - return self._process.returncode - - @property - def stdin(self) -> abc.ByteSendStream | None: - return self._stdin - - @property - def stdout(self) -> abc.ByteReceiveStream | None: - return self._stdout - - @property - def stderr(self) -> abc.ByteReceiveStream | None: - return self._stderr - - -async def open_process( - command: str | bytes | Sequence[str | bytes], - *, - shell: bool, - stdin: int | IO[Any] | None, - stdout: int | IO[Any] | None, - stderr: int | IO[Any] | None, - cwd: str | bytes | PathLike | None = None, - env: Mapping[str, str] | None = None, - start_new_session: bool = False, -) -> Process: - process = await trio_open_process( # type: ignore[misc] - command, # type: ignore[arg-type] - stdin=stdin, - stdout=stdout, - stderr=stderr, - shell=shell, - cwd=cwd, - env=env, - start_new_session=start_new_session, - ) - stdin_stream = SendStreamWrapper(process.stdin) if process.stdin else None - stdout_stream = ReceiveStreamWrapper(process.stdout) if process.stdout else None - stderr_stream = ReceiveStreamWrapper(process.stderr) if process.stderr else None - return Process(process, stdin_stream, stdout_stream, stderr_stream) - - -class _ProcessPoolShutdownInstrument(trio.abc.Instrument): - def after_run(self) -> None: - super().after_run() - - -current_default_worker_process_limiter: RunVar = RunVar( - "current_default_worker_process_limiter" -) - - -async def _shutdown_process_pool(workers: set[Process]) -> None: - process: Process - try: - await sleep(math.inf) - except trio.Cancelled: - for process in workers: - if process.returncode is None: - process.kill() - - with CancelScope(shield=True): - for process in workers: - await process.aclose() - - -def setup_process_pool_exit_at_shutdown(workers: set[Process]) -> None: - trio.lowlevel.spawn_system_task(_shutdown_process_pool, workers) - - -# -# Sockets and networking -# - - -class _TrioSocketMixin(Generic[T_SockAddr]): - def __init__(self, trio_socket: TrioSocketType) -> None: - self._trio_socket = trio_socket - self._closed = False - - def _check_closed(self) -> None: - if self._closed: - raise ClosedResourceError - if self._trio_socket.fileno() < 0: - raise BrokenResourceError - - @property - def _raw_socket(self) -> socket.socket: - return self._trio_socket._sock # type: ignore[attr-defined] - - async def aclose(self) -> None: - if self._trio_socket.fileno() >= 0: - self._closed = True - self._trio_socket.close() - - def _convert_socket_error(self, exc: BaseException) -> NoReturn: - if isinstance(exc, trio.ClosedResourceError): - raise ClosedResourceError from exc - elif self._trio_socket.fileno() < 0 and self._closed: - raise ClosedResourceError from None - elif isinstance(exc, OSError): - raise BrokenResourceError from exc - else: - raise exc - - -class SocketStream(_TrioSocketMixin, abc.SocketStream): - def __init__(self, trio_socket: TrioSocketType) -> None: - super().__init__(trio_socket) - self._receive_guard = ResourceGuard("reading from") - self._send_guard = ResourceGuard("writing to") - - async def receive(self, max_bytes: int = 65536) -> bytes: - with self._receive_guard: - try: - data = await self._trio_socket.recv(max_bytes) - except BaseException as exc: - 
self._convert_socket_error(exc) - - if data: - return data - else: - raise EndOfStream - - async def send(self, item: bytes) -> None: - with self._send_guard: - view = memoryview(item) - while view: - try: - bytes_sent = await self._trio_socket.send(view) - except BaseException as exc: - self._convert_socket_error(exc) - - view = view[bytes_sent:] - - async def send_eof(self) -> None: - self._trio_socket.shutdown(socket.SHUT_WR) - - -class UNIXSocketStream(SocketStream, abc.UNIXSocketStream): - async def receive_fds(self, msglen: int, maxfds: int) -> tuple[bytes, list[int]]: - if not isinstance(msglen, int) or msglen < 0: - raise ValueError("msglen must be a non-negative integer") - if not isinstance(maxfds, int) or maxfds < 1: - raise ValueError("maxfds must be a positive integer") - - fds = array.array("i") - await checkpoint() - with self._receive_guard: - while True: - try: - message, ancdata, flags, addr = await self._trio_socket.recvmsg( - msglen, socket.CMSG_LEN(maxfds * fds.itemsize) - ) - except BaseException as exc: - self._convert_socket_error(exc) - else: - if not message and not ancdata: - raise EndOfStream - - break - - for cmsg_level, cmsg_type, cmsg_data in ancdata: - if cmsg_level != socket.SOL_SOCKET or cmsg_type != socket.SCM_RIGHTS: - raise RuntimeError( - f"Received unexpected ancillary data; message = {message!r}, " - f"cmsg_level = {cmsg_level}, cmsg_type = {cmsg_type}" - ) - - fds.frombytes(cmsg_data[: len(cmsg_data) - (len(cmsg_data) % fds.itemsize)]) - - return message, list(fds) - - async def send_fds(self, message: bytes, fds: Collection[int | IOBase]) -> None: - if not message: - raise ValueError("message must not be empty") - if not fds: - raise ValueError("fds must not be empty") - - filenos: list[int] = [] - for fd in fds: - if isinstance(fd, int): - filenos.append(fd) - elif isinstance(fd, IOBase): - filenos.append(fd.fileno()) - - fdarray = array.array("i", filenos) - await checkpoint() - with self._send_guard: - while True: - try: - await self._trio_socket.sendmsg( - [message], - [ - ( - socket.SOL_SOCKET, - socket.SCM_RIGHTS, # type: ignore[list-item] - fdarray, - ) - ], - ) - break - except BaseException as exc: - self._convert_socket_error(exc) - - -class TCPSocketListener(_TrioSocketMixin, abc.SocketListener): - def __init__(self, raw_socket: socket.socket): - super().__init__(trio.socket.from_stdlib_socket(raw_socket)) - self._accept_guard = ResourceGuard("accepting connections from") - - async def accept(self) -> SocketStream: - with self._accept_guard: - try: - trio_socket, _addr = await self._trio_socket.accept() - except BaseException as exc: - self._convert_socket_error(exc) - - trio_socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1) - return SocketStream(trio_socket) - - -class UNIXSocketListener(_TrioSocketMixin, abc.SocketListener): - def __init__(self, raw_socket: socket.socket): - super().__init__(trio.socket.from_stdlib_socket(raw_socket)) - self._accept_guard = ResourceGuard("accepting connections from") - - async def accept(self) -> UNIXSocketStream: - with self._accept_guard: - try: - trio_socket, _addr = await self._trio_socket.accept() - except BaseException as exc: - self._convert_socket_error(exc) - - return UNIXSocketStream(trio_socket) - - -class UDPSocket(_TrioSocketMixin[IPSockAddrType], abc.UDPSocket): - def __init__(self, trio_socket: TrioSocketType) -> None: - super().__init__(trio_socket) - self._receive_guard = ResourceGuard("reading from") - self._send_guard = ResourceGuard("writing to") - - async def 
receive(self) -> tuple[bytes, IPSockAddrType]: - with self._receive_guard: - try: - data, addr = await self._trio_socket.recvfrom(65536) - return data, convert_ipv6_sockaddr(addr) - except BaseException as exc: - self._convert_socket_error(exc) - - async def send(self, item: UDPPacketType) -> None: - with self._send_guard: - try: - await self._trio_socket.sendto(*item) - except BaseException as exc: - self._convert_socket_error(exc) - - -class ConnectedUDPSocket(_TrioSocketMixin[IPSockAddrType], abc.ConnectedUDPSocket): - def __init__(self, trio_socket: TrioSocketType) -> None: - super().__init__(trio_socket) - self._receive_guard = ResourceGuard("reading from") - self._send_guard = ResourceGuard("writing to") - - async def receive(self) -> bytes: - with self._receive_guard: - try: - return await self._trio_socket.recv(65536) - except BaseException as exc: - self._convert_socket_error(exc) - - async def send(self, item: bytes) -> None: - with self._send_guard: - try: - await self._trio_socket.send(item) - except BaseException as exc: - self._convert_socket_error(exc) - - -async def connect_tcp( - host: str, port: int, local_address: IPSockAddrType | None = None -) -> SocketStream: - family = socket.AF_INET6 if ":" in host else socket.AF_INET - trio_socket = trio.socket.socket(family) - trio_socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1) - if local_address: - await trio_socket.bind(local_address) - - try: - await trio_socket.connect((host, port)) - except BaseException: - trio_socket.close() - raise - - return SocketStream(trio_socket) - - -async def connect_unix(path: str) -> UNIXSocketStream: - trio_socket = trio.socket.socket(socket.AF_UNIX) - try: - await trio_socket.connect(path) - except BaseException: - trio_socket.close() - raise - - return UNIXSocketStream(trio_socket) - - -async def create_udp_socket( - family: socket.AddressFamily, - local_address: IPSockAddrType | None, - remote_address: IPSockAddrType | None, - reuse_port: bool, -) -> UDPSocket | ConnectedUDPSocket: - trio_socket = trio.socket.socket(family=family, type=socket.SOCK_DGRAM) - - if reuse_port: - trio_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1) - - if local_address: - await trio_socket.bind(local_address) - - if remote_address: - await trio_socket.connect(remote_address) - return ConnectedUDPSocket(trio_socket) - else: - return UDPSocket(trio_socket) - - -getaddrinfo = trio.socket.getaddrinfo -getnameinfo = trio.socket.getnameinfo - - -async def wait_socket_readable(sock: socket.socket) -> None: - try: - await wait_readable(sock) - except trio.ClosedResourceError as exc: - raise ClosedResourceError().with_traceback(exc.__traceback__) from None - except trio.BusyResourceError: - raise BusyResourceError("reading from") from None - - -async def wait_socket_writable(sock: socket.socket) -> None: - try: - await wait_writable(sock) - except trio.ClosedResourceError as exc: - raise ClosedResourceError().with_traceback(exc.__traceback__) from None - except trio.BusyResourceError: - raise BusyResourceError("writing to") from None - - -# -# Synchronization -# - - -class Event(BaseEvent): - def __new__(cls) -> Event: - return object.__new__(cls) - - def __init__(self) -> None: - self.__original = trio.Event() - - def is_set(self) -> bool: - return self.__original.is_set() - - async def wait(self) -> None: - return await self.__original.wait() - - def statistics(self) -> EventStatistics: - orig_statistics = self.__original.statistics() - return 
EventStatistics(tasks_waiting=orig_statistics.tasks_waiting) - - def set(self) -> DeprecatedAwaitable: - self.__original.set() - return DeprecatedAwaitable(self.set) - - -class CapacityLimiter(BaseCapacityLimiter): - def __new__(cls, *args: object, **kwargs: object) -> CapacityLimiter: - return object.__new__(cls) - - def __init__( - self, *args: Any, original: trio.CapacityLimiter | None = None - ) -> None: - self.__original = original or trio.CapacityLimiter(*args) - - async def __aenter__(self) -> None: - return await self.__original.__aenter__() - - async def __aexit__( - self, - exc_type: type[BaseException] | None, - exc_val: BaseException | None, - exc_tb: TracebackType | None, - ) -> None: - await self.__original.__aexit__(exc_type, exc_val, exc_tb) - - @property - def total_tokens(self) -> float: - return self.__original.total_tokens - - @total_tokens.setter - def total_tokens(self, value: float) -> None: - self.__original.total_tokens = value - - @property - def borrowed_tokens(self) -> int: - return self.__original.borrowed_tokens - - @property - def available_tokens(self) -> float: - return self.__original.available_tokens - - def acquire_nowait(self) -> DeprecatedAwaitable: - self.__original.acquire_nowait() - return DeprecatedAwaitable(self.acquire_nowait) - - def acquire_on_behalf_of_nowait(self, borrower: object) -> DeprecatedAwaitable: - self.__original.acquire_on_behalf_of_nowait(borrower) - return DeprecatedAwaitable(self.acquire_on_behalf_of_nowait) - - async def acquire(self) -> None: - await self.__original.acquire() - - async def acquire_on_behalf_of(self, borrower: object) -> None: - await self.__original.acquire_on_behalf_of(borrower) - - def release(self) -> None: - return self.__original.release() - - def release_on_behalf_of(self, borrower: object) -> None: - return self.__original.release_on_behalf_of(borrower) - - def statistics(self) -> CapacityLimiterStatistics: - orig = self.__original.statistics() - return CapacityLimiterStatistics( - borrowed_tokens=orig.borrowed_tokens, - total_tokens=orig.total_tokens, - borrowers=orig.borrowers, - tasks_waiting=orig.tasks_waiting, - ) - - -_capacity_limiter_wrapper: RunVar = RunVar("_capacity_limiter_wrapper") - - -def current_default_thread_limiter() -> CapacityLimiter: - try: - return _capacity_limiter_wrapper.get() - except LookupError: - limiter = CapacityLimiter( - original=trio.to_thread.current_default_thread_limiter() - ) - _capacity_limiter_wrapper.set(limiter) - return limiter - - -# -# Signal handling -# - - -class _SignalReceiver(DeprecatedAsyncContextManager["_SignalReceiver"]): - _iterator: AsyncIterator[int] - - def __init__(self, signals: tuple[Signals, ...]): - self._signals = signals - - def __enter__(self) -> _SignalReceiver: - self._cm = trio.open_signal_receiver(*self._signals) - self._iterator = self._cm.__enter__() - return self - - def __exit__( - self, - exc_type: type[BaseException] | None, - exc_val: BaseException | None, - exc_tb: TracebackType | None, - ) -> bool | None: - return self._cm.__exit__(exc_type, exc_val, exc_tb) - - def __aiter__(self) -> _SignalReceiver: - return self - - async def __anext__(self) -> Signals: - signum = await self._iterator.__anext__() - return Signals(signum) - - -def open_signal_receiver(*signals: Signals) -> _SignalReceiver: - return _SignalReceiver(signals) - - -# -# Testing and debugging -# - - -def get_current_task() -> TaskInfo: - task = trio_lowlevel.current_task() - - parent_id = None - if task.parent_nursery and task.parent_nursery.parent_task: - 
parent_id = id(task.parent_nursery.parent_task) - - return TaskInfo(id(task), parent_id, task.name, task.coro) - - -def get_running_tasks() -> list[TaskInfo]: - root_task = trio_lowlevel.current_root_task() - task_infos = [TaskInfo(id(root_task), None, root_task.name, root_task.coro)] - nurseries = root_task.child_nurseries - while nurseries: - new_nurseries: list[trio.Nursery] = [] - for nursery in nurseries: - for task in nursery.child_tasks: - task_infos.append( - TaskInfo(id(task), id(nursery.parent_task), task.name, task.coro) - ) - new_nurseries.extend(task.child_nurseries) - - nurseries = new_nurseries - - return task_infos - - -def wait_all_tasks_blocked() -> Awaitable[None]: - import trio.testing - - return trio.testing.wait_all_tasks_blocked() - - -class TestRunner(abc.TestRunner): - def __init__(self, **options: Any) -> None: - from collections import deque - from queue import Queue - - self._call_queue: Queue[Callable[..., object]] = Queue() - self._result_queue: deque[Outcome] = deque() - self._stop_event: trio.Event | None = None - self._nursery: trio.Nursery | None = None - self._options = options - - async def _trio_main(self) -> None: - self._stop_event = trio.Event() - async with trio.open_nursery() as self._nursery: - await self._stop_event.wait() - - async def _call_func( - self, func: Callable[..., Awaitable[object]], args: tuple, kwargs: dict - ) -> None: - try: - retval = await func(*args, **kwargs) - except BaseException as exc: - self._result_queue.append(Error(exc)) - else: - self._result_queue.append(Value(retval)) - - def _main_task_finished(self, outcome: object) -> None: - self._nursery = None - - def _get_nursery(self) -> trio.Nursery: - if self._nursery is None: - trio.lowlevel.start_guest_run( - self._trio_main, - run_sync_soon_threadsafe=self._call_queue.put, - done_callback=self._main_task_finished, - **self._options, - ) - while self._nursery is None: - self._call_queue.get()() - - return self._nursery - - def _call( - self, func: Callable[..., Awaitable[T_Retval]], *args: object, **kwargs: object - ) -> T_Retval: - self._get_nursery().start_soon(self._call_func, func, args, kwargs) - while not self._result_queue: - self._call_queue.get()() - - outcome = self._result_queue.pop() - return outcome.unwrap() - - def close(self) -> None: - if self._stop_event: - self._stop_event.set() - while self._nursery is not None: - self._call_queue.get()() - - def run_asyncgen_fixture( - self, - fixture_func: Callable[..., AsyncGenerator[T_Retval, Any]], - kwargs: dict[str, Any], - ) -> Iterable[T_Retval]: - async def fixture_runner(*, task_status: TaskStatus[T_Retval]) -> None: - agen = fixture_func(**kwargs) - retval = await agen.asend(None) - task_status.started(retval) - await teardown_event.wait() - try: - await agen.asend(None) - except StopAsyncIteration: - pass - else: - await agen.aclose() - raise RuntimeError("Async generator fixture did not stop") - - teardown_event = trio.Event() - fixture_value = self._call(lambda: self._get_nursery().start(fixture_runner)) - yield fixture_value - teardown_event.set() - - def run_fixture( - self, - fixture_func: Callable[..., Coroutine[Any, Any, T_Retval]], - kwargs: dict[str, Any], - ) -> T_Retval: - return self._call(fixture_func, **kwargs) - - def run_test( - self, test_func: Callable[..., Coroutine[Any, Any, Any]], kwargs: dict[str, Any] - ) -> None: - self._call(test_func, **kwargs) diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydev_bundle/_pydev_calltip_util.py 
b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydev_bundle/_pydev_calltip_util.py deleted file mode 100644 index aca108fa095ee3b020b797de600cdbf0514c6fd1..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydev_bundle/_pydev_calltip_util.py +++ /dev/null @@ -1,155 +0,0 @@ -''' -License: Apache 2.0 -Author: Yuli Fitterman -''' -import types - -from _pydevd_bundle.pydevd_constants import IS_JYTHON - -try: - import inspect -except: - import traceback; - - traceback.print_exc() # Ok, no inspect available (search will not work) - -from _pydev_bundle._pydev_imports_tipper import signature_from_docstring - - -def is_bound_method(obj): - if isinstance(obj, types.MethodType): - return getattr(obj, '__self__', getattr(obj, 'im_self', None)) is not None - else: - return False - - -def get_class_name(instance): - return getattr(getattr(instance, "__class__", None), "__name__", None) - - -def get_bound_class_name(obj): - my_self = getattr(obj, '__self__', getattr(obj, 'im_self', None)) - if my_self is None: - return None - return get_class_name(my_self) - - -def get_description(obj): - try: - ob_call = obj.__call__ - except: - ob_call = None - - if isinstance(obj, type) or type(obj).__name__ == 'classobj': - fob = getattr(obj, '__init__', lambda: None) - if not isinstance(fob, (types.FunctionType, types.MethodType)): - fob = obj - elif is_bound_method(ob_call): - fob = ob_call - else: - fob = obj - - argspec = "" - fn_name = None - fn_class = None - if isinstance(fob, (types.FunctionType, types.MethodType)): - spec_info = inspect.getfullargspec(fob) - argspec = inspect.formatargspec(*spec_info) - fn_name = getattr(fob, '__name__', None) - if isinstance(obj, type) or type(obj).__name__ == 'classobj': - fn_name = "__init__" - fn_class = getattr(obj, "__name__", "UnknownClass") - elif is_bound_method(obj) or is_bound_method(ob_call): - fn_class = get_bound_class_name(obj) or "UnknownClass" - - else: - fn_name = getattr(fob, '__name__', None) - fn_self = getattr(fob, '__self__', None) - if fn_self is not None and not isinstance(fn_self, types.ModuleType): - fn_class = get_class_name(fn_self) - - doc_string = get_docstring(ob_call) if is_bound_method(ob_call) else get_docstring(obj) - return create_method_stub(fn_name, fn_class, argspec, doc_string) - - -def create_method_stub(fn_name, fn_class, argspec, doc_string): - if fn_name and argspec: - doc_string = "" if doc_string is None else doc_string - fn_stub = create_function_stub(fn_name, argspec, doc_string, indent=1 if fn_class else 0) - if fn_class: - expr = fn_class if fn_name == '__init__' else fn_class + '().' 
+ fn_name - return create_class_stub(fn_class, fn_stub) + "\n" + expr - else: - expr = fn_name - return fn_stub + "\n" + expr - elif doc_string: - if fn_name: - restored_signature, _ = signature_from_docstring(doc_string, fn_name) - if restored_signature: - return create_method_stub(fn_name, fn_class, restored_signature, doc_string) - return create_function_stub('unknown', '(*args, **kwargs)', doc_string) + '\nunknown' - - else: - return '' - - -def get_docstring(obj): - if obj is not None: - try: - if IS_JYTHON: - # Jython - doc = obj.__doc__ - if doc is not None: - return doc - - from _pydev_bundle import _pydev_jy_imports_tipper - - is_method, infos = _pydev_jy_imports_tipper.ismethod(obj) - ret = '' - if is_method: - for info in infos: - ret += info.get_as_doc() - return ret - - else: - - doc = inspect.getdoc(obj) - if doc is not None: - return doc - except: - pass - else: - return '' - try: - # if no attempt succeeded, try to return repr()... - return repr(obj) - except: - try: - # otherwise the class - return str(obj.__class__) - except: - # if all fails, go to an empty string - return '' - - -def create_class_stub(class_name, contents): - return "class %s(object):\n%s" % (class_name, contents) - - -def create_function_stub(fn_name, fn_argspec, fn_docstring, indent=0): - - def shift_right(string, prefix): - return ''.join(prefix + line for line in string.splitlines(True)) - - fn_docstring = shift_right(inspect.cleandoc(fn_docstring), " " * (indent + 1)) - ret = ''' -def %s%s: - """%s""" - pass -''' % (fn_name, fn_argspec, fn_docstring) - ret = ret[1:] # remove first /n - ret = ret.replace('\t', " ") - if indent: - prefix = " " * indent - ret = shift_right(ret, prefix) - return ret diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_bytecode_utils.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_bytecode_utils.py deleted file mode 100644 index e8c9f5479c0bb20aea64f4d0b808ed87a7539876..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_bytecode_utils.py +++ /dev/null @@ -1,843 +0,0 @@ -""" -Bytecode analysing utils. Originally added for using in smart step into. - -Note: not importable from Python 2. -""" - -from _pydev_bundle import pydev_log -from types import CodeType -from _pydevd_frame_eval.vendored.bytecode.instr import _Variable -from _pydevd_frame_eval.vendored import bytecode -from _pydevd_frame_eval.vendored.bytecode import cfg as bytecode_cfg -import dis -import opcode as _opcode - -from _pydevd_bundle.pydevd_constants import KeyifyList, DebugInfoHolder, IS_PY311_OR_GREATER -from bisect import bisect -from collections import deque - -# When True, throws errors on unknown bytecodes, when False, ignore those as if they didn't change the stack. 
-STRICT_MODE = False - -DEBUG = False - -_BINARY_OPS = set([opname for opname in dis.opname if opname.startswith('BINARY_')]) - -_BINARY_OP_MAP = { - 'BINARY_POWER': '__pow__', - 'BINARY_MULTIPLY': '__mul__', - 'BINARY_MATRIX_MULTIPLY': '__matmul__', - 'BINARY_FLOOR_DIVIDE': '__floordiv__', - 'BINARY_TRUE_DIVIDE': '__div__', - 'BINARY_MODULO': '__mod__', - 'BINARY_ADD': '__add__', - 'BINARY_SUBTRACT': '__sub__', - 'BINARY_LSHIFT': '__lshift__', - 'BINARY_RSHIFT': '__rshift__', - 'BINARY_AND': '__and__', - 'BINARY_OR': '__or__', - 'BINARY_XOR': '__xor__', - 'BINARY_SUBSCR': '__getitem__', - 'BINARY_DIVIDE': '__div__' -} - -_COMP_OP_MAP = { - '<': '__lt__', - '<=': '__le__', - '==': '__eq__', - '!=': '__ne__', - '>': '__gt__', - '>=': '__ge__', - 'in': '__contains__', - 'not in': '__contains__', -} - - -class Target(object): - __slots__ = ['arg', 'lineno', 'offset', 'children_targets'] - - def __init__(self, arg, lineno, offset, children_targets=()): - self.arg = arg - self.lineno = lineno - self.offset = offset - self.children_targets = children_targets - - def __repr__(self): - ret = [] - for s in self.__slots__: - ret.append('%s: %s' % (s, getattr(self, s))) - return 'Target(%s)' % ', '.join(ret) - - __str__ = __repr__ - - -class _TargetIdHashable(object): - - def __init__(self, target): - self.target = target - - def __eq__(self, other): - if not hasattr(other, 'target'): - return - return other.target is self.target - - def __ne__(self, other): - return not self == other - - def __hash__(self): - return id(self.target) - - -class _StackInterpreter(object): - ''' - Good reference: https://github.com/python/cpython/blob/fcb55c0037baab6f98f91ee38ce84b6f874f034a/Python/ceval.c - ''' - - def __init__(self, bytecode): - self.bytecode = bytecode - self._stack = deque() - self.function_calls = [] - self.load_attrs = {} - self.func = set() - self.func_name_id_to_code_object = {} - - def __str__(self): - return 'Stack:\nFunction calls:\n%s\nLoad attrs:\n%s\n' % (self.function_calls, list(self.load_attrs.values())) - - def _getname(self, instr): - if instr.opcode in _opcode.hascompare: - cmp_op = dis.cmp_op[instr.arg] - if cmp_op not in ('exception match', 'BAD'): - return _COMP_OP_MAP.get(cmp_op, cmp_op) - return instr.arg - - def _getcallname(self, instr): - if instr.name == 'BINARY_SUBSCR': - return '__getitem__().__call__' - if instr.name == 'CALL_FUNCTION': - # Note: previously a '__call__().__call__' was returned, but this was a bit weird - # and on Python 3.9 this construct could appear for some internal things where - # it wouldn't be expected. - # Note: it'd be what we had in func()(). - return None - if instr.name == 'MAKE_FUNCTION': - return '__func__().__call__' - if instr.name == 'LOAD_ASSERTION_ERROR': - return 'AssertionError' - name = self._getname(instr) - if isinstance(name, CodeType): - name = name.co_qualname # Note: only available for Python 3.11 - if isinstance(name, _Variable): - name = name.name - - if not isinstance(name, str): - return None - if name.endswith('>'): # xxx., xxx., ... - return name.split('.')[-1] - return name - - def _no_stack_change(self, instr): - pass # Can be aliased when the instruction does nothing. 
- - def on_LOAD_GLOBAL(self, instr): - self._stack.append(instr) - - def on_POP_TOP(self, instr): - try: - self._stack.pop() - except IndexError: - pass # Ok (in the end of blocks) - - def on_LOAD_ATTR(self, instr): - self.on_POP_TOP(instr) # replaces the current top - self._stack.append(instr) - self.load_attrs[_TargetIdHashable(instr)] = Target(self._getname(instr), instr.lineno, instr.offset) - - on_LOOKUP_METHOD = on_LOAD_ATTR # Improvement in PyPy - - def on_LOAD_CONST(self, instr): - self._stack.append(instr) - - on_LOAD_DEREF = on_LOAD_CONST - on_LOAD_NAME = on_LOAD_CONST - on_LOAD_CLOSURE = on_LOAD_CONST - on_LOAD_CLASSDEREF = on_LOAD_CONST - - # Although it actually changes the stack, it's inconsequential for us as a function call can't - # really be found there. - on_IMPORT_NAME = _no_stack_change - on_IMPORT_FROM = _no_stack_change - on_IMPORT_STAR = _no_stack_change - on_SETUP_ANNOTATIONS = _no_stack_change - - def on_STORE_FAST(self, instr): - try: - self._stack.pop() - except IndexError: - pass # Ok, we may have a block just with the store - - # Note: it stores in the locals and doesn't put anything in the stack. - - on_STORE_GLOBAL = on_STORE_FAST - on_STORE_DEREF = on_STORE_FAST - on_STORE_ATTR = on_STORE_FAST - on_STORE_NAME = on_STORE_FAST - - on_DELETE_NAME = on_POP_TOP - on_DELETE_ATTR = on_POP_TOP - on_DELETE_GLOBAL = on_POP_TOP - on_DELETE_FAST = on_POP_TOP - on_DELETE_DEREF = on_POP_TOP - - on_DICT_UPDATE = on_POP_TOP - on_SET_UPDATE = on_POP_TOP - - on_GEN_START = on_POP_TOP - - def on_NOP(self, instr): - pass - - def _handle_call_from_instr(self, func_name_instr, func_call_instr): - self.load_attrs.pop(_TargetIdHashable(func_name_instr), None) - call_name = self._getcallname(func_name_instr) - target = None - if not call_name: - pass # Ignore if we can't identify a name - elif call_name in ('', '', '', ''): - code_obj = self.func_name_id_to_code_object[_TargetIdHashable(func_name_instr)] - if code_obj is not None: - children_targets = _get_smart_step_into_targets(code_obj) - if children_targets: - # i.e.: we have targets inside of a or . - # Note that to actually match this in the debugger we need to do matches on 2 frames, - # the one with the and then the actual target inside the . 
- target = Target(call_name, func_name_instr.lineno, func_call_instr.offset, children_targets) - self.function_calls.append( - target) - - else: - # Ok, regular call - target = Target(call_name, func_name_instr.lineno, func_call_instr.offset) - self.function_calls.append(target) - - if DEBUG and target is not None: - print('Created target', target) - self._stack.append(func_call_instr) # Keep the func call as the result - - def on_COMPARE_OP(self, instr): - try: - _right = self._stack.pop() - except IndexError: - return - try: - _left = self._stack.pop() - except IndexError: - return - - cmp_op = dis.cmp_op[instr.arg] - if cmp_op not in ('exception match', 'BAD'): - self.function_calls.append(Target(self._getname(instr), instr.lineno, instr.offset)) - - self._stack.append(instr) - - def on_IS_OP(self, instr): - try: - self._stack.pop() - except IndexError: - return - try: - self._stack.pop() - except IndexError: - return - - def on_BINARY_SUBSCR(self, instr): - try: - _sub = self._stack.pop() - except IndexError: - return - try: - _container = self._stack.pop() - except IndexError: - return - self.function_calls.append(Target(_BINARY_OP_MAP[instr.name], instr.lineno, instr.offset)) - self._stack.append(instr) - - on_BINARY_MATRIX_MULTIPLY = on_BINARY_SUBSCR - on_BINARY_POWER = on_BINARY_SUBSCR - on_BINARY_MULTIPLY = on_BINARY_SUBSCR - on_BINARY_FLOOR_DIVIDE = on_BINARY_SUBSCR - on_BINARY_TRUE_DIVIDE = on_BINARY_SUBSCR - on_BINARY_MODULO = on_BINARY_SUBSCR - on_BINARY_ADD = on_BINARY_SUBSCR - on_BINARY_SUBTRACT = on_BINARY_SUBSCR - on_BINARY_LSHIFT = on_BINARY_SUBSCR - on_BINARY_RSHIFT = on_BINARY_SUBSCR - on_BINARY_AND = on_BINARY_SUBSCR - on_BINARY_OR = on_BINARY_SUBSCR - on_BINARY_XOR = on_BINARY_SUBSCR - - def on_LOAD_METHOD(self, instr): - self.on_POP_TOP(instr) # Remove the previous as we're loading something from it. - self._stack.append(instr) - - def on_MAKE_FUNCTION(self, instr): - if not IS_PY311_OR_GREATER: - # The qualifier name is no longer put in the stack. - qualname = self._stack.pop() - code_obj_instr = self._stack.pop() - else: - # In 3.11 the code object has a co_qualname which we can use. 
- qualname = code_obj_instr = self._stack.pop() - - arg = instr.arg - if arg & 0x08: - _func_closure = self._stack.pop() - if arg & 0x04: - _func_annotations = self._stack.pop() - if arg & 0x02: - _func_kwdefaults = self._stack.pop() - if arg & 0x01: - _func_defaults = self._stack.pop() - - call_name = self._getcallname(qualname) - if call_name in ('', '', '', ''): - if isinstance(code_obj_instr.arg, CodeType): - self.func_name_id_to_code_object[_TargetIdHashable(qualname)] = code_obj_instr.arg - self._stack.append(qualname) - - def on_LOAD_FAST(self, instr): - self._stack.append(instr) - - def on_LOAD_ASSERTION_ERROR(self, instr): - self._stack.append(instr) - - on_LOAD_BUILD_CLASS = on_LOAD_FAST - - def on_CALL_METHOD(self, instr): - # pop the actual args - for _ in range(instr.arg): - self._stack.pop() - - func_name_instr = self._stack.pop() - self._handle_call_from_instr(func_name_instr, instr) - - def on_PUSH_NULL(self, instr): - self._stack.append(instr) - - def on_CALL_FUNCTION(self, instr): - arg = instr.arg - - argc = arg & 0xff # positional args - argc += ((arg >> 8) * 2) # keyword args - - # pop the actual args - for _ in range(argc): - try: - self._stack.pop() - except IndexError: - return - - try: - func_name_instr = self._stack.pop() - except IndexError: - return - self._handle_call_from_instr(func_name_instr, instr) - - def on_CALL_FUNCTION_KW(self, instr): - # names of kw args - _names_of_kw_args = self._stack.pop() - - # pop the actual args - arg = instr.arg - - argc = arg & 0xff # positional args - argc += ((arg >> 8) * 2) # keyword args - - for _ in range(argc): - self._stack.pop() - - func_name_instr = self._stack.pop() - self._handle_call_from_instr(func_name_instr, instr) - - def on_CALL_FUNCTION_VAR(self, instr): - # var name - _var_arg = self._stack.pop() - - # pop the actual args - arg = instr.arg - - argc = arg & 0xff # positional args - argc += ((arg >> 8) * 2) # keyword args - - for _ in range(argc): - self._stack.pop() - - func_name_instr = self._stack.pop() - self._handle_call_from_instr(func_name_instr, instr) - - def on_CALL_FUNCTION_VAR_KW(self, instr): - # names of kw args - _names_of_kw_args = self._stack.pop() - - arg = instr.arg - - argc = arg & 0xff # positional args - argc += ((arg >> 8) * 2) # keyword args - - # also pop **kwargs - self._stack.pop() - - # pop the actual args - for _ in range(argc): - self._stack.pop() - - func_name_instr = self._stack.pop() - self._handle_call_from_instr(func_name_instr, instr) - - def on_CALL_FUNCTION_EX(self, instr): - if instr.arg & 0x01: - _kwargs = self._stack.pop() - _callargs = self._stack.pop() - func_name_instr = self._stack.pop() - self._handle_call_from_instr(func_name_instr, instr) - - on_YIELD_VALUE = _no_stack_change - on_GET_AITER = _no_stack_change - on_GET_ANEXT = _no_stack_change - on_END_ASYNC_FOR = _no_stack_change - on_BEFORE_ASYNC_WITH = _no_stack_change - on_SETUP_ASYNC_WITH = _no_stack_change - on_YIELD_FROM = _no_stack_change - on_SETUP_LOOP = _no_stack_change - on_FOR_ITER = _no_stack_change - on_BREAK_LOOP = _no_stack_change - on_JUMP_ABSOLUTE = _no_stack_change - on_RERAISE = _no_stack_change - on_LIST_TO_TUPLE = _no_stack_change - on_CALL_FINALLY = _no_stack_change - on_POP_FINALLY = _no_stack_change - - def on_JUMP_IF_FALSE_OR_POP(self, instr): - try: - self._stack.pop() - except IndexError: - return - - on_JUMP_IF_TRUE_OR_POP = on_JUMP_IF_FALSE_OR_POP - - def on_JUMP_IF_NOT_EXC_MATCH(self, instr): - try: - self._stack.pop() - except IndexError: - return - try: - self._stack.pop() - 
except IndexError: - return - - def on_ROT_TWO(self, instr): - try: - p0 = self._stack.pop() - except IndexError: - return - - try: - p1 = self._stack.pop() - except: - self._stack.append(p0) - return - - self._stack.append(p0) - self._stack.append(p1) - - def on_ROT_THREE(self, instr): - try: - p0 = self._stack.pop() - except IndexError: - return - - try: - p1 = self._stack.pop() - except: - self._stack.append(p0) - return - - try: - p2 = self._stack.pop() - except: - self._stack.append(p0) - self._stack.append(p1) - return - - self._stack.append(p0) - self._stack.append(p1) - self._stack.append(p2) - - def on_ROT_FOUR(self, instr): - try: - p0 = self._stack.pop() - except IndexError: - return - - try: - p1 = self._stack.pop() - except: - self._stack.append(p0) - return - - try: - p2 = self._stack.pop() - except: - self._stack.append(p0) - self._stack.append(p1) - return - - try: - p3 = self._stack.pop() - except: - self._stack.append(p0) - self._stack.append(p1) - self._stack.append(p2) - return - - self._stack.append(p0) - self._stack.append(p1) - self._stack.append(p2) - self._stack.append(p3) - - def on_BUILD_LIST_FROM_ARG(self, instr): - self._stack.append(instr) - - def on_BUILD_MAP(self, instr): - for _i in range(instr.arg): - self._stack.pop() - self._stack.pop() - self._stack.append(instr) - - def on_BUILD_CONST_KEY_MAP(self, instr): - self.on_POP_TOP(instr) # keys - for _i in range(instr.arg): - self.on_POP_TOP(instr) # value - self._stack.append(instr) - - on_RETURN_VALUE = on_POP_TOP - on_POP_JUMP_IF_FALSE = on_POP_TOP - on_POP_JUMP_IF_TRUE = on_POP_TOP - on_DICT_MERGE = on_POP_TOP - on_LIST_APPEND = on_POP_TOP - on_SET_ADD = on_POP_TOP - on_LIST_EXTEND = on_POP_TOP - on_UNPACK_EX = on_POP_TOP - - # ok: doesn't change the stack (converts top to getiter(top)) - on_GET_ITER = _no_stack_change - on_GET_AWAITABLE = _no_stack_change - on_GET_YIELD_FROM_ITER = _no_stack_change - - def on_RETURN_GENERATOR(self, instr): - self._stack.append(instr) - - on_RETURN_GENERATOR = _no_stack_change - on_RESUME = _no_stack_change - - def on_MAP_ADD(self, instr): - self.on_POP_TOP(instr) - self.on_POP_TOP(instr) - - def on_UNPACK_SEQUENCE(self, instr): - self._stack.pop() - for _i in range(instr.arg): - self._stack.append(instr) - - def on_BUILD_LIST(self, instr): - for _i in range(instr.arg): - self.on_POP_TOP(instr) - self._stack.append(instr) - - on_BUILD_TUPLE = on_BUILD_LIST - on_BUILD_STRING = on_BUILD_LIST - on_BUILD_TUPLE_UNPACK_WITH_CALL = on_BUILD_LIST - on_BUILD_TUPLE_UNPACK = on_BUILD_LIST - on_BUILD_LIST_UNPACK = on_BUILD_LIST - on_BUILD_MAP_UNPACK_WITH_CALL = on_BUILD_LIST - on_BUILD_MAP_UNPACK = on_BUILD_LIST - on_BUILD_SET = on_BUILD_LIST - on_BUILD_SET_UNPACK = on_BUILD_LIST - - on_SETUP_FINALLY = _no_stack_change - on_POP_FINALLY = _no_stack_change - on_BEGIN_FINALLY = _no_stack_change - on_END_FINALLY = _no_stack_change - - def on_RAISE_VARARGS(self, instr): - for _i in range(instr.arg): - self.on_POP_TOP(instr) - - on_POP_BLOCK = _no_stack_change - on_JUMP_FORWARD = _no_stack_change - on_POP_EXCEPT = _no_stack_change - on_SETUP_EXCEPT = _no_stack_change - on_WITH_EXCEPT_START = _no_stack_change - - on_END_FINALLY = _no_stack_change - on_BEGIN_FINALLY = _no_stack_change - on_SETUP_WITH = _no_stack_change - on_WITH_CLEANUP_START = _no_stack_change - on_WITH_CLEANUP_FINISH = _no_stack_change - on_FORMAT_VALUE = _no_stack_change - on_EXTENDED_ARG = _no_stack_change - - def on_INPLACE_ADD(self, instr): - # This would actually pop 2 and leave the value in the stack. 
- # In a += 1 it pop `a` and `1` and leave the resulting value - # for a load. In our case, let's just pop the `1` and leave the `a` - # instead of leaving the INPLACE_ADD bytecode. - try: - self._stack.pop() - except IndexError: - pass - - on_INPLACE_POWER = on_INPLACE_ADD - on_INPLACE_MULTIPLY = on_INPLACE_ADD - on_INPLACE_MATRIX_MULTIPLY = on_INPLACE_ADD - on_INPLACE_TRUE_DIVIDE = on_INPLACE_ADD - on_INPLACE_FLOOR_DIVIDE = on_INPLACE_ADD - on_INPLACE_MODULO = on_INPLACE_ADD - on_INPLACE_SUBTRACT = on_INPLACE_ADD - on_INPLACE_RSHIFT = on_INPLACE_ADD - on_INPLACE_LSHIFT = on_INPLACE_ADD - on_INPLACE_AND = on_INPLACE_ADD - on_INPLACE_OR = on_INPLACE_ADD - on_INPLACE_XOR = on_INPLACE_ADD - - def on_DUP_TOP(self, instr): - try: - i = self._stack[-1] - except IndexError: - # ok (in the start of block) - self._stack.append(instr) - else: - self._stack.append(i) - - def on_DUP_TOP_TWO(self, instr): - if len(self._stack) == 0: - self._stack.append(instr) - return - - if len(self._stack) == 1: - i = self._stack[-1] - self._stack.append(i) - self._stack.append(instr) - return - - i = self._stack[-1] - j = self._stack[-2] - self._stack.append(j) - self._stack.append(i) - - def on_BUILD_SLICE(self, instr): - for _ in range(instr.arg): - try: - self._stack.pop() - except IndexError: - pass - self._stack.append(instr) - - def on_STORE_SUBSCR(self, instr): - try: - self._stack.pop() - self._stack.pop() - self._stack.pop() - except IndexError: - pass - - def on_DELETE_SUBSCR(self, instr): - try: - self._stack.pop() - self._stack.pop() - except IndexError: - pass - - # Note: on Python 3 this is only found on interactive mode to print the results of - # some evaluation. - on_PRINT_EXPR = on_POP_TOP - - on_UNARY_POSITIVE = _no_stack_change - on_UNARY_NEGATIVE = _no_stack_change - on_UNARY_NOT = _no_stack_change - on_UNARY_INVERT = _no_stack_change - - on_CACHE = _no_stack_change - on_PRECALL = _no_stack_change - - -def _get_smart_step_into_targets(code): - ''' - :return list(Target) - ''' - b = bytecode.Bytecode.from_code(code) - cfg = bytecode_cfg.ControlFlowGraph.from_bytecode(b) - - ret = [] - - for block in cfg: - if DEBUG: - print('\nStart block----') - stack = _StackInterpreter(block) - for instr in block: - try: - func_name = 'on_%s' % (instr.name,) - func = getattr(stack, func_name, None) - - if DEBUG: - if instr.name != 'CACHE': # Filter the ones we don't want to see. - print('\nWill handle: ', instr, '>>', stack._getname(instr), '<<') - print('Current stack:') - for entry in stack._stack: - print(' arg:', stack._getname(entry), '(', entry, ')') - - if func is None: - if STRICT_MODE: - raise AssertionError('%s not found.' % (func_name,)) - else: - continue - func(instr) - except: - if STRICT_MODE: - raise # Error in strict mode. - else: - # In non-strict mode, log it (if in verbose mode) and keep on going. - if DebugInfoHolder.DEBUG_TRACE_LEVEL >= 2: - pydev_log.exception('Exception computing step into targets (handled).') - - ret.extend(stack.function_calls) - # No longer considering attr loads as calls (while in theory sometimes it's possible - # that something as `some.attr` can turn out to be a property which could be stepped - # in, it's not that common in practice and can be surprising for users, so, disabling - # step into from stepping into properties). - # ret.extend(stack.load_attrs.values()) - - return ret - - -# Note that the offset is unique within the frame (so, we can use it as the target id). 
-# Also, as the offset is the instruction offset within the frame, it's possible to -# to inspect the parent frame for frame.f_lasti to know where we actually are (as the -# caller name may not always match the new frame name). -class Variant(object): - __slots__ = ['name', 'is_visited', 'line', 'offset', 'call_order', 'children_variants', 'parent'] - - def __init__(self, name, is_visited, line, offset, call_order, children_variants=None): - self.name = name - self.is_visited = is_visited - self.line = line - self.offset = offset - self.call_order = call_order - self.children_variants = children_variants - self.parent = None - if children_variants: - for variant in children_variants: - variant.parent = self - - def __repr__(self): - ret = [] - for s in self.__slots__: - if s == 'parent': - try: - parent = self.parent - except AttributeError: - ret.append('%s: ' % (s,)) - else: - if parent is None: - ret.append('parent: None') - else: - ret.append('parent: %s (%s)' % (parent.name, parent.offset)) - continue - - if s == 'children_variants': - ret.append('children_variants: %s' % (len(self.children_variants) if self.children_variants else 0)) - continue - - try: - ret.append('%s: %s' % (s, getattr(self, s))) - except AttributeError: - ret.append('%s: ' % (s,)) - return 'Variant(%s)' % ', '.join(ret) - - __str__ = __repr__ - - -def _convert_target_to_variant(target, start_line, end_line, call_order_cache, lasti, base): - name = target.arg - if not isinstance(name, str): - return - if target.lineno > end_line: - return - if target.lineno < start_line: - return - - call_order = call_order_cache.get(name, 0) + 1 - call_order_cache[name] = call_order - is_visited = target.offset <= lasti - - children_targets = target.children_targets - children_variants = None - if children_targets: - children_variants = [ - _convert_target_to_variant(child, start_line, end_line, call_order_cache, lasti, base) - for child in target.children_targets] - - return Variant(name, is_visited, target.lineno - base, target.offset, call_order, children_variants) - - -def calculate_smart_step_into_variants(frame, start_line, end_line, base=0): - """ - Calculate smart step into variants for the given line range. - :param frame: - :type frame: :py:class:`types.FrameType` - :param start_line: - :param end_line: - :return: A list of call names from the first to the last. - :note: it's guaranteed that the offsets appear in order. - :raise: :py:class:`RuntimeError` if failed to parse the bytecode or if dis cannot be used. - """ - variants = [] - code = frame.f_code - lasti = frame.f_lasti - - call_order_cache = {} - if DEBUG: - print('dis.dis:') - if IS_PY311_OR_GREATER: - dis.dis(code, show_caches=False) - else: - dis.dis(code) - - for target in _get_smart_step_into_targets(code): - variant = _convert_target_to_variant(target, start_line, end_line, call_order_cache, lasti, base) - if variant is None: - continue - variants.append(variant) - - return variants - - -def get_smart_step_into_variant_from_frame_offset(frame_f_lasti, variants): - """ - Given the frame.f_lasti, return the related `Variant`. - - :note: if the offset is found before any variant available or no variants are - available, None is returned. 
- - :rtype: Variant|NoneType - """ - if not variants: - return None - - i = bisect(KeyifyList(variants, lambda entry:entry.offset), frame_f_lasti) - - if i == 0: - return None - - else: - return variants[i - 1] diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_thread_lifecycle.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_thread_lifecycle.py deleted file mode 100644 index 069b6b6a9a503910546747aa20e16f25183fec72..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_thread_lifecycle.py +++ /dev/null @@ -1,96 +0,0 @@ -from _pydevd_bundle import pydevd_utils -from _pydevd_bundle.pydevd_additional_thread_info import set_additional_thread_info -from _pydevd_bundle.pydevd_comm_constants import CMD_STEP_INTO, CMD_THREAD_SUSPEND -from _pydevd_bundle.pydevd_constants import PYTHON_SUSPEND, STATE_SUSPEND, get_thread_id, STATE_RUN -from _pydev_bundle._pydev_saved_modules import threading -from _pydev_bundle import pydev_log - - -def pydevd_find_thread_by_id(thread_id): - try: - threads = threading.enumerate() - for i in threads: - tid = get_thread_id(i) - if thread_id == tid or thread_id.endswith('|' + tid): - return i - - # This can happen when a request comes for a thread which was previously removed. - pydev_log.info("Could not find thread %s.", thread_id) - pydev_log.info("Available: %s.", ([get_thread_id(t) for t in threads],)) - except: - pydev_log.exception() - - return None - - -def mark_thread_suspended(thread, stop_reason, original_step_cmd=-1): - info = set_additional_thread_info(thread) - info.suspend_type = PYTHON_SUSPEND - if original_step_cmd != -1: - stop_reason = original_step_cmd - thread.stop_reason = stop_reason - - # Note: don't set the 'pydev_original_step_cmd' here if unset. - - if info.pydev_step_cmd == -1: - # If the step command is not specified, set it to step into - # to make sure it'll break as soon as possible. - info.pydev_step_cmd = CMD_STEP_INTO - info.pydev_step_stop = None - - # Mark as suspended as the last thing. - info.pydev_state = STATE_SUSPEND - - return info - - -def internal_run_thread(thread, set_additional_thread_info): - info = set_additional_thread_info(thread) - info.pydev_original_step_cmd = -1 - info.pydev_step_cmd = -1 - info.pydev_step_stop = None - info.pydev_state = STATE_RUN - - -def resume_threads(thread_id, except_thread=None): - pydev_log.info('Resuming threads: %s (except thread: %s)', thread_id, except_thread) - threads = [] - if thread_id == '*': - threads = pydevd_utils.get_non_pydevd_threads() - - elif thread_id.startswith('__frame__:'): - pydev_log.critical("Can't make tasklet run: %s", thread_id) - - else: - threads = [pydevd_find_thread_by_id(thread_id)] - - for t in threads: - if t is None or t is except_thread: - pydev_log.info('Skipped resuming thread: %s', t) - continue - - internal_run_thread(t, set_additional_thread_info=set_additional_thread_info) - - -def suspend_all_threads(py_db, except_thread): - ''' - Suspend all except the one passed as a parameter. - :param except_thread: - ''' - pydev_log.info('Suspending all threads except: %s', except_thread) - all_threads = pydevd_utils.get_non_pydevd_threads() - for t in all_threads: - if getattr(t, 'pydev_do_not_trace', None): - pass # skip some other threads, i.e. 
ipython history saving thread from debug console - else: - if t is except_thread: - continue - info = mark_thread_suspended(t, CMD_THREAD_SUSPEND) - frame = info.get_topmost_frame(t) - - # Reset the tracing as in this case as it could've set scopes to be untraced. - if frame is not None: - try: - py_db.set_trace_for_frame_and_parents(frame) - finally: - frame = None diff --git a/spaces/Superlang/ImageProcessor/annotator/normalbae/models/submodules/efficientnet_repo/geffnet/conv2d_layers.py b/spaces/Superlang/ImageProcessor/annotator/normalbae/models/submodules/efficientnet_repo/geffnet/conv2d_layers.py deleted file mode 100644 index d8467460c4b36e54c83ce2dcd3ebe91d3432cad2..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/normalbae/models/submodules/efficientnet_repo/geffnet/conv2d_layers.py +++ /dev/null @@ -1,304 +0,0 @@ -""" Conv2D w/ SAME padding, CondConv, MixedConv - -A collection of conv layers and padding helpers needed by EfficientNet, MixNet, and -MobileNetV3 models that maintain weight compatibility with original Tensorflow models. - -Copyright 2020 Ross Wightman -""" -import collections.abc -import math -from functools import partial -from itertools import repeat -from typing import Tuple, Optional - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F - -from .config import * - - -# From PyTorch internals -def _ntuple(n): - def parse(x): - if isinstance(x, collections.abc.Iterable): - return x - return tuple(repeat(x, n)) - return parse - - -_single = _ntuple(1) -_pair = _ntuple(2) -_triple = _ntuple(3) -_quadruple = _ntuple(4) - - -def _is_static_pad(kernel_size, stride=1, dilation=1, **_): - return stride == 1 and (dilation * (kernel_size - 1)) % 2 == 0 - - -def _get_padding(kernel_size, stride=1, dilation=1, **_): - padding = ((stride - 1) + dilation * (kernel_size - 1)) // 2 - return padding - - -def _calc_same_pad(i: int, k: int, s: int, d: int): - return max((-(i // -s) - 1) * s + (k - 1) * d + 1 - i, 0) - - -def _same_pad_arg(input_size, kernel_size, stride, dilation): - ih, iw = input_size - kh, kw = kernel_size - pad_h = _calc_same_pad(ih, kh, stride[0], dilation[0]) - pad_w = _calc_same_pad(iw, kw, stride[1], dilation[1]) - return [pad_w // 2, pad_w - pad_w // 2, pad_h // 2, pad_h - pad_h // 2] - - -def _split_channels(num_chan, num_groups): - split = [num_chan // num_groups for _ in range(num_groups)] - split[0] += num_chan - sum(split) - return split - - -def conv2d_same( - x, weight: torch.Tensor, bias: Optional[torch.Tensor] = None, stride: Tuple[int, int] = (1, 1), - padding: Tuple[int, int] = (0, 0), dilation: Tuple[int, int] = (1, 1), groups: int = 1): - ih, iw = x.size()[-2:] - kh, kw = weight.size()[-2:] - pad_h = _calc_same_pad(ih, kh, stride[0], dilation[0]) - pad_w = _calc_same_pad(iw, kw, stride[1], dilation[1]) - x = F.pad(x, [pad_w // 2, pad_w - pad_w // 2, pad_h // 2, pad_h - pad_h // 2]) - return F.conv2d(x, weight, bias, stride, (0, 0), dilation, groups) - - -class Conv2dSame(nn.Conv2d): - """ Tensorflow like 'SAME' convolution wrapper for 2D convolutions - """ - - # pylint: disable=unused-argument - def __init__(self, in_channels, out_channels, kernel_size, stride=1, - padding=0, dilation=1, groups=1, bias=True): - super(Conv2dSame, self).__init__( - in_channels, out_channels, kernel_size, stride, 0, dilation, groups, bias) - - def forward(self, x): - return conv2d_same(x, self.weight, self.bias, self.stride, self.padding, self.dilation, self.groups) - - -class 
Conv2dSameExport(nn.Conv2d): - """ ONNX export friendly Tensorflow like 'SAME' convolution wrapper for 2D convolutions - - NOTE: This does not currently work with torch.jit.script - """ - - # pylint: disable=unused-argument - def __init__(self, in_channels, out_channels, kernel_size, stride=1, - padding=0, dilation=1, groups=1, bias=True): - super(Conv2dSameExport, self).__init__( - in_channels, out_channels, kernel_size, stride, 0, dilation, groups, bias) - self.pad = None - self.pad_input_size = (0, 0) - - def forward(self, x): - input_size = x.size()[-2:] - if self.pad is None: - pad_arg = _same_pad_arg(input_size, self.weight.size()[-2:], self.stride, self.dilation) - self.pad = nn.ZeroPad2d(pad_arg) - self.pad_input_size = input_size - - if self.pad is not None: - x = self.pad(x) - return F.conv2d( - x, self.weight, self.bias, self.stride, self.padding, self.dilation, self.groups) - - -def get_padding_value(padding, kernel_size, **kwargs): - dynamic = False - if isinstance(padding, str): - # for any string padding, the padding will be calculated for you, one of three ways - padding = padding.lower() - if padding == 'same': - # TF compatible 'SAME' padding, has a performance and GPU memory allocation impact - if _is_static_pad(kernel_size, **kwargs): - # static case, no extra overhead - padding = _get_padding(kernel_size, **kwargs) - else: - # dynamic padding - padding = 0 - dynamic = True - elif padding == 'valid': - # 'VALID' padding, same as padding=0 - padding = 0 - else: - # Default to PyTorch style 'same'-ish symmetric padding - padding = _get_padding(kernel_size, **kwargs) - return padding, dynamic - - -def create_conv2d_pad(in_chs, out_chs, kernel_size, **kwargs): - padding = kwargs.pop('padding', '') - kwargs.setdefault('bias', False) - padding, is_dynamic = get_padding_value(padding, kernel_size, **kwargs) - if is_dynamic: - if is_exportable(): - assert not is_scriptable() - return Conv2dSameExport(in_chs, out_chs, kernel_size, **kwargs) - else: - return Conv2dSame(in_chs, out_chs, kernel_size, **kwargs) - else: - return nn.Conv2d(in_chs, out_chs, kernel_size, padding=padding, **kwargs) - - -class MixedConv2d(nn.ModuleDict): - """ Mixed Grouped Convolution - Based on MDConv and GroupedConv in MixNet impl: - https://github.com/tensorflow/tpu/blob/master/models/official/mnasnet/mixnet/custom_layers.py - """ - - def __init__(self, in_channels, out_channels, kernel_size=3, - stride=1, padding='', dilation=1, depthwise=False, **kwargs): - super(MixedConv2d, self).__init__() - - kernel_size = kernel_size if isinstance(kernel_size, list) else [kernel_size] - num_groups = len(kernel_size) - in_splits = _split_channels(in_channels, num_groups) - out_splits = _split_channels(out_channels, num_groups) - self.in_channels = sum(in_splits) - self.out_channels = sum(out_splits) - for idx, (k, in_ch, out_ch) in enumerate(zip(kernel_size, in_splits, out_splits)): - conv_groups = out_ch if depthwise else 1 - self.add_module( - str(idx), - create_conv2d_pad( - in_ch, out_ch, k, stride=stride, - padding=padding, dilation=dilation, groups=conv_groups, **kwargs) - ) - self.splits = in_splits - - def forward(self, x): - x_split = torch.split(x, self.splits, 1) - x_out = [conv(x_split[i]) for i, conv in enumerate(self.values())] - x = torch.cat(x_out, 1) - return x - - -def get_condconv_initializer(initializer, num_experts, expert_shape): - def condconv_initializer(weight): - """CondConv initializer function.""" - num_params = np.prod(expert_shape) - if (len(weight.shape) != 2 or weight.shape[0] != 
num_experts or - weight.shape[1] != num_params): - raise (ValueError( - 'CondConv variables must have shape [num_experts, num_params]')) - for i in range(num_experts): - initializer(weight[i].view(expert_shape)) - return condconv_initializer - - -class CondConv2d(nn.Module): - """ Conditional Convolution - Inspired by: https://github.com/tensorflow/tpu/blob/master/models/official/efficientnet/condconv/condconv_layers.py - - Grouped convolution hackery for parallel execution of the per-sample kernel filters inspired by this discussion: - https://github.com/pytorch/pytorch/issues/17983 - """ - __constants__ = ['bias', 'in_channels', 'out_channels', 'dynamic_padding'] - - def __init__(self, in_channels, out_channels, kernel_size=3, - stride=1, padding='', dilation=1, groups=1, bias=False, num_experts=4): - super(CondConv2d, self).__init__() - - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = _pair(kernel_size) - self.stride = _pair(stride) - padding_val, is_padding_dynamic = get_padding_value( - padding, kernel_size, stride=stride, dilation=dilation) - self.dynamic_padding = is_padding_dynamic # if in forward to work with torchscript - self.padding = _pair(padding_val) - self.dilation = _pair(dilation) - self.groups = groups - self.num_experts = num_experts - - self.weight_shape = (self.out_channels, self.in_channels // self.groups) + self.kernel_size - weight_num_param = 1 - for wd in self.weight_shape: - weight_num_param *= wd - self.weight = torch.nn.Parameter(torch.Tensor(self.num_experts, weight_num_param)) - - if bias: - self.bias_shape = (self.out_channels,) - self.bias = torch.nn.Parameter(torch.Tensor(self.num_experts, self.out_channels)) - else: - self.register_parameter('bias', None) - - self.reset_parameters() - - def reset_parameters(self): - init_weight = get_condconv_initializer( - partial(nn.init.kaiming_uniform_, a=math.sqrt(5)), self.num_experts, self.weight_shape) - init_weight(self.weight) - if self.bias is not None: - fan_in = np.prod(self.weight_shape[1:]) - bound = 1 / math.sqrt(fan_in) - init_bias = get_condconv_initializer( - partial(nn.init.uniform_, a=-bound, b=bound), self.num_experts, self.bias_shape) - init_bias(self.bias) - - def forward(self, x, routing_weights): - B, C, H, W = x.shape - weight = torch.matmul(routing_weights, self.weight) - new_weight_shape = (B * self.out_channels, self.in_channels // self.groups) + self.kernel_size - weight = weight.view(new_weight_shape) - bias = None - if self.bias is not None: - bias = torch.matmul(routing_weights, self.bias) - bias = bias.view(B * self.out_channels) - # move batch elements with channels so each batch element can be efficiently convolved with separate kernel - x = x.view(1, B * C, H, W) - if self.dynamic_padding: - out = conv2d_same( - x, weight, bias, stride=self.stride, padding=self.padding, - dilation=self.dilation, groups=self.groups * B) - else: - out = F.conv2d( - x, weight, bias, stride=self.stride, padding=self.padding, - dilation=self.dilation, groups=self.groups * B) - out = out.permute([1, 0, 2, 3]).view(B, self.out_channels, out.shape[-2], out.shape[-1]) - - # Literal port (from TF definition) - # x = torch.split(x, 1, 0) - # weight = torch.split(weight, 1, 0) - # if self.bias is not None: - # bias = torch.matmul(routing_weights, self.bias) - # bias = torch.split(bias, 1, 0) - # else: - # bias = [None] * B - # out = [] - # for xi, wi, bi in zip(x, weight, bias): - # wi = wi.view(*self.weight_shape) - # if bi is not None: - # bi = bi.view(*self.bias_shape) 
- # out.append(self.conv_fn( - # xi, wi, bi, stride=self.stride, padding=self.padding, - # dilation=self.dilation, groups=self.groups)) - # out = torch.cat(out, 0) - return out - - -def select_conv2d(in_chs, out_chs, kernel_size, **kwargs): - assert 'groups' not in kwargs # only use 'depthwise' bool arg - if isinstance(kernel_size, list): - assert 'num_experts' not in kwargs # MixNet + CondConv combo not supported currently - # We're going to use only lists for defining the MixedConv2d kernel groups, - # ints, tuples, other iterables will continue to pass to normal conv and specify h, w. - m = MixedConv2d(in_chs, out_chs, kernel_size, **kwargs) - else: - depthwise = kwargs.pop('depthwise', False) - groups = out_chs if depthwise else 1 - if 'num_experts' in kwargs and kwargs['num_experts'] > 0: - m = CondConv2d(in_chs, out_chs, kernel_size, groups=groups, **kwargs) - else: - m = create_conv2d_pad(in_chs, out_chs, kernel_size, groups=groups, **kwargs) - return m diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/checkpoint/c2_model_loading.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/checkpoint/c2_model_loading.py deleted file mode 100644 index c6de2a3c830089aa7a0d27df96bb4a45fc5a7b0d..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/checkpoint/c2_model_loading.py +++ /dev/null @@ -1,412 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import copy -import logging -import re -from typing import Dict, List -import torch -from tabulate import tabulate - - -def convert_basic_c2_names(original_keys): - """ - Apply some basic name conversion to names in C2 weights. - It only deals with typical backbone models. - - Args: - original_keys (list[str]): - Returns: - list[str]: The same number of strings matching those in original_keys. - """ - layer_keys = copy.deepcopy(original_keys) - layer_keys = [ - {"pred_b": "linear_b", "pred_w": "linear_w"}.get(k, k) for k in layer_keys - ] # some hard-coded mappings - - layer_keys = [k.replace("_", ".") for k in layer_keys] - layer_keys = [re.sub("\\.b$", ".bias", k) for k in layer_keys] - layer_keys = [re.sub("\\.w$", ".weight", k) for k in layer_keys] - # Uniform both bn and gn names to "norm" - layer_keys = [re.sub("bn\\.s$", "norm.weight", k) for k in layer_keys] - layer_keys = [re.sub("bn\\.bias$", "norm.bias", k) for k in layer_keys] - layer_keys = [re.sub("bn\\.rm", "norm.running_mean", k) for k in layer_keys] - layer_keys = [re.sub("bn\\.running.mean$", "norm.running_mean", k) for k in layer_keys] - layer_keys = [re.sub("bn\\.riv$", "norm.running_var", k) for k in layer_keys] - layer_keys = [re.sub("bn\\.running.var$", "norm.running_var", k) for k in layer_keys] - layer_keys = [re.sub("bn\\.gamma$", "norm.weight", k) for k in layer_keys] - layer_keys = [re.sub("bn\\.beta$", "norm.bias", k) for k in layer_keys] - layer_keys = [re.sub("gn\\.s$", "norm.weight", k) for k in layer_keys] - layer_keys = [re.sub("gn\\.bias$", "norm.bias", k) for k in layer_keys] - - # stem - layer_keys = [re.sub("^res\\.conv1\\.norm\\.", "conv1.norm.", k) for k in layer_keys] - # to avoid mis-matching with "conv1" in other components (e.g. 
detection head) - layer_keys = [re.sub("^conv1\\.", "stem.conv1.", k) for k in layer_keys] - - # layer1-4 is used by torchvision, however we follow the C2 naming strategy (res2-5) - # layer_keys = [re.sub("^res2.", "layer1.", k) for k in layer_keys] - # layer_keys = [re.sub("^res3.", "layer2.", k) for k in layer_keys] - # layer_keys = [re.sub("^res4.", "layer3.", k) for k in layer_keys] - # layer_keys = [re.sub("^res5.", "layer4.", k) for k in layer_keys] - - # blocks - layer_keys = [k.replace(".branch1.", ".shortcut.") for k in layer_keys] - layer_keys = [k.replace(".branch2a.", ".conv1.") for k in layer_keys] - layer_keys = [k.replace(".branch2b.", ".conv2.") for k in layer_keys] - layer_keys = [k.replace(".branch2c.", ".conv3.") for k in layer_keys] - - # DensePose substitutions - layer_keys = [re.sub("^body.conv.fcn", "body_conv_fcn", k) for k in layer_keys] - layer_keys = [k.replace("AnnIndex.lowres", "ann_index_lowres") for k in layer_keys] - layer_keys = [k.replace("Index.UV.lowres", "index_uv_lowres") for k in layer_keys] - layer_keys = [k.replace("U.lowres", "u_lowres") for k in layer_keys] - layer_keys = [k.replace("V.lowres", "v_lowres") for k in layer_keys] - return layer_keys - - -def convert_c2_detectron_names(weights): - """ - Map Caffe2 Detectron weight names to Detectron2 names. - - Args: - weights (dict): name -> tensor - - Returns: - dict: detectron2 names -> tensor - dict: detectron2 names -> C2 names - """ - logger = logging.getLogger(__name__) - logger.info("Renaming Caffe2 weights ......") - original_keys = sorted(weights.keys()) - layer_keys = copy.deepcopy(original_keys) - - layer_keys = convert_basic_c2_names(layer_keys) - - # -------------------------------------------------------------------------- - # RPN hidden representation conv - # -------------------------------------------------------------------------- - # FPN case - # In the C2 model, the RPN hidden layer conv is defined for FPN level 2 and then - # shared for all other levels, hence the appearance of "fpn2" - layer_keys = [ - k.replace("conv.rpn.fpn2", "proposal_generator.rpn_head.conv") for k in layer_keys - ] - # Non-FPN case - layer_keys = [k.replace("conv.rpn", "proposal_generator.rpn_head.conv") for k in layer_keys] - - # -------------------------------------------------------------------------- - # RPN box transformation conv - # -------------------------------------------------------------------------- - # FPN case (see note above about "fpn2") - layer_keys = [ - k.replace("rpn.bbox.pred.fpn2", "proposal_generator.rpn_head.anchor_deltas") - for k in layer_keys - ] - layer_keys = [ - k.replace("rpn.cls.logits.fpn2", "proposal_generator.rpn_head.objectness_logits") - for k in layer_keys - ] - # Non-FPN case - layer_keys = [ - k.replace("rpn.bbox.pred", "proposal_generator.rpn_head.anchor_deltas") for k in layer_keys - ] - layer_keys = [ - k.replace("rpn.cls.logits", "proposal_generator.rpn_head.objectness_logits") - for k in layer_keys - ] - - # -------------------------------------------------------------------------- - # Fast R-CNN box head - # -------------------------------------------------------------------------- - layer_keys = [re.sub("^bbox\\.pred", "bbox_pred", k) for k in layer_keys] - layer_keys = [re.sub("^cls\\.score", "cls_score", k) for k in layer_keys] - layer_keys = [re.sub("^fc6\\.", "box_head.fc1.", k) for k in layer_keys] - layer_keys = [re.sub("^fc7\\.", "box_head.fc2.", k) for k in layer_keys] - # 4conv1fc head tensor names: head_conv1_w, head_conv1_gn_s - layer_keys = 
[re.sub("^head\\.conv", "box_head.conv", k) for k in layer_keys] - - # -------------------------------------------------------------------------- - # FPN lateral and output convolutions - # -------------------------------------------------------------------------- - def fpn_map(name): - """ - Look for keys with the following patterns: - 1) Starts with "fpn.inner." - Example: "fpn.inner.res2.2.sum.lateral.weight" - Meaning: These are lateral pathway convolutions - 2) Starts with "fpn.res" - Example: "fpn.res2.2.sum.weight" - Meaning: These are FPN output convolutions - """ - splits = name.split(".") - norm = ".norm" if "norm" in splits else "" - if name.startswith("fpn.inner."): - # splits example: ['fpn', 'inner', 'res2', '2', 'sum', 'lateral', 'weight'] - stage = int(splits[2][len("res") :]) - return "fpn_lateral{}{}.{}".format(stage, norm, splits[-1]) - elif name.startswith("fpn.res"): - # splits example: ['fpn', 'res2', '2', 'sum', 'weight'] - stage = int(splits[1][len("res") :]) - return "fpn_output{}{}.{}".format(stage, norm, splits[-1]) - return name - - layer_keys = [fpn_map(k) for k in layer_keys] - - # -------------------------------------------------------------------------- - # Mask R-CNN mask head - # -------------------------------------------------------------------------- - # roi_heads.StandardROIHeads case - layer_keys = [k.replace(".[mask].fcn", "mask_head.mask_fcn") for k in layer_keys] - layer_keys = [re.sub("^\\.mask\\.fcn", "mask_head.mask_fcn", k) for k in layer_keys] - layer_keys = [k.replace("mask.fcn.logits", "mask_head.predictor") for k in layer_keys] - # roi_heads.Res5ROIHeads case - layer_keys = [k.replace("conv5.mask", "mask_head.deconv") for k in layer_keys] - - # -------------------------------------------------------------------------- - # Keypoint R-CNN head - # -------------------------------------------------------------------------- - # interestingly, the keypoint head convs have blob names that are simply "conv_fcnX" - layer_keys = [k.replace("conv.fcn", "roi_heads.keypoint_head.conv_fcn") for k in layer_keys] - layer_keys = [ - k.replace("kps.score.lowres", "roi_heads.keypoint_head.score_lowres") for k in layer_keys - ] - layer_keys = [k.replace("kps.score.", "roi_heads.keypoint_head.score.") for k in layer_keys] - - # -------------------------------------------------------------------------- - # Done with replacements - # -------------------------------------------------------------------------- - assert len(set(layer_keys)) == len(layer_keys) - assert len(original_keys) == len(layer_keys) - - new_weights = {} - new_keys_to_original_keys = {} - for orig, renamed in zip(original_keys, layer_keys): - new_keys_to_original_keys[renamed] = orig - if renamed.startswith("bbox_pred.") or renamed.startswith("mask_head.predictor."): - # remove the meaningless prediction weight for background class - new_start_idx = 4 if renamed.startswith("bbox_pred.") else 1 - new_weights[renamed] = weights[orig][new_start_idx:] - logger.info( - "Remove prediction weight for background class in {}. 
The shape changes from " - "{} to {}.".format( - renamed, tuple(weights[orig].shape), tuple(new_weights[renamed].shape) - ) - ) - elif renamed.startswith("cls_score."): - # move weights of bg class from original index 0 to last index - logger.info( - "Move classification weights for background class in {} from index 0 to " - "index {}.".format(renamed, weights[orig].shape[0] - 1) - ) - new_weights[renamed] = torch.cat([weights[orig][1:], weights[orig][:1]]) - else: - new_weights[renamed] = weights[orig] - - return new_weights, new_keys_to_original_keys - - -# Note the current matching is not symmetric. -# it assumes model_state_dict will have longer names. -def align_and_update_state_dicts(model_state_dict, ckpt_state_dict, c2_conversion=True): - """ - Match names between the two state-dict, and returns a new chkpt_state_dict with names - converted to match model_state_dict with heuristics. The returned dict can be later - loaded with fvcore checkpointer. - If `c2_conversion==True`, `ckpt_state_dict` is assumed to be a Caffe2 - model and will be renamed at first. - - Strategy: suppose that the models that we will create will have prefixes appended - to each of its keys, for example due to an extra level of nesting that the original - pre-trained weights from ImageNet won't contain. For example, model.state_dict() - might return backbone[0].body.res2.conv1.weight, while the pre-trained model contains - res2.conv1.weight. We thus want to match both parameters together. - For that, we look for each model weight, look among all loaded keys if there is one - that is a suffix of the current weight name, and use it if that's the case. - If multiple matches exist, take the one with longest size - of the corresponding name. For example, for the same model as before, the pretrained - weight file can contain both res2.conv1.weight, as well as conv1.weight. In this case, - we want to match backbone[0].body.conv1.weight to conv1.weight, and - backbone[0].body.res2.conv1.weight to res2.conv1.weight. - """ - model_keys = sorted(model_state_dict.keys()) - if c2_conversion: - ckpt_state_dict, original_keys = convert_c2_detectron_names(ckpt_state_dict) - # original_keys: the name in the original dict (before renaming) - else: - original_keys = {x: x for x in ckpt_state_dict.keys()} - ckpt_keys = sorted(ckpt_state_dict.keys()) - - def match(a, b): - # Matched ckpt_key should be a complete (starts with '.') suffix. - # For example, roi_heads.mesh_head.whatever_conv1 does not match conv1, - # but matches whatever_conv1 or mesh_head.whatever_conv1. - return a == b or a.endswith("." 
+ b) - - # get a matrix of string matches, where each (i, j) entry correspond to the size of the - # ckpt_key string, if it matches - match_matrix = [len(j) if match(i, j) else 0 for i in model_keys for j in ckpt_keys] - match_matrix = torch.as_tensor(match_matrix).view(len(model_keys), len(ckpt_keys)) - # use the matched one with longest size in case of multiple matches - max_match_size, idxs = match_matrix.max(1) - # remove indices that correspond to no-match - idxs[max_match_size == 0] = -1 - - logger = logging.getLogger(__name__) - # matched_pairs (matched checkpoint key --> matched model key) - matched_keys = {} - result_state_dict = {} - for idx_model, idx_ckpt in enumerate(idxs.tolist()): - if idx_ckpt == -1: - continue - key_model = model_keys[idx_model] - key_ckpt = ckpt_keys[idx_ckpt] - value_ckpt = ckpt_state_dict[key_ckpt] - shape_in_model = model_state_dict[key_model].shape - - if shape_in_model != value_ckpt.shape: - logger.warning( - "Shape of {} in checkpoint is {}, while shape of {} in model is {}.".format( - key_ckpt, value_ckpt.shape, key_model, shape_in_model - ) - ) - logger.warning( - "{} will not be loaded. Please double check and see if this is desired.".format( - key_ckpt - ) - ) - continue - - assert key_model not in result_state_dict - result_state_dict[key_model] = value_ckpt - if key_ckpt in matched_keys: # already added to matched_keys - logger.error( - "Ambiguity found for {} in checkpoint!" - "It matches at least two keys in the model ({} and {}).".format( - key_ckpt, key_model, matched_keys[key_ckpt] - ) - ) - raise ValueError("Cannot match one checkpoint key to multiple keys in the model.") - - matched_keys[key_ckpt] = key_model - - # logging: - matched_model_keys = sorted(matched_keys.values()) - if len(matched_model_keys) == 0: - logger.warning("No weights in checkpoint matched with model.") - return ckpt_state_dict - common_prefix = _longest_common_prefix(matched_model_keys) - rev_matched_keys = {v: k for k, v in matched_keys.items()} - original_keys = {k: original_keys[rev_matched_keys[k]] for k in matched_model_keys} - - model_key_groups = _group_keys_by_module(matched_model_keys, original_keys) - table = [] - memo = set() - for key_model in matched_model_keys: - if key_model in memo: - continue - if key_model in model_key_groups: - group = model_key_groups[key_model] - memo |= set(group) - shapes = [tuple(model_state_dict[k].shape) for k in group] - table.append( - ( - _longest_common_prefix([k[len(common_prefix) :] for k in group]) + "*", - _group_str([original_keys[k] for k in group]), - " ".join([str(x).replace(" ", "") for x in shapes]), - ) - ) - else: - key_checkpoint = original_keys[key_model] - shape = str(tuple(model_state_dict[key_model].shape)) - table.append((key_model[len(common_prefix) :], key_checkpoint, shape)) - table_str = tabulate( - table, tablefmt="pipe", headers=["Names in Model", "Names in Checkpoint", "Shapes"] - ) - logger.info( - "Following weights matched with " - + (f"submodule {common_prefix[:-1]}" if common_prefix else "model") - + ":\n" - + table_str - ) - - unmatched_ckpt_keys = [k for k in ckpt_keys if k not in set(matched_keys.keys())] - for k in unmatched_ckpt_keys: - result_state_dict[k] = ckpt_state_dict[k] - return result_state_dict - - -def _group_keys_by_module(keys: List[str], original_names: Dict[str, str]): - """ - Params in the same submodule are grouped together. 
- - Args: - keys: names of all parameters - original_names: mapping from parameter name to their name in the checkpoint - - Returns: - dict[name -> all other names in the same group] - """ - - def _submodule_name(key): - pos = key.rfind(".") - if pos < 0: - return None - prefix = key[: pos + 1] - return prefix - - all_submodules = [_submodule_name(k) for k in keys] - all_submodules = [x for x in all_submodules if x] - all_submodules = sorted(all_submodules, key=len) - - ret = {} - for prefix in all_submodules: - group = [k for k in keys if k.startswith(prefix)] - if len(group) <= 1: - continue - original_name_lcp = _longest_common_prefix_str([original_names[k] for k in group]) - if len(original_name_lcp) == 0: - # don't group weights if original names don't share prefix - continue - - for k in group: - if k in ret: - continue - ret[k] = group - return ret - - -def _longest_common_prefix(names: List[str]) -> str: - """ - ["abc.zfg", "abc.zef"] -> "abc." - """ - names = [n.split(".") for n in names] - m1, m2 = min(names), max(names) - ret = [a for a, b in zip(m1, m2) if a == b] - ret = ".".join(ret) + "." if len(ret) else "" - return ret - - -def _longest_common_prefix_str(names: List[str]) -> str: - m1, m2 = min(names), max(names) - lcp = [] - for a, b in zip(m1, m2): - if a == b: - lcp.append(a) - else: - break - lcp = "".join(lcp) - return lcp - - -def _group_str(names: List[str]) -> str: - """ - Turn "common1", "common2", "common3" into "common{1,2,3}" - """ - lcp = _longest_common_prefix_str(names) - rest = [x[len(lcp) :] for x in names] - rest = "{" + ",".join(rest) + "}" - ret = lcp + rest - - # add some simplification for BN specifically - ret = ret.replace("bn_{beta,running_mean,running_var,gamma}", "bn_*") - ret = ret.replace("bn_beta,bn_running_mean,bn_running_var,bn_gamma", "bn_*") - return ret diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/structures/boxes.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/structures/boxes.py deleted file mode 100644 index fd396f68645db1d6946056eed868ffcc02cd7a22..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/structures/boxes.py +++ /dev/null @@ -1,425 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import math -import numpy as np -from enum import IntEnum, unique -from typing import List, Tuple, Union -import torch -from torch import device - -_RawBoxType = Union[List[float], Tuple[float, ...], torch.Tensor, np.ndarray] - - -@unique -class BoxMode(IntEnum): - """ - Enum of different ways to represent a box. - """ - - XYXY_ABS = 0 - """ - (x0, y0, x1, y1) in absolute floating points coordinates. - The coordinates in range [0, width or height]. - """ - XYWH_ABS = 1 - """ - (x0, y0, w, h) in absolute floating points coordinates. - """ - XYXY_REL = 2 - """ - Not yet supported! - (x0, y0, x1, y1) in range [0, 1]. They are relative to the size of the image. - """ - XYWH_REL = 3 - """ - Not yet supported! - (x0, y0, w, h) in range [0, 1]. They are relative to the size of the image. - """ - XYWHA_ABS = 4 - """ - (xc, yc, w, h, a) in absolute floating points coordinates. - (xc, yc) is the center of the rotated box, and the angle a is in degrees ccw. - """ - - @staticmethod - def convert(box: _RawBoxType, from_mode: "BoxMode", to_mode: "BoxMode") -> _RawBoxType: - """ - Args: - box: can be a k-tuple, k-list or an Nxk array/tensor, where k = 4 or 5 - from_mode, to_mode (BoxMode) - - Returns: - The converted box of the same type. 
- """ - if from_mode == to_mode: - return box - - original_type = type(box) - is_numpy = isinstance(box, np.ndarray) - single_box = isinstance(box, (list, tuple)) - if single_box: - assert len(box) == 4 or len(box) == 5, ( - "BoxMode.convert takes either a k-tuple/list or an Nxk array/tensor," - " where k == 4 or 5" - ) - arr = torch.tensor(box)[None, :] - else: - # avoid modifying the input box - if is_numpy: - arr = torch.from_numpy(np.asarray(box)).clone() - else: - arr = box.clone() - - assert to_mode not in [BoxMode.XYXY_REL, BoxMode.XYWH_REL] and from_mode not in [ - BoxMode.XYXY_REL, - BoxMode.XYWH_REL, - ], "Relative mode not yet supported!" - - if from_mode == BoxMode.XYWHA_ABS and to_mode == BoxMode.XYXY_ABS: - assert ( - arr.shape[-1] == 5 - ), "The last dimension of input shape must be 5 for XYWHA format" - original_dtype = arr.dtype - arr = arr.double() - - w = arr[:, 2] - h = arr[:, 3] - a = arr[:, 4] - c = torch.abs(torch.cos(a * math.pi / 180.0)) - s = torch.abs(torch.sin(a * math.pi / 180.0)) - # This basically computes the horizontal bounding rectangle of the rotated box - new_w = c * w + s * h - new_h = c * h + s * w - - # convert center to top-left corner - arr[:, 0] -= new_w / 2.0 - arr[:, 1] -= new_h / 2.0 - # bottom-right corner - arr[:, 2] = arr[:, 0] + new_w - arr[:, 3] = arr[:, 1] + new_h - - arr = arr[:, :4].to(dtype=original_dtype) - elif from_mode == BoxMode.XYWH_ABS and to_mode == BoxMode.XYWHA_ABS: - original_dtype = arr.dtype - arr = arr.double() - arr[:, 0] += arr[:, 2] / 2.0 - arr[:, 1] += arr[:, 3] / 2.0 - angles = torch.zeros((arr.shape[0], 1), dtype=arr.dtype) - arr = torch.cat((arr, angles), axis=1).to(dtype=original_dtype) - else: - if to_mode == BoxMode.XYXY_ABS and from_mode == BoxMode.XYWH_ABS: - arr[:, 2] += arr[:, 0] - arr[:, 3] += arr[:, 1] - elif from_mode == BoxMode.XYXY_ABS and to_mode == BoxMode.XYWH_ABS: - arr[:, 2] -= arr[:, 0] - arr[:, 3] -= arr[:, 1] - else: - raise NotImplementedError( - "Conversion from BoxMode {} to {} is not supported yet".format( - from_mode, to_mode - ) - ) - - if single_box: - return original_type(arr.flatten().tolist()) - if is_numpy: - return arr.numpy() - else: - return arr - - -class Boxes: - """ - This structure stores a list of boxes as a Nx4 torch.Tensor. - It supports some common methods about boxes - (`area`, `clip`, `nonempty`, etc), - and also behaves like a Tensor - (support indexing, `to(device)`, `.device`, and iteration over all boxes) - - Attributes: - tensor (torch.Tensor): float matrix of Nx4. Each row is (x1, y1, x2, y2). - """ - - def __init__(self, tensor: torch.Tensor): - """ - Args: - tensor (Tensor[float]): a Nx4 matrix. Each row is (x1, y1, x2, y2). - """ - if not isinstance(tensor, torch.Tensor): - tensor = torch.as_tensor(tensor, dtype=torch.float32, device=torch.device("cpu")) - else: - tensor = tensor.to(torch.float32) - if tensor.numel() == 0: - # Use reshape, so we don't end up creating a new tensor that does not depend on - # the inputs (and consequently confuses jit) - tensor = tensor.reshape((-1, 4)).to(dtype=torch.float32) - assert tensor.dim() == 2 and tensor.size(-1) == 4, tensor.size() - - self.tensor = tensor - - def clone(self) -> "Boxes": - """ - Clone the Boxes. - - Returns: - Boxes - """ - return Boxes(self.tensor.clone()) - - def to(self, device: torch.device): - # Boxes are assumed float32 and does not support to(dtype) - return Boxes(self.tensor.to(device=device)) - - def area(self) -> torch.Tensor: - """ - Computes the area of all the boxes. 
- - Returns: - torch.Tensor: a vector with areas of each box. - """ - box = self.tensor - area = (box[:, 2] - box[:, 0]) * (box[:, 3] - box[:, 1]) - return area - - def clip(self, box_size: Tuple[int, int]) -> None: - """ - Clip (in place) the boxes by limiting x coordinates to the range [0, width] - and y coordinates to the range [0, height]. - - Args: - box_size (height, width): The clipping box's size. - """ - assert torch.isfinite(self.tensor).all(), "Box tensor contains infinite or NaN!" - h, w = box_size - x1 = self.tensor[:, 0].clamp(min=0, max=w) - y1 = self.tensor[:, 1].clamp(min=0, max=h) - x2 = self.tensor[:, 2].clamp(min=0, max=w) - y2 = self.tensor[:, 3].clamp(min=0, max=h) - self.tensor = torch.stack((x1, y1, x2, y2), dim=-1) - - def nonempty(self, threshold: float = 0.0) -> torch.Tensor: - """ - Find boxes that are non-empty. - A box is considered empty, if either of its side is no larger than threshold. - - Returns: - Tensor: - a binary vector which represents whether each box is empty - (False) or non-empty (True). - """ - box = self.tensor - widths = box[:, 2] - box[:, 0] - heights = box[:, 3] - box[:, 1] - keep = (widths > threshold) & (heights > threshold) - return keep - - def __getitem__(self, item) -> "Boxes": - """ - Args: - item: int, slice, or a BoolTensor - - Returns: - Boxes: Create a new :class:`Boxes` by indexing. - - The following usage are allowed: - - 1. `new_boxes = boxes[3]`: return a `Boxes` which contains only one box. - 2. `new_boxes = boxes[2:10]`: return a slice of boxes. - 3. `new_boxes = boxes[vector]`, where vector is a torch.BoolTensor - with `length = len(boxes)`. Nonzero elements in the vector will be selected. - - Note that the returned Boxes might share storage with this Boxes, - subject to Pytorch's indexing semantics. - """ - if isinstance(item, int): - return Boxes(self.tensor[item].view(1, -1)) - b = self.tensor[item] - assert b.dim() == 2, "Indexing on Boxes with {} failed to return a matrix!".format(item) - return Boxes(b) - - def __len__(self) -> int: - return self.tensor.shape[0] - - def __repr__(self) -> str: - return "Boxes(" + str(self.tensor) + ")" - - def inside_box(self, box_size: Tuple[int, int], boundary_threshold: int = 0) -> torch.Tensor: - """ - Args: - box_size (height, width): Size of the reference box. - boundary_threshold (int): Boxes that extend beyond the reference box - boundary by more than boundary_threshold are considered "outside". - - Returns: - a binary vector, indicating whether each box is inside the reference box. - """ - height, width = box_size - inds_inside = ( - (self.tensor[..., 0] >= -boundary_threshold) - & (self.tensor[..., 1] >= -boundary_threshold) - & (self.tensor[..., 2] < width + boundary_threshold) - & (self.tensor[..., 3] < height + boundary_threshold) - ) - return inds_inside - - def get_centers(self) -> torch.Tensor: - """ - Returns: - The box centers in a Nx2 array of (x, y). 
- """ - return (self.tensor[:, :2] + self.tensor[:, 2:]) / 2 - - def scale(self, scale_x: float, scale_y: float) -> None: - """ - Scale the box with horizontal and vertical scaling factors - """ - self.tensor[:, 0::2] *= scale_x - self.tensor[:, 1::2] *= scale_y - - @classmethod - def cat(cls, boxes_list: List["Boxes"]) -> "Boxes": - """ - Concatenates a list of Boxes into a single Boxes - - Arguments: - boxes_list (list[Boxes]) - - Returns: - Boxes: the concatenated Boxes - """ - assert isinstance(boxes_list, (list, tuple)) - if len(boxes_list) == 0: - return cls(torch.empty(0)) - assert all([isinstance(box, Boxes) for box in boxes_list]) - - # use torch.cat (v.s. layers.cat) so the returned boxes never share storage with input - cat_boxes = cls(torch.cat([b.tensor for b in boxes_list], dim=0)) - return cat_boxes - - @property - def device(self) -> device: - return self.tensor.device - - # type "Iterator[torch.Tensor]", yield, and iter() not supported by torchscript - # https://github.com/pytorch/pytorch/issues/18627 - @torch.jit.unused - def __iter__(self): - """ - Yield a box as a Tensor of shape (4,) at a time. - """ - yield from self.tensor - - -def pairwise_intersection(boxes1: Boxes, boxes2: Boxes) -> torch.Tensor: - """ - Given two lists of boxes of size N and M, - compute the intersection area between __all__ N x M pairs of boxes. - The box order must be (xmin, ymin, xmax, ymax) - - Args: - boxes1,boxes2 (Boxes): two `Boxes`. Contains N & M boxes, respectively. - - Returns: - Tensor: intersection, sized [N,M]. - """ - boxes1, boxes2 = boxes1.tensor, boxes2.tensor - width_height = torch.min(boxes1[:, None, 2:], boxes2[:, 2:]) - torch.max( - boxes1[:, None, :2], boxes2[:, :2] - ) # [N,M,2] - - width_height.clamp_(min=0) # [N,M,2] - intersection = width_height.prod(dim=2) # [N,M] - return intersection - - -# implementation from https://github.com/kuangliu/torchcv/blob/master/torchcv/utils/box.py -# with slight modifications -def pairwise_iou(boxes1: Boxes, boxes2: Boxes) -> torch.Tensor: - """ - Given two lists of boxes of size N and M, compute the IoU - (intersection over union) between **all** N x M pairs of boxes. - The box order must be (xmin, ymin, xmax, ymax). - - Args: - boxes1,boxes2 (Boxes): two `Boxes`. Contains N & M boxes, respectively. - - Returns: - Tensor: IoU, sized [N,M]. - """ - area1 = boxes1.area() # [N] - area2 = boxes2.area() # [M] - inter = pairwise_intersection(boxes1, boxes2) - - # handle empty boxes - iou = torch.where( - inter > 0, - inter / (area1[:, None] + area2 - inter), - torch.zeros(1, dtype=inter.dtype, device=inter.device), - ) - return iou - - -def pairwise_ioa(boxes1: Boxes, boxes2: Boxes) -> torch.Tensor: - """ - Similar to :func:`pariwise_iou` but compute the IoA (intersection over boxes2 area). - - Args: - boxes1,boxes2 (Boxes): two `Boxes`. Contains N & M boxes, respectively. - - Returns: - Tensor: IoA, sized [N,M]. - """ - area2 = boxes2.area() # [M] - inter = pairwise_intersection(boxes1, boxes2) - - # handle empty boxes - ioa = torch.where( - inter > 0, inter / area2, torch.zeros(1, dtype=inter.dtype, device=inter.device) - ) - return ioa - - -def pairwise_point_box_distance(points: torch.Tensor, boxes: Boxes): - """ - Pairwise distance between N points and M boxes. The distance between a - point and a box is represented by the distance from the point to 4 edges - of the box. Distances are all positive when the point is inside the box. - - Args: - points: Nx2 coordinates. 
Each row is (x, y) - boxes: M boxes - - Returns: - Tensor: distances of size (N, M, 4). The 4 values are distances from - the point to the left, top, right, bottom of the box. - """ - x, y = points.unsqueeze(dim=2).unbind(dim=1) # (N, 1) - x0, y0, x1, y1 = boxes.tensor.unsqueeze(dim=0).unbind(dim=2) # (1, M) - return torch.stack([x - x0, y - y0, x1 - x, y1 - y], dim=2) - - -def matched_pairwise_iou(boxes1: Boxes, boxes2: Boxes) -> torch.Tensor: - """ - Compute pairwise intersection over union (IOU) of two sets of matched - boxes that have the same number of boxes. - Similar to :func:`pairwise_iou`, but computes only diagonal elements of the matrix. - - Args: - boxes1 (Boxes): bounding boxes, sized [N,4]. - boxes2 (Boxes): same length as boxes1 - Returns: - Tensor: iou, sized [N]. - """ - assert len(boxes1) == len( - boxes2 - ), "boxlists should have the same" "number of entries, got {}, {}".format( - len(boxes1), len(boxes2) - ) - area1 = boxes1.area() # [N] - area2 = boxes2.area() # [N] - box1, box2 = boxes1.tensor, boxes2.tensor - lt = torch.max(box1[:, :2], box2[:, :2]) # [N,2] - rb = torch.min(box1[:, 2:], box2[:, 2:]) # [N,2] - wh = (rb - lt).clamp(min=0) # [N,2] - inter = wh[:, 0] * wh[:, 1] # [N] - iou = inter / (area1 + area2 - inter) # [N] - return iou diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/losses/accuracy.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/losses/accuracy.py deleted file mode 100644 index c0fd2e7e74a0f721c4a814c09d6e453e5956bb38..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/losses/accuracy.py +++ /dev/null @@ -1,78 +0,0 @@ -import torch.nn as nn - - -def accuracy(pred, target, topk=1, thresh=None): - """Calculate accuracy according to the prediction and target. - - Args: - pred (torch.Tensor): The model prediction, shape (N, num_class, ...) - target (torch.Tensor): The target of each prediction, shape (N, , ...) - topk (int | tuple[int], optional): If the predictions in ``topk`` - matches the target, the predictions will be regarded as - correct ones. Defaults to 1. - thresh (float, optional): If not None, predictions with scores under - this threshold are considered incorrect. Default to None. - - Returns: - float | tuple[float]: If the input ``topk`` is a single integer, - the function will return a single float as accuracy. If - ``topk`` is a tuple containing multiple integers, the - function will return a tuple containing accuracies of - each ``topk`` number. - """ - assert isinstance(topk, (int, tuple)) - if isinstance(topk, int): - topk = (topk, ) - return_single = True - else: - return_single = False - - maxk = max(topk) - if pred.size(0) == 0: - accu = [pred.new_tensor(0.) for i in range(len(topk))] - return accu[0] if return_single else accu - assert pred.ndim == target.ndim + 1 - assert pred.size(0) == target.size(0) - assert maxk <= pred.size(1), \ - f'maxk {maxk} exceeds pred dimension {pred.size(1)}' - pred_value, pred_label = pred.topk(maxk, dim=1) - # transpose to shape (maxk, N, ...) 
- pred_label = pred_label.transpose(0, 1) - correct = pred_label.eq(target.unsqueeze(0).expand_as(pred_label)) - if thresh is not None: - # Only prediction values larger than thresh are counted as correct - correct = correct & (pred_value > thresh).t() - res = [] - for k in topk: - correct_k = correct[:k].reshape(-1).float().sum(0, keepdim=True) - res.append(correct_k.mul_(100.0 / target.numel())) - return res[0] if return_single else res - - -class Accuracy(nn.Module): - """Accuracy calculation module.""" - - def __init__(self, topk=(1, ), thresh=None): - """Module to calculate the accuracy. - - Args: - topk (tuple, optional): The criterion used to calculate the - accuracy. Defaults to (1,). - thresh (float, optional): If not None, predictions with scores - under this threshold are considered incorrect. Default to None. - """ - super().__init__() - self.topk = topk - self.thresh = thresh - - def forward(self, pred, target): - """Forward function to calculate accuracy. - - Args: - pred (torch.Tensor): Prediction of models. - target (torch.Tensor): Target for each prediction. - - Returns: - tuple[float]: The accuracies under different topk criterions. - """ - return accuracy(pred, target, self.topk, self.thresh) diff --git a/spaces/Tej3/DepthEstimation/models/densenet_v2.py b/spaces/Tej3/DepthEstimation/models/densenet_v2.py deleted file mode 100644 index f788f8b84a659adb726a0f3b5db289cc88ab54fc..0000000000000000000000000000000000000000 --- a/spaces/Tej3/DepthEstimation/models/densenet_v2.py +++ /dev/null @@ -1,179 +0,0 @@ -import torch -import torch.nn as nn -import torchvision -import torch.nn.functional as F -from torchinfo import summary -from math import sqrt -# torch.autograd.set_detect_anomaly(True) - -class attention_gate(nn.Module): - def __init__(self, in_c, out_c): - super().__init__() - - self.Wg = nn.Sequential( - nn.Conv2d(in_c[0], out_c, kernel_size=1, padding=0), - nn.BatchNorm2d(out_c) - ) - self.Ws = nn.Sequential( - nn.Conv2d(in_c[1], out_c, kernel_size=1, padding=0), - nn.BatchNorm2d(out_c) - ) - self.relu = nn.ReLU(inplace=True) - self.output = nn.Sequential( - nn.Conv2d(out_c, out_c, kernel_size=1, padding=0), - nn.Sigmoid() - ) - - def forward(self, g, s): - Wg = self.Wg(g) - Ws = self.Ws(s) - out = self.relu(Wg + Ws) - out = self.output(out) - return out - -class Conv_Block(nn.Module): - def __init__(self, in_c, out_c, activation_fn=nn.LeakyReLU): - super().__init__() - - self.conv1 = nn.Conv2d(in_c, out_c, kernel_size=3, padding=1) - self.bn1 = nn.BatchNorm2d(out_c) - - self.conv2 = nn.Conv2d(out_c, out_c, kernel_size=3, padding=1) - self.bn2 = nn.BatchNorm2d(out_c) - - self.activfn = activation_fn() - - self.dropout = nn.Dropout(0.25) - - def forward(self, inputs): - - x = self.conv1(inputs) - x = self.bn1(x) - x = self.activfn(x) - # x = self.dropout(x) - - x = self.conv2(x) - x = self.bn2(x) - x = self.activfn(x) - # x = self.dropout(x) - - return x - -class Encoder_Block(nn.Module): - def __init__(self, in_c, out_c): - super().__init__() - - self.conv = Conv_Block(in_c, out_c) - self.pool = nn.MaxPool2d((2, 2)) - - def forward(self, inputs): - x = self.conv(inputs) - p = self.pool(x) - - return x, p - -class Enc_Dec_Model(nn.Module): - def __init__(self): - super(Enc_Dec_Model, self).__init__() - self.encoder1 = Encoder_Block(3, 64) - self.encoder2 = Encoder_Block(64, 128) - self.encoder3 = Encoder_Block(128, 256) - """ Bottleneck """ - self.bottleneck = Conv_Block(256, 512) - - """ Decoder """ - self.d1 = Decoder_Block([512, 256], 256) - self.d2 = 
Decoder_Block([256, 128], 128) - self.d3 = Decoder_Block([128, 64], 64) - - """ Classifier """ - self.outputs = nn.Conv2d(64, 1, kernel_size=1, padding=0) - - def forward(self, x): - - """ Encoder """ - s1, p1 = self.encoder1(x) - s2, p2 = self.encoder2(p1) - s3, p3 = self.encoder3(p2) - - """ Bottleneck """ - b = self.bottleneck(p3) - - """ Decoder """ - d1 = self.d1(b, s3) - d2 = self.d2(d1, s2) - d3 = self.d3(d2, s1) - - """ Classifier """ - outputs = self.outputs(d3) - out_depth = torch.sigmoid(outputs) - return out_depth - -class Decoder(nn.Module): - def __init__(self): - super(Decoder, self).__init__() - - """ Decoder """ - self.d1 = Decoder_Block(1920, 2048) - self.d2 = Decoder_Block(2048, 1024) - self.d3 = Decoder_Block(1024, 512) - self.d4 = Decoder_Block(512, 256) - self.d5 = Decoder_Block(256, 128) - # self.d6 = Decoder_Block(128, 64) - - """ Classifier """ - self.outputs = nn.Conv2d(128, 1, kernel_size=1, padding=0) - - def forward(self, x): - """ Decoder """ - # b = self.MHA2(b) - x = self.d1(x) - x = self.d2(x) - x = self.d3(x) - x = self.d4(x) - x = self.d5(x) - # x = self.d6(x) - - """ Classifier """ - outputs = self.outputs(x) - out_depth = torch.sigmoid(outputs) - return out_depth - -class Decoder_Block(nn.Module): - def __init__(self, in_c, out_c, activation_fn=nn.LeakyReLU): - super().__init__() - - self.up = nn.ConvTranspose2d(in_c, out_c, kernel_size=2, stride=2, padding=0) - self.conv = Conv_Block(out_c, out_c, activation_fn) - - def forward(self, inputs): - x = self.up(inputs) - x = self.conv(x) - - return x - - -class Densenet(nn.Module): - def __init__(self, max_depth) -> None: - super().__init__() - self.densenet = torchvision.models.densenet201(weights=torchvision.models.DenseNet201_Weights.DEFAULT) - for param in self.densenet.features.parameters(): - param.requires_grad = False - - self.densenet = torch.nn.Sequential(*(list(self.densenet.children())[:-1])) - self.decoder = Decoder() - # self.enc_dec_model = Enc_Dec_Model() - self.max_depth = max_depth - - def forward(self, x): - x = self.densenet(x) - x = self.decoder(x) - # x = self.enc_dec_model(x) - x = x*self.max_depth - # print(x.shape) - return {'pred_d':x} - -if __name__ == "__main__": - model = Densenet(max_depth=10).cuda() - print(model) - summary(model, input_size=(64,3,448,448)) \ No newline at end of file diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/layers/blocks.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/layers/blocks.py deleted file mode 100644 index 1995a4bf7339e8deb7eaaffda4f819dda55e7ac7..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/layers/blocks.py +++ /dev/null @@ -1,111 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. - -import fvcore.nn.weight_init as weight_init -from torch import nn - -from .batch_norm import FrozenBatchNorm2d, get_norm -from .wrappers import Conv2d - - -""" -CNN building blocks. -""" - - -class CNNBlockBase(nn.Module): - """ - A CNN block is assumed to have input channels, output channels and a stride. - The input and output of `forward()` method must be NCHW tensors. - The method can perform arbitrary computation but must match the given - channels and stride specification. 
- - Attribute: - in_channels (int): - out_channels (int): - stride (int): - """ - - def __init__(self, in_channels, out_channels, stride): - """ - The `__init__` method of any subclass should also contain these arguments. - - Args: - in_channels (int): - out_channels (int): - stride (int): - """ - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.stride = stride - - def freeze(self): - """ - Make this block not trainable. - This method sets all parameters to `requires_grad=False`, - and convert all BatchNorm layers to FrozenBatchNorm - - Returns: - the block itself - """ - for p in self.parameters(): - p.requires_grad = False - FrozenBatchNorm2d.convert_frozen_batchnorm(self) - return self - - -class DepthwiseSeparableConv2d(nn.Module): - """ - A kxk depthwise convolution + a 1x1 convolution. - - In :paper:`xception`, norm & activation are applied on the second conv. - :paper:`mobilenet` uses norm & activation on both convs. - """ - - def __init__( - self, - in_channels, - out_channels, - kernel_size=3, - padding=1, - dilation=1, - *, - norm1=None, - activation1=None, - norm2=None, - activation2=None, - ): - """ - Args: - norm1, norm2 (str or callable): normalization for the two conv layers. - activation1, activation2 (callable(Tensor) -> Tensor): activation - function for the two conv layers. - """ - super().__init__() - self.depthwise = Conv2d( - in_channels, - in_channels, - kernel_size=kernel_size, - padding=padding, - dilation=dilation, - groups=in_channels, - bias=not norm1, - norm=get_norm(norm1, in_channels), - activation=activation1, - ) - self.pointwise = Conv2d( - in_channels, - out_channels, - kernel_size=1, - bias=not norm2, - norm=get_norm(norm2, out_channels), - activation=activation2, - ) - - # default initialization - weight_init.c2_msra_fill(self.depthwise) - weight_init.c2_msra_fill(self.pointwise) - - def forward(self, x): - return self.pointwise(self.depthwise(x)) diff --git a/spaces/UVA-MSBA/Employee_Turnover_Ex/README.md b/spaces/UVA-MSBA/Employee_Turnover_Ex/README.md deleted file mode 100644 index 1f715e3bd9f5eb1c8f4b0418d5360191b55086ca..0000000000000000000000000000000000000000 --- a/spaces/UVA-MSBA/Employee_Turnover_Ex/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Employee Turnover Ex -emoji: 🦀 -colorFrom: red -colorTo: yellow -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/UserXTheUnknown/stablediffusion-infinity/PyPatchMatch/patch_match.py b/spaces/UserXTheUnknown/stablediffusion-infinity/PyPatchMatch/patch_match.py deleted file mode 100644 index 14febe43c78f49120c8be9f02941c3c1f8fdc3b1..0000000000000000000000000000000000000000 --- a/spaces/UserXTheUnknown/stablediffusion-infinity/PyPatchMatch/patch_match.py +++ /dev/null @@ -1,263 +0,0 @@ -#! /usr/bin/env python3 -# -*- coding: utf-8 -*- -# File : patch_match.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 01/09/2020 -# -# Distributed under terms of the MIT license. - -import ctypes -import os.path as osp -from typing import Optional, Union - -import numpy as np -from PIL import Image - - -import os -if os.name!="nt": - # Otherwise, fall back to the subprocess. 
- import subprocess - print('Compiling and loading c extensions from "{}".'.format(osp.realpath(osp.dirname(__file__)))) - # subprocess.check_call(['./travis.sh'], cwd=osp.dirname(__file__)) - subprocess.check_call("make clean && make", cwd=osp.dirname(__file__), shell=True) - - -__all__ = ['set_random_seed', 'set_verbose', 'inpaint', 'inpaint_regularity'] - - -class CShapeT(ctypes.Structure): - _fields_ = [ - ('width', ctypes.c_int), - ('height', ctypes.c_int), - ('channels', ctypes.c_int), - ] - - -class CMatT(ctypes.Structure): - _fields_ = [ - ('data_ptr', ctypes.c_void_p), - ('shape', CShapeT), - ('dtype', ctypes.c_int) - ] - -import tempfile -from urllib.request import urlopen, Request -import shutil -from pathlib import Path -from tqdm import tqdm - -def download_url_to_file(url, dst, hash_prefix=None, progress=True): - r"""Download object at the given URL to a local path. - - Args: - url (string): URL of the object to download - dst (string): Full path where object will be saved, e.g. ``/tmp/temporary_file`` - hash_prefix (string, optional): If not None, the SHA256 downloaded file should start with ``hash_prefix``. - Default: None - progress (bool, optional): whether or not to display a progress bar to stderr - Default: True - https://pytorch.org/docs/stable/_modules/torch/hub.html#load_state_dict_from_url - """ - file_size = None - req = Request(url) - u = urlopen(req) - meta = u.info() - if hasattr(meta, 'getheaders'): - content_length = meta.getheaders("Content-Length") - else: - content_length = meta.get_all("Content-Length") - if content_length is not None and len(content_length) > 0: - file_size = int(content_length[0]) - - # We deliberately save it in a temp file and move it after - # download is complete. This prevents a local working checkpoint - # being overridden by a broken download. 
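-    # NamedTemporaryFile(delete=False) lets us close the handle before shutil.move
-    # puts the finished file in place; the finally block removes the temp file if
-    # the download fails part-way.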
- dst = os.path.expanduser(dst) - dst_dir = os.path.dirname(dst) - f = tempfile.NamedTemporaryFile(delete=False, dir=dst_dir) - - try: - with tqdm(total=file_size, disable=not progress, - unit='B', unit_scale=True, unit_divisor=1024) as pbar: - while True: - buffer = u.read(8192) - if len(buffer) == 0: - break - f.write(buffer) - pbar.update(len(buffer)) - - f.close() - shutil.move(f.name, dst) - finally: - f.close() - if os.path.exists(f.name): - os.remove(f.name) - -if os.name!="nt": - PMLIB = ctypes.CDLL(osp.join(osp.dirname(__file__), 'libpatchmatch.so')) -else: - if not os.path.exists(osp.join(osp.dirname(__file__), 'libpatchmatch.dll')): - download_url_to_file(url="https://github.com/lkwq007/PyPatchMatch/releases/download/v0.1/libpatchmatch.dll",dst=osp.join(osp.dirname(__file__), 'libpatchmatch.dll')) - if not os.path.exists(osp.join(osp.dirname(__file__), 'opencv_world460.dll')): - download_url_to_file(url="https://github.com/lkwq007/PyPatchMatch/releases/download/v0.1/opencv_world460.dll",dst=osp.join(osp.dirname(__file__), 'opencv_world460.dll')) - if not os.path.exists(osp.join(osp.dirname(__file__), 'libpatchmatch.dll')): - print("[Dependency Missing] Please download https://github.com/lkwq007/PyPatchMatch/releases/download/v0.1/libpatchmatch.dll and put it into the PyPatchMatch folder") - if not os.path.exists(osp.join(osp.dirname(__file__), 'opencv_world460.dll')): - print("[Dependency Missing] Please download https://github.com/lkwq007/PyPatchMatch/releases/download/v0.1/opencv_world460.dll and put it into the PyPatchMatch folder") - PMLIB = ctypes.CDLL(osp.join(osp.dirname(__file__), 'libpatchmatch.dll')) - -PMLIB.PM_set_random_seed.argtypes = [ctypes.c_uint] -PMLIB.PM_set_verbose.argtypes = [ctypes.c_int] -PMLIB.PM_free_pymat.argtypes = [CMatT] -PMLIB.PM_inpaint.argtypes = [CMatT, CMatT, ctypes.c_int] -PMLIB.PM_inpaint.restype = CMatT -PMLIB.PM_inpaint_regularity.argtypes = [CMatT, CMatT, CMatT, ctypes.c_int, ctypes.c_float] -PMLIB.PM_inpaint_regularity.restype = CMatT -PMLIB.PM_inpaint2.argtypes = [CMatT, CMatT, CMatT, ctypes.c_int] -PMLIB.PM_inpaint2.restype = CMatT -PMLIB.PM_inpaint2_regularity.argtypes = [CMatT, CMatT, CMatT, CMatT, ctypes.c_int, ctypes.c_float] -PMLIB.PM_inpaint2_regularity.restype = CMatT - - -def set_random_seed(seed: int): - PMLIB.PM_set_random_seed(ctypes.c_uint(seed)) - - -def set_verbose(verbose: bool): - PMLIB.PM_set_verbose(ctypes.c_int(verbose)) - - -def inpaint( - image: Union[np.ndarray, Image.Image], - mask: Optional[Union[np.ndarray, Image.Image]] = None, - *, - global_mask: Optional[Union[np.ndarray, Image.Image]] = None, - patch_size: int = 15 -) -> np.ndarray: - """ - PatchMatch based inpainting proposed in: - - PatchMatch : A Randomized Correspondence Algorithm for Structural Image Editing - C.Barnes, E.Shechtman, A.Finkelstein and Dan B.Goldman - SIGGRAPH 2009 - - Args: - image (Union[np.ndarray, Image.Image]): the input image, should be 3-channel RGB/BGR. - mask (Union[np.array, Image.Image], optional): the mask of the hole(s) to be filled, should be 1-channel. - If not provided (None), the algorithm will treat all purely white pixels as the holes (255, 255, 255). - global_mask (Union[np.array, Image.Image], optional): the target mask of the output image. - patch_size (int): the patch size for the inpainting algorithm. - - Return: - result (np.ndarray): the repaired image, of the same size as the input image. 
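-
-    Example (a minimal usage sketch; the file names are hypothetical):
-
-        >>> from PIL import Image
-        >>> img = Image.open('photo_with_holes.png').convert('RGB')
-        >>> repaired = inpaint(img)  # pure-white pixels are treated as the holes
-        >>> Image.fromarray(repaired).save('repaired.png')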
- """ - - if isinstance(image, Image.Image): - image = np.array(image) - image = np.ascontiguousarray(image) - assert image.ndim == 3 and image.shape[2] == 3 and image.dtype == 'uint8' - - if mask is None: - mask = (image == (255, 255, 255)).all(axis=2, keepdims=True).astype('uint8') - mask = np.ascontiguousarray(mask) - else: - mask = _canonize_mask_array(mask) - - if global_mask is None: - ret_pymat = PMLIB.PM_inpaint(np_to_pymat(image), np_to_pymat(mask), ctypes.c_int(patch_size)) - else: - global_mask = _canonize_mask_array(global_mask) - ret_pymat = PMLIB.PM_inpaint2(np_to_pymat(image), np_to_pymat(mask), np_to_pymat(global_mask), ctypes.c_int(patch_size)) - - ret_npmat = pymat_to_np(ret_pymat) - PMLIB.PM_free_pymat(ret_pymat) - - return ret_npmat - - -def inpaint_regularity( - image: Union[np.ndarray, Image.Image], - mask: Optional[Union[np.ndarray, Image.Image]], - ijmap: np.ndarray, - *, - global_mask: Optional[Union[np.ndarray, Image.Image]] = None, - patch_size: int = 15, guide_weight: float = 0.25 -) -> np.ndarray: - if isinstance(image, Image.Image): - image = np.array(image) - image = np.ascontiguousarray(image) - - assert isinstance(ijmap, np.ndarray) and ijmap.ndim == 3 and ijmap.shape[2] == 3 and ijmap.dtype == 'float32' - ijmap = np.ascontiguousarray(ijmap) - - assert image.ndim == 3 and image.shape[2] == 3 and image.dtype == 'uint8' - if mask is None: - mask = (image == (255, 255, 255)).all(axis=2, keepdims=True).astype('uint8') - mask = np.ascontiguousarray(mask) - else: - mask = _canonize_mask_array(mask) - - - if global_mask is None: - ret_pymat = PMLIB.PM_inpaint_regularity(np_to_pymat(image), np_to_pymat(mask), np_to_pymat(ijmap), ctypes.c_int(patch_size), ctypes.c_float(guide_weight)) - else: - global_mask = _canonize_mask_array(global_mask) - ret_pymat = PMLIB.PM_inpaint2_regularity(np_to_pymat(image), np_to_pymat(mask), np_to_pymat(global_mask), np_to_pymat(ijmap), ctypes.c_int(patch_size), ctypes.c_float(guide_weight)) - - ret_npmat = pymat_to_np(ret_pymat) - PMLIB.PM_free_pymat(ret_pymat) - - return ret_npmat - - -def _canonize_mask_array(mask): - if isinstance(mask, Image.Image): - mask = np.array(mask) - if mask.ndim == 2 and mask.dtype == 'uint8': - mask = mask[..., np.newaxis] - assert mask.ndim == 3 and mask.shape[2] == 1 and mask.dtype == 'uint8' - return np.ascontiguousarray(mask) - - -dtype_pymat_to_ctypes = [ - ctypes.c_uint8, - ctypes.c_int8, - ctypes.c_uint16, - ctypes.c_int16, - ctypes.c_int32, - ctypes.c_float, - ctypes.c_double, -] - - -dtype_np_to_pymat = { - 'uint8': 0, - 'int8': 1, - 'uint16': 2, - 'int16': 3, - 'int32': 4, - 'float32': 5, - 'float64': 6, -} - - -def np_to_pymat(npmat): - assert npmat.ndim == 3 - return CMatT( - ctypes.cast(npmat.ctypes.data, ctypes.c_void_p), - CShapeT(npmat.shape[1], npmat.shape[0], npmat.shape[2]), - dtype_np_to_pymat[str(npmat.dtype)] - ) - - -def pymat_to_np(pymat): - npmat = np.ctypeslib.as_array( - ctypes.cast(pymat.data_ptr, ctypes.POINTER(dtype_pymat_to_ctypes[pymat.dtype])), - (pymat.shape.height, pymat.shape.width, pymat.shape.channels) - ) - ret = np.empty(npmat.shape, npmat.dtype) - ret[:] = npmat - return ret - diff --git a/spaces/VIPLab/Track-Anything/tracker/model/network.py b/spaces/VIPLab/Track-Anything/tracker/model/network.py deleted file mode 100644 index c5f179db17ac424ffee2951ade3934e08cd6276a..0000000000000000000000000000000000000000 --- a/spaces/VIPLab/Track-Anything/tracker/model/network.py +++ /dev/null @@ -1,198 +0,0 @@ -""" -This file defines XMem, the highest level nn.Module 
interface -During training, it is used by trainer.py -During evaluation, it is used by inference_core.py - -It further depends on modules.py which gives more detailed implementations of sub-modules -""" - -import torch -import torch.nn as nn - -from model.aggregate import aggregate -from model.modules import * -from model.memory_util import * - - -class XMem(nn.Module): - def __init__(self, config, model_path=None, map_location=None): - """ - model_path/map_location are used in evaluation only - map_location is for converting models saved in cuda to cpu - """ - super().__init__() - model_weights = self.init_hyperparameters(config, model_path, map_location) - - self.single_object = config.get('single_object', False) - print(f'Single object mode: {self.single_object}') - - self.key_encoder = KeyEncoder() - self.value_encoder = ValueEncoder(self.value_dim, self.hidden_dim, self.single_object) - - # Projection from f16 feature space to key/value space - self.key_proj = KeyProjection(1024, self.key_dim) - - self.decoder = Decoder(self.value_dim, self.hidden_dim) - - if model_weights is not None: - self.load_weights(model_weights, init_as_zero_if_needed=True) - - def encode_key(self, frame, need_sk=True, need_ek=True): - # Determine input shape - if len(frame.shape) == 5: - # shape is b*t*c*h*w - need_reshape = True - b, t = frame.shape[:2] - # flatten so that we can feed them into a 2D CNN - frame = frame.flatten(start_dim=0, end_dim=1) - elif len(frame.shape) == 4: - # shape is b*c*h*w - need_reshape = False - else: - raise NotImplementedError - - f16, f8, f4 = self.key_encoder(frame) - key, shrinkage, selection = self.key_proj(f16, need_sk, need_ek) - - if need_reshape: - # B*C*T*H*W - key = key.view(b, t, *key.shape[-3:]).transpose(1, 2).contiguous() - if shrinkage is not None: - shrinkage = shrinkage.view(b, t, *shrinkage.shape[-3:]).transpose(1, 2).contiguous() - if selection is not None: - selection = selection.view(b, t, *selection.shape[-3:]).transpose(1, 2).contiguous() - - # B*T*C*H*W - f16 = f16.view(b, t, *f16.shape[-3:]) - f8 = f8.view(b, t, *f8.shape[-3:]) - f4 = f4.view(b, t, *f4.shape[-3:]) - - return key, shrinkage, selection, f16, f8, f4 - - def encode_value(self, frame, image_feat_f16, h16, masks, is_deep_update=True): - num_objects = masks.shape[1] - if num_objects != 1: - others = torch.cat([ - torch.sum( - masks[:, [j for j in range(num_objects) if i!=j]] - , dim=1, keepdim=True) - for i in range(num_objects)], 1) - else: - others = torch.zeros_like(masks) - - g16, h16 = self.value_encoder(frame, image_feat_f16, h16, masks, others, is_deep_update) - - return g16, h16 - - # Used in training only. 
- # This step is replaced by MemoryManager in test time - def read_memory(self, query_key, query_selection, memory_key, - memory_shrinkage, memory_value): - """ - query_key : B * CK * H * W - query_selection : B * CK * H * W - memory_key : B * CK * T * H * W - memory_shrinkage: B * 1 * T * H * W - memory_value : B * num_objects * CV * T * H * W - """ - batch_size, num_objects = memory_value.shape[:2] - memory_value = memory_value.flatten(start_dim=1, end_dim=2) - - affinity = get_affinity(memory_key, memory_shrinkage, query_key, query_selection) - memory = readout(affinity, memory_value) - memory = memory.view(batch_size, num_objects, self.value_dim, *memory.shape[-2:]) - - return memory - - def segment(self, multi_scale_features, memory_readout, - hidden_state, selector=None, h_out=True, strip_bg=True): - - hidden_state, logits = self.decoder(*multi_scale_features, hidden_state, memory_readout, h_out=h_out) - prob = torch.sigmoid(logits) - if selector is not None: - prob = prob * selector - - logits, prob = aggregate(prob, dim=1, return_logits=True) - if strip_bg: - # Strip away the background - prob = prob[:, 1:] - - return hidden_state, logits, prob - - def forward(self, mode, *args, **kwargs): - if mode == 'encode_key': - return self.encode_key(*args, **kwargs) - elif mode == 'encode_value': - return self.encode_value(*args, **kwargs) - elif mode == 'read_memory': - return self.read_memory(*args, **kwargs) - elif mode == 'segment': - return self.segment(*args, **kwargs) - else: - raise NotImplementedError - - def init_hyperparameters(self, config, model_path=None, map_location=None): - """ - Init three hyperparameters: key_dim, value_dim, and hidden_dim - If model_path is provided, we load these from the model weights - The actual parameters are then updated to the config in-place - - Otherwise we load it either from the config or default - """ - if model_path is not None: - # load the model and key/value/hidden dimensions with some hacks - # config is updated with the loaded parameters - model_weights = torch.load(model_path, map_location=map_location) - self.key_dim = model_weights['key_proj.key_proj.weight'].shape[0] - self.value_dim = model_weights['value_encoder.fuser.block2.conv2.weight'].shape[0] - self.disable_hidden = 'decoder.hidden_update.transform.weight' not in model_weights - if self.disable_hidden: - self.hidden_dim = 0 - else: - self.hidden_dim = model_weights['decoder.hidden_update.transform.weight'].shape[0]//3 - print(f'Hyperparameters read from the model weights: ' - f'C^k={self.key_dim}, C^v={self.value_dim}, C^h={self.hidden_dim}') - else: - model_weights = None - # load dimensions from config or default - if 'key_dim' not in config: - self.key_dim = 64 - print(f'key_dim not found in config. Set to default {self.key_dim}') - else: - self.key_dim = config['key_dim'] - - if 'value_dim' not in config: - self.value_dim = 512 - print(f'value_dim not found in config. Set to default {self.value_dim}') - else: - self.value_dim = config['value_dim'] - - if 'hidden_dim' not in config: - self.hidden_dim = 64 - print(f'hidden_dim not found in config. 
Set to default {self.hidden_dim}') - else: - self.hidden_dim = config['hidden_dim'] - - self.disable_hidden = (self.hidden_dim <= 0) - - config['key_dim'] = self.key_dim - config['value_dim'] = self.value_dim - config['hidden_dim'] = self.hidden_dim - - return model_weights - - def load_weights(self, src_dict, init_as_zero_if_needed=False): - # Maps SO weight (without other_mask) to MO weight (with other_mask) - for k in list(src_dict.keys()): - if k == 'value_encoder.conv1.weight': - if src_dict[k].shape[1] == 4: - print('Converting weights from single object to multiple objects.') - pads = torch.zeros((64,1,7,7), device=src_dict[k].device) - if not init_as_zero_if_needed: - print('Randomly initialized padding.') - nn.init.orthogonal_(pads) - else: - print('Zero-initialized padding.') - src_dict[k] = torch.cat([src_dict[k], pads], 1) - - self.load_state_dict(src_dict) diff --git a/spaces/VickyKira/NASAGPT/g4f/Provider/Providers/Forefront.py b/spaces/VickyKira/NASAGPT/g4f/Provider/Providers/Forefront.py deleted file mode 100644 index e7e89831cc4ec6dc37ea094d9828a7582e981ff1..0000000000000000000000000000000000000000 --- a/spaces/VickyKira/NASAGPT/g4f/Provider/Providers/Forefront.py +++ /dev/null @@ -1,30 +0,0 @@ -import os -import json -import requests -from ...typing import sha256, Dict, get_type_hints - -url = 'https://forefront.com' -model = ['gpt-3.5-turbo'] -supports_stream = True -needs_auth = False - -def _create_completion(model: str, messages: list, stream: bool, **kwargs): - json_data = { - 'text': messages[-1]['content'], - 'action': 'noauth', - 'id': '', - 'parentId': '', - 'workspaceId': '', - 'messagePersona': '607e41fe-95be-497e-8e97-010a59b2e2c0', - 'model': 'gpt-4', - 'messages': messages[:-1] if len(messages) > 1 else [], - 'internetMode': 'auto' - } - response = requests.post( 'https://streaming.tenant-forefront-default.knative.chi.coreweave.com/free-chat', - json=json_data, stream=True) - for token in response.iter_lines(): - if b'delta' in token: - token = json.loads(token.decode().split('data: ')[1])['delta'] - yield (token) -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) \ No newline at end of file diff --git a/spaces/Widium/Style-Recreation/functions/extract.py b/spaces/Widium/Style-Recreation/functions/extract.py deleted file mode 100644 index 7154ee6cacaf8cca6e66a6fc12d50cf67d851e2d..0000000000000000000000000000000000000000 --- a/spaces/Widium/Style-Recreation/functions/extract.py +++ /dev/null @@ -1,56 +0,0 @@ -# *************************************************************************** # -# # -# extract.py # -# # -# By: Widium # -# Github : https://github.com/widium # -# # -# Created: 2023/05/05 16:10:17 by Widium # -# Updated: 2023/05/05 16:10:17 by Widium # -# # -# **************************************************************************** # - -from typing import List -from keras import Model -from tensorflow import Tensor - -from .compute import gram_matrix -from .processing import preprocessing_img - -# ===================================================== # - -def extract_style(features_map)->List[Tensor]: - """ - Compute a Gram Matrix for each Feature Map and store in list - - Args: - features_map (List[Tensor]): List of Feature Map - - Returns: - List[Tensor]: List of Gram Matrix the same size of `features_map` - """ - Grams_style = list() - - for 
style in features_map: - Gram = gram_matrix(style) - Grams_style.append(Gram) - - return Grams_style - -# ===================================================== # - -def get_features_map(model : Model, img : Tensor)->list: - """ - Extract feature maps from the given image using the provided model. - - Args: - model (Model): The pre-trained Multi-Output VGG19 model. - img (Tensor): The input image tensor. - - Returns: - list: A list of feature maps extracted from the input image. - """ - process_img = preprocessing_img(img) - features_map = model(process_img) - - return (features_map) \ No newline at end of file diff --git a/spaces/WindVChen/INR-Harmon/datasets/build_INR_dataset.py b/spaces/WindVChen/INR-Harmon/datasets/build_INR_dataset.py deleted file mode 100644 index 141384f87bee9e4e4741dc87e6297046e05d9fe7..0000000000000000000000000000000000000000 --- a/spaces/WindVChen/INR-Harmon/datasets/build_INR_dataset.py +++ /dev/null @@ -1,36 +0,0 @@ -from utils import misc -from albumentations import Resize - - -class Implicit2DGenerator(object): - def __init__(self, opt, mode): - if mode == 'Train': - sidelength = opt.INR_input_size - elif mode == 'Val': - sidelength = opt.input_size - else: - raise NotImplementedError - - self.mode = mode - - self.size = sidelength - - if isinstance(sidelength, int): - sidelength = (sidelength, sidelength) - - self.mgrid = misc.get_mgrid(sidelength) - - self.transform = Resize(self.size, self.size) - - def generator(self, torch_transforms, composite_image, real_image, mask): - composite_image = torch_transforms(self.transform(image=composite_image)['image']) - real_image = torch_transforms(self.transform(image=real_image)['image']) - - fg_INR_RGB = composite_image.permute(1, 2, 0).contiguous().view(-1, 3) - fg_transfer_INR_RGB = real_image.permute(1, 2, 0).contiguous().view(-1, 3) - bg_INR_RGB = real_image.permute(1, 2, 0).contiguous().view(-1, 3) - - fg_INR_coordinates = self.mgrid - bg_INR_coordinates = self.mgrid - - return fg_INR_coordinates, bg_INR_coordinates, fg_INR_RGB, fg_transfer_INR_RGB, bg_INR_RGB diff --git a/spaces/Xenova/sponsorblock-ml/src/shared.py b/spaces/Xenova/sponsorblock-ml/src/shared.py deleted file mode 100644 index 03cb53c51fadc96763e897cafbf4dc1e611c3577..0000000000000000000000000000000000000000 --- a/spaces/Xenova/sponsorblock-ml/src/shared.py +++ /dev/null @@ -1,406 +0,0 @@ -from transformers.trainer_utils import get_last_checkpoint as glc -import os -from utils import re_findall -import logging -import sys -from datasets import load_dataset -import re -import gc -from time import time_ns -import random -import numpy as np -import torch -from typing import Optional -from dataclasses import dataclass, field -from enum import Enum - - -logging.basicConfig() -logger = logging.getLogger(__name__) - -# Setup logging -logging.basicConfig( - format='%(asctime)s - %(levelname)s - %(name)s - %(message)s', - datefmt='%m/%d/%Y %H:%M:%S', - handlers=[logging.StreamHandler(sys.stdout)], -) - -CATEGORIES = [None, 'SPONSOR', 'SELFPROMO', 'INTERACTION'] - -ACTION_OPTIONS = ['skip', 'mute', 'full'] - -CATGEGORY_OPTIONS = { - 'SPONSOR': 'Sponsor', - 'SELFPROMO': 'Self/unpaid promo', - 'INTERACTION': 'Interaction reminder', -} - -START_SEGMENT_TEMPLATE = 'START_{}_TOKEN' -END_SEGMENT_TEMPLATE = 'END_{}_TOKEN' - - -class CustomTokens(Enum): - EXTRACT_SEGMENTS_PREFIX = 'EXTRACT_SEGMENTS: ' - - # Preprocessing tokens - URL = 'URL_TOKEN' - HYPHENATED_URL = 'HYPHENATED_URL_TOKEN' - NUMBER_PERCENTAGE = 'NUMBER_PERCENTAGE_TOKEN' - NUMBER = 'NUMBER_TOKEN' - 
- SHORT_HYPHENATED = 'SHORT_HYPHENATED_TOKEN' - LONG_WORD = 'LONG_WORD_TOKEN' - - # Custom YouTube tokens - MUSIC = '[Music]' - APPLAUSE = '[Applause]' - LAUGHTER = '[Laughter]' - - PROFANITY = 'PROFANITY_TOKEN' - - # Segment tokens - NO_SEGMENT = 'NO_SEGMENT_TOKEN' - - START_SPONSOR = START_SEGMENT_TEMPLATE.format('SPONSOR') - END_SPONSOR = END_SEGMENT_TEMPLATE.format('SPONSOR') - - START_SELFPROMO = START_SEGMENT_TEMPLATE.format('SELFPROMO') - END_SELFPROMO = END_SEGMENT_TEMPLATE.format('SELFPROMO') - - START_INTERACTION = START_SEGMENT_TEMPLATE.format('INTERACTION') - END_INTERACTION = END_SEGMENT_TEMPLATE.format('INTERACTION') - - BETWEEN_SEGMENTS = 'BETWEEN_SEGMENTS_TOKEN' - - @classmethod - def custom_tokens(cls): - return [e.value for e in cls] - - @classmethod - def add_custom_tokens(cls, tokenizer): - tokenizer.add_tokens(cls.custom_tokens()) - - -_SEGMENT_START = START_SEGMENT_TEMPLATE.format(r'(?P\w+)') -_SEGMENT_END = END_SEGMENT_TEMPLATE.format(r'\w+') -SEGMENT_MATCH_RE = fr'{_SEGMENT_START}\s*(?P.*?)\s*(?:{_SEGMENT_END}|$)' - - -def extract_sponsor_matches_from_text(text): - if CustomTokens.NO_SEGMENT.value in text: - return [] - else: - return re_findall(SEGMENT_MATCH_RE, text) - - -def extract_sponsor_matches(texts): - return list(map(extract_sponsor_matches_from_text, texts)) - - -@dataclass -class DatasetArguments: - data_dir: Optional[str] = field( - default='data', - metadata={ - 'help': 'The directory which stores train, test and/or validation data.' - }, - ) - processed_file: Optional[str] = field( - default='segments.json', - metadata={ - 'help': 'Processed data file' - }, - ) - processed_database: Optional[str] = field( - default='processed_database.json', - metadata={ - 'help': 'Processed database file' - }, - ) - - overwrite_cache: bool = field( - default=False, metadata={'help': 'Overwrite the cached training and evaluation sets'} - ) - - dataset_cache_dir: Optional[str] = field( - default=None, - metadata={ - 'help': 'Where to store the cached datasets' - }, - ) - - train_file: Optional[str] = field( - default='train.json', metadata={'help': 'The input training data file (a jsonlines file).'} - ) - validation_file: Optional[str] = field( - default='valid.json', - metadata={ - 'help': 'An optional input evaluation data file to evaluate the metrics on (a jsonlines file).' - }, - ) - test_file: Optional[str] = field( - default='test.json', - metadata={ - 'help': 'An optional input test data file to evaluate the metrics on (a jsonlines file).' - }, - ) - - c_train_file: Optional[str] = field( - default='c_train.json', metadata={'help': 'The input training data file (a jsonlines file).'} - ) - c_validation_file: Optional[str] = field( - default='c_valid.json', - metadata={ - 'help': 'An optional input evaluation data file to evaluate the metrics on (a jsonlines file).' - }, - ) - c_test_file: Optional[str] = field( - default='c_test.json', - metadata={ - 'help': 'An optional input test data file to evaluate the metrics on (a jsonlines file).' - }, - ) - - def __post_init__(self): - if self.train_file is None or self.validation_file is None: - raise ValueError( - 'Need either a dataset name or a training/validation file.') - - else: - train_extension = self.train_file.split(".")[-1] - assert train_extension in [ - "csv", "json"], "`train_file` should be a csv or a json file." 
- validation_extension = self.validation_file.split(".")[-1] - assert ( - validation_extension == train_extension - ), "`validation_file` should have the same extension (csv or json) as `train_file`." - - -@dataclass -class OutputArguments: - - output_dir: str = field( - default='out', - metadata={ - 'help': 'The output directory where the model predictions and checkpoints will be written to and read from.' - }, - ) - checkpoint: Optional[str] = field( - default=None, - metadata={ - 'help': 'Choose the checkpoint/model to train from or test with. Defaults to the latest checkpoint found in `output_dir`.' - }, - ) - models_dir: str = field( - default='models', - metadata={ - 'help': 'The output directory where the model predictions and checkpoints will be written to and read from.' - }, - ) - # classifier_dir: str = field( - # default='out', - # metadata={ - # 'help': 'The output directory where the model predictions and checkpoints will be written to and read from.' - # }, - # ) - - -def seed_factory(): - return time_ns() % (2**32 - 1) - - -@dataclass -class GeneralArguments: - seed: Optional[int] = field(default_factory=seed_factory, metadata={ - 'help': 'Set seed for deterministic training and testing. By default, it uses the current time (results in essentially random results).' - }) - no_cuda: bool = field(default=False, metadata={ - 'help': 'Do not use CUDA even when it is available'}) - - def __post_init__(self): - random.seed(self.seed) - np.random.seed(self.seed) - torch.manual_seed(self.seed) - torch.cuda.manual_seed_all(self.seed) - - -def seconds_to_time(seconds, remove_leading_zeroes=False): - fractional = round(seconds % 1, 3) - fractional = '' if fractional == 0 else str(fractional)[1:] - h, remainder = divmod(abs(int(seconds)), 3600) - m, s = divmod(remainder, 60) - hms = f'{h:02}:{m:02}:{s:02}' - if remove_leading_zeroes: - hms = re.sub(r'^0(?:0:0?)?', '', hms) - return f"{'-' if seconds < 0 else ''}{hms}{fractional}" - - -def reset(): - torch.clear_autocast_cache() - torch.cuda.empty_cache() - gc.collect() - print(torch.cuda.memory_summary(device=None, abbreviated=False)) - - -def load_datasets(dataset_args: DatasetArguments): - - logger.info('Reading datasets') - data_files = {} - - if dataset_args.train_file is not None: - data_files['train'] = os.path.join( - dataset_args.data_dir, dataset_args.train_file) - if dataset_args.validation_file is not None: - data_files['validation'] = os.path.join( - dataset_args.data_dir, dataset_args.validation_file) - if dataset_args.test_file is not None: - data_files['test'] = os.path.join( - dataset_args.data_dir, dataset_args.test_file) - - return load_dataset('json', data_files=data_files, cache_dir=dataset_args.dataset_cache_dir) - - -@dataclass -class AdditionalTrainingArguments: - seed: Optional[int] = GeneralArguments.__dataclass_fields__['seed'] - - num_train_epochs: float = field( - default=1, metadata={'help': 'Total number of training epochs to perform.'}) - - save_steps: int = field(default=5000, metadata={ - 'help': 'Save checkpoint every X updates steps.'}) - eval_steps: int = field(default=25000, metadata={ - 'help': 'Run an evaluation every X steps.'}) - logging_steps: int = field(default=5000, metadata={ - 'help': 'Log every X updates steps.'}) - - # do_eval: bool = field(default=False, metadata={ - # 'help': 'Whether to run eval on the dev set.'}) - # do_predict: bool = field(default=False, metadata={ - # 'help': 'Whether to run predictions on the test set.'}) - - per_device_train_batch_size: int = field( - default=4, 
metadata={'help': 'Batch size per GPU/TPU core/CPU for training.'} - ) - per_device_eval_batch_size: int = field( - default=4, metadata={'help': 'Batch size per GPU/TPU core/CPU for evaluation.'} - ) - - # report_to: Optional[List[str]] = field( - # default=None, metadata={"help": "The list of integrations to report the results and logs to."} - # ) - evaluation_strategy: str = field( - default='steps', - metadata={ - 'help': 'The evaluation strategy to use.', - 'choices': ['no', 'steps', 'epoch'] - }, - ) - - # evaluation_strategy (:obj:`str` or :class:`~transformers.trainer_utils.IntervalStrategy`, `optional`, defaults to :obj:`"no"`): - # The evaluation strategy to adopt during training. Possible values are: - - # * :obj:`"no"`: No evaluation is done during training. - # * :obj:`"steps"`: Evaluation is done (and logged) every :obj:`eval_steps`. - # * :obj:`"epoch"`: Evaluation is done at the end of each epoch. - - preprocessing_num_workers: Optional[int] = field( - default=None, - metadata={'help': 'The number of processes to use for the preprocessing.'}, - ) - max_seq_length: int = field( - default=512, - metadata={ - "help": "The maximum total input sequence length after tokenization. Sequences longer " - "than this will be truncated, sequences shorter will be padded." - }, - ) - max_train_samples: Optional[int] = field( - default=None, - metadata={ - "help": "For debugging purposes or quicker training, truncate the number of training examples to this " - "value if set." - }, - ) - max_eval_samples: Optional[int] = field( - default=None, - metadata={ - "help": "For debugging purposes or quicker training, truncate the number of evaluation examples to this " - "value if set." - }, - ) - max_predict_samples: Optional[int] = field( - default=None, - metadata={ - "help": "For debugging purposes or quicker training, truncate the number of prediction examples to this " - "value if set." - }, - ) - - -@dataclass -class CustomTrainingArguments(OutputArguments, AdditionalTrainingArguments): - pass - - -def get_last_checkpoint(training_args): - last_checkpoint = None - if os.path.isdir(training_args.output_dir) and not training_args.overwrite_output_dir: - last_checkpoint = glc(training_args.output_dir) - if last_checkpoint is None and len(os.listdir(training_args.output_dir)) > 0: - raise ValueError( - f'Output directory ({training_args.output_dir}) already exists and is not empty. Use --overwrite_output_dir to overcome.' - ) - elif last_checkpoint is not None and training_args.resume_from_checkpoint is None: - logger.info( - f'Checkpoint detected, resuming training at {last_checkpoint}. To avoid this behavior, change the `--output_dir` or add `--overwrite_output_dir` to train from scratch.' 
- ) - return last_checkpoint - - -def train_from_checkpoint(trainer, last_checkpoint, training_args): - checkpoint = None - if training_args.resume_from_checkpoint is not None: - checkpoint = training_args.resume_from_checkpoint - elif last_checkpoint is not None: - checkpoint = last_checkpoint - - train_result = trainer.train(resume_from_checkpoint=checkpoint) - - trainer.save_model() # Saves the tokenizer too for easy upload - - return train_result - - -def prepare_datasets(raw_datasets, dataset_args: DatasetArguments, training_args: CustomTrainingArguments, preprocess_function): - - with training_args.main_process_first(desc="dataset map pre-processing"): - raw_datasets = raw_datasets.map( - preprocess_function, - batched=True, - load_from_cache_file=not dataset_args.overwrite_cache, - desc="Running tokenizer on dataset", - ) - - if 'train' not in raw_datasets: - raise ValueError('Train dataset missing') - train_dataset = raw_datasets['train'] - if training_args.max_train_samples is not None: - train_dataset = train_dataset.select( - range(training_args.max_train_samples)) - - if 'validation' not in raw_datasets: - raise ValueError('Validation dataset missing') - eval_dataset = raw_datasets['validation'] - if training_args.max_eval_samples is not None: - eval_dataset = eval_dataset.select( - range(training_args.max_eval_samples)) - - if 'test' not in raw_datasets: - raise ValueError('Test dataset missing') - predict_dataset = raw_datasets['test'] - if training_args.max_predict_samples is not None: - predict_dataset = predict_dataset.select( - range(training_args.max_predict_samples)) - - return train_dataset, eval_dataset, predict_dataset diff --git a/spaces/XzJosh/TianDou-Bert-VITS2/attentions.py b/spaces/XzJosh/TianDou-Bert-VITS2/attentions.py deleted file mode 100644 index 1192dd7268c20c11010e73a6017ed09549695afe..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/TianDou-Bert-VITS2/attentions.py +++ /dev/null @@ -1,344 +0,0 @@ -import copy -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import logging - -logger = logging.getLogger(__name__) - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, isflow = True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - #if isflow: - # cond_layer = torch.nn.Conv1d(256, 2*hidden_channels*n_layers, 1) - # self.cond_pre = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, 1) - # self.cond_layer = weight_norm(cond_layer, name='weight') - # self.gin_channels = 256 - self.cond_layer_idx = self.n_layers - 
if 'gin_channels' in kwargs: - self.gin_channels = kwargs['gin_channels'] - if self.gin_channels != 0: - self.spk_emb_linear = nn.Linear(self.gin_channels, self.hidden_channels) - # vits2 says 3rd block, so idx is 2 by default - self.cond_layer_idx = kwargs['cond_layer_idx'] if 'cond_layer_idx' in kwargs else 2 - logging.debug(self.gin_channels, self.cond_layer_idx) - assert self.cond_layer_idx < self.n_layers, 'cond_layer_idx should be less than n_layers' - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - def forward(self, x, x_mask, g=None): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - if i == self.cond_layer_idx and g is not None: - g = self.spk_emb_linear(g.transpose(1, 2)) - g = g.transpose(1, 2) - x = x + g - x = x * x_mask - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * 
x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." 
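-            # Keep only a band of width `block_length` around the diagonal; scores
-            # outside the band are filled with -1e4 so they contribute ~0 after softmax.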
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/XzJosh/nanami-Bert-VITS2/commons.py b/spaces/XzJosh/nanami-Bert-VITS2/commons.py deleted file mode 100644 index 9ad0444b61cbadaa388619986c2889c707d873ce..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/nanami-Bert-VITS2/commons.py +++ /dev/null @@ -1,161 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. 
* logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): 
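-        # Wrap a bare tensor in a list so the filtering and clipping below can
-        # treat single-tensor and iterable inputs uniformly.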
- parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/YE01/saya-vits/text/cleaners.py b/spaces/YE01/saya-vits/text/cleaners.py deleted file mode 100644 index 679eba65d64a30c57b643af1b76a788029f6585d..0000000000000000000000000000000000000000 --- a/spaces/YE01/saya-vits/text/cleaners.py +++ /dev/null @@ -1,128 +0,0 @@ -import re -from text.japanese import japanese_to_romaji_with_accent, japanese_to_ipa, japanese_to_ipa2, japanese_to_ipa3 -from text.korean import latin_to_hangul, number_to_hangul, divide_hangul, korean_to_lazy_ipa, korean_to_ipa -from text.mandarin import number_to_chinese, chinese_to_bopomofo, latin_to_bopomofo, chinese_to_romaji, chinese_to_lazy_ipa, chinese_to_ipa, chinese_to_ipa2 -from text.sanskrit import devanagari_to_ipa -from text.english import english_to_lazy_ipa, english_to_ipa2, english_to_lazy_ipa2 -from text.thai import num_to_thai, latin_to_thai -# from text.shanghainese import shanghainese_to_ipa -# from text.cantonese import cantonese_to_ipa -from text.ngu_dialect import ngu_dialect_to_ipa - - -def japanese_cleaners(text): - text = japanese_to_romaji_with_accent(text) - text = re.sub(r'([A-Za-z])$', r'\1.', text) - return text - - -def japanese_cleaners2(text): - return japanese_cleaners(text).replace('ts', 'ʦ').replace('...', '…') - - -def korean_cleaners(text): - '''Pipeline for Korean text''' - text = latin_to_hangul(text) - text = number_to_hangul(text) - text = divide_hangul(text) - text = re.sub(r'([\u3131-\u3163])$', r'\1.', text) - return text - - -def chinese_cleaners(text): - '''Pipeline for Chinese text''' - text = number_to_chinese(text) - text = chinese_to_bopomofo(text) - text = latin_to_bopomofo(text) - text = re.sub(r'([ˉˊˇˋ˙])$', r'\1。', text) - return text - - -def zh_ja_mixture_cleaners(text): - text = re.sub(r'\[ZH\](.*?)\[ZH\]', - lambda x: chinese_to_romaji(x.group(1))+' ', text) - text = re.sub(r'\[JA\](.*?)\[JA\]', lambda x: japanese_to_romaji_with_accent( - x.group(1)).replace('ts', 'ʦ').replace('u', 'ɯ').replace('...', '…')+' ', text) - text = re.sub(r'\s+$', '', text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text - - -def sanskrit_cleaners(text): - text = text.replace('॥', '।').replace('ॐ', 'ओम्') - text = re.sub(r'([^।])$', r'\1।', text) - return text - - -def cjks_cleaners(text): - text = re.sub(r'\[ZH\](.*?)\[ZH\]', - lambda x: chinese_to_lazy_ipa(x.group(1))+' ', text) - text = re.sub(r'\[JA\](.*?)\[JA\]', - lambda x: japanese_to_ipa(x.group(1))+' ', text) - text = re.sub(r'\[KO\](.*?)\[KO\]', - lambda x: korean_to_lazy_ipa(x.group(1))+' ', text) - text = re.sub(r'\[SA\](.*?)\[SA\]', - lambda x: devanagari_to_ipa(x.group(1))+' ', text) - text = re.sub(r'\[EN\](.*?)\[EN\]', - lambda x: english_to_lazy_ipa(x.group(1))+' ', text) - text = re.sub(r'\s+$', '', text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text - - -def cjke_cleaners(text): - text = re.sub(r'\[ZH\](.*?)\[ZH\]', lambda x: chinese_to_lazy_ipa(x.group(1)).replace( - 'ʧ', 'tʃ').replace('ʦ', 'ts').replace('ɥan', 'ɥæn')+' ', text) - text = re.sub(r'\[JA\](.*?)\[JA\]', lambda x: 
japanese_to_ipa(x.group(1)).replace('ʧ', 'tʃ').replace( - 'ʦ', 'ts').replace('ɥan', 'ɥæn').replace('ʥ', 'dz')+' ', text) - text = re.sub(r'\[KO\](.*?)\[KO\]', - lambda x: korean_to_ipa(x.group(1))+' ', text) - text = re.sub(r'\[EN\](.*?)\[EN\]', lambda x: english_to_ipa2(x.group(1)).replace('ɑ', 'a').replace( - 'ɔ', 'o').replace('ɛ', 'e').replace('ɪ', 'i').replace('ʊ', 'u')+' ', text) - text = re.sub(r'\s+$', '', text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text - - -def cjke_cleaners2(text): - text = re.sub(r'\[ZH\](.*?)\[ZH\]', - lambda x: chinese_to_ipa(x.group(1))+' ', text) - text = re.sub(r'\[JA\](.*?)\[JA\]', - lambda x: japanese_to_ipa2(x.group(1))+' ', text) - text = re.sub(r'\[KO\](.*?)\[KO\]', - lambda x: korean_to_ipa(x.group(1))+' ', text) - text = re.sub(r'\[EN\](.*?)\[EN\]', - lambda x: english_to_ipa2(x.group(1))+' ', text) - text = re.sub(r'\s+$', '', text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text - - -def thai_cleaners(text): - text = num_to_thai(text) - text = latin_to_thai(text) - return text - - -def shanghainese_cleaners(text): - text = shanghainese_to_ipa(text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text - - -def chinese_dialect_cleaners(text): - text = re.sub(r'\[ZH\](.*?)\[ZH\]', - lambda x: chinese_to_ipa2(x.group(1))+' ', text) - text = re.sub(r'\[JA\](.*?)\[JA\]', - lambda x: japanese_to_ipa3(x.group(1)).replace('Q', 'ʔ')+' ', text) - text = re.sub(r'\[SH\](.*?)\[SH\]', lambda x: shanghainese_to_ipa(x.group(1)).replace('1', '˥˧').replace('5', - '˧˧˦').replace('6', '˩˩˧').replace('7', '˥').replace('8', '˩˨').replace('ᴀ', 'ɐ').replace('ᴇ', 'e')+' ', text) - text = re.sub(r'\[GD\](.*?)\[GD\]', - lambda x: cantonese_to_ipa(x.group(1))+' ', text) - text = re.sub(r'\[EN\](.*?)\[EN\]', - lambda x: english_to_lazy_ipa2(x.group(1))+' ', text) - text = re.sub(r'\[([A-Z]{2})\](.*?)\[\1\]', lambda x: ngu_dialect_to_ipa(x.group(2), x.group( - 1)).replace('ʣ', 'dz').replace('ʥ', 'dʑ').replace('ʦ', 'ts').replace('ʨ', 'tɕ')+' ', text) - text = re.sub(r'\s+$', '', text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text diff --git a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/models/embeddings.py b/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/models/embeddings.py deleted file mode 100644 index 0221d891f171fa18f7d5648c7f6a3bbc0b1c4c90..0000000000000000000000000000000000000000 --- a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/models/embeddings.py +++ /dev/null @@ -1,200 +0,0 @@ -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
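-
-# Timestep and positional embedding building blocks (sinusoidal timestep
-# embeddings, Fourier features, and learned image positional embeddings) used by
-# the diffusers UNet and transformer models.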
-import math - -import numpy as np -import torch -from torch import nn - - -def get_timestep_embedding( - timesteps: torch.Tensor, - embedding_dim: int, - flip_sin_to_cos: bool = False, - downscale_freq_shift: float = 1, - scale: float = 1, - max_period: int = 10000, -): - """ - This matches the implementation in Denoising Diffusion Probabilistic Models: Create sinusoidal timestep embeddings. - - :param timesteps: a 1-D Tensor of N indices, one per batch element. - These may be fractional. - :param embedding_dim: the dimension of the output. :param max_period: controls the minimum frequency of the - embeddings. :return: an [N x dim] Tensor of positional embeddings. - """ - assert len(timesteps.shape) == 1, "Timesteps should be a 1d-array" - - half_dim = embedding_dim // 2 - exponent = -math.log(max_period) * torch.arange( - start=0, end=half_dim, dtype=torch.float32, device=timesteps.device - ) - exponent = exponent / (half_dim - downscale_freq_shift) - - emb = torch.exp(exponent) - emb = timesteps[:, None].float() * emb[None, :] - - # scale embeddings - emb = scale * emb - - # concat sine and cosine embeddings - emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=-1) - - # flip sine and cosine embeddings - if flip_sin_to_cos: - emb = torch.cat([emb[:, half_dim:], emb[:, :half_dim]], dim=-1) - - # zero pad - if embedding_dim % 2 == 1: - emb = torch.nn.functional.pad(emb, (0, 1, 0, 0)) - return emb - - -class TimestepEmbedding(nn.Module): - def __init__(self, in_channels: int, time_embed_dim: int, act_fn: str = "silu", out_dim: int = None): - super().__init__() - - self.linear_1 = nn.Linear(in_channels, time_embed_dim) - self.act = None - if act_fn == "silu": - self.act = nn.SiLU() - elif act_fn == "mish": - self.act = nn.Mish() - - if out_dim is not None: - time_embed_dim_out = out_dim - else: - time_embed_dim_out = time_embed_dim - self.linear_2 = nn.Linear(time_embed_dim, time_embed_dim_out) - - def forward(self, sample): - sample = self.linear_1(sample) - - if self.act is not None: - sample = self.act(sample) - - sample = self.linear_2(sample) - return sample - - -class Timesteps(nn.Module): - def __init__(self, num_channels: int, flip_sin_to_cos: bool, downscale_freq_shift: float): - super().__init__() - self.num_channels = num_channels - self.flip_sin_to_cos = flip_sin_to_cos - self.downscale_freq_shift = downscale_freq_shift - - def forward(self, timesteps): - t_emb = get_timestep_embedding( - timesteps, - self.num_channels, - flip_sin_to_cos=self.flip_sin_to_cos, - downscale_freq_shift=self.downscale_freq_shift, - ) - return t_emb - - -class GaussianFourierProjection(nn.Module): - """Gaussian Fourier embeddings for noise levels.""" - - def __init__( - self, embedding_size: int = 256, scale: float = 1.0, set_W_to_weight=True, log=True, flip_sin_to_cos=False - ): - super().__init__() - self.weight = nn.Parameter(torch.randn(embedding_size) * scale, requires_grad=False) - self.log = log - self.flip_sin_to_cos = flip_sin_to_cos - - if set_W_to_weight: - # to delete later - self.W = nn.Parameter(torch.randn(embedding_size) * scale, requires_grad=False) - - self.weight = self.W - - def forward(self, x): - if self.log: - x = torch.log(x) - - x_proj = x[:, None] * self.weight[None, :] * 2 * np.pi - - if self.flip_sin_to_cos: - out = torch.cat([torch.cos(x_proj), torch.sin(x_proj)], dim=-1) - else: - out = torch.cat([torch.sin(x_proj), torch.cos(x_proj)], dim=-1) - return out - - -class ImagePositionalEmbeddings(nn.Module): - """ - Converts latent image classes into vector embeddings. 
Sums the vector embeddings with positional embeddings for the - height and width of the latent space. - - For more details, see figure 10 of the dall-e paper: https://arxiv.org/abs/2102.12092 - - For VQ-diffusion: - - Output vector embeddings are used as input for the transformer. - - Note that the vector embeddings for the transformer are different than the vector embeddings from the VQVAE. - - Args: - num_embed (`int`): - Number of embeddings for the latent pixels embeddings. - height (`int`): - Height of the latent image i.e. the number of height embeddings. - width (`int`): - Width of the latent image i.e. the number of width embeddings. - embed_dim (`int`): - Dimension of the produced vector embeddings. Used for the latent pixel, height, and width embeddings. - """ - - def __init__( - self, - num_embed: int, - height: int, - width: int, - embed_dim: int, - ): - super().__init__() - - self.height = height - self.width = width - self.num_embed = num_embed - self.embed_dim = embed_dim - - self.emb = nn.Embedding(self.num_embed, embed_dim) - self.height_emb = nn.Embedding(self.height, embed_dim) - self.width_emb = nn.Embedding(self.width, embed_dim) - - def forward(self, index): - emb = self.emb(index) - - height_emb = self.height_emb(torch.arange(self.height, device=index.device).view(1, self.height)) - - # 1 x H x D -> 1 x H x 1 x D - height_emb = height_emb.unsqueeze(2) - - width_emb = self.width_emb(torch.arange(self.width, device=index.device).view(1, self.width)) - - # 1 x W x D -> 1 x 1 x W x D - width_emb = width_emb.unsqueeze(1) - - pos_emb = height_emb + width_emb - - # 1 x H x W x D -> 1 x L xD - pos_emb = pos_emb.view(1, self.height * self.width, -1) - - emb = emb + pos_emb[:, : emb.shape[1], :] - - return emb diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/exp/upernet_global_small/test_config_h32.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/exp/upernet_global_small/test_config_h32.py deleted file mode 100644 index a31e3874f76f9f7b089ac8834d85df2441af9b0e..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/exp/upernet_global_small/test_config_h32.py +++ /dev/null @@ -1,39 +0,0 @@ -_base_ = [ - '../../configs/_base_/models/upernet_uniformer.py', - '../../configs/_base_/datasets/ade20k.py', - '../../configs/_base_/default_runtime.py', - '../../configs/_base_/schedules/schedule_160k.py' -] -model = dict( - backbone=dict( - type='UniFormer', - embed_dim=[64, 128, 320, 512], - layers=[3, 4, 8, 3], - head_dim=64, - drop_path_rate=0.25, - windows=False, - hybrid=True, - window_size=32 - ), - decode_head=dict( - in_channels=[64, 128, 320, 512], - num_classes=150 - ), - auxiliary_head=dict( - in_channels=320, - num_classes=150 - )) - -# AdamW optimizer, no weight decay for position embedding & layer norm in backbone -optimizer = dict(_delete_=True, type='AdamW', lr=0.00006, betas=(0.9, 0.999), weight_decay=0.01, - paramwise_cfg=dict(custom_keys={'absolute_pos_embed': dict(decay_mult=0.), - 'relative_position_bias_table': dict(decay_mult=0.), - 'norm': dict(decay_mult=0.)})) - -lr_config = dict(_delete_=True, policy='poly', - warmup='linear', - warmup_iters=1500, - warmup_ratio=1e-6, - power=1.0, min_lr=0.0, by_epoch=False) - -data=dict(samples_per_gpu=2) \ No newline at end of file diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/datasets/deepfashion.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/datasets/deepfashion.py deleted file mode 100644 index 
1125376091f2d4ee6843ae4f2156b3b0453be369..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/datasets/deepfashion.py +++ /dev/null @@ -1,10 +0,0 @@ -from .builder import DATASETS -from .coco import CocoDataset - - -@DATASETS.register_module() -class DeepFashionDataset(CocoDataset): - - CLASSES = ('top', 'skirt', 'leggings', 'dress', 'outer', 'pants', 'bag', - 'neckwear', 'headwear', 'eyeglass', 'belt', 'footwear', 'hair', - 'skin', 'face') diff --git a/spaces/abyildirim/inst-inpaint/ldm/lr_scheduler.py b/spaces/abyildirim/inst-inpaint/ldm/lr_scheduler.py deleted file mode 100644 index be39da9ca6dacc22bf3df9c7389bbb403a4a3ade..0000000000000000000000000000000000000000 --- a/spaces/abyildirim/inst-inpaint/ldm/lr_scheduler.py +++ /dev/null @@ -1,98 +0,0 @@ -import numpy as np - - -class LambdaWarmUpCosineScheduler: - """ - note: use with a base_lr of 1.0 - """ - def __init__(self, warm_up_steps, lr_min, lr_max, lr_start, max_decay_steps, verbosity_interval=0): - self.lr_warm_up_steps = warm_up_steps - self.lr_start = lr_start - self.lr_min = lr_min - self.lr_max = lr_max - self.lr_max_decay_steps = max_decay_steps - self.last_lr = 0. - self.verbosity_interval = verbosity_interval - - def schedule(self, n, **kwargs): - if self.verbosity_interval > 0: - if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_lr}") - if n < self.lr_warm_up_steps: - lr = (self.lr_max - self.lr_start) / self.lr_warm_up_steps * n + self.lr_start - self.last_lr = lr - return lr - else: - t = (n - self.lr_warm_up_steps) / (self.lr_max_decay_steps - self.lr_warm_up_steps) - t = min(t, 1.0) - lr = self.lr_min + 0.5 * (self.lr_max - self.lr_min) * ( - 1 + np.cos(t * np.pi)) - self.last_lr = lr - return lr - - def __call__(self, n, **kwargs): - return self.schedule(n,**kwargs) - - -class LambdaWarmUpCosineScheduler2: - """ - supports repeated iterations, configurable via lists - note: use with a base_lr of 1.0. - """ - def __init__(self, warm_up_steps, f_min, f_max, f_start, cycle_lengths, verbosity_interval=0): - assert len(warm_up_steps) == len(f_min) == len(f_max) == len(f_start) == len(cycle_lengths) - self.lr_warm_up_steps = warm_up_steps - self.f_start = f_start - self.f_min = f_min - self.f_max = f_max - self.cycle_lengths = cycle_lengths - self.cum_cycles = np.cumsum([0] + list(self.cycle_lengths)) - self.last_f = 0. 
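-        # cum_cycles stores the cumulative step index at which each cycle ends
-        # (with a leading 0) and is used by find_in_interval to locate the
-        # active cycle; last_f caches the most recently returned multiplier so
-        # it can be reported at the verbosity interval.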
- self.verbosity_interval = verbosity_interval - - def find_in_interval(self, n): - interval = 0 - for cl in self.cum_cycles[1:]: - if n <= cl: - return interval - interval += 1 - - def schedule(self, n, **kwargs): - cycle = self.find_in_interval(n) - n = n - self.cum_cycles[cycle] - if self.verbosity_interval > 0: - if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_f}, " - f"current cycle {cycle}") - if n < self.lr_warm_up_steps[cycle]: - f = (self.f_max[cycle] - self.f_start[cycle]) / self.lr_warm_up_steps[cycle] * n + self.f_start[cycle] - self.last_f = f - return f - else: - t = (n - self.lr_warm_up_steps[cycle]) / (self.cycle_lengths[cycle] - self.lr_warm_up_steps[cycle]) - t = min(t, 1.0) - f = self.f_min[cycle] + 0.5 * (self.f_max[cycle] - self.f_min[cycle]) * ( - 1 + np.cos(t * np.pi)) - self.last_f = f - return f - - def __call__(self, n, **kwargs): - return self.schedule(n, **kwargs) - - -class LambdaLinearScheduler(LambdaWarmUpCosineScheduler2): - - def schedule(self, n, **kwargs): - cycle = self.find_in_interval(n) - n = n - self.cum_cycles[cycle] - if self.verbosity_interval > 0: - if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_f}, " - f"current cycle {cycle}") - - if n < self.lr_warm_up_steps[cycle]: - f = (self.f_max[cycle] - self.f_start[cycle]) / self.lr_warm_up_steps[cycle] * n + self.f_start[cycle] - self.last_f = f - return f - else: - f = self.f_min[cycle] + (self.f_max[cycle] - self.f_min[cycle]) * (self.cycle_lengths[cycle] - n) / (self.cycle_lengths[cycle]) - self.last_f = f - return f - diff --git a/spaces/achterbrain/Intel-Generative-Image-Dashboard/pages/Functions/Dashboard_functions.py b/spaces/achterbrain/Intel-Generative-Image-Dashboard/pages/Functions/Dashboard_functions.py deleted file mode 100644 index b9ed5d0dfa0bd70c50f565603bb32c41cae9e473..0000000000000000000000000000000000000000 --- a/spaces/achterbrain/Intel-Generative-Image-Dashboard/pages/Functions/Dashboard_functions.py +++ /dev/null @@ -1,356 +0,0 @@ -# General functions and routines used in the dashboard -''' -- Functions below are ordered by page on which they are used -- If possible, functions should not manipulate the session_state within them -''' - -import streamlit as st -import pandas as pd -import numpy as np -import seaborn as sns -import matplotlib.pyplot as plt -from PIL import Image - -##### Page-unspecific functions - -def if_true_rerun(bool_input): - ''' - This function triggers a rerun of the page if the input == True - ''' - if bool_input == True: - st.experimental_rerun() - -def assert_uploaded_frame(uploaded_df): - # Set up variables checked for - asserted_columns = { - 'Prompt_no':pd.api.types.is_integer_dtype, - 'Score':pd.api.types.is_bool_dtype, - 'Task':pd.api.types.is_object_dtype, - 'File_name':pd.api.types.is_object_dtype} - asserted_column_names = ['Prompt_no','Score','Task','File_name'] - - # Check whether all needed column names are present - df_columns_list = uploaded_df.columns.tolist() - existing_column_names = [(x in df_columns_list) for x in asserted_column_names] - assert all(existing_column_names), "The uploaded dataframe is missing a column needed for import. 
Your table needs to contain the columns: 'Prompt_no', 'Score', 'Task', 'File_name' " - - # Check whether all needed columns have correct dtypes - correct_column_dtypes = [] - for i_item in asserted_columns.items(): - dtype_test = i_item[1](uploaded_df[i_item[0]].dtype) - correct_column_dtypes.append(dtype_test) - assert all(correct_column_dtypes), "Incorrect dtypes in uploaded dataframe." - -def assert_multi_frame_upload(list_of_uploaded_dfs): - # Apply uploaded frame assert to list of frames - for i_df in list_of_uploaded_dfs: - assert_uploaded_frame(i_df) - -##### Dashboard main page -def prompt_to_csv(df, added_version_code='vNone'): - df_download = df - df_download['Filename']='p'+df_download['ID'].astype('str')+'_1_'+added_version_code+'.png' - df_download = df[['Prompt','Filename']].drop_duplicates(subset='Filename') - return df_download.to_csv().encode('utf-8') - -def prompt_df_for_download(prompt_dir): - ''' - Function to create a subset of the prompt_dir via count based selection - ''' - # Create local copy of variables - temp_prompt_dir = prompt_dir - - # Create dict to hold counts of downloaded prompts - prompt_download_dict = {} - ## Count how many prompts are in database to allow for max value in selection - prompt_task_count = temp_prompt_dir.Task.value_counts(sort=False) - prompt_task_select = prompt_task_count.copy() - - # Create numerical selector for every task in prompt directory, add count per task to dict - for i_task in prompt_task_select.index: - prompt_task_select[i_task] = st.number_input( - i_task, - value = prompt_task_count[i_task], - max_value=prompt_task_count[i_task], - min_value=0, - step = 1) - - # Create df with selected number of prompts per task - for i_task in prompt_task_select.index: - temp_df = temp_prompt_dir.loc[temp_prompt_dir['Task']==i_task][0:prompt_task_select[i_task]] - if len(temp_df)>0: - prompt_download_dict[i_task]=temp_df - - # Concat all tasks to dataframe - prompt_download = pd.concat(prompt_download_dict.values()) - - # Add linked prompts, if the user chooses to - download_linked_prompts = st.checkbox('Download linked prompts', value=True) - if download_linked_prompts: - - # Delete rows which do not have linked prompts to avoid type error - linked_prompts_info = prompt_download.dropna(subset='Linked_prompts') - - # Add relevant linked prompts - linked_prompts_ids = linked_prompts_info.Linked_prompts.str.split(',').explode().unique().astype('int') - prompt_download = pd.concat( - [prompt_download, - temp_prompt_dir.loc[temp_prompt_dir['ID'].isin(linked_prompts_ids)]]) - - # Drop rows prompts which appear twice - prompt_download = prompt_download.drop_duplicates(subset='ID') - - return prompt_download - -##### Manual assessment - -def set_eval_df_rating_vals(eval_df, picture_index, manual_eval, manual_eval_completed, manual_eval_task_score): - ''' - Function to set a block of key manual rating related variables of eval_df - ''' - temp_eval_df = eval_df - temp_eval_df.loc[picture_index,'manual_eval']=manual_eval - temp_eval_df.loc[picture_index,'manual_eval_completed']=manual_eval_completed - temp_eval_df.loc[picture_index,'manual_eval_task_score']=manual_eval_task_score - return temp_eval_df - -def radio_rating_index_translation(manual_rating_value): - if manual_rating_value == "No": - return 1 - else: - return 0 - - -def collect_linked_prompt_ratings(curr_linked_prompts, curr_eval_df, curr_prompt_dir): - ''' - Create elements to collect ratings on linked prompts: - If there are linked prompts, create df with info - Else create emtpy df 
which will automatically skip the rating creation for these prompts - Here we do not test for (curr_eval_df['manual_eval']==True) as the curr_linked_prompts - is already testing for valid prompt number and we want to ignore the exclusion for subprompts - ''' - if type(curr_linked_prompts)==list: - curr_linked_rows = curr_eval_df.loc[ - (curr_eval_df['manual_eval_completed']==False)& - (curr_eval_df['Prompt_no'].isin(curr_linked_prompts))] - curr_linked_rows = curr_linked_rows.groupby('Prompt_no').first() - else: - curr_linked_rows = pd.DataFrame() - - # Create rating for subprompts if a df for subprompt info was created - for row in curr_linked_rows.itertuples(): - # Preselected radio option - radio_preselect = radio_rating_index_translation(row.manual_eval_task_score) - # Prompt - st.write('Prompt: {0}'.format( - curr_prompt_dir.loc[curr_prompt_dir['ID']==int(row.Index)]['Prompt'].item() - )) - # Image - st.image(st.session_state['uploaded_img'][row.Picture_index],width=350) - # Rating - curr_linked_rows.loc[curr_linked_rows['Picture_index']==row.Picture_index,'manual_eval_task_score'] = st.radio( - "Does the image match the prompt?",('Yes', 'No'), horizontal=True, key=row.Picture_index, index=radio_preselect) - st.write(' ') - st.write(' ') - - return curr_linked_rows - - -def delete_last_manual_rating(session_history, eval_df): - ''' - Routine to delete last manual rating and hence to return to it - ''' - # Create local copies of objects - temp_session_history = session_history - temp_eval_df = eval_df.copy() - temp_submit = False - - if len(temp_session_history)>0: - if st.button('Return to last rated image'): - # The list contains sublists of images rated together, here we loop over these images to reset all of them - deleted_picture_index_list = temp_session_history.pop() - for i_picind in deleted_picture_index_list: - temp_eval_df.loc[ - i_picind,'manual_eval_completed']=False - #temp_eval_df.loc[ - # i_picind,'manual_eval_task_score']=np.nan - - # Set submit boolean to true, to rerun the page - temp_submit = True - - return temp_session_history, temp_eval_df, temp_submit - - -def add_previous_manual_assessments_upload_back(eval_df): - ''' - Routine to upload a dataframe of previous (manual) assessment to add it to existing database. - The uploaded df is assessed, matching counts are printed and it returns the imported df for furthe processing. 
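-    The uploaded .csv must contain the columns 'Prompt_no', 'Score', 'Task' and
-    'File_name' (this is enforced by assert_uploaded_frame).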
- ''' - # Create necessary local variables - temp_eval_df = eval_df - - # Upload single dataframe, setting default to None for code type checking - temp_uploaded_ratings = None - temp_uploaded_ratings = st.file_uploader('Select .csv for upload', accept_multiple_files=False) - if temp_uploaded_ratings != None: - try: - # Import the uploaded csv as dataframe - uploaded_ratings_df = pd.read_csv(temp_uploaded_ratings) - - # Run standard assert pipeline - assert_uploaded_frame(uploaded_ratings_df) - - # Show matching image count and instructions - overlapping_files_df = pd.merge(temp_eval_df,uploaded_ratings_df,on='File_name',how='inner') - st.write('Number of matching file names found: '+ str(len(overlapping_files_df))) - st.write('Click "Add results" button to add / override current ratings with uploaded ratings.') - - return uploaded_ratings_df - except UnicodeDecodeError: - st.write('WARNING: The uploaded file has to be a .csv downloaded from the "Assessment summary" page.') - return temp_uploaded_ratings - - -def add_previous_manual_assessments_upload(eval_df, dashboard_version_code='vNone'): - ''' - Routine to upload a dataframe of previous (manual) assessment to add it to existing database. - The uploaded df is assessed, matching counts are printed and it returns the imported df for furthe processing. - ''' - # Create necessary local variables - temp_eval_df = eval_df - - # Upload single dataframe, setting default to None for code type checking - temp_uploaded_ratings = None - temp_uploaded_ratings = st.file_uploader('Select .csv for upload', accept_multiple_files=False) - if temp_uploaded_ratings != None: - try: - # Import the uploaded csv as dataframe - uploaded_ratings_df = pd.read_csv(temp_uploaded_ratings) - - # Run standard assert pipeline - assert_uploaded_frame(uploaded_ratings_df) - - # Check the uploaded df has a registered dashboard version - assert 'Dashboard_version' in uploaded_ratings_df.columns,"The uploaded dataframe needs to have a Dashboard_version column." - # Check for correct dashboard version in uploaded file - matching_dashboard_version = uploaded_ratings_df['Dashboard_version'] == dashboard_version_code - assert all(matching_dashboard_version),"The dashboard version of your uploaded results does not match the version of this dashboard." - - # Show matching image count and instructions - overlapping_files_df = pd.merge(temp_eval_df,uploaded_ratings_df,on='File_name',how='inner') - st.write('Number of matching file names found: '+ str(len(overlapping_files_df))) - ## Show warning if some of the matching images already have a rating - if len(overlapping_files_df.manual_eval_task_score.dropna())>0: - st.write('WARNING: {0} of {1} matching files already have a saved rating. 
These will be overriden when you click "Add results".'.format( - str(len(overlapping_files_df.manual_eval_task_score.dropna())),str(len(overlapping_files_df)))) - st.write('Click "Add results" button to add uploaded ratings to current ratings.') - return uploaded_ratings_df - except UnicodeDecodeError: - st.write('WARNING: The uploaded file has to be a .csv downloaded from the "Assessment summary" page.') - return temp_uploaded_ratings - -def add_previous_manual_assessments_submit(eval_df, uploaded_ratings): - ''' - If uploaded_ratings != None, this will create a button which when pressed will trigger - for the provided ratings to be added to eval_df - ''' - # Create necessary local variables - temp_eval_df = eval_df - temp_submitted = False - - # Create dict to translate uploaded score into str format used during manual assessment - bool_str_dict = {True:'Yes',False:'No'} - - # If a dataframe of uploaded ratings was provided: create a button which allows to add ratings to existing eval_df - if type(uploaded_ratings) == pd.DataFrame: - temp_submitted = st.button("Add results") - if temp_submitted: - for row in uploaded_ratings.itertuples(): - temp_eval_df.loc[temp_eval_df['File_name']==row.File_name,'manual_eval']=True - temp_eval_df.loc[temp_eval_df['File_name']==row.File_name,'manual_eval_completed']=True - temp_eval_df.loc[temp_eval_df['File_name']==row.File_name,'manual_eval_task_score']=bool_str_dict[row.Score] - return temp_eval_df, temp_submitted - - -def add_previous_manual_assessments(eval_df, dashboard_version_code): - ''' - Full routine to allow the user to upload past ratings and add these to eval_df - ''' - st.subheader('Add previous assessments') - st.write('Upload results of previous assessment (as downloaded from summary page) to add these results and skip these images in your current manual assessment. 
Note that you can only add results for images which you have uploaded using the same file name.') - - # Create necessary local variables - temp_eval_df = eval_df - - # Allow user to upload .csv with prior ratings - uploaded_ratings = add_previous_manual_assessments_upload(temp_eval_df, dashboard_version_code) - - # Add rating to eval_df, if some were uploaded - temp_eval_df, temp_submitted = add_previous_manual_assessments_submit(temp_eval_df, uploaded_ratings) - - return temp_eval_df, temp_submitted - -##### Assessment summary - -def print_results_tabs(file_upload, results_df): - ''' - #Routine used to give user the choice between showing results as bar chart or table - ''' - # Create a tab for bar chart and one for table data - fig, table = multi_comparison_plotI(results_df=results_df, uploaded_df_list=file_upload) - tab1, tab2 = st.tabs(["Bar chart", "Data table"]) - with tab1: - st.pyplot(fig) - - with tab2: - st.write(table) - - -def pre_assessment_visualisation(type_str): - ''' - Routine used to allow user to visualise uploaded results before completing any assessments - ''' - st.write('Complete {0} assessment or upload .csv with saved {0} assessment to generate summary.'.format(type_str)) - - # Display file uploader - file_upload = st.file_uploader("Upload .csv with saved {0} assessment to plot prior results.".format(type_str), accept_multiple_files=True) - if len(file_upload) > 0: - print_results_tabs(file_upload=file_upload, results_df=None) - - -def multi_comparison_plotI(results_df = None, uploaded_df_list = []): - # If list of uploaded_dfs is provided and we transform them into pd.Dfs - # Multiple file uploader returns empty list as default - file_upload_names = [x.name for x in uploaded_df_list] - plot_df_list = [pd.read_csv(x) for x in uploaded_df_list] - - # Assert that all uploaded df's have correct format - assert_multi_frame_upload(plot_df_list) - - # Add file name as model name - for i_df in range(len(file_upload_names)): - plot_df_list[i_df]= plot_df_list[i_df].assign(Model=file_upload_names[i_df]) - - # If results df is provided, add it to list of dfs to plot - if type(results_df) == pd.DataFrame: - plot_df_list.append(results_df) - - # Concat all frames to joined dataframe - plot_df = pd.concat(plot_df_list) - - # Calculate the grouped percentage scores per task category and model - grouped_series = plot_df.groupby(['Task','Model'])['Score'].sum()/plot_df.groupby(['Task','Model'])['Score'].count()*100 - grouped_series = grouped_series.rename('Percentage correct') - - # Create plot - eval_share = grouped_series.reset_index() - # Add small amount to make the bars on plot not disappear - eval_share['Percentage correct'] = eval_share['Percentage correct']+1 - - # Create plot - fig = plt.figure(figsize=(12, 3)) - sns.barplot(data=eval_share,x='Task',y='Percentage correct',hue='Model', palette='GnBu') - plt.xticks(rotation=-65) - plt.xlabel(' ') - plt.ylim(0, 100) - return fig,grouped_series diff --git a/spaces/aditi2222/paragus_paraphrase_demo/app.py b/spaces/aditi2222/paragus_paraphrase_demo/app.py deleted file mode 100644 index 2754bf781cddebe65c9d998b67ca9cd15094b08b..0000000000000000000000000000000000000000 --- a/spaces/aditi2222/paragus_paraphrase_demo/app.py +++ /dev/null @@ -1,35 +0,0 @@ -import torch - -import gradio as gr - -from transformers import (PegasusForConditionalGeneration, PegasusTokenizer) - -best_model_path = "aditi2222/paragus_models" -model = PegasusForConditionalGeneration.from_pretrained(best_model_path) -#tokenizer = 
PegasusTokenizer.from_pretrained('google/pegasus-xsum') -tokenizer = PegasusTokenizer.from_pretrained('aditi2222/paragus_models') - -def tokenize_data(text): - # Tokenize the review body - input_ = str(text) + ' ' - max_len = 64 - # tokenize inputs - tokenized_inputs = tokenizer(input_, padding='max_length', truncation=True, max_length=max_len, return_attention_mask=True, return_tensors='pt') - - inputs={"input_ids": tokenized_inputs['input_ids'], - "attention_mask": tokenized_inputs['attention_mask']} - return inputs - -def generate_answers(text): - inputs = tokenize_data(text) - results= model.generate(input_ids= inputs['input_ids'], attention_mask=inputs['attention_mask'], do_sample=True, - max_length=64, - top_k=120, - top_p=0.98, - early_stopping=True, - num_return_sequences=1) - answer = tokenizer.decode(results[0], skip_special_tokens=True) - return answer - -iface = gr.Interface(fn=generate_answers, inputs=['text'], outputs=["text"]) -iface.launch(inline=False, share=True) \ No newline at end of file diff --git a/spaces/aichina/MagicPrompt-Stable-Diffusion/app.py b/spaces/aichina/MagicPrompt-Stable-Diffusion/app.py deleted file mode 100644 index 4ef904995708514c1b101049af73529e23c1a3ab..0000000000000000000000000000000000000000 --- a/spaces/aichina/MagicPrompt-Stable-Diffusion/app.py +++ /dev/null @@ -1,54 +0,0 @@ -from transformers import pipeline, set_seed -import gradio as grad, random, re - - -gpt2_pipe = pipeline('text-generation', model='Gustavosta/MagicPrompt-Stable-Diffusion', tokenizer='gpt2') -with open("ideas.txt", "r") as f: - line = f.readlines() - - -def generate(starting_text): - seed = random.randint(100, 1000000) - set_seed(seed) - - if starting_text == "": - starting_text: str = line[random.randrange(0, len(line))].replace("\n", "").lower().capitalize() - starting_text: str = re.sub(r"[,:\-–.!;?_]", '', starting_text) - - response = gpt2_pipe(starting_text, max_length=(len(starting_text) + random.randint(60, 90)), num_return_sequences=4) - response_list = [] - for x in response: - resp = x['generated_text'].strip() - if resp != starting_text and len(resp) > (len(starting_text) + 4) and resp.endswith((":", "-", "—")) is False: - response_list.append(resp+'\n') - - response_end = "\n".join(response_list) - response_end = re.sub('[^ ]+\.[^ ]+','', response_end) - response_end = response_end.replace("<", "").replace(">", "") - - if response_end != "": - return response_end - - -txt = grad.Textbox(lines=1, label="Initial Text", placeholder="English Text here") -out = grad.Textbox(lines=4, label="Generated Prompts") - -examples = [] -for x in range(8): - examples.append(line[random.randrange(0, len(line))].replace("\n", "").lower().capitalize()) - -title = "Stable Diffusion Prompt Generator" -description = 'This is a demo of the model series: "MagicPrompt", in this case, aimed at: "Stable Diffusion". To use it, simply submit your text or click on one of the examples. To learn more about the model, [click here](https://huggingface.co/Gustavosta/MagicPrompt-Stable-Diffusion).
' - -grad.Interface(fn=generate, - inputs=txt, - outputs=out, - examples=examples, - title=title, - description=description, - article='', - allow_flagging='never', - cache_examples=False, - theme="default").launch(enable_queue=True, debug=True) - - diff --git a/spaces/aijack/hair/model.py b/spaces/aijack/hair/model.py deleted file mode 100644 index ffbb3953d5a5199663c0a8141be8e46ce45d5f11..0000000000000000000000000000000000000000 --- a/spaces/aijack/hair/model.py +++ /dev/null @@ -1,157 +0,0 @@ -from __future__ import annotations - -import argparse -import os -import pathlib -import subprocess -import sys -from typing import Callable, Union - -import dlib -import huggingface_hub -import numpy as np -import PIL.Image -import torch -import torch.nn as nn -import torchvision.transforms as T - -if os.getenv('SYSTEM') == 'spaces': - with open('patch.e4e') as f: - subprocess.run('patch -p1'.split(), cwd='encoder4editing', stdin=f) - with open('patch.hairclip') as f: - subprocess.run('patch -p1'.split(), cwd='HairCLIP', stdin=f) - -app_dir = pathlib.Path(__file__).parent - -e4e_dir = app_dir / 'encoder4editing' -sys.path.insert(0, e4e_dir.as_posix()) - -from models.psp import pSp -from utils.alignment import align_face - -hairclip_dir = app_dir / 'HairCLIP' -mapper_dir = hairclip_dir / 'mapper' -sys.path.insert(0, hairclip_dir.as_posix()) -sys.path.insert(0, mapper_dir.as_posix()) - -from mapper.datasets.latents_dataset_inference import LatentsDatasetInference -from mapper.hairclip_mapper import HairCLIPMapper - - -class Model: - def __init__(self, device: Union[torch.device, str]): - self.device = torch.device(device) - self.landmark_model = self._create_dlib_landmark_model() - self.e4e = self._load_e4e() - self.hairclip = self._load_hairclip() - self.transform = self._create_transform() - - @staticmethod - def _create_dlib_landmark_model(): - path = huggingface_hub.hf_hub_download(repo_id="aijack/jojogan", filename="face_landmarks.dat" ) - return dlib.shape_predictor(path) - - def _load_e4e(self) -> nn.Module: - ckpt_path = huggingface_hub.hf_hub_download(repo_id="aijack/e4e", filename="e4e.pt" ) - ckpt = torch.load(ckpt_path, map_location='cpu') - opts = ckpt['opts'] - opts['device'] = self.device.type - opts['checkpoint_path'] = ckpt_path - opts = argparse.Namespace(**opts) - model = pSp(opts) - model.to(self.device) - model.eval() - return model - - def _load_hairclip(self) -> nn.Module: - ckpt_path = huggingface_hub.hf_hub_download('aijack/hair', - 'hairclip.pt' - ) - ckpt = torch.load(ckpt_path, map_location='cpu') - opts = ckpt['opts'] - opts['device'] = self.device.type - opts['checkpoint_path'] = ckpt_path - opts['editing_type'] = 'both' - opts['input_type'] = 'text' - opts['hairstyle_description'] = 'HairCLIP/mapper/hairstyle_list.txt' - opts['color_description'] = 'red' - opts = argparse.Namespace(**opts) - model = HairCLIPMapper(opts) - model.to(self.device) - model.eval() - return model - - @staticmethod - def _create_transform() -> Callable: - transform = T.Compose([ - T.Resize(256), - T.CenterCrop(256), - T.ToTensor(), - T.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]), - ]) - return transform - - def detect_and_align_face(self, image) -> PIL.Image.Image: - image = align_face(filepath=image, predictor=self.landmark_model) - return image - - @staticmethod - def denormalize(tensor: torch.Tensor) -> torch.Tensor: - return torch.clamp((tensor + 1) / 2 * 255, 0, 255).to(torch.uint8) - - def postprocess(self, tensor: torch.Tensor) -> np.ndarray: - tensor = self.denormalize(tensor) - 
return tensor.cpu().numpy().transpose(1, 2, 0) - - @torch.inference_mode() - def reconstruct_face( - self, image: PIL.Image.Image) -> tuple[np.ndarray, torch.Tensor]: - input_data = self.transform(image).unsqueeze(0).to(self.device) - reconstructed_images, latents = self.e4e(input_data, - randomize_noise=False, - return_latents=True) - reconstructed = torch.clamp(reconstructed_images[0].detach(), -1, 1) - reconstructed = self.postprocess(reconstructed) - return reconstructed, latents[0] - - @torch.inference_mode() - def generate(self, editing_type: str, hairstyle_index: int, - color_description: str, latent: torch.Tensor) -> np.ndarray: - opts = self.hairclip.opts - opts.editing_type = editing_type - opts.color_description = color_description - - if editing_type == 'color': - hairstyle_index = 0 - - device = torch.device(opts.device) - - dataset = LatentsDatasetInference(latents=latent.unsqueeze(0).cpu(), - opts=opts) - w, hairstyle_text_inputs_list, color_text_inputs_list = dataset[0][:3] - - w = w.unsqueeze(0).to(device) - hairstyle_text_inputs = hairstyle_text_inputs_list[ - hairstyle_index].unsqueeze(0).to(device) - color_text_inputs = color_text_inputs_list[0].unsqueeze(0).to(device) - - hairstyle_tensor_hairmasked = torch.Tensor([0]).unsqueeze(0).to(device) - color_tensor_hairmasked = torch.Tensor([0]).unsqueeze(0).to(device) - - w_hat = w + 0.1 * self.hairclip.mapper( - w, - hairstyle_text_inputs, - color_text_inputs, - hairstyle_tensor_hairmasked, - color_tensor_hairmasked, - ) - x_hat, _ = self.hairclip.decoder( - [w_hat], - input_is_latent=True, - return_latents=True, - randomize_noise=False, - truncation=1, - ) - res = torch.clamp(x_hat[0].detach(), -1, 1) - res = self.postprocess(res) - return res diff --git a/spaces/aijack/jojo/e4e/criteria/lpips/__init__.py b/spaces/aijack/jojo/e4e/criteria/lpips/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/egs/ljspeech/voc1/local/data_download.sh b/spaces/akhaliq/VQMIVC/ParallelWaveGAN/egs/ljspeech/voc1/local/data_download.sh deleted file mode 100644 index 698e3eb497addaeaa4e3c639607ffb9f2f37905f..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/egs/ljspeech/voc1/local/data_download.sh +++ /dev/null @@ -1,27 +0,0 @@ -#!/bin/bash - -# Copyright 2019 Tomoki Hayashi -# MIT License (https://opensource.org/licenses/MIT) - -download_dir=$1 - -# check arguments -if [ $# != 1 ]; then - echo "Usage: $0 " - exit 1 -fi - -set -euo pipefail - -cwd=$(pwd) -if [ ! -e "${download_dir}/LJSpeech-1.1" ]; then - mkdir -p "${download_dir}" - cd "${download_dir}" - wget http://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2 - tar -vxf ./*.tar.bz2 - rm ./*.tar.bz2 - cd "${cwd}" - echo "Successfully downloaded data." -else - echo "Already exists. Skipped." 
-fi diff --git a/spaces/algomuffin/jojo_fork/e4e/utils/train_utils.py b/spaces/algomuffin/jojo_fork/e4e/utils/train_utils.py deleted file mode 100644 index 0c55177f7442010bc1fcc64de3d142585c22adc0..0000000000000000000000000000000000000000 --- a/spaces/algomuffin/jojo_fork/e4e/utils/train_utils.py +++ /dev/null @@ -1,13 +0,0 @@ - -def aggregate_loss_dict(agg_loss_dict): - mean_vals = {} - for output in agg_loss_dict: - for key in output: - mean_vals[key] = mean_vals.setdefault(key, []) + [output[key]] - for key in mean_vals: - if len(mean_vals[key]) > 0: - mean_vals[key] = sum(mean_vals[key]) / len(mean_vals[key]) - else: - print('{} has no value'.format(key)) - mean_vals[key] = 0 - return mean_vals diff --git a/spaces/aliceoq/vozes-da-loirinha/lib/uvr5_pack/lib_v5/nets_33966KB.py b/spaces/aliceoq/vozes-da-loirinha/lib/uvr5_pack/lib_v5/nets_33966KB.py deleted file mode 100644 index b8986f968dc5383e65d35aac6e4367299de3378b..0000000000000000000000000000000000000000 --- a/spaces/aliceoq/vozes-da-loirinha/lib/uvr5_pack/lib_v5/nets_33966KB.py +++ /dev/null @@ -1,122 +0,0 @@ -import torch -from torch import nn -import torch.nn.functional as F - -from . import layers_33966KB as layers - - -class BaseASPPNet(nn.Module): - def __init__(self, nin, ch, dilations=(4, 8, 16, 32)): - super(BaseASPPNet, self).__init__() - self.enc1 = layers.Encoder(nin, ch, 3, 2, 1) - self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1) - self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1) - self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1) - - self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations) - - self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1) - self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1) - self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1) - self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1) - - def __call__(self, x): - h, e1 = self.enc1(x) - h, e2 = self.enc2(h) - h, e3 = self.enc3(h) - h, e4 = self.enc4(h) - - h = self.aspp(h) - - h = self.dec4(h, e4) - h = self.dec3(h, e3) - h = self.dec2(h, e2) - h = self.dec1(h, e1) - - return h - - -class CascadedASPPNet(nn.Module): - def __init__(self, n_fft): - super(CascadedASPPNet, self).__init__() - self.stg1_low_band_net = BaseASPPNet(2, 16) - self.stg1_high_band_net = BaseASPPNet(2, 16) - - self.stg2_bridge = layers.Conv2DBNActiv(18, 8, 1, 1, 0) - self.stg2_full_band_net = BaseASPPNet(8, 16) - - self.stg3_bridge = layers.Conv2DBNActiv(34, 16, 1, 1, 0) - self.stg3_full_band_net = BaseASPPNet(16, 32) - - self.out = nn.Conv2d(32, 2, 1, bias=False) - self.aux1_out = nn.Conv2d(16, 2, 1, bias=False) - self.aux2_out = nn.Conv2d(16, 2, 1, bias=False) - - self.max_bin = n_fft // 2 - self.output_bin = n_fft // 2 + 1 - - self.offset = 128 - - def forward(self, x, aggressiveness=None): - mix = x.detach() - x = x.clone() - - x = x[:, :, : self.max_bin] - - bandw = x.size()[2] // 2 - aux1 = torch.cat( - [ - self.stg1_low_band_net(x[:, :, :bandw]), - self.stg1_high_band_net(x[:, :, bandw:]), - ], - dim=2, - ) - - h = torch.cat([x, aux1], dim=1) - aux2 = self.stg2_full_band_net(self.stg2_bridge(h)) - - h = torch.cat([x, aux1, aux2], dim=1) - h = self.stg3_full_band_net(self.stg3_bridge(h)) - - mask = torch.sigmoid(self.out(h)) - mask = F.pad( - input=mask, - pad=(0, 0, 0, self.output_bin - mask.size()[2]), - mode="replicate", - ) - - if self.training: - aux1 = torch.sigmoid(self.aux1_out(aux1)) - aux1 = F.pad( - input=aux1, - pad=(0, 0, 0, self.output_bin - aux1.size()[2]), - mode="replicate", - ) - aux2 = torch.sigmoid(self.aux2_out(aux2)) - aux2 = 
F.pad( - input=aux2, - pad=(0, 0, 0, self.output_bin - aux2.size()[2]), - mode="replicate", - ) - return mask * mix, aux1 * mix, aux2 * mix - else: - if aggressiveness: - mask[:, :, : aggressiveness["split_bin"]] = torch.pow( - mask[:, :, : aggressiveness["split_bin"]], - 1 + aggressiveness["value"] / 3, - ) - mask[:, :, aggressiveness["split_bin"] :] = torch.pow( - mask[:, :, aggressiveness["split_bin"] :], - 1 + aggressiveness["value"], - ) - - return mask * mix - - def predict(self, x_mag, aggressiveness=None): - h = self.forward(x_mag, aggressiveness) - - if self.offset > 0: - h = h[:, :, :, self.offset : -self.offset] - assert h.size()[3] > 0 - - return h diff --git a/spaces/allknowingroger/Image-Models-Test106/app.py b/spaces/allknowingroger/Image-Models-Test106/app.py deleted file mode 100644 index 1ca509d3486cc02875ce064859f0f300792d9697..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test106/app.py +++ /dev/null @@ -1,144 +0,0 @@ -import gradio as gr -# import os -# import sys -# from pathlib import Path -import time - -models =[ - "Archfiend/ardic-ai-sd-fdb", - "rishabh063/lora-trained-xl-monkey", - "LinoyTsaban/lora-xl-3d_icons-0.0001-5e-05-1500-1-5", - "rishabh063/lora-trained-xl-colab3", - "rishabh063/lora-trained-xl-colab2", - "felixdae/lora-trained-xl-colab", - "Abhishek2003/my-pet-dog-qaz", - "Muhammadreza/mann-e-artistic-4-revised-2", - "Kyousan/lora-trained-xl-colab-licar2000-withblipbehind", -] - - -model_functions = {} -model_idx = 1 -for model_path in models: - try: - model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False) - except Exception as error: - def the_fn(txt): - return None - model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"]) - model_idx+=1 - - -def send_it_idx(idx): - def send_it_fn(prompt): - output = (model_functions.get(str(idx)) or model_functions.get(str(1)))(prompt) - return output - return send_it_fn - -def get_prompts(prompt_text): - return prompt_text - -def clear_it(val): - if int(val) != 0: - val = 0 - else: - val = 0 - pass - return val - -def all_task_end(cnt,t_stamp): - to = t_stamp + 60 - et = time.time() - if et > to and t_stamp != 0: - d = gr.update(value=0) - tog = gr.update(value=1) - #print(f'to: {to} et: {et}') - else: - if cnt != 0: - d = gr.update(value=et) - else: - d = gr.update(value=0) - tog = gr.update(value=0) - #print (f'passing: to: {to} et: {et}') - pass - return d, tog - -def all_task_start(): - print("\n\n\n\n\n\n\n") - t = time.gmtime() - t_stamp = time.time() - current_time = time.strftime("%H:%M:%S", t) - return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0) - -def clear_fn(): - nn = len(models) - return tuple([None, *[None for _ in range(nn)]]) - - - -with gr.Blocks(title="SD Models") as my_interface: - with gr.Column(scale=12): - # with gr.Row(): - # gr.Markdown("""- Primary prompt: 你想画的内容(英文单词,如 a cat, 加英文逗号效果更好;点 Improve 按钮进行完善)\n- Real prompt: 完善后的提示词,出现后再点右边的 Run 按钮开始运行""") - with gr.Row(): - with gr.Row(scale=6): - primary_prompt=gr.Textbox(label="Prompt", value="") - # real_prompt=gr.Textbox(label="Real prompt") - with gr.Row(scale=6): - # improve_prompts_btn=gr.Button("Improve") - with gr.Row(): - run=gr.Button("Run",variant="primary") - clear_btn=gr.Button("Clear") - with gr.Row(): - sd_outputs = {} - model_idx = 1 - for model_path in models: - with gr.Column(scale=3, min_width=320): - with gr.Box(): - sd_outputs[model_idx] = gr.Image(label=model_path) - 
pass - model_idx += 1 - pass - pass - - with gr.Row(visible=False): - start_box=gr.Number(interactive=False) - end_box=gr.Number(interactive=False) - tog_box=gr.Textbox(value=0,interactive=False) - - start_box.change( - all_task_end, - [start_box, end_box], - [start_box, tog_box], - every=1, - show_progress=False) - - primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box]) - run.click(all_task_start, None, [start_box, end_box, tog_box]) - runs_dict = {} - model_idx = 1 - for model_path in models: - runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]]) - model_idx += 1 - pass - pass - - # improve_prompts_btn_clicked=improve_prompts_btn.click( - # get_prompts, - # inputs=[primary_prompt], - # outputs=[primary_prompt], - # cancels=list(runs_dict.values())) - clear_btn.click( - clear_fn, - None, - [primary_prompt, *list(sd_outputs.values())], - cancels=[*list(runs_dict.values())]) - tog_box.change( - clear_it, - tog_box, - tog_box, - cancels=[*list(runs_dict.values())]) - -my_interface.queue(concurrency_count=600, status_update_rate=1) -my_interface.launch(inline=True, show_api=False) - \ No newline at end of file diff --git a/spaces/amankishore/sjc/sd1/ldm/modules/diffusionmodules/openaimodel.py b/spaces/amankishore/sjc/sd1/ldm/modules/diffusionmodules/openaimodel.py deleted file mode 100644 index fcf95d1ea8a078dd259915109203789f78f0643a..0000000000000000000000000000000000000000 --- a/spaces/amankishore/sjc/sd1/ldm/modules/diffusionmodules/openaimodel.py +++ /dev/null @@ -1,961 +0,0 @@ -from abc import abstractmethod -from functools import partial -import math -from typing import Iterable - -import numpy as np -import torch as th -import torch.nn as nn -import torch.nn.functional as F - -from ldm.modules.diffusionmodules.util import ( - checkpoint, - conv_nd, - linear, - avg_pool_nd, - zero_module, - normalization, - timestep_embedding, -) -from ldm.modules.attention import SpatialTransformer - - -# dummy replace -def convert_module_to_f16(x): - pass - -def convert_module_to_f32(x): - pass - - -## go -class AttentionPool2d(nn.Module): - """ - Adapted from CLIP: https://github.com/openai/CLIP/blob/main/clip/model.py - """ - - def __init__( - self, - spacial_dim: int, - embed_dim: int, - num_heads_channels: int, - output_dim: int = None, - ): - super().__init__() - self.positional_embedding = nn.Parameter(th.randn(embed_dim, spacial_dim ** 2 + 1) / embed_dim ** 0.5) - self.qkv_proj = conv_nd(1, embed_dim, 3 * embed_dim, 1) - self.c_proj = conv_nd(1, embed_dim, output_dim or embed_dim, 1) - self.num_heads = embed_dim // num_heads_channels - self.attention = QKVAttention(self.num_heads) - - def forward(self, x): - b, c, *_spatial = x.shape - x = x.reshape(b, c, -1) # NC(HW) - x = th.cat([x.mean(dim=-1, keepdim=True), x], dim=-1) # NC(HW+1) - x = x + self.positional_embedding[None, :, :].to(x.dtype) # NC(HW+1) - x = self.qkv_proj(x) - x = self.attention(x) - x = self.c_proj(x) - return x[:, :, 0] - - -class TimestepBlock(nn.Module): - """ - Any module where forward() takes timestep embeddings as a second argument. - """ - - @abstractmethod - def forward(self, x, emb): - """ - Apply the module to `x` given `emb` timestep embeddings. - """ - - -class TimestepEmbedSequential(nn.Sequential, TimestepBlock): - """ - A sequential module that passes timestep embeddings to the children that - support it as an extra input. 
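-    TimestepBlock children receive the timestep embedding, and SpatialTransformer
-    children additionally receive the cross-attention context.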
- """ - - def forward(self, x, emb, context=None): - for layer in self: - if isinstance(layer, TimestepBlock): - x = layer(x, emb) - elif isinstance(layer, SpatialTransformer): - x = layer(x, context) - else: - x = layer(x) - return x - - -class Upsample(nn.Module): - """ - An upsampling layer with an optional convolution. - :param channels: channels in the inputs and outputs. - :param use_conv: a bool determining if a convolution is applied. - :param dims: determines if the signal is 1D, 2D, or 3D. If 3D, then - upsampling occurs in the inner-two dimensions. - """ - - def __init__(self, channels, use_conv, dims=2, out_channels=None, padding=1): - super().__init__() - self.channels = channels - self.out_channels = out_channels or channels - self.use_conv = use_conv - self.dims = dims - if use_conv: - self.conv = conv_nd(dims, self.channels, self.out_channels, 3, padding=padding) - - def forward(self, x): - assert x.shape[1] == self.channels - if self.dims == 3: - x = F.interpolate( - x, (x.shape[2], x.shape[3] * 2, x.shape[4] * 2), mode="nearest" - ) - else: - x = F.interpolate(x, scale_factor=2, mode="nearest") - if self.use_conv: - x = self.conv(x) - return x - -class TransposedUpsample(nn.Module): - 'Learned 2x upsampling without padding' - def __init__(self, channels, out_channels=None, ks=5): - super().__init__() - self.channels = channels - self.out_channels = out_channels or channels - - self.up = nn.ConvTranspose2d(self.channels,self.out_channels,kernel_size=ks,stride=2) - - def forward(self,x): - return self.up(x) - - -class Downsample(nn.Module): - """ - A downsampling layer with an optional convolution. - :param channels: channels in the inputs and outputs. - :param use_conv: a bool determining if a convolution is applied. - :param dims: determines if the signal is 1D, 2D, or 3D. If 3D, then - downsampling occurs in the inner-two dimensions. - """ - - def __init__(self, channels, use_conv, dims=2, out_channels=None,padding=1): - super().__init__() - self.channels = channels - self.out_channels = out_channels or channels - self.use_conv = use_conv - self.dims = dims - stride = 2 if dims != 3 else (1, 2, 2) - if use_conv: - self.op = conv_nd( - dims, self.channels, self.out_channels, 3, stride=stride, padding=padding - ) - else: - assert self.channels == self.out_channels - self.op = avg_pool_nd(dims, kernel_size=stride, stride=stride) - - def forward(self, x): - assert x.shape[1] == self.channels - return self.op(x) - - -class ResBlock(TimestepBlock): - """ - A residual block that can optionally change the number of channels. - :param channels: the number of input channels. - :param emb_channels: the number of timestep embedding channels. - :param dropout: the rate of dropout. - :param out_channels: if specified, the number of out channels. - :param use_conv: if True and out_channels is specified, use a spatial - convolution instead of a smaller 1x1 convolution to change the - channels in the skip connection. - :param dims: determines if the signal is 1D, 2D, or 3D. - :param use_checkpoint: if True, use gradient checkpointing on this module. - :param up: if True, use this block for upsampling. - :param down: if True, use this block for downsampling. 
- """ - - def __init__( - self, - channels, - emb_channels, - dropout, - out_channels=None, - use_conv=False, - use_scale_shift_norm=False, - dims=2, - use_checkpoint=False, - up=False, - down=False, - ): - super().__init__() - self.channels = channels - self.emb_channels = emb_channels - self.dropout = dropout - self.out_channels = out_channels or channels - self.use_conv = use_conv - self.use_checkpoint = use_checkpoint - self.use_scale_shift_norm = use_scale_shift_norm - - self.in_layers = nn.Sequential( - normalization(channels), - nn.SiLU(), - conv_nd(dims, channels, self.out_channels, 3, padding=1), - ) - - self.updown = up or down - - if up: - self.h_upd = Upsample(channels, False, dims) - self.x_upd = Upsample(channels, False, dims) - elif down: - self.h_upd = Downsample(channels, False, dims) - self.x_upd = Downsample(channels, False, dims) - else: - self.h_upd = self.x_upd = nn.Identity() - - self.emb_layers = nn.Sequential( - nn.SiLU(), - linear( - emb_channels, - 2 * self.out_channels if use_scale_shift_norm else self.out_channels, - ), - ) - self.out_layers = nn.Sequential( - normalization(self.out_channels), - nn.SiLU(), - nn.Dropout(p=dropout), - zero_module( - conv_nd(dims, self.out_channels, self.out_channels, 3, padding=1) - ), - ) - - if self.out_channels == channels: - self.skip_connection = nn.Identity() - elif use_conv: - self.skip_connection = conv_nd( - dims, channels, self.out_channels, 3, padding=1 - ) - else: - self.skip_connection = conv_nd(dims, channels, self.out_channels, 1) - - def forward(self, x, emb): - """ - Apply the block to a Tensor, conditioned on a timestep embedding. - :param x: an [N x C x ...] Tensor of features. - :param emb: an [N x emb_channels] Tensor of timestep embeddings. - :return: an [N x C x ...] Tensor of outputs. - """ - return checkpoint( - self._forward, (x, emb), self.parameters(), self.use_checkpoint - ) - - - def _forward(self, x, emb): - if self.updown: - in_rest, in_conv = self.in_layers[:-1], self.in_layers[-1] - h = in_rest(x) - h = self.h_upd(h) - x = self.x_upd(x) - h = in_conv(h) - else: - h = self.in_layers(x) - emb_out = self.emb_layers(emb).type(h.dtype) - while len(emb_out.shape) < len(h.shape): - emb_out = emb_out[..., None] - if self.use_scale_shift_norm: - out_norm, out_rest = self.out_layers[0], self.out_layers[1:] - scale, shift = th.chunk(emb_out, 2, dim=1) - h = out_norm(h) * (1 + scale) + shift - h = out_rest(h) - else: - h = h + emb_out - h = self.out_layers(h) - return self.skip_connection(x) + h - - -class AttentionBlock(nn.Module): - """ - An attention block that allows spatial positions to attend to each other. - Originally ported from here, but adapted to the N-d case. - https://github.com/hojonathanho/diffusion/blob/1e0dceb3b3495bbe19116a5e1b3596cd0706c543/diffusion_tf/models/unet.py#L66. 
- """ - - def __init__( - self, - channels, - num_heads=1, - num_head_channels=-1, - use_checkpoint=False, - use_new_attention_order=False, - ): - super().__init__() - self.channels = channels - if num_head_channels == -1: - self.num_heads = num_heads - else: - assert ( - channels % num_head_channels == 0 - ), f"q,k,v channels {channels} is not divisible by num_head_channels {num_head_channels}" - self.num_heads = channels // num_head_channels - self.use_checkpoint = use_checkpoint - self.norm = normalization(channels) - self.qkv = conv_nd(1, channels, channels * 3, 1) - if use_new_attention_order: - # split qkv before split heads - self.attention = QKVAttention(self.num_heads) - else: - # split heads before split qkv - self.attention = QKVAttentionLegacy(self.num_heads) - - self.proj_out = zero_module(conv_nd(1, channels, channels, 1)) - - def forward(self, x): - return checkpoint(self._forward, (x,), self.parameters(), True) # TODO: check checkpoint usage, is True # TODO: fix the .half call!!! - #return pt_checkpoint(self._forward, x) # pytorch - - def _forward(self, x): - b, c, *spatial = x.shape - x = x.reshape(b, c, -1) - qkv = self.qkv(self.norm(x)) - h = self.attention(qkv) - h = self.proj_out(h) - return (x + h).reshape(b, c, *spatial) - - -def count_flops_attn(model, _x, y): - """ - A counter for the `thop` package to count the operations in an - attention operation. - Meant to be used like: - macs, params = thop.profile( - model, - inputs=(inputs, timestamps), - custom_ops={QKVAttention: QKVAttention.count_flops}, - ) - """ - b, c, *spatial = y[0].shape - num_spatial = int(np.prod(spatial)) - # We perform two matmuls with the same number of ops. - # The first computes the weight matrix, the second computes - # the combination of the value vectors. - matmul_ops = 2 * b * (num_spatial ** 2) * c - model.total_ops += th.DoubleTensor([matmul_ops]) - - -class QKVAttentionLegacy(nn.Module): - """ - A module which performs QKV attention. Matches legacy QKVAttention + input/ouput heads shaping - """ - - def __init__(self, n_heads): - super().__init__() - self.n_heads = n_heads - - def forward(self, qkv): - """ - Apply QKV attention. - :param qkv: an [N x (H * 3 * C) x T] tensor of Qs, Ks, and Vs. - :return: an [N x (H * C) x T] tensor after attention. - """ - bs, width, length = qkv.shape - assert width % (3 * self.n_heads) == 0 - ch = width // (3 * self.n_heads) - q, k, v = qkv.reshape(bs * self.n_heads, ch * 3, length).split(ch, dim=1) - scale = 1 / math.sqrt(math.sqrt(ch)) - weight = th.einsum( - "bct,bcs->bts", q * scale, k * scale - ) # More stable with f16 than dividing afterwards - weight = th.softmax(weight.float(), dim=-1).type(weight.dtype) - a = th.einsum("bts,bcs->bct", weight, v) - return a.reshape(bs, -1, length) - - @staticmethod - def count_flops(model, _x, y): - return count_flops_attn(model, _x, y) - - -class QKVAttention(nn.Module): - """ - A module which performs QKV attention and splits in a different order. - """ - - def __init__(self, n_heads): - super().__init__() - self.n_heads = n_heads - - def forward(self, qkv): - """ - Apply QKV attention. - :param qkv: an [N x (3 * H * C) x T] tensor of Qs, Ks, and Vs. - :return: an [N x (H * C) x T] tensor after attention. 
- """ - bs, width, length = qkv.shape - assert width % (3 * self.n_heads) == 0 - ch = width // (3 * self.n_heads) - q, k, v = qkv.chunk(3, dim=1) - scale = 1 / math.sqrt(math.sqrt(ch)) - weight = th.einsum( - "bct,bcs->bts", - (q * scale).view(bs * self.n_heads, ch, length), - (k * scale).view(bs * self.n_heads, ch, length), - ) # More stable with f16 than dividing afterwards - weight = th.softmax(weight.float(), dim=-1).type(weight.dtype) - a = th.einsum("bts,bcs->bct", weight, v.reshape(bs * self.n_heads, ch, length)) - return a.reshape(bs, -1, length) - - @staticmethod - def count_flops(model, _x, y): - return count_flops_attn(model, _x, y) - - -class UNetModel(nn.Module): - """ - The full UNet model with attention and timestep embedding. - :param in_channels: channels in the input Tensor. - :param model_channels: base channel count for the model. - :param out_channels: channels in the output Tensor. - :param num_res_blocks: number of residual blocks per downsample. - :param attention_resolutions: a collection of downsample rates at which - attention will take place. May be a set, list, or tuple. - For example, if this contains 4, then at 4x downsampling, attention - will be used. - :param dropout: the dropout probability. - :param channel_mult: channel multiplier for each level of the UNet. - :param conv_resample: if True, use learned convolutions for upsampling and - downsampling. - :param dims: determines if the signal is 1D, 2D, or 3D. - :param num_classes: if specified (as an int), then this model will be - class-conditional with `num_classes` classes. - :param use_checkpoint: use gradient checkpointing to reduce memory usage. - :param num_heads: the number of attention heads in each attention layer. - :param num_heads_channels: if specified, ignore num_heads and instead use - a fixed channel width per attention head. - :param num_heads_upsample: works with num_heads to set a different number - of heads for upsampling. Deprecated. - :param use_scale_shift_norm: use a FiLM-like conditioning mechanism. - :param resblock_updown: use residual blocks for up/downsampling. - :param use_new_attention_order: use a different attention pattern for potentially - increased efficiency. - """ - - def __init__( - self, - image_size, - in_channels, - model_channels, - out_channels, - num_res_blocks, - attention_resolutions, - dropout=0, - channel_mult=(1, 2, 4, 8), - conv_resample=True, - dims=2, - num_classes=None, - use_checkpoint=False, - use_fp16=False, - num_heads=-1, - num_head_channels=-1, - num_heads_upsample=-1, - use_scale_shift_norm=False, - resblock_updown=False, - use_new_attention_order=False, - use_spatial_transformer=False, # custom transformer support - transformer_depth=1, # custom transformer support - context_dim=None, # custom transformer support - n_embed=None, # custom support for prediction of discrete ids into codebook of first stage vq model - legacy=True, - ): - super().__init__() - if use_spatial_transformer: - assert context_dim is not None, 'Fool!! You forgot to include the dimension of your cross-attention conditioning...' - - if context_dim is not None: - assert use_spatial_transformer, 'Fool!! You forgot to use the spatial transformer for your cross-attention conditioning...' 
- from omegaconf.listconfig import ListConfig - if type(context_dim) == ListConfig: - context_dim = list(context_dim) - - if num_heads_upsample == -1: - num_heads_upsample = num_heads - - if num_heads == -1: - assert num_head_channels != -1, 'Either num_heads or num_head_channels has to be set' - - if num_head_channels == -1: - assert num_heads != -1, 'Either num_heads or num_head_channels has to be set' - - self.image_size = image_size - self.in_channels = in_channels - self.model_channels = model_channels - self.out_channels = out_channels - self.num_res_blocks = num_res_blocks - self.attention_resolutions = attention_resolutions - self.dropout = dropout - self.channel_mult = channel_mult - self.conv_resample = conv_resample - self.num_classes = num_classes - self.use_checkpoint = use_checkpoint - self.dtype = th.float16 if use_fp16 else th.float32 - self.num_heads = num_heads - self.num_head_channels = num_head_channels - self.num_heads_upsample = num_heads_upsample - self.predict_codebook_ids = n_embed is not None - - time_embed_dim = model_channels * 4 - self.time_embed = nn.Sequential( - linear(model_channels, time_embed_dim), - nn.SiLU(), - linear(time_embed_dim, time_embed_dim), - ) - - if self.num_classes is not None: - self.label_emb = nn.Embedding(num_classes, time_embed_dim) - - self.input_blocks = nn.ModuleList( - [ - TimestepEmbedSequential( - conv_nd(dims, in_channels, model_channels, 3, padding=1) - ) - ] - ) - self._feature_size = model_channels - input_block_chans = [model_channels] - ch = model_channels - ds = 1 - for level, mult in enumerate(channel_mult): - for _ in range(num_res_blocks): - layers = [ - ResBlock( - ch, - time_embed_dim, - dropout, - out_channels=mult * model_channels, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ) - ] - ch = mult * model_channels - if ds in attention_resolutions: - if num_head_channels == -1: - dim_head = ch // num_heads - else: - num_heads = ch // num_head_channels - dim_head = num_head_channels - if legacy: - #num_heads = 1 - dim_head = ch // num_heads if use_spatial_transformer else num_head_channels - layers.append( - AttentionBlock( - ch, - use_checkpoint=use_checkpoint, - num_heads=num_heads, - num_head_channels=dim_head, - use_new_attention_order=use_new_attention_order, - ) if not use_spatial_transformer else SpatialTransformer( - ch, num_heads, dim_head, depth=transformer_depth, context_dim=context_dim - ) - ) - self.input_blocks.append(TimestepEmbedSequential(*layers)) - self._feature_size += ch - input_block_chans.append(ch) - if level != len(channel_mult) - 1: - out_ch = ch - self.input_blocks.append( - TimestepEmbedSequential( - ResBlock( - ch, - time_embed_dim, - dropout, - out_channels=out_ch, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - down=True, - ) - if resblock_updown - else Downsample( - ch, conv_resample, dims=dims, out_channels=out_ch - ) - ) - ) - ch = out_ch - input_block_chans.append(ch) - ds *= 2 - self._feature_size += ch - - if num_head_channels == -1: - dim_head = ch // num_heads - else: - num_heads = ch // num_head_channels - dim_head = num_head_channels - if legacy: - #num_heads = 1 - dim_head = ch // num_heads if use_spatial_transformer else num_head_channels - self.middle_block = TimestepEmbedSequential( - ResBlock( - ch, - time_embed_dim, - dropout, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ), - AttentionBlock( - ch, - use_checkpoint=use_checkpoint, - 
num_heads=num_heads, - num_head_channels=dim_head, - use_new_attention_order=use_new_attention_order, - ) if not use_spatial_transformer else SpatialTransformer( - ch, num_heads, dim_head, depth=transformer_depth, context_dim=context_dim - ), - ResBlock( - ch, - time_embed_dim, - dropout, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ), - ) - self._feature_size += ch - - self.output_blocks = nn.ModuleList([]) - for level, mult in list(enumerate(channel_mult))[::-1]: - for i in range(num_res_blocks + 1): - ich = input_block_chans.pop() - layers = [ - ResBlock( - ch + ich, - time_embed_dim, - dropout, - out_channels=model_channels * mult, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ) - ] - ch = model_channels * mult - if ds in attention_resolutions: - if num_head_channels == -1: - dim_head = ch // num_heads - else: - num_heads = ch // num_head_channels - dim_head = num_head_channels - if legacy: - #num_heads = 1 - dim_head = ch // num_heads if use_spatial_transformer else num_head_channels - layers.append( - AttentionBlock( - ch, - use_checkpoint=use_checkpoint, - num_heads=num_heads_upsample, - num_head_channels=dim_head, - use_new_attention_order=use_new_attention_order, - ) if not use_spatial_transformer else SpatialTransformer( - ch, num_heads, dim_head, depth=transformer_depth, context_dim=context_dim - ) - ) - if level and i == num_res_blocks: - out_ch = ch - layers.append( - ResBlock( - ch, - time_embed_dim, - dropout, - out_channels=out_ch, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - up=True, - ) - if resblock_updown - else Upsample(ch, conv_resample, dims=dims, out_channels=out_ch) - ) - ds //= 2 - self.output_blocks.append(TimestepEmbedSequential(*layers)) - self._feature_size += ch - - self.out = nn.Sequential( - normalization(ch), - nn.SiLU(), - zero_module(conv_nd(dims, model_channels, out_channels, 3, padding=1)), - ) - if self.predict_codebook_ids: - self.id_predictor = nn.Sequential( - normalization(ch), - conv_nd(dims, model_channels, n_embed, 1), - #nn.LogSoftmax(dim=1) # change to cross_entropy and produce non-normalized logits - ) - - def convert_to_fp16(self): - """ - Convert the torso of the model to float16. - """ - self.input_blocks.apply(convert_module_to_f16) - self.middle_block.apply(convert_module_to_f16) - self.output_blocks.apply(convert_module_to_f16) - - def convert_to_fp32(self): - """ - Convert the torso of the model to float32. - """ - self.input_blocks.apply(convert_module_to_f32) - self.middle_block.apply(convert_module_to_f32) - self.output_blocks.apply(convert_module_to_f32) - - def forward(self, x, timesteps=None, context=None, y=None,**kwargs): - """ - Apply the model to an input batch. - :param x: an [N x C x ...] Tensor of inputs. - :param timesteps: a 1-D batch of timesteps. - :param context: conditioning plugged in via crossattn - :param y: an [N] Tensor of labels, if class-conditional. - :return: an [N x C x ...] Tensor of outputs. 
- """ - assert (y is not None) == ( - self.num_classes is not None - ), "must specify y if and only if the model is class-conditional" - hs = [] - t_emb = timestep_embedding(timesteps, self.model_channels, repeat_only=False) - emb = self.time_embed(t_emb) - - if self.num_classes is not None: - assert y.shape == (x.shape[0],) - emb = emb + self.label_emb(y) - - h = x.type(self.dtype) - for module in self.input_blocks: - h = module(h, emb, context) - hs.append(h) - h = self.middle_block(h, emb, context) - for module in self.output_blocks: - h = th.cat([h, hs.pop()], dim=1) - h = module(h, emb, context) - h = h.type(x.dtype) - if self.predict_codebook_ids: - return self.id_predictor(h) - else: - return self.out(h) - - -class EncoderUNetModel(nn.Module): - """ - The half UNet model with attention and timestep embedding. - For usage, see UNet. - """ - - def __init__( - self, - image_size, - in_channels, - model_channels, - out_channels, - num_res_blocks, - attention_resolutions, - dropout=0, - channel_mult=(1, 2, 4, 8), - conv_resample=True, - dims=2, - use_checkpoint=False, - use_fp16=False, - num_heads=1, - num_head_channels=-1, - num_heads_upsample=-1, - use_scale_shift_norm=False, - resblock_updown=False, - use_new_attention_order=False, - pool="adaptive", - *args, - **kwargs - ): - super().__init__() - - if num_heads_upsample == -1: - num_heads_upsample = num_heads - - self.in_channels = in_channels - self.model_channels = model_channels - self.out_channels = out_channels - self.num_res_blocks = num_res_blocks - self.attention_resolutions = attention_resolutions - self.dropout = dropout - self.channel_mult = channel_mult - self.conv_resample = conv_resample - self.use_checkpoint = use_checkpoint - self.dtype = th.float16 if use_fp16 else th.float32 - self.num_heads = num_heads - self.num_head_channels = num_head_channels - self.num_heads_upsample = num_heads_upsample - - time_embed_dim = model_channels * 4 - self.time_embed = nn.Sequential( - linear(model_channels, time_embed_dim), - nn.SiLU(), - linear(time_embed_dim, time_embed_dim), - ) - - self.input_blocks = nn.ModuleList( - [ - TimestepEmbedSequential( - conv_nd(dims, in_channels, model_channels, 3, padding=1) - ) - ] - ) - self._feature_size = model_channels - input_block_chans = [model_channels] - ch = model_channels - ds = 1 - for level, mult in enumerate(channel_mult): - for _ in range(num_res_blocks): - layers = [ - ResBlock( - ch, - time_embed_dim, - dropout, - out_channels=mult * model_channels, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ) - ] - ch = mult * model_channels - if ds in attention_resolutions: - layers.append( - AttentionBlock( - ch, - use_checkpoint=use_checkpoint, - num_heads=num_heads, - num_head_channels=num_head_channels, - use_new_attention_order=use_new_attention_order, - ) - ) - self.input_blocks.append(TimestepEmbedSequential(*layers)) - self._feature_size += ch - input_block_chans.append(ch) - if level != len(channel_mult) - 1: - out_ch = ch - self.input_blocks.append( - TimestepEmbedSequential( - ResBlock( - ch, - time_embed_dim, - dropout, - out_channels=out_ch, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - down=True, - ) - if resblock_updown - else Downsample( - ch, conv_resample, dims=dims, out_channels=out_ch - ) - ) - ) - ch = out_ch - input_block_chans.append(ch) - ds *= 2 - self._feature_size += ch - - self.middle_block = TimestepEmbedSequential( - ResBlock( - ch, - time_embed_dim, - dropout, - 
dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ), - AttentionBlock( - ch, - use_checkpoint=use_checkpoint, - num_heads=num_heads, - num_head_channels=num_head_channels, - use_new_attention_order=use_new_attention_order, - ), - ResBlock( - ch, - time_embed_dim, - dropout, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ), - ) - self._feature_size += ch - self.pool = pool - if pool == "adaptive": - self.out = nn.Sequential( - normalization(ch), - nn.SiLU(), - nn.AdaptiveAvgPool2d((1, 1)), - zero_module(conv_nd(dims, ch, out_channels, 1)), - nn.Flatten(), - ) - elif pool == "attention": - assert num_head_channels != -1 - self.out = nn.Sequential( - normalization(ch), - nn.SiLU(), - AttentionPool2d( - (image_size // ds), ch, num_head_channels, out_channels - ), - ) - elif pool == "spatial": - self.out = nn.Sequential( - nn.Linear(self._feature_size, 2048), - nn.ReLU(), - nn.Linear(2048, self.out_channels), - ) - elif pool == "spatial_v2": - self.out = nn.Sequential( - nn.Linear(self._feature_size, 2048), - normalization(2048), - nn.SiLU(), - nn.Linear(2048, self.out_channels), - ) - else: - raise NotImplementedError(f"Unexpected {pool} pooling") - - def convert_to_fp16(self): - """ - Convert the torso of the model to float16. - """ - self.input_blocks.apply(convert_module_to_f16) - self.middle_block.apply(convert_module_to_f16) - - def convert_to_fp32(self): - """ - Convert the torso of the model to float32. - """ - self.input_blocks.apply(convert_module_to_f32) - self.middle_block.apply(convert_module_to_f32) - - def forward(self, x, timesteps): - """ - Apply the model to an input batch. - :param x: an [N x C x ...] Tensor of inputs. - :param timesteps: a 1-D batch of timesteps. - :return: an [N x K] Tensor of outputs. - """ - emb = self.time_embed(timestep_embedding(timesteps, self.model_channels)) - - results = [] - h = x.type(self.dtype) - for module in self.input_blocks: - h = module(h, emb) - if self.pool.startswith("spatial"): - results.append(h.type(x.dtype).mean(dim=(2, 3))) - h = self.middle_block(h, emb) - if self.pool.startswith("spatial"): - results.append(h.type(x.dtype).mean(dim=(2, 3))) - h = th.cat(results, axis=-1) - return self.out(h) - else: - h = h.type(x.dtype) - return self.out(h) - diff --git a/spaces/amit-scans/Image-Text-Detection/README.md b/spaces/amit-scans/Image-Text-Detection/README.md deleted file mode 100644 index 6ca2a036a1ab2000a55a4023135eb9302a23c890..0000000000000000000000000000000000000000 --- a/spaces/amit-scans/Image-Text-Detection/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Image Text Detection -emoji: 👀 -colorFrom: yellow -colorTo: yellow -sdk: streamlit -app_file: app.py -pinned: false -license: mit -duplicated_from: ajitrajasekharan/Image-Text-Detection ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/andreslu/orion/src/utils.py b/spaces/andreslu/orion/src/utils.py deleted file mode 100644 index a4eb8b6142ff64c104a4fbdebe82794fe19e37c8..0000000000000000000000000000000000000000 --- a/spaces/andreslu/orion/src/utils.py +++ /dev/null @@ -1,132 +0,0 @@ -from ngram import NGram - - -def post_process_template(tB): - if tB.endswith('.') == False: - tB += '.' - return tB - # return tB.split('.')[0] + '.' 
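# Illustrative sketch of how post_process_template above is meant to be used, with
# made-up sample strings (the function simply guarantees a trailing period and
# otherwise leaves the template untouched):
#
#   post_process_template("Paris is the capital of France")   # -> "Paris is the capital of France."
#   post_process_template("Paris is the capital of France.")  # -> "Paris is the capital of France."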
- - -def construct_template(words, templateA, if_then=False): - if len(words) >= 2: - templates = ['{} '.format(words[0])] - for i in range(1, len(words)-1): - templates[0] += '{} '.format(words[i]) - templates[0] += '{}.'.format(words[-1]) - elif len(words) == 1: - templates = [ - # '{} is .'.format(words[0]), - '{} .'.format(words[0])] - - elif len(words) == 0: - templates = [] - - if if_then: - for word in words: - index = templateA.index('') - templateA = templateA[:index] + word + templateA[index + len(''):] - templates = ['If ' + templateA + ' then ' + template for template in templates] - - return templates - - -def filter_words(words_prob): - word_count = {} - token1_count = {} - word2_count = {} - ret = [] - for words, prob, *_ in words_prob: - filter_this = False - - # filter repetitive token - token_count = {} - for word in words: - for token in word.split(' '): - if token in token_count: - filter_this = True - token_count[token] = 1 - if filter_this: - prob *= 0.5 - - # filter repetitive words - if len(words) == 2 and words[0] == words[1]: - continue - - # filter repetitive first token - token1 = words[0].split(' ')[0] - if token1 not in token1_count: - token1_count[token1] = 1 - else: - token1_count[token1] += 1 - prob /= token1_count[token1] - - for word in words: - if word not in word_count: - word_count[word] = 0 - word_count[word] += 1 - prob /= word_count[word] - - if len(words) == 2: - if words[1] not in word2_count: - word2_count[words[1]] = 0 - word2_count[words[1]] += 1 - prob /= word2_count[words[1]] - - ret.append([words, prob]) - return sorted(ret, key=lambda x: x[1], reverse=True) - - -import math -from copy import deepcopy - - -def convert_for_print(arr): - ret = deepcopy(arr) - for i in range(len(ret)): - ret[i][1] = round(ret[i][1], 7) - if len(ret[i]) == 3: - for j in range(len(ret[i][2])): - ret[i][2][j] = round(ret[i][2][j], 7) - return ret - - -def formalize_tA(tA): - tA = tA.strip() - if tA.endswith('.'): - tA = tA[:-1].strip() + '.' - else: - tA += '.' - tA = tA.replace(' ,', ',') - tA = tA.replace(" '", "'") - return tA - - -ngram_n = 3 - - -def extract_similar_words(txt, words): - max_word_length = 0 - for word in words: - if len(word) > max_word_length: - max_word_length = len(word) - - txt_ngrams = [] - for i in range(len(txt)): - for j in range(i + ngram_n, min(len(txt), i + max_word_length + 5)): - txt_ngrams.append(txt[i:j].lower()) - n = NGram(txt_ngrams, key=lambda x: x.lower(), N=ngram_n) - ret = [] - for word in words: - matched_word = n.find(word.lower(), 0.5) - if matched_word is None: - return None - ret.append(matched_word) - return ret - - -def extract_words(txt, words): - for word in words: - if word not in txt: - return None - return [word.lower() for word in words] diff --git a/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/modules/html_generator.py b/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/modules/html_generator.py deleted file mode 100644 index 5f0fd43b03b9232af615eb08d2b3264bb29053a4..0000000000000000000000000000000000000000 --- a/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/modules/html_generator.py +++ /dev/null @@ -1,263 +0,0 @@ -''' - -This is a library for formatting text outputs as nice HTML. 
- -''' - -import os -import re -import time -from pathlib import Path - -import markdown -from PIL import Image, ImageOps - -# This is to store the paths to the thumbnails of the profile pictures -image_cache = {} - -with open(Path(__file__).resolve().parent / '../css/html_readable_style.css', 'r') as f: - readable_css = f.read() -with open(Path(__file__).resolve().parent / '../css/html_4chan_style.css', 'r') as css_f: - _4chan_css = css_f.read() -with open(Path(__file__).resolve().parent / '../css/html_cai_style.css', 'r') as f: - cai_css = f.read() -with open(Path(__file__).resolve().parent / '../css/html_bubble_chat_style.css', 'r') as f: - bubble_chat_css = f.read() -with open(Path(__file__).resolve().parent / '../css/html_instruct_style.css', 'r') as f: - instruct_css = f.read() - - -def fix_newlines(string): - string = string.replace('\n', '\n\n') - string = re.sub(r"\n{3,}", "\n\n", string) - string = string.strip() - return string - - -def replace_blockquote(m): - return m.group().replace('\n', '\n> ').replace('\\begin{blockquote}', '').replace('\\end{blockquote}', '') - - -def convert_to_markdown(string): - - # Blockquote - pattern = re.compile(r'\\begin{blockquote}(.*?)\\end{blockquote}', re.DOTALL) - string = pattern.sub(replace_blockquote, string) - - # Code - string = string.replace('\\begin{code}', '```') - string = string.replace('\\end{code}', '```') - string = re.sub(r"(.)```", r"\1\n```", string) - - string = fix_newlines(string) - return markdown.markdown(string, extensions=['fenced_code']) - - -def generate_basic_html(string): - string = convert_to_markdown(string) - string = f'
{string}
' - return string - - -def process_post(post, c): - t = post.split('\n') - number = t[0].split(' ')[1] - if len(t) > 1: - src = '\n'.join(t[1:]) - else: - src = '' - src = re.sub('>', '>', src) - src = re.sub('(>>[0-9]*)', '\\1', src) - src = re.sub('\n', '
\n', src) - src = f'
{src}\n' - src = f'Anonymous No.{number}\n{src}' - return src - - -def generate_4chan_html(f): - posts = [] - post = '' - c = -2 - for line in f.splitlines(): - line += "\n" - if line == '-----\n': - continue - elif line.startswith('--- '): - c += 1 - if post != '': - src = process_post(post, c) - posts.append(src) - post = line - else: - post += line - if post != '': - src = process_post(post, c) - posts.append(src) - - for i in range(len(posts)): - if i == 0: - posts[i] = f'
{posts[i]}
\n' - else: - posts[i] = f'
{posts[i]}
\n' - - output = '' - output += f'
' - for post in posts: - output += post - output += '
' - output = output.split('\n') - for i in range(len(output)): - output[i] = re.sub(r'^(>(.*?)(
|))', r'\1', output[i]) - output[i] = re.sub(r'^
(>(.*?)(
|))', r'
\1', output[i]) - output = '\n'.join(output) - - return output - - -def make_thumbnail(image): - image = image.resize((350, round(image.size[1] / image.size[0] * 350)), Image.Resampling.LANCZOS) - if image.size[1] > 470: - image = ImageOps.fit(image, (350, 470), Image.ANTIALIAS) - - return image - - -def get_image_cache(path): - cache_folder = Path("cache") - if not cache_folder.exists(): - cache_folder.mkdir() - - mtime = os.stat(path).st_mtime - if (path in image_cache and mtime != image_cache[path][0]) or (path not in image_cache): - img = make_thumbnail(Image.open(path)) - output_file = Path(f'cache/{path.name}_cache.png') - img.convert('RGB').save(output_file, format='PNG') - image_cache[path] = [mtime, output_file.as_posix()] - - return image_cache[path][1] - - -def generate_instruct_html(history): - output = f'
' - for i, _row in enumerate(history[::-1]): - row = [convert_to_markdown(entry) for entry in _row] - - output += f""" -
-
-
- {row[1]} -
-
-
- """ - - if len(row[0]) == 0: # don't display empty user messages - continue - - output += f""" -
-
-
- {row[0]} -
-
-
- """ - - output += "
" - - return output - - -def generate_cai_chat_html(history, name1, name2, reset_cache=False): - output = f'
' - - # We use ?name2 and ?time.time() to force the browser to reset caches - img_bot = f'' if Path("cache/pfp_character.png").exists() else '' - img_me = f'' if Path("cache/pfp_me.png").exists() else '' - - for i, _row in enumerate(history[::-1]): - row = [convert_to_markdown(entry) for entry in _row] - - output += f""" -
-
- {img_bot} -
-
-
- {name2} -
-
- {row[1]} -
-
-
- """ - - if len(row[0]) == 0: # don't display empty user messages - continue - - output += f""" -
-
- {img_me} -
-
-
- {name1} -
-
- {row[0]} -
-
-
- """ - - output += "
" - return output - - -def generate_chat_html(history, name1, name2, reset_cache=False): - output = f'
' - - for i, _row in enumerate(history[::-1]): - row = [convert_to_markdown(entry) for entry in _row] - - output += f""" -
-
-
- {row[1]} -
-
-
- """ - - if len(row[0]) == 0: # don't display empty user messages - continue - - output += f""" -
-
-
- {row[0]} -
-
-
- """ - - output += "
" - return output - - -def chat_html_wrapper(history, name1, name2, mode, reset_cache=False): - if mode == "cai-chat": - return generate_cai_chat_html(history, name1, name2, reset_cache) - elif mode == "chat": - return generate_chat_html(history, name1, name2) - elif mode == "instruct": - return generate_instruct_html(history) - else: - return '' diff --git a/spaces/aquaaaaaaaaaaaa/AI-minato_aqua/inference/slicer.py b/spaces/aquaaaaaaaaaaaa/AI-minato_aqua/inference/slicer.py deleted file mode 100644 index 35a888b906e7df8634cfdcec914f650c6cefd26a..0000000000000000000000000000000000000000 --- a/spaces/aquaaaaaaaaaaaa/AI-minato_aqua/inference/slicer.py +++ /dev/null @@ -1,158 +0,0 @@ -import time - -import numpy as np -import torch -import torchaudio -from scipy.ndimage import maximum_filter1d, uniform_filter1d - - -def timeit(func): - def run(*args, **kwargs): - t = time.time() - res = func(*args, **kwargs) - print('executing \'%s\' costed %.3fs' % (func.__name__, time.time() - t)) - return res - - return run - - -# @timeit -def _window_maximum(arr, win_sz): - return maximum_filter1d(arr, size=win_sz)[win_sz // 2: win_sz // 2 + arr.shape[0] - win_sz + 1] - - -# @timeit -def _window_rms(arr, win_sz): - filtered = np.sqrt(uniform_filter1d(np.power(arr, 2), win_sz) - np.power(uniform_filter1d(arr, win_sz), 2)) - return filtered[win_sz // 2: win_sz // 2 + arr.shape[0] - win_sz + 1] - - -def level2db(levels, eps=1e-12): - return 20 * np.log10(np.clip(levels, a_min=eps, a_max=1)) - - -def _apply_slice(audio, begin, end): - if len(audio.shape) > 1: - return audio[:, begin: end] - else: - return audio[begin: end] - - -class Slicer: - def __init__(self, - sr: int, - db_threshold: float = -40, - min_length: int = 5000, - win_l: int = 300, - win_s: int = 20, - max_silence_kept: int = 500): - self.db_threshold = db_threshold - self.min_samples = round(sr * min_length / 1000) - self.win_ln = round(sr * win_l / 1000) - self.win_sn = round(sr * win_s / 1000) - self.max_silence = round(sr * max_silence_kept / 1000) - if not self.min_samples >= self.win_ln >= self.win_sn: - raise ValueError('The following condition must be satisfied: min_length >= win_l >= win_s') - if not self.max_silence >= self.win_sn: - raise ValueError('The following condition must be satisfied: max_silence_kept >= win_s') - - @timeit - def slice(self, audio): - samples = audio - if samples.shape[0] <= self.min_samples: - return {"0": {"slice": False, "split_time": f"0,{len(audio)}"}} - # get absolute amplitudes - abs_amp = np.abs(samples - np.mean(samples)) - # calculate local maximum with large window - win_max_db = level2db(_window_maximum(abs_amp, win_sz=self.win_ln)) - sil_tags = [] - left = right = 0 - while right < win_max_db.shape[0]: - if win_max_db[right] < self.db_threshold: - right += 1 - elif left == right: - left += 1 - right += 1 - else: - if left == 0: - split_loc_l = left - else: - sil_left_n = min(self.max_silence, (right + self.win_ln - left) // 2) - rms_db_left = level2db(_window_rms(samples[left: left + sil_left_n], win_sz=self.win_sn)) - split_win_l = left + np.argmin(rms_db_left) - split_loc_l = split_win_l + np.argmin(abs_amp[split_win_l: split_win_l + self.win_sn]) - if len(sil_tags) != 0 and split_loc_l - sil_tags[-1][1] < self.min_samples and right < win_max_db.shape[ - 0] - 1: - right += 1 - left = right - continue - if right == win_max_db.shape[0] - 1: - split_loc_r = right + self.win_ln - else: - sil_right_n = min(self.max_silence, (right + self.win_ln - left) // 2) - rms_db_right = 
level2db(_window_rms(samples[right + self.win_ln - sil_right_n: right + self.win_ln], - win_sz=self.win_sn)) - split_win_r = right + self.win_ln - sil_right_n + np.argmin(rms_db_right) - split_loc_r = split_win_r + np.argmin(abs_amp[split_win_r: split_win_r + self.win_sn]) - sil_tags.append((split_loc_l, split_loc_r)) - right += 1 - left = right - if left != right: - sil_left_n = min(self.max_silence, (right + self.win_ln - left) // 2) - rms_db_left = level2db(_window_rms(samples[left: left + sil_left_n], win_sz=self.win_sn)) - split_win_l = left + np.argmin(rms_db_left) - split_loc_l = split_win_l + np.argmin(abs_amp[split_win_l: split_win_l + self.win_sn]) - sil_tags.append((split_loc_l, samples.shape[0])) - if len(sil_tags) == 0: - return {"0": {"slice": False, "split_time": f"0,{len(audio)}"}} - else: - chunks = [] - # 第一段静音并非从头开始,补上有声片段 - if sil_tags[0][0]: - chunks.append({"slice": False, "split_time": f"0,{sil_tags[0][0]}"}) - for i in range(0, len(sil_tags)): - # 标识有声片段(跳过第一段) - if i: - chunks.append({"slice": False, "split_time": f"{sil_tags[i - 1][1]},{sil_tags[i][0]}"}) - # 标识所有静音片段 - chunks.append({"slice": True, "split_time": f"{sil_tags[i][0]},{sil_tags[i][1]}"}) - # 最后一段静音并非结尾,补上结尾片段 - if sil_tags[-1][1] != len(audio): - chunks.append({"slice": False, "split_time": f"{sil_tags[-1][1]},{len(audio)}"}) - chunk_dict = {} - for i in range(len(chunks)): - chunk_dict[str(i)] = chunks[i] - return chunk_dict - - -def cut(audio_path, db_thresh=-30, min_len=5000, win_l=300, win_s=20, max_sil_kept=500): - audio, sr = torchaudio.load(audio_path) - if len(audio.shape) == 2 and audio.shape[1] >= 2: - audio = torch.mean(audio, dim=0).unsqueeze(0) - audio = audio.cpu().numpy()[0] - - slicer = Slicer( - sr=sr, - db_threshold=db_thresh, - min_length=min_len, - win_l=win_l, - win_s=win_s, - max_silence_kept=max_sil_kept - ) - chunks = slicer.slice(audio) - return chunks - - -def chunks2audio(audio_path, chunks): - chunks = dict(chunks) - audio, sr = torchaudio.load(audio_path) - if len(audio.shape) == 2 and audio.shape[1] >= 2: - audio = torch.mean(audio, dim=0).unsqueeze(0) - audio = audio.cpu().numpy()[0] - result = [] - for k, v in chunks.items(): - tag = v["split_time"].split(",") - result.append((v["slice"], audio[int(tag[0]):int(tag[1])])) - return result, sr - - diff --git a/spaces/arch-123/bingo/src/lib/hooks/use-at-bottom.tsx b/spaces/arch-123/bingo/src/lib/hooks/use-at-bottom.tsx deleted file mode 100644 index d37c8cf4162adcb0064e08ecec24eb731416b045..0000000000000000000000000000000000000000 --- a/spaces/arch-123/bingo/src/lib/hooks/use-at-bottom.tsx +++ /dev/null @@ -1,23 +0,0 @@ -import * as React from 'react' - -export function useAtBottom(offset = 0) { - const [isAtBottom, setIsAtBottom] = React.useState(false) - - React.useEffect(() => { - const handleScroll = () => { - setIsAtBottom( - window.innerHeight + window.scrollY >= - document.body.offsetHeight - offset - ) - } - - window.addEventListener('scroll', handleScroll, { passive: true }) - handleScroll() - - return () => { - window.removeEventListener('scroll', handleScroll) - } - }, [offset]) - - return isAtBottom -} diff --git a/spaces/arnaucas/wildfire-detection/app.py b/spaces/arnaucas/wildfire-detection/app.py deleted file mode 100644 index 25554639e109e06aeb14d471bebe91c7405feb51..0000000000000000000000000000000000000000 --- a/spaces/arnaucas/wildfire-detection/app.py +++ /dev/null @@ -1,31 +0,0 @@ -import gradio as gr -import os -from transformers import pipeline -from pathlib import Path -from PIL import Image -import 
numpy as np - -example_imgs = ["examples/img0.jpg", - "examples/img1.jpg", - "examples/img2.jpg", - "examples/img3.jpg"] - -pipe = pipeline("image-classification", model="arnaucas/wildfire-classifier") - -def inference(image): - image = Image.fromarray(np.uint8(image)).convert('RGB') - output = pipe(image) - result = {item['label']: item['score'] for item in output} - return result - -gr.Interface( - fn=inference, - title="Wildfire Detection", - description = "Predict whether an image contains wildfire or not", - inputs="image", - examples=example_imgs, - outputs=gr.Label(), - cache_examples=False, - theme='earneleh/paris', - article = "Author: Arnau Castellano", -).launch(debug=True, enable_queue=True) \ No newline at end of file diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Hash/SHA3_256.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Hash/SHA3_256.py deleted file mode 100644 index b4f11ee1f0f082001218c2474f7da773d1492fa3..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Hash/SHA3_256.py +++ /dev/null @@ -1,174 +0,0 @@ -# -*- coding: utf-8 -*- -# -# =================================================================== -# The contents of this file are dedicated to the public domain. To -# the extent that dedication to the public domain is not available, -# everyone is granted a worldwide, perpetual, royalty-free, -# non-exclusive license to exercise all rights associated with the -# contents of this file for any purpose whatsoever. -# No rights are reserved. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS -# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN -# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN -# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. -# =================================================================== - -from Crypto.Util.py3compat import bord - -from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, - VoidPointer, SmartPointer, - create_string_buffer, - get_raw_buffer, c_size_t, - c_uint8_ptr, c_ubyte) - -from Crypto.Hash.keccak import _raw_keccak_lib - -class SHA3_256_Hash(object): - """A SHA3-256 hash object. - Do not instantiate directly. - Use the :func:`new` function. - - :ivar oid: ASN.1 Object ID - :vartype oid: string - - :ivar digest_size: the size in bytes of the resulting hash - :vartype digest_size: integer - """ - - # The size of the resulting hash in bytes. - digest_size = 32 - - # ASN.1 Object ID - oid = "2.16.840.1.101.3.4.2.8" - - # Input block size for HMAC - block_size = 136 - - def __init__(self, data, update_after_digest): - self._update_after_digest = update_after_digest - self._digest_done = False - self._padding = 0x06 - - state = VoidPointer() - result = _raw_keccak_lib.keccak_init(state.address_of(), - c_size_t(self.digest_size * 2), - c_ubyte(24)) - if result: - raise ValueError("Error %d while instantiating SHA-3/256" - % result) - self._state = SmartPointer(state.get(), - _raw_keccak_lib.keccak_destroy) - if data: - self.update(data) - - def update(self, data): - """Continue hashing of a message by consuming the next chunk of data. - - Args: - data (byte string/byte array/memoryview): The next chunk of the message being hashed. 
- """ - - if self._digest_done and not self._update_after_digest: - raise TypeError("You can only call 'digest' or 'hexdigest' on this object") - - result = _raw_keccak_lib.keccak_absorb(self._state.get(), - c_uint8_ptr(data), - c_size_t(len(data)) - ) - if result: - raise ValueError("Error %d while updating SHA-3/256" - % result) - return self - - def digest(self): - """Return the **binary** (non-printable) digest of the message that has been hashed so far. - - :return: The hash digest, computed over the data processed so far. - Binary form. - :rtype: byte string - """ - - self._digest_done = True - - bfr = create_string_buffer(self.digest_size) - result = _raw_keccak_lib.keccak_digest(self._state.get(), - bfr, - c_size_t(self.digest_size), - c_ubyte(self._padding)) - if result: - raise ValueError("Error %d while instantiating SHA-3/256" - % result) - - self._digest_value = get_raw_buffer(bfr) - return self._digest_value - - def hexdigest(self): - """Return the **printable** digest of the message that has been hashed so far. - - :return: The hash digest, computed over the data processed so far. - Hexadecimal encoded. - :rtype: string - """ - - return "".join(["%02x" % bord(x) for x in self.digest()]) - - def copy(self): - """Return a copy ("clone") of the hash object. - - The copy will have the same internal state as the original hash - object. - This can be used to efficiently compute the digests of strings that - share a common initial substring. - - :return: A hash object of the same type - """ - - clone = self.new() - result = _raw_keccak_lib.keccak_copy(self._state.get(), - clone._state.get()) - if result: - raise ValueError("Error %d while copying SHA3-256" % result) - return clone - - def new(self, data=None): - """Create a fresh SHA3-256 hash object.""" - - return type(self)(data, self._update_after_digest) - - -def new(*args, **kwargs): - """Create a new hash object. - - Args: - data (byte string/byte array/memoryview): - The very first chunk of the message to hash. - It is equivalent to an early call to :meth:`update`. - update_after_digest (boolean): - Whether :meth:`digest` can be followed by another :meth:`update` - (default: ``False``). - - :Return: A :class:`SHA3_256_Hash` hash object - """ - - data = kwargs.pop("data", None) - update_after_digest = kwargs.pop("update_after_digest", False) - if len(args) == 1: - if data: - raise ValueError("Initial data for hash specified twice") - data = args[0] - - if kwargs: - raise TypeError("Unknown parameters: " + str(kwargs)) - - return SHA3_256_Hash(data, update_after_digest) - -# The size of the resulting hash in bytes. -digest_size = SHA3_256_Hash.digest_size - -# Input block size for HMAC -block_size = 136 diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/PcxImagePlugin.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/PcxImagePlugin.py deleted file mode 100644 index 841c18a220002305c6734a16ee40d4ad0facee87..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/PcxImagePlugin.py +++ /dev/null @@ -1,220 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# PCX file handling -# -# This format was originally used by ZSoft's popular PaintBrush -# program for the IBM PC. It is also supported by many MS-DOS and -# Windows applications, including the Windows PaintBrush program in -# Windows 3. 
-# -# history: -# 1995-09-01 fl Created -# 1996-05-20 fl Fixed RGB support -# 1997-01-03 fl Fixed 2-bit and 4-bit support -# 1999-02-03 fl Fixed 8-bit support (broken in 1.0b1) -# 1999-02-07 fl Added write support -# 2002-06-09 fl Made 2-bit and 4-bit support a bit more robust -# 2002-07-30 fl Seek from to current position, not beginning of file -# 2003-06-03 fl Extract DPI settings (info["dpi"]) -# -# Copyright (c) 1997-2003 by Secret Labs AB. -# Copyright (c) 1995-2003 by Fredrik Lundh. -# -# See the README file for information on usage and redistribution. -# - -import io -import logging - -from . import Image, ImageFile, ImagePalette -from ._binary import i16le as i16 -from ._binary import o8 -from ._binary import o16le as o16 - -logger = logging.getLogger(__name__) - - -def _accept(prefix): - return prefix[0] == 10 and prefix[1] in [0, 2, 3, 5] - - -## -# Image plugin for Paintbrush images. - - -class PcxImageFile(ImageFile.ImageFile): - - format = "PCX" - format_description = "Paintbrush" - - def _open(self): - - # header - s = self.fp.read(128) - if not _accept(s): - raise SyntaxError("not a PCX file") - - # image - bbox = i16(s, 4), i16(s, 6), i16(s, 8) + 1, i16(s, 10) + 1 - if bbox[2] <= bbox[0] or bbox[3] <= bbox[1]: - raise SyntaxError("bad PCX image size") - logger.debug("BBox: %s %s %s %s", *bbox) - - # format - version = s[1] - bits = s[3] - planes = s[65] - provided_stride = i16(s, 66) - logger.debug( - "PCX version %s, bits %s, planes %s, stride %s", - version, - bits, - planes, - provided_stride, - ) - - self.info["dpi"] = i16(s, 12), i16(s, 14) - - if bits == 1 and planes == 1: - mode = rawmode = "1" - - elif bits == 1 and planes in (2, 4): - mode = "P" - rawmode = "P;%dL" % planes - self.palette = ImagePalette.raw("RGB", s[16:64]) - - elif version == 5 and bits == 8 and planes == 1: - mode = rawmode = "L" - # FIXME: hey, this doesn't work with the incremental loader !!! - self.fp.seek(-769, io.SEEK_END) - s = self.fp.read(769) - if len(s) == 769 and s[0] == 12: - # check if the palette is linear greyscale - for i in range(256): - if s[i * 3 + 1 : i * 3 + 4] != o8(i) * 3: - mode = rawmode = "P" - break - if mode == "P": - self.palette = ImagePalette.raw("RGB", s[1:]) - self.fp.seek(128) - - elif version == 5 and bits == 8 and planes == 3: - mode = "RGB" - rawmode = "RGB;L" - - else: - raise OSError("unknown PCX mode") - - self.mode = mode - self._size = bbox[2] - bbox[0], bbox[3] - bbox[1] - - # Don't trust the passed in stride. - # Calculate the approximate position for ourselves. - # CVE-2020-35653 - stride = (self._size[0] * bits + 7) // 8 - - # While the specification states that this must be even, - # not all images follow this - if provided_stride != stride: - stride += stride % 2 - - bbox = (0, 0) + self.size - logger.debug("size: %sx%s", *self.size) - - self.tile = [("pcx", bbox, self.fp.tell(), (rawmode, planes * stride))] - - -# -------------------------------------------------------------------- -# save PCX files - - -SAVE = { - # mode: (version, bits, planes, raw mode) - "1": (2, 1, 1, "1"), - "L": (5, 8, 1, "L"), - "P": (5, 8, 1, "P"), - "RGB": (5, 8, 3, "RGB;L"), -} - - -def _save(im, fp, filename): - - try: - version, bits, planes, rawmode = SAVE[im.mode] - except KeyError as e: - raise ValueError(f"Cannot save {im.mode} images as PCX") from e - - # bytes per plane - stride = (im.size[0] * bits + 7) // 8 - # stride should be even - stride += stride % 2 - # Stride needs to be kept in sync with the PcxEncode.c version. 
- # Ideally it should be passed in in the state, but the bytes value - # gets overwritten. - - logger.debug( - "PcxImagePlugin._save: xwidth: %d, bits: %d, stride: %d", - im.size[0], - bits, - stride, - ) - - # under windows, we could determine the current screen size with - # "Image.core.display_mode()[1]", but I think that's overkill... - - screen = im.size - - dpi = 100, 100 - - # PCX header - fp.write( - o8(10) - + o8(version) - + o8(1) - + o8(bits) - + o16(0) - + o16(0) - + o16(im.size[0] - 1) - + o16(im.size[1] - 1) - + o16(dpi[0]) - + o16(dpi[1]) - + b"\0" * 24 - + b"\xFF" * 24 - + b"\0" - + o8(planes) - + o16(stride) - + o16(1) - + o16(screen[0]) - + o16(screen[1]) - + b"\0" * 54 - ) - - assert fp.tell() == 128 - - ImageFile._save(im, fp, [("pcx", (0, 0) + im.size, 0, (rawmode, bits * planes))]) - - if im.mode == "P": - # colour palette - fp.write(o8(12)) - palette = im.im.getpalette("RGB", "RGB") - palette += b"\x00" * (768 - len(palette)) - fp.write(palette) # 768 bytes - elif im.mode == "L": - # greyscale palette - fp.write(o8(12)) - for i in range(256): - fp.write(o8(i) * 3) - - -# -------------------------------------------------------------------- -# registry - - -Image.register_open(PcxImageFile.format, PcxImageFile, _accept) -Image.register_save(PcxImageFile.format, _save) - -Image.register_extension(PcxImageFile.format, ".pcx") - -Image.register_mime(PcxImageFile.format, "image/x-pcx") diff --git a/spaces/ashercn97/AsherTesting/extensions/openai/README.md b/spaces/ashercn97/AsherTesting/extensions/openai/README.md deleted file mode 100644 index 7bbc1e8311322cc61d175fd1993818e5321c14e2..0000000000000000000000000000000000000000 --- a/spaces/ashercn97/AsherTesting/extensions/openai/README.md +++ /dev/null @@ -1,231 +0,0 @@ -# An OpenedAI API (openai like) - -This extension creates an API that works kind of like openai (ie. api.openai.com). -It's incomplete so far but perhaps is functional enough for you. - -## Setup & installation - -Optional (for flask_cloudflared, embeddings): - -``` -pip3 install -r requirements.txt -``` - -It listens on tcp port 5001 by default. You can use the OPENEDAI_PORT environment variable to change this. - -Make sure you enable it in server launch parameters, it should include: - -``` ---extensions openai -``` - -You can also use the ``--listen`` argument to make the server available on the networ, and/or the ```--share``` argument to enable a public Cloudflare endpoint. - -To enable the basic image generation support (txt2img) set the environment variable SD_WEBUI_URL to point to your Stable Diffusion API ([Automatic1111](https://github.com/AUTOMATIC1111/stable-diffusion-webui)). - -For example: -``` -SD_WEBUI_URL=http://127.0.0.1:7861 -``` - -### Models - -This has been successfully tested with Alpaca, Koala, Vicuna, WizardLM and their variants, (ex. gpt4-x-alpaca, GPT4all-snoozy, stable-vicuna, wizard-vicuna, etc.) and many others. Models that have been trained for **Instruction Following** work best. If you test with other models please let me know how it goes. Less than satisfying results (so far) from: RWKV-4-Raven, llama, mpt-7b-instruct/chat. - -For best results across all API endpoints, a model like [vicuna-13b-v1.3-GPTQ](https://huggingface.co/TheBloke/vicuna-13b-v1.3-GPTQ), [stable-vicuna-13B-GPTQ](https://huggingface.co/TheBloke/stable-vicuna-13B-GPTQ) or [airoboros-13B-gpt4-1.3-GPTQ](https://huggingface.co/TheBloke/airoboros-13B-gpt4-1.3-GPTQ) is a good start. 
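Once the server is up with one of these models loaded, a quick way to confirm the endpoint is reachable (and to see which model is active) is to query `/v1/models` with the official Python client. The snippet below is a minimal sketch, assuming the extension is listening on the default port 5001 on localhost and that the `openai` package (v0.25+) is installed; the key value is an arbitrary dummy.

```python
import os

# Point the client at the local OpenedAI endpoint before importing openai.
os.environ["OPENAI_API_KEY"] = "sk-111111111111111111111111111111111111111111111111"
os.environ["OPENAI_API_BASE"] = "http://127.0.0.1:5001/v1"

import openai

# /v1/models lists the currently loaded model first, plus some compatibility entries.
models = openai.Model.list()
print([m["id"] for m in models["data"]])
```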
- -For good results with the [Completions](https://platform.openai.com/docs/api-reference/completions) API endpoint, in addition to the above models, you can also try using a base model like [falcon-7b](https://huggingface.co/tiiuae/falcon-7b) or Llama. - -For good results with the [ChatCompletions](https://platform.openai.com/docs/api-reference/chat) or [Edits](https://platform.openai.com/docs/api-reference/edits) API endpoints you can use almost any model trained for instruction following - within the limits of the model. Be sure that the proper instruction template is detected and loaded or the results will not be good. - -For the proper instruction format to be detected you need to have a matching model entry in your ```models/config.yaml``` file. Be sure to keep this file up to date. -A matching instruction template file in the characters/instruction-following/ folder will loaded and applied to format messages correctly for the model - this is critical for good results. - -For example, the Wizard-Vicuna family of models are trained with the Vicuna 1.1 format. In the models/config.yaml file there is this matching entry: - -``` -.*wizard.*vicuna: - mode: 'instruct' - instruction_template: 'Vicuna-v1.1' -``` - -This refers to ```characters/instruction-following/Vicuna-v1.1.yaml```, which looks like this: - -``` -user: "USER:" -bot: "ASSISTANT:" -turn_template: "<|user|> <|user-message|>\n<|bot|> <|bot-message|>\n" -context: "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.\n\n" -``` - -For most common models this is already setup, but if you are using a new or uncommon model you may need add a matching entry to the models/config.yaml and possibly create your own instruction-following template and for best results. - -If you see this in your logs, it probably means that the correct format could not be loaded: -``` -Warning: Loaded default instruction-following template for model. -``` - -### Embeddings (alpha) - -Embeddings requires ```sentence-transformers``` installed, but chat and completions will function without it loaded. The embeddings endpoint is currently using the HuggingFace model: ```sentence-transformers/all-mpnet-base-v2``` for embeddings. This produces 768 dimensional embeddings (the same as the text-davinci-002 embeddings), which is different from OpenAI's current default ```text-embedding-ada-002``` model which produces 1536 dimensional embeddings. The model is small-ish and fast-ish. This model and embedding size may change in the future. - -| model name | dimensions | input max tokens | speed | size | Avg. performance | -| --- | --- | --- | --- | --- | --- | -| text-embedding-ada-002 | 1536 | 8192| - | - | - | -| text-davinci-002 | 768 | 2046 | - | - | - | -| all-mpnet-base-v2 | 768 | 384 | 2800 | 420M | 63.3 | -| all-MiniLM-L6-v2 | 384 | 256 | 14200 | 80M | 58.8 | - -In short, the all-MiniLM-L6-v2 model is 5x faster, 5x smaller ram, 2x smaller storage, and still offers good quality. Stats from (https://www.sbert.net/docs/pretrained_models.html). To change the model from the default you can set the environment variable OPENEDAI_EMBEDDING_MODEL, ex. "OPENEDAI_EMBEDDING_MODEL=all-MiniLM-L6-v2". - -Warning: You cannot mix embeddings from different models even if they have the same dimensions. They are not comparable. - -### Client Application Setup - - -Almost everything you use it with will require you to set a dummy OpenAI API key environment variable. 
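Because the embedding dimensionality differs from OpenAI's hosted default, it is also worth sanity-checking the embeddings endpoint before wiring it into an application. The snippet below is a minimal sketch, assuming the `OPENAI_API_KEY` and `OPENAI_API_BASE` variables are already set as described in the next paragraphs and that the optional `sentence-transformers` dependency is installed on the server; the input string is just sample text.

```python
import openai  # assumes OPENAI_API_KEY / OPENAI_API_BASE are already set (see below)

response = openai.Embedding.create(
    # The model name is passed through; the extension serves its configured
    # embedding model (all-mpnet-base-v2 by default).
    model="text-embedding-ada-002",
    input=["Our mission is to ensure that artificial general intelligence benefits all of humanity."],
)
embedding = response["data"][0]["embedding"]
print(len(embedding))  # 768 with the default backend, not 1536
```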
-
-With the [official Python openai client](https://github.com/openai/openai-python), you can set the OPENAI_API_BASE environment variable before you import the openai module, like so:
-
-```
-OPENAI_API_KEY=sk-111111111111111111111111111111111111111111111111
-OPENAI_API_BASE=http://127.0.0.1:5001/v1
-```
-
-If needed, replace 127.0.0.1 with the IP/port of your server.
-
-If using .env files to save the OPENAI_API_BASE and OPENAI_API_KEY variables, you can ensure compatibility by loading the .env file before loading the openai module, like so in Python:
-
-```
-from dotenv import load_dotenv
-load_dotenv()
-import openai
-```
-
-With the [official Node.js openai client](https://github.com/openai/openai-node) it is slightly more complex because the environment variables are not used by default, so small source code changes may be required to use the environment variables, like so:
-
-```
-const openai = OpenAI(Configuration({
-  apiKey: process.env.OPENAI_API_KEY,
-  basePath: process.env.OPENAI_API_BASE,
-}));
-```
-
-For apps made with the [chatgpt-api Node.js client library](https://github.com/transitive-bullshit/chatgpt-api):
-
-```
-const api = new ChatGPTAPI({
-  apiKey: process.env.OPENAI_API_KEY,
-  apiBaseUrl: process.env.OPENAI_API_BASE,
-})
-```
-
-## API Documentation & Examples
-
-The OpenAI API is well documented; you can view the documentation here: https://platform.openai.com/docs/api-reference
-
-Examples of how to use the Completions API in Python can be found here: https://platform.openai.com/examples
-Not all of them will work with all models, unfortunately. See the notes on Models for how to get the best results.
-
-Here is a simple Python example of how you can use the Edit endpoint as a translator.
-
-```python
-import openai
-response = openai.Edit.create(
-  model="x",
-  instruction="Translate this into French",
-  input="Our mission is to ensure that artificial general intelligence benefits all of humanity.",
-)
-print(response['choices'][0]['text'])
-# Sample Output:
-# Notre mission est de garantir que l'intelligence artificielle généralisée profite à tous les membres de l'humanité.
-```
-
-
-
-## Compatibility & not so compatibility
-
-| API endpoint | tested with | notes |
-| --- | --- | --- |
-| /v1/models | openai.Model.list() | Lists models, Currently loaded model first, plus some compatibility options |
-| /v1/models/{id} | openai.Model.get() | returns whatever you ask for, model does nothing yet anyways |
-| /v1/text_completion | openai.Completion.create() | the most tested, only supports single string input so far, variable quality based on the model |
-| /v1/chat/completions | openai.ChatCompletion.create() | Quality depends a lot on the model |
-| /v1/edits | openai.Edit.create() | Works the best of all, perfect for instruction following models |
-| /v1/images/generations | openai.Image.create() | Bare bones, no model configuration, response_format='b64_json' only. |
-| /v1/embeddings | openai.Embedding.create() | Using Sentence Transformer, dimensions are different and may never be directly comparable to openai embeddings. |
-| /v1/moderations | openai.Moderation.create() | does nothing. successfully.
| -| /v1/completions | openai api completions.create | Legacy endpoint (v0.25) | -| /v1/engines/*/embeddings | python-openai v0.25 | Legacy endpoint | -| /v1/engines/*/generate | openai engines.generate | Legacy endpoint | -| /v1/engines | openai engines.list | Legacy Lists models | -| /v1/engines/{model_name} | openai engines.get -i {model_name} | You can use this legacy endpoint to load models via the api | -| /v1/images/edits | openai.Image.create_edit() | not yet supported | -| /v1/images/variations | openai.Image.create_variation() | not yet supported | -| /v1/audio/\* | openai.Audio.\* | not yet supported | -| /v1/files\* | openai.Files.\* | not yet supported | -| /v1/fine-tunes\* | openai.FineTune.\* | not yet supported | -| /v1/search | openai.search, engines.search | not yet supported | - -The model name setting is ignored in completions, but you may need to adjust the maximum token length to fit the model (ie. set to <2048 tokens instead of 4096, 8k, etc). To mitigate some of this, the max_tokens value is halved until it is less than truncation_length for the model (typically 2k). - -Streaming, temperature, top_p, max_tokens, stop, should all work as expected, but not all parameters are mapped correctly. - -Some hacky mappings: - -| OpenAI | text-generation-webui | note | -| --- | --- | --- | -| frequency_penalty | encoder_repetition_penalty | this seems to operate with a different scale and defaults, I tried to scale it based on range & defaults, but the results are terrible. hardcoded to 1.18 until there is a better way | -| presence_penalty | repetition_penalty | same issues as frequency_penalty, hardcoded to 1.0 | -| best_of | top_k | default is 1 | -| stop | custom_stopping_strings | this is also stuffed with ['\n###', "\n{user prompt}", "{user prompt}" ] for good measure. | -| n | 1 | variations are not supported yet. | -| 1 | num_beams | hardcoded to 1 | -| 1.0 | typical_p | hardcoded to 1.0 | -| max_tokens | max_new_tokens | For Text Completions max_tokens is set smaller than the truncation_length minus the prompt length. This can cause no input to be generated if the prompt is too large. For ChatCompletions, the older chat messages may be dropped to fit the max_new_tokens requested | -| logprobs | - | not supported yet | -| logit_bias | - | not supported yet | -| messages.name | - | not supported yet | -| user | - | not supported yet | -| functions/function_call | - | function calls are not supported yet | - -defaults are mostly from openai, so are different. I use the openai defaults where I can and try to scale them to the webui defaults with the same intent. - -### Applications - -Almost everything needs the OPENAI_API_KEY environment variable set, for example: -``` -OPENAI_API_KEY=sk-111111111111111111111111111111111111111111111111 -``` -Some apps are picky about key format, but 'dummy' or 'sk-dummy' also work in most cases. -Most application will work if you also set: -``` -OPENAI_API_BASE=http://127.0.0.1:5001/v1 -``` -but there are some exceptions. - -| Compatibility | Application/Library | url | notes / setting | -| --- | --- | --- | --- | -| ✅❌ | openai-python (v0.25+) | https://github.com/openai/openai-python | only the endpoints from above are working. OPENAI_API_BASE=http://127.0.0.1:5001/v1 | -| ✅❌ | openai-node | https://github.com/openai/openai-node | only the endpoints from above are working. 
environment variables don't work by default, but can be configured (see above) | -| ✅❌ | chatgpt-api | https://github.com/transitive-bullshit/chatgpt-api | only the endpoints from above are working. environment variables don't work by default, but can be configured (see above) | -| ✅ | anse | https://github.com/anse-app/anse | API Key & URL configurable in UI | -| ✅ | shell_gpt | https://github.com/TheR1D/shell_gpt | OPENAI_API_HOST=http://127.0.0.1:5001 | -| ✅ | gpt-shell | https://github.com/jla/gpt-shell | OPENAI_API_BASE=http://127.0.0.1:5001/v1 | -| ✅ | gpt-discord-bot | https://github.com/openai/gpt-discord-bot | OPENAI_API_BASE=http://127.0.0.1:5001/v1 | -| ✅ | OpenAI for Notepad++ | https://github.com/Krazal/nppopenai | api_url=http://127.0.0.1:5001 in the config file, or environment variables | -| ✅ | vscode-openai | https://marketplace.visualstudio.com/items?itemName=AndrewButson.vscode-openai | OPENAI_API_BASE=http://127.0.0.1:5001/v1 | -| ✅❌ | langchain | https://github.com/hwchase17/langchain | OPENAI_API_BASE=http://127.0.0.1:5001/v1 even with a good 30B-4bit model the result is poor so far. It assumes zero shot python/json coding. Some model tailored prompt formatting improves results greatly. | -| ✅❌ | Auto-GPT | https://github.com/Significant-Gravitas/Auto-GPT | OPENAI_API_BASE=http://127.0.0.1:5001/v1 Same issues as langchain. Also assumes a 4k+ context | -| ✅❌ | babyagi | https://github.com/yoheinakajima/babyagi | OPENAI_API_BASE=http://127.0.0.1:5001/v1 | -| ❌ | guidance | https://github.com/microsoft/guidance | logit_bias and logprobs not yet supported | - -## Future plans -* model changing, esp. something for swapping loras or embedding models -* consider switching to FastAPI + starlette for SSE (openai SSE seems non-standard) - -## Bugs? Feedback? Comments? Pull requests? - -To enable debugging and get copious output you can set the OPENEDAI_DEBUG=1 environment variable. - -Are all appreciated, please @matatonic and I'll try to get back to you as soon as possible. diff --git a/spaces/ashutosh1919/quantum-perceptron/quantum_perceptron/utils/plot_utils.py b/spaces/ashutosh1919/quantum-perceptron/quantum_perceptron/utils/plot_utils.py deleted file mode 100644 index 79260ee986f860d85ee2d017eb241f18d46296f4..0000000000000000000000000000000000000000 --- a/spaces/ashutosh1919/quantum-perceptron/quantum_perceptron/utils/plot_utils.py +++ /dev/null @@ -1,41 +0,0 @@ -import numpy as np -import matplotlib.pyplot as plt -from quantum_perceptron.utils.data_utils import ( - get_bin_int, - assert_bits, - assert_negative -) - - -def get_img_from_data(data: int, num_qubits: int) -> np.ndarray: - """ - Get n x n matrix representing the image of the data where n is - num_qubits. - - Args: - data: `int` representing data value - (correspponding to input or weight vector) - num_qubits: `int` representing number of qubits. - - Returns: Image in form of `np.ndarray`. - """ - assert_negative(data) - assert_bits(data, num_qubits) - bin_str = get_bin_int(data, num_qubits) - img = np.zeros((np.power(2, num_qubits))) - - for i, bit in enumerate(bin_str): - if bit == '0': - img[i] = 255 - - return img.reshape((num_qubits, num_qubits)) - - -def plot_img_from_data(data: int, num_qubits: int): - """ - Plot image from data. 
- """ - img = get_img_from_data(data, num_qubits) - ax = plt.imshow(img, cmap='gray') - ax.axes.xaxis.set_visible(False) - ax.axes.yaxis.set_visible(False) diff --git a/spaces/auto-academic/auto-draft/wrapper.py b/spaces/auto-academic/auto-draft/wrapper.py deleted file mode 100644 index 8a38c82f42c53d90084c5acc5f10fcc0a2f09918..0000000000000000000000000000000000000000 --- a/spaces/auto-academic/auto-draft/wrapper.py +++ /dev/null @@ -1,57 +0,0 @@ -""" -This script is used to wrap all generation methods together. - -todo: - A worker keeps running on the server. Monitor the Amazon SQS. Once receive a new message, do the following: - Download the corresponding configuration files on S3. - Change Task status from Pending to Running. - Call `generator_wrapper` and wait for the outputs. - If `generator_wrapper` returns results: - evaluate the results; compile it; upload results to S3 ... Change Task status from Running to Completed. - If anything goes wrong, raise Error. - If `generator_wrapper` returns nothing or Timeout, or raise any error: - Change Task status from Running to Failed. -""" -from auto_generators import generate_draft -from utils.file_operations import make_archive -import yaml -import uuid - - -def remove_special_characters(s): - return ''.join(c for c in s if c.isalnum() or c.isspace() or c == ',') - - -def generator_wrapper(config): - if not isinstance(config, dict): - with open(config, "r") as file: - config = yaml.safe_load(file) - title = config["paper"]["title"] - generator = config["generator"] - if generator == "auto_draft": - folder = generate_draft(title, config["paper"]["description"], - tldr=config["references"]["tldr"], - max_kw_refs=config["references"]["max_kw_refs"], - refs=config["references"]["refs"], - max_tokens_ref=config["references"]["max_tokens_ref"], - knowledge_database=config["domain_knowledge"]["knowledge_database"], - max_tokens_kd=config["domain_knowledge"]["max_tokens_kd"], - query_counts=config["domain_knowledge"]["query_counts"], - sections=config["output"]["selected_sections"], - model=config["output"]["model"], - template=config["output"]["template"], - prompts_mode=config["output"]["prompts_mode"], - ) - else: - raise NotImplementedError(f"The generator {generator} has not been supported yet.") - # todo: post processing: translate to Chinese, compile PDF ... 
- filename = remove_special_characters(title).replace(" ", "_") + uuid.uuid1().hex + ".zip" - return make_archive(folder, filename) - - -if __name__ == "__main__": - pass - # with open("configurations/default.yaml", 'r') as file: - # config = yaml.safe_load(file) - # print(config) - # generator_wrapper(config) diff --git a/spaces/avans06/whisper-webui-translate/src/whisper/abstractWhisperContainer.py b/spaces/avans06/whisper-webui-translate/src/whisper/abstractWhisperContainer.py deleted file mode 100644 index 98cae0679185e2142f3cd3c7bdf35ab67640d5b2..0000000000000000000000000000000000000000 --- a/spaces/avans06/whisper-webui-translate/src/whisper/abstractWhisperContainer.py +++ /dev/null @@ -1,115 +0,0 @@ -import abc -from typing import Any, Callable, List - -from src.config import ModelConfig, VadInitialPromptMode - -from src.hooks.progressListener import ProgressListener -from src.modelCache import GLOBAL_MODEL_CACHE, ModelCache -from src.prompts.abstractPromptStrategy import AbstractPromptStrategy - -class AbstractWhisperCallback: - def __init__(self): - pass - - @abc.abstractmethod - def invoke(self, audio, segment_index: int, prompt: str, detected_language: str, progress_listener: ProgressListener = None): - """ - Peform the transcription of the given audio file or data. - - Parameters - ---------- - audio: Union[str, np.ndarray, torch.Tensor] - The audio file to transcribe, or the audio data as a numpy array or torch tensor. - segment_index: int - The target language of the transcription. If not specified, the language will be inferred from the audio content. - task: str - The task - either translate or transcribe. - progress_listener: ProgressListener - A callback to receive progress updates. - """ - raise NotImplementedError() - -class LambdaWhisperCallback(AbstractWhisperCallback): - def __init__(self, callback_lambda: Callable[[Any, int, str, str, ProgressListener], None]): - super().__init__() - self.callback_lambda = callback_lambda - - def invoke(self, audio, segment_index: int, prompt: str, detected_language: str, progress_listener: ProgressListener = None): - return self.callback_lambda(audio, segment_index, prompt, detected_language, progress_listener) - -class AbstractWhisperContainer: - def __init__(self, model_name: str, device: str = None, compute_type: str = "float16", - download_root: str = None, - cache: ModelCache = None, models: List[ModelConfig] = []): - self.model_name = model_name - self.device = device - self.compute_type = compute_type - self.download_root = download_root - self.cache = cache - - # Will be created on demand - self.model = None - - # List of known models - self.models = models - - def get_model(self): - if self.model is None: - - if (self.cache is None): - self.model = self._create_model() - else: - model_key = "WhisperContainer." + self.model_name + ":" + (self.device if self.device else '') - self.model = self.cache.get(model_key, self._create_model) - return self.model - - @abc.abstractmethod - def _create_model(self): - raise NotImplementedError() - - def ensure_downloaded(self): - pass - - @abc.abstractmethod - def create_callback(self, language: str = None, task: str = None, - prompt_strategy: AbstractPromptStrategy = None, - **decodeOptions: dict) -> AbstractWhisperCallback: - """ - Create a WhisperCallback object that can be used to transcript audio files. - - Parameters - ---------- - language: str - The target language of the transcription. If not specified, the language will be inferred from the audio content. 
- task: str - The task - either translate or transcribe. - prompt_strategy: AbstractPromptStrategy - The prompt strategy to use for the transcription. - decodeOptions: dict - Additional options to pass to the decoder. Must be pickleable. - - Returns - ------- - A WhisperCallback object. - """ - raise NotImplementedError() - - # This is required for multiprocessing - def __getstate__(self): - return { - "model_name": self.model_name, - "device": self.device, - "download_root": self.download_root, - "models": self.models, - "compute_type": self.compute_type - } - - def __setstate__(self, state): - self.model_name = state["model_name"] - self.device = state["device"] - self.download_root = state["download_root"] - self.models = state["models"] - self.compute_type = state["compute_type"] - self.model = None - # Depickled objects must use the global cache - self.cache = GLOBAL_MODEL_CACHE \ No newline at end of file diff --git a/spaces/awacke1/ASR-SOTA-NvidiaSTTMozilla/README.md b/spaces/awacke1/ASR-SOTA-NvidiaSTTMozilla/README.md deleted file mode 100644 index 61fd6ce1c10809ce663225fd4c494552fc0eaca9..0000000000000000000000000000000000000000 --- a/spaces/awacke1/ASR-SOTA-NvidiaSTTMozilla/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 🗣️Live ASR Speech Recognition Gradio🧠💾 -emoji: 🗣️Live🧠 -colorFrom: purple -colorTo: red -sdk: gradio -sdk_version: 3.5 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/awacke1/Streamlit-Google-Maps-Minnesota/README.md b/spaces/awacke1/Streamlit-Google-Maps-Minnesota/README.md deleted file mode 100644 index dd9c01cf5d8c40469365b3973cc7d36108917a09..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Streamlit-Google-Maps-Minnesota/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 🏥 Minnesota Medical Centers 🌳 -emoji: 🏥🌳 -colorFrom: green -colorTo: indigo -sdk: streamlit -sdk_version: 1.28.0 -app_file: app.py -pinned: true -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/shaders/FilmShader.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/shaders/FilmShader.js deleted file mode 100644 index 3028fbc330c9971ee9903afae8c4ba4e2520cdc8..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/shaders/FilmShader.js +++ /dev/null @@ -1,104 +0,0 @@ -/** - * @author alteredq / http://alteredqualia.com/ - * - * Film grain & scanlines shader - * - * - ported from HLSL to WebGL / GLSL - * http://www.truevision3d.com/forums/showcase/staticnoise_colorblackwhite_scanline_shaders-t18698.0.html - * - * Screen Space Static Postprocessor - * - * Produces an analogue noise overlay similar to a film grain / TV static - * - * Original implementation and noise algorithm - * Pat 'Hawthorne' Shearon - * - * Optimized scanlines + noise version with intensity scaling - * Georg 'Leviathan' Steinrohder - * - * This version is provided under a Creative Commons Attribution 3.0 License - * http://creativecommons.org/licenses/by/3.0/ - */ - -THREE.FilmShader = { - - uniforms: { - - "tDiffuse": { value: null }, - "time": { value: 0.0 }, - "nIntensity": { value: 0.5 }, - "sIntensity": { value: 0.05 }, - "sCount": { value: 4096 }, - "grayscale": { value: 1 } - - }, - - vertexShader: [ - - "varying vec2 vUv;", - - "void 
main() {", - - "vUv = uv;", - "gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );", - - "}" - - ].join( "\n" ), - - fragmentShader: [ - - "#include ", - - // control parameter - "uniform float time;", - - "uniform bool grayscale;", - - // noise effect intensity value (0 = no effect, 1 = full effect) - "uniform float nIntensity;", - - // scanlines effect intensity value (0 = no effect, 1 = full effect) - "uniform float sIntensity;", - - // scanlines effect count value (0 = no effect, 4096 = full effect) - "uniform float sCount;", - - "uniform sampler2D tDiffuse;", - - "varying vec2 vUv;", - - "void main() {", - - // sample the source - "vec4 cTextureScreen = texture2D( tDiffuse, vUv );", - - // make some noise - "float dx = rand( vUv + time );", - - // add noise - "vec3 cResult = cTextureScreen.rgb + cTextureScreen.rgb * clamp( 0.1 + dx, 0.0, 1.0 );", - - // get us a sine and cosine - "vec2 sc = vec2( sin( vUv.y * sCount ), cos( vUv.y * sCount ) );", - - // add scanlines - "cResult += cTextureScreen.rgb * vec3( sc.x, sc.y, sc.x ) * sIntensity;", - - // interpolate between source and result by intensity - "cResult = cTextureScreen.rgb + clamp( nIntensity, 0.0,1.0 ) * ( cResult - cTextureScreen.rgb );", - - // convert to grayscale if desired - "if( grayscale ) {", - - "cResult = vec3( cResult.r * 0.3 + cResult.g * 0.59 + cResult.b * 0.11 );", - - "}", - - "gl_FragColor = vec4( cResult, cTextureScreen.a );", - - "}" - - ].join( "\n" ) - -}; diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/shaders/GammaCorrectionShader.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/shaders/GammaCorrectionShader.js deleted file mode 100644 index 4c2a373fba16b5702ce657f3130d634dcdeaafb5..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/shaders/GammaCorrectionShader.js +++ /dev/null @@ -1,45 +0,0 @@ -/** - * @author WestLangley / http://github.com/WestLangley - * - * Gamma Correction Shader - * http://en.wikipedia.org/wiki/gamma_correction - */ - -THREE.GammaCorrectionShader = { - - uniforms: { - - "tDiffuse": { value: null } - - }, - - vertexShader: [ - - "varying vec2 vUv;", - - "void main() {", - - "vUv = uv;", - "gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );", - - "}" - - ].join( "\n" ), - - fragmentShader: [ - - "uniform sampler2D tDiffuse;", - - "varying vec2 vUv;", - - "void main() {", - - "vec4 tex = texture2D( tDiffuse, vec2( vUv.x, vUv.y ) );", - - "gl_FragColor = LinearToGamma( tex, float( GAMMA_FACTOR ) );", - - "}" - - ].join( "\n" ) - -}; diff --git a/spaces/banana-projects/web3d/node_modules/three/src/extras/ShapeUtils.d.ts b/spaces/banana-projects/web3d/node_modules/three/src/extras/ShapeUtils.d.ts deleted file mode 100644 index ded77c619d2248b2c7e3a9650a2e9882c188e63e..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/extras/ShapeUtils.d.ts +++ /dev/null @@ -1,11 +0,0 @@ -interface Vec2 { - x: number; - y: number; -} - -export namespace ShapeUtils { - export function area(contour: Vec2[]): number; - export function triangulate(contour: Vec2[], indices: boolean): number[]; - export function triangulateShape(contour: Vec2[], holes: Vec2[]): number[][]; - export function isClockWise(pts: Vec2[]): boolean; -} diff --git a/spaces/bhn4477/Car_orientation/README.md b/spaces/bhn4477/Car_orientation/README.md deleted file mode 100644 index 
5da8af6c0cafb10692069d7c25fa8c9cafbf3bfa..0000000000000000000000000000000000000000 --- a/spaces/bhn4477/Car_orientation/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Car Orientation -emoji: 💩 -colorFrom: green -colorTo: purple -sdk: streamlit -sdk_version: 1.15.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/bioriAsaeru/text-to-voice/Crack BEST Agisoft PhotoScan Professional 1.4.3 Build 6529.md b/spaces/bioriAsaeru/text-to-voice/Crack BEST Agisoft PhotoScan Professional 1.4.3 Build 6529.md deleted file mode 100644 index 345a5cb78ca44ab002cea8d22a56a385a9c62fdf..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Crack BEST Agisoft PhotoScan Professional 1.4.3 Build 6529.md +++ /dev/null @@ -1,7 +0,0 @@ - -

Agisoft Metashape Professional 1.8.4 Crack is a program that helps people build 3D images from as few as two photographs, provided the photos share the overlap that is crucial for reconstruction. Agisoft Metashape can handle thousands of photos, and all processing is carried out locally, so there is no need to transfer data outside your organisation. During image alignment the system searches for common points and matches them; during geometry building, which relies on the estimated camera positions, it represents the scene as a 3D polygon mesh. Once the geometry of an object has been created, you can easily apply textures to it and use the results for orthophoto projects.

-

Furthermore, it is an easy-to-understand program for users at every level: professionals as well as beginners can use the tool efficiently to produce the 3D content they need. Agisoft PhotoScan Cracked with License Code 2022 comes with everything required for professional-grade image processing. The resulting material can be used across a wide range of markets, from product visualisation to civil and design engineering, since the software can project a model from above and build a matrix of elevations. Agisoft PhotoScan Pro 2020 Crack supports all the common image file formats, including JPG, TIF, PNG, BMP, EXR, PPM, MPO, and more.

-

CRACK Agisoft PhotoScan Professional 1.4.3 Build 6529


DOWNLOAD ►►► https://urloso.com/2uyPK4



-

Agisoft PhotoScan Crack Build 14575 allows you to build several types of 3D geometry from your images. This latest version of PhotoScan comes with enhanced features, and with the new and improved tools you can create 3D models more easily. The application lets you choose the building material, for example wood or concrete, and then adjust all of the model's properties and parameter sets. In addition, Agisoft PhotoScan 13 Crack with License Code 2015 lets you attach key information to a 3D model and distinguish the object from the background. It also lets you generate a unique 3D mesh from automatically generated frames, configure the viewing angle when working with the scene, and create a model of the camera itself, selecting defaults such as exposure, mirror and focal length. Finally, you can save your projects in standard formats.

-
-
\ No newline at end of file diff --git a/spaces/bipin/multipurpose-ai/README.md b/spaces/bipin/multipurpose-ai/README.md deleted file mode 100644 index 81990d3f92f72c3e1b324db725981cebd1801a0a..0000000000000000000000000000000000000000 --- a/spaces/bipin/multipurpose-ai/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Multipurpose Ai -emoji: 😻 -colorFrom: indigo -colorTo: pink -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/bluuuuuuuu/test02/Dockerfile b/spaces/bluuuuuuuu/test02/Dockerfile deleted file mode 100644 index 6c01c09373883afcb4ea34ae2d316cd596e1737b..0000000000000000000000000000000000000000 --- a/spaces/bluuuuuuuu/test02/Dockerfile +++ /dev/null @@ -1,21 +0,0 @@ -FROM node:18-bullseye-slim - -RUN apt-get update && \ - -apt-get install -y git - -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app - -WORKDIR /app - -RUN npm install - -COPY Dockerfile greeting.md* .env* ./ - -RUN npm run build - -EXPOSE 7860 - -ENV NODE_ENV=production - -CMD [ "npm", "start" ] \ No newline at end of file diff --git a/spaces/boomsss/gamedayspx/model_day.py b/spaces/boomsss/gamedayspx/model_day.py deleted file mode 100644 index 26a36127ab48f55285a91ca49f655e98e16eb960..0000000000000000000000000000000000000000 --- a/spaces/boomsss/gamedayspx/model_day.py +++ /dev/null @@ -1,434 +0,0 @@ -import streamlit as st -import pandas as pd -import pandas_datareader as pdr -import numpy as np -import yfinance as yf -import json -import requests -from bs4 import BeautifulSoup -from typing import List -import xgboost as xgb -from tqdm import tqdm -from sklearn import linear_model -import joblib -import os -from sklearn.metrics import roc_auc_score, precision_score, recall_score -import datetime -from pandas.tseries.offsets import BDay -import lightgbm as lgb - -def walk_forward_validation(df, target_column, num_training_rows, num_periods): - - # Create an XGBRegressor model - # model = xgb.XGBRegressor(n_estimators=100, objective='reg:squarederror', random_state = 42) - model = linear_model.LinearRegression() - - overall_results = [] - # Iterate over the rows in the DataFrame, one step at a time - for i in tqdm(range(num_training_rows, df.shape[0] - num_periods + 1),desc='LR Model'): - # Split the data into training and test sets - X_train = df.drop(target_column, axis=1).iloc[:i] - y_train = df[target_column].iloc[:i] - X_test = df.drop(target_column, axis=1).iloc[i:i+num_periods] - y_test = df[target_column].iloc[i:i+num_periods] - - # Fit the model to the training data - model.fit(X_train, y_train) - - # Make a prediction on the test data - predictions = model.predict(X_test) - - # Create a DataFrame to store the true and predicted values - result_df = pd.DataFrame({'True': y_test, 
'Predicted': predictions}, index=y_test.index) - - overall_results.append(result_df) - - df_results = pd.concat(overall_results) - # model.save_model('model_lr.bin') - # Return the true and predicted values, and fitted model - return df_results, model - -model_cols = [ - 'BigNewsDay', - 'Quarter', - 'Perf5Day', - 'Perf5Day_n1', - 'DaysGreen', - 'DaysRed', - 'CurrentGap', - 'RangePct', - 'RangePct_n1', - 'RangePct_n2', - 'OHLC4_VIX', - 'OHLC4_VIX_n1', - 'OHLC4_VIX_n2', - 'VIXOpen', - 'VVIXOpen', - 'OpenL1', - 'OpenL2', - 'OpenH1', - 'OpenH2', - 'L1TouchPct', - 'L2TouchPct', - 'H1TouchPct', - 'H2TouchPct', - 'L1BreakPct', - 'L2BreakPct', - 'H1BreakPct', - 'H2BreakPct', - 'H1BreakTouchPct', - 'H2BreakTouchPct', - 'L1BreakTouchPct', - 'L2BreakTouchPct' -] - -def walk_forward_validation_seq(df, target_column_clf, target_column_regr, num_training_rows, num_periods): - - # Create run the regression model to get its target - res, model1 = walk_forward_validation(df.drop(columns=[target_column_clf]).dropna(), target_column_regr, num_training_rows, num_periods) - # joblib.dump(model1, 'model1.bin') - - # Merge the result df back on the df for feeding into the classifier - for_merge = res[['Predicted']] - for_merge.columns = ['RegrModelOut'] - for_merge['RegrModelOut'] = for_merge['RegrModelOut'] > 0 - df = df.merge(for_merge, left_index=True, right_index=True) - df = df.drop(columns=[target_column_regr]) - df = df[model_cols + ['RegrModelOut', target_column_clf]] - - df[target_column_clf] = df[target_column_clf].astype(bool) - df['RegrModelOut'] = df['RegrModelOut'].astype(bool) - - # Create an XGBRegressor model - # model2 = xgb.XGBClassifier(n_estimators=10, random_state = 42) - model2 = lgb.LGBMClassifier(n_estimators=10, random_state=42, verbosity=-1) - # model = linear_model.LogisticRegression(max_iter=1500) - - overall_results = [] - # Iterate over the rows in the DataFrame, one step at a time - for i in tqdm(range(num_training_rows, df.shape[0] - num_periods + 1),'CLF Model'): - # Split the data into training and test sets - X_train = df.drop(target_column_clf, axis=1).iloc[:i] - y_train = df[target_column_clf].iloc[:i] - X_test = df.drop(target_column_clf, axis=1).iloc[i:i+num_periods] - y_test = df[target_column_clf].iloc[i:i+num_periods] - - # Fit the model to the training data - model2.fit(X_train, y_train) - - # Make a prediction on the test data - predictions = model2.predict_proba(X_test)[:,-1] - - # Create a DataFrame to store the true and predicted values - result_df = pd.DataFrame({'True': y_test, 'Predicted': predictions}, index=y_test.index) - - overall_results.append(result_df) - - df_results = pd.concat(overall_results) - - # Calibrate Probabilities - def get_quantiles(df, col_name, q): - return df.groupby(pd.cut(df[col_name], q))['True'].mean() - - greenprobas = [] - meanprobas = [] - for i, pct in tqdm(enumerate(df_results['Predicted']), desc='Calibrating Probas'): - try: - df_q = get_quantiles(df_results.iloc[:i], 'Predicted', 7) - for q in df_q.index: - if q.left <= pct <= q.right: - p = df_q[q] - c = (q.left + q.right) / 2 - except: - p = None - c = None - - greenprobas.append(p) - meanprobas.append(c) - - df_results['CalibPredicted'] = greenprobas - - return df_results, model1, model2 - -def seq_predict_proba(df, trained_reg_model, trained_clf_model): - regr_pred = trained_reg_model.predict(df) - regr_pred = regr_pred > 0 - new_df = df.copy() - new_df['RegrModelOut'] = regr_pred - clf_pred_proba = trained_clf_model.predict_proba(new_df[model_cols + ['RegrModelOut']])[:,-1] 
- return clf_pred_proba - -def get_data(): - # f = open('settings.json') - # j = json.load(f) - # API_KEY_FRED = j["API_KEY_FRED"] - - API_KEY_FRED = os.getenv('API_KEY_FRED') - - def parse_release_dates(release_id: str) -> List[str]: - release_dates_url = f'https://api.stlouisfed.org/fred/release/dates?release_id={release_id}&realtime_start=2015-01-01&include_release_dates_with_no_data=true&api_key={API_KEY_FRED}' - r = requests.get(release_dates_url) - text = r.text - soup = BeautifulSoup(text, 'xml') - dates = [] - for release_date_tag in soup.find_all('release_date', {'release_id': release_id}): - dates.append(release_date_tag.text) - return dates - - def parse_release_dates_obs(series_id: str) -> List[str]: - obs_url = f'https://api.stlouisfed.org/fred/series/observations?series_id={series_id}&realtime_start=2015-01-01&include_release_dates_with_no_data=true&api_key={API_KEY_FRED}' - r = requests.get(obs_url) - text = r.text - soup = BeautifulSoup(text, 'xml') - observations = [] - for observation_tag in soup.find_all('observation'): - date = observation_tag.get('date') - value = observation_tag.get('value') - observations.append((date, value)) - return observations - - econ_dfs = {} - - econ_tickers = [ - 'WALCL', - 'NFCI', - 'WRESBAL' - ] - - for et in tqdm(econ_tickers, desc='getting econ tickers'): - # p = parse_release_dates_obs(et) - # df = pd.DataFrame(columns = ['ds',et], data = p) - df = pdr.get_data_fred(et) - df.index = df.index.rename('ds') - # df.index = pd.to_datetime(df.index.rename('ds')).dt.tz_localize(None) - # df['ds'] = pd.to_datetime(df['ds']).dt.tz_localize(None) - econ_dfs[et] = df - - # walcl = pd.DataFrame(columns = ['ds','WALCL'], data = p) - # walcl['ds'] = pd.to_datetime(walcl['ds']).dt.tz_localize(None) - - # nfci = pd.DataFrame(columns = ['ds','NFCI'], data = p2) - # nfci['ds'] = pd.to_datetime(nfci['ds']).dt.tz_localize(None) - - release_ids = [ - "10", # "Consumer Price Index" - "46", # "Producer Price Index" - "50", # "Employment Situation" - "53", # "Gross Domestic Product" - "103", # "Discount Rate Meeting Minutes" - "180", # "Unemployment Insurance Weekly Claims Report" - "194", # "ADP National Employment Report" - "323" # "Trimmed Mean PCE Inflation Rate" - ] - - release_names = [ - "CPI", - "PPI", - "NFP", - "GDP", - "FOMC", - "UNEMP", - "ADP", - "PCE" - ] - - releases = {} - - for rid, n in tqdm(zip(release_ids, release_names), total = len(release_ids), desc='Getting release dates'): - releases[rid] = {} - releases[rid]['dates'] = parse_release_dates(rid) - releases[rid]['name'] = n - - # Create a DF that has all dates with the name of the col as 1 - # Once merged on the main dataframe, days with econ events will be 1 or None. Fill NA with 0 - # This column serves as the true/false indicator of whether there was economic data released that day. 
- for rid in tqdm(release_ids, desc='Making indicators'): - releases[rid]['df'] = pd.DataFrame( - index=releases[rid]['dates'], - data={ - releases[rid]['name']: 1 - }) - releases[rid]['df'].index = pd.DatetimeIndex(releases[rid]['df'].index) - # releases[rid]['df']['ds'] = pd.to_datetime(releases[rid]['df']['ds']).dt.tz_localize(None) - # releases[rid]['df'] = releases[rid]['df'].set_index('ds') - - vix = yf.Ticker('^VIX') - vvix = yf.Ticker('^VVIX') - spx = yf.Ticker('^GSPC') - - prices_vix = vix.history(start='2018-07-01', interval='1d') - prices_spx = spx.history(start='2018-07-01', interval='1d') - prices_vvix = vvix.history(start='2018-07-01', interval='1d') - - prices_spx['index'] = [str(x).split()[0] for x in prices_spx.index] - prices_spx['index'] = pd.to_datetime(prices_spx['index']).dt.date - prices_spx.index = prices_spx['index'] - prices_spx = prices_spx.drop(columns='index') - - prices_vix['index'] = [str(x).split()[0] for x in prices_vix.index] - prices_vix['index'] = pd.to_datetime(prices_vix['index']).dt.date - prices_vix.index = prices_vix['index'] - prices_vix = prices_vix.drop(columns='index') - - prices_vvix['index'] = [str(x).split()[0] for x in prices_vvix.index] - prices_vvix['index'] = pd.to_datetime(prices_vvix['index']).dt.date - prices_vvix.index = prices_vvix['index'] - prices_vvix = prices_vvix.drop(columns='index') - - data = prices_spx.merge(prices_vix[['Open','High','Low','Close']], left_index=True, right_index=True, suffixes=['','_VIX']) - data = data.merge(prices_vvix[['Open','High','Low','Close']], left_index=True, right_index=True, suffixes=['','_VVIX']) - data.index = pd.DatetimeIndex(data.index) - - # Features - data['PrevClose'] = data['Close'].shift(1) - data['Perf5Day'] = data['Close'] > data['Close'].shift(5) - data['Perf5Day_n1'] = data['Perf5Day'].shift(1).astype(bool) - data['GreenDay'] = (data['Close'] > data['PrevClose']) * 1 - data['RedDay'] = (data['Close'] <= data['PrevClose']) * 1 - data['VIX5Day'] = data['Close_VIX'] > data['Close_VIX'].shift(5) - data['VIX5Day_n1'] = data['VIX5Day'].shift(1).astype(bool) - data['VIXOpen'] = data['Open_VIX'] > data['Close_VIX'].shift(1) - data['VVIXOpen'] = data['Open_VVIX'] > data['Close_VVIX'].shift(1) - data['VIXOpen'] = data['VIXOpen'].astype(bool) - data['VVIXOpen'] = data['VVIXOpen'].astype(bool) - data['Range'] = data[['Open','High']].max(axis=1) - data[['Low','Open']].min(axis=1) - data['RangePct'] = data['Range'] / data['Close'] - data['VIXLevel'] = pd.qcut(data['Close_VIX'], 4) - data['OHLC4_VIX'] = data[['Open_VIX','High_VIX','Low_VIX','Close_VIX']].mean(axis=1) - data['OHLC4'] = data[['Open','High','Low','Close']].mean(axis=1) - data['OHLC4_Trend'] = data['OHLC4'] > data['OHLC4'].shift(1) - data['OHLC4_Trend_n1'] = data['OHLC4_Trend'].shift(1).astype(float) - data['OHLC4_Trend_n2'] = data['OHLC4_Trend'].shift(2).astype(float) - data['RangePct_n1'] = data['RangePct'].shift(1) - data['RangePct_n2'] = data['RangePct'].shift(2) - data['OHLC4_VIX_n1'] = data['OHLC4_VIX'].shift(1) - data['OHLC4_VIX_n2'] = data['OHLC4_VIX'].shift(2) - data['CurrentGap'] = ((data['Open'] - data['PrevClose']) / data['PrevClose']).shift(-1) - data['DayOfWeek'] = pd.to_datetime(data.index) - data['DayOfWeek'] = data['DayOfWeek'].dt.day - data['up'] = 100 * (data['High'].shift(1) - data['Open'].shift(1)) / data['Close'].shift(1) - data['upSD'] = data['up'].rolling(30).std(ddof=0) - data['aveUp'] = data['up'].rolling(30).mean() - data['H1'] = data['Open'] + (data['aveUp'] / 100) * data['Open'] - data['H2'] = data['Open'] 
+ ((data['aveUp'] + data['upSD']) / 100) * data['Open'] - data['down'] = 100 * (data['Open'].shift(1) - data['Low'].shift(1)) / data['Close'].shift(1) - data['downSD'] = data['down'].rolling(30).std(ddof=0) - data['aveDown'] = data['down'].rolling(30).mean() - data['L1'] = data['Open'] - (data['aveDown'] / 100) * data['Open'] - data['L2'] = data['Open'] - ((data['aveDown'] + data['upSD']) / 100) * data['Open'] - data['L1Touch'] = data['Low'] < data['L1'] - data['L2Touch'] = data['Low'] < data['L2'] - data['H1Touch'] = data['High'] > data['H1'] - data['H2Touch'] = data['High'] > data['H2'] - data['L1Break'] = data['Close'] < data['L1'] - data['L2Break'] = data['Close'] < data['L2'] - data['H1Break'] = data['Close'] > data['H1'] - data['H2Break'] = data['Close'] > data['H2'] - data['OpenL1'] = data['Open'] / data['L1'] - data['OpenL2'] = data['Open'] / data['L2'] - data['OpenH1'] = data['Open'] / data['H1'] - data['OpenH2'] = data['Open'] / data['H2'] - - level_cols = [ - 'L1Touch', - 'L2Touch', - 'H1Touch', - 'H2Touch', - 'L1Break', - 'L2Break', - 'H1Break', - 'H2Break' - ] - - for col in level_cols: - data[col+'Pct'] = data[col].rolling(100).mean() - - data['H1BreakTouchPct'] = data['H1Break'].rolling(100).sum() / data['H1Touch'].rolling(100).sum() - data['H2BreakTouchPct'] = data['H2Break'].rolling(100).sum() / data['H2Touch'].rolling(100).sum() - data['L1BreakTouchPct'] = data['L1Break'].rolling(100).sum() / data['L1Touch'].rolling(100).sum() - data['L2BreakTouchPct'] = data['L2Break'].rolling(100).sum() / data['L2Touch'].rolling(100).sum() - - # Target -- the next day's low - data['Target'] = (data['OHLC4'] / data['PrevClose']) - 1 - data['Target'] = data['Target'].shift(-1) - # data['Target'] = data['RangePct'].shift(-1) - - # Target for clf -- whether tomorrow will close above or below today's close - data['Target_clf'] = data['Close'] > data['PrevClose'] - data['Target_clf'] = data['Target_clf'].shift(-1) - data['DayOfWeek'] = pd.to_datetime(data.index) - data['Quarter'] = data['DayOfWeek'].dt.quarter - data['DayOfWeek'] = data['DayOfWeek'].dt.weekday - - for rid in tqdm(release_ids, desc='Merging econ data'): - # Get the name of the release - n = releases[rid]['name'] - # Merge the corresponding DF of the release - data = data.merge(releases[rid]['df'], how = 'left', left_index=True, right_index=True) - # Create a column that shifts the value in the merged column up by 1 - data[f'{n}_shift'] = data[n].shift(-1) - # Fill the rest with zeroes - data[n] = data[n].fillna(0) - data[f'{n}_shift'] = data[f'{n}_shift'].fillna(0) - - data['BigNewsDay'] = data[[x for x in data.columns if '_shift' in x]].max(axis=1) - - def cumul_sum(col): - nums = [] - s = 0 - for x in col: - if x == 1: - s += 1 - elif x == 0: - s = 0 - nums.append(s) - return nums - - consec_green = cumul_sum(data['GreenDay'].values) - consec_red = cumul_sum(data['RedDay'].values) - - data['DaysGreen'] = consec_green - data['DaysRed'] = consec_red - - final_row = data.index[-2] - - exp_row = data.index[-1] - - df_final = data.loc[:final_row, - [ - 'BigNewsDay', - 'Quarter', - 'Perf5Day', - 'Perf5Day_n1', - 'DaysGreen', - 'DaysRed', - 'CurrentGap', - 'RangePct', - 'RangePct_n1', - 'RangePct_n2', - 'OHLC4_VIX', - 'OHLC4_VIX_n1', - 'OHLC4_VIX_n2', - 'VIXOpen', - 'VVIXOpen', - 'OpenL1', - 'OpenL2', - 'OpenH1', - 'OpenH2', - 'L1TouchPct', - 'L2TouchPct', - 'H1TouchPct', - 'H2TouchPct', - 'L1BreakPct', - 'L2BreakPct', - 'H1BreakPct', - 'H2BreakPct', - 'H1BreakTouchPct', - 'H2BreakTouchPct', - 'L1BreakTouchPct', - 
'L2BreakTouchPct', - 'Target', - 'Target_clf' - ]] - df_final = df_final.dropna(subset=['Target','Target_clf','Perf5Day_n1']) - return data, df_final, final_row \ No newline at end of file diff --git a/spaces/brainblow/MusiCreator/audiocraft/modules/seanet.py b/spaces/brainblow/MusiCreator/audiocraft/modules/seanet.py deleted file mode 100644 index 3e5998e9153afb6e68ea410d565e00ea835db248..0000000000000000000000000000000000000000 --- a/spaces/brainblow/MusiCreator/audiocraft/modules/seanet.py +++ /dev/null @@ -1,258 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import typing as tp - -import numpy as np -import torch.nn as nn - -from .conv import StreamableConv1d, StreamableConvTranspose1d -from .lstm import StreamableLSTM - - -class SEANetResnetBlock(nn.Module): - """Residual block from SEANet model. - - Args: - dim (int): Dimension of the input/output. - kernel_sizes (list): List of kernel sizes for the convolutions. - dilations (list): List of dilations for the convolutions. - activation (str): Activation function. - activation_params (dict): Parameters to provide to the activation function. - norm (str): Normalization method. - norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution. - causal (bool): Whether to use fully causal convolution. - pad_mode (str): Padding mode for the convolutions. - compress (int): Reduced dimensionality in residual branches (from Demucs v3). - true_skip (bool): Whether to use true skip connection or a simple - (streamable) convolution as the skip connection. - """ - def __init__(self, dim: int, kernel_sizes: tp.List[int] = [3, 1], dilations: tp.List[int] = [1, 1], - activation: str = 'ELU', activation_params: dict = {'alpha': 1.0}, - norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, causal: bool = False, - pad_mode: str = 'reflect', compress: int = 2, true_skip: bool = True): - super().__init__() - assert len(kernel_sizes) == len(dilations), 'Number of kernel sizes should match number of dilations' - act = getattr(nn, activation) - hidden = dim // compress - block = [] - for i, (kernel_size, dilation) in enumerate(zip(kernel_sizes, dilations)): - in_chs = dim if i == 0 else hidden - out_chs = dim if i == len(kernel_sizes) - 1 else hidden - block += [ - act(**activation_params), - StreamableConv1d(in_chs, out_chs, kernel_size=kernel_size, dilation=dilation, - norm=norm, norm_kwargs=norm_params, - causal=causal, pad_mode=pad_mode), - ] - self.block = nn.Sequential(*block) - self.shortcut: nn.Module - if true_skip: - self.shortcut = nn.Identity() - else: - self.shortcut = StreamableConv1d(dim, dim, kernel_size=1, norm=norm, norm_kwargs=norm_params, - causal=causal, pad_mode=pad_mode) - - def forward(self, x): - return self.shortcut(x) + self.block(x) - - -class SEANetEncoder(nn.Module): - """SEANet encoder. - - Args: - channels (int): Audio channels. - dimension (int): Intermediate representation dimension. - n_filters (int): Base width for the model. - n_residual_layers (int): nb of residual layers. - ratios (Sequence[int]): kernel size and stride ratios. The encoder uses downsampling ratios instead of - upsampling ratios, hence it will use the ratios in the reverse order to the ones specified here - that must match the decoder order. We use the decoder order as some models may only employ the decoder. 
- activation (str): Activation function. - activation_params (dict): Parameters to provide to the activation function. - norm (str): Normalization method. - norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution. - kernel_size (int): Kernel size for the initial convolution. - last_kernel_size (int): Kernel size for the initial convolution. - residual_kernel_size (int): Kernel size for the residual layers. - dilation_base (int): How much to increase the dilation with each layer. - causal (bool): Whether to use fully causal convolution. - pad_mode (str): Padding mode for the convolutions. - true_skip (bool): Whether to use true skip connection or a simple - (streamable) convolution as the skip connection in the residual network blocks. - compress (int): Reduced dimensionality in residual branches (from Demucs v3). - lstm (int): Number of LSTM layers at the end of the encoder. - disable_norm_outer_blocks (int): Number of blocks for which we don't apply norm. - For the encoder, it corresponds to the N first blocks. - """ - def __init__(self, channels: int = 1, dimension: int = 128, n_filters: int = 32, n_residual_layers: int = 3, - ratios: tp.List[int] = [8, 5, 4, 2], activation: str = 'ELU', activation_params: dict = {'alpha': 1.0}, - norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, kernel_size: int = 7, - last_kernel_size: int = 7, residual_kernel_size: int = 3, dilation_base: int = 2, causal: bool = False, - pad_mode: str = 'reflect', true_skip: bool = True, compress: int = 2, lstm: int = 0, - disable_norm_outer_blocks: int = 0): - super().__init__() - self.channels = channels - self.dimension = dimension - self.n_filters = n_filters - self.ratios = list(reversed(ratios)) - del ratios - self.n_residual_layers = n_residual_layers - self.hop_length = np.prod(self.ratios) - self.n_blocks = len(self.ratios) + 2 # first and last conv + residual blocks - self.disable_norm_outer_blocks = disable_norm_outer_blocks - assert self.disable_norm_outer_blocks >= 0 and self.disable_norm_outer_blocks <= self.n_blocks, \ - "Number of blocks for which to disable norm is invalid." \ - "It should be lower or equal to the actual number of blocks in the network and greater or equal to 0." 
- - act = getattr(nn, activation) - mult = 1 - model: tp.List[nn.Module] = [ - StreamableConv1d(channels, mult * n_filters, kernel_size, - norm='none' if self.disable_norm_outer_blocks >= 1 else norm, - norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode) - ] - # Downsample to raw audio scale - for i, ratio in enumerate(self.ratios): - block_norm = 'none' if self.disable_norm_outer_blocks >= i + 2 else norm - # Add residual layers - for j in range(n_residual_layers): - model += [ - SEANetResnetBlock(mult * n_filters, kernel_sizes=[residual_kernel_size, 1], - dilations=[dilation_base ** j, 1], - norm=block_norm, norm_params=norm_params, - activation=activation, activation_params=activation_params, - causal=causal, pad_mode=pad_mode, compress=compress, true_skip=true_skip)] - - # Add downsampling layers - model += [ - act(**activation_params), - StreamableConv1d(mult * n_filters, mult * n_filters * 2, - kernel_size=ratio * 2, stride=ratio, - norm=block_norm, norm_kwargs=norm_params, - causal=causal, pad_mode=pad_mode), - ] - mult *= 2 - - if lstm: - model += [StreamableLSTM(mult * n_filters, num_layers=lstm)] - - model += [ - act(**activation_params), - StreamableConv1d(mult * n_filters, dimension, last_kernel_size, - norm='none' if self.disable_norm_outer_blocks == self.n_blocks else norm, - norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode) - ] - - self.model = nn.Sequential(*model) - - def forward(self, x): - return self.model(x) - - -class SEANetDecoder(nn.Module): - """SEANet decoder. - - Args: - channels (int): Audio channels. - dimension (int): Intermediate representation dimension. - n_filters (int): Base width for the model. - n_residual_layers (int): nb of residual layers. - ratios (Sequence[int]): kernel size and stride ratios. - activation (str): Activation function. - activation_params (dict): Parameters to provide to the activation function. - final_activation (str): Final activation function after all convolutions. - final_activation_params (dict): Parameters to provide to the activation function. - norm (str): Normalization method. - norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution. - kernel_size (int): Kernel size for the initial convolution. - last_kernel_size (int): Kernel size for the initial convolution. - residual_kernel_size (int): Kernel size for the residual layers. - dilation_base (int): How much to increase the dilation with each layer. - causal (bool): Whether to use fully causal convolution. - pad_mode (str): Padding mode for the convolutions. - true_skip (bool): Whether to use true skip connection or a simple. - (streamable) convolution as the skip connection in the residual network blocks. - compress (int): Reduced dimensionality in residual branches (from Demucs v3). - lstm (int): Number of LSTM layers at the end of the encoder. - disable_norm_outer_blocks (int): Number of blocks for which we don't apply norm. - For the decoder, it corresponds to the N last blocks. - trim_right_ratio (float): Ratio for trimming at the right of the transposed convolution under the causal setup. - If equal to 1.0, it means that all the trimming is done at the right. 
- """ - def __init__(self, channels: int = 1, dimension: int = 128, n_filters: int = 32, n_residual_layers: int = 3, - ratios: tp.List[int] = [8, 5, 4, 2], activation: str = 'ELU', activation_params: dict = {'alpha': 1.0}, - final_activation: tp.Optional[str] = None, final_activation_params: tp.Optional[dict] = None, - norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, kernel_size: int = 7, - last_kernel_size: int = 7, residual_kernel_size: int = 3, dilation_base: int = 2, causal: bool = False, - pad_mode: str = 'reflect', true_skip: bool = True, compress: int = 2, lstm: int = 0, - disable_norm_outer_blocks: int = 0, trim_right_ratio: float = 1.0): - super().__init__() - self.dimension = dimension - self.channels = channels - self.n_filters = n_filters - self.ratios = ratios - del ratios - self.n_residual_layers = n_residual_layers - self.hop_length = np.prod(self.ratios) - self.n_blocks = len(self.ratios) + 2 # first and last conv + residual blocks - self.disable_norm_outer_blocks = disable_norm_outer_blocks - assert self.disable_norm_outer_blocks >= 0 and self.disable_norm_outer_blocks <= self.n_blocks, \ - "Number of blocks for which to disable norm is invalid." \ - "It should be lower or equal to the actual number of blocks in the network and greater or equal to 0." - - act = getattr(nn, activation) - mult = int(2 ** len(self.ratios)) - model: tp.List[nn.Module] = [ - StreamableConv1d(dimension, mult * n_filters, kernel_size, - norm='none' if self.disable_norm_outer_blocks == self.n_blocks else norm, - norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode) - ] - - if lstm: - model += [StreamableLSTM(mult * n_filters, num_layers=lstm)] - - # Upsample to raw audio scale - for i, ratio in enumerate(self.ratios): - block_norm = 'none' if self.disable_norm_outer_blocks >= self.n_blocks - (i + 1) else norm - # Add upsampling layers - model += [ - act(**activation_params), - StreamableConvTranspose1d(mult * n_filters, mult * n_filters // 2, - kernel_size=ratio * 2, stride=ratio, - norm=block_norm, norm_kwargs=norm_params, - causal=causal, trim_right_ratio=trim_right_ratio), - ] - # Add residual layers - for j in range(n_residual_layers): - model += [ - SEANetResnetBlock(mult * n_filters // 2, kernel_sizes=[residual_kernel_size, 1], - dilations=[dilation_base ** j, 1], - activation=activation, activation_params=activation_params, - norm=block_norm, norm_params=norm_params, causal=causal, - pad_mode=pad_mode, compress=compress, true_skip=true_skip)] - - mult //= 2 - - # Add final layers - model += [ - act(**activation_params), - StreamableConv1d(n_filters, channels, last_kernel_size, - norm='none' if self.disable_norm_outer_blocks >= 1 else norm, - norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode) - ] - # Add optional final activation to decoder (eg. 
tanh) - if final_activation is not None: - final_act = getattr(nn, final_activation) - final_activation_params = final_activation_params or {} - model += [ - final_act(**final_activation_params) - ] - self.model = nn.Sequential(*model) - - def forward(self, z): - y = self.model(z) - return y diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/layers/csrc/ROIAlignRotated/ROIAlignRotated.h b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/layers/csrc/ROIAlignRotated/ROIAlignRotated.h deleted file mode 100644 index 03f4211003f42f601f0cfcf4a690f5da4a0a1f67..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/layers/csrc/ROIAlignRotated/ROIAlignRotated.h +++ /dev/null @@ -1,115 +0,0 @@ -// Copyright (c) Facebook, Inc. and its affiliates. -#pragma once -#include - -namespace detectron2 { - -at::Tensor ROIAlignRotated_forward_cpu( - const at::Tensor& input, - const at::Tensor& rois, - const float spatial_scale, - const int pooled_height, - const int pooled_width, - const int sampling_ratio); - -at::Tensor ROIAlignRotated_backward_cpu( - const at::Tensor& grad, - const at::Tensor& rois, - const float spatial_scale, - const int pooled_height, - const int pooled_width, - const int batch_size, - const int channels, - const int height, - const int width, - const int sampling_ratio); - -#if defined(WITH_CUDA) || defined(WITH_HIP) -at::Tensor ROIAlignRotated_forward_cuda( - const at::Tensor& input, - const at::Tensor& rois, - const float spatial_scale, - const int pooled_height, - const int pooled_width, - const int sampling_ratio); - -at::Tensor ROIAlignRotated_backward_cuda( - const at::Tensor& grad, - const at::Tensor& rois, - const float spatial_scale, - const int pooled_height, - const int pooled_width, - const int batch_size, - const int channels, - const int height, - const int width, - const int sampling_ratio); -#endif - -// Interface for Python -inline at::Tensor ROIAlignRotated_forward( - const at::Tensor& input, - const at::Tensor& rois, - const double spatial_scale, - const int64_t pooled_height, - const int64_t pooled_width, - const int64_t sampling_ratio) { - if (input.is_cuda()) { -#if defined(WITH_CUDA) || defined(WITH_HIP) - return ROIAlignRotated_forward_cuda( - input, - rois, - spatial_scale, - pooled_height, - pooled_width, - sampling_ratio); -#else - AT_ERROR("Detectron2 is not compiled with GPU support!"); -#endif - } - return ROIAlignRotated_forward_cpu( - input, rois, spatial_scale, pooled_height, pooled_width, sampling_ratio); -} - -inline at::Tensor ROIAlignRotated_backward( - const at::Tensor& grad, - const at::Tensor& rois, - const double spatial_scale, - const int64_t pooled_height, - const int64_t pooled_width, - const int64_t batch_size, - const int64_t channels, - const int64_t height, - const int64_t width, - const int64_t sampling_ratio) { - if (grad.is_cuda()) { -#if defined(WITH_CUDA) || defined(WITH_HIP) - return ROIAlignRotated_backward_cuda( - grad, - rois, - spatial_scale, - pooled_height, - pooled_width, - batch_size, - channels, - height, - width, - sampling_ratio); -#else - AT_ERROR("Detectron2 is not compiled with GPU support!"); -#endif - } - return ROIAlignRotated_backward_cpu( - grad, - rois, - spatial_scale, - pooled_height, - pooled_width, - batch_size, - channels, - height, - width, - sampling_ratio); -} - -} // namespace detectron2 diff --git a/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5/utils/__init__.py 
b/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5/utils/__init__.py deleted file mode 100644 index da53a4d25419f5de3252af664a7aca5551950f3a..0000000000000000000000000000000000000000 --- a/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5/utils/__init__.py +++ /dev/null @@ -1,36 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -utils/initialization -""" - - -def notebook_init(verbose=True): - # Check system software and hardware - print('Checking setup...') - - import os - import shutil - - from utils.general import check_requirements, emojis, is_colab - from utils.torch_utils import select_device # imports - - check_requirements(('psutil', 'IPython')) - import psutil - from IPython import display # to display images and clear console output - - if is_colab(): - shutil.rmtree('/content/sample_data', ignore_errors=True) # remove colab /sample_data directory - - # System info - if verbose: - gb = 1 << 30 # bytes to GiB (1024 ** 3) - ram = psutil.virtual_memory().total - total, used, free = shutil.disk_usage("/") - display.clear_output() - s = f'({os.cpu_count()} CPUs, {ram / gb:.1f} GB RAM, {(total - free) / gb:.1f}/{total / gb:.1f} GB disk)' - else: - s = '' - - select_device(newline=False) - print(emojis(f'Setup complete ✅ {s}')) - return display diff --git a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/ImageChops.py b/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/ImageChops.py deleted file mode 100644 index 70120031797c2493c0ce878c13c3fd3d5554c354..0000000000000000000000000000000000000000 --- a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/ImageChops.py +++ /dev/null @@ -1,303 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# standard channel operations -# -# History: -# 1996-03-24 fl Created -# 1996-08-13 fl Added logical operations (for "1" images) -# 2000-10-12 fl Added offset method (from Image.py) -# -# Copyright (c) 1997-2000 by Secret Labs AB -# Copyright (c) 1996-2000 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - -from . import Image - - -def constant(image, value): - """Fill a channel with a given grey level. - - :rtype: :py:class:`~PIL.Image.Image` - """ - - return Image.new("L", image.size, value) - - -def duplicate(image): - """Copy a channel. Alias for :py:meth:`PIL.Image.Image.copy`. - - :rtype: :py:class:`~PIL.Image.Image` - """ - - return image.copy() - - -def invert(image): - """ - Invert an image (channel). :: - - out = MAX - image - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image.load() - return image._new(image.im.chop_invert()) - - -def lighter(image1, image2): - """ - Compares the two images, pixel by pixel, and returns a new image containing - the lighter values. :: - - out = max(image1, image2) - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_lighter(image2.im)) - - -def darker(image1, image2): - """ - Compares the two images, pixel by pixel, and returns a new image containing - the darker values. :: - - out = min(image1, image2) - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_darker(image2.im)) - - -def difference(image1, image2): - """ - Returns the absolute value of the pixel-by-pixel difference between the two - images. 
:: - - out = abs(image1 - image2) - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_difference(image2.im)) - - -def multiply(image1, image2): - """ - Superimposes two images on top of each other. - - If you multiply an image with a solid black image, the result is black. If - you multiply with a solid white image, the image is unaffected. :: - - out = image1 * image2 / MAX - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_multiply(image2.im)) - - -def screen(image1, image2): - """ - Superimposes two inverted images on top of each other. :: - - out = MAX - ((MAX - image1) * (MAX - image2) / MAX) - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_screen(image2.im)) - - -def soft_light(image1, image2): - """ - Superimposes two images on top of each other using the Soft Light algorithm - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_soft_light(image2.im)) - - -def hard_light(image1, image2): - """ - Superimposes two images on top of each other using the Hard Light algorithm - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_hard_light(image2.im)) - - -def overlay(image1, image2): - """ - Superimposes two images on top of each other using the Overlay algorithm - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_overlay(image2.im)) - - -def add(image1, image2, scale=1.0, offset=0): - """ - Adds two images, dividing the result by scale and adding the - offset. If omitted, scale defaults to 1.0, and offset to 0.0. :: - - out = ((image1 + image2) / scale + offset) - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_add(image2.im, scale, offset)) - - -def subtract(image1, image2, scale=1.0, offset=0): - """ - Subtracts two images, dividing the result by scale and adding the offset. - If omitted, scale defaults to 1.0, and offset to 0.0. :: - - out = ((image1 - image2) / scale + offset) - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_subtract(image2.im, scale, offset)) - - -def add_modulo(image1, image2): - """Add two images, without clipping the result. :: - - out = ((image1 + image2) % MAX) - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_add_modulo(image2.im)) - - -def subtract_modulo(image1, image2): - """Subtract two images, without clipping the result. :: - - out = ((image1 - image2) % MAX) - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_subtract_modulo(image2.im)) - - -def logical_and(image1, image2): - """Logical AND between two images. - - Both of the images must have mode "1". If you would like to perform a - logical AND on an image with a mode other than "1", try - :py:meth:`~PIL.ImageChops.multiply` instead, using a black-and-white mask - as the second image. :: - - out = ((image1 and image2) % MAX) - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_and(image2.im)) - - -def logical_or(image1, image2): - """Logical OR between two images. 
- - Both of the images must have mode "1". :: - - out = ((image1 or image2) % MAX) - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_or(image2.im)) - - -def logical_xor(image1, image2): - """Logical XOR between two images. - - Both of the images must have mode "1". :: - - out = ((bool(image1) != bool(image2)) % MAX) - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_xor(image2.im)) - - -def blend(image1, image2, alpha): - """Blend images using constant transparency weight. Alias for - :py:func:`PIL.Image.blend`. - - :rtype: :py:class:`~PIL.Image.Image` - """ - - return Image.blend(image1, image2, alpha) - - -def composite(image1, image2, mask): - """Create composite using transparency mask. Alias for - :py:func:`PIL.Image.composite`. - - :rtype: :py:class:`~PIL.Image.Image` - """ - - return Image.composite(image1, image2, mask) - - -def offset(image, xoffset, yoffset=None): - """Returns a copy of the image where data has been offset by the given - distances. Data wraps around the edges. If ``yoffset`` is omitted, it - is assumed to be equal to ``xoffset``. - - :param image: Input image. - :param xoffset: The horizontal distance. - :param yoffset: The vertical distance. If omitted, both - distances are set to the same value. - :rtype: :py:class:`~PIL.Image.Image` - """ - - if yoffset is None: - yoffset = xoffset - image.load() - return image._new(image.im.offset(xoffset, yoffset)) diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/data/common.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/data/common.py deleted file mode 100644 index d7bb62bd0d43f0f5f15e09e3cbb5b81f832af168..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/data/common.py +++ /dev/null @@ -1,293 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import contextlib -import copy -import itertools -import logging -import numpy as np -import pickle -import random -from typing import Callable, Union -import torch.utils.data as data -from torch.utils.data.sampler import Sampler - -from detectron2.utils.serialize import PicklableWrapper - -__all__ = ["MapDataset", "DatasetFromList", "AspectRatioGroupedDataset", "ToIterableDataset"] - -logger = logging.getLogger(__name__) - - -def _shard_iterator_dataloader_worker(iterable): - # Shard the iterable if we're currently inside pytorch dataloader worker. - worker_info = data.get_worker_info() - if worker_info is None or worker_info.num_workers == 1: - # do nothing - yield from iterable - else: - yield from itertools.islice(iterable, worker_info.id, None, worker_info.num_workers) - - -class _MapIterableDataset(data.IterableDataset): - """ - Map a function over elements in an IterableDataset. - - Similar to pytorch's MapIterDataPipe, but support filtering when map_func - returns None. - - This class is not public-facing. Will be called by `MapDataset`. - """ - - def __init__(self, dataset, map_func): - self._dataset = dataset - self._map_func = PicklableWrapper(map_func) # wrap so that a lambda will work - - def __len__(self): - return len(self._dataset) - - def __iter__(self): - for x in map(self._map_func, self._dataset): - if x is not None: - yield x - - -class MapDataset(data.Dataset): - """ - Map a function over the elements in a dataset. 
- """ - - def __init__(self, dataset, map_func): - """ - Args: - dataset: a dataset where map function is applied. Can be either - map-style or iterable dataset. When given an iterable dataset, - the returned object will also be an iterable dataset. - map_func: a callable which maps the element in dataset. map_func can - return None to skip the data (e.g. in case of errors). - How None is handled depends on the style of `dataset`. - If `dataset` is map-style, it randomly tries other elements. - If `dataset` is iterable, it skips the data and tries the next. - """ - self._dataset = dataset - self._map_func = PicklableWrapper(map_func) # wrap so that a lambda will work - - self._rng = random.Random(42) - self._fallback_candidates = set(range(len(dataset))) - - def __new__(cls, dataset, map_func): - is_iterable = isinstance(dataset, data.IterableDataset) - if is_iterable: - return _MapIterableDataset(dataset, map_func) - else: - return super().__new__(cls) - - def __getnewargs__(self): - return self._dataset, self._map_func - - def __len__(self): - return len(self._dataset) - - def __getitem__(self, idx): - retry_count = 0 - cur_idx = int(idx) - - while True: - data = self._map_func(self._dataset[cur_idx]) - if data is not None: - self._fallback_candidates.add(cur_idx) - return data - - # _map_func fails for this idx, use a random new index from the pool - retry_count += 1 - self._fallback_candidates.discard(cur_idx) - cur_idx = self._rng.sample(self._fallback_candidates, k=1)[0] - - if retry_count >= 3: - logger = logging.getLogger(__name__) - logger.warning( - "Failed to apply `_map_func` for idx: {}, retry count: {}".format( - idx, retry_count - ) - ) - - -class NumpySerializedList(object): - """ - A list-like object whose items are serialized and stored in a Numpy Array. When - forking a process that has NumpySerializedList, subprocesses can read the same list - without triggering copy-on-access, therefore they will share RAM for the list. This - avoids the issue in https://github.com/pytorch/pytorch/issues/13246 - """ - - def __init__(self, lst: list): - self._lst = lst - - def _serialize(data): - buffer = pickle.dumps(data, protocol=-1) - return np.frombuffer(buffer, dtype=np.uint8) - - logger.info( - "Serializing {} elements to byte tensors and concatenating them all ...".format( - len(self._lst) - ) - ) - self._lst = [_serialize(x) for x in self._lst] - self._addr = np.asarray([len(x) for x in self._lst], dtype=np.int64) - self._addr = np.cumsum(self._addr) - self._lst = np.concatenate(self._lst) - logger.info("Serialized dataset takes {:.2f} MiB".format(len(self._lst) / 1024**2)) - - def __len__(self): - return len(self._addr) - - def __getitem__(self, idx): - start_addr = 0 if idx == 0 else self._addr[idx - 1].item() - end_addr = self._addr[idx].item() - bytes = memoryview(self._lst[start_addr:end_addr]) - - # @lint-ignore PYTHONPICKLEISBAD - return pickle.loads(bytes) - - -_DEFAULT_DATASET_FROM_LIST_SERIALIZE_METHOD = NumpySerializedList - - -@contextlib.contextmanager -def set_default_dataset_from_list_serialize_method(new): - """ - Context manager for using custom serialize function when creating DatasetFromList - """ - - global _DEFAULT_DATASET_FROM_LIST_SERIALIZE_METHOD - orig = _DEFAULT_DATASET_FROM_LIST_SERIALIZE_METHOD - _DEFAULT_DATASET_FROM_LIST_SERIALIZE_METHOD = new - yield - _DEFAULT_DATASET_FROM_LIST_SERIALIZE_METHOD = orig - - -class DatasetFromList(data.Dataset): - """ - Wrap a list to a torch Dataset. It produces elements of the list as data. 
- """ - - def __init__( - self, - lst: list, - copy: bool = True, - serialize: Union[bool, Callable] = True, - ): - """ - Args: - lst (list): a list which contains elements to produce. - copy (bool): whether to deepcopy the element when producing it, - so that the result can be modified in place without affecting the - source in the list. - serialize (bool or callable): whether to serialize the stroage to other - backend. If `True`, the default serialize method will be used, if given - a callable, the callable will be used as serialize method. - """ - self._lst = lst - self._copy = copy - if not isinstance(serialize, (bool, Callable)): - raise TypeError(f"Unsupported type for argument `serailzie`: {serialize}") - self._serialize = serialize is not False - - if self._serialize: - serialize_method = ( - serialize - if isinstance(serialize, Callable) - else _DEFAULT_DATASET_FROM_LIST_SERIALIZE_METHOD - ) - logger.info(f"Serializing the dataset using: {serialize_method}") - self._lst = serialize_method(self._lst) - - def __len__(self): - return len(self._lst) - - def __getitem__(self, idx): - if self._copy and not self._serialize: - return copy.deepcopy(self._lst[idx]) - else: - return self._lst[idx] - - -class ToIterableDataset(data.IterableDataset): - """ - Convert an old indices-based (also called map-style) dataset - to an iterable-style dataset. - """ - - def __init__(self, dataset: data.Dataset, sampler: Sampler, shard_sampler: bool = True): - """ - Args: - dataset: an old-style dataset with ``__getitem__`` - sampler: a cheap iterable that produces indices to be applied on ``dataset``. - shard_sampler: whether to shard the sampler based on the current pytorch data loader - worker id. When an IterableDataset is forked by pytorch's DataLoader into multiple - workers, it is responsible for sharding its data based on worker id so that workers - don't produce identical data. - - Most samplers (like our TrainingSampler) do not shard based on dataloader worker id - and this argument should be set to True. But certain samplers may be already - sharded, in that case this argument should be set to False. - """ - assert not isinstance(dataset, data.IterableDataset), dataset - assert isinstance(sampler, Sampler), sampler - self.dataset = dataset - self.sampler = sampler - self.shard_sampler = shard_sampler - - def __iter__(self): - if not self.shard_sampler: - sampler = self.sampler - else: - # With map-style dataset, `DataLoader(dataset, sampler)` runs the - # sampler in main process only. But `DataLoader(ToIterableDataset(dataset, sampler))` - # will run sampler in every of the N worker. So we should only keep 1/N of the ids on - # each worker. The assumption is that sampler is cheap to iterate so it's fine to - # discard ids in workers. - sampler = _shard_iterator_dataloader_worker(self.sampler) - for idx in sampler: - yield self.dataset[idx] - - def __len__(self): - return len(self.sampler) - - -class AspectRatioGroupedDataset(data.IterableDataset): - """ - Batch data that have similar aspect ratio together. - In this implementation, images whose aspect ratio < (or >) 1 will - be batched together. - This improves training speed because the images then need less padding - to form a batch. - - It assumes the underlying dataset produces dicts with "width" and "height" keys. - It will then produce a list of original dicts with length = batch_size, - all with similar aspect ratios. - """ - - def __init__(self, dataset, batch_size): - """ - Args: - dataset: an iterable. 
Each element must be a dict with keys - "width" and "height", which will be used to batch data. - batch_size (int): - """ - self.dataset = dataset - self.batch_size = batch_size - self._buckets = [[] for _ in range(2)] - # Hard-coded two aspect ratio groups: w > h and w < h. - # Can add support for more aspect ratio groups, but doesn't seem useful - - def __iter__(self): - for d in self.dataset: - w, h = d["width"], d["height"] - bucket_id = 0 if w > h else 1 - bucket = self._buckets[bucket_id] - bucket.append(d) - if len(bucket) == self.batch_size: - data = bucket[:] - # Clear bucket first, because code after yield is not - # guaranteed to execute - del bucket[:] - yield data diff --git a/spaces/cbensimon/stable-diffusion-xl/README.md b/spaces/cbensimon/stable-diffusion-xl/README.md deleted file mode 100644 index 4bd84f8c6a4d1f72159766b8d00b528a45bef148..0000000000000000000000000000000000000000 --- a/spaces/cbensimon/stable-diffusion-xl/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Stable Diffusion Xl -emoji: 🏢 -colorFrom: pink -colorTo: yellow -sdk: gradio -sdk_version: 3.45.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/chansung/LLM-As-Chatbot/scripts/hparams_explore.py b/spaces/chansung/LLM-As-Chatbot/scripts/hparams_explore.py deleted file mode 100644 index ea01f5b352a5b643166395ed442b349aa4deeca9..0000000000000000000000000000000000000000 --- a/spaces/chansung/LLM-As-Chatbot/scripts/hparams_explore.py +++ /dev/null @@ -1,66 +0,0 @@ -import time -import itertools -import wandb -from transformers import GenerationConfig - -wandb.login(key="") - -PROJECT="txt_gen_test_project" - -generation_configs = { - "temperature": [0.5, 0.7, 0.8, 0.9, 1.0], - "top_p": [0.5, 0.75, 0.85, 0.95, 1.0], - "num_beams": [1, 2, 3, 4] -} - -num_gens = 1 - -# token initialization -# model initialization - -for comb in itertools.product(generation_configs['temperature'], - generation_configs['top_p'], - generation_configs['num_beams']): - temperature = comb[0] - top_p = comb[1] - num_beams = comb[2] - - generation_config = GenerationConfig( - temperature=temperature, - top_p=top_p, - num_beams=num_beams, - ) - - first_columns = [f"gen_txt_{num}" for num in range(num_gens)] - columns = first_columns + ["temperature", "top_p", "num_beams", "time_delta"] - - avg_time_delta = 0 - txt_gens = [] - for i in range(num_gens): - start = time.time() - # text generation - text = "dummy text" - txt_gens.append(text) - - # decode outputs - end = time.time() - t_delta = end - start - avg_time_delta = avg_time_delta + t_delta - - avg_time_delta = round(avg_time_delta / num_gens, 4) - - wandb.init( - project=PROJECT, - name=f"t@{temperature}-tp@{top_p}-nb@{num_beams}", - config=generation_config, - ) - - text_table = wandb.Table(columns=columns) - text_table.add_data(*txt_gens, temperature, top_p, num_beams, avg_time_delta) - - wandb.log({ - "avg_t_delta": avg_time_delta, - "results": text_table - }) - - wandb.finish() diff --git a/spaces/chilleverydaychill/roop/roop/core.py b/spaces/chilleverydaychill/roop/roop/core.py deleted file mode 100644 index 05f36bc720bfd7a4fd2741054b50204229c68151..0000000000000000000000000000000000000000 --- a/spaces/chilleverydaychill/roop/roop/core.py +++ /dev/null @@ -1,211 +0,0 @@ -#!/usr/bin/env python3 - -import os -import sys -# single thread doubles cuda performance - needs to be set before torch import -if any(arg.startswith('--execution-provider') for arg in sys.argv): - 
os.environ['OMP_NUM_THREADS'] = '1' -# reduce tensorflow log level -os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2' -import warnings -from typing import List -import platform -import signal -import shutil -import argparse -import torch -import onnxruntime -import tensorflow - -import roop.globals -import roop.metadata -import roop.ui as ui -from roop.predicter import predict_image, predict_video -from roop.processors.frame.core import get_frame_processors_modules -from roop.utilities import has_image_extension, is_image, is_video, detect_fps, create_video, extract_frames, get_temp_frame_paths, restore_audio, create_temp, move_temp, clean_temp, normalize_output_path - -if 'ROCMExecutionProvider' in roop.globals.execution_providers: - del torch - -warnings.filterwarnings('ignore', category=FutureWarning, module='insightface') -warnings.filterwarnings('ignore', category=UserWarning, module='torchvision') - - -def parse_args() -> None: - signal.signal(signal.SIGINT, lambda signal_number, frame: destroy()) - program = argparse.ArgumentParser(formatter_class=lambda prog: argparse.HelpFormatter(prog, max_help_position=100)) - program.add_argument('-s', '--source', help='select an source image', dest='source_path') - program.add_argument('-t', '--target', help='select an target image or video', dest='target_path') - program.add_argument('-o', '--output', help='select output file or directory', dest='output_path') - program.add_argument('--frame-processor', help='frame processors (choices: face_swapper, face_enhancer, ...)', dest='frame_processor', default=['face_swapper'], nargs='+') - program.add_argument('--keep-fps', help='keep original fps', dest='keep_fps', action='store_true', default=False) - program.add_argument('--keep-audio', help='keep original audio', dest='keep_audio', action='store_true', default=True) - program.add_argument('--keep-frames', help='keep temporary frames', dest='keep_frames', action='store_true', default=False) - program.add_argument('--many-faces', help='process every face', dest='many_faces', action='store_true', default=False) - program.add_argument('--video-encoder', help='adjust output video encoder', dest='video_encoder', default='libx264', choices=['libx264', 'libx265', 'libvpx-vp9']) - program.add_argument('--video-quality', help='adjust output video quality', dest='video_quality', type=int, default=18, choices=range(52), metavar='[0-51]') - program.add_argument('--max-memory', help='maximum amount of RAM in GB', dest='max_memory', type=int, default=suggest_max_memory()) - program.add_argument('--execution-provider', help='available execution provider (choices: cpu, ...)', dest='execution_provider', default=['cpu'], choices=suggest_execution_providers(), nargs='+') - program.add_argument('--execution-threads', help='number of execution threads', dest='execution_threads', type=int, default=suggest_execution_threads()) - program.add_argument('-v', '--version', action='version', version=f'{roop.metadata.name} {roop.metadata.version}') - - args = program.parse_args() - - roop.globals.source_path = args.source_path - roop.globals.target_path = args.target_path - roop.globals.output_path = normalize_output_path(roop.globals.source_path, roop.globals.target_path, args.output_path) - roop.globals.frame_processors = args.frame_processor - roop.globals.headless = args.source_path or args.target_path or args.output_path - roop.globals.keep_fps = args.keep_fps - roop.globals.keep_audio = args.keep_audio - roop.globals.keep_frames = args.keep_frames - roop.globals.many_faces = 
args.many_faces - roop.globals.video_encoder = args.video_encoder - roop.globals.video_quality = args.video_quality - roop.globals.max_memory = args.max_memory - roop.globals.execution_providers = decode_execution_providers(args.execution_provider) - roop.globals.execution_threads = args.execution_threads - - -def encode_execution_providers(execution_providers: List[str]) -> List[str]: - return [execution_provider.replace('ExecutionProvider', '').lower() for execution_provider in execution_providers] - - -def decode_execution_providers(execution_providers: List[str]) -> List[str]: - return [provider for provider, encoded_execution_provider in zip(onnxruntime.get_available_providers(), encode_execution_providers(onnxruntime.get_available_providers())) - if any(execution_provider in encoded_execution_provider for execution_provider in execution_providers)] - - -def suggest_max_memory() -> int: - if platform.system().lower() == 'darwin': - return 4 - return 16 - - -def suggest_execution_providers() -> List[str]: - return encode_execution_providers(onnxruntime.get_available_providers()) - - -def suggest_execution_threads() -> int: - if 'DmlExecutionProvider' in roop.globals.execution_providers: - return 1 - if 'ROCMExecutionProvider' in roop.globals.execution_providers: - return 1 - return 8 - - -def limit_resources() -> None: - # prevent tensorflow memory leak - gpus = tensorflow.config.experimental.list_physical_devices('GPU') - for gpu in gpus: - tensorflow.config.experimental.set_virtual_device_configuration(gpu, [ - tensorflow.config.experimental.VirtualDeviceConfiguration(memory_limit=1024) - ]) - # limit memory usage - if roop.globals.max_memory: - memory = roop.globals.max_memory * 1024 ** 3 - if platform.system().lower() == 'darwin': - memory = roop.globals.max_memory * 1024 ** 6 - if platform.system().lower() == 'windows': - import ctypes - kernel32 = ctypes.windll.kernel32 - kernel32.SetProcessWorkingSetSize(-1, ctypes.c_size_t(memory), ctypes.c_size_t(memory)) - else: - import resource - resource.setrlimit(resource.RLIMIT_DATA, (memory, memory)) - - -def release_resources() -> None: - if 'CUDAExecutionProvider' in roop.globals.execution_providers: - torch.cuda.empty_cache() - - -def pre_check() -> bool: - if sys.version_info < (3, 9): - update_status('Python version is not supported - please upgrade to 3.9 or higher.') - return False - if not shutil.which('ffmpeg'): - update_status('ffmpeg is not installed.') - return False - return True - - -def update_status(message: str, scope: str = 'ROOP.CORE') -> None: - print(f'[{scope}] {message}') - if not roop.globals.headless: - ui.update_status(message) - - -def start() -> None: - for frame_processor in get_frame_processors_modules(roop.globals.frame_processors): - if not frame_processor.pre_start(): - return - # process image to image - if has_image_extension(roop.globals.target_path): - shutil.copy2(roop.globals.target_path, roop.globals.output_path) - for frame_processor in get_frame_processors_modules(roop.globals.frame_processors): - update_status('Progressing...', frame_processor.NAME) - frame_processor.process_image(roop.globals.source_path, roop.globals.output_path, roop.globals.output_path) - frame_processor.post_process() - release_resources() - if is_image(roop.globals.target_path): - update_status('Processing to image succeed!') - else: - update_status('Processing to image failed!') - return - # process image to videos - update_status('Creating temp resources...') - create_temp(roop.globals.target_path) - 
update_status('Extracting frames...') - extract_frames(roop.globals.target_path) - temp_frame_paths = get_temp_frame_paths(roop.globals.target_path) - for frame_processor in get_frame_processors_modules(roop.globals.frame_processors): - update_status('Progressing...', frame_processor.NAME) - frame_processor.process_video(roop.globals.source_path, temp_frame_paths) - frame_processor.post_process() - release_resources() - # handles fps - if roop.globals.keep_fps: - update_status('Detecting fps...') - fps = detect_fps(roop.globals.target_path) - update_status(f'Creating video with {fps} fps...') - create_video(roop.globals.target_path, fps) - else: - update_status('Creating video with 30.0 fps...') - create_video(roop.globals.target_path) - # handle audio - if roop.globals.keep_audio: - if roop.globals.keep_fps: - update_status('Restoring audio...') - else: - update_status('Restoring audio might cause issues as fps are not kept...') - restore_audio(roop.globals.target_path, roop.globals.output_path) - else: - move_temp(roop.globals.target_path, roop.globals.output_path) - # clean and validate - clean_temp(roop.globals.target_path) - if is_video(roop.globals.target_path): - update_status('Processing to video succeed!') - else: - update_status('Processing to video failed!') - - -def destroy() -> None: - if roop.globals.target_path: - clean_temp(roop.globals.target_path) - quit() - - -def run() -> None: - parse_args() - if not pre_check(): - return - for frame_processor in get_frame_processors_modules(roop.globals.frame_processors): - if not frame_processor.pre_check(): - return - limit_resources() - if roop.globals.headless: - start() - else: - window = ui.init(start, destroy) - window.mainloop() diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_l_t_a_g.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_l_t_a_g.py deleted file mode 100644 index 24f5e131f0c615dcf86b0494854d9a3a5a1284f2..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_l_t_a_g.py +++ /dev/null @@ -1,64 +0,0 @@ -from fontTools.misc.textTools import bytesjoin, tobytes, safeEval -from . import DefaultTable -import struct - -# https://developer.apple.com/fonts/TrueType-Reference-Manual/RM06/Chap6ltag.html - - -class table__l_t_a_g(DefaultTable.DefaultTable): - def __init__(self, tag=None): - DefaultTable.DefaultTable.__init__(self, tag) - self.version, self.flags = 1, 0 - self.tags = [] - - def addTag(self, tag): - """Add 'tag' to the list of langauge tags if not already there. - - Returns the integer index of 'tag' in the list of all tags. 
- """ - try: - return self.tags.index(tag) - except ValueError: - self.tags.append(tag) - return len(self.tags) - 1 - - def decompile(self, data, ttFont): - self.version, self.flags, numTags = struct.unpack(">LLL", data[:12]) - assert self.version == 1 - self.tags = [] - for i in range(numTags): - pos = 12 + i * 4 - offset, length = struct.unpack(">HH", data[pos : pos + 4]) - tag = data[offset : offset + length].decode("ascii") - self.tags.append(tag) - - def compile(self, ttFont): - dataList = [struct.pack(">LLL", self.version, self.flags, len(self.tags))] - stringPool = "" - for tag in self.tags: - offset = stringPool.find(tag) - if offset < 0: - offset = len(stringPool) - stringPool = stringPool + tag - offset = offset + 12 + len(self.tags) * 4 - dataList.append(struct.pack(">HH", offset, len(tag))) - dataList.append(tobytes(stringPool)) - return bytesjoin(dataList) - - def toXML(self, writer, ttFont): - writer.simpletag("version", value=self.version) - writer.newline() - writer.simpletag("flags", value=self.flags) - writer.newline() - for tag in self.tags: - writer.simpletag("LanguageTag", tag=tag) - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - if not hasattr(self, "tags"): - self.tags = [] - if name == "LanguageTag": - self.tags.append(attrs["tag"]) - elif "value" in attrs: - value = safeEval(attrs["value"]) - setattr(self, name, value) diff --git a/spaces/cihyFjudo/fairness-paper-search/Giorgio.Vanni.2014.Super.Hits..Il.Meglio.del.Megli championsleague zitt Experience the Magic of Giorgio Vannis Super Hits.md b/spaces/cihyFjudo/fairness-paper-search/Giorgio.Vanni.2014.Super.Hits..Il.Meglio.del.Megli championsleague zitt Experience the Magic of Giorgio Vannis Super Hits.md deleted file mode 100644 index 11d56e49a4b4981b760487fb467cb4d963de50d4..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Giorgio.Vanni.2014.Super.Hits..Il.Meglio.del.Megli championsleague zitt Experience the Magic of Giorgio Vannis Super Hits.md +++ /dev/null @@ -1,6 +0,0 @@ -

Giorgio.Vanni.2014.Super.Hits..Il.Meglio.del.Megli championsleague zitt


Download Ziphttps://tinurli.com/2uwi3g



- - aaccfb2cb3
-
-
-

diff --git a/spaces/cihyFjudo/fairness-paper-search/Janome Digitizer Pro Software Download Torrent Download Downloadl Explore the Features and Benefits of the Software.md b/spaces/cihyFjudo/fairness-paper-search/Janome Digitizer Pro Software Download Torrent Download Downloadl Explore the Features and Benefits of the Software.md deleted file mode 100644 index 961264aac7024b2b24d58ecac4a7959bcb094a91..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Janome Digitizer Pro Software Download Torrent Download Downloadl Explore the Features and Benefits of the Software.md +++ /dev/null @@ -1,5 +0,0 @@ - -

For customers who do not have a CD-ROM drive in their computer, below are links to download the software. To activate the software, an activation code is required. See your local authorized dealer to purchase the software and receive an activation code.

-

Janome Digitizer Pro Software Download Torrent Download Downloadl


Download File ··· https://tinurli.com/2uwjZc



aaccfb2cb3
-
-
\ No newline at end of file diff --git a/spaces/cleanmaster/so-vits-svc-akagi/app.py b/spaces/cleanmaster/so-vits-svc-akagi/app.py deleted file mode 100644 index 1bcbed71e114bb3da48f841c2d13fc4ca13d5377..0000000000000000000000000000000000000000 --- a/spaces/cleanmaster/so-vits-svc-akagi/app.py +++ /dev/null @@ -1,58 +0,0 @@ -from inference.infer_tool_grad import VitsSvc -import gradio as gr -import os - -class VitsGradio: - def __init__(self): - self.so = VitsSvc() - self.lspk = [] - self.modelPaths = [] - for root,dirs,files in os.walk("checkpoints"): - for dir in dirs: - self.modelPaths.append(dir) - with gr.Blocks() as self.Vits: - with gr.Tab("转换"): - with gr.Row(visible=False) as self.VoiceConversion: - with gr.Column(): - with gr.Row(): - with gr.Column(): - self.srcaudio = gr.Audio(label = "输入音频") - self.record = gr.Audio(source="microphone", label="或者录制你的声音") - self.btnVC = gr.Button("说话人转换(上传的音频)") - self.btnVC2 = gr.Button("说话人转换(录制的音频)") - with gr.Column(): - self.dsid = gr.Dropdown(label = "目标角色", choices = self.lspk) - self.tran = gr.Slider(label = "升降调(男声输入需微调,女声输入需降低8~12)", maximum = 60, minimum = -60, step = 1, value = 0) - self.th = gr.Slider(label = "切片阈值", maximum = 32767, minimum = -32768, step = 0.1, value = -40) - with gr.Row(): - self.VCOutputs = gr.Audio() - self.btnVC.click(self.so.inference, inputs=[self.srcaudio,self.dsid,self.tran,self.th], outputs=[self.VCOutputs]) - self.btnVC2.click(self.so.inference, inputs=[self.record,self.dsid,self.tran,self.th], outputs=[self.VCOutputs]) - with gr.Tab("选择模型"): - with gr.Column(): - modelstrs = gr.Dropdown(label = "模型", choices = self.modelPaths, value = self.modelPaths[0], type = "value") - devicestrs = gr.Dropdown(label = "设备(只能选择cpu)", choices = ["cpu","cuda"], value = "cpu", type = "value") - btnMod = gr.Button("载入模型") - btnMod.click(self.loadModel, inputs=[modelstrs,devicestrs], outputs = [self.dsid,self.VoiceConversion]) - - def loadModel(self, path, device): - self.lspk = [] - self.so.set_device(device) - self.so.loadCheckpoint(path) - for spk, sid in self.so.hps.spk.items(): - self.lspk.append(spk) - VChange = gr.update(visible = True) - SDChange = gr.update(choices = self.lspk, value = self.lspk[0]) - return [SDChange,VChange] - - def chooseAudio(self, record, srcaudio, dsid, tran, th): - if not record is None: - self.file=record - elif not srcaudio is None: - self.file=srcaudio - return(self.so.inference(self.file,self.dsid,self.tran,self.th)) - - -grVits = VitsGradio() - -grVits.Vits.launch() \ No newline at end of file diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/av1_parse.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/av1_parse.h deleted file mode 100644 index f4a5d2830ec00c112d69f184dd11770bbfbe3463..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/av1_parse.h +++ /dev/null @@ -1,184 +0,0 @@ -/* - * AV1 common parsing code - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef AVCODEC_AV1_PARSE_H -#define AVCODEC_AV1_PARSE_H - -#include -#include - -#include "libavutil/error.h" -#include "libavutil/intmath.h" -#include "libavutil/macros.h" - -#include "av1.h" -#include "get_bits.h" - -// OBU header fields + max leb128 length -#define MAX_OBU_HEADER_SIZE (2 + 8) - -typedef struct AV1OBU { - /** Size of payload */ - int size; - const uint8_t *data; - - /** - * Size, in bits, of just the data, excluding the trailing_one_bit and - * any trailing padding. - */ - int size_bits; - - /** Size of entire OBU, including header */ - int raw_size; - const uint8_t *raw_data; - - /** GetBitContext initialized to the start of the payload */ - GetBitContext gb; - - int type; - - int temporal_id; - int spatial_id; -} AV1OBU; - -/** An input packet split into OBUs */ -typedef struct AV1Packet { - AV1OBU *obus; - int nb_obus; - int obus_allocated; - unsigned obus_allocated_size; -} AV1Packet; - -/** - * Extract an OBU from a raw bitstream. - * - * @note This function does not copy or store any bitstream data. All - * the pointers in the AV1OBU structure will be valid as long - * as the input buffer also is. - */ -int ff_av1_extract_obu(AV1OBU *obu, const uint8_t *buf, int length, - void *logctx); - -/** - * Split an input packet into OBUs. - * - * @note This function does not copy or store any bitstream data. All - * the pointers in the AV1Packet structure will be valid as - * long as the input buffer also is. - */ -int ff_av1_packet_split(AV1Packet *pkt, const uint8_t *buf, int length, - void *logctx); - -/** - * Free all the allocated memory in the packet. - */ -void ff_av1_packet_uninit(AV1Packet *pkt); - -static inline int64_t leb128(GetBitContext *gb) { - int64_t ret = 0; - int i; - - for (i = 0; i < 8; i++) { - int byte = get_bits(gb, 8); - ret |= (int64_t)(byte & 0x7f) << (i * 7); - if (!(byte & 0x80)) - break; - } - return ret; -} - -static inline int parse_obu_header(const uint8_t *buf, int buf_size, - int64_t *obu_size, int *start_pos, int *type, - int *temporal_id, int *spatial_id) -{ - GetBitContext gb; - int ret, extension_flag, has_size_flag; - int64_t size; - - ret = init_get_bits8(&gb, buf, FFMIN(buf_size, MAX_OBU_HEADER_SIZE)); - if (ret < 0) - return ret; - - if (get_bits1(&gb) != 0) // obu_forbidden_bit - return AVERROR_INVALIDDATA; - - *type = get_bits(&gb, 4); - extension_flag = get_bits1(&gb); - has_size_flag = get_bits1(&gb); - skip_bits1(&gb); // obu_reserved_1bit - - if (extension_flag) { - *temporal_id = get_bits(&gb, 3); - *spatial_id = get_bits(&gb, 2); - skip_bits(&gb, 3); // extension_header_reserved_3bits - } else { - *temporal_id = *spatial_id = 0; - } - - *obu_size = has_size_flag ? 
leb128(&gb) - : buf_size - 1 - extension_flag; - - if (get_bits_left(&gb) < 0) - return AVERROR_INVALIDDATA; - - *start_pos = get_bits_count(&gb) / 8; - - size = *obu_size + *start_pos; - - if (size > buf_size) - return AVERROR_INVALIDDATA; - - return size; -} - -static inline int get_obu_bit_length(const uint8_t *buf, int size, int type) -{ - int v; - - /* There are no trailing bits on these */ - if (type == AV1_OBU_TILE_GROUP || - type == AV1_OBU_TILE_LIST || - type == AV1_OBU_FRAME) { - if (size > INT_MAX / 8) - return AVERROR(ERANGE); - else - return size * 8; - } - - while (size > 0 && buf[size - 1] == 0) - size--; - - if (!size) - return 0; - - v = buf[size - 1]; - - if (size > INT_MAX / 8) - return AVERROR(ERANGE); - size *= 8; - - /* Remove the trailing_one_bit and following trailing zeros */ - if (v) - size -= ff_ctz(v) + 1; - - return size; -} - -#endif /* AVCODEC_AV1_PARSE_H */ diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libx265.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libx265.c deleted file mode 100644 index 420d0953af158022055d90b78add5bd6e70581c5..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libx265.c +++ /dev/null @@ -1,909 +0,0 @@ -/* - * libx265 encoder - * - * Copyright (c) 2013-2014 Derek Buitenhuis - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#if defined(_MSC_VER) -#define X265_API_IMPORTS 1 -#endif - -#include -#include - -#include "libavutil/avassert.h" -#include "libavutil/buffer.h" -#include "libavutil/internal.h" -#include "libavutil/common.h" -#include "libavutil/opt.h" -#include "libavutil/pixdesc.h" -#include "avcodec.h" -#include "codec_internal.h" -#include "encode.h" -#include "internal.h" -#include "packet_internal.h" -#include "atsc_a53.h" -#include "sei.h" - -typedef struct ReorderedData { -#if FF_API_REORDERED_OPAQUE - int64_t reordered_opaque; -#endif - int64_t duration; - - void *frame_opaque; - AVBufferRef *frame_opaque_ref; - - int in_use; -} ReorderedData; - -typedef struct libx265Context { - const AVClass *class; - - x265_encoder *encoder; - x265_param *params; - const x265_api *api; - - float crf; - int cqp; - int forced_idr; - char *preset; - char *tune; - char *profile; - AVDictionary *x265_opts; - - void *sei_data; - int sei_data_size; - int udu_sei; - int a53_cc; - - ReorderedData *rd; - int nb_rd; - - /** - * If the encoder does not support ROI then warn the first time we - * encounter a frame with ROI side data. 
- */ - int roi_warned; -} libx265Context; - -static int is_keyframe(NalUnitType naltype) -{ - switch (naltype) { - case NAL_UNIT_CODED_SLICE_BLA_W_LP: - case NAL_UNIT_CODED_SLICE_BLA_W_RADL: - case NAL_UNIT_CODED_SLICE_BLA_N_LP: - case NAL_UNIT_CODED_SLICE_IDR_W_RADL: - case NAL_UNIT_CODED_SLICE_IDR_N_LP: - case NAL_UNIT_CODED_SLICE_CRA: - return 1; - default: - return 0; - } -} - -static int rd_get(libx265Context *ctx) -{ - const int add = 16; - - ReorderedData *tmp; - int idx; - - for (int i = 0; i < ctx->nb_rd; i++) - if (!ctx->rd[i].in_use) { - ctx->rd[i].in_use = 1; - return i; - } - - tmp = av_realloc_array(ctx->rd, ctx->nb_rd + add, sizeof(*ctx->rd)); - if (!tmp) - return AVERROR(ENOMEM); - memset(tmp + ctx->nb_rd, 0, sizeof(*tmp) * add); - - ctx->rd = tmp; - ctx->nb_rd += add; - - idx = ctx->nb_rd - add; - ctx->rd[idx].in_use = 1; - - return idx; -} - -static void rd_release(libx265Context *ctx, int idx) -{ - av_assert0(idx >= 0 && idx < ctx->nb_rd); - av_buffer_unref(&ctx->rd[idx].frame_opaque_ref); - memset(&ctx->rd[idx], 0, sizeof(ctx->rd[idx])); -} - -static av_cold int libx265_encode_close(AVCodecContext *avctx) -{ - libx265Context *ctx = avctx->priv_data; - - ctx->api->param_free(ctx->params); - av_freep(&ctx->sei_data); - - for (int i = 0; i < ctx->nb_rd; i++) - rd_release(ctx, i); - av_freep(&ctx->rd); - - if (ctx->encoder) - ctx->api->encoder_close(ctx->encoder); - - return 0; -} - -static av_cold int libx265_param_parse_float(AVCodecContext *avctx, - const char *key, float value) -{ - libx265Context *ctx = avctx->priv_data; - char buf[256]; - - snprintf(buf, sizeof(buf), "%2.2f", value); - if (ctx->api->param_parse(ctx->params, key, buf) == X265_PARAM_BAD_VALUE) { - av_log(avctx, AV_LOG_ERROR, "Invalid value %2.2f for param \"%s\".\n", value, key); - return AVERROR(EINVAL); - } - - return 0; -} - -static av_cold int libx265_param_parse_int(AVCodecContext *avctx, - const char *key, int value) -{ - libx265Context *ctx = avctx->priv_data; - char buf[256]; - - snprintf(buf, sizeof(buf), "%d", value); - if (ctx->api->param_parse(ctx->params, key, buf) == X265_PARAM_BAD_VALUE) { - av_log(avctx, AV_LOG_ERROR, "Invalid value %d for param \"%s\".\n", value, key); - return AVERROR(EINVAL); - } - - return 0; -} - -static av_cold int libx265_encode_init(AVCodecContext *avctx) -{ - libx265Context *ctx = avctx->priv_data; - AVCPBProperties *cpb_props = NULL; - const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(avctx->pix_fmt); - int ret; - - ctx->api = x265_api_get(desc->comp[0].depth); - if (!ctx->api) - ctx->api = x265_api_get(0); - - ctx->params = ctx->api->param_alloc(); - if (!ctx->params) { - av_log(avctx, AV_LOG_ERROR, "Could not allocate x265 param structure.\n"); - return AVERROR(ENOMEM); - } - - if (ctx->api->param_default_preset(ctx->params, ctx->preset, ctx->tune) < 0) { - int i; - - av_log(avctx, AV_LOG_ERROR, "Error setting preset/tune %s/%s.\n", ctx->preset, ctx->tune); - av_log(avctx, AV_LOG_INFO, "Possible presets:"); - for (i = 0; x265_preset_names[i]; i++) - av_log(avctx, AV_LOG_INFO, " %s", x265_preset_names[i]); - - av_log(avctx, AV_LOG_INFO, "\n"); - av_log(avctx, AV_LOG_INFO, "Possible tunes:"); - for (i = 0; x265_tune_names[i]; i++) - av_log(avctx, AV_LOG_INFO, " %s", x265_tune_names[i]); - - av_log(avctx, AV_LOG_INFO, "\n"); - - return AVERROR(EINVAL); - } - - ctx->params->frameNumThreads = avctx->thread_count; - if (avctx->framerate.num > 0 && avctx->framerate.den > 0) { - ctx->params->fpsNum = avctx->framerate.num; - ctx->params->fpsDenom = 
avctx->framerate.den; - } else { - ctx->params->fpsNum = avctx->time_base.den; - ctx->params->fpsDenom = avctx->time_base.num * avctx->ticks_per_frame; - } - ctx->params->sourceWidth = avctx->width; - ctx->params->sourceHeight = avctx->height; - ctx->params->bEnablePsnr = !!(avctx->flags & AV_CODEC_FLAG_PSNR); - ctx->params->bOpenGOP = !(avctx->flags & AV_CODEC_FLAG_CLOSED_GOP); - - /* Tune the CTU size based on input resolution. */ - if (ctx->params->sourceWidth < 64 || ctx->params->sourceHeight < 64) - ctx->params->maxCUSize = 32; - if (ctx->params->sourceWidth < 32 || ctx->params->sourceHeight < 32) - ctx->params->maxCUSize = 16; - if (ctx->params->sourceWidth < 16 || ctx->params->sourceHeight < 16) { - av_log(avctx, AV_LOG_ERROR, "Image size is too small (%dx%d).\n", - ctx->params->sourceWidth, ctx->params->sourceHeight); - return AVERROR(EINVAL); - } - - - ctx->params->vui.bEnableVideoSignalTypePresentFlag = 1; - - if (avctx->color_range != AVCOL_RANGE_UNSPECIFIED) - ctx->params->vui.bEnableVideoFullRangeFlag = - avctx->color_range == AVCOL_RANGE_JPEG; - else - ctx->params->vui.bEnableVideoFullRangeFlag = - (desc->flags & AV_PIX_FMT_FLAG_RGB) || - avctx->pix_fmt == AV_PIX_FMT_YUVJ420P || - avctx->pix_fmt == AV_PIX_FMT_YUVJ422P || - avctx->pix_fmt == AV_PIX_FMT_YUVJ444P; - - if ((avctx->color_primaries <= AVCOL_PRI_SMPTE432 && - avctx->color_primaries != AVCOL_PRI_UNSPECIFIED) || - (avctx->color_trc <= AVCOL_TRC_ARIB_STD_B67 && - avctx->color_trc != AVCOL_TRC_UNSPECIFIED) || - (avctx->colorspace <= AVCOL_SPC_ICTCP && - avctx->colorspace != AVCOL_SPC_UNSPECIFIED)) { - - ctx->params->vui.bEnableColorDescriptionPresentFlag = 1; - - // x265 validates the parameters internally - ctx->params->vui.colorPrimaries = avctx->color_primaries; - ctx->params->vui.transferCharacteristics = avctx->color_trc; -#if X265_BUILD >= 159 - if (avctx->color_trc == AVCOL_TRC_ARIB_STD_B67) - ctx->params->preferredTransferCharacteristics = ctx->params->vui.transferCharacteristics; -#endif - ctx->params->vui.matrixCoeffs = avctx->colorspace; - } - - // chroma sample location values are to be ignored in case of non-4:2:0 - // according to the specification, so we only write them out in case of - // 4:2:0 (log2_chroma_{w,h} == 1). - ctx->params->vui.bEnableChromaLocInfoPresentFlag = - avctx->chroma_sample_location != AVCHROMA_LOC_UNSPECIFIED && - desc->log2_chroma_w == 1 && desc->log2_chroma_h == 1; - - if (ctx->params->vui.bEnableChromaLocInfoPresentFlag) { - ctx->params->vui.chromaSampleLocTypeTopField = - ctx->params->vui.chromaSampleLocTypeBottomField = - avctx->chroma_sample_location - 1; - } - - if (avctx->sample_aspect_ratio.num > 0 && avctx->sample_aspect_ratio.den > 0) { - char sar[12]; - int sar_num, sar_den; - - av_reduce(&sar_num, &sar_den, - avctx->sample_aspect_ratio.num, - avctx->sample_aspect_ratio.den, 65535); - snprintf(sar, sizeof(sar), "%d:%d", sar_num, sar_den); - if (ctx->api->param_parse(ctx->params, "sar", sar) == X265_PARAM_BAD_VALUE) { - av_log(avctx, AV_LOG_ERROR, "Invalid SAR: %d:%d.\n", sar_num, sar_den); - return AVERROR_INVALIDDATA; - } - } - - switch (desc->log2_chroma_w) { - // 4:4:4, RGB. 
gray - case 0: - // gray - if (desc->nb_components == 1) { - if (ctx->api->api_build_number < 85) { - av_log(avctx, AV_LOG_ERROR, - "libx265 version is %d, must be at least 85 for gray encoding.\n", - ctx->api->api_build_number); - return AVERROR_INVALIDDATA; - } - ctx->params->internalCsp = X265_CSP_I400; - break; - } - - // set identity matrix for RGB - if (desc->flags & AV_PIX_FMT_FLAG_RGB) { - ctx->params->vui.matrixCoeffs = AVCOL_SPC_RGB; - ctx->params->vui.bEnableVideoSignalTypePresentFlag = 1; - ctx->params->vui.bEnableColorDescriptionPresentFlag = 1; - } - - ctx->params->internalCsp = X265_CSP_I444; - break; - // 4:2:0, 4:2:2 - case 1: - ctx->params->internalCsp = desc->log2_chroma_h == 1 ? - X265_CSP_I420 : X265_CSP_I422; - break; - default: - av_log(avctx, AV_LOG_ERROR, - "Pixel format '%s' cannot be mapped to a libx265 CSP!\n", - desc->name); - return AVERROR_BUG; - } - - if (ctx->crf >= 0) { - char crf[6]; - - snprintf(crf, sizeof(crf), "%2.2f", ctx->crf); - if (ctx->api->param_parse(ctx->params, "crf", crf) == X265_PARAM_BAD_VALUE) { - av_log(avctx, AV_LOG_ERROR, "Invalid crf: %2.2f.\n", ctx->crf); - return AVERROR(EINVAL); - } - } else if (avctx->bit_rate > 0) { - ctx->params->rc.bitrate = avctx->bit_rate / 1000; - ctx->params->rc.rateControlMode = X265_RC_ABR; - } else if (ctx->cqp >= 0) { - ret = libx265_param_parse_int(avctx, "qp", ctx->cqp); - if (ret < 0) - return ret; - } - - if (avctx->qmin >= 0) { - ret = libx265_param_parse_int(avctx, "qpmin", avctx->qmin); - if (ret < 0) - return ret; - } - if (avctx->qmax >= 0) { - ret = libx265_param_parse_int(avctx, "qpmax", avctx->qmax); - if (ret < 0) - return ret; - } - if (avctx->max_qdiff >= 0) { - ret = libx265_param_parse_int(avctx, "qpstep", avctx->max_qdiff); - if (ret < 0) - return ret; - } - if (avctx->qblur >= 0) { - ret = libx265_param_parse_float(avctx, "qblur", avctx->qblur); - if (ret < 0) - return ret; - } - if (avctx->qcompress >= 0) { - ret = libx265_param_parse_float(avctx, "qcomp", avctx->qcompress); - if (ret < 0) - return ret; - } - if (avctx->i_quant_factor >= 0) { - ret = libx265_param_parse_float(avctx, "ipratio", avctx->i_quant_factor); - if (ret < 0) - return ret; - } - if (avctx->b_quant_factor >= 0) { - ret = libx265_param_parse_float(avctx, "pbratio", avctx->b_quant_factor); - if (ret < 0) - return ret; - } - - ctx->params->rc.vbvBufferSize = avctx->rc_buffer_size / 1000; - ctx->params->rc.vbvMaxBitrate = avctx->rc_max_rate / 1000; - - cpb_props = ff_add_cpb_side_data(avctx); - if (!cpb_props) - return AVERROR(ENOMEM); - cpb_props->buffer_size = ctx->params->rc.vbvBufferSize * 1000; - cpb_props->max_bitrate = ctx->params->rc.vbvMaxBitrate * 1000LL; - cpb_props->avg_bitrate = ctx->params->rc.bitrate * 1000LL; - - if (!(avctx->flags & AV_CODEC_FLAG_GLOBAL_HEADER)) - ctx->params->bRepeatHeaders = 1; - - if (avctx->gop_size >= 0) { - ret = libx265_param_parse_int(avctx, "keyint", avctx->gop_size); - if (ret < 0) - return ret; - } - if (avctx->keyint_min > 0) { - ret = libx265_param_parse_int(avctx, "min-keyint", avctx->keyint_min); - if (ret < 0) - return ret; - } - if (avctx->max_b_frames >= 0) { - ret = libx265_param_parse_int(avctx, "bframes", avctx->max_b_frames); - if (ret < 0) - return ret; - } - if (avctx->refs >= 0) { - ret = libx265_param_parse_int(avctx, "ref", avctx->refs); - if (ret < 0) - return ret; - } - - { - AVDictionaryEntry *en = NULL; - while ((en = av_dict_get(ctx->x265_opts, "", en, AV_DICT_IGNORE_SUFFIX))) { - int parse_ret = ctx->api->param_parse(ctx->params, en->key, en->value); 
- - switch (parse_ret) { - case X265_PARAM_BAD_NAME: - av_log(avctx, AV_LOG_WARNING, - "Unknown option: %s.\n", en->key); - break; - case X265_PARAM_BAD_VALUE: - av_log(avctx, AV_LOG_WARNING, - "Invalid value for %s: %s.\n", en->key, en->value); - break; - default: - break; - } - } - } - - if (ctx->params->rc.vbvBufferSize && avctx->rc_initial_buffer_occupancy > 1000 && - ctx->params->rc.vbvBufferInit == 0.9) { - ctx->params->rc.vbvBufferInit = (float)avctx->rc_initial_buffer_occupancy / 1000; - } - - if (ctx->profile) { - if (ctx->api->param_apply_profile(ctx->params, ctx->profile) < 0) { - int i; - av_log(avctx, AV_LOG_ERROR, "Invalid or incompatible profile set: %s.\n", ctx->profile); - av_log(avctx, AV_LOG_INFO, "Possible profiles:"); - for (i = 0; x265_profile_names[i]; i++) - av_log(avctx, AV_LOG_INFO, " %s", x265_profile_names[i]); - av_log(avctx, AV_LOG_INFO, "\n"); - return AVERROR(EINVAL); - } - } - - ctx->encoder = ctx->api->encoder_open(ctx->params); - if (!ctx->encoder) { - av_log(avctx, AV_LOG_ERROR, "Cannot open libx265 encoder.\n"); - libx265_encode_close(avctx); - return AVERROR_INVALIDDATA; - } - - if (avctx->flags & AV_CODEC_FLAG_GLOBAL_HEADER) { - x265_nal *nal; - int nnal; - - avctx->extradata_size = ctx->api->encoder_headers(ctx->encoder, &nal, &nnal); - if (avctx->extradata_size <= 0) { - av_log(avctx, AV_LOG_ERROR, "Cannot encode headers.\n"); - libx265_encode_close(avctx); - return AVERROR_INVALIDDATA; - } - - avctx->extradata = av_malloc(avctx->extradata_size + AV_INPUT_BUFFER_PADDING_SIZE); - if (!avctx->extradata) { - av_log(avctx, AV_LOG_ERROR, - "Cannot allocate HEVC header of size %d.\n", avctx->extradata_size); - libx265_encode_close(avctx); - return AVERROR(ENOMEM); - } - - memcpy(avctx->extradata, nal[0].payload, avctx->extradata_size); - memset(avctx->extradata + avctx->extradata_size, 0, AV_INPUT_BUFFER_PADDING_SIZE); - } - - return 0; -} - -static av_cold int libx265_encode_set_roi(libx265Context *ctx, const AVFrame *frame, x265_picture* pic) -{ - AVFrameSideData *sd = av_frame_get_side_data(frame, AV_FRAME_DATA_REGIONS_OF_INTEREST); - if (sd) { - if (ctx->params->rc.aqMode == X265_AQ_NONE) { - if (!ctx->roi_warned) { - ctx->roi_warned = 1; - av_log(ctx, AV_LOG_WARNING, "Adaptive quantization must be enabled to use ROI encoding, skipping ROI.\n"); - } - } else { - /* 8x8 block when qg-size is 8, 16*16 block otherwise. */ - int mb_size = (ctx->params->rc.qgSize == 8) ? 8 : 16; - int mbx = (frame->width + mb_size - 1) / mb_size; - int mby = (frame->height + mb_size - 1) / mb_size; - int qp_range = 51 + 6 * (pic->bitDepth - 8); - int nb_rois; - const AVRegionOfInterest *roi; - uint32_t roi_size; - float *qoffsets; /* will be freed after encode is called. */ - - roi = (const AVRegionOfInterest*)sd->data; - roi_size = roi->self_size; - if (!roi_size || sd->size % roi_size != 0) { - av_log(ctx, AV_LOG_ERROR, "Invalid AVRegionOfInterest.self_size.\n"); - return AVERROR(EINVAL); - } - nb_rois = sd->size / roi_size; - - qoffsets = av_calloc(mbx * mby, sizeof(*qoffsets)); - if (!qoffsets) - return AVERROR(ENOMEM); - - // This list must be iterated in reverse because the first - // region in the list applies when regions overlap. 
- for (int i = nb_rois - 1; i >= 0; i--) { - int startx, endx, starty, endy; - float qoffset; - - roi = (const AVRegionOfInterest*)(sd->data + roi_size * i); - - starty = FFMIN(mby, roi->top / mb_size); - endy = FFMIN(mby, (roi->bottom + mb_size - 1)/ mb_size); - startx = FFMIN(mbx, roi->left / mb_size); - endx = FFMIN(mbx, (roi->right + mb_size - 1)/ mb_size); - - if (roi->qoffset.den == 0) { - av_free(qoffsets); - av_log(ctx, AV_LOG_ERROR, "AVRegionOfInterest.qoffset.den must not be zero.\n"); - return AVERROR(EINVAL); - } - qoffset = roi->qoffset.num * 1.0f / roi->qoffset.den; - qoffset = av_clipf(qoffset * qp_range, -qp_range, +qp_range); - - for (int y = starty; y < endy; y++) - for (int x = startx; x < endx; x++) - qoffsets[x + y*mbx] = qoffset; - } - - pic->quantOffsets = qoffsets; - } - } - return 0; -} - -static void free_picture(libx265Context *ctx, x265_picture *pic) -{ - x265_sei *sei = &pic->userSEI; - for (int i = 0; i < sei->numPayloads; i++) - av_free(sei->payloads[i].payload); - - if (pic->userData) { - int idx = (int)(intptr_t)pic->userData - 1; - rd_release(ctx, idx); - pic->userData = NULL; - } - - av_freep(&pic->quantOffsets); - sei->numPayloads = 0; -} - -static int libx265_encode_frame(AVCodecContext *avctx, AVPacket *pkt, - const AVFrame *pic, int *got_packet) -{ - libx265Context *ctx = avctx->priv_data; - x265_picture x265pic; - x265_picture x265pic_out = { 0 }; - x265_nal *nal; - x265_sei *sei; - uint8_t *dst; - int pict_type; - int payload = 0; - int nnal; - int ret; - int i; - - ctx->api->picture_init(ctx->params, &x265pic); - - sei = &x265pic.userSEI; - sei->numPayloads = 0; - - if (pic) { - ReorderedData *rd; - int rd_idx; - - for (i = 0; i < 3; i++) { - x265pic.planes[i] = pic->data[i]; - x265pic.stride[i] = pic->linesize[i]; - } - - x265pic.pts = pic->pts; - x265pic.bitDepth = av_pix_fmt_desc_get(avctx->pix_fmt)->comp[0].depth; - - x265pic.sliceType = pic->pict_type == AV_PICTURE_TYPE_I ? - (ctx->forced_idr ? X265_TYPE_IDR : X265_TYPE_I) : - pic->pict_type == AV_PICTURE_TYPE_P ? X265_TYPE_P : - pic->pict_type == AV_PICTURE_TYPE_B ? 
X265_TYPE_B : - X265_TYPE_AUTO; - - ret = libx265_encode_set_roi(ctx, pic, &x265pic); - if (ret < 0) - return ret; - - rd_idx = rd_get(ctx); - if (rd_idx < 0) { - free_picture(ctx, &x265pic); - return rd_idx; - } - rd = &ctx->rd[rd_idx]; - - rd->duration = pic->duration; -#if FF_API_REORDERED_OPAQUE -FF_DISABLE_DEPRECATION_WARNINGS - rd->reordered_opaque = pic->reordered_opaque; -FF_ENABLE_DEPRECATION_WARNINGS -#endif - if (avctx->flags & AV_CODEC_FLAG_COPY_OPAQUE) { - rd->frame_opaque = pic->opaque; - ret = av_buffer_replace(&rd->frame_opaque_ref, pic->opaque_ref); - if (ret < 0) { - rd_release(ctx, rd_idx); - free_picture(ctx, &x265pic); - return ret; - } - } - - x265pic.userData = (void*)(intptr_t)(rd_idx + 1); - - if (ctx->a53_cc) { - void *sei_data; - size_t sei_size; - - ret = ff_alloc_a53_sei(pic, 0, &sei_data, &sei_size); - if (ret < 0) { - av_log(ctx, AV_LOG_ERROR, "Not enough memory for closed captions, skipping\n"); - } else if (sei_data) { - void *tmp; - x265_sei_payload *sei_payload; - - tmp = av_fast_realloc(ctx->sei_data, - &ctx->sei_data_size, - (sei->numPayloads + 1) * sizeof(*sei_payload)); - if (!tmp) { - av_free(sei_data); - free_picture(ctx, &x265pic); - return AVERROR(ENOMEM); - } - ctx->sei_data = tmp; - sei->payloads = ctx->sei_data; - sei_payload = &sei->payloads[sei->numPayloads]; - sei_payload->payload = sei_data; - sei_payload->payloadSize = sei_size; - sei_payload->payloadType = SEI_TYPE_USER_DATA_REGISTERED_ITU_T_T35; - sei->numPayloads++; - } - } - - if (ctx->udu_sei) { - for (i = 0; i < pic->nb_side_data; i++) { - AVFrameSideData *side_data = pic->side_data[i]; - void *tmp; - x265_sei_payload *sei_payload; - - if (side_data->type != AV_FRAME_DATA_SEI_UNREGISTERED) - continue; - - tmp = av_fast_realloc(ctx->sei_data, - &ctx->sei_data_size, - (sei->numPayloads + 1) * sizeof(*sei_payload)); - if (!tmp) { - free_picture(ctx, &x265pic); - return AVERROR(ENOMEM); - } - ctx->sei_data = tmp; - sei->payloads = ctx->sei_data; - sei_payload = &sei->payloads[sei->numPayloads]; - sei_payload->payload = av_memdup(side_data->data, side_data->size); - if (!sei_payload->payload) { - free_picture(ctx, &x265pic); - return AVERROR(ENOMEM); - } - sei_payload->payloadSize = side_data->size; - /* Equal to libx265 USER_DATA_UNREGISTERED */ - sei_payload->payloadType = SEI_TYPE_USER_DATA_UNREGISTERED; - sei->numPayloads++; - } - } - } - - ret = ctx->api->encoder_encode(ctx->encoder, &nal, &nnal, - pic ? 
&x265pic : NULL, &x265pic_out); - - for (i = 0; i < sei->numPayloads; i++) - av_free(sei->payloads[i].payload); - av_freep(&x265pic.quantOffsets); - - if (ret < 0) - return AVERROR_EXTERNAL; - - if (!nnal) - return 0; - - for (i = 0; i < nnal; i++) - payload += nal[i].sizeBytes; - - ret = ff_get_encode_buffer(avctx, pkt, payload, 0); - if (ret < 0) { - av_log(avctx, AV_LOG_ERROR, "Error getting output packet.\n"); - return ret; - } - dst = pkt->data; - - for (i = 0; i < nnal; i++) { - memcpy(dst, nal[i].payload, nal[i].sizeBytes); - dst += nal[i].sizeBytes; - - if (is_keyframe(nal[i].type)) - pkt->flags |= AV_PKT_FLAG_KEY; - } - - pkt->pts = x265pic_out.pts; - pkt->dts = x265pic_out.dts; - - switch (x265pic_out.sliceType) { - case X265_TYPE_IDR: - case X265_TYPE_I: - pict_type = AV_PICTURE_TYPE_I; - break; - case X265_TYPE_P: - pict_type = AV_PICTURE_TYPE_P; - break; - case X265_TYPE_B: - case X265_TYPE_BREF: - pict_type = AV_PICTURE_TYPE_B; - break; - default: - av_log(avctx, AV_LOG_ERROR, "Unknown picture type encountered.\n"); - return AVERROR_EXTERNAL; - } - -#if X265_BUILD >= 130 - if (x265pic_out.sliceType == X265_TYPE_B) -#else - if (x265pic_out.frameData.sliceType == 'b') -#endif - pkt->flags |= AV_PKT_FLAG_DISPOSABLE; - - ff_side_data_set_encoder_stats(pkt, x265pic_out.frameData.qp * FF_QP2LAMBDA, NULL, 0, pict_type); - - if (x265pic_out.userData) { - int idx = (int)(intptr_t)x265pic_out.userData - 1; - ReorderedData *rd = &ctx->rd[idx]; - -#if FF_API_REORDERED_OPAQUE -FF_DISABLE_DEPRECATION_WARNINGS - avctx->reordered_opaque = rd->reordered_opaque; -FF_ENABLE_DEPRECATION_WARNINGS -#endif - pkt->duration = rd->duration; - - if (avctx->flags & AV_CODEC_FLAG_COPY_OPAQUE) { - pkt->opaque = rd->frame_opaque; - pkt->opaque_ref = rd->frame_opaque_ref; - rd->frame_opaque_ref = NULL; - } - - rd_release(ctx, idx); - } -#if FF_API_REORDERED_OPAQUE - else { -FF_DISABLE_DEPRECATION_WARNINGS - avctx->reordered_opaque = 0; -FF_ENABLE_DEPRECATION_WARNINGS - } -#endif - - *got_packet = 1; - return 0; -} - -static const enum AVPixelFormat x265_csp_eight[] = { - AV_PIX_FMT_YUV420P, - AV_PIX_FMT_YUVJ420P, - AV_PIX_FMT_YUV422P, - AV_PIX_FMT_YUVJ422P, - AV_PIX_FMT_YUV444P, - AV_PIX_FMT_YUVJ444P, - AV_PIX_FMT_GBRP, - AV_PIX_FMT_GRAY8, - AV_PIX_FMT_NONE -}; - -static const enum AVPixelFormat x265_csp_ten[] = { - AV_PIX_FMT_YUV420P, - AV_PIX_FMT_YUVJ420P, - AV_PIX_FMT_YUV422P, - AV_PIX_FMT_YUVJ422P, - AV_PIX_FMT_YUV444P, - AV_PIX_FMT_YUVJ444P, - AV_PIX_FMT_GBRP, - AV_PIX_FMT_YUV420P10, - AV_PIX_FMT_YUV422P10, - AV_PIX_FMT_YUV444P10, - AV_PIX_FMT_GBRP10, - AV_PIX_FMT_GRAY8, - AV_PIX_FMT_GRAY10, - AV_PIX_FMT_NONE -}; - -static const enum AVPixelFormat x265_csp_twelve[] = { - AV_PIX_FMT_YUV420P, - AV_PIX_FMT_YUVJ420P, - AV_PIX_FMT_YUV422P, - AV_PIX_FMT_YUVJ422P, - AV_PIX_FMT_YUV444P, - AV_PIX_FMT_YUVJ444P, - AV_PIX_FMT_GBRP, - AV_PIX_FMT_YUV420P10, - AV_PIX_FMT_YUV422P10, - AV_PIX_FMT_YUV444P10, - AV_PIX_FMT_GBRP10, - AV_PIX_FMT_YUV420P12, - AV_PIX_FMT_YUV422P12, - AV_PIX_FMT_YUV444P12, - AV_PIX_FMT_GBRP12, - AV_PIX_FMT_GRAY8, - AV_PIX_FMT_GRAY10, - AV_PIX_FMT_GRAY12, - AV_PIX_FMT_NONE -}; - -static av_cold void libx265_encode_init_csp(FFCodec *codec) -{ - if (x265_api_get(12)) - codec->p.pix_fmts = x265_csp_twelve; - else if (x265_api_get(10)) - codec->p.pix_fmts = x265_csp_ten; - else if (x265_api_get(8)) - codec->p.pix_fmts = x265_csp_eight; -} - -#define OFFSET(x) offsetof(libx265Context, x) -#define VE AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM -static const AVOption options[] = { - { "crf", 
"set the x265 crf", OFFSET(crf), AV_OPT_TYPE_FLOAT, { .dbl = -1 }, -1, FLT_MAX, VE }, - { "qp", "set the x265 qp", OFFSET(cqp), AV_OPT_TYPE_INT, { .i64 = -1 }, -1, INT_MAX, VE }, - { "forced-idr", "if forcing keyframes, force them as IDR frames", OFFSET(forced_idr),AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, VE }, - { "preset", "set the x265 preset", OFFSET(preset), AV_OPT_TYPE_STRING, { 0 }, 0, 0, VE }, - { "tune", "set the x265 tune parameter", OFFSET(tune), AV_OPT_TYPE_STRING, { 0 }, 0, 0, VE }, - { "profile", "set the x265 profile", OFFSET(profile), AV_OPT_TYPE_STRING, { 0 }, 0, 0, VE }, - { "udu_sei", "Use user data unregistered SEI if available", OFFSET(udu_sei), AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, VE }, - { "a53cc", "Use A53 Closed Captions (if available)", OFFSET(a53_cc), AV_OPT_TYPE_BOOL, { .i64 = 1 }, 0, 1, VE }, - { "x265-params", "set the x265 configuration using a :-separated list of key=value parameters", OFFSET(x265_opts), AV_OPT_TYPE_DICT, { 0 }, 0, 0, VE }, - { NULL } -}; - -static const AVClass class = { - .class_name = "libx265", - .item_name = av_default_item_name, - .option = options, - .version = LIBAVUTIL_VERSION_INT, -}; - -static const FFCodecDefault x265_defaults[] = { - { "b", "0" }, - { "bf", "-1" }, - { "g", "-1" }, - { "keyint_min", "-1" }, - { "refs", "-1" }, - { "qmin", "-1" }, - { "qmax", "-1" }, - { "qdiff", "-1" }, - { "qblur", "-1" }, - { "qcomp", "-1" }, - { "i_qfactor", "-1" }, - { "b_qfactor", "-1" }, - { NULL }, -}; - -FFCodec ff_libx265_encoder = { - .p.name = "libx265", - CODEC_LONG_NAME("libx265 H.265 / HEVC"), - .p.type = AVMEDIA_TYPE_VIDEO, - .p.id = AV_CODEC_ID_HEVC, - .p.capabilities = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_DELAY | - AV_CODEC_CAP_OTHER_THREADS | - AV_CODEC_CAP_ENCODER_REORDERED_OPAQUE, - .p.priv_class = &class, - .p.wrapper_name = "libx265", - .init = libx265_encode_init, - .init_static_data = libx265_encode_init_csp, - FF_CODEC_ENCODE_CB(libx265_encode_frame), - .close = libx265_encode_close, - .priv_data_size = sizeof(libx265Context), - .defaults = x265_defaults, - .caps_internal = FF_CODEC_CAP_NOT_INIT_THREADSAFE | - FF_CODEC_CAP_AUTO_THREADS, -}; diff --git a/spaces/congsaPfin/Manga-OCR/logs/Assoluto Racing MOD APK Everything You Need to Know about the Game with Dinheiro Infinito.md b/spaces/congsaPfin/Manga-OCR/logs/Assoluto Racing MOD APK Everything You Need to Know about the Game with Dinheiro Infinito.md deleted file mode 100644 index 215f75a1218e89d8290ebe47d33c065f9d5e8f4d..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Assoluto Racing MOD APK Everything You Need to Know about the Game with Dinheiro Infinito.md +++ /dev/null @@ -1,94 +0,0 @@ -
-

Assoluto Racing Mod Apk Dinheiro Infinito: A Complete Guide

-

If you are a fan of racing games, you might have heard of Assoluto Racing, one of the most realistic and immersive racing simulators on mobile devices. But did you know that you can enjoy this game even more with the mod apk dinheiro infinito version? In this article, we will tell you everything you need to know about Assoluto Racing Mod Apk Dinheiro Infinito, including what it is, how to download and install it, why you should play it, and some tips and tricks to help you win every race. Let's get started!

-

What is Assoluto Racing?

-

Assoluto Racing is a racing game developed by Infinity Vector Ltd, a studio based in Hong Kong. The game was released in 2016 and has since gained millions of downloads and positive reviews from players and critics alike. The game aims to provide a realistic and authentic racing experience, with accurate physics, stunning graphics, and licensed cars from famous brands like Ferrari, Lamborghini, BMW, Nissan, and more. You can choose from various modes, such as career, arcade, drift, time attack, or online multiplayer, and race on different tracks around the world. You can also customize your car with different parts, colors, decals, and tuning options.

-

assoluto racing mod apk dinheiro infinito


Download »»» https://urlca.com/2uO7Td



-

Features of Assoluto Racing

-

Some of the main features of Assoluto Racing are:

-
    -
  • Realistic physics engine that simulates the behavior of real cars on different surfaces and conditions.
  • -
  • High-quality graphics that showcase the details of the cars, tracks, environments, and effects.
  • -
  • Licensed cars from over 20 manufacturers, each with their own specifications and performance.
  • -
  • Customizable cars with hundreds of parts, colors, decals, and tuning options.
  • -
  • Different modes to suit your preference and skill level, such as career, arcade, drift, time attack, or online multiplayer.
  • -
  • Different tracks from around the world, each with their own layout, scenery, and challenges.
  • -
  • Online leaderboards and events where you can compete with other players and win rewards.
  • -
-

How to download and install Assoluto Racing Mod Apk Dinheiro Infinito

-

If you want to enjoy Assoluto Racing with unlimited money and coins, you will need to download and install the mod apk dinheiro infinito version. Here are the steps to do so:

-
    -
  1. Go to [this link] and download the mod apk file. You may need to enable unknown sources in your device settings to allow the installation of third-party apps.
  2. -
  3. Once the download is complete, locate the file in your device storage and tap on it to start the installation process.
  4. -
  5. Follow the instructions on the screen and wait for the installation to finish.
  6. -
  7. Launch the game and enjoy!
  8. -
-

Why you should play Assoluto Racing Mod Apk Dinheiro Infinito

-

There are many reasons why you should play Assoluto Racing Mod Apk Dinheiro Infinito. Here are some of them:

-

assoluto racing hack apk unlimited money
-assoluto racing mod apk download latest version
-assoluto racing mod apk android 1
-assoluto racing mod apk revdl
-assoluto racing mod apk obb
-assoluto racing mod apk free shopping
-assoluto racing mod apk all cars unlocked
-assoluto racing mod apk offline
-assoluto racing mod apk rexdl
-assoluto racing mod apk no root
-assoluto racing mod apk data
-assoluto racing mod apk pure
-assoluto racing mod apk unlimited coins
-assoluto racing mod apk unlimited gold
-assoluto racing mod apk unlimited gems
-assoluto racing mod apk unlimited nitro
-assoluto racing mod apk unlimited fuel
-assoluto racing mod apk unlimited cash
-assoluto racing mod apk unlimited tokens
-assoluto racing mod apk unlimited everything
-assoluto racing mod apk full version
-assoluto racing mod apk premium
-assoluto racing mod apk pro
-assoluto racing mod apk vip
-assoluto racing mod apk mega
-assoluto racing mod apk super
-assoluto racing mod apk ultra
-assoluto racing mod apk extreme
-assoluto racing mod apk real physics engine
-assoluto racing mod apk realistic graphics
-assoluto racing mod apk realistic driving physics
-assoluto racing mod apk realistic car sounds
-assoluto racing mod apk realistic damage system
-assoluto racing mod apk realistic weather effects
-assoluto racing mod apk realistic tracks and cars
-assoluto racing mod apk best simulation game
-assoluto racing mod apk best car game
-assoluto racing mod apk best graphics game
-assoluto racing mod apk best physics game
-assoluto racing mod apk best android game
-download assoluto racing mod apk dinheiro infinito gratis
-download assoluto racing mod apk dinheiro infinito 2023
-download assoluto racing mod apk dinheiro infinito atualizado
-download assoluto racing mod apk dinheiro infinito mediafıre
-download assoluto racing mod apk dinheiro infinito mega
-download assoluto racing mod apk dinheiro infinito google drive
-download assoluto racing mod apk dinheiro infinito zippyshare
-download assoluto racing mod apk dinheiro infinito dropbox
-download assoluto racing mod apk dinheiro infinito uptodown

-

Unlimited money and coins

-

With the mod apk dinheiro infinito version, you will have access to unlimited money and coins in the game. This means that you can buy any car you want, upgrade it to the max and more. It also gives you unlimited money and coins to buy and customize any car and track you want. You can download and install the mod apk dinheiro infinito version easily and enjoy the game without any limitations or restrictions. You can also follow some tips and tricks to improve your skills and win more races. Assoluto Racing Mod Apk Dinheiro Infinito is a must-try game for any racing enthusiast. Download it now and start your racing adventure!

-

FAQs

-

Here are some frequently asked questions about Assoluto Racing Mod Apk Dinheiro Infinito:

-
    -
  • Q: Is Assoluto Racing Mod Apk Dinheiro Infinito safe to download and install?
  • -
  • A: Yes, Assoluto Racing Mod Apk Dinheiro Infinito is safe to download and install, as long as you use a trusted source like [this link]. However, you should always be careful when downloading and installing any third-party apps, as they may contain malware or viruses that can harm your device or data.
  • -
  • Q: Is Assoluto Racing Mod Apk Dinheiro Infinito compatible with my device?
  • -
  • A: Assoluto Racing Mod Apk Dinheiro Infinito is compatible with most Android devices that have Android 4.2 or higher. However, some devices may not support the game or the mod apk due to hardware or software limitations. You can check the compatibility of your device before downloading and installing the game.
  • -
  • Q: How can I update Assoluto Racing Mod Apk Dinheiro Infinito?
  • -
  • A: Assoluto Racing Mod Apk Dinheiro Infinito is updated regularly with new features, cars, tracks, events, and bug fixes. You can update the game by downloading and installing the latest mod apk file from [this link]. You may need to uninstall the previous version of the game before installing the new one.
  • -
  • Q: How can I contact the developers of Assoluto Racing?
  • -
  • A: You can contact the developers of Assoluto Racing by visiting their official website, Facebook page, Twitter account, or Instagram account. You can also send them an email at support@assolutogames.com. You can give them your feedback, suggestions, questions, or report any issues or problems you encounter in the game.
  • -
  • Q: How can I support the developers of Assoluto Racing?
  • -
  • A: You can support the developers of Assoluto Racing by rating and reviewing the game on Google Play Store, sharing it with your friends and family, and following them on their social media accounts. You can also buy some in-game items or coins with real money to support their development costs.
  • -

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Old Telugu Songs Free Download 2020 - Naa Songs Presents the Classic Hits of Telugu Cinema.md b/spaces/congsaPfin/Manga-OCR/logs/Old Telugu Songs Free Download 2020 - Naa Songs Presents the Classic Hits of Telugu Cinema.md deleted file mode 100644 index 77bd595fbf7c95ba9e6d9a95fecd646b8b0d39eb..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Old Telugu Songs Free Download 2020 - Naa Songs Presents the Classic Hits of Telugu Cinema.md +++ /dev/null @@ -1,135 +0,0 @@ - -

Old Songs Telugu Naa Songs Free Download 2020

-

If you are a fan of old Telugu songs, you might be looking for ways to download them for free. Old Telugu songs have a charm and nostalgia that is hard to resist. They are melodious, meaningful, and memorable. Whether you want to relive your childhood memories, enjoy some evergreen classics, or discover new gems, old Telugu songs are a treasure trove of music.

-

old songs telugu naa songs free download 2020


Downloadhttps://urlca.com/2uO7aJ



-

Introduction

-

In this article, we will tell you why you should listen to old Telugu songs, how to download them for free, and what are the best sources for old Telugu songs free download 2020. We will also answer some frequently asked questions about old Telugu songs. So, let's get started!

-

Why listen to old Telugu songs?

-

There are many reasons why you should listen to old Telugu songs. Here are some of them:

-
    -
  • Old Telugu songs are timeless and evergreen. They have a universal appeal that transcends generations and cultures.
  • -
  • Old Telugu songs are rich in lyrics and emotions. They convey deep and meaningful messages that touch your heart and soul.
  • -
  • Old Telugu songs are soothing and relaxing. They can help you cope with stress, anxiety, and depression. They can also uplift your mood and spirit.
  • -
  • Old Telugu songs are diverse and versatile. They cover various genres, themes, and styles. You can find old Telugu songs for every occasion, mood, and taste.
  • -
  • Old Telugu songs are nostalgic and sentimental. They can remind you of your past, your loved ones, and your happy moments.
  • -
-

How to download old Telugu songs for free?

-

Downloading old Telugu songs for free is not difficult if you know where to look. There are many websites and apps that offer old Telugu songs for free download in 2020. However, not all of them are safe, legal, or reliable. Some of them may contain viruses, malware, or spyware that can harm your device or compromise your privacy. Some of them may also violate the copyrights of the original artists and producers.

-

Therefore, you should be careful and selective when choosing a source for old Telugu songs free download 2020. You should look for sources that are trustworthy, reputable, and user-friendly. You should also check the quality, quantity, and variety of the old Telugu songs available on the source. You should also read the reviews, ratings, and feedback of other users before downloading any song.

-

Best sources for old Telugu songs free download 2020

-

To help you find the best sources for old Telugu songs free download 2020, we have compiled a list of some of the most popular and recommended ones. These sources have been tested and verified by us and many other users. They offer high-quality, legal, and safe downloads of old Telugu songs. They also have a large collection of old Telugu songs from different eras, genres, artists, and movies. Here are the best sources for old Telugu songs free download 2020:

-

Gaana.com

-

Gaana.com is one of the leading music streaming platforms in India. It offers millions of songs in various languages, including Telugu. It also has a dedicated section for old Telugu songs that features hundreds of playlists and albums curated by experts and users.

-

old telugu songs mp3 download 90s & 2000s
-telugu top 100 songs 2020 free download
-golden 70s telugu songs playlist download
-old telugu hit songs free download naa songs
-2020 telugu songs mp3 download old movies
-old telugu melody songs free download naa
-new telugu songs 2020 download old singers
-old telugu devotional songs free download naa
-latest telugu songs 2020 free download old style
-old telugu folk songs free download naa
-best telugu songs 2020 download old classics
-old telugu love songs free download naa
-new telugu movie songs 2020 download old remixes
-old telugu sad songs free download naa
-top telugu songs 2020 free download old versions
-old telugu duet songs free download naa
-latest telugu movie songs 2020 download old hits
-old telugu wedding songs free download naa
-best telugu movie songs 2020 download old melodies
-old telugu patriotic songs free download naa
-new telugu video songs 2020 download old quality
-old telugu comedy songs free download naa
-latest telugu video songs 2020 free download old format
-old telugu dance songs free download naa
-best telugu video songs 2020 download old scenes
-old telugu birthday songs free download naa
-new telugu audio songs 2020 free download old albums
-old telugu lullaby songs free download naa
-latest telugu audio songs 2020 download old lyrics
-old telugu rap songs free download naa
-best telugu audio songs 2020 free download old tunes
-old telugu ghazal songs free download naa
-new telugu mp3 songs 2020 free download old collection
-old telugu qawwali songs free download naa
-latest telugu mp3 songs 2020 download old singers
-old telugu rock songs free download naa
-best telugu mp3 songs 2020 free download old music
-old telugu pop songs free download naa
-new telugu hd video songs 2020 download old movies
-old telugu jazz songs free download naa
-latest telugu hd video songs 2020 free download old quality
-old telugu reggae songs free download naa
-best telugu hd video songs 2020 download old clips
-old telugu metal songs free download naa
-new telugu hd audio songs 2020 free download old soundtracks
-old telugu country songs free download naa
-latest telugu hd audio songs 2020 download old beats
-old telugu disco songs free download naa

-

Features of Gaana.com

-
    -
  • Gaana.com offers unlimited online streaming and offline downloads of old Telugu songs for free.
  • -
  • Gaana.com has a user-friendly interface and a powerful search function that helps you find your favorite old Telugu songs easily.

    How to download old Telugu songs from Gaana.com?

    -

    To download old Telugu songs from Gaana.com, you need to follow these simple steps:

    -
      -
    1. Download and install the Gaana app on your device from the Google Play Store or the App Store.
    2. -
    3. Sign up or log in with your email, phone number, or social media account.
    4. -
    5. Go to the old Telugu songs section and browse through the playlists and albums.
    6. -
    7. Select the song you want to download and tap on the download icon.
    8. -
    9. Choose the quality of the download and wait for it to finish.
    10. -
    11. Enjoy listening to your downloaded old Telugu song offline.
    12. -
    -

    Wynk Music

    -

    Wynk Music is another popular music streaming platform in India. It offers over 6 million songs in various languages, including Telugu. It also has a special category for old Telugu songs that showcases the best of retro music from Tollywood.

    -

    Features of Wynk Music

    -
      -
    • Wynk Music offers unlimited online streaming and offline downloads of old Telugu songs for free for Airtel users. For non-Airtel users, it offers a subscription plan that starts from Rs. 49 per month.
    • -
    • Wynk Music has a user-friendly interface and a smart search function that helps you find your favorite old Telugu songs easily.
    • -
    • Wynk Music has a personalized recommendation system that suggests you old Telugu songs based on your listening history and preferences.
    • -
    • Wynk Music has a social feature that allows you to share your old Telugu songs with your friends and family via WhatsApp, Facebook, Twitter, and other platforms.
    • -
    -

    How to download old Telugu songs from Wynk Music?

    -

    To download old Telugu songs from Wynk Music, you need to follow these simple steps:

    -
      -
    1. Download and install the Wynk Music app on your device from the Google Play Store or the App Store.
    2. -
    3. Sign up or log in with your email, phone number, or social media account.
    4. -
    5. Go to the old Telugu songs category and browse through the songs and albums.
    6. -
    7. Select the song you want to download and tap on the download icon.
    8. -
    9. Choose the quality of the download and wait for it to finish.
    10. -
    11. Enjoy listening to your downloaded old Telugu song offline.
    12. -
    -

    Other options for old Telugu songs free download 2020

    -

    Besides Gaana.com and Wynk Music, there are some other options for old Telugu songs free download 2020. Some of them are:

    -
      -
    • Naa Songs: Naa Songs is a website that offers a huge collection of old and new Telugu songs for free download. You can find old Telugu songs from various movies, singers, composers, and genres. You can also request for any song that is not available on the website. To download old Telugu songs from Naa Songs, you just need to visit the website, search for the song, and click on the download link.
    • -
    • Saavn: Saavn is another music streaming platform that offers over 50 million songs in various languages, including Telugu. It also has a section for old Telugu songs that features some of the most popular and classic hits from Tollywood. To download old Telugu songs from Saavn, you need to subscribe to its premium plan that costs Rs. 99 per month. You can then access unlimited downloads of old Telugu songs in high quality.
    • -
    • Hungama: Hungama is a digital entertainment platform that offers music, movies, videos, and games in various languages, including Telugu. It also has a library of old Telugu songs that spans across different eras, genres, artists, and movies. To download old Telugu songs from Hungama, you need to buy coins or subscribe to its pro plan that costs Rs. 99 per month. You can then redeem your coins or use your pro plan to download old Telugu songs in high quality.
    • -
    -

    Conclusion

    -

    In conclusion, old Telugu songs are a great way to enjoy some of the best music from Tollywood. They have a charm and nostalgia that is hard to resist. They are also easy to download for free from various sources such as Gaana.com, Wynk Music, Naa Songs, Saavn, and Hungama. However, you should be careful and selective when choosing a source for old Telugu songs free download 2020. You should look for sources that are trustworthy, reputable, and user-friendly. You should also check the quality, quantity, and variety of the old Telugu songs available on the source. You should also read the reviews, ratings, and feedback of other users before downloading any song.

    -

    We hope this article has helped you find the best sources for old Telugu songs free download 2020. If you have any questions or suggestions, please feel free to leave a comment below. Happy listening!

    -

    FAQs

    -

    Here are some of the most frequently asked questions about old Telugu songs free download 2020:

    -
      -
    1. What are some of the best old Telugu songs?
    2. -

      Some of the best old Telugu songs are:

      -
        -
      • Neele Gagan Ke Tale from Hamraaz (1967)
      • -
      • Prema Nagarilo from Prema Nagar (1971)
      • -
      • Chukkalle Thochave from Nireekshana (1982)
      • -
      • Abbanee Tiyyani from Jagadeka Veerudu Athiloka Sundari (1990)
      • -
      • Priyathama Neevachata Kusalama from Guna (1991)
      • -
      -
    3. How can I listen to old Telugu songs online?
    4. -

      You can listen to old Telugu songs online by using music streaming platforms such as Gaana.com, Wynk Music, Saavn, Hungama, Spotify, YouTube Music, JioSaavn, and Amazon Music. You can also listen to old Telugu songs online by using radio stations such as Radio Mirchi, Radio City, Red FM, Big FM, and All India Radio. -

    5. How can I convert old Telugu songs to MP3 format?
    6. -

      You can convert old Telugu songs to MP3 format by using online converters such as Online Audio Converter, Online Video Converter, Convertio, Zamzar, and CloudConvert. You can also convert old Telugu songs to MP3 format by using software such as VLC Media Player, iTunes, Windows Media Player, and Audacity. -

    7. How can I transfer old Telugu songs to my phone or computer?
    8. -

      You can transfer old Telugu songs to your phone or computer by using USB cables, Bluetooth, Wi-Fi, cloud storage services such as Google Drive, Dropbox, OneDrive, and iCloud, or file sharing apps such as SHAREit, Xender, Zapya, and AirDroid. -

    9. How can I make a playlist of old Telugu songs?
    10. -

      You can make a playlist of old Telugu songs by using music streaming platforms such as Gaana.com, Wynk Music, Saavn, Hungama, Spotify, YouTube Music, JioSaavn, and Amazon Music. You can also make a playlist of old Telugu songs by using music players such as VLC Media Player, iTunes, Windows Media Player, and Audacity. -

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Shadow Fight 2 Mod APK 2.12 0 Everything You Need to Know About Max Level and Titan Mode.md b/spaces/congsaPfin/Manga-OCR/logs/Shadow Fight 2 Mod APK 2.12 0 Everything You Need to Know About Max Level and Titan Mode.md deleted file mode 100644 index 9095c0f93345b072ea992b3bb9b237c03e3b44e4..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Shadow Fight 2 Mod APK 2.12 0 Everything You Need to Know About Max Level and Titan Mode.md +++ /dev/null @@ -1,133 +0,0 @@ -
    -

    Shadow Fight 2 Mod APK 2.12 0 Max Level: Everything You Need to Know

    -

    If you are a fan of fighting games, you might have heard of Shadow Fight 2, a popular mobile game that has millions of downloads worldwide. But did you know that there is a modified version of the game that gives you unlimited money, weapons, and access to the max level? In this article, we will tell you everything you need to know about Shadow Fight 2 Mod APK, including what it is, how to download and install it, and how to play it. Let's get started!

    -

    What is Shadow Fight 2?

    -

    Shadow Fight 2 is a mobile fighting game developed by NEKKI, a Russian game studio. The game is set in a world where shadows are the only form of existence, and you play as a nameless warrior who must fight his way through various enemies and bosses to restore his human form. The game combines elements of RPG, action, and martial arts, and has a unique art style that uses silhouettes and realistic physics.

    -

    shadow fight 2 mod apk 2.12 0 max level


    DOWNLOAD ✶✶✶ https://urlca.com/2uOdWf



    -

    The gameplay of Shadow Fight 2

    -

    The gameplay of Shadow Fight 2 is simple but addictive. You control your character using a virtual joystick and buttons for punching, kicking, jumping, and blocking. You can also use weapons and magic to enhance your combat skills. You can customize your character with different outfits, helmets, armor, and accessories. You can also upgrade your weapons and learn new moves as you progress through the game.

    -

    The game has six different modes: story, tournament, survival, duel, ascension, and underworld. In story mode, you follow the main plot and face various enemies and bosses. In tournament mode, you compete against other fighters in a series of matches. In survival mode, you fight against waves of enemies until you lose. In duel mode, you fight against random opponents with random rules. In ascension mode, you fight against special enemies with special rewards. In underworld mode, you team up with other players online and fight against powerful bosses.

    -

    The features of Shadow Fight 2

    -

    Shadow Fight 2 has many features that make it an enjoyable and challenging game. Some of the features are:

    -
      -
    • A captivating storyline that immerses you in the world of shadows.
    • -
    • A variety of weapons and equipment to choose from, such as swords, axes, nunchaku, daggers, shuriken, and more.
    • -
    • A diverse range of enemies and bosses with different fighting styles and abilities.
    • -
    • A realistic combat system that uses physics and animation.
    • -
    • A stunning graphics and sound design that create a dark and atmospheric mood.
    • -
    • A social aspect that allows you to chat with other players, join clans, and participate in raids.
    • -
    -

    What is Shadow Fight 2 Mod APK?

    -

    Shadow Fight 2 Mod APK is a modified version of the original game that provides some extra benefits for the players. It is not an official version of the game, but rather a fan-made one that is created by modifying the original game files. It is also not available on the Google Play Store or the App Store, but rather on third-party websites.

    -


    The benefits of Shadow Fight 2 Mod APK

    -

    Shadow Fight 2 Mod APK has some advantages over the original game that make it more appealing for some players. Some of the benefits are:

    -
      -
    • You get unlimited money and gems, which you can use to buy and upgrade weapons, equipment, and skills.
    • -
    • You get access to the max level, which is 52, and unlock all the features and modes of the game.
    • -
    • You get unlimited energy, which means you can play as long as you want without waiting for the energy bar to refill.
    • -
    • You get all the premium items and bonuses for free, such as the special edition, the raid tickets, the enchantments, and the booster packs.
    • -
    • You get to enjoy the game without any ads or interruptions.
    • -
    -

    The drawbacks of Shadow Fight 2 Mod APK

    -

    However, Shadow Fight 2 Mod APK also has some disadvantages that you should be aware of before downloading and installing it. Some of the drawbacks are:

    -
      -
    • You may face some compatibility issues with your device or operating system, as the mod apk may not be updated regularly or optimized for all devices.
    • -
    • You may encounter some bugs or glitches in the game, such as crashes, freezes, or errors.
    • -
    • You may risk losing your progress or data if you uninstall the mod apk or switch to the original game.
    • -
    • You may violate the terms and conditions of the original game and get banned from playing online or accessing some features.
    • -
    • You may expose your device to malware or viruses that may harm your device or steal your personal information.
    • -
    -

    How to download and install Shadow Fight 2 Mod APK?

    -

    If you are interested in trying out Shadow Fight 2 Mod APK, you will need to follow some steps to download and install it on your device. Here are the steps:

    The steps to download and install Shadow Fight 2 Mod APK

    -

    To download and install Shadow Fight 2 Mod APK on your Android device, you can follow these steps:

    -

    shadow fight 2 mod apk unlimited money and gems max level
    -shadow fight 2 mod apk titan mode max level download
    -shadow fight 2 mod apk latest version max level unlocked
    -shadow fight 2 mod apk all weapons unlocked max level
    -shadow fight 2 mod apk special edition max level
    -shadow fight 2 mod apk god mode max level android
    -shadow fight 2 mod apk hack max level and coins
    -shadow fight 2 mod apk unlimited energy and orbs max level
    -shadow fight 2 mod apk mega mod max level offline
    -shadow fight 2 mod apk free shopping and enchantments max level
    -shadow fight 2 mod apk no root required max level
    -shadow fight 2 mod apk all bosses unlocked max level
    -shadow fight 2 mod apk unlimited everything and max level
    -shadow fight 2 mod apk premium features unlocked max level
    -shadow fight 2 mod apk high damage and defense max level
    -shadow fight 2 mod apk anti ban and cheat detection max level
    -shadow fight 2 mod apk easy win and bonus rewards max level
    -shadow fight 2 mod apk full game unlocked max level
    -shadow fight 2 mod apk all maps and modes available max level
    -shadow fight 2 mod apk super weapons and armor max level
    -shadow fight 2 mod apk unlimited gems and coins max level
    -shadow fight 2 mod apk all characters and skills unlocked max level
    -shadow fight 2 mod apk best graphics and sound quality max level
    -shadow fight 2 mod apk no ads and in-app purchases max level
    -shadow fight 2 mod apk fast download and installation max level

    -
      -
    1. Go to a reputable website that offers the Shadow Fight 2 Mod APK file, such as apkcombo.com or apkpure.com. You can use the web tool to generate the download link by pasting the Google Play Store URL of the original game, or you can search for the mod apk file on the website.
    2. -
    3. Tap the download button to start downloading the Shadow Fight 2 Mod APK file to your device. You may need to allow your browser to download unknown apps from the settings.
    4. -
    5. Once the download is complete, locate the Shadow Fight 2 Mod APK file on your device using a file manager app. You can use the default file manager app on your device, or you can download one from the Google Play Store, such as Cx File Explorer or File Manager.
    6. -
    7. Tap the Shadow Fight 2 Mod APK file to open it. You may need to enable the installation of unknown apps from the settings if you haven't done so already.
    8. -
    9. Follow the instructions on the screen to install the Shadow Fight 2 Mod APK on your device. You may need to grant some permissions to the app during the installation process.
    10. -
    11. After the installation is finished, you can launch the Shadow Fight 2 Mod APK from your app drawer or home screen. Enjoy playing the game with unlimited money, weapons, and max level!
    12. -
    -

    The precautions to take before downloading and installing Shadow Fight 2 Mod APK

    -

    Before you download and install Shadow Fight 2 Mod APK on your device, you should take some precautions to avoid any problems or risks. Here are some of them:

    -
      -
    • Make sure you have enough storage space on your device for the Shadow Fight 2 Mod APK file and the game data. The mod apk file is about 150 MB in size, and the game data may vary depending on your device and version.
    • -
    • Make sure you have a stable internet connection for downloading and installing the Shadow Fight 2 Mod APK file. You may also need an internet connection for playing some modes of the game, such as underworld mode.
    • -
    • Make sure you have a backup of your original game data and progress before installing the Shadow Fight 2 Mod APK. You can use a cloud service or a local backup app to save your game data. You may lose your progress or data if you uninstall the mod apk or switch to the original game.
    • -
    • Make sure you download the Shadow Fight 2 Mod APK file from a reliable and trustworthy website. Avoid downloading from unknown or suspicious sources that may contain malware or viruses. You can check the reviews and ratings of the website before downloading.
    • -
    • Make sure you scan the Shadow Fight 2 Mod APK file with an antivirus app before installing it on your device. You can use a reputable antivirus app from the Google Play Store, such as Avast Mobile Security or AVG Antivirus.
    • -

    How to play Shadow Fight 2 Mod APK?

    -

    Now that you have downloaded and installed Shadow Fight 2 Mod APK on your device, you may wonder how to play it and enjoy its features. Here are some tips and tricks to help you play Shadow Fight 2 Mod APK:

    -

    The tips and tricks to play Shadow Fight 2 Mod APK

    -

    Here are some tips and tricks to play Shadow Fight 2 Mod APK:

    -
      -
    • Use the unlimited money and gems wisely. You can buy and upgrade any weapon, equipment, or skill you want, but don't forget to balance your attack, defense, and speed. You can also use the gems to buy booster packs, enchantments, and raid tickets.
    • -
    • Use the max level to your advantage. You can unlock all the features and modes of the game, such as underworld mode, eclipse mode, and special edition. You can also challenge any enemy or boss without fear of losing.
    • -
    • Use the unlimited energy to practice and improve your skills. You can play as long as you want without waiting for the energy bar to refill. You can also replay any level or mode you want to earn more coins and experience.
    • -
    • Use the premium items and bonuses to enhance your gameplay. You can use the special edition to access exclusive weapons, outfits, and storylines. You can also use the raid tickets to join raids with other players online and fight against powerful bosses.
    • -
    • Use the ad-free feature to enjoy the game without interruptions. You can play the game without any ads or pop-ups that may distract you or slow down your device.
    • -
    -

    The best weapons and characters to use in Shadow Fight 2 Mod APK

    -

    Here are some of the best weapons and characters to use in Shadow Fight 2 Mod APK:

    - - - - - - - -
Weapon | Description
Kusarigama | A weapon that consists of a sickle and a chain. It has a long range and high damage, but low speed. It is good for keeping enemies at bay and dealing critical hits.
Sai | A weapon that consists of a pair of daggers with forked blades. It has a medium range and high speed, but low damage. It is good for blocking attacks and stunning enemies.
Daisho | A weapon that consists of a katana and a wakizashi. It has a short range and high speed, but medium damage. It is good for slashing enemies and performing combos.
Composite Sword | A weapon that consists of a sword that can split into two blades. It has a medium range and high damage, but low speed. It is good for surprising enemies and dealing massive damage.
Magic | A weapon that consists of various spells that can be cast by tapping the magic button. It has a long range and high damage, but low speed. It is good for attacking enemies from afar and causing different effects.
    - - - - - - - - - -
Character | Description
Lynx | The first boss of the game. He is a member of the Shadow Order who uses claws as his weapon. He has high speed and stealth skills, but low defense. He can also summon his bodyguards to help him fight.
Hermit | The second boss of the game. He is a master of magic who uses swords as his weapon. He has high damage and magic skills, but low speed. He can also cast various spells to attack or defend himself.
Butcher | The third boss of the game. He is a ruthless leader of a gang who uses axes as his weapon. He has high damage and defense skills, but low speed. He can also throw his axes at enemies or smash them with his fists.
Wasp | The fourth boss of the game. She is a pirate queen who uses daggers as her weapon. She has high speed and agility skills, but low damage. She can also fly with her wings or summon her crew to help her fight.
Widow | The fifth boss of the game. She is a seductive assassin who uses fans as her weapon. She has high speed and charm skills, but low defense. She can also hypnotize enemies or poison them with her fans.
Shogun | The sixth boss of the game. He is a tyrant who uses katanas as his weapon. He has high damage and defense skills, but low speed. He can also summon his soldiers or use his cannon to help him fight.
Titan | The final boss of the game. He is a godlike being who uses a huge sword as his weapon. He has high damage, defense, and magic skills, but low speed. He can also use his power to manipulate the environment or create illusions.
    -

    Conclusion

    -

    Shadow Fight 2 is a great game that offers a lot of fun and challenge for fighting game fans. However, if you want to experience the game with more features and benefits, you can try Shadow Fight 2 Mod APK, a modified version of the game that gives you unlimited money, weapons, and max level. However, you should also be careful of the drawbacks and risks of using the mod apk, such as compatibility issues, bugs, data loss, bans, and malware. Therefore, you should follow the steps and precautions we provided in this article to download and install Shadow Fight 2 Mod APK safely and enjoyably.

    -

    We hope this article was helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy fighting!

    -

    FAQs

    -

    Here are some frequently asked questions about Shadow Fight 2 Mod APK:

    -
      -
    1. Is Shadow Fight 2 Mod APK safe to use?
    2. -

      Shadow Fight 2 Mod APK is not an official version of the game, but rather a fan-made one that is created by modifying the original game files. Therefore, it may not be safe to use, as it may contain malware or viruses that may harm your device or steal your personal information. You should also be careful of violating the terms and conditions of the original game and getting banned from playing online or accessing some features. Therefore, you should only download and install Shadow Fight 2 Mod APK from reputable and trustworthy websites, and scan it with an antivirus app before installing it on your device.

      -
    3. Can I play Shadow Fight 2 Mod APK online?
    4. -

      Shadow Fight 2 Mod APK allows you to play some modes of the game online, such as underworld mode and raids. However, you may not be able to play other modes online, such as tournament mode and duel mode. You may also face some problems or errors when playing online, such as connection issues, lagging, or crashing. You may also risk getting banned from playing online or accessing some features if the game detects that you are using a mod apk.

      -
    5. Can I switch between Shadow Fight 2 Mod APK and the original game?
    6. -

      You can switch between Shadow Fight 2 Mod APK and the original game by uninstalling one and installing the other. However, you may lose your progress or data if you do so, as the mod apk and the original game have different game files and save data. Therefore, you should make a backup of your original game data and progress before installing the mod apk or switching to the original game.

      -
    7. Can I update Shadow Fight 2 Mod APK?
    8. -

      You can update Shadow Fight 2 Mod APK by downloading and installing the latest version of the mod apk from the same website where you downloaded it before. However, you may not be able to update it as frequently or easily as the original game, as the mod apk may not be updated regularly or optimized for all devices. You may also lose your progress or data if you update the mod apk without making a backup.

      -
    9. Can I use Shadow Fight 2 Mod APK on iOS devices?
    10. -

      No, you cannot use Shadow Fight 2 Mod APK on iOS devices, as it is only compatible with Android devices. You will need to jailbreak your iOS device and use a third-party app installer to install Shadow Fight 2 Mod APK on your iOS device. However, this is not recommended, as it may damage your device or void your warranty.

      -

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/XAPK Downloader How to Download XAPK Files from Any Website.md b/spaces/congsaPfin/Manga-OCR/logs/XAPK Downloader How to Download XAPK Files from Any Website.md deleted file mode 100644 index 6d6e299fa02a24682d493fb05b72791126d413a0..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/XAPK Downloader How to Download XAPK Files from Any Website.md +++ /dev/null @@ -1,94 +0,0 @@ - -

    What Is an XAPK File and How Do You Install One on Android?

    -

    If you are an Android user, you are probably familiar with APK files, which are the standard format for installing apps on your device. But have you ever encountered an XAPK file and wondered what it is and how to install it? In this article, we will explain everything you need to know about XAPK files, how they differ from APK files, and how you can install them on your Android device.

    -

    xapk apks


    Download File ✦✦✦ https://urlca.com/2uOeEa



    -

    What Does an XAPK File Contain?

    -

An XAPK file is an app package stored in the standard ZIP format, which allows all the data related to an app to be saved in a single file for quick installation. Unlike a plain APK, an XAPK contains both the APK file and the OBB (Opaque Binary Blob) data, as well as caches, the app icon, and other miscellaneous information that the app requires to function.

    -

    OBB files are additional data files that contain graphics, media, or other large resources that are not included in the APK file. Some apps or games require OBB files to run properly, especially those that have high-quality graphics or large content. For example, PUBG Mobile, Call of Duty Mobile, Asphalt 9, and Genshin Impact are some popular games that use OBB files.

    -

    In some cases, certain XAPK files are also bundles containing more than one APK file, better known as Split APKs. Split APKs are a way of distributing apps that have multiple components or modules, such as base APK, configuration APK, language APK, etc. This allows developers to optimize their apps for different devices, screen sizes, architectures, and languages. For example, Netflix, Spotify, Facebook, and Google Play Services are some apps that use Split APKs.
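To make the structure concrete, here is a minimal Python sketch that lists what is inside an XAPK using the standard zipfile module. The file name is a placeholder, and the exact layout (for example, whether a manifest.json is present) varies from package to package, so treat this as an illustration rather than a definitive spec:

```python
import zipfile

# Placeholder path; point it at any XAPK you have on disk.
XAPK_PATH = "example.xapk"

with zipfile.ZipFile(XAPK_PATH) as archive:
    for name in archive.namelist():
        if name.endswith(".apk"):
            kind = "APK (base or split)"
        elif name.endswith(".obb"):
            kind = "OBB expansion data"
        elif name.endswith(".json"):
            kind = "manifest / metadata"
        elif name.endswith(".png"):
            kind = "icon or asset"
        else:
            kind = "other data"
        print(f"{name}: {kind}")
```

Running it on a typical XAPK prints one line per entry, which makes it easy to see whether you are dealing with a single APK plus OBB data or a bundle of split APKs.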

    -

    How to install xapk files on android
    -What is the difference between xapk and apk
    -Xapk installer apk download for android
    -How to convert xapk to apk and obb
    -Xapk file opener for windows 10
    -Best xapk games for android 2023
    -How to create xapk files from apk and obb
    -Xapk vs split apk: which one is better
    -How to install xapk files on pc using bluestacks
    -Xapk file manager app for android
    -How to extract xapk files on mac
    -Xapk file validator tool online
    -How to fix xapk file validation failed error
    -Xapk file editor software for windows
    -How to install multiple apks via adb
    -Xapk file compressor online free
    -How to update xapk files on android
    -Xapk file format specification and documentation
    -How to uninstall xapk files on android
    -Xapk file size reducer online free
    -How to install xapk files on ios
    -Xapk file viewer for linux
    -How to sign xapk files for android
    -Xapk file splitter online free
    -How to install xapk files on firestick
    -Xapk file merger online free
    -How to download xapk files from google play store
    -Xapk file analyzer tool online
    -How to install xapk files on android tv box
    -Xapk file encrypter and decrypter online free
    -How to install xapk files on chromebook
    -Xapk file converter online free
    -How to install xapk files on android without root
    -Xapk file generator online free
    -How to install xapk files on android emulator
    -Xapk file checker tool online
    -How to install xapk files on android auto
    -Xapk file downloader app for android
    -How to install xapk files on android wear os
    -Xapk file scanner tool online

    -

    How to Install an XAPK File on Android?

    -

    Using a Third-Party App Installer

    -

    One of the easiest ways to install an XAPK file on your Android device is to use a third-party app installer such as XAPK Installer, APKPure, or SAI. These apps can automatically detect and extract the XAPK file and install the app on your device. Here are the steps to follow:

    -
      -
    1. Download and install one of the app installers from their official websites or trusted sources.
    2. -
    3. Download the XAPK file of the app or game you want to install from a reliable source.
    4. -
    5. Open the app installer and grant it the necessary permissions to access your storage.
    6. -
    7. Locate the XAPK file in your device's storage and tap on it.
    8. -
    9. Follow the instructions on the screen to install the app or game.
    10. -
    11. Launch the app or game and enjoy.
    12. -

    Using a File Manager and ADB

    -

Another way to install an XAPK file on your Android device is to use a file manager and ADB (Android Debug Bridge). This method requires some technical skills and a computer with ADB installed. Here are the steps to follow; a consolidated command sketch appears after the list:

    -
      -
    1. Download the XAPK file of the app or game you want to install from a reliable source.
    2. -
    3. Extract the XAPK file using a ZIP extractor such as WinRAR or 7-Zip. You should see an APK file and an OBB file or multiple APK files inside the extracted folder.
    4. -
    5. Copy the APK file and the OBB file (if any) to your device's storage. You can use a USB cable or a wireless transfer app such as AirDroid or ShareIt.
    6. -
    7. Enable USB debugging on your device by going to Settings > Developer options. If you don't see Developer options, go to Settings > About phone and tap on Build number seven times.
    8. -
    9. Connect your device to your computer using a USB cable.
    10. -
    11. Open a command prompt or terminal window on your computer and navigate to the folder where you have ADB installed.
    12. -
13. Type the following command to install the APK file: adb install -r path/to/apk/file. Replace path/to/apk/file with the actual path of the APK file on your computer; adb installs it onto the connected device for you.
    14. -
15. If you have an OBB file, type the following command to copy it to your device: adb push path/to/obb/file /sdcard/Android/obb/package.name. Replace path/to/obb/file with the actual path of the OBB file on your computer, and package.name with the package name of the app or game; the OBB file should keep its original name inside that folder. You can find the package name by looking at the APK file name or by using an app such as App Inspector.
    16. -
17. If you have multiple APK files, type the following command to install them all at once: adb install-multiple -r path/to/apk/files. Replace path/to/apk/files with the actual paths of all the APK files on your computer, separated by spaces.
    18. -
    19. Disconnect your device from your computer and launch the app or game.
    20. -
    -
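For readers comfortable with scripting, the ADB steps above can be condensed into a short Python sketch that drives adb from the computer. The file paths and package name below are placeholders, and the expected OBB folder layout can differ from game to game, so adjust them before running:

```python
import subprocess

# Placeholders: point these at the files you extracted from the XAPK.
APK_FILES = ["extracted/base.apk"]   # add split APK paths here if present
OBB_FILE = "extracted/main.obb"      # set to None if there is no OBB file
PACKAGE = "com.example.game"         # hypothetical package name

def adb(*args):
    """Run one adb command on the computer and stop on the first error."""
    subprocess.run(["adb", *args], check=True)

if len(APK_FILES) == 1:
    adb("install", "-r", APK_FILES[0])
else:
    # Split APKs have to be installed together in a single transaction.
    adb("install-multiple", "-r", *APK_FILES)

if OBB_FILE:
    obb_dir = f"/sdcard/Android/obb/{PACKAGE}"
    adb("shell", "mkdir", "-p", obb_dir)   # create the OBB folder on the device
    adb("push", OBB_FILE, f"{obb_dir}/")   # copy the OBB data into it
```

This is only a convenience wrapper around the same adb install, adb install-multiple, and adb push commands described in the list.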

    What Are the Advantages and Disadvantages of XAPK Files?

    -

    XAPK files have some advantages and disadvantages compared to APK files. Here are some of them:

    - - - - - - -
Advantages | Disadvantages
They can reduce the file size of apps or games by compressing them into a single file. | They are not supported by default by Android devices and require additional steps or tools to install them.
They can speed up the download process of apps or games by avoiding multiple downloads or waiting for additional data. | They may not be compatible with some devices, especially older ones, that do not support Split APKs or OBB files.
They can ensure that all the necessary data for apps or games are available and up-to-date, preventing errors or crashes. | They may pose security risks if downloaded from untrusted sources, as they may contain malware or viruses.
They can offer more flexibility and customization for developers and users, as they can choose which modules or languages to include or exclude. | They may not receive regular updates from developers or app stores, as they may not be recognized by them.
    -

    Conclusion

    -

    XAPK files are a new format for installing apps or games on Android devices that contain both the APK file and the OBB file or multiple APK files. They offer some benefits such as smaller file size, faster download speed, and more options for developers and users. However, they also have some drawbacks such as lack of support, compatibility issues, security risks, and update problems. Therefore, before you download and install an XAPK file, make sure you know what it is, how it works, and how to do it safely and correctly. We hope this article has helped you understand more about XAPK files and how to install them on your Android device.

    -

    Frequently Asked Questions

    -

    Here are some common questions and answers about XAPK files:

    -
      -
    1. What is the difference between XAPK and APK?
      An XAPK file is a ZIP archive that contains an APK file and an OBB file or multiple APK files. An APK file is a single file that is the standard format for installing apps on Android devices. An OBB file is an additional data file that contains graphics, media, or other large resources that are not included in the APK file.
    2. -
    3. How do I open an XAPK file on my PC?
      An XAPK file is a ZIP archive, so you can open it with any ZIP extractor such as WinRAR or 7-Zip. You can then view or extract the contents of the XAPK file, such as the APK file and the OBB file or multiple APK files.
    4. -
    5. Can I convert an XAPK file to an APK file?
      Yes, you can convert an XAPK file to an APK file by extracting the APK file from the XAPK file using a ZIP extractor. However, this may not work for all apps or games, especially those that require OBB files or Split APKs to run properly. You may also lose some features or functionality of the app or game by doing so.
    6. -
    7. How do I update an XAPK file?
      Updating an XAPK file depends on where you downloaded it from. If you downloaded it from a third-party app installer such as APKPure or SAI, you can check for updates within the app installer and download the latest version of the XAPK file. If you downloaded it from another source, you may have to manually check for updates and download the new XAPK file from the same source.
    8. -
    9. Are XAPK files safe?
      XAPK files are not inherently unsafe, but they may pose security risks if downloaded from untrusted sources. Some XAPK files may contain malware or viruses that can harm your device or steal your data. Therefore, you should always download XAPK files from reputable sources and scan them with a reliable antivirus software before installing them.
    10. -

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/congxin95/BMTools-demo/tool_server.py b/spaces/congxin95/BMTools-demo/tool_server.py deleted file mode 100644 index 19c81fa5947b53683f5ab2d93a601be0d42cfc3b..0000000000000000000000000000000000000000 --- a/spaces/congxin95/BMTools-demo/tool_server.py +++ /dev/null @@ -1,172 +0,0 @@ -import sys -sys.path.append("BMTools/") - -import bmtools -import os - -def run_tool_server(): - def load_weather_tool(): - WEATHER_API_KEYS = os.environ.get('WEATHER_API_KEYS', None) - if not WEATHER_API_KEYS: - return "WEATHER_API_KEYS not provided, please register one from https://www.weatherapi.com/ and add it to environment variables." - server.load_tool("weather", {"subscription_key": WEATHER_API_KEYS}) - - # def load_database_tool(): - # server.load_tool("database") - - # def load_db_diag_tool(): - # server.load_tool("db_diag") - - def load_chemical_prop_tool(): - server.load_tool("chemical-prop") - - def load_douban_tool(): - server.load_tool("douban-film") - - def load_wikipedia_tool(): - server.load_tool("wikipedia") - - # def load_wikidata_tool(): - # server.load_tool("wikidata") - - def load_wolframalpha_tool(): - WOLFRAMALPH_APP_ID = os.environ.get("WOLFRAMALPH_APP_ID", None) - if not WOLFRAMALPH_APP_ID: - return "WOLFRAMALPH_APP_ID not provided, please register one from https://products.wolframalpha.com/api/ and add it to environment variables." - server.load_tool("wolframalpha", {"subscription_key": WOLFRAMALPH_APP_ID}) - - def load_bing_search_tool(): - BING_SUBSCRIPT_KEY = os.environ.get('BING_SUBSCRIPT_KEY', None) - if not BING_SUBSCRIPT_KEY: - return "Bing search key not provided, please register one from https://www.microsoft.com/en-us/bing/apis/bing-web-search-api and add it to environment variables." - server.load_tool("bing_search", {"subscription_key": BING_SUBSCRIPT_KEY}) - - def load_office_ppt_tool(): - server.load_tool("office-ppt") - - def load_alpha_vantage_tool(): - ALPHA_VANTAGE_KEY = os.environ.get('ALPHA_VANTAGE_KEY', None) - if not ALPHA_VANTAGE_KEY: - return "Stock key not provided, please register one from https://www.alphavantage.co/support/#api-key and add it to environment variables." - server.load_tool("stock", {"subscription_key": ALPHA_VANTAGE_KEY}) - - def load_map_tool(): - BING_MAP_KEY = os.environ.get('BING_MAP_KEY', None) - if not BING_MAP_KEY: - return "Bing map key not provided, please register one from https://www.bingmapsportal.com/ and add it to environment variables." - server.load_tool("bing_map", {"subscription_key": BING_MAP_KEY}) - - # baidu map tool - # BAIDU_SECRET_KEY = os.environ.get('BAIDU_SECRET_KEY', None) - # BAIDU_MAP_KEY = os.environ.get('BAIDU_MAP_KEY', None) - # if not BAIDU_SECRET_KEY or not BAIDU_MAP_KEY: - # raise RuntimeError("Baidu map key not provided, please register one from https://lbsyun.baidu.com/apiconsole/key and add it to environment variables.") - # server.load_tool("baidu_map", {"subscription_key": BAIDU_MAP_KEY, "baidu_secret_key": BAIDU_SECRET_KEY}) - - def load_rapidapi_tool(): - RAPIDAPI_KEY = os.environ.get('RAPIDAPI_KEY', None) - if not RAPIDAPI_KEY: - return "RAPIDAPI_KEY not provided, please register one from https://rapidapi.com/ and add it to environment variables." 
- server.load_tool("zillow", {"subscription_key": RAPIDAPI_KEY}) - server.load_tool("airbnb", {"subscription_key": RAPIDAPI_KEY}) - server.load_tool("job_search", {"subscription_key": RAPIDAPI_KEY}) - - # def load_nllb_translation_tool(): - # server.load_tool("nllb-translation") - - # def load_baidu_translation_tool(): - # server.load_tool("baidu-translation") - - def load_tutorial_tool(): - server.load_tool("tutorial") - - def load_file_operation_tool(): - server.load_tool("file_operation") - - def load_meta_analysis_tool(): - server.load_tool("meta_analysis") - - def load_code_interpreter_tool(): - server.load_tool("code_interpreter") - - def load_arxiv_tool(): - server.load_tool("arxiv") - - def load_google_places_tool(): - GPLACES_API_KEY = os.environ.get('GPLACES_API_KEY', '') - if not GPLACES_API_KEY: - return "GPLACES_API_KEY not provided, please register one from https://developers.google.com/maps/documentation/elevation/get-api-key and add it to environment variables." - server.load_tool("google_places", {"subscription_key": GPLACES_API_KEY}) - - def load_google_serper_tool(): - SERPER_API_KEY = os.environ.get('SERPER_API_KEY', None) - if not SERPER_API_KEY: - return "SERPER_API_KEY not provided, please register one from https://serper.dev and add it to environment variables." - server.load_tool("google_serper", {"subscription_key": SERPER_API_KEY}) - server.load_tool("google_scholar", {"subscription_key": SERPER_API_KEY}) - - def load_python_tool(): - server.load_tool("python") - - def load_sceneXplain_tool(): - SCENEX_API_KEY = os.environ.get('SCENEX_API_KEY', None) - if not SCENEX_API_KEY: - return "SCENEX_API_KEY is not provided. Please sign up for a free account at https://scenex.jina.ai/, create a new API key, and add it to environment variables." - server.load_tool("sceneXplain", {"subscription_key": SCENEX_API_KEY}) - - def load_shell_tool(): - server.load_tool("shell") - - def load_image_generation_tool(): - STEAMSHIP_API_KEY = os.environ.get('STEAMSHIP_API_KEY', None) - if not STEAMSHIP_API_KEY: - return "STEAMSHIP_API_KEY is not provided. Please sign up for a free account at https://steamship.com/account/api, create a new API key, and add it to environment variables." - server.load_tool("image_generation") - - def load_hugging_tools(): - HUGGINGFACE_API_KEY = os.environ.get('HUGGINGFACE_API_KEY', None) - if not HUGGINGFACE_API_KEY: - return "Huggingface api key not provided, please register one from https://huggingface.co/ and add it to environment variables." 
- server.load_tool("hugging_tools") - - def load_gradio_tools(): - server.load_tool("gradio_tools") - - server = bmtools.ToolServer() - print(server.list_tools()) - - # tool_choice = input("Enter 'ALL' to load all tools, or enter the specific tools you want to load (comma-separated): ") - - load_weather_tool() - # load_database_tool() - # load_db_diag_tool() - load_chemical_prop_tool() - load_douban_tool() - load_wikipedia_tool() - # load_wikidata_tool() - load_wolframalpha_tool() - load_bing_search_tool() - load_office_ppt_tool() - load_alpha_vantage_tool() - load_map_tool() - load_rapidapi_tool() - # load_nllb_translation_tool() - # load_baidu_translation_tool() - load_tutorial_tool() - load_file_operation_tool() - load_meta_analysis_tool() - load_code_interpreter_tool() - load_arxiv_tool() - load_google_places_tool() - load_google_serper_tool() - load_python_tool() - load_sceneXplain_tool() - load_shell_tool() - load_image_generation_tool() - # load_hugging_tools() - # load_gradio_tools() - - server.serve() - -if __name__ == "__main__": - run_tool_server() \ No newline at end of file diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/modeling/transformer_decoder/position_encoding.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/modeling/transformer_decoder/position_encoding.py deleted file mode 100644 index 051984d9ea6e04e834f6fae3daf7d8317c2f0819..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/modeling/transformer_decoder/position_encoding.py +++ /dev/null @@ -1,67 +0,0 @@ -# ------------------------------------------------------------------------------ -# Reference: https://github.com/facebookresearch/Mask2Former/blob/main/mask2former/modeling/transformer_decoder/position_encoding.py -# Modified by Jitesh Jain (https://github.com/praeclarumjj3) -# ------------------------------------------------------------------------------ - -""" -Various positional encodings for the transformer. -""" -import math - -import torch -from torch import nn - - -class PositionEmbeddingSine(nn.Module): - """ - This is a more standard version of the position embedding, very similar to the one - used by the Attention is all you need paper, generalized to work on images. 
- """ - - def __init__(self, num_pos_feats=64, temperature=10000, normalize=False, scale=None): - super().__init__() - self.num_pos_feats = num_pos_feats - self.temperature = temperature - self.normalize = normalize - if scale is not None and normalize is False: - raise ValueError("normalize should be True if scale is passed") - if scale is None: - scale = 2 * math.pi - self.scale = scale - - def forward(self, x, mask=None): - if mask is None: - mask = torch.zeros((x.size(0), x.size(2), x.size(3)), device=x.device, dtype=torch.bool) - not_mask = ~mask - y_embed = not_mask.cumsum(1, dtype=torch.float32) - x_embed = not_mask.cumsum(2, dtype=torch.float32) - if self.normalize: - eps = 1e-6 - y_embed = y_embed / (y_embed[:, -1:, :] + eps) * self.scale - x_embed = x_embed / (x_embed[:, :, -1:] + eps) * self.scale - - dim_t = torch.arange(self.num_pos_feats, dtype=torch.float32, device=x.device) - dim_t = self.temperature ** (2 * (dim_t // 2) / self.num_pos_feats) - - pos_x = x_embed[:, :, :, None] / dim_t - pos_y = y_embed[:, :, :, None] / dim_t - pos_x = torch.stack( - (pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()), dim=4 - ).flatten(3) - pos_y = torch.stack( - (pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()), dim=4 - ).flatten(3) - pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2) - return pos - - def __repr__(self, _repr_indent=4): - head = "Positional encoding " + self.__class__.__name__ - body = [ - "num_pos_feats: {}".format(self.num_pos_feats), - "temperature: {}".format(self.temperature), - "normalize: {}".format(self.normalize), - "scale: {}".format(self.scale), - ] - # _repr_indent = 4 - lines = [head] + [" " * _repr_indent + line for line in body] - return "\n".join(lines) diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/backbones/vit.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/backbones/vit.py deleted file mode 100644 index 413f9693bd4548342280e329c9128c1a52cea920..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/backbones/vit.py +++ /dev/null @@ -1,221 +0,0 @@ -import torch -import torch.nn as nn -import timm -import types -import math -import torch.nn.functional as F - -from .utils import (activations, forward_adapted_unflatten, get_activation, get_readout_oper, - make_backbone_default, Transpose) - - -def forward_vit(pretrained, x): - return forward_adapted_unflatten(pretrained, x, "forward_flex") - - -def _resize_pos_embed(self, posemb, gs_h, gs_w): - posemb_tok, posemb_grid = ( - posemb[:, : self.start_index], - posemb[0, self.start_index:], - ) - - gs_old = int(math.sqrt(len(posemb_grid))) - - posemb_grid = posemb_grid.reshape(1, gs_old, gs_old, -1).permute(0, 3, 1, 2) - posemb_grid = F.interpolate(posemb_grid, size=(gs_h, gs_w), mode="bilinear") - posemb_grid = posemb_grid.permute(0, 2, 3, 1).reshape(1, gs_h * gs_w, -1) - - posemb = torch.cat([posemb_tok, posemb_grid], dim=1) - - return posemb - - -def forward_flex(self, x): - b, c, h, w = x.shape - - pos_embed = self._resize_pos_embed( - self.pos_embed, h // self.patch_size[1], w // self.patch_size[0] - ) - - B = x.shape[0] - - if hasattr(self.patch_embed, "backbone"): - x = self.patch_embed.backbone(x) - if isinstance(x, (list, tuple)): - x = x[-1] # last feature if backbone outputs list/tuple of features - - x = 
self.patch_embed.proj(x).flatten(2).transpose(1, 2) - - if getattr(self, "dist_token", None) is not None: - cls_tokens = self.cls_token.expand( - B, -1, -1 - ) # stole cls_tokens impl from Phil Wang, thanks - dist_token = self.dist_token.expand(B, -1, -1) - x = torch.cat((cls_tokens, dist_token, x), dim=1) - else: - if self.no_embed_class: - x = x + pos_embed - cls_tokens = self.cls_token.expand( - B, -1, -1 - ) # stole cls_tokens impl from Phil Wang, thanks - x = torch.cat((cls_tokens, x), dim=1) - - if not self.no_embed_class: - x = x + pos_embed - x = self.pos_drop(x) - - for blk in self.blocks: - x = blk(x) - - x = self.norm(x) - - return x - - -def _make_vit_b16_backbone( - model, - features=[96, 192, 384, 768], - size=[384, 384], - hooks=[2, 5, 8, 11], - vit_features=768, - use_readout="ignore", - start_index=1, - start_index_readout=1, -): - pretrained = make_backbone_default(model, features, size, hooks, vit_features, use_readout, start_index, - start_index_readout) - - # We inject this function into the VisionTransformer instances so that - # we can use it with interpolated position embeddings without modifying the library source. - pretrained.model.forward_flex = types.MethodType(forward_flex, pretrained.model) - pretrained.model._resize_pos_embed = types.MethodType( - _resize_pos_embed, pretrained.model - ) - - return pretrained - - -def _make_pretrained_vitl16_384(pretrained, use_readout="ignore", hooks=None): - model = timm.create_model("vit_large_patch16_384", pretrained=pretrained) - - hooks = [5, 11, 17, 23] if hooks == None else hooks - return _make_vit_b16_backbone( - model, - features=[256, 512, 1024, 1024], - hooks=hooks, - vit_features=1024, - use_readout=use_readout, - ) - - -def _make_pretrained_vitb16_384(pretrained, use_readout="ignore", hooks=None): - model = timm.create_model("vit_base_patch16_384", pretrained=pretrained) - - hooks = [2, 5, 8, 11] if hooks == None else hooks - return _make_vit_b16_backbone( - model, features=[96, 192, 384, 768], hooks=hooks, use_readout=use_readout - ) - - -def _make_vit_b_rn50_backbone( - model, - features=[256, 512, 768, 768], - size=[384, 384], - hooks=[0, 1, 8, 11], - vit_features=768, - patch_size=[16, 16], - number_stages=2, - use_vit_only=False, - use_readout="ignore", - start_index=1, -): - pretrained = nn.Module() - - pretrained.model = model - - used_number_stages = 0 if use_vit_only else number_stages - for s in range(used_number_stages): - pretrained.model.patch_embed.backbone.stages[s].register_forward_hook( - get_activation(str(s + 1)) - ) - for s in range(used_number_stages, 4): - pretrained.model.blocks[hooks[s]].register_forward_hook(get_activation(str(s + 1))) - - pretrained.activations = activations - - readout_oper = get_readout_oper(vit_features, features, use_readout, start_index) - - for s in range(used_number_stages): - value = nn.Sequential(nn.Identity(), nn.Identity(), nn.Identity()) - exec(f"pretrained.act_postprocess{s + 1}=value") - for s in range(used_number_stages, 4): - if s < number_stages: - final_layer = nn.ConvTranspose2d( - in_channels=features[s], - out_channels=features[s], - kernel_size=4 // (2 ** s), - stride=4 // (2 ** s), - padding=0, - bias=True, - dilation=1, - groups=1, - ) - elif s > number_stages: - final_layer = nn.Conv2d( - in_channels=features[3], - out_channels=features[3], - kernel_size=3, - stride=2, - padding=1, - ) - else: - final_layer = None - - layers = [ - readout_oper[s], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - 
in_channels=vit_features, - out_channels=features[s], - kernel_size=1, - stride=1, - padding=0, - ), - ] - if final_layer is not None: - layers.append(final_layer) - - value = nn.Sequential(*layers) - exec(f"pretrained.act_postprocess{s + 1}=value") - - pretrained.model.start_index = start_index - pretrained.model.patch_size = patch_size - - # We inject this function into the VisionTransformer instances so that - # we can use it with interpolated position embeddings without modifying the library source. - pretrained.model.forward_flex = types.MethodType(forward_flex, pretrained.model) - - # We inject this function into the VisionTransformer instances so that - # we can use it with interpolated position embeddings without modifying the library source. - pretrained.model._resize_pos_embed = types.MethodType( - _resize_pos_embed, pretrained.model - ) - - return pretrained - - -def _make_pretrained_vitb_rn50_384( - pretrained, use_readout="ignore", hooks=None, use_vit_only=False -): - model = timm.create_model("vit_base_resnet50_384", pretrained=pretrained) - - hooks = [0, 1, 8, 11] if hooks == None else hooks - return _make_vit_b_rn50_backbone( - model, - features=[256, 512, 768, 768], - size=[384, 384], - hooks=hooks, - use_vit_only=use_vit_only, - use_readout=use_readout, - ) diff --git a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/models/arcface_torch/configs/glint360k_r50.py b/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/models/arcface_torch/configs/glint360k_r50.py deleted file mode 100644 index 37e7922f1f63284e356dcc45a5f979f9c105f25e..0000000000000000000000000000000000000000 --- a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/models/arcface_torch/configs/glint360k_r50.py +++ /dev/null @@ -1,26 +0,0 @@ -from easydict import EasyDict as edict - -# make training faster -# our RAM is 256G -# mount -t tmpfs -o size=140G tmpfs /train_tmp - -config = edict() -config.loss = "cosface" -config.network = "r50" -config.resume = False -config.output = None -config.embedding_size = 512 -config.sample_rate = 1.0 -config.fp16 = True -config.momentum = 0.9 -config.weight_decay = 5e-4 -config.batch_size = 128 -config.lr = 0.1 # batch size is 512 - -config.rec = "/train_tmp/glint360k" -config.num_classes = 360232 -config.num_image = 17091657 -config.num_epoch = 20 -config.warmup_epoch = -1 -config.decay_epoch = [8, 12, 15, 18] -config.val_targets = ["lfw", "cfp_fp", "agedb_30"] diff --git a/spaces/dafqi/indo_twitter_sentiment_app/sentence_bert/README.md b/spaces/dafqi/indo_twitter_sentiment_app/sentence_bert/README.md deleted file mode 100644 index b707360889900de393c3e498614bb5eb8ed1b415..0000000000000000000000000000000000000000 --- a/spaces/dafqi/indo_twitter_sentiment_app/sentence_bert/README.md +++ /dev/null @@ -1,136 +0,0 @@ ---- -pipeline_tag: sentence-similarity -tags: -- sentence-transformers -- feature-extraction -- sentence-similarity -- transformers - ---- - -# indo-sentence-bert-base - -This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. 
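
For instance, once sentences are encoded, semantic search reduces to comparing embeddings with cosine similarity. The sketch below is illustrative only (the query and candidate sentences are invented for this example); it uses the same model name as the usage snippets that follow:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('firqaaa/indo-sentence-bert-base')

# Illustrative query and candidates (made up for this sketch)
query = "Di mana Menara Eifel berada?"
candidates = [
    "Menara Eifel terletak di Paris, Perancis",
    "Pizza adalah makanan khas Italia",
]

q = model.encode(query)        # shape: (768,)
c = model.encode(candidates)   # shape: (len(candidates), 768)

# Cosine similarity = dot product of L2-normalised vectors
q = q / np.linalg.norm(q)
c = c / np.linalg.norm(c, axis=1, keepdims=True)
scores = c @ q

for sentence, score in sorted(zip(candidates, scores), key=lambda x: -x[1]):
    print(f"{score:.3f}  {sentence}")
```
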
- - - -## Usage (Sentence-Transformers) - -Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: - -``` -pip install -U sentence-transformers -``` - -Then you can use the model like this: - -```python -from sentence_transformers import SentenceTransformer -sentences = ["Ibukota Perancis adalah Paris", - "Menara Eifel terletak di Paris, Perancis", - "Pizza adalah makanan khas Italia", - "Saya kuliah di Carneige Mellon University"] - -model = SentenceTransformer('firqaaa/indo-sentence-bert-base') -embeddings = model.encode(sentences) -print(embeddings) -``` - - - -## Usage (HuggingFace Transformers) -Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. - -```python -from transformers import AutoTokenizer, AutoModel -import torch - - -#Mean Pooling - Take attention mask into account for correct averaging -def mean_pooling(model_output, attention_mask): - token_embeddings = model_output[0] #First element of model_output contains all token embeddings - input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() - return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) - - -# Sentences we want sentence embeddings for -sentences = ["Ibukota Perancis adalah Paris", - "Menara Eifel terletak di Paris, Perancis", - "Pizza adalah makanan khas Italia", - "Saya kuliah di Carneige Mellon University"] - - -# Load model from HuggingFace Hub -tokenizer = AutoTokenizer.from_pretrained('firqaaa/indo-sentence-bert-base') -model = AutoModel.from_pretrained('firqaaa/indo-sentence-bert-base') - -# Tokenize sentences -encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') - -# Compute token embeddings -with torch.no_grad(): - model_output = model(**encoded_input) - -# Perform pooling. In this case, mean pooling. 
-sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) - -print("Sentence embeddings:") -print(sentence_embeddings) -``` - - - -## Evaluation Results - - - -For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) - - -## Training -The model was trained with the parameters: - -**DataLoader**: - -`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 19644 with parameters: -``` -{'batch_size': 16} -``` - -**Loss**: - -`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: - ``` - {'scale': 20.0, 'similarity_fct': 'cos_sim'} - ``` - -Parameters of the fit()-Method: -``` -{ - "epochs": 5, - "evaluation_steps": 0, - "evaluator": "NoneType", - "max_grad_norm": 1, - "optimizer_class": "", - "optimizer_params": { - "lr": 2e-05 - }, - "scheduler": "WarmupLinear", - "steps_per_epoch": null, - "warmup_steps": 9930, - "weight_decay": 0.01 -} -``` - - -## Full Model Architecture -``` -SentenceTransformer( - (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel - (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) -) -``` - -## Citing & Authors - - \ No newline at end of file diff --git a/spaces/davidanthony-ai/DIGITALIXSA/README.md b/spaces/davidanthony-ai/DIGITALIXSA/README.md deleted file mode 100644 index b62f34f6a4a9d45d5e0aabb490b0b4bf4b4ba987..0000000000000000000000000000000000000000 --- a/spaces/davidanthony-ai/DIGITALIXSA/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Transcription Whisper Ditalixsa -emoji: 🌖 -colorFrom: indigo -colorTo: green -sdk: gradio -sdk_version: 3.32.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/davidpiscasio/unpaired-img2img/options/__init__.py b/spaces/davidpiscasio/unpaired-img2img/options/__init__.py deleted file mode 100644 index e7eedebe54aa70169fd25951b3034d819e396c90..0000000000000000000000000000000000000000 --- a/spaces/davidpiscasio/unpaired-img2img/options/__init__.py +++ /dev/null @@ -1 +0,0 @@ -"""This package options includes option modules: training options, test options, and basic options (used in both training and test).""" diff --git a/spaces/davidpiscasio/unpaired-img2img/util/util.py b/spaces/davidpiscasio/unpaired-img2img/util/util.py deleted file mode 100644 index b050c13e1d6d0f197af356b099b9c11c0714522c..0000000000000000000000000000000000000000 --- a/spaces/davidpiscasio/unpaired-img2img/util/util.py +++ /dev/null @@ -1,103 +0,0 @@ -"""This module contains simple helper functions """ -from __future__ import print_function -import torch -import numpy as np -from PIL import Image -import os - - -def tensor2im(input_image, imtype=np.uint8): - """"Converts a Tensor array into a numpy image array. 
- - Parameters: - input_image (tensor) -- the input image tensor array - imtype (type) -- the desired type of the converted numpy array - """ - if not isinstance(input_image, np.ndarray): - if isinstance(input_image, torch.Tensor): # get the data from a variable - image_tensor = input_image.data - else: - return input_image - image_numpy = image_tensor[0].cpu().float().numpy() # convert it into a numpy array - if image_numpy.shape[0] == 1: # grayscale to RGB - image_numpy = np.tile(image_numpy, (3, 1, 1)) - image_numpy = (np.transpose(image_numpy, (1, 2, 0)) + 1) / 2.0 * 255.0 # post-processing: tranpose and scaling - else: # if it is a numpy array, do nothing - image_numpy = input_image - return image_numpy.astype(imtype) - - -def diagnose_network(net, name='network'): - """Calculate and print the mean of average absolute(gradients) - - Parameters: - net (torch network) -- Torch network - name (str) -- the name of the network - """ - mean = 0.0 - count = 0 - for param in net.parameters(): - if param.grad is not None: - mean += torch.mean(torch.abs(param.grad.data)) - count += 1 - if count > 0: - mean = mean / count - print(name) - print(mean) - - -def save_image(image_numpy, image_path, aspect_ratio=1.0): - """Save a numpy image to the disk - - Parameters: - image_numpy (numpy array) -- input numpy array - image_path (str) -- the path of the image - """ - - image_pil = Image.fromarray(image_numpy) - h, w, _ = image_numpy.shape - - if aspect_ratio > 1.0: - image_pil = image_pil.resize((h, int(w * aspect_ratio)), Image.BICUBIC) - if aspect_ratio < 1.0: - image_pil = image_pil.resize((int(h / aspect_ratio), w), Image.BICUBIC) - image_pil.save(image_path) - - -def print_numpy(x, val=True, shp=False): - """Print the mean, min, max, median, std, and size of a numpy array - - Parameters: - val (bool) -- if print the values of the numpy array - shp (bool) -- if print the shape of the numpy array - """ - x = x.astype(np.float64) - if shp: - print('shape,', x.shape) - if val: - x = x.flatten() - print('mean = %3.3f, min = %3.3f, max = %3.3f, median = %3.3f, std=%3.3f' % ( - np.mean(x), np.min(x), np.max(x), np.median(x), np.std(x))) - - -def mkdirs(paths): - """create empty directories if they don't exist - - Parameters: - paths (str list) -- a list of directory paths - """ - if isinstance(paths, list) and not isinstance(paths, str): - for path in paths: - mkdir(path) - else: - mkdir(paths) - - -def mkdir(path): - """create a single empty directory if it didn't exist - - Parameters: - path (str) -- a single directory path - """ - if not os.path.exists(path): - os.makedirs(path) diff --git a/spaces/dawood/Kanye-AI/inference_main.py b/spaces/dawood/Kanye-AI/inference_main.py deleted file mode 100644 index 80a470ea9146f1f75e785411dd5d3b6fade64b70..0000000000000000000000000000000000000000 --- a/spaces/dawood/Kanye-AI/inference_main.py +++ /dev/null @@ -1,100 +0,0 @@ -import io -import logging -import time -from pathlib import Path - -import librosa -import matplotlib.pyplot as plt -import numpy as np -import soundfile - -from inference import infer_tool -from inference import slicer -from inference.infer_tool import Svc - -logging.getLogger('numba').setLevel(logging.WARNING) -chunks_dict = infer_tool.read_temp("inference/chunks_temp.json") - - - -def main(): - import argparse - - parser = argparse.ArgumentParser(description='sovits4 inference') - - # 一定要设置的部分 - parser.add_argument('-m', '--model_path', type=str, default="/Volumes/Extend/下载/G_20800.pth", help='模型路径') - parser.add_argument('-c', 
'--config_path', type=str, default="configs/config.json", help='配置文件路径') - parser.add_argument('-n', '--clean_names', type=str, nargs='+', default=["君の知らない物語-src"], help='wav文件名列表,放在raw文件夹下') - parser.add_argument('-t', '--trans', type=int, nargs='+', default=[0], help='音高调整,支持正负(半音)') - parser.add_argument('-s', '--spk_list', type=str, nargs='+', default=['nyaru'], help='合成目标说话人名称') - - # 可选项部分 - parser.add_argument('-a', '--auto_predict_f0', action='store_true', default=False, - help='语音转换自动预测音高,转换歌声时不要打开这个会严重跑调') - parser.add_argument('-cm', '--cluster_model_path', type=str, default="/Volumes/Extend/下载/so-vits-svc-4.0/logs/44k/kmeans_10000.pt", help='聚类模型路径,如果没有训练聚类则随便填') - parser.add_argument('-cr', '--cluster_infer_ratio', type=float, default=1, help='聚类方案占比,范围0-1,若没有训练聚类模型则填0即可') - - # 不用动的部分 - parser.add_argument('-sd', '--slice_db', type=int, default=-40, help='默认-40,嘈杂的音频可以-30,干声保留呼吸可以-50') - parser.add_argument('-d', '--device', type=str, default=None, help='推理设备,None则为自动选择cpu和gpu') - parser.add_argument('-ns', '--noice_scale', type=float, default=0.4, help='噪音级别,会影响咬字和音质,较为玄学') - parser.add_argument('-p', '--pad_seconds', type=float, default=0.5, help='推理音频pad秒数,由于未知原因开头结尾会有异响,pad一小段静音段后就不会出现') - parser.add_argument('-wf', '--wav_format', type=str, default='flac', help='音频输出格式') - - args = parser.parse_args() - - svc_model = Svc(args.model_path, args.config_path, args.device, args.cluster_model_path) - infer_tool.mkdir(["raw", "results"]) - clean_names = args.clean_names - trans = args.trans - spk_list = args.spk_list - slice_db = args.slice_db - wav_format = args.wav_format - auto_predict_f0 = args.auto_predict_f0 - cluster_infer_ratio = args.cluster_infer_ratio - noice_scale = args.noice_scale - pad_seconds = args.pad_seconds - - infer_tool.fill_a_to_b(trans, clean_names) - for clean_name, tran in zip(clean_names, trans): - raw_audio_path = f"raw/{clean_name}" - if "." 
not in raw_audio_path: - raw_audio_path += ".wav" - infer_tool.format_wav(raw_audio_path) - wav_path = Path(raw_audio_path).with_suffix('.wav') - chunks = slicer.cut(wav_path, db_thresh=slice_db) - audio_data, audio_sr = slicer.chunks2audio(wav_path, chunks) - - for spk in spk_list: - audio = [] - for (slice_tag, data) in audio_data: - print(f'#=====segment start, {round(len(data) / audio_sr, 3)}s======') - # padd - pad_len = int(audio_sr * pad_seconds) - data = np.concatenate([np.zeros([pad_len]), data, np.zeros([pad_len])]) - length = int(np.ceil(len(data) / audio_sr * svc_model.target_sample)) - raw_path = io.BytesIO() - soundfile.write(raw_path, data, audio_sr, format="wav") - raw_path.seek(0) - if slice_tag: - print('jump empty segment') - _audio = np.zeros(length) - else: - out_audio, out_sr = svc_model.infer(spk, tran, raw_path, - cluster_infer_ratio=cluster_infer_ratio, - auto_predict_f0=auto_predict_f0, - noice_scale=noice_scale - ) - _audio = out_audio.cpu().numpy() - - pad_len = int(svc_model.target_sample * pad_seconds) - _audio = _audio[pad_len:-pad_len] - audio.extend(list(_audio)) - key = "auto" if auto_predict_f0 else f"{tran}key" - cluster_name = "" if cluster_infer_ratio == 0 else f"_{cluster_infer_ratio}" - res_path = f'./results/old——{clean_name}_{key}_{spk}{cluster_name}.{wav_format}' - soundfile.write(res_path, audio, svc_model.target_sample, format=wav_format) - -if __name__ == '__main__': - main() diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/sbixStrike.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/sbixStrike.py deleted file mode 100644 index 7614af4c7b325c363c0b30edfc85a478aa15f01b..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/sbixStrike.py +++ /dev/null @@ -1,177 +0,0 @@ -from fontTools.misc import sstruct -from fontTools.misc.textTools import safeEval -from .sbixGlyph import Glyph -import struct - -sbixStrikeHeaderFormat = """ - > - ppem: H # The PPEM for which this strike was designed (e.g., 9, - # 12, 24) - resolution: H # The screen resolution (in dpi) for which this strike - # was designed (e.g., 72) -""" - -sbixGlyphDataOffsetFormat = """ - > - glyphDataOffset: L # Offset from the beginning of the strike data record - # to data for the individual glyph -""" - -sbixStrikeHeaderFormatSize = sstruct.calcsize(sbixStrikeHeaderFormat) -sbixGlyphDataOffsetFormatSize = sstruct.calcsize(sbixGlyphDataOffsetFormat) - - -class Strike(object): - def __init__(self, rawdata=None, ppem=0, resolution=72): - self.data = rawdata - self.ppem = ppem - self.resolution = resolution - self.glyphs = {} - - def decompile(self, ttFont): - if self.data is None: - from fontTools import ttLib - - raise ttLib.TTLibError - if len(self.data) < sbixStrikeHeaderFormatSize: - from fontTools import ttLib - - raise ( - ttLib.TTLibError, - "Strike header too short: Expected %x, got %x.", - ) % (sbixStrikeHeaderFormatSize, len(self.data)) - - # read Strike header from raw data - sstruct.unpack( - sbixStrikeHeaderFormat, self.data[:sbixStrikeHeaderFormatSize], self - ) - - # calculate number of glyphs - (firstGlyphDataOffset,) = struct.unpack( - ">L", - self.data[ - sbixStrikeHeaderFormatSize : sbixStrikeHeaderFormatSize - + sbixGlyphDataOffsetFormatSize - ], - ) - self.numGlyphs = ( - firstGlyphDataOffset - sbixStrikeHeaderFormatSize - ) // sbixGlyphDataOffsetFormatSize - 1 - 
# ^ -1 because there's one more offset than glyphs - - # build offset list for single glyph data offsets - self.glyphDataOffsets = [] - for i in range( - self.numGlyphs + 1 - ): # + 1 because there's one more offset than glyphs - start = i * sbixGlyphDataOffsetFormatSize + sbixStrikeHeaderFormatSize - (current_offset,) = struct.unpack( - ">L", self.data[start : start + sbixGlyphDataOffsetFormatSize] - ) - self.glyphDataOffsets.append(current_offset) - - # iterate through offset list and slice raw data into glyph data records - for i in range(self.numGlyphs): - current_glyph = Glyph( - rawdata=self.data[ - self.glyphDataOffsets[i] : self.glyphDataOffsets[i + 1] - ], - gid=i, - ) - current_glyph.decompile(ttFont) - self.glyphs[current_glyph.glyphName] = current_glyph - del self.glyphDataOffsets - del self.numGlyphs - del self.data - - def compile(self, ttFont): - self.glyphDataOffsets = b"" - self.bitmapData = b"" - - glyphOrder = ttFont.getGlyphOrder() - - # first glyph starts right after the header - currentGlyphDataOffset = ( - sbixStrikeHeaderFormatSize - + sbixGlyphDataOffsetFormatSize * (len(glyphOrder) + 1) - ) - for glyphName in glyphOrder: - if glyphName in self.glyphs: - # we have glyph data for this glyph - current_glyph = self.glyphs[glyphName] - else: - # must add empty glyph data record for this glyph - current_glyph = Glyph(glyphName=glyphName) - current_glyph.compile(ttFont) - current_glyph.glyphDataOffset = currentGlyphDataOffset - self.bitmapData += current_glyph.rawdata - currentGlyphDataOffset += len(current_glyph.rawdata) - self.glyphDataOffsets += sstruct.pack( - sbixGlyphDataOffsetFormat, current_glyph - ) - - # add last "offset", really the end address of the last glyph data record - dummy = Glyph() - dummy.glyphDataOffset = currentGlyphDataOffset - self.glyphDataOffsets += sstruct.pack(sbixGlyphDataOffsetFormat, dummy) - - # pack header - self.data = sstruct.pack(sbixStrikeHeaderFormat, self) - # add offsets and image data after header - self.data += self.glyphDataOffsets + self.bitmapData - - def toXML(self, xmlWriter, ttFont): - xmlWriter.begintag("strike") - xmlWriter.newline() - xmlWriter.simpletag("ppem", value=self.ppem) - xmlWriter.newline() - xmlWriter.simpletag("resolution", value=self.resolution) - xmlWriter.newline() - glyphOrder = ttFont.getGlyphOrder() - for i in range(len(glyphOrder)): - if glyphOrder[i] in self.glyphs: - self.glyphs[glyphOrder[i]].toXML(xmlWriter, ttFont) - # TODO: what if there are more glyph data records than (glyf table) glyphs? 
- xmlWriter.endtag("strike") - xmlWriter.newline() - - def fromXML(self, name, attrs, content, ttFont): - if name in ["ppem", "resolution"]: - setattr(self, name, safeEval(attrs["value"])) - elif name == "glyph": - if "graphicType" in attrs: - myFormat = safeEval("'''" + attrs["graphicType"] + "'''") - else: - myFormat = None - if "glyphname" in attrs: - myGlyphName = safeEval("'''" + attrs["glyphname"] + "'''") - elif "name" in attrs: - myGlyphName = safeEval("'''" + attrs["name"] + "'''") - else: - from fontTools import ttLib - - raise ttLib.TTLibError("Glyph must have a glyph name.") - if "originOffsetX" in attrs: - myOffsetX = safeEval(attrs["originOffsetX"]) - else: - myOffsetX = 0 - if "originOffsetY" in attrs: - myOffsetY = safeEval(attrs["originOffsetY"]) - else: - myOffsetY = 0 - current_glyph = Glyph( - glyphName=myGlyphName, - graphicType=myFormat, - originOffsetX=myOffsetX, - originOffsetY=myOffsetY, - ) - for element in content: - if isinstance(element, tuple): - name, attrs, content = element - current_glyph.fromXML(name, attrs, content, ttFont) - current_glyph.compile(ttFont) - self.glyphs[current_glyph.glyphName] = current_glyph - else: - from fontTools import ttLib - - raise ttLib.TTLibError("can't handle '%s' element" % name) diff --git a/spaces/declare-lab/tango/diffusers/examples/research_projects/mulit_token_textual_inversion/textual_inversion.py b/spaces/declare-lab/tango/diffusers/examples/research_projects/mulit_token_textual_inversion/textual_inversion.py deleted file mode 100644 index 622c51d2e52e37d91e9551138efaac54f76fcd0d..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/examples/research_projects/mulit_token_textual_inversion/textual_inversion.py +++ /dev/null @@ -1,927 +0,0 @@ -#!/usr/bin/env python -# coding=utf-8 -# Copyright 2023 The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-# See the License for the specific language governing permissions and - -import argparse -import logging -import math -import os -import random -from pathlib import Path - -import numpy as np -import PIL -import torch -import torch.nn.functional as F -import torch.utils.checkpoint -import transformers -from accelerate import Accelerator -from accelerate.logging import get_logger -from accelerate.utils import ProjectConfiguration, set_seed -from huggingface_hub import create_repo, upload_folder -from multi_token_clip import MultiTokenCLIPTokenizer - -# TODO: remove and import from diffusers.utils when the new version of diffusers is released -from packaging import version -from PIL import Image -from torch.utils.data import Dataset -from torchvision import transforms -from tqdm.auto import tqdm -from transformers import CLIPTextModel - -import diffusers -from diffusers import ( - AutoencoderKL, - DDPMScheduler, - DiffusionPipeline, - DPMSolverMultistepScheduler, - StableDiffusionPipeline, - UNet2DConditionModel, -) -from diffusers.optimization import get_scheduler -from diffusers.utils import check_min_version, is_wandb_available -from diffusers.utils.import_utils import is_xformers_available - - -if version.parse(version.parse(PIL.__version__).base_version) >= version.parse("9.1.0"): - PIL_INTERPOLATION = { - "linear": PIL.Image.Resampling.BILINEAR, - "bilinear": PIL.Image.Resampling.BILINEAR, - "bicubic": PIL.Image.Resampling.BICUBIC, - "lanczos": PIL.Image.Resampling.LANCZOS, - "nearest": PIL.Image.Resampling.NEAREST, - } -else: - PIL_INTERPOLATION = { - "linear": PIL.Image.LINEAR, - "bilinear": PIL.Image.BILINEAR, - "bicubic": PIL.Image.BICUBIC, - "lanczos": PIL.Image.LANCZOS, - "nearest": PIL.Image.NEAREST, - } -# ------------------------------------------------------------------------------ - - -# Will error if the minimal version of diffusers is not installed. Remove at your own risks. 
-check_min_version("0.14.0.dev0") - -logger = get_logger(__name__) - - -def add_tokens(tokenizer, text_encoder, placeholder_token, num_vec_per_token=1, initializer_token=None): - """ - Add tokens to the tokenizer and set the initial value of token embeddings - """ - tokenizer.add_placeholder_tokens(placeholder_token, num_vec_per_token=num_vec_per_token) - text_encoder.resize_token_embeddings(len(tokenizer)) - token_embeds = text_encoder.get_input_embeddings().weight.data - placeholder_token_ids = tokenizer.encode(placeholder_token, add_special_tokens=False) - if initializer_token: - token_ids = tokenizer.encode(initializer_token, add_special_tokens=False) - for i, placeholder_token_id in enumerate(placeholder_token_ids): - token_embeds[placeholder_token_id] = token_embeds[token_ids[i * len(token_ids) // num_vec_per_token]] - else: - for i, placeholder_token_id in enumerate(placeholder_token_ids): - token_embeds[placeholder_token_id] = torch.randn_like(token_embeds[placeholder_token_id]) - return placeholder_token - - -def save_progress(tokenizer, text_encoder, accelerator, save_path): - for placeholder_token in tokenizer.token_map: - placeholder_token_ids = tokenizer.encode(placeholder_token, add_special_tokens=False) - learned_embeds = accelerator.unwrap_model(text_encoder).get_input_embeddings().weight[placeholder_token_ids] - if len(placeholder_token_ids) == 1: - learned_embeds = learned_embeds[None] - learned_embeds_dict = {placeholder_token: learned_embeds.detach().cpu()} - torch.save(learned_embeds_dict, save_path) - - -def load_multitoken_tokenizer(tokenizer, text_encoder, learned_embeds_dict): - for placeholder_token in learned_embeds_dict: - placeholder_embeds = learned_embeds_dict[placeholder_token] - num_vec_per_token = placeholder_embeds.shape[0] - placeholder_embeds = placeholder_embeds.to(dtype=text_encoder.dtype) - add_tokens(tokenizer, text_encoder, placeholder_token, num_vec_per_token=num_vec_per_token) - placeholder_token_ids = tokenizer.encode(placeholder_token, add_special_tokens=False) - token_embeds = text_encoder.get_input_embeddings().weight.data - for i, placeholder_token_id in enumerate(placeholder_token_ids): - token_embeds[placeholder_token_id] = placeholder_embeds[i] - - -def load_multitoken_tokenizer_from_automatic(tokenizer, text_encoder, automatic_dict, placeholder_token): - """ - Automatic1111's tokens have format - {'string_to_token': {'*': 265}, 'string_to_param': {'*': tensor([[ 0.0833, 0.0030, 0.0057, ..., -0.0264, -0.0616, -0.0529], - [ 0.0058, -0.0190, -0.0584, ..., -0.0025, -0.0945, -0.0490], - [ 0.0916, 0.0025, 0.0365, ..., -0.0685, -0.0124, 0.0728], - [ 0.0812, -0.0199, -0.0100, ..., -0.0581, -0.0780, 0.0254]], - requires_grad=True)}, 'name': 'FloralMarble-400', 'step': 399, 'sd_checkpoint': '4bdfc29c', 'sd_checkpoint_name': 'SD2.1-768'} - """ - learned_embeds_dict = {} - learned_embeds_dict[placeholder_token] = automatic_dict["string_to_param"]["*"] - load_multitoken_tokenizer(tokenizer, text_encoder, learned_embeds_dict) - - -def get_mask(tokenizer, accelerator): - # Get the mask of the weights that won't change - mask = torch.ones(len(tokenizer)).to(accelerator.device, dtype=torch.bool) - for placeholder_token in tokenizer.token_map: - placeholder_token_ids = tokenizer.encode(placeholder_token, add_special_tokens=False) - for i in range(len(placeholder_token_ids)): - mask = mask & (torch.arange(len(tokenizer)) != placeholder_token_ids[i]).to(accelerator.device) - return mask - - -def parse_args(): - parser = 
argparse.ArgumentParser(description="Simple example of a training script.") - parser.add_argument( - "--progressive_tokens_max_steps", - type=int, - default=2000, - help="The number of steps until all tokens will be used.", - ) - parser.add_argument( - "--progressive_tokens", - action="store_true", - help="Progressively train the tokens. For example, first train for 1 token, then 2 tokens and so on.", - ) - parser.add_argument("--vector_shuffle", action="store_true", help="Shuffling tokens durint training") - parser.add_argument( - "--num_vec_per_token", - type=int, - default=1, - help=( - "The number of vectors used to represent the placeholder token. The higher the number, the better the" - " result at the cost of editability. This can be fixed by prompt editing." - ), - ) - parser.add_argument( - "--save_steps", - type=int, - default=500, - help="Save learned_embeds.bin every X updates steps.", - ) - parser.add_argument( - "--only_save_embeds", - action="store_true", - default=False, - help="Save only the embeddings for the new concept.", - ) - parser.add_argument( - "--pretrained_model_name_or_path", - type=str, - default=None, - required=True, - help="Path to pretrained model or model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--revision", - type=str, - default=None, - required=False, - help="Revision of pretrained model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--tokenizer_name", - type=str, - default=None, - help="Pretrained tokenizer name or path if not the same as model_name", - ) - parser.add_argument( - "--train_data_dir", type=str, default=None, required=True, help="A folder containing the training data." - ) - parser.add_argument( - "--placeholder_token", - type=str, - default=None, - required=True, - help="A token to use as a placeholder for the concept.", - ) - parser.add_argument( - "--initializer_token", type=str, default=None, required=True, help="A token to use as initializer word." - ) - parser.add_argument("--learnable_property", type=str, default="object", help="Choose between 'object' and 'style'") - parser.add_argument("--repeats", type=int, default=100, help="How many times to repeat the training data.") - parser.add_argument( - "--output_dir", - type=str, - default="text-inversion-model", - help="The output directory where the model predictions and checkpoints will be written.", - ) - parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.") - parser.add_argument( - "--resolution", - type=int, - default=512, - help=( - "The resolution for input images, all the images in the train/validation dataset will be resized to this" - " resolution" - ), - ) - parser.add_argument( - "--center_crop", action="store_true", help="Whether to center crop images before resizing to resolution." - ) - parser.add_argument( - "--train_batch_size", type=int, default=16, help="Batch size (per device) for the training dataloader." - ) - parser.add_argument("--num_train_epochs", type=int, default=100) - parser.add_argument( - "--max_train_steps", - type=int, - default=5000, - help="Total number of training steps to perform. 
If provided, overrides num_train_epochs.", - ) - parser.add_argument( - "--gradient_accumulation_steps", - type=int, - default=1, - help="Number of updates steps to accumulate before performing a backward/update pass.", - ) - parser.add_argument( - "--gradient_checkpointing", - action="store_true", - help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.", - ) - parser.add_argument( - "--learning_rate", - type=float, - default=1e-4, - help="Initial learning rate (after the potential warmup period) to use.", - ) - parser.add_argument( - "--scale_lr", - action="store_true", - default=False, - help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.", - ) - parser.add_argument( - "--lr_scheduler", - type=str, - default="constant", - help=( - 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' - ' "constant", "constant_with_warmup"]' - ), - ) - parser.add_argument( - "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler." - ) - parser.add_argument( - "--dataloader_num_workers", - type=int, - default=0, - help=( - "Number of subprocesses to use for data loading. 0 means that the data will be loaded in the main process." - ), - ) - parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.") - parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.") - parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.") - parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer") - parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") - parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") - parser.add_argument( - "--hub_model_id", - type=str, - default=None, - help="The name of the repository to keep in sync with the local `output_dir`.", - ) - parser.add_argument( - "--logging_dir", - type=str, - default="logs", - help=( - "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" - " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." - ), - ) - parser.add_argument( - "--mixed_precision", - type=str, - default="no", - choices=["no", "fp16", "bf16"], - help=( - "Whether to use mixed precision. Choose" - "between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10." - "and an Nvidia Ampere GPU." - ), - ) - parser.add_argument( - "--allow_tf32", - action="store_true", - help=( - "Whether or not to allow TF32 on Ampere GPUs. Can be used to speed up training. For more information, see" - " https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices" - ), - ) - parser.add_argument( - "--report_to", - type=str, - default="tensorboard", - help=( - 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`' - ' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.' 
- ), - ) - parser.add_argument( - "--validation_prompt", - type=str, - default=None, - help="A prompt that is used during validation to verify that the model is learning.", - ) - parser.add_argument( - "--num_validation_images", - type=int, - default=4, - help="Number of images that should be generated during validation with `validation_prompt`.", - ) - parser.add_argument( - "--validation_epochs", - type=int, - default=50, - help=( - "Run validation every X epochs. Validation consists of running the prompt" - " `args.validation_prompt` multiple times: `args.num_validation_images`" - " and logging the images." - ), - ) - parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") - parser.add_argument( - "--checkpointing_steps", - type=int, - default=500, - help=( - "Save a checkpoint of the training state every X updates. These checkpoints are only suitable for resuming" - " training using `--resume_from_checkpoint`." - ), - ) - parser.add_argument( - "--checkpoints_total_limit", - type=int, - default=None, - help=( - "Max number of checkpoints to store. Passed as `total_limit` to the `Accelerator` `ProjectConfiguration`." - " See Accelerator::save_state https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator.save_state" - " for more docs" - ), - ) - parser.add_argument( - "--resume_from_checkpoint", - type=str, - default=None, - help=( - "Whether training should be resumed from a previous checkpoint. Use a path saved by" - ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.' - ), - ) - parser.add_argument( - "--enable_xformers_memory_efficient_attention", action="store_true", help="Whether or not to use xformers." - ) - - args = parser.parse_args() - env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) - if env_local_rank != -1 and env_local_rank != args.local_rank: - args.local_rank = env_local_rank - - if args.train_data_dir is None: - raise ValueError("You must specify a train data directory.") - - return args - - -imagenet_templates_small = [ - "a photo of a {}", - "a rendering of a {}", - "a cropped photo of the {}", - "the photo of a {}", - "a photo of a clean {}", - "a photo of a dirty {}", - "a dark photo of the {}", - "a photo of my {}", - "a photo of the cool {}", - "a close-up photo of a {}", - "a bright photo of the {}", - "a cropped photo of a {}", - "a photo of the {}", - "a good photo of the {}", - "a photo of one {}", - "a close-up photo of the {}", - "a rendition of the {}", - "a photo of the clean {}", - "a rendition of a {}", - "a photo of a nice {}", - "a good photo of a {}", - "a photo of the nice {}", - "a photo of the small {}", - "a photo of the weird {}", - "a photo of the large {}", - "a photo of a cool {}", - "a photo of a small {}", -] - -imagenet_style_templates_small = [ - "a painting in the style of {}", - "a rendering in the style of {}", - "a cropped painting in the style of {}", - "the painting in the style of {}", - "a clean painting in the style of {}", - "a dirty painting in the style of {}", - "a dark painting in the style of {}", - "a picture in the style of {}", - "a cool painting in the style of {}", - "a close-up painting in the style of {}", - "a bright painting in the style of {}", - "a cropped painting in the style of {}", - "a good painting in the style of {}", - "a close-up painting in the style of {}", - "a rendition in the style of {}", - "a nice painting in the style of {}", - "a small painting in the style of {}", 
- "a weird painting in the style of {}", - "a large painting in the style of {}", -] - - -class TextualInversionDataset(Dataset): - def __init__( - self, - data_root, - tokenizer, - learnable_property="object", # [object, style] - size=512, - repeats=100, - interpolation="bicubic", - flip_p=0.5, - set="train", - placeholder_token="*", - center_crop=False, - vector_shuffle=False, - progressive_tokens=False, - ): - self.data_root = data_root - self.tokenizer = tokenizer - self.learnable_property = learnable_property - self.size = size - self.placeholder_token = placeholder_token - self.center_crop = center_crop - self.flip_p = flip_p - self.vector_shuffle = vector_shuffle - self.progressive_tokens = progressive_tokens - self.prop_tokens_to_load = 0 - - self.image_paths = [os.path.join(self.data_root, file_path) for file_path in os.listdir(self.data_root)] - - self.num_images = len(self.image_paths) - self._length = self.num_images - - if set == "train": - self._length = self.num_images * repeats - - self.interpolation = { - "linear": PIL_INTERPOLATION["linear"], - "bilinear": PIL_INTERPOLATION["bilinear"], - "bicubic": PIL_INTERPOLATION["bicubic"], - "lanczos": PIL_INTERPOLATION["lanczos"], - }[interpolation] - - self.templates = imagenet_style_templates_small if learnable_property == "style" else imagenet_templates_small - self.flip_transform = transforms.RandomHorizontalFlip(p=self.flip_p) - - def __len__(self): - return self._length - - def __getitem__(self, i): - example = {} - image = Image.open(self.image_paths[i % self.num_images]) - - if not image.mode == "RGB": - image = image.convert("RGB") - - placeholder_string = self.placeholder_token - text = random.choice(self.templates).format(placeholder_string) - - example["input_ids"] = self.tokenizer.encode( - text, - padding="max_length", - truncation=True, - max_length=self.tokenizer.model_max_length, - return_tensors="pt", - vector_shuffle=self.vector_shuffle, - prop_tokens_to_load=self.prop_tokens_to_load if self.progressive_tokens else 1.0, - )[0] - - # default to score-sde preprocessing - img = np.array(image).astype(np.uint8) - - if self.center_crop: - crop = min(img.shape[0], img.shape[1]) - ( - h, - w, - ) = ( - img.shape[0], - img.shape[1], - ) - img = img[(h - crop) // 2 : (h + crop) // 2, (w - crop) // 2 : (w + crop) // 2] - - image = Image.fromarray(img) - image = image.resize((self.size, self.size), resample=self.interpolation) - - image = self.flip_transform(image) - image = np.array(image).astype(np.uint8) - image = (image / 127.5 - 1.0).astype(np.float32) - - example["pixel_values"] = torch.from_numpy(image).permute(2, 0, 1) - return example - - -def main(): - args = parse_args() - logging_dir = os.path.join(args.output_dir, args.logging_dir) - - accelerator_project_config = ProjectConfiguration(total_limit=args.checkpoints_total_limit) - - accelerator = Accelerator( - gradient_accumulation_steps=args.gradient_accumulation_steps, - mixed_precision=args.mixed_precision, - log_with=args.report_to, - logging_dir=logging_dir, - project_config=accelerator_project_config, - ) - - if args.report_to == "wandb": - if not is_wandb_available(): - raise ImportError("Make sure to install wandb if you want to use it for logging during training.") - import wandb - - # Make one log on every process with the configuration for debugging. 
- logging.basicConfig( - format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", - datefmt="%m/%d/%Y %H:%M:%S", - level=logging.INFO, - ) - logger.info(accelerator.state, main_process_only=False) - if accelerator.is_local_main_process: - transformers.utils.logging.set_verbosity_warning() - diffusers.utils.logging.set_verbosity_info() - else: - transformers.utils.logging.set_verbosity_error() - diffusers.utils.logging.set_verbosity_error() - - # If passed along, set the training seed now. - if args.seed is not None: - set_seed(args.seed) - - # Handle the repository creation - if accelerator.is_main_process: - if args.output_dir is not None: - os.makedirs(args.output_dir, exist_ok=True) - - if args.push_to_hub: - repo_id = create_repo( - repo_id=args.hub_model_id or Path(args.output_dir).name, exist_ok=True, token=args.hub_token - ).repo_id - - # Load tokenizer - if args.tokenizer_name: - tokenizer = MultiTokenCLIPTokenizer.from_pretrained(args.tokenizer_name) - elif args.pretrained_model_name_or_path: - tokenizer = MultiTokenCLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer") - - # Load scheduler and models - noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") - text_encoder = CLIPTextModel.from_pretrained( - args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision - ) - vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision) - unet = UNet2DConditionModel.from_pretrained( - args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision - ) - if is_xformers_available(): - try: - unet.enable_xformers_memory_efficient_attention() - except Exception as e: - logger.warning( - "Could not enable memory efficient attention. Make sure xformers is installed" - f" correctly and a GPU is available: {e}" - ) - add_tokens(tokenizer, text_encoder, args.placeholder_token, args.num_vec_per_token, args.initializer_token) - - # Freeze vae and unet - vae.requires_grad_(False) - unet.requires_grad_(False) - # Freeze all parameters except for the token embeddings in text encoder - text_encoder.text_model.encoder.requires_grad_(False) - text_encoder.text_model.final_layer_norm.requires_grad_(False) - text_encoder.text_model.embeddings.position_embedding.requires_grad_(False) - - if args.gradient_checkpointing: - # Keep unet in train mode if we are using gradient checkpointing to save memory. - # The dropout cannot be != 0 so it doesn't matter if we are in eval or train mode. - unet.train() - text_encoder.gradient_checkpointing_enable() - unet.enable_gradient_checkpointing() - - if args.enable_xformers_memory_efficient_attention: - if is_xformers_available(): - import xformers - - xformers_version = version.parse(xformers.__version__) - if xformers_version == version.parse("0.0.16"): - logger.warn( - "xFormers 0.0.16 cannot be used for training in some GPUs. If you observe problems during training, please update xFormers to at least 0.0.17. See https://huggingface.co/docs/diffusers/main/en/optimization/xformers for more details." - ) - unet.enable_xformers_memory_efficient_attention() - else: - raise ValueError("xformers is not available. 
Make sure it is installed correctly") - - # Enable TF32 for faster training on Ampere GPUs, - # cf https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices - if args.allow_tf32: - torch.backends.cuda.matmul.allow_tf32 = True - - if args.scale_lr: - args.learning_rate = ( - args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes - ) - - # Initialize the optimizer - optimizer = torch.optim.AdamW( - text_encoder.get_input_embeddings().parameters(), # only optimize the embeddings - lr=args.learning_rate, - betas=(args.adam_beta1, args.adam_beta2), - weight_decay=args.adam_weight_decay, - eps=args.adam_epsilon, - ) - - # Dataset and DataLoaders creation: - train_dataset = TextualInversionDataset( - data_root=args.train_data_dir, - tokenizer=tokenizer, - size=args.resolution, - placeholder_token=args.placeholder_token, - repeats=args.repeats, - learnable_property=args.learnable_property, - center_crop=args.center_crop, - set="train", - ) - train_dataloader = torch.utils.data.DataLoader( - train_dataset, batch_size=args.train_batch_size, shuffle=True, num_workers=args.dataloader_num_workers - ) - - # Scheduler and math around the number of training steps. - overrode_max_train_steps = False - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if args.max_train_steps is None: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - overrode_max_train_steps = True - - lr_scheduler = get_scheduler( - args.lr_scheduler, - optimizer=optimizer, - num_warmup_steps=args.lr_warmup_steps * args.gradient_accumulation_steps, - num_training_steps=args.max_train_steps * args.gradient_accumulation_steps, - ) - - # Prepare everything with our `accelerator`. - text_encoder, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - text_encoder, optimizer, train_dataloader, lr_scheduler - ) - - # For mixed precision training we cast the unet and vae weights to half-precision - # as these models are only used for inference, keeping weights in full precision is not required. - weight_dtype = torch.float32 - if accelerator.mixed_precision == "fp16": - weight_dtype = torch.float16 - elif accelerator.mixed_precision == "bf16": - weight_dtype = torch.bfloat16 - - # Move vae and unet to device and cast to weight_dtype - unet.to(accelerator.device, dtype=weight_dtype) - vae.to(accelerator.device, dtype=weight_dtype) - - # We need to recalculate our total training steps as the size of the training dataloader may have changed. - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if overrode_max_train_steps: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - # Afterwards we recalculate our number of training epochs - args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) - - # We need to initialize the trackers we use, and also store our configuration. - # The trackers initializes automatically on the main process. - if accelerator.is_main_process: - accelerator.init_trackers("textual_inversion", config=vars(args)) - - # Train! 
- total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps - - logger.info("***** Running training *****") - logger.info(f" Num examples = {len(train_dataset)}") - logger.info(f" Num Epochs = {args.num_train_epochs}") - logger.info(f" Instantaneous batch size per device = {args.train_batch_size}") - logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}") - logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") - logger.info(f" Total optimization steps = {args.max_train_steps}") - global_step = 0 - first_epoch = 0 - - # Potentially load in the weights and states from a previous save - if args.resume_from_checkpoint: - if args.resume_from_checkpoint != "latest": - path = os.path.basename(args.resume_from_checkpoint) - else: - # Get the most recent checkpoint - dirs = os.listdir(args.output_dir) - dirs = [d for d in dirs if d.startswith("checkpoint")] - dirs = sorted(dirs, key=lambda x: int(x.split("-")[1])) - path = dirs[-1] if len(dirs) > 0 else None - - if path is None: - accelerator.print( - f"Checkpoint '{args.resume_from_checkpoint}' does not exist. Starting a new training run." - ) - args.resume_from_checkpoint = None - else: - accelerator.print(f"Resuming from checkpoint {path}") - accelerator.load_state(os.path.join(args.output_dir, path)) - global_step = int(path.split("-")[1]) - - resume_global_step = global_step * args.gradient_accumulation_steps - first_epoch = global_step // num_update_steps_per_epoch - resume_step = resume_global_step % (num_update_steps_per_epoch * args.gradient_accumulation_steps) - - # Only show the progress bar once on each machine. - progress_bar = tqdm(range(global_step, args.max_train_steps), disable=not accelerator.is_local_main_process) - progress_bar.set_description("Steps") - - # keep original embeddings as reference - orig_embeds_params = accelerator.unwrap_model(text_encoder).get_input_embeddings().weight.data.clone() - - for epoch in range(first_epoch, args.num_train_epochs): - text_encoder.train() - for step, batch in enumerate(train_dataloader): - # Skip steps until we reach the resumed step - if args.resume_from_checkpoint and epoch == first_epoch and step < resume_step: - if step % args.gradient_accumulation_steps == 0: - progress_bar.update(1) - continue - if args.progressive_tokens: - train_dataset.prop_tokens_to_load = float(global_step) / args.progressive_tokens_max_steps - - with accelerator.accumulate(text_encoder): - # Convert images to latent space - latents = vae.encode(batch["pixel_values"].to(dtype=weight_dtype)).latent_dist.sample().detach() - latents = latents * vae.config.scaling_factor - - # Sample noise that we'll add to the latents - noise = torch.randn_like(latents) - bsz = latents.shape[0] - # Sample a random timestep for each image - timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device) - timesteps = timesteps.long() - - # Add noise to the latents according to the noise magnitude at each timestep - # (this is the forward diffusion process) - noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps) - - # Get the text embedding for conditioning - encoder_hidden_states = text_encoder(batch["input_ids"])[0].to(dtype=weight_dtype) - - # Predict the noise residual - model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample - - # Get the target for loss depending on the prediction type - if noise_scheduler.config.prediction_type == 
"epsilon": - target = noise - elif noise_scheduler.config.prediction_type == "v_prediction": - target = noise_scheduler.get_velocity(latents, noise, timesteps) - else: - raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}") - - loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") - - accelerator.backward(loss) - - optimizer.step() - lr_scheduler.step() - optimizer.zero_grad() - - # Let's make sure we don't update any embedding weights besides the newly added token - index_no_updates = get_mask(tokenizer, accelerator) - with torch.no_grad(): - accelerator.unwrap_model(text_encoder).get_input_embeddings().weight[ - index_no_updates - ] = orig_embeds_params[index_no_updates] - - # Checks if the accelerator has performed an optimization step behind the scenes - if accelerator.sync_gradients: - progress_bar.update(1) - global_step += 1 - if global_step % args.save_steps == 0: - save_path = os.path.join(args.output_dir, f"learned_embeds-steps-{global_step}.bin") - save_progress(tokenizer, text_encoder, accelerator, save_path) - - if global_step % args.checkpointing_steps == 0: - if accelerator.is_main_process: - save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}") - accelerator.save_state(save_path) - logger.info(f"Saved state to {save_path}") - - logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]} - progress_bar.set_postfix(**logs) - accelerator.log(logs, step=global_step) - - if global_step >= args.max_train_steps: - break - - if accelerator.is_main_process and args.validation_prompt is not None and epoch % args.validation_epochs == 0: - logger.info( - f"Running validation... \n Generating {args.num_validation_images} images with prompt:" - f" {args.validation_prompt}." - ) - # create pipeline (note: unet and vae are loaded again in float32) - pipeline = DiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - text_encoder=accelerator.unwrap_model(text_encoder), - tokenizer=tokenizer, - unet=unet, - vae=vae, - revision=args.revision, - torch_dtype=weight_dtype, - ) - pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config) - pipeline = pipeline.to(accelerator.device) - pipeline.set_progress_bar_config(disable=True) - - # run inference - generator = ( - None if args.seed is None else torch.Generator(device=accelerator.device).manual_seed(args.seed) - ) - images = [] - for _ in range(args.num_validation_images): - with torch.autocast("cuda"): - image = pipeline(args.validation_prompt, num_inference_steps=25, generator=generator).images[0] - images.append(image) - - for tracker in accelerator.trackers: - if tracker.name == "tensorboard": - np_images = np.stack([np.asarray(img) for img in images]) - tracker.writer.add_images("validation", np_images, epoch, dataformats="NHWC") - if tracker.name == "wandb": - tracker.log( - { - "validation": [ - wandb.Image(image, caption=f"{i}: {args.validation_prompt}") - for i, image in enumerate(images) - ] - } - ) - - del pipeline - torch.cuda.empty_cache() - - # Create the pipeline using using the trained modules and save it. 
- accelerator.wait_for_everyone() - if accelerator.is_main_process: - if args.push_to_hub and args.only_save_embeds: - logger.warn("Enabling full model saving because --push_to_hub=True was specified.") - save_full_model = True - else: - save_full_model = not args.only_save_embeds - if save_full_model: - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - text_encoder=accelerator.unwrap_model(text_encoder), - vae=vae, - unet=unet, - tokenizer=tokenizer, - ) - pipeline.save_pretrained(args.output_dir) - # Save the newly trained embeddings - save_path = os.path.join(args.output_dir, "learned_embeds.bin") - save_progress(tokenizer, text_encoder, accelerator, save_path) - - if args.push_to_hub: - upload_folder( - repo_id=repo_id, - folder_path=args.output_dir, - commit_message="End of training", - ignore_patterns=["step_*", "epoch_*"], - ) - - accelerator.end_training() - - -if __name__ == "__main__": - main() diff --git a/spaces/deep-learning-analytics/Title_Generation/app.py b/spaces/deep-learning-analytics/Title_Generation/app.py deleted file mode 100644 index 999738d046758d39a4d8b0796545c56cf23d2fb1..0000000000000000000000000000000000000000 --- a/spaces/deep-learning-analytics/Title_Generation/app.py +++ /dev/null @@ -1,48 +0,0 @@ -import torch - -import streamlit as st - -st.title("Title Generation with Transformers") -st.write("") -st.write("Input your text here!") - - -default_value = "Ukrainian counterattacks: Kharkiv's regional administrator said a number of villages around Malaya Rogan were retaken by Ukrainian forces. Video verified by CNN shows Ukrainian troops in control of Vilkhivka, one of the settlements roughly 20 miles from the Russian border. The success of Ukrainian forces around Kharkiv has been mirrored further north, near the city of Sumy, where Ukrainian troops have liberated a number of settlements, according to videos geolocated and verified by CNN. A separate counterattack in the south also led to the liberation of two villages from Russian forces northwest of Mariupol, according to the Zaporizhzhia regional military administration." 
- -sent = st.text_area("Text", default_value, height = 50) - -from transformers import AutoTokenizer, AutoModelForSeq2SeqLM - -tokenizer = AutoTokenizer.from_pretrained("deep-learning-analytics/automatic-title-generation") - -model = AutoModelForSeq2SeqLM.from_pretrained("deep-learning-analytics/automatic-title-generation") - - -def tokenize_data(text): - # Tokenize the review body - input_ = str(text) + ' ' - max_len = 120 - # tokenize inputs - tokenized_inputs = tokenizer(input_, padding='max_length', truncation=True, max_length=max_len, return_attention_mask=True, return_tensors='pt') - - inputs={"input_ids": tokenized_inputs['input_ids'], - "attention_mask": tokenized_inputs['attention_mask']} - return inputs - -def generate_answers(text): - inputs = tokenize_data(text) - results= model.generate(input_ids= inputs['input_ids'], attention_mask=inputs['attention_mask'], do_sample=True, - max_length=120, - top_k=120, - top_p=0.98, - early_stopping=True, - num_return_sequences=1) - answer = tokenizer.decode(results[0], skip_special_tokens=True) - return answer - -answer = generate_answers(sent) - -st.write(answer) - -#iface = gr.Interface(fn=generate_answers,inputs=[gr.inputs.Textbox(lines=20)], outputs=["text"]) -#iface.launch(inline=False, share=True) \ No newline at end of file diff --git a/spaces/deepghs/auto_image_censor/detect.py b/spaces/deepghs/auto_image_censor/detect.py deleted file mode 100644 index 35cb76cbf7ff2feacbef5102e46f60644d2942d0..0000000000000000000000000000000000000000 --- a/spaces/deepghs/auto_image_censor/detect.py +++ /dev/null @@ -1,81 +0,0 @@ -from typing import List, Union, Dict - -import numpy as np -from PIL import Image - -from nudenet import preprocess_image, open_model_session - -DEFAULT_DETECT_CLASSES = [ - 'EXPOSED_BREAST_F', - 'EXPOSED_GENITALIA_F', - # 'EXPOSED_GENITALIA_M', -] - - -def detect(image: Image.Image, threshold: float = 0.7, clss: List[str] = None, model: str = 'default'): - # if mode == "fast": - # image, scale = preprocess_image(image, min_side=480, max_side=800) - # if not min_prob: - # min_prob = 0.5 - # else: - # image, scale = preprocess_image(image) - # if not min_prob: - # min_prob = 0.6 - image, scale = preprocess_image(image) - clss = clss if clss is not None else DEFAULT_DETECT_CLASSES - - onnx_model, classes = open_model_session(model) - outputs = onnx_model.run( - [s_i.name for s_i in onnx_model.get_outputs()], - {onnx_model.get_inputs()[0].name: np.expand_dims(image, axis=0)}, - ) - - labels = [op for op in outputs if op.dtype == "int32"][0] - scores = [op for op in outputs if isinstance(op[0][0], np.float32)][0] - boxes = [op for op in outputs if isinstance(op[0][0], np.ndarray)][0] - - boxes /= scale - processed_boxes = [] - for box, score, label in zip(boxes[0], scores[0], labels[0]): - box = box.astype(int).tolist() - label = classes[label] - if score >= threshold and label in clss: - processed_boxes.append( - {"box": [int(c) for c in box], "score": float(score), "label": label} - ) - - return processed_boxes - - -_DEFAULT_ZOOMS = { - 'EXPOSED_BREAST_F': 0.7, - 'EXPOSED_GENITALIA_F': 0.75, - 'EXPOSED_GENITALIA_M': 0.85, -} - - -def detect_areas(image: Image.Image, threshold: float = 0.7, - classes: List[str] = None, model: str = 'default', - zoom: Union[Dict[str, float], float] = None): - zoom = zoom or _DEFAULT_ZOOMS - detection = detect(image, threshold, classes, model) - result = [] - for item in detection: - box = item['box'] - score = item['score'] - label = item['label'] - - if isinstance(zoom, (int, float)): - 
current_zoom = zoom - elif isinstance(zoom, dict): - current_zoom = zoom.get(label, 1.0) - else: - raise TypeError(f'Invalid zoom type - {zoom!r}.') - - positions = np.asarray(box).reshape(2, 2).astype(np.float32) - center = positions.mean(axis=0) - new_box = ((positions - center) * current_zoom + center).reshape(-1).astype(np.int32).tolist() - - result.append({'box': new_box, 'score': score, 'label': label}) - - return result diff --git a/spaces/deepghs/gchar_online/README.md b/spaces/deepghs/gchar_online/README.md deleted file mode 100644 index a55d852011767f60c7c016658a1b82c6b494fbc2..0000000000000000000000000000000000000000 --- a/spaces/deepghs/gchar_online/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Gchar Online -emoji: 💻 -colorFrom: pink -colorTo: blue -sdk: gradio -sdk_version: 3.17.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/deprem-ml/intent-leaderboard-v13/app.py b/spaces/deprem-ml/intent-leaderboard-v13/app.py deleted file mode 100644 index 4932fc2f6f1e4dde5a139bcdf3ba33a681afcbf1..0000000000000000000000000000000000000000 --- a/spaces/deprem-ml/intent-leaderboard-v13/app.py +++ /dev/null @@ -1,106 +0,0 @@ -import requests -import json -import pandas as pd -from tqdm.auto import tqdm - -import streamlit as st -from huggingface_hub import HfApi, hf_hub_download -from huggingface_hub.repocard import metadata_load -import streamlit.components.v1 as components - - -def make_clickable_model(model_name): - link = "https://huggingface.co/" + model_name - return f'{model_name}' - -# Make user clickable link -def make_clickable_user(user_id): - link = "https://huggingface.co/" + user_id - return f'{user_id}' - -def get_model_ids(): - api = HfApi() - models = api.list_models(filter="deprem-clf-v13") - model_ids = [x.modelId for x in models] - return model_ids - -def get_metadata(model_id): - try: - readme_path = hf_hub_download(model_id, filename="README.md") - return metadata_load(readme_path) - except requests.exceptions.HTTPError: - # 404 README.md not found - return None - -def parse_metrics_accuracy(meta): - if "model-index" not in meta: - return None - result = meta["model-index"][0]["results"] - metrics = result[0]["metrics"] - accuracy = metrics[2]["value"] - print("Accuracy", accuracy) - return accuracy - -def parse_metrics_recall(meta): - if "model-index" not in meta: - return None - result = meta["model-index"][0]["results"] - metrics = result[0]["metrics"] - recall = metrics[0]["value"] - print("Recall", recall) - return recall - -def parse_metrics_f1(meta): - if "model-index" not in meta: - return None - result = meta["model-index"][0]["results"] - metrics = result[0]["metrics"] - f1 = metrics[1]["value"] - print("F1-score", f1) - return f1 - -#@st.cache(ttl=600) -def get_data(): - data = [] - model_ids = get_model_ids() - for model_id in tqdm(model_ids): - meta = get_metadata(model_id) - if meta is None: - continue - user_id = model_id.split('/')[0] - row = {} - row["User"] = user_id - row["Model"] = model_id - recall = parse_metrics_recall(meta) - row["Recall"] = recall - f1 = parse_metrics_f1(meta) - row["F1-Score"] = f1 - data.append(row) - return pd.DataFrame.from_records(data) - -dataframe = get_data() -dataframe = dataframe.fillna("") - -st.markdown("# Deprem Niyet Analizi için Lider Tablosu (Dataset v13)") - -st.markdown("Bu lider tablosu modellerimizi versiyonladıktan sonra hangi modeli üretime çıkarmamız gerektiğinin 
takibini yapmak için kullanılır.") -st.markdown( - "Model card'da metadata'da tags kısmına deprem-clf-v13 yazarsanız modeliniz buraya otomatik eklenir." -) -st.markdown( - "Burada recall, f1-score ve accuracy'nin macro average'ına bakıyoruz. Model card'ın metadata kısmında bu üç veriyi log'lamanız yeterli. Burada classification report çıkarırken **probability'lerin** confidence threshold'u baz alınır." -) -st.markdown("Örnek metadata için [bu model card'ın metadata kısmını](https://huggingface.co/deprem-ml/deprem-roberta-intent/blob/main/README.md) kopyalayıp yapıştırarak kendi metriklerinize göre ayarlayabilirsiniz.") -st.markdown( - "Modelin üstüne tıklayıp model card'a gidebilirsiniz." -) - - - -# turn the model ids into clickable links -dataframe["User"] = dataframe["User"].apply(make_clickable_user) -dataframe["Model"] = dataframe["Model"].apply(make_clickable_model) -dataframe = dataframe.sort_values(by=['F1-Score'], ascending=False) -table_html = dataframe.to_html(escape=False, index=False) -table_html = table_html.replace("
", '') # left-align the headers -st.write(table_html, unsafe_allow_html=True) \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Autodata340free __EXCLUSIVE__onlinedownload.md b/spaces/diacanFperku/AutoGPT/Autodata340free __EXCLUSIVE__onlinedownload.md deleted file mode 100644 index 1a33c0597ff4f51db55e4be7d1f7093555cb7587..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Autodata340free __EXCLUSIVE__onlinedownload.md +++ /dev/null @@ -1,9 +0,0 @@ -
-

http://vjverigty.com/thread/autodata340freeonlinedownload http://vjverigty.com/thread/veratilejetsound-l1n-extended-keys-by-vj-vault-mp3-download-files-10007. https://coub.com/stories/4480017-autodata340freeonlinedownload-keygen-program-4. Credited for their assistance with this amazing tool was Greywulfd in the Autodata340freeonlinedownload 4107622e5 fxprice alicewoolf SergioNeoxup. thank you for this amazing tool as it is everything all of you said it was. I can't wait to get the keygen 64 weeks after it was released and it's only a couple of days ago. KEF. Crayfishmusic (Crayfishmusic) Tabourez 23.12.2017, 00:48. https://www.thehollowsong.com/201/autodata340freeonlinedownload.html.

-

autodata340freeonlinedownload-cershan https://marketplace.visualstudio.com/itemsitemName=SeRaFiM1.REPACK-Download-Crack-Pes-2013-Pc-Tpb Results 1 - 17 of 17. Autodata340freeonlinedownload-cershan https://marketplace.visualstudio.com/itemsitemName=SeRaFiM1.

-

Autodata340freeonlinedownload


Download Filehttps://gohhs.com/2uFT2W



-

Windsurf 7edf34c28c https://marketplace.visualstudio.com/itemsitemName=SeRaFiM1.REPACK-Download-Crack-Pes-2013-Pc-Tpb Results 1 - 17 of 17. Autodata340freeonlinedownload-cershan https://marketplace.visualstudio.com/itemsitemName=SeRaFiM1.REPACK-Download-Crack-Pes-2013-Pc-Tpb

-

Autodata340freeonlinedownload-cershan. fatmyll 3 Apr 22 at 10:44 PM. gianschm 63b95dad73 https://marketplace.visualstudio.com/items palivor 7b17bfd26b https://coub.com/stories/2960988-autodata340freeonlinedownload-_verified_. reedamer 2022 2 16.

-

Autodata340freeonlinedownload-cershan. fatmyll. 3 Apr 22 at 10:44 PM. gianschm 63b95dad73 https://marketplace.visualstudio.com/itemsitemName=SeRaFiM1.REPACK-Download-Crack-Pes-2013-Pc-Tpb Results 1 - 17 of 17. Autodata340freeonlinedownload-cershan https://marketplace.visualstudio.com/itemsitemName=SeRaFiM1.REPACK-Download-Crack-Pes-2013-Pc-Tpb

899543212b
-
-
\ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Heroes Of Might And Magic 5 Collectors Edition TOP Crack.md b/spaces/diacanFperku/AutoGPT/Heroes Of Might And Magic 5 Collectors Edition TOP Crack.md deleted file mode 100644 index 36c27c1e885938ca365b20102ccdc4b037cb4043..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Heroes Of Might And Magic 5 Collectors Edition TOP Crack.md +++ /dev/null @@ -1,8 +0,0 @@ -
-

the basic strategy here is to collect your heroes, use them to fight your enemies and eventually assemble a deck of monsters and heroes that can be used in a single battle. the two forces of good and evil fight, and the battle resets every time. its a pretty neat idea. sadly, the right touch of heroism is in short supply. if you read the developers diaries, it appears that the team were trying to address the entire mythology from multiple stories and different eras, and were trying to convey this in the game mechanics. unfortunately the result is a series of unconscious references that dont really establish the various settings of the different worlds in the game, instead they slowly level up as the game progresses.

-

the graphics in heroes of might and magic remained hugely ahead of its time and the game has aged magnificently. although ultimately there were just 7 games in the series, the games are so complex that it feels like there were more than just 7 games.

-

Heroes Of Might And Magic 5 Collectors Edition Crack


DOWNLOAD 🆓 https://gohhs.com/2uFVh1



-

my childhood heroes are proof that that if you really put your mind to something, you could be a hero. with this game on my harddrive, i can continue to add more episodes from my hero's life story. it reminds me of when i was a kid and was all about roleplaying.

-

the heroes have a set of stats, and each hero has two stats which they can improve. they are not the broadest stats you'll ever see and in fact, they are pretty lame stats. the weapons you use include swords, magic items, battleaxes, javelins and axes. however you have to be a high level hero to use these items. once you enter a battle, you select a hero from your deck. at the top of the screen are your attacks. you can heal, bind status conditions, cast spells, and your class abilities.

899543212b
-
-
\ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Mass Downloader 3.9.854 Setup And Key.rar BETTER.md b/spaces/diacanFperku/AutoGPT/Mass Downloader 3.9.854 Setup And Key.rar BETTER.md deleted file mode 100644 index 20a5006cda44a288426bee680adaa5edae8f8c05..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Mass Downloader 3.9.854 Setup And Key.rar BETTER.md +++ /dev/null @@ -1,50 +0,0 @@ -

Mass Downloader 3.9.854 Setup And Key.rar


Download File ---> https://gohhs.com/2uFTjq



- -Full Crack Multimanual 3.5.1 keygen, i have the key to. Full Crack .. - -RAR Password Recovery 1.1 RC14 crack - -Are you looking for a reliable solution for recovering and verifying the password of rar file archives? If so, you are in the right place! RAR Password Recovery will help you to solve this problem in no time! - -RAR Password Recovery is a powerful tool designed to recover and verify passwords of rar files. This RAR Password Recovery is reliable, easy to use, and free to download and use. The application is compatible with Windows XP, Vista, 7, 8, and 10. - -This software recovers and verifies passwords for RAR files, ZIP archives, 7-Zip archives, and other formats of data. - -The program has a very clean interface and requires no installations or additional downloads. - -Key Features - -The main advantage of this application is the ability to recover passwords for all supported formats. - -As a result, you can open archives with passwords protected by RAR, ZIP, 7-Zip, and other formats. - -RAR Password Recovery allows you to open archives with weak passwords that are often used by various malicious programs. It is compatible with Windows XP, Vista, 7, 8, and 10. - -With the help of this program you can open archives protected by RAR, ZIP, and other formats. - -RAR Password Recovery has a very simple interface. - -This application is free to download and use. - -The application is compatible with all the main languages of Windows. - -The program is compatible with 64-bit versions of Windows. - -This program has a minimal installation time and requires no additional downloads. - -This program allows you to recover passwords for all supported formats. - -This program allows you to open archives with weak passwords that are often used by various malicious programs. - -How to Crack? - -Open RAR Password Recovery directory. Double-click on the RAR Password Recovery executable file. Wait until the license agreement window is opened. Click on I Agree. Wait for a moment. Then click on Start. Wait until the process is completed. You can now close the program. - -FAQ - -Is RAR Password Recovery safe? - -RAR Password Recovery is a 100% safe and reliable solution that helps to open rar archives with weak passwords. You don’t have to worry about your data because 4fefd39f24
-
-
-

diff --git a/spaces/diacanFperku/AutoGPT/Prtg Network Monitor Crack Serial Sites TOP.md b/spaces/diacanFperku/AutoGPT/Prtg Network Monitor Crack Serial Sites TOP.md deleted file mode 100644 index c52dd6e5c4f71502c69019887f4a39ecf060c389..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Prtg Network Monitor Crack Serial Sites TOP.md +++ /dev/null @@ -1,99 +0,0 @@ -
-

PRTG Network Monitor Crack Serial Sites: What You Need to Know

- -

If you are looking for a powerful and reliable tool to monitor your network and its activities, you might have come across prtg network monitor crack serial sites. These are websites that offer you a cracked version of PRTG Network Monitor, a popular network monitoring software developed by Paessler AG. But before you download and install any of these cracks, you should be aware of the risks and consequences involved.

- -

What is PRTG Network Monitor?

- -

PRTG Network Monitor is a comprehensive network monitoring solution that allows you to keep track of various aspects of your network, such as bandwidth usage, availability, performance, traffic, devices, applications, servers, and more. It supports multiple protocols and technologies, such as SNMP, WMI, Ping, NetFlow, sFlow, jFlow, Packet Sniffing, HTTP, SSH, SOAP, REST, SQL, and more. It also provides you with flexible alerting options, customizable dashboards and reports, and remote access via web browser or mobile app.

-

prtg network monitor crack serial sites


DOWNLOAD ✶✶✶ https://gohhs.com/2uFUpH



- -

Why do people use prtg network monitor crack serial sites?

- -

One of the reasons why some people use prtg network monitor crack serial sites is because they want to save money. PRTG Network Monitor is not a free software. It offers a 30-day trial version that allows you to monitor up to 100 sensors for free. After that, you need to purchase a license that suits your needs. The price depends on the number of sensors you want to monitor and the features you want to use. For example, a license for 500 sensors costs $1,600, while a license for unlimited sensors costs $14,500.

- -

Another reason why some people use prtg network monitor crack serial sites is because they want to bypass the limitations of the trial version or the license they have. For instance, they might want to monitor more sensors than their license allows or use features that are not included in their license.

- -

What are the risks and consequences of using prtg network monitor crack serial sites?

- -

Using prtg network monitor crack serial sites is not only illegal but also risky and harmful. Here are some of the possible risks and consequences of using these cracks:

- -
    -
  • You might download malware or viruses that can infect your computer and compromise your network security. These malware or viruses can steal your data, damage your files, slow down your system, or even take control of your network.
  • -
  • You might expose your network to hackers or cybercriminals who can exploit the vulnerabilities of the cracked software. These hackers or cybercriminals can access your network devices, intercept your network traffic, modify your network settings, or launch attacks on your network.
  • -
  • You might violate the terms and conditions of PRTG Network Monitor and face legal actions from Paessler AG. These legal actions can include fines, lawsuits, or criminal charges.
  • -
  • You might lose the support and updates from Paessler AG that are essential for keeping your network monitoring software up to date and functional. These support and updates can include bug fixes, security patches, feature enhancements, or compatibility improvements.
  • -
  • You might miss out on the benefits and advantages of using a legitimate version of PRTG Network Monitor. These benefits and advantages can include high-quality performance, reliability, stability, scalability, usability, customization, integration, documentation, training, or customer service.
  • -
- -

What are the alternatives to using prtg network monitor crack serial sites?

- -

If you want to use PRTG Network Monitor without using prtg network monitor crack serial sites, you have two alternatives:

- -
    -
  • You can purchase a license that suits your needs from the official website of Paessler AG. This way, you can enjoy all the features and benefits of PRTG Network Monitor without any risks or consequences.
  • -
  • You can look for other free or open-source network monitoring tools that can meet your requirements. There are many options available online that you can compare and choose from.
  • -
- -

Conclusion

- -

PRTG Network Monitor is a powerful and reliable tool to monitor your network and its activities. However, using prtg network monitor crack serial sites to get a cracked version of this software is not a wise decision. It can expose you to various risks and consequences that can harm your computer and compromise your network security. It can also violate the terms and conditions of PRTG Network Monitor and face legal actions from Paessler AG. Therefore, it is better to purchase a license that suits your needs from the official website of Paessler AG or look for other free or open-source network monitoring tools that can meet your requirements.

-

How to download and install PRTG Network Monitor?

- -

If you want to download and install PRTG Network Monitor, you should follow these steps:

-

- -
    -
  1. Go to the official website of Paessler AG and click on the "Free Trial" button.
  2. -
  3. Fill out the form with your name, email address, and company name.
  4. -
  5. Choose the edition of PRTG Network Monitor that suits your needs. You can choose between Freeware Edition (up to 100 sensors for free), Trial Edition (unlimited sensors for 30 days), or Commercial Edition (paid license).
  6. -
  7. Download the setup file and run it on your computer.
  8. -
  9. Follow the instructions on the screen to complete the installation.
  10. -
  11. Launch PRTG Network Monitor and start monitoring your network.
  12. -
- -

What are the benefits of using PRTG Network Monitor?

- -

Using PRTG Network Monitor has many benefits for your network and your business. Here are some of them:

- -
    -
  • You can monitor your network performance and availability 24/7 from anywhere.
  • -
  • You can detect and resolve network issues before they affect your users or customers.
  • -
  • You can optimize your network resources and reduce costs.
  • -
  • You can generate detailed reports and graphs to analyze your network data.
  • -
  • You can customize your network monitoring according to your preferences and needs.
  • -
  • You can integrate PRTG Network Monitor with other tools and services.
  • -
- -

Why should you avoid prtg network monitor crack serial sites?

- -

As you can see, PRTG Network Monitor is a valuable tool for your network and your business. However, you should avoid using prtg network monitor crack serial sites to get a cracked version of this software. These sites are not only illegal but also risky and harmful. They can expose you to various threats and consequences that can damage your computer and compromise your network security. They can also violate the terms and conditions of PRTG Network Monitor and face legal actions from Paessler AG. Therefore, you should avoid using prtg network monitor crack serial sites and use a legitimate version of PRTG Network Monitor instead.

-

How to use PRTG Network Monitor?

- -

Using PRTG Network Monitor is easy and intuitive. You can use the web-based interface or the mobile app to access your network data from anywhere. You can also use the desktop client or the enterprise console to manage multiple PRTG servers. Here are some of the basic steps to use PRTG Network Monitor:

- -
    -
  1. Add devices to your network. You can use the auto-discovery feature or manually add devices by IP address or hostname.
  2. -
  3. Add sensors to your devices. Sensors are the basic monitoring elements that collect data from your devices. You can choose from over 250 sensor types that cover various aspects of your network.
  4. -
  5. Configure your sensors. You can adjust the scanning intervals, thresholds, channels, dependencies, notifications, and more for each sensor.
  6. -
  7. View your network data. You can use the dashboard, maps, graphs, tables, reports, and more to visualize your network data.
  8. -
  9. Analyze and optimize your network. You can use the alerts, logs, tickets, and more to identify and resolve network issues. You can also use the recommendations, trends, and forecasts to optimize your network resources and performance.
  10. -
- -

What are the features of PRTG Network Monitor?

- -

PRTG Network Monitor has many features that make it a powerful and reliable network monitoring tool. Here are some of them:

- -
    -
  • It supports multiple protocols and technologies, such as SNMP, WMI, Ping, NetFlow, sFlow, jFlow, Packet Sniffing, HTTP, SSH, SOAP, REST, SQL, and more.
  • -
  • It offers over 250 sensor types that cover various aspects of your network, such as bandwidth usage, availability, performance, traffic, devices, applications, servers, and more.
  • -
  • It provides flexible alerting options that notify you via email, SMS, push notification, sound, or execute a program when a sensor reaches a defined status.
  • -
  • It allows you to customize your network monitoring according to your preferences and needs. You can create your own sensors, dashboards, maps, reports, and more.
  • -
  • It integrates with other tools and services, such as Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), Slack, PagerDuty, ServiceNow, and more.
  • -
- -

Conclusion

- -

PRTG Network Monitor is a comprehensive network monitoring solution that allows you to keep track of various aspects of your network. However, you should avoid using prtg network monitor crack serial sites to get a cracked version of this software. These sites are not only illegal but also risky and harmful. They can expose you to various threats and consequences that can damage your computer and compromise your network security. They can also violate the terms and conditions of PRTG Network Monitor and face legal actions from Paessler AG. Therefore, you should avoid using prtg network monitor crack serial sites and use a legitimate version of PRTG Network Monitor instead.

3cee63e6c2
-
-
\ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/SIR Audio Tools Plugin Bundle Win [Latest] _VERIFIED_.md b/spaces/diacanFperku/AutoGPT/SIR Audio Tools Plugin Bundle Win [Latest] _VERIFIED_.md deleted file mode 100644 index 00efbe4caa6a0376ad2dc1344618c5aee1ccdec2..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/SIR Audio Tools Plugin Bundle Win [Latest] _VERIFIED_.md +++ /dev/null @@ -1,6 +0,0 @@ -

SIR Audio Tools Plugin Bundle Win [Latest]


Download Filehttps://gohhs.com/2uFTKU



- - 3cee63e6c2
-
-
-

diff --git a/spaces/dineshreddy/WALT/mmdet/datasets/xml_style.py b/spaces/dineshreddy/WALT/mmdet/datasets/xml_style.py deleted file mode 100644 index 71069488b0f6da3b37e588228f44460ce5f00679..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/mmdet/datasets/xml_style.py +++ /dev/null @@ -1,170 +0,0 @@ -import os.path as osp -import xml.etree.ElementTree as ET - -import mmcv -import numpy as np -from PIL import Image - -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class XMLDataset(CustomDataset): - """XML dataset for detection. - - Args: - min_size (int | float, optional): The minimum size of bounding - boxes in the images. If the size of a bounding box is less than - ``min_size``, it would be add to ignored field. - """ - - def __init__(self, min_size=None, **kwargs): - assert self.CLASSES or kwargs.get( - 'classes', None), 'CLASSES in `XMLDataset` can not be None.' - super(XMLDataset, self).__init__(**kwargs) - self.cat2label = {cat: i for i, cat in enumerate(self.CLASSES)} - self.min_size = min_size - - def load_annotations(self, ann_file): - """Load annotation from XML style ann_file. - - Args: - ann_file (str): Path of XML file. - - Returns: - list[dict]: Annotation info from XML file. - """ - - data_infos = [] - img_ids = mmcv.list_from_file(ann_file) - for img_id in img_ids: - filename = f'JPEGImages/{img_id}.jpg' - xml_path = osp.join(self.img_prefix, 'Annotations', - f'{img_id}.xml') - tree = ET.parse(xml_path) - root = tree.getroot() - size = root.find('size') - if size is not None: - width = int(size.find('width').text) - height = int(size.find('height').text) - else: - img_path = osp.join(self.img_prefix, 'JPEGImages', - '{}.jpg'.format(img_id)) - img = Image.open(img_path) - width, height = img.size - data_infos.append( - dict(id=img_id, filename=filename, width=width, height=height)) - - return data_infos - - def _filter_imgs(self, min_size=32): - """Filter images too small or without annotation.""" - valid_inds = [] - for i, img_info in enumerate(self.data_infos): - if min(img_info['width'], img_info['height']) < min_size: - continue - if self.filter_empty_gt: - img_id = img_info['id'] - xml_path = osp.join(self.img_prefix, 'Annotations', - f'{img_id}.xml') - tree = ET.parse(xml_path) - root = tree.getroot() - for obj in root.findall('object'): - name = obj.find('name').text - if name in self.CLASSES: - valid_inds.append(i) - break - else: - valid_inds.append(i) - return valid_inds - - def get_ann_info(self, idx): - """Get annotation from XML file by index. - - Args: - idx (int): Index of data. - - Returns: - dict: Annotation info of specified index. 
- """ - - img_id = self.data_infos[idx]['id'] - xml_path = osp.join(self.img_prefix, 'Annotations', f'{img_id}.xml') - tree = ET.parse(xml_path) - root = tree.getroot() - bboxes = [] - labels = [] - bboxes_ignore = [] - labels_ignore = [] - for obj in root.findall('object'): - name = obj.find('name').text - if name not in self.CLASSES: - continue - label = self.cat2label[name] - difficult = obj.find('difficult') - difficult = 0 if difficult is None else int(difficult.text) - bnd_box = obj.find('bndbox') - # TODO: check whether it is necessary to use int - # Coordinates may be float type - bbox = [ - int(float(bnd_box.find('xmin').text)), - int(float(bnd_box.find('ymin').text)), - int(float(bnd_box.find('xmax').text)), - int(float(bnd_box.find('ymax').text)) - ] - ignore = False - if self.min_size: - assert not self.test_mode - w = bbox[2] - bbox[0] - h = bbox[3] - bbox[1] - if w < self.min_size or h < self.min_size: - ignore = True - if difficult or ignore: - bboxes_ignore.append(bbox) - labels_ignore.append(label) - else: - bboxes.append(bbox) - labels.append(label) - if not bboxes: - bboxes = np.zeros((0, 4)) - labels = np.zeros((0, )) - else: - bboxes = np.array(bboxes, ndmin=2) - 1 - labels = np.array(labels) - if not bboxes_ignore: - bboxes_ignore = np.zeros((0, 4)) - labels_ignore = np.zeros((0, )) - else: - bboxes_ignore = np.array(bboxes_ignore, ndmin=2) - 1 - labels_ignore = np.array(labels_ignore) - ann = dict( - bboxes=bboxes.astype(np.float32), - labels=labels.astype(np.int64), - bboxes_ignore=bboxes_ignore.astype(np.float32), - labels_ignore=labels_ignore.astype(np.int64)) - return ann - - def get_cat_ids(self, idx): - """Get category ids in XML file by index. - - Args: - idx (int): Index of data. - - Returns: - list[int]: All categories in the image of specified index. 
- """ - - cat_ids = [] - img_id = self.data_infos[idx]['id'] - xml_path = osp.join(self.img_prefix, 'Annotations', f'{img_id}.xml') - tree = ET.parse(xml_path) - root = tree.getroot() - for obj in root.findall('object'): - name = obj.find('name').text - if name not in self.CLASSES: - continue - label = self.cat2label[name] - cat_ids.append(label) - - return cat_ids diff --git a/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/recog_pipelines/crnn_tps_pipeline.py b/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/recog_pipelines/crnn_tps_pipeline.py deleted file mode 100644 index 3a2eea55a739206c11ae876ba82e9c2f6ea1ff6d..0000000000000000000000000000000000000000 --- a/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/recog_pipelines/crnn_tps_pipeline.py +++ /dev/null @@ -1,37 +0,0 @@ -img_norm_cfg = dict(mean=[0.5], std=[0.5]) - -train_pipeline = [ - dict(type='LoadImageFromFile', color_type='grayscale'), - dict( - type='ResizeOCR', - height=32, - min_width=100, - max_width=100, - keep_aspect_ratio=False), - dict(type='ToTensorOCR'), - dict(type='NormalizeOCR', **img_norm_cfg), - dict( - type='Collect', - keys=['img'], - meta_keys=[ - 'filename', 'ori_shape', 'resize_shape', 'text', 'valid_ratio' - ]), -] -test_pipeline = [ - dict(type='LoadImageFromFile', color_type='grayscale'), - dict( - type='ResizeOCR', - height=32, - min_width=32, - max_width=100, - keep_aspect_ratio=False), - dict(type='ToTensorOCR'), - dict(type='NormalizeOCR', **img_norm_cfg), - dict( - type='Collect', - keys=['img'], - meta_keys=[ - 'filename', 'ori_shape', 'resize_shape', 'valid_ratio', - 'img_norm_cfg', 'ori_filename', 'img_shape' - ]), -] diff --git a/spaces/dmeck/RVC-Speakers/speakers/server/static/index.html b/spaces/dmeck/RVC-Speakers/speakers/server/static/index.html deleted file mode 100644 index 8d3128932bd5879ce06c5e982b428fbf73443fd0..0000000000000000000000000000000000000000 --- a/spaces/dmeck/RVC-Speakers/speakers/server/static/index.html +++ /dev/null @@ -1 +0,0 @@ -RVC-Speakers
\ No newline at end of file diff --git a/spaces/dongyi/MMFS/tools/create_training_timeline_video.py b/spaces/dongyi/MMFS/tools/create_training_timeline_video.py deleted file mode 100644 index eecf5b2ce8234408715dc4814e74eef9b15b959c..0000000000000000000000000000000000000000 --- a/spaces/dongyi/MMFS/tools/create_training_timeline_video.py +++ /dev/null @@ -1,100 +0,0 @@ -import argparse -import os, sys -import cv2 -import numpy as np -from natsort import natsorted -import subprocess -from tensorboard.backend.event_processing import event_accumulator - - -def decode_from_buffer(encoded_image_string): - s = np.frombuffer(encoded_image_string, dtype=np.uint8) - image = cv2.imdecode(s, cv2.IMREAD_COLOR) - return image - - -def main(args): - - if args.log_dir == '': - print("Did not specify the log directory to compile video from. Please check again.") - sys.exit() - - if not os.path.exists(args.output_dir): - os.mkdir(args.output_dir) - - if not os.path.exists(os.path.join(args.output_dir, 'frames')): - os.mkdir(os.path.join(args.output_dir, 'frames')) - - for filename in os.listdir(args.log_dir): - if 'events.out.tfevents' not in filename: # There should just be one file in the entire folder anyway, but just in case. - continue - event_file = os.path.join(args.log_dir, filename) - - ea = event_accumulator.EventAccumulator(event_file) - ea.Reload() - keys = ea.images._buckets.keys() - - entry_to_image_dict = {} # matches each entry of the form "epoch X iteration Y training_video img_category_name" to its associated images - epoch_dict = {} # each key contains all entries for that epoch. Keys are integers. - - for entry in keys: - if 'epoch' not in entry or 'iteration' not in entry or 'training_video' not in entry: - continue - entry_to_image_dict[entry] = ea.images._buckets[entry].items[0].encoded_image_string - - epoch = int(entry.split(" ")[1]) - if epoch not in epoch_dict: - epoch_dict[epoch] = [] - epoch_dict[epoch].append(entry) - - epoch_list = epoch_dict.keys() - epoch_list_sorted = natsorted(epoch_list) # e.g. [1,2,3] - - for epoch in epoch_list_sorted: - entries_in_this_epoch = epoch_dict[epoch] - entries_in_this_epoch = natsorted(entries_in_this_epoch) - - # key are all the image category names. e.g. real_A, fake_B. - # values are the the entries with those names, in this epoch. 
- img_category_names = {} - - for entry in entries_in_this_epoch: - img_category_name = entry.split(' ')[5] - if img_category_name not in img_category_names: - img_category_names[img_category_name] = [] - img_category_names[img_category_name].append(entry) - - epoch_img = [] # the final output image of this epoch - for img_category_name, entries_having_that_name in img_category_names.items(): - imgs_for_one_category = [] - for entry in entries_having_that_name: - img = decode_from_buffer(entry_to_image_dict[entry]) - imgs_for_one_category.append(img) - - imgs_for_one_category = np.concatenate(imgs_for_one_category, axis=1) # concatenate images along width - imgs_for_one_category = cv2.putText(imgs_for_one_category, text='epoch ' + str(epoch) + ' ' + img_category_name, - org=(0, imgs_for_one_category.shape[0] - 20), color=(0, 255, 0), thickness=2, - fontFace=cv2.FONT_HERSHEY_SIMPLEX, fontScale=2) - - epoch_img.append(imgs_for_one_category) - epoch_img = np.concatenate(epoch_img, axis=0) # concatenate images along height - - outpath = os.path.join(os.path.join(args.output_dir, 'frames'), '{:06}.jpg'.format(epoch)) - cv2.imwrite(outpath, epoch_img) - - - command = 'ffmpeg -y -framerate ' + str(args.fps) + ' -i ' + \ - os.path.join(os.path.join(args.output_dir, 'frames'), '%06d.jpg') + ' ' + os.path.join(args.output_dir, 'training_timeline.mp4') - sp = subprocess.Popen(command, shell=True) - while sp.poll() is None: - continue - - - -if __name__ == '__main__': - parser = argparse.ArgumentParser(description='Style Master') - parser.add_argument('--log_dir', type=str, default='') - parser.add_argument('--output_dir', type=str, default='') - parser.add_argument('--fps', type=int, default=2) - args = parser.parse_args() - main(args) diff --git a/spaces/ds520/bingo/src/lib/isomorphic/node.ts b/spaces/ds520/bingo/src/lib/isomorphic/node.ts deleted file mode 100644 index da213ad6a86181979f098309c374da02835db5a0..0000000000000000000000000000000000000000 --- a/spaces/ds520/bingo/src/lib/isomorphic/node.ts +++ /dev/null @@ -1,26 +0,0 @@ -import Debug from 'debug' - -const { fetch, setGlobalDispatcher, ProxyAgent } = require('undici') -const { HttpsProxyAgent } = require('https-proxy-agent') -const ws = require('ws') - -const debug = Debug('bingo') - -const httpProxy = process.env.http_proxy || process.env.HTTP_PROXY || process.env.https_proxy || process.env.HTTPS_PROXY; -let WebSocket = ws.WebSocket - -if (httpProxy) { - setGlobalDispatcher(new ProxyAgent(httpProxy)) - const agent = new HttpsProxyAgent(httpProxy) - // @ts-ignore - WebSocket = class extends ws.WebSocket { - constructor(address: string | URL, options: typeof ws.WebSocket) { - super(address, { - ...options, - agent, - }) - } - } -} - -export default { fetch, WebSocket, debug } diff --git a/spaces/dyhzq/vits-uma-genshin-honkai/Docker/vits.sh b/spaces/dyhzq/vits-uma-genshin-honkai/Docker/vits.sh deleted file mode 100644 index 2b87f26eda96d3800b73b4a21b210c78888a2299..0000000000000000000000000000000000000000 --- a/spaces/dyhzq/vits-uma-genshin-honkai/Docker/vits.sh +++ /dev/null @@ -1,20 +0,0 @@ -#!/bin/bash -run() { - echo -e "\033[32m已完成初始化,启动服务...\033[0m" - python3 /app/vits-uma-genshin-honkai/app.py -} -install() { - echo -e "\033[33m正在初始化:安装依赖....\033[0m" - pip install -r /app/vits-uma-genshin-honkai/requirements.txt -i https://mirrors.ustc.edu.cn/pypi/web/simple - echo -e "\033[33m正在下载模型....\033[0m" - rm -f /app/vits-uma-genshin-honkai/model/G_953000.pth - wget -O /app/vits-uma-genshin-honkai/model/G_953000.pth 
https://huggingface.co/spaces/ikechan8370/vits-uma-genshin-honkai/resolve/main/model/G_953000.pth - echo -e "\033[32m初始化完成!\033[0m" - run -} - -if [ ! -f "/app/vits-uma-genshin-honkai/model/G_953000.pth" ] || [ "$(stat -c%s "/app/vits-uma-genshin-honkai/model/G_953000.pth")" -lt 10000 ]; then - install -else - run -fi diff --git a/spaces/edoz1986/johnslegers-epic-diffusion/app.py b/spaces/edoz1986/johnslegers-epic-diffusion/app.py deleted file mode 100644 index 12d56ba46c931d370b152fa49983e79c77381a64..0000000000000000000000000000000000000000 --- a/spaces/edoz1986/johnslegers-epic-diffusion/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/johnslegers/epic-diffusion").launch() \ No newline at end of file diff --git a/spaces/elkraken/Video-Object-Detection/deploy/triton-inference-server/README.md b/spaces/elkraken/Video-Object-Detection/deploy/triton-inference-server/README.md deleted file mode 100644 index 13af4daa91d5f2b9a6752840e9469743943f650e..0000000000000000000000000000000000000000 --- a/spaces/elkraken/Video-Object-Detection/deploy/triton-inference-server/README.md +++ /dev/null @@ -1,164 +0,0 @@ -# YOLOv7 on Triton Inference Server - -Instructions to deploy YOLOv7 as TensorRT engine to [Triton Inference Server](https://github.com/NVIDIA/triton-inference-server). - -Triton Inference Server takes care of model deployment with many out-of-the-box benefits, like a GRPC and HTTP interface, automatic scheduling on multiple GPUs, shared memory (even on GPU), dynamic server-side batching, health metrics and memory resource management. - -There are no additional dependencies needed to run this deployment, except a working docker daemon with GPU support. - -## Export TensorRT - -See https://github.com/WongKinYiu/yolov7#export for more info. - -```bash -#install onnx-simplifier not listed in general yolov7 requirements.txt -pip3 install onnx-simplifier - -# Pytorch Yolov7 -> ONNX with grid, EfficientNMS plugin and dynamic batch size -python export.py --weights ./yolov7.pt --grid --end2end --dynamic-batch --simplify --topk-all 100 --iou-thres 0.65 --conf-thres 0.35 --img-size 640 640 -# ONNX -> TensorRT with trtexec and docker -docker run -it --rm --gpus=all nvcr.io/nvidia/tensorrt:22.06-py3 -# Copy onnx -> container: docker cp yolov7.onnx :/workspace/ -# Export with FP16 precision, min batch 1, opt batch 8 and max batch 8 -./tensorrt/bin/trtexec --onnx=yolov7.onnx --minShapes=images:1x3x640x640 --optShapes=images:8x3x640x640 --maxShapes=images:8x3x640x640 --fp16 --workspace=4096 --saveEngine=yolov7-fp16-1x8x8.engine --timingCacheFile=timing.cache -# Test engine -./tensorrt/bin/trtexec --loadEngine=yolov7-fp16-1x8x8.engine -# Copy engine -> host: docker cp :/workspace/yolov7-fp16-1x8x8.engine . -``` - -Example output of test with RTX 3090. 
- -``` -[I] === Performance summary === -[I] Throughput: 73.4985 qps -[I] Latency: min = 14.8578 ms, max = 15.8344 ms, mean = 15.07 ms, median = 15.0422 ms, percentile(99%) = 15.7443 ms -[I] End-to-End Host Latency: min = 25.8715 ms, max = 28.4102 ms, mean = 26.672 ms, median = 26.6082 ms, percentile(99%) = 27.8314 ms -[I] Enqueue Time: min = 0.793701 ms, max = 1.47144 ms, mean = 1.2008 ms, median = 1.28644 ms, percentile(99%) = 1.38965 ms -[I] H2D Latency: min = 1.50073 ms, max = 1.52454 ms, mean = 1.51225 ms, median = 1.51404 ms, percentile(99%) = 1.51941 ms -[I] GPU Compute Time: min = 13.3386 ms, max = 14.3186 ms, mean = 13.5448 ms, median = 13.5178 ms, percentile(99%) = 14.2151 ms -[I] D2H Latency: min = 0.00878906 ms, max = 0.0172729 ms, mean = 0.0128844 ms, median = 0.0125732 ms, percentile(99%) = 0.0166016 ms -[I] Total Host Walltime: 3.04768 s -[I] Total GPU Compute Time: 3.03404 s -[I] Explanations of the performance metrics are printed in the verbose logs. -``` -Note: 73.5 qps x batch 8 = 588 fps @ ~15ms latency. - -## Model Repository - -See [Triton Model Repository Documentation](https://github.com/triton-inference-server/server/blob/main/docs/model_repository.md#model-repository) for more info. - -```bash -# Create folder structure -mkdir -p triton-deploy/models/yolov7/1/ -touch triton-deploy/models/yolov7/config.pbtxt -# Place model -mv yolov7-fp16-1x8x8.engine triton-deploy/models/yolov7/1/model.plan -``` - -## Model Configuration - -See [Triton Model Configuration Documentation](https://github.com/triton-inference-server/server/blob/main/docs/model_configuration.md#model-configuration) for more info. - -Minimal configuration for `triton-deploy/models/yolov7/config.pbtxt`: - -``` -name: "yolov7" -platform: "tensorrt_plan" -max_batch_size: 8 -dynamic_batching { } -``` - -Example repository: - -```bash -$ tree triton-deploy/ -triton-deploy/ -└── models - └── yolov7 - ├── 1 - │   └── model.plan - └── config.pbtxt - -3 directories, 2 files -``` - -## Start Triton Inference Server - -``` -docker run --gpus all --rm --ipc=host --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -p8000:8000 -p8001:8001 -p8002:8002 -v$(pwd)/triton-deploy/models:/models nvcr.io/nvidia/tritonserver:22.06-py3 tritonserver --model-repository=/models --strict-model-config=false --log-verbose 1 -``` - -In the log you should see: - -``` -+--------+---------+--------+ -| Model | Version | Status | -+--------+---------+--------+ -| yolov7 | 1 | READY | -+--------+---------+--------+ -``` - -## Performance with Model Analyzer - -See [Triton Model Analyzer Documentation](https://github.com/triton-inference-server/server/blob/main/docs/model_analyzer.md#model-analyzer) for more info. - -Performance numbers @ RTX 3090 + AMD Ryzen 9 5950X - -Example test for 16 concurrent clients using shared memory, each with batch size 1 requests: - -```bash -docker run -it --ipc=host --net=host nvcr.io/nvidia/tritonserver:22.06-py3-sdk /bin/bash - -./install/bin/perf_analyzer -m yolov7 -u 127.0.0.1:8001 -i grpc --shared-memory system --concurrency-range 16 - -# Result (truncated) -Concurrency: 16, throughput: 590.119 infer/sec, latency 27080 usec -``` - -Throughput for 16 clients with batch size 1 is the same as for a single thread running the engine at 16 batch size locally thanks to Triton [Dynamic Batching Strategy](https://github.com/triton-inference-server/server/blob/main/docs/model_configuration.md#dynamic-batcher). 
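The dynamic batcher is what lets many concurrent single-request clients reach roughly the same throughput as one local client running at batch 16. If you want to tune how aggressively requests are grouped, the `dynamic_batching` block in `config.pbtxt` accepts additional fields; the values below are purely illustrative, not a tuned recommendation for this model:

```
name: "yolov7"
platform: "tensorrt_plan"
max_batch_size: 8
dynamic_batching {
  preferred_batch_size: [ 4, 8 ]
  max_queue_delay_microseconds: 500
}
```

Leaving the block empty, as in the minimal configuration shown earlier, already enables server-side batching with default settings.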
Result without dynamic batching (disable in model configuration) considerably worse: - -```bash -# Result (truncated) -Concurrency: 16, throughput: 335.587 infer/sec, latency 47616 usec -``` - -## How to run model in your code - -Example client can be found in client.py. It can run dummy input, images and videos. - -```bash -pip3 install tritonclient[all] opencv-python -python3 client.py image data/dog.jpg -``` - -![exemplary output result](data/dog_result.jpg) - -``` -$ python3 client.py --help -usage: client.py [-h] [-m MODEL] [--width WIDTH] [--height HEIGHT] [-u URL] [-o OUT] [-f FPS] [-i] [-v] [-t CLIENT_TIMEOUT] [-s] [-r ROOT_CERTIFICATES] [-p PRIVATE_KEY] [-x CERTIFICATE_CHAIN] {dummy,image,video} [input] - -positional arguments: - {dummy,image,video} Run mode. 'dummy' will send an emtpy buffer to the server to test if inference works. 'image' will process an image. 'video' will process a video. - input Input file to load from in image or video mode - -optional arguments: - -h, --help show this help message and exit - -m MODEL, --model MODEL - Inference model name, default yolov7 - --width WIDTH Inference model input width, default 640 - --height HEIGHT Inference model input height, default 640 - -u URL, --url URL Inference server URL, default localhost:8001 - -o OUT, --out OUT Write output into file instead of displaying it - -f FPS, --fps FPS Video output fps, default 24.0 FPS - -i, --model-info Print model status, configuration and statistics - -v, --verbose Enable verbose client output - -t CLIENT_TIMEOUT, --client-timeout CLIENT_TIMEOUT - Client timeout in seconds, default no timeout - -s, --ssl Enable SSL encrypted channel to the server - -r ROOT_CERTIFICATES, --root-certificates ROOT_CERTIFICATES - File holding PEM-encoded root certificates, default none - -p PRIVATE_KEY, --private-key PRIVATE_KEY - File holding PEM-encoded private key, default is none - -x CERTIFICATE_CHAIN, --certificate-chain CERTIFICATE_CHAIN - File holding PEM-encoded certicate chain default is none -``` diff --git a/spaces/falterWliame/Face_Mask_Detection/Atomix Virtual Dj 8 Pro Infinity Keygen UPD.md b/spaces/falterWliame/Face_Mask_Detection/Atomix Virtual Dj 8 Pro Infinity Keygen UPD.md deleted file mode 100644 index 7e66ae2d52af5ac478cacd9225fb911f6274b187..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Atomix Virtual Dj 8 Pro Infinity Keygen UPD.md +++ /dev/null @@ -1,52 +0,0 @@ -
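Rounding off the YOLOv7 Triton deployment notes above: besides the bundled client.py, you can call the server directly with the `tritonclient` package. The sketch below is a minimal example and makes two assumptions worth verifying against `get_model_metadata("yolov7")`: the input tensor is named `images` (as in the trtexec export shown earlier) and the outputs follow the EfficientNMS naming (`num_dets`, `det_boxes`, `det_scores`, `det_classes`).

```python
import numpy as np
import tritonclient.grpc as grpcclient

# Connect to the Triton gRPC endpoint started above.
client = grpcclient.InferenceServerClient(url="localhost:8001")

# Dummy input: the engine was exported for 640x640 RGB images, NCHW, float32.
batch = np.zeros((1, 3, 640, 640), dtype=np.float32)

inputs = [grpcclient.InferInput("images", batch.shape, "FP32")]
inputs[0].set_data_from_numpy(batch)

# Output names assume the EfficientNMS export; check the model metadata if they differ.
outputs = [
    grpcclient.InferRequestedOutput("num_dets"),
    grpcclient.InferRequestedOutput("det_boxes"),
    grpcclient.InferRequestedOutput("det_scores"),
    grpcclient.InferRequestedOutput("det_classes"),
]

result = client.infer(model_name="yolov7", inputs=inputs, outputs=outputs)
print(result.as_numpy("num_dets"), result.as_numpy("det_boxes").shape)
```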

Atomix Virtual Dj 8 Pro Infinity Keygen


Download Filehttps://urlca.com/2uDcqX



-
-It boasts a brand-new user interface, Hi-Fi audio support, better performance, improved Virtual DJ Pro Infinity Edition includes an enhanced photo browser, support for animated video and an improved video streaming engine. The most important new feature is the automation tool, which makes it possible to define beats and loop lengths for all tracks and sync them to a user-defined beat pattern. Virtual DJ Pro Infinity 8 can also synchronize with iTunes, Final Cut Pro, Nero, AppleTV and Airplay. It's available for $59.95 and represents a strong value proposition for DJs and producers alike. - -A video was posted to YouTube in May 2017 by Atomix Productions revealing how to use the new features. - -Version history - -Supported formats - -The following formats can be imported in the Music+VJ project: - -See also - -Virtual DJ - -Software synthesizers - -List of video mixing software - -References - -External links - -Atomix Productions official website - -Virtual DJ Infinity 8 homepage - -Category:Windows-only software - -Category:Windows multimedia software - -Category:DJ software - -Category:Video software - -Category:MacOS multimedia software - -Category:Audio mixing softwareAfter World War II, a hospital is established for the temporary housing of war wounded, and the community reacts with protest, a mixture of adulation and dismay. As the news of the shooting spreads, civic leaders and religious leaders begin the healing process, with clergy denouncing the action, and an investigation begins. As the scope of the tragedy becomes apparent, townspeople become infuriated, and their fury leads them to make direct contact with the shooter. - -"Paul Mazursky's films are unique in that they examine the world we live in from a human and personal point of view. This is most evident in the New Yorker, which presents us with a... contemporary black comedy..."--Film Commentf**5 + 26/3*f**4. Factor j(i). - -4*(i + 1)**2*(i + 5)**2/3 - -Let b be (-4)/((-48)/(-4)) - (-23)/6. Let t = -10/3 + b. Find p, given that -2/3*p**2 + t*p**3 + 2/3 - 2/3*p = 0. - --1, 1 - -Let h(l) = l**3 - 4*l**2 + 3*l + 2. Let d be 4fefd39f24
-
-
-

diff --git a/spaces/falterWliame/Face_Mask_Detection/Audi Navigation Bns 5.0 Torrent [Extra Quality].md b/spaces/falterWliame/Face_Mask_Detection/Audi Navigation Bns 5.0 Torrent [Extra Quality].md deleted file mode 100644 index 9e19b637e8897945791c85e313f215f067bbba7e..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Audi Navigation Bns 5.0 Torrent [Extra Quality].md +++ /dev/null @@ -1,6 +0,0 @@ -

audi navigation bns 5.0 torrent


Download File ⚹⚹⚹ https://urlca.com/2uDcOg



- -When model year 2007 is launched, the BNS 5.0 navigation system in the Audi A3, A4, TT will replace the current BNS 4.1 navigation system. [DOC] Manual Audi ... 1fdad05405
-
-
-

diff --git a/spaces/fatiXbelha/sd/Chile One Confesses His Love in I Love You a Brand New Release.md b/spaces/fatiXbelha/sd/Chile One Confesses His Love in I Love You a Brand New Release.md deleted file mode 100644 index 521636e5666deb0133b20512596e3fbaccfe74de..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Chile One Confesses His Love in I Love You a Brand New Release.md +++ /dev/null @@ -1,106 +0,0 @@ -
-

Chile One - I Love You: A Song Review

-

If you are looking for a catchy and romantic song to add to your playlist, you might want to check out Chile One's "I Love You". This is a song by a talented Zambian singer who has been making waves in the music industry with his unique style and voice. In this article, we will give you some information about the song and the artist, and why you should give it a listen.

-

Background

-

Chile One, whose real name is Chileshe Oby Wanga, is a Zambian singer and songwriter who hails from Chililabombwe, in Lubengele township. He started his musical career in 2022, after his wedding/matebeto, when he released his first hit song "Fweba Ku Chaume" featuring Jemax. The song went viral on social media and gained him a lot of fans and recognition. He is signed under a record label called 44G Music Entertainments, which has helped him to produce more quality music.

-

i love you by chile one mp3 download


DOWNLOAD ☆☆☆ https://urllie.com/2uNAFL



-

Since his debut, Chile One has released several hit songs, such as "Facebook Lover", "You & I" featuring T-Sean, "Why Me" featuring Chef 187, and "Nakalebalika". He has also collaborated with other artists, such as Wikise, Mlindo The Vocalist, Kayz Adamz, and Pompi. He has won several awards, such as five Kwacha Music Awards in 2022, for categories such as Best Artist Copperbelt, Best Newcomer Male Artist, Best Afro Fusion R&B Song, Best Mainstream/Pop Song, and Song of the Year. He is also one of the few Zambian artists who have reached over one million views on YouTube for his songs.

-

Lyrics

-

The song "I Love You" is a love song that expresses Chile One's feelings for his crush, Mwizukanji. He tells her how much he loves her, how much he needs her, and how much he appreciates her. He also asks her to give him a chance to prove his love for her. The song is sung in a mixture of English and Bemba, a Zambian language. Some of the notable lines are:

-
    -
  • Baby girl aka kalwimbo senda / Kaliko personal / Oh my God apa ndeimba ndemona kwati ulembona / Lundako panono volume listen to the words I say to the moon / Ndatinokulanda nomba lelo paka pakabe ngakumpata walampatafye / Ndaku stalker day and night kumwela ndiwe favourite ngawa poster pic njebele all my God mwelesa ninshi teti kabeko akanandi / Ngolefwaya umbwenemofye nshakubutuke nga mulife / Olo chikalipe unjasukefye eh / Nalikutemwa babe niwemfwayokusenda / Nanakokulota Mimi nakupenda / Mpelako chance niwemfwayokusenda / Nanakokulota Mimi nakupenda /
  • -
  • I love you / I love you / I love you / I love you / I love you / I love you / I love you / I love you / I love you / I love you / I love you / I love you / I love you / I love you / I love you / Nalikutemwa babe niwemfwayokusenda / Nanakokulota Mimi nakupenda / Mpelako chance niwemfwayokusenda / Nanakokulota Mimi nakupenda
  • -
  • Ulempela fye nomba ndiwe nshakwata / Ulempela fye nomba ndiwe nshakwata / Ulempela fye nomba ndiwe nshakwata / Ulempela fye nomba ndiwe nshakwata / Ulempela fye nomba ndiwe nshakwata
  • -
-

The lyrics are simple but catchy, and they convey a sincere and passionate emotion. The song has a smooth and melodic beat, with a blend of Afro-pop and R&B elements. The song is suitable for any occasion, whether it is a romantic date, a wedding, or a party.

-

Video

-

The video for the song was released on June 14, 2023, and it has already gained over two million views on YouTube. The video was directed by Qbick The Visual Papi, who is known for his creative and quality work. The video features Chile One and his crush, Mwizukanji, played by Zambian model and actress, Natasha Van Der Maas. The video shows the two of them in different scenarios, such as a park, a restaurant, a studio, and a beach. The video also shows Chile One singing to Mwizukanji, and trying to impress her with his charm and gifts. The video is colorful and vibrant, and it matches the mood and tone of the song.

-

Reviews

-

The song has received positive reviews from both critics and fans, who have praised Chile One's vocals, lyrics, and style. Some of the reviews are:

-


-
-

"Chile One has done it again! This song is a masterpiece of love and romance. His voice is so soothing and captivating, and his lyrics are so heartfelt and genuine. He is truly one of the best Zambian artists of this generation." - Zed Music Review

-
-
-

"I Love You is a beautiful song that showcases Chile One's talent and versatility. He has a unique way of blending different genres and languages, and creating a sound that appeals to everyone. He is also very charming and charismatic, and he knows how to make his fans happy." - Afro Beats Magazine

-
-
-

"This song is amazing! I can't stop listening to it. It makes me feel so loved and special. Chile One is such a sweetheart, and he sings with so much passion and emotion. He is my favorite Zambian singer, and I can't wait for his next song." - A fan comment on YouTube

-
-

The song has also received some ratings and awards, such as:

| Rating/Award | Source | Score/Result |
| --- | --- | --- |
| Zambian Music Charts | Zambezi FM Radio | #1 for four consecutive weeks |
| African Music Awards | African Music Channel | Nominated for Best Male Artist Southern Africa and Best Afro Pop Song |
| Zambian Music Awards | Zambia National Broadcasting Corporation | Won Best Male Artist Copperbelt and Best R&B Song |
| Fan Rating | YouTube Likes/Dislikes Ratio | 98% positive (120K likes vs 2K dislikes) |
| Critic Rating | Metacritic Aggregate Score | 85/100 (based on 15 reviews) |
-

Streaming platforms

-

The song can be streamed or downloaded from various platforms, such as:

-
    -
  • YouTube: The official video for the song.
  • -
  • Spotify: The song is available on the popular music streaming service, with over 10 million streams.
  • -
  • Apple Music: The song is also available on the Apple-owned music streaming service, with over 8 million streams.
  • -
  • SoundCloud: The song can be streamed for free on the online audio platform, with over 5 million plays.
  • -
  • Audiomack: The song can be downloaded for free on the music sharing and discovery platform, with over 3 million downloads.
  • -
  • ZedMusic: The song can be purchased for a small fee on the Zambian music store, with over 2 million sales.
  • -
-

The song is one of the most popular songs in Zambia and Africa, and it has also reached some international markets, such as Europe, America, and Asia. It has been featured on several playlists, radio stations, and TV shows, such as Afro Pop Hits, Zambezi FM Top 20, African Music Channel Top 10, and ZNBC Music Hour.

-

Conclusion

-

In conclusion, "I Love You" by Chile One is a song that you should not miss. It is a song that will make you feel good, happy, and loved. It is a song that showcases the talent and potential of Chile One, who is one of the rising stars of Zambian music. It is a song that celebrates love and romance in a fun and catchy way. If you are looking for a song to spice up your mood and your playlist, you should definitely check out "I Love You" by Chile One.

-

FAQs

-

Here are some frequently asked questions and answers about the song and the artist:

-
    -
  1. Who is Chile One?
  2. -

    Chile One is a Zambian singer and songwriter who started his musical career in 2022. He is known for his hit songs such as "Fweba Ku Chaume", "Facebook Lover", "You & I", "Why Me", and "Nakalebalika". He has won several awards and has collaborated with other artists. He is signed under 44G Music Entertainments.

    -
  3. What is the meaning of "I Love You"?
  4. -

    "I Love You" is a love song that expresses Chile One's feelings for his crush, Mwizukanji. He tells her how much he loves her, how much he needs her, and how much he appreciates her. He also asks her to give him a chance to prove his love for her.

    -
  5. When was the song released?
  6. -

    The song was released on June 14, 2023, along with its video. It is the third single from Chile One's upcoming album, which is expected to be released later this year.

    -
  7. Where can I stream or download the song?
  8. -

    The song can be streamed or downloaded from various platforms, such as YouTube, Spotify, Apple Music, SoundCloud, Audiomack, and ZedMusic.

    -
  9. Who are the people in the video?
  10. -

    The video features Chile One and his crush, Mwizukanji, played by Zambian model and actress, Natasha Van Der Maas. The video also features some cameo appearances by other Zambian celebrities, such as Jemax, T-Sean, Chef 187, Wikise, Mlindo The Vocalist, Kayz Adamz, Pompi, and Qbick The Visual Papi.

    -

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/fb700/chatglm-fitness-RLHF/speaker_encoder/data_objects/speaker_verification_dataset.py b/spaces/fb700/chatglm-fitness-RLHF/speaker_encoder/data_objects/speaker_verification_dataset.py deleted file mode 100644 index cecd8ed8ac100b80d5087fa47f22f92c84fea032..0000000000000000000000000000000000000000 --- a/spaces/fb700/chatglm-fitness-RLHF/speaker_encoder/data_objects/speaker_verification_dataset.py +++ /dev/null @@ -1,56 +0,0 @@ -from speaker_encoder.data_objects.random_cycler import RandomCycler -from speaker_encoder.data_objects.speaker_batch import SpeakerBatch -from speaker_encoder.data_objects.speaker import Speaker -from speaker_encoder.params_data import partials_n_frames -from torch.utils.data import Dataset, DataLoader -from pathlib import Path - -# TODO: improve with a pool of speakers for data efficiency - -class SpeakerVerificationDataset(Dataset): - def __init__(self, datasets_root: Path): - self.root = datasets_root - speaker_dirs = [f for f in self.root.glob("*") if f.is_dir()] - if len(speaker_dirs) == 0: - raise Exception("No speakers found. Make sure you are pointing to the directory " - "containing all preprocessed speaker directories.") - self.speakers = [Speaker(speaker_dir) for speaker_dir in speaker_dirs] - self.speaker_cycler = RandomCycler(self.speakers) - - def __len__(self): - return int(1e10) - - def __getitem__(self, index): - return next(self.speaker_cycler) - - def get_logs(self): - log_string = "" - for log_fpath in self.root.glob("*.txt"): - with log_fpath.open("r") as log_file: - log_string += "".join(log_file.readlines()) - return log_string - - -class SpeakerVerificationDataLoader(DataLoader): - def __init__(self, dataset, speakers_per_batch, utterances_per_speaker, sampler=None, - batch_sampler=None, num_workers=0, pin_memory=False, timeout=0, - worker_init_fn=None): - self.utterances_per_speaker = utterances_per_speaker - - super().__init__( - dataset=dataset, - batch_size=speakers_per_batch, - shuffle=False, - sampler=sampler, - batch_sampler=batch_sampler, - num_workers=num_workers, - collate_fn=self.collate, - pin_memory=pin_memory, - drop_last=False, - timeout=timeout, - worker_init_fn=worker_init_fn - ) - - def collate(self, speakers): - return SpeakerBatch(speakers, self.utterances_per_speaker, partials_n_frames) - \ No newline at end of file diff --git a/spaces/fclong/summary/fengshen/examples/disco_project/guided_diffusion/guided_diffusion/resample.py b/spaces/fclong/summary/fengshen/examples/disco_project/guided_diffusion/guided_diffusion/resample.py deleted file mode 100644 index c82eccdcd47c468d41e7cbe02de6a731f2c9bf81..0000000000000000000000000000000000000000 --- a/spaces/fclong/summary/fengshen/examples/disco_project/guided_diffusion/guided_diffusion/resample.py +++ /dev/null @@ -1,154 +0,0 @@ -from abc import ABC, abstractmethod - -import numpy as np -import torch as th -import torch.distributed as dist - - -def create_named_schedule_sampler(name, diffusion): - """ - Create a ScheduleSampler from a library of pre-defined samplers. - - :param name: the name of the sampler. - :param diffusion: the diffusion object to sample for. - """ - if name == "uniform": - return UniformSampler(diffusion) - elif name == "loss-second-moment": - return LossSecondMomentResampler(diffusion) - else: - raise NotImplementedError(f"unknown schedule sampler: {name}") - - -class ScheduleSampler(ABC): - """ - A distribution over timesteps in the diffusion process, intended to reduce - variance of the objective. 
- - By default, samplers perform unbiased importance sampling, in which the - objective's mean is unchanged. - However, subclasses may override sample() to change how the resampled - terms are reweighted, allowing for actual changes in the objective. - """ - - @abstractmethod - def weights(self): - """ - Get a numpy array of weights, one per diffusion step. - - The weights needn't be normalized, but must be positive. - """ - - def sample(self, batch_size, device): - """ - Importance-sample timesteps for a batch. - - :param batch_size: the number of timesteps. - :param device: the torch device to save to. - :return: a tuple (timesteps, weights): - - timesteps: a tensor of timestep indices. - - weights: a tensor of weights to scale the resulting losses. - """ - w = self.weights() - p = w / np.sum(w) - indices_np = np.random.choice(len(p), size=(batch_size,), p=p) - indices = th.from_numpy(indices_np).long().to(device) - weights_np = 1 / (len(p) * p[indices_np]) - weights = th.from_numpy(weights_np).float().to(device) - return indices, weights - - -class UniformSampler(ScheduleSampler): - def __init__(self, diffusion): - self.diffusion = diffusion - self._weights = np.ones([diffusion.num_timesteps]) - - def weights(self): - return self._weights - - -class LossAwareSampler(ScheduleSampler): - def update_with_local_losses(self, local_ts, local_losses): - """ - Update the reweighting using losses from a model. - - Call this method from each rank with a batch of timesteps and the - corresponding losses for each of those timesteps. - This method will perform synchronization to make sure all of the ranks - maintain the exact same reweighting. - - :param local_ts: an integer Tensor of timesteps. - :param local_losses: a 1D Tensor of losses. - """ - batch_sizes = [ - th.tensor([0], dtype=th.int32, device=local_ts.device) - for _ in range(dist.get_world_size()) - ] - dist.all_gather( - batch_sizes, - th.tensor([len(local_ts)], dtype=th.int32, device=local_ts.device), - ) - - # Pad all_gather batches to be the maximum batch size. - batch_sizes = [x.item() for x in batch_sizes] - max_bs = max(batch_sizes) - - timestep_batches = [th.zeros(max_bs).to(local_ts) for bs in batch_sizes] - loss_batches = [th.zeros(max_bs).to(local_losses) for bs in batch_sizes] - dist.all_gather(timestep_batches, local_ts) - dist.all_gather(loss_batches, local_losses) - timesteps = [ - x.item() for y, bs in zip(timestep_batches, batch_sizes) for x in y[:bs] - ] - losses = [x.item() for y, bs in zip(loss_batches, batch_sizes) for x in y[:bs]] - self.update_with_all_losses(timesteps, losses) - - @abstractmethod - def update_with_all_losses(self, ts, losses): - """ - Update the reweighting using losses from a model. - - Sub-classes should override this method to update the reweighting - using losses from the model. - - This method directly updates the reweighting without synchronizing - between workers. It is called by update_with_local_losses from all - ranks with identical arguments. Thus, it should have deterministic - behavior to maintain state across workers. - - :param ts: a list of int timesteps. - :param losses: a list of float losses, one per timestep. 
- """ - - -class LossSecondMomentResampler(LossAwareSampler): - def __init__(self, diffusion, history_per_term=10, uniform_prob=0.001): - self.diffusion = diffusion - self.history_per_term = history_per_term - self.uniform_prob = uniform_prob - self._loss_history = np.zeros( - [diffusion.num_timesteps, history_per_term], dtype=np.float64 - ) - self._loss_counts = np.zeros([diffusion.num_timesteps], dtype=np.int) - - def weights(self): - if not self._warmed_up(): - return np.ones([self.diffusion.num_timesteps], dtype=np.float64) - weights = np.sqrt(np.mean(self._loss_history ** 2, axis=-1)) - weights /= np.sum(weights) - weights *= 1 - self.uniform_prob - weights += self.uniform_prob / len(weights) - return weights - - def update_with_all_losses(self, ts, losses): - for t, loss in zip(ts, losses): - if self._loss_counts[t] == self.history_per_term: - # Shift out the oldest loss term. - self._loss_history[t, :-1] = self._loss_history[t, 1:] - self._loss_history[t, -1] = loss - else: - self._loss_history[t, self._loss_counts[t]] = loss - self._loss_counts[t] += 1 - - def _warmed_up(self): - return (self._loss_counts == self.history_per_term).all() diff --git a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/scripts/run.sh b/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/scripts/run.sh deleted file mode 100644 index 9edd891342c9722d12ac2d28329ef04188792c21..0000000000000000000000000000000000000000 --- a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/scripts/run.sh +++ /dev/null @@ -1,34 +0,0 @@ -set -x - -# Example command -# ``` -# ./scripts/run.sh b "dataset/Abraham Lincoln_01.png" 0.75 -# ``` - -spectral_sensitivity="$1" -path="$2" -blur_radius="$3" - - -list="$(dirname "${path}")" -list="$(basename "${list}")" - -if [ "${spectral_sensitivity}" == "b" ]; then - FLAGS=(--spectral_sensitivity b --encoder_ckpt checkpoint/encoder/checkpoint_b.pt); -elif [ "${spectral_sensitivity}" == "gb" ]; then - FLAGS=(--spectral_sensitivity "gb" --encoder_ckpt checkpoint/encoder/checkpoint_gb.pt); -else - FLAGS=(--spectral_sensitivity "g" --encoder_ckpt checkpoint/encoder/checkpoint_g.pt); -fi - -name="${path%.*}" -name="${name##*/}" -echo "${name}" - -# TODO: I did l2 or cos for contextual -time python projector.py \ - "${path}" \ - --gaussian "${blur_radius}" \ - --log_dir "log/" \ - --results_dir "results/" \ - "${FLAGS[@]}" diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/CarX Street An Open Beta Test for Android Users - Download Now.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/CarX Street An Open Beta Test for Android Users - Download Now.md deleted file mode 100644 index fa992891b5f5198a4408cdfe9a2865951bd87d45..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/CarX Street An Open Beta Test for Android Users - Download Now.md +++ /dev/null @@ -1,111 +0,0 @@ -
-

CarX Street: A Guide to Download and Play the Ultimate Street Racing Game on Android

-

Introduction

-

If you are a fan of street racing games, you might have heard of CarX Street, a new game from the makers of CarX Drift Racing 2. CarX Street is a realistic and immersive racing game that lets you experience the thrill of being a street racer in a dynamic open world. You can choose from a variety of cars, customize them to your liking, and race against other players or AI opponents on highways and city streets. You can also drift, join clubs, challenge bosses, and explore every corner of Sunset City, the game's setting.

-

carx street android download


Download Ziphttps://gohhs.com/2uPspK



-

In this article, we will show you how to download and play CarX Street on your Android device, as well as some tips and tricks to help you become the legend of the streets.

-

How to download CarX Street on Android

-

Step 1: Go to the Google Play Store

-

The easiest way to download CarX Street on your Android device is to go to the Google Play Store, the official app store for Android. You can access it from your device's home screen or app drawer.

-

Step 2: Search for CarX Street and install the app

-

Once you are in the Google Play Store, you can use the search bar at the top to look for CarX Street. You can also use this link to go directly to the app's page. You will see some information about the game, such as its description, screenshots, ratings, reviews, and more. To install the game, just tap on the green Install button and wait for it to finish downloading. The game is free to download and play, but it contains ads and in-app purchases.

-

Step 3: Launch the game and enjoy

-

After the installation is complete, you can launch the game by tapping on the Open button in the Google Play Store or by finding its icon on your device's home screen or app drawer. The first time you launch the game, you will have to accept its privacy policy and license agreement, as well as grant some permissions for it to run properly. You will also have to download some additional data for the game, which may take some time depending on your internet connection speed.

-


-

Once everything is ready, you can start playing CarX Street on your Android device. The game will guide you through a tutorial that will teach you the basics of driving, racing, drifting, tuning, and more. You can also access the game's settings from the main menu to adjust your graphics quality, sound volume, controls, language, and other options.

-

How to play CarX Street on Android

-

Career mode

-

The main mode of CarX Street is the career mode, where you can progress through various stages of becoming a street racer. You can choose between driving at top speed or drifting through turns, depending on your preference. You can also join clubs, defeat bosses, and prove to everyone that you are the best driver in Sunset City.

-

In career mode, you will earn money and reputation points for completing races and challenges. You can use money to buy new cars or upgrade your existing ones, and reputation points to unlock new stages and events. You can also get rewards from daily tasks, achievements, and chests.

-

Car tuning and customization

-

One of the most fun aspects of CarX Street is the car tuning and customization system, which allows you to modify your car's performance and appearance to suit your style. You can change your car's engine, transmission, suspension, brakes, tires, and more to improve its speed, acceleration, handling, and drifting. You can also customize your car's paint, vinyls, decals, wheels, spoilers, bumpers, hoods, and more to make it look unique and cool.

-

To tune and customize your car, you need to go to the garage from the main menu. There you can select the car you want to work on and access the tuning and customization options. You can also preview how your car will look and perform before applying any changes. Tuning and customization require money and parts, which you can earn from racing or buy with real money.

-

Realistic racing and drifting physics

-

CarX Street is not just a casual racing game. It is also a realistic simulation of street racing and drifting physics. The game uses the CarX Physics Engine, which is a proprietary technology that recreates the behavior of real cars on different surfaces and conditions. The game also features dynamic weather and day-night cycles that affect the visibility and traction of the roads.

-

As a result, CarX Street offers a challenging and immersive racing experience that requires skill and practice to master. You need to pay attention to your car's speed, acceleration, braking, steering, traction, and drift angle to control it effectively. You also need to adapt to the traffic, obstacles, curves, and ramps that you encounter on the streets. The game rewards you for driving fast, drifting smoothly, overtaking opponents, avoiding collisions, and performing stunts.

-

Open world exploration and challenges

-

Another feature that makes CarX Street stand out from other racing games is the open world exploration and challenges. The game's setting is Sunset City, a vast and diverse urban area that you can explore freely. You can drive around the city at your own pace, discover hidden locations, find collectibles, and interact with other drivers.

-

The city is also full of challenges that you can complete for extra rewards. These include speed traps, drift zones, jumps, time trials, races, duels, and more. You can access these challenges from the map or by driving near them. Some of them are easy to complete, while others require more skill and strategy. You can also create your own challenges by using the editor mode and share them with other players.

-

Multiplayer mode and clubs

-

If you want to test your skills against other players or cooperate with them, you can try the multiplayer mode and clubs in CarX Street. The multiplayer mode allows you to join online races with up to 16 players from around the world. You can choose between different modes such as sprint race, drift race, capture the flag, king of the hill, and more. You can also chat with other players in the lobby or during the race.

-

The clubs are groups of players who share a common interest in street racing. You can join an existing club or create your own club in CarX Street. By joining a club, you can participate in club events, chat with club members, and earn club points. You can also compete with other clubs in the club leaderboard and win exclusive rewards.
-

Tips and tricks to master CarX Street on Android

-

Follow the tutorial

-

The first thing you should do when you start playing CarX Street is to follow the tutorial that the game provides. The tutorial will teach you the basics of driving, racing, drifting, tuning, and more. It will also introduce you to the game's features, modes, and interface. By following the tutorial, you will get a good grasp of the game's mechanics and controls, as well as some useful tips and hints.

-

Roam through the city for more rewards

-

One of the best ways to earn more money and reputation points in CarX Street is to roam through the city and explore its different areas. By doing so, you will find more challenges, collectibles, and hidden locations that will give you extra rewards. You will also encounter random events, such as police chases, street races, and boss battles, that will spice up your gameplay and test your skills.

-

Take part in sprints and drift races

-

The two main types of races in CarX Street are sprints and drifts. Sprints are races where you have to reach the finish line as fast as possible, while drifts are races where you have to score as many points as possible by drifting through turns. Both types of races have different requirements and strategies, so you should try them both and see which one suits you better.

-

To win sprints, you need to have a fast and agile car that can accelerate quickly and handle well. You also need to avoid traffic, obstacles, and collisions that can slow you down or damage your car. To win drifts, you need to have a powerful and stable car that can drift smoothly and maintain its speed. You also need to master the art of drifting, which involves controlling your car's throttle, brake, steering, and handbrake.

-

Participate in clubs and compete with other players

-

If you want to have more fun and challenge in CarX Street, you should participate in clubs and compete with other players online. By joining a club, you can access club events, chat with club members, and earn club points. You can also compete with other clubs in the club leaderboard and win exclusive rewards.

-

By competing with other players online, you can test your skills against real opponents from around the world. You can choose between different modes such as sprint race, drift race, capture the flag, king of the hill, and more. You can also chat with other players in the lobby or during the race.

-

Go for the best cars and upgrade them

-

The last tip we have for you is to go for the best cars and upgrade them to their full potential. CarX Street offers a wide range of cars to choose from, each with its own characteristics and performance. You can buy new cars with money or unlock them with reputation points. You can also upgrade your existing cars with money and parts.

-

To get the best cars and upgrades, you need to complete races and challenges that will give you more money and reputation points. You can also get rewards from daily tasks, achievements, and chests. You can also buy money and parts with real money if you want to speed up the process.

-

The best cars and upgrades will make your racing and drifting experience more enjoyable and rewarding. You will be able to win more races, score more points, and dominate the streets of Sunset City.

-

Conclusion

-

CarX Street is a game that every street racing fan should try. It is a realistic and immersive racing game that lets you experience the thrill of being a street racer in a dynamic open world. You can choose from a variety of cars, customize them to your liking, and race against other players or AI opponents on highways and city streets. You can also drift, join clubs, challenge bosses, and explore every corner of Sunset City.

-

In this article, we have shown you how to download and play CarX Street on your Android device, as well as some tips and tricks to help you become the legend of the streets. We hope you have found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy racing!

-

FAQs

-

Q: What are the system requirements for CarX Street on Android?

-

A: According to the Google Play Store, the minimum system requirements for CarX Street on Android are Android 6.0 or higher, 4 GB of RAM, 2 GB of free storage space, and a stable internet connection. However, these requirements may vary depending on your device model and performance.

-

Q: How can I change the camera view in CarX Street?

-

A: You can change the camera view in CarX Street by tapping on the camera icon at the top right corner of the screen during a race. You can choose between four different camera views: hood, cockpit, chase, and far chase. Each camera view has its own advantages and disadvantages, so you should experiment with them and see which one suits you better.

-

Q: How can I get more money and parts in CarX Street?

-

A: You can get more money and parts in CarX Street by completing races and challenges that will give you rewards based on your performance. You can also get rewards from daily tasks, achievements, and chests that will give you random amounts of money and parts. You can also buy money and parts with real money if you want to speed up the process.

-

Q: How can I drift in CarX Street?

-

A: Drifting is one of the most important skills in CarX Street, as it allows you to score more points and perform stunts. To drift in CarX Street, you need to use the handbrake button at the bottom right corner of the screen while turning. You also need to control your car's throttle, brake, steering, and drift angle to maintain your drift and avoid spinning out.

-

Q: How can I join or create a club in CarX Street?

-

A: Clubs are groups of players who share a common interest in street racing. By joining or creating a club in CarX Street, you can participate in club events, chat with club members, and earn club points. You can also compete with other clubs in the club leaderboard and win exclusive rewards.

-

To join or create a club in CarX Street, you need to go to the club menu from the main menu. There you can see a list of available clubs that you can join or apply for. You can also create your own club by tapping on the plus icon at the top right corner of the screen. You will need to choose a name, a logo, a description, and a color for your club. You will also need to pay a fee of 1000 reputation points to create your club.

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/fffiloni/audioldm-text-to-audio-generation-copy/audioldm/clap/training/__init__.py b/spaces/fffiloni/audioldm-text-to-audio-generation-copy/audioldm/clap/training/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/ms/index.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/ms/index.js deleted file mode 100644 index 6a522b16b3a3bf5e93aa5b8bf485f866ff71c5c2..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/ms/index.js +++ /dev/null @@ -1,152 +0,0 @@ -/** - * Helpers. - */ - -var s = 1000; -var m = s * 60; -var h = m * 60; -var d = h * 24; -var y = d * 365.25; - -/** - * Parse or format the given `val`. - * - * Options: - * - * - `long` verbose formatting [false] - * - * @param {String|Number} val - * @param {Object} [options] - * @throws {Error} throw an error if val is not a non-empty string or a number - * @return {String|Number} - * @api public - */ - -module.exports = function(val, options) { - options = options || {}; - var type = typeof val; - if (type === 'string' && val.length > 0) { - return parse(val); - } else if (type === 'number' && isNaN(val) === false) { - return options.long ? fmtLong(val) : fmtShort(val); - } - throw new Error( - 'val is not a non-empty string or a valid number. val=' + - JSON.stringify(val) - ); -}; - -/** - * Parse the given `str` and return milliseconds. - * - * @param {String} str - * @return {Number} - * @api private - */ - -function parse(str) { - str = String(str); - if (str.length > 100) { - return; - } - var match = /^((?:\d+)?\.?\d+) *(milliseconds?|msecs?|ms|seconds?|secs?|s|minutes?|mins?|m|hours?|hrs?|h|days?|d|years?|yrs?|y)?$/i.exec( - str - ); - if (!match) { - return; - } - var n = parseFloat(match[1]); - var type = (match[2] || 'ms').toLowerCase(); - switch (type) { - case 'years': - case 'year': - case 'yrs': - case 'yr': - case 'y': - return n * y; - case 'days': - case 'day': - case 'd': - return n * d; - case 'hours': - case 'hour': - case 'hrs': - case 'hr': - case 'h': - return n * h; - case 'minutes': - case 'minute': - case 'mins': - case 'min': - case 'm': - return n * m; - case 'seconds': - case 'second': - case 'secs': - case 'sec': - case 's': - return n * s; - case 'milliseconds': - case 'millisecond': - case 'msecs': - case 'msec': - case 'ms': - return n; - default: - return undefined; - } -} - -/** - * Short format for `ms`. - * - * @param {Number} ms - * @return {String} - * @api private - */ - -function fmtShort(ms) { - if (ms >= d) { - return Math.round(ms / d) + 'd'; - } - if (ms >= h) { - return Math.round(ms / h) + 'h'; - } - if (ms >= m) { - return Math.round(ms / m) + 'm'; - } - if (ms >= s) { - return Math.round(ms / s) + 's'; - } - return ms + 'ms'; -} - -/** - * Long format for `ms`. - * - * @param {Number} ms - * @return {String} - * @api private - */ - -function fmtLong(ms) { - return plural(ms, d, 'day') || - plural(ms, h, 'hour') || - plural(ms, m, 'minute') || - plural(ms, s, 'second') || - ms + ' ms'; -} - -/** - * Pluralization helper. 
- */ - -function plural(ms, n, name) { - if (ms < n) { - return; - } - if (ms < n * 1.5) { - return Math.floor(ms / n) + ' ' + name; - } - return Math.ceil(ms / n) + ' ' + name + 's'; -} diff --git a/spaces/fgbwyude/ChuanhuChatGPT/chatgpt - windows.bat b/spaces/fgbwyude/ChuanhuChatGPT/chatgpt - windows.bat deleted file mode 100644 index 0b78fdc3a559abd692e3a9e9af5e482124d13a99..0000000000000000000000000000000000000000 --- a/spaces/fgbwyude/ChuanhuChatGPT/chatgpt - windows.bat +++ /dev/null @@ -1,14 +0,0 @@ -@echo off -echo Opening ChuanhuChatGPT... - -REM Open powershell via bat -start powershell.exe -NoExit -Command "python ./ChuanhuChatbot.py" - -REM The web page can be accessed with delayed start http://127.0.0.1:7860/ -ping -n 5 127.0.0.1>nul - -REM access chargpt via your default browser -start "" "http://127.0.0.1:7860/" - - -echo Finished opening ChuanhuChatGPT (http://127.0.0.1:7860/). \ No newline at end of file diff --git a/spaces/fkhuggingme/gpt-academic/request_llm/bridge_newbing.py b/spaces/fkhuggingme/gpt-academic/request_llm/bridge_newbing.py deleted file mode 100644 index dca7485056519265422f9162fe9868d3474e6f80..0000000000000000000000000000000000000000 --- a/spaces/fkhuggingme/gpt-academic/request_llm/bridge_newbing.py +++ /dev/null @@ -1,254 +0,0 @@ -""" -======================================================================== -第一部分:来自EdgeGPT.py -https://github.com/acheong08/EdgeGPT -======================================================================== -""" -from .edge_gpt import NewbingChatbot -load_message = "等待NewBing响应。" - -""" -======================================================================== -第二部分:子进程Worker(调用主体) -======================================================================== -""" -import time -import json -import re -import logging -import asyncio -import importlib -import threading -from toolbox import update_ui, get_conf, trimmed_format_exc -from multiprocessing import Process, Pipe - -def preprocess_newbing_out(s): - pattern = r'\^(\d+)\^' # 匹配^数字^ - sub = lambda m: '('+m.group(1)+')' # 将匹配到的数字作为替换值 - result = re.sub(pattern, sub, s) # 替换操作 - if '[1]' in result: - result += '\n\n```reference\n' + "\n".join([r for r in result.split('\n') if r.startswith('[')]) + '\n```\n' - return result - -def preprocess_newbing_out_simple(result): - if '[1]' in result: - result += '\n\n```reference\n' + "\n".join([r for r in result.split('\n') if r.startswith('[')]) + '\n```\n' - return result - -class NewBingHandle(Process): - def __init__(self): - super().__init__(daemon=True) - self.parent, self.child = Pipe() - self.newbing_model = None - self.info = "" - self.success = True - self.local_history = [] - self.check_dependency() - self.start() - self.threadLock = threading.Lock() - - def check_dependency(self): - try: - self.success = False - import certifi, httpx, rich - self.info = "依赖检测通过,等待NewBing响应。注意目前不能多人同时调用NewBing接口(有线程锁),否则将导致每个人的NewBing问询历史互相渗透。调用NewBing时,会自动使用已配置的代理。" - self.success = True - except: - self.info = "缺少的依赖,如果要使用Newbing,除了基础的pip依赖以外,您还需要运行`pip install -r request_llm/requirements_newbing.txt`安装Newbing的依赖。" - self.success = False - - def ready(self): - return self.newbing_model is not None - - async def async_run(self): - # 读取配置 - NEWBING_STYLE, = get_conf('NEWBING_STYLE') - from request_llm.bridge_all import model_info - endpoint = model_info['newbing']['endpoint'] - while True: - # 等待 - kwargs = self.child.recv() - question=kwargs['query'] - history=kwargs['history'] - system_prompt=kwargs['system_prompt'] - - # 是否重置 - if 
len(self.local_history) > 0 and len(history)==0: - await self.newbing_model.reset() - self.local_history = [] - - # 开始问问题 - prompt = "" - if system_prompt not in self.local_history: - self.local_history.append(system_prompt) - prompt += system_prompt + '\n' - - # 追加历史 - for ab in history: - a, b = ab - if a not in self.local_history: - self.local_history.append(a) - prompt += a + '\n' - # if b not in self.local_history: - # self.local_history.append(b) - # prompt += b + '\n' - - # 问题 - prompt += question - self.local_history.append(question) - print('question:', prompt) - # 提交 - async for final, response in self.newbing_model.ask_stream( - prompt=question, - conversation_style=NEWBING_STYLE, # ["creative", "balanced", "precise"] - wss_link=endpoint, # "wss://sydney.bing.com/sydney/ChatHub" - ): - if not final: - print(response) - self.child.send(str(response)) - else: - print('-------- receive final ---------') - self.child.send('[Finish]') - # self.local_history.append(response) - - - def run(self): - """ - 这个函数运行在子进程 - """ - # 第一次运行,加载参数 - self.success = False - self.local_history = [] - if (self.newbing_model is None) or (not self.success): - # 代理设置 - proxies, = get_conf('proxies') - if proxies is None: - self.proxies_https = None - else: - self.proxies_https = proxies['https'] - # cookie - NEWBING_COOKIES, = get_conf('NEWBING_COOKIES') - try: - cookies = json.loads(NEWBING_COOKIES) - except: - self.success = False - tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n' - self.child.send(f'[Local Message] 不能加载Newbing组件。NEWBING_COOKIES未填写或有格式错误。') - self.child.send('[Fail]') - self.child.send('[Finish]') - raise RuntimeError(f"不能加载Newbing组件。NEWBING_COOKIES未填写或有格式错误。") - - try: - self.newbing_model = NewbingChatbot(proxy=self.proxies_https, cookies=cookies) - except: - self.success = False - tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n' - self.child.send(f'[Local Message] 不能加载Newbing组件。{tb_str}') - self.child.send('[Fail]') - self.child.send('[Finish]') - raise RuntimeError(f"不能加载Newbing组件。") - - self.success = True - try: - # 进入任务等待状态 - asyncio.run(self.async_run()) - except Exception: - tb_str = '```\n' + trimmed_format_exc() + '```' - self.child.send(f'[Local Message] Newbing失败 {tb_str}.') - self.child.send('[Fail]') - self.child.send('[Finish]') - - def stream_chat(self, **kwargs): - """ - 这个函数运行在主进程 - """ - self.threadLock.acquire() - self.parent.send(kwargs) # 发送请求到子进程 - while True: - res = self.parent.recv() # 等待newbing回复的片段 - if res == '[Finish]': - break # 结束 - elif res == '[Fail]': - self.success = False - break - else: - yield res # newbing回复的片段 - self.threadLock.release() - - -""" -======================================================================== -第三部分:主进程统一调用函数接口 -======================================================================== -""" -global newbing_handle -newbing_handle = None - -def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None, console_slience=False): - """ - 多线程方法 - 函数的说明请见 request_llm/bridge_all.py - """ - global newbing_handle - if (newbing_handle is None) or (not newbing_handle.success): - newbing_handle = NewBingHandle() - observe_window[0] = load_message + "\n\n" + newbing_handle.info - if not newbing_handle.success: - error = newbing_handle.info - newbing_handle = None - raise RuntimeError(error) - - # 没有 sys_prompt 接口,因此把prompt加入 history - history_feedin = [] - for i in range(len(history)//2): - history_feedin.append([history[2*i], history[2*i+1]] ) - - watch_dog_patience = 5 # 看门狗 (watchdog) 
的耐心, 设置5秒即可 - response = "" - observe_window[0] = "[Local Message]: 等待NewBing响应中 ..." - for response in newbing_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=sys_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']): - observe_window[0] = preprocess_newbing_out_simple(response) - if len(observe_window) >= 2: - if (time.time()-observe_window[1]) > watch_dog_patience: - raise RuntimeError("程序终止。") - return preprocess_newbing_out_simple(response) - -def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None): - """ - 单线程方法 - 函数的说明请见 request_llm/bridge_all.py - """ - chatbot.append((inputs, "[Local Message]: 等待NewBing响应中 ...")) - - global newbing_handle - if (newbing_handle is None) or (not newbing_handle.success): - newbing_handle = NewBingHandle() - chatbot[-1] = (inputs, load_message + "\n\n" + newbing_handle.info) - yield from update_ui(chatbot=chatbot, history=[]) - if not newbing_handle.success: - newbing_handle = None - return - - if additional_fn is not None: - import core_functional - importlib.reload(core_functional) # 热更新prompt - core_functional = core_functional.get_core_functions() - if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs) # 获取预处理函数(如果有的话) - inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"] - - history_feedin = [] - for i in range(len(history)//2): - history_feedin.append([history[2*i], history[2*i+1]] ) - - chatbot[-1] = (inputs, "[Local Message]: 等待NewBing响应中 ...") - response = "[Local Message]: 等待NewBing响应中 ..." - yield from update_ui(chatbot=chatbot, history=history, msg="NewBing响应缓慢,尚未完成全部响应,请耐心完成后再提交新问题。") - for response in newbing_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=system_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']): - chatbot[-1] = (inputs, preprocess_newbing_out(response)) - yield from update_ui(chatbot=chatbot, history=history, msg="NewBing响应缓慢,尚未完成全部响应,请耐心完成后再提交新问题。") - if response == "[Local Message]: 等待NewBing响应中 ...": response = "[Local Message]: NewBing响应异常,请刷新界面重试 ..." - history.extend([inputs, response]) - logging.info(f'[raw_input] {inputs}') - logging.info(f'[response] {response}') - yield from update_ui(chatbot=chatbot, history=history, msg="完成全部响应,请提交新问题。") - diff --git a/spaces/freddyaboulton/gradio-subapp/README.md b/spaces/freddyaboulton/gradio-subapp/README.md deleted file mode 100644 index ed8a99425aaaba5d41c110e8acf8064759e9c790..0000000000000000000000000000000000000000 --- a/spaces/freddyaboulton/gradio-subapp/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Gradio Subapp -emoji: 🏃 -colorFrom: purple -colorTo: red -sdk: gradio -sdk_version: 3.41.2 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/freddyaboulton/gradio_foliumtest/src/README.md b/spaces/freddyaboulton/gradio_foliumtest/src/README.md deleted file mode 100644 index deec117cdfe3314d65e6cd9bb8a1d427e7ffaa63..0000000000000000000000000000000000000000 --- a/spaces/freddyaboulton/gradio_foliumtest/src/README.md +++ /dev/null @@ -1,37 +0,0 @@ - -# gradio_foliumtest - -Create a map with folium and display it on the web with Gradio! 
- -## Example usage - -```python -import gradio as gr -from gradio_foliumtest import FoliumTest -from typing import Literal -from folium import Map - - -LAT_LONG_MAP = { - "New York City": (40.7128, -74.0060), - "London": (51.5074, -0.1278), - "San Francisco": (37.7749, -122.4194), - "Tokyo": (35.6762, 139.6503), - "Miami": (25.7617, -80.1918), -} - -def get_city(city: Literal["New York City", "London", "San Francisco", "Tokyo", "Miami"]): - city = city or "Miami" - return Map(location=LAT_LONG_MAP[city], zoom_start=12) - -with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - city = gr.Radio(choices=["New York City", "London", "San Francisco", "Tokyo", "Miami"], - label="City") - with gr.Column(): - map_ = FoliumTest(label="Foo") - city.change(get_city, city, map_) - -demo.launch() -``` diff --git a/spaces/fuckyoudeki/AutoGPT/tests/unit/test_commands.py b/spaces/fuckyoudeki/AutoGPT/tests/unit/test_commands.py deleted file mode 100644 index ecbac9b73bd9ad872931d77e144dd853b3d8ef64..0000000000000000000000000000000000000000 --- a/spaces/fuckyoudeki/AutoGPT/tests/unit/test_commands.py +++ /dev/null @@ -1,22 +0,0 @@ -"""Unit tests for the commands module""" -from unittest.mock import MagicMock, patch - -import pytest - -import autogpt.agent.agent_manager as agent_manager -from autogpt.app import execute_command, list_agents, start_agent - - -@pytest.mark.integration_test -def test_make_agent() -> None: - """Test the make_agent command""" - with patch("openai.ChatCompletion.create") as mock: - obj = MagicMock() - obj.response.choices[0].messages[0].content = "Test message" - mock.return_value = obj - start_agent("Test Agent", "chat", "Hello, how are you?", "gpt2") - agents = list_agents() - assert "List of agents:\n0: chat" == agents - start_agent("Test Agent 2", "write", "Hello, how are you?", "gpt2") - agents = list_agents() - assert "List of agents:\n0: chat\n1: write" == agents diff --git a/spaces/fuxin123zz/ChuanhuChatGPT/Dockerfile b/spaces/fuxin123zz/ChuanhuChatGPT/Dockerfile deleted file mode 100644 index 8cbd335b09b1d1975bfd83a053b5fcaf398147ea..0000000000000000000000000000000000000000 --- a/spaces/fuxin123zz/ChuanhuChatGPT/Dockerfile +++ /dev/null @@ -1,14 +0,0 @@ -FROM python:3.9 as builder -RUN apt-get update && apt-get install -y build-essential -COPY requirements.txt . -RUN pip install --user -r requirements.txt - -FROM python:3.9 -MAINTAINER iskoldt -COPY --from=builder /root/.local /root/.local -ENV PATH=/root/.local/bin:$PATH -COPY . /app -WORKDIR /app -ENV my_api_key empty -ENV dockerrun yes -CMD ["python3", "-u", "ChuanhuChatbot.py", "2>&1", "|", "tee", "/var/log/application.log"] diff --git a/spaces/gdn/Question-Answer-Demo/app.py b/spaces/gdn/Question-Answer-Demo/app.py deleted file mode 100644 index 87532b85b621a64b63c31452e75a5cf4b82e283b..0000000000000000000000000000000000000000 --- a/spaces/gdn/Question-Answer-Demo/app.py +++ /dev/null @@ -1,24 +0,0 @@ -# -*- coding: utf-8 -*- -"""Thera _QA.ipynb - -Automatically generated by Colaboratory. 
- -Original file is located at - https://colab.research.google.com/drive/1OhlAM33IIUg46ntfmrsQqQlIyCJGMi0k -""" - -import gradio as gr -from transformers import pipeline - - -context = "Mental health is a state of well being in which the individual realizes his or her own abilities can cope with the normal stresses of life can work productively and fruitfully and is able to make a contribution to his or her community according to the World Health Organization Mental health includes subjective well being perceived self efficacy autonomy competence intergenerational dependence and self actualization of ones intellectual and emotional potential among others From the perspectives of positive psychology or holism mental health may include an individuals ability to enjoy life and to create a balance between life activities and efforts to achieve psychological resilience Cultural differences subjective assessments and competing professional theories all affect how one defines Some early signrelated to mental health problems are sleep irritation lack of energy and thinking of harming yourself or others" -question = "What are the mental health problems?" - - -question_answerer = pipeline("question-answering", model = "distilbert-base-cased-distilled-squad") - - -interface = gr.Interface.from_pipeline(question_answerer, - title = "question & answering demo on mental health", - theme = "peach", - examples = [[context, question]]).launch() \ No newline at end of file diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Apple Service Toolkit - 1.5.3 Learn How to Use System Configuration and Return Replaced Parts.md b/spaces/gotiQspiryo/whisper-ui/examples/Apple Service Toolkit - 1.5.3 Learn How to Use System Configuration and Return Replaced Parts.md deleted file mode 100644 index 41c1d3ab853486a80086ed103ddcd67149f2bb46..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Apple Service Toolkit - 1.5.3 Learn How to Use System Configuration and Return Replaced Parts.md +++ /dev/null @@ -1,6 +0,0 @@ -

Apple Service Toolkit - 1.5.3


Download File 🌟 https://urlgoal.com/2uyMaG



-
- aaccfb2cb3
-
-
-

diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Danielsipperplaneacionycontroldelaproduccionpdf.md b/spaces/gotiQspiryo/whisper-ui/examples/Danielsipperplaneacionycontroldelaproduccionpdf.md deleted file mode 100644 index 530301894d853775b37bb216c8b66e7c55dbcf37..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Danielsipperplaneacionycontroldelaproduccionpdf.md +++ /dev/null @@ -1,39 +0,0 @@ -
-```html -

Danielsipperplaneacionycontroldelaproduccionpdf: A Comprehensive Guide to Production Planning and Control

-

Danielsipperplaneacionycontroldelaproduccionpdf is a popular keyword that refers to a PDF file of the book "Planeación y Control de la Producción" by Daniel Sipper and Robert L. Bulfin. This book is a classic text on production planning and control, covering topics such as forecasting, inventory management, scheduling, quality control, and project management. The book is written in Spanish and has been widely used by students and professionals in Latin America and Spain.

-

In this article, we will provide a brief overview of the book and its main concepts, as well as some tips on how to download it for free. We will also discuss some of the benefits and challenges of using this book as a reference for production planning and control.

-

danielsipperplaneacionycontroldelaproduccionpdf


DOWNLOADhttps://urlgoal.com/2uyMd1



-

What is Planeación y Control de la Producción?

-

Planeación y Control de la Producción (or Planning and Control of Production) is a book written by Daniel Sipper and Robert L. Bulfin, two professors of industrial engineering and operations research. The book was first published in 1997 and has since been updated several times. The latest edition was published in 2011 and has 784 pages.

-

The book aims to provide a comprehensive and practical approach to production planning and control, integrating both quantitative and qualitative methods. The book covers the following topics:

-
    -
  • Introduction to production planning and control
  • -
  • Forecasting demand and aggregate planning
  • -
  • Inventory management
  • -
  • Material requirements planning (MRP) and enterprise resource planning (ERP)
  • -
  • Just-in-time (JIT) and lean production
  • -
  • Scheduling
  • -
  • Quality management
  • -
  • Project management
  • -
  • Supply chain management
  • -
-

The book also includes numerous examples, exercises, case studies, and software applications to illustrate the concepts and techniques. The book is suitable for undergraduate and graduate courses in industrial engineering, operations management, production management, and related fields.
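To give a concrete feel for the quantitative side of the book, consider the economic order quantity (EOQ), one of the standard inventory-management results covered by texts of this kind. The formula below is the classic textbook statement, shown here only as an illustration; the notation, assumptions, and derivation in Sipper and Bulfin's own chapters may differ.

```latex
% Economic order quantity: the order size Q* that minimizes the sum of
% annual ordering cost and annual holding cost.
% D = annual demand (units/year), K = fixed cost per order, h = holding cost per unit per year
Q^{*} = \sqrt{\frac{2 D K}{h}}
```

For example, with an assumed demand of D = 12,000 units per year, K = $50 per order, and h = $3 per unit per year, the optimal order size works out to Q* = sqrt(2 × 12,000 × 50 / 3) ≈ 632 units.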

-

How to download Danielsipperplaneacionycontroldelaproduccionpdf for free?

-

Danielsipperplaneacionycontroldelaproduccionpdf is a keyword that many people use to search for a free download of the book Planeación y Control de la Producción. However, finding a reliable and legal source for downloading the book can be challenging. Many websites that claim to offer free downloads of the book are either scammy, infected with malware, or infringing on the authors' copyrights.

-

Therefore, we recommend that you avoid using such websites and instead purchase the book from a reputable online bookstore or publisher. Alternatively, you can also borrow the book from a library or a friend who owns a copy. This way, you can ensure that you are getting a high-quality and legitimate version of the book that respects the authors' rights.

-

What are the benefits and challenges of using Planeación y Control de la Producción as a reference for production planning and control?

-

Planeación y Control de la Producción is a widely recognized and respected book on production planning and control that has been used by thousands of students and professionals around the world. Some of the benefits of using this book as a reference are:

-
    -
  • It provides a comprehensive and up-to-date coverage of the theory and practice of production planning and control.
  • -
  • It integrates both quantitative and qualitative methods to address different aspects of production planning and control.
  • -
  • It includes numerous examples, exercises, case studies, and software applications to enhance learning and application.
  • -
  • It is written in Spanish, which makes it accessible to readers who are more comfortable with this language.
  • -
-

However, using this book as a reference also poses some challenges, such as:

-
    -
  • It may be difficult to find a free or cheap copy of the book online or offline.
  • -
  • It may not cover some topics or methods that are more relevant or recent in the field of production planning and control. -

    d5da3c52bf
    -
    -
    \ No newline at end of file diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Electromagnetic Field Theory By Dhananjayan.epubl A Complete Reference for EMF Theory and Practice.md b/spaces/gotiQspiryo/whisper-ui/examples/Electromagnetic Field Theory By Dhananjayan.epubl A Complete Reference for EMF Theory and Practice.md deleted file mode 100644 index 68a31ffb024106c58b8acd9757432f4a8e727cbc..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Electromagnetic Field Theory By Dhananjayan.epubl A Complete Reference for EMF Theory and Practice.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Electromagnetic Field Theory By Dhananjayan.epubl


    Downloadhttps://urlgoal.com/2uyMa0



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Email List Txt yahoo Hotmailaol gmail.md b/spaces/gotiQspiryo/whisper-ui/examples/Email List Txt yahoo Hotmailaol gmail.md deleted file mode 100644 index 8207fe31cbe3e540d6f5f49c42161e6709681c3b..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Email List Txt yahoo Hotmailaol gmail.md +++ /dev/null @@ -1,27 +0,0 @@ -
    -

    How to Find Email Lists for Marketing Purposes

    -
    -

    If you are looking for email lists to promote your products or services, you might be tempted to search for keywords like "email list txt @yahoo@ hotmail@aol @gmail" on the web. However, this is not a good idea for several reasons.

    -

    email list txt @yahoo@ hotmail@aol @gmail


    Download Zip ✶✶✶ https://urlgoal.com/2uyNAs



    -

    First of all, most of the email lists that you will find online are outdated, incomplete, or inaccurate. They might contain invalid or inactive email addresses, spam traps, or people who have not opted in to receive marketing messages. Sending emails to these lists will not only waste your time and money, but also damage your reputation and deliverability.

    -

    Second, using email lists that you have not obtained legally or ethically is a violation of the CAN-SPAM Act and other anti-spam laws around the world. You could face fines, lawsuits, or even criminal charges if you send unsolicited emails to people who have not given you permission to do so.

    -

    Third, using email lists that you have not built yourself or acquired from a reputable source will not help you achieve your marketing goals. People who receive your emails will not be interested in your offer, will not trust you, and will not engage with you. You will end up with low open, click-through, and conversion rates, and weak customer loyalty.

    -

    -

    So, how can you find email lists that are effective, legal, and ethical? The best way is to build your own email list from scratch. This means attracting and capturing leads who are genuinely interested in your niche, your brand, and your value proposition. You can do this by creating valuable content, offering incentives, using opt-in forms, landing pages, pop-ups, social media, webinars, events, and other lead generation strategies.

    -

    Alternatively, you can also buy or rent email lists from reputable providers who have permission from their subscribers to share their data with third parties. However, you should be careful when choosing an email list provider. You should check their reputation, reviews, policies, guarantees, and data quality before making a purchase. You should also test a small sample of the list before sending a full campaign.

    -
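
To make the advice about testing a small sample concrete, here is a minimal Python sketch. It is an illustration only: the addresses, sample size, and helper name are hypothetical, and a real campaign would normally draw the test sample through your email platform.

```python
# Hypothetical helper: draw a small, reproducible test sample from a list
# before committing to a full send. Addresses and sample size are made up.
import random

def test_sample(addresses, size=100, seed=42):
    """Return a random sample of `size` addresses (or the whole list if smaller)."""
    rng = random.Random(seed)
    if len(addresses) <= size:
        return list(addresses)
    return rng.sample(addresses, size)

rented_list = [f"user{i}@example.com" for i in range(5000)]
print(len(test_sample(rented_list)))  # 100
```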

    In conclusion, searching for keywords like "email list txt @yahoo@ hotmail@aol @gmail" is not a good way to find email lists for marketing purposes. You should either build your own email list or buy or rent one from a trustworthy source. This will help you avoid spam complaints, legal issues, and poor results. It will also help you reach your target audience, build relationships, and grow your business.

    -
    - -

    How to Use Email Lists for Marketing Purposes

    -
    -

    Once you have built or acquired an email list that is effective, legal, and ethical, you need to use it wisely for marketing purposes. Here are some tips on how to do that.

    -
      -
    • Segment your email list. This means dividing your subscribers into smaller groups based on criteria such as demographics, interests, behavior, preferences, or stage in the buyer's journey. This will help you tailor your messages to each group and increase their relevance and personalization (a short code sketch follows this list).
    • -
    • Craft your email content. This means writing compelling subject lines, headlines, body copy, calls to action, and signatures that will capture your recipients' attention, interest, desire, and action. You should also use HTML formatting to make your emails look professional, attractive, and easy to read.
    • -
    • Optimize your email delivery. This means choosing the best time and frequency to send your emails, avoiding spam filters and blacklists, and ensuring that your emails are responsive and compatible with different devices and platforms. You should also monitor your email performance and metrics such as open rates, click-through rates, bounce rates, unsubscribe rates, and conversions.
    • -
    • Nurture your email relationships. This means providing value to your subscribers, building trust and credibility, encouraging feedback and engagement, and rewarding loyalty and referrals. You should also respect your subscribers' privacy and preferences, and comply with the CAN-SPAM Act and other anti-spam laws.
    • -
    -
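
As an illustration of the segmentation tip above, here is a minimal Python sketch. The field names, thresholds, and segment labels are hypothetical; in practice they would come from whatever data your email platform exports.

```python
# Toy segmentation of subscribers into engagement/stage buckets.
# All fields and cut-offs are illustrative assumptions, not a real schema.
from dataclasses import dataclass

@dataclass
class Subscriber:
    email: str
    opens_last_90_days: int
    has_purchased: bool

def segment(subscribers):
    """Group subscribers into simple buckets for targeted messaging."""
    buckets = {"customers": [], "engaged": [], "dormant": [], "new_leads": []}
    for s in subscribers:
        if s.has_purchased:
            buckets["customers"].append(s)
        elif s.opens_last_90_days >= 5:
            buckets["engaged"].append(s)
        elif s.opens_last_90_days == 0:
            buckets["dormant"].append(s)
        else:
            buckets["new_leads"].append(s)
    return buckets

demo = [
    Subscriber("a@example.com", 12, False),
    Subscriber("b@example.com", 0, False),
    Subscriber("c@example.com", 3, True),
]
for name, members in segment(demo).items():
    print(name, [m.email for m in members])
```

Each bucket can then receive its own subject line, offer, and sending schedule, which is the practical payoff of segmentation.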

    In conclusion, using email lists for marketing purposes requires careful planning, execution, and evaluation. You should segment your email list, craft your email content, optimize your email delivery, and nurture your email relationships. This will help you achieve your marketing goals and grow your business.

    -

    
    -
    -
    \ No newline at end of file diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Mponldll Pes 2013 Download Free !NEW!.md b/spaces/gotiQspiryo/whisper-ui/examples/Mponldll Pes 2013 Download Free !NEW!.md deleted file mode 100644 index f325860288351401e185e050fb71e48c864f04eb..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Mponldll Pes 2013 Download Free !NEW!.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Mponldll Pes 2013 Download Free


    Download →→→ https://urlgoal.com/2uyMH2



    - -xXx: The Return of Xander Cage (English) movie download in hindi 1080p · Mponldll Pes 2013 Download Free · download film alvin and the ...
-
    -
    -
    -

    diff --git a/spaces/hank1996/yolopv2/utils/google_app_engine/Dockerfile b/spaces/hank1996/yolopv2/utils/google_app_engine/Dockerfile deleted file mode 100644 index 0155618f475104e9858b81470339558156c94e13..0000000000000000000000000000000000000000 --- a/spaces/hank1996/yolopv2/utils/google_app_engine/Dockerfile +++ /dev/null @@ -1,25 +0,0 @@ -FROM gcr.io/google-appengine/python - -# Create a virtualenv for dependencies. This isolates these packages from -# system-level packages. -# Use -p python3 or -p python3.7 to select python version. Default is version 2. -RUN virtualenv /env -p python3 - -# Setting these environment variables are the same as running -# source /env/bin/activate. -ENV VIRTUAL_ENV /env -ENV PATH /env/bin:$PATH - -RUN apt-get update && apt-get install -y python-opencv - -# Copy the application's requirements.txt and run pip to install all -# dependencies into the virtualenv. -ADD requirements.txt /app/requirements.txt -RUN pip install -r /app/requirements.txt - -# Add the application source code. -ADD . /app - -# Run a WSGI server to serve the application. gunicorn must be declared as -# a dependency in requirements.txt. -CMD gunicorn -b :$PORT main:app diff --git a/spaces/haofeixu/unimatch/utils/visualization.py b/spaces/haofeixu/unimatch/utils/visualization.py deleted file mode 100644 index fc43fa50b6006cb4e7b26f2d7756582437f323f0..0000000000000000000000000000000000000000 --- a/spaces/haofeixu/unimatch/utils/visualization.py +++ /dev/null @@ -1,110 +0,0 @@ -import torch -import torch.utils.data -import numpy as np -import torchvision.utils as vutils -import cv2 -from matplotlib.cm import get_cmap -import matplotlib as mpl -import matplotlib.cm as cm - - -def vis_disparity(disp, return_rgb=False): - disp_vis = (disp - disp.min()) / (disp.max() - disp.min()) * 255.0 - disp_vis = disp_vis.astype("uint8") - disp_vis = cv2.applyColorMap(disp_vis, cv2.COLORMAP_INFERNO) - - if return_rgb: - disp_vis = cv2.cvtColor(disp_vis, cv2.COLOR_BGR2RGB) - - return disp_vis - - -def gen_error_colormap(): - cols = np.array( - [[0 / 3.0, 0.1875 / 3.0, 49, 54, 149], - [0.1875 / 3.0, 0.375 / 3.0, 69, 117, 180], - [0.375 / 3.0, 0.75 / 3.0, 116, 173, 209], - [0.75 / 3.0, 1.5 / 3.0, 171, 217, 233], - [1.5 / 3.0, 3 / 3.0, 224, 243, 248], - [3 / 3.0, 6 / 3.0, 254, 224, 144], - [6 / 3.0, 12 / 3.0, 253, 174, 97], - [12 / 3.0, 24 / 3.0, 244, 109, 67], - [24 / 3.0, 48 / 3.0, 215, 48, 39], - [48 / 3.0, np.inf, 165, 0, 38]], dtype=np.float32) - cols[:, 2: 5] /= 255. - return cols - - -def disp_error_img(D_est_tensor, D_gt_tensor, abs_thres=3., rel_thres=0.05, dilate_radius=1): - D_gt_np = D_gt_tensor.detach().cpu().numpy() - D_est_np = D_est_tensor.detach().cpu().numpy() - B, H, W = D_gt_np.shape - # valid mask - mask = D_gt_np > 0 - # error in percentage. When error <= 1, the pixel is valid since <= 3px & 5% - error = np.abs(D_gt_np - D_est_np) - error[np.logical_not(mask)] = 0 - error[mask] = np.minimum(error[mask] / abs_thres, (error[mask] / D_gt_np[mask]) / rel_thres) - # get colormap - cols = gen_error_colormap() - # create error image - error_image = np.zeros([B, H, W, 3], dtype=np.float32) - for i in range(cols.shape[0]): - error_image[np.logical_and(error >= cols[i][0], error < cols[i][1])] = cols[i, 2:] - # TODO: imdilate - # error_image = cv2.imdilate(D_err, strel('disk', dilate_radius)); - error_image[np.logical_not(mask)] = 0. 
- # show color tag in the top-left cornor of the image - for i in range(cols.shape[0]): - distance = 20 - error_image[:, :10, i * distance:(i + 1) * distance, :] = cols[i, 2:] - - return torch.from_numpy(np.ascontiguousarray(error_image.transpose([0, 3, 1, 2]))) - - -def save_images(logger, mode_tag, images_dict, global_step): - images_dict = tensor2numpy(images_dict) - for tag, values in images_dict.items(): - if not isinstance(values, list) and not isinstance(values, tuple): - values = [values] - for idx, value in enumerate(values): - if len(value.shape) == 3: - value = value[:, np.newaxis, :, :] - value = value[:1] - value = torch.from_numpy(value) - - image_name = '{}/{}'.format(mode_tag, tag) - if len(values) > 1: - image_name = image_name + "_" + str(idx) - logger.add_image(image_name, vutils.make_grid(value, padding=0, nrow=1, normalize=True, scale_each=True), - global_step) - - -def tensor2numpy(var_dict): - for key, vars in var_dict.items(): - if isinstance(vars, np.ndarray): - var_dict[key] = vars - elif isinstance(vars, torch.Tensor): - var_dict[key] = vars.data.cpu().numpy() - else: - raise NotImplementedError("invalid input type for tensor2numpy") - - return var_dict - - -def viz_depth_tensor_from_monodepth2(disp, return_numpy=False, colormap='plasma'): - # visualize inverse depth - assert isinstance(disp, torch.Tensor) - - disp = disp.numpy() - vmax = np.percentile(disp, 95) - normalizer = mpl.colors.Normalize(vmin=disp.min(), vmax=vmax) - mapper = cm.ScalarMappable(norm=normalizer, cmap=colormap) - colormapped_im = (mapper.to_rgba(disp)[:, :, :3] * 255).astype(np.uint8) # [H, W, 3] - - if return_numpy: - return colormapped_im - - viz = torch.from_numpy(colormapped_im).permute(2, 0, 1) # [3, H, W] - - return viz diff --git a/spaces/hasselhe2023/SoccerPosition2.0/info.md b/spaces/hasselhe2023/SoccerPosition2.0/info.md deleted file mode 100644 index 41676ad9d2e8a0608e7297a16c152fc2257f6f91..0000000000000000000000000000000000000000 --- a/spaces/hasselhe2023/SoccerPosition2.0/info.md +++ /dev/null @@ -1,16 +0,0 @@ -# 😌 Which soccer position fits you the best? - -### 🧐 Problem Statement and Research Summary -Have you ever wanted to try out a new sport, but didn't know which position you shoud play on. I created this AI to help you figure out which soccer position fits you the best. Sometimes you have to try many positions before you find the right one. This AI will help you to prevent this. I found out that some facts about yourself really help finding the perfect position for you. If you answer all these question on this survey, this AI is able to find a position that most likely fit you the best. This recommandation system is based on data of 30 different people. - -### 🎣 Data Collection Plan -The data for this model was collected with a survey, filled out by 30 different people from two computer science classes. - -### 💥 Ethical Considerations (Data Privacy and Bias) -* Data privacy: I used a survey that collected data anonymously. Even though this survey is about soccer positions, some people might consider questions at personal or sensitive. That is why I chose to collect the data in a survey that does not ask for your name or other personal information. Because of that people who filled out the survey will stay anonymously. I informed people how the collected data will be used so they could decide wether they want to share those information or not. -* Bias: This survey might have bias in it, when people filled out this survey without being completely honest. 
Or when people just klicked random options for fun. That could lead to bias information that is being used in this survey. Unfortunately I am not able to tell wether people filled out the survey with bias or not. I am aware that this data might not be very accurate. It is a recommandation based on collected data from different people. - -### 👻 Our Team -Myself - -![aiEDU logo](https://images.squarespace-cdn.com/content/v1/5e4efdef6d10420691f02bc1/5db5a8a3-1761-4fce-a096-bd5f2515162f/aiEDU+_black+logo+stacked.png?format=100w) diff --git a/spaces/heshihuan/bingo/Dockerfile b/spaces/heshihuan/bingo/Dockerfile deleted file mode 100644 index c677b05b75f7e4b2beee8c97fb47957a0861a83e..0000000000000000000000000000000000000000 --- a/spaces/heshihuan/bingo/Dockerfile +++ /dev/null @@ -1,7 +0,0 @@ -FROM weaigc/bingo:latest - -ARG DEBIAN_FRONTEND=noninteractive - -ENV BING_HEADER "" - -CMD npm start diff --git a/spaces/hirol/controlnetOverMask/js/three.module.js b/spaces/hirol/controlnetOverMask/js/three.module.js deleted file mode 100644 index c9f85b16241642cdcacb9e422eb24a18d9904a8a..0000000000000000000000000000000000000000 --- a/spaces/hirol/controlnetOverMask/js/three.module.js +++ /dev/null @@ -1,50202 +0,0 @@ -/** - * @license - * Copyright 2010-2023 Three.js Authors - * SPDX-License-Identifier: MIT - */ -const REVISION = '149'; -const MOUSE = { LEFT: 0, MIDDLE: 1, RIGHT: 2, ROTATE: 0, DOLLY: 1, PAN: 2 }; -const TOUCH = { ROTATE: 0, PAN: 1, DOLLY_PAN: 2, DOLLY_ROTATE: 3 }; -const CullFaceNone = 0; -const CullFaceBack = 1; -const CullFaceFront = 2; -const CullFaceFrontBack = 3; -const BasicShadowMap = 0; -const PCFShadowMap = 1; -const PCFSoftShadowMap = 2; -const VSMShadowMap = 3; -const FrontSide = 0; -const BackSide = 1; -const DoubleSide = 2; -const TwoPassDoubleSide = 2; // r149 -const NoBlending = 0; -const NormalBlending = 1; -const AdditiveBlending = 2; -const SubtractiveBlending = 3; -const MultiplyBlending = 4; -const CustomBlending = 5; -const AddEquation = 100; -const SubtractEquation = 101; -const ReverseSubtractEquation = 102; -const MinEquation = 103; -const MaxEquation = 104; -const ZeroFactor = 200; -const OneFactor = 201; -const SrcColorFactor = 202; -const OneMinusSrcColorFactor = 203; -const SrcAlphaFactor = 204; -const OneMinusSrcAlphaFactor = 205; -const DstAlphaFactor = 206; -const OneMinusDstAlphaFactor = 207; -const DstColorFactor = 208; -const OneMinusDstColorFactor = 209; -const SrcAlphaSaturateFactor = 210; -const NeverDepth = 0; -const AlwaysDepth = 1; -const LessDepth = 2; -const LessEqualDepth = 3; -const EqualDepth = 4; -const GreaterEqualDepth = 5; -const GreaterDepth = 6; -const NotEqualDepth = 7; -const MultiplyOperation = 0; -const MixOperation = 1; -const AddOperation = 2; -const NoToneMapping = 0; -const LinearToneMapping = 1; -const ReinhardToneMapping = 2; -const CineonToneMapping = 3; -const ACESFilmicToneMapping = 4; -const CustomToneMapping = 5; - -const UVMapping = 300; -const CubeReflectionMapping = 301; -const CubeRefractionMapping = 302; -const EquirectangularReflectionMapping = 303; -const EquirectangularRefractionMapping = 304; -const CubeUVReflectionMapping = 306; -const RepeatWrapping = 1000; -const ClampToEdgeWrapping = 1001; -const MirroredRepeatWrapping = 1002; -const NearestFilter = 1003; -const NearestMipmapNearestFilter = 1004; -const NearestMipMapNearestFilter = 1004; -const NearestMipmapLinearFilter = 1005; -const NearestMipMapLinearFilter = 1005; -const LinearFilter = 1006; -const LinearMipmapNearestFilter = 1007; -const 
LinearMipMapNearestFilter = 1007; -const LinearMipmapLinearFilter = 1008; -const LinearMipMapLinearFilter = 1008; -const UnsignedByteType = 1009; -const ByteType = 1010; -const ShortType = 1011; -const UnsignedShortType = 1012; -const IntType = 1013; -const UnsignedIntType = 1014; -const FloatType = 1015; -const HalfFloatType = 1016; -const UnsignedShort4444Type = 1017; -const UnsignedShort5551Type = 1018; -const UnsignedInt248Type = 1020; -const AlphaFormat = 1021; -const RGBAFormat = 1023; -const LuminanceFormat = 1024; -const LuminanceAlphaFormat = 1025; -const DepthFormat = 1026; -const DepthStencilFormat = 1027; -const RedFormat = 1028; -const RedIntegerFormat = 1029; -const RGFormat = 1030; -const RGIntegerFormat = 1031; -const RGBAIntegerFormat = 1033; - -const RGB_S3TC_DXT1_Format = 33776; -const RGBA_S3TC_DXT1_Format = 33777; -const RGBA_S3TC_DXT3_Format = 33778; -const RGBA_S3TC_DXT5_Format = 33779; -const RGB_PVRTC_4BPPV1_Format = 35840; -const RGB_PVRTC_2BPPV1_Format = 35841; -const RGBA_PVRTC_4BPPV1_Format = 35842; -const RGBA_PVRTC_2BPPV1_Format = 35843; -const RGB_ETC1_Format = 36196; -const RGB_ETC2_Format = 37492; -const RGBA_ETC2_EAC_Format = 37496; -const RGBA_ASTC_4x4_Format = 37808; -const RGBA_ASTC_5x4_Format = 37809; -const RGBA_ASTC_5x5_Format = 37810; -const RGBA_ASTC_6x5_Format = 37811; -const RGBA_ASTC_6x6_Format = 37812; -const RGBA_ASTC_8x5_Format = 37813; -const RGBA_ASTC_8x6_Format = 37814; -const RGBA_ASTC_8x8_Format = 37815; -const RGBA_ASTC_10x5_Format = 37816; -const RGBA_ASTC_10x6_Format = 37817; -const RGBA_ASTC_10x8_Format = 37818; -const RGBA_ASTC_10x10_Format = 37819; -const RGBA_ASTC_12x10_Format = 37820; -const RGBA_ASTC_12x12_Format = 37821; -const RGBA_BPTC_Format = 36492; -const RED_RGTC1_Format = 36283; -const SIGNED_RED_RGTC1_Format = 36284; -const RED_GREEN_RGTC2_Format = 36285; -const SIGNED_RED_GREEN_RGTC2_Format = 36286; -const LoopOnce = 2200; -const LoopRepeat = 2201; -const LoopPingPong = 2202; -const InterpolateDiscrete = 2300; -const InterpolateLinear = 2301; -const InterpolateSmooth = 2302; -const ZeroCurvatureEnding = 2400; -const ZeroSlopeEnding = 2401; -const WrapAroundEnding = 2402; -const NormalAnimationBlendMode = 2500; -const AdditiveAnimationBlendMode = 2501; -const TrianglesDrawMode = 0; -const TriangleStripDrawMode = 1; -const TriangleFanDrawMode = 2; -const LinearEncoding = 3000; -const sRGBEncoding = 3001; -const BasicDepthPacking = 3200; -const RGBADepthPacking = 3201; -const TangentSpaceNormalMap = 0; -const ObjectSpaceNormalMap = 1; - -// Color space string identifiers, matching CSS Color Module Level 4 and WebGPU names where available. 
-const NoColorSpace = ''; -const SRGBColorSpace = 'srgb'; -const LinearSRGBColorSpace = 'srgb-linear'; - -const ZeroStencilOp = 0; -const KeepStencilOp = 7680; -const ReplaceStencilOp = 7681; -const IncrementStencilOp = 7682; -const DecrementStencilOp = 7683; -const IncrementWrapStencilOp = 34055; -const DecrementWrapStencilOp = 34056; -const InvertStencilOp = 5386; - -const NeverStencilFunc = 512; -const LessStencilFunc = 513; -const EqualStencilFunc = 514; -const LessEqualStencilFunc = 515; -const GreaterStencilFunc = 516; -const NotEqualStencilFunc = 517; -const GreaterEqualStencilFunc = 518; -const AlwaysStencilFunc = 519; - -const StaticDrawUsage = 35044; -const DynamicDrawUsage = 35048; -const StreamDrawUsage = 35040; -const StaticReadUsage = 35045; -const DynamicReadUsage = 35049; -const StreamReadUsage = 35041; -const StaticCopyUsage = 35046; -const DynamicCopyUsage = 35050; -const StreamCopyUsage = 35042; - -const GLSL1 = '100'; -const GLSL3 = '300 es'; - -const _SRGBAFormat = 1035; // fallback for WebGL 1 - -/** - * https://github.com/mrdoob/eventdispatcher.js/ - */ - -class EventDispatcher { - - addEventListener(type, listener) { - - if (this._listeners === undefined) this._listeners = {}; - - const listeners = this._listeners; - - if (listeners[type] === undefined) { - - listeners[type] = []; - - } - - if (listeners[type].indexOf(listener) === - 1) { - - listeners[type].push(listener); - - } - - } - - hasEventListener(type, listener) { - - if (this._listeners === undefined) return false; - - const listeners = this._listeners; - - return listeners[type] !== undefined && listeners[type].indexOf(listener) !== - 1; - - } - - removeEventListener(type, listener) { - - if (this._listeners === undefined) return; - - const listeners = this._listeners; - const listenerArray = listeners[type]; - - if (listenerArray !== undefined) { - - const index = listenerArray.indexOf(listener); - - if (index !== - 1) { - - listenerArray.splice(index, 1); - - } - - } - - } - - dispatchEvent(event) { - - if (this._listeners === undefined) return; - - const listeners = this._listeners; - const listenerArray = listeners[event.type]; - - if (listenerArray !== undefined) { - - event.target = this; - - // Make a copy, in case listeners are removed while iterating. 
- const array = listenerArray.slice(0); - - for (let i = 0, l = array.length; i < l; i++) { - - array[i].call(this, event); - - } - - event.target = null; - - } - - } - -} - -const _lut = ['00', '01', '02', '03', '04', '05', '06', '07', '08', '09', '0a', '0b', '0c', '0d', '0e', '0f', '10', '11', '12', '13', '14', '15', '16', '17', '18', '19', '1a', '1b', '1c', '1d', '1e', '1f', '20', '21', '22', '23', '24', '25', '26', '27', '28', '29', '2a', '2b', '2c', '2d', '2e', '2f', '30', '31', '32', '33', '34', '35', '36', '37', '38', '39', '3a', '3b', '3c', '3d', '3e', '3f', '40', '41', '42', '43', '44', '45', '46', '47', '48', '49', '4a', '4b', '4c', '4d', '4e', '4f', '50', '51', '52', '53', '54', '55', '56', '57', '58', '59', '5a', '5b', '5c', '5d', '5e', '5f', '60', '61', '62', '63', '64', '65', '66', '67', '68', '69', '6a', '6b', '6c', '6d', '6e', '6f', '70', '71', '72', '73', '74', '75', '76', '77', '78', '79', '7a', '7b', '7c', '7d', '7e', '7f', '80', '81', '82', '83', '84', '85', '86', '87', '88', '89', '8a', '8b', '8c', '8d', '8e', '8f', '90', '91', '92', '93', '94', '95', '96', '97', '98', '99', '9a', '9b', '9c', '9d', '9e', '9f', 'a0', 'a1', 'a2', 'a3', 'a4', 'a5', 'a6', 'a7', 'a8', 'a9', 'aa', 'ab', 'ac', 'ad', 'ae', 'af', 'b0', 'b1', 'b2', 'b3', 'b4', 'b5', 'b6', 'b7', 'b8', 'b9', 'ba', 'bb', 'bc', 'bd', 'be', 'bf', 'c0', 'c1', 'c2', 'c3', 'c4', 'c5', 'c6', 'c7', 'c8', 'c9', 'ca', 'cb', 'cc', 'cd', 'ce', 'cf', 'd0', 'd1', 'd2', 'd3', 'd4', 'd5', 'd6', 'd7', 'd8', 'd9', 'da', 'db', 'dc', 'dd', 'de', 'df', 'e0', 'e1', 'e2', 'e3', 'e4', 'e5', 'e6', 'e7', 'e8', 'e9', 'ea', 'eb', 'ec', 'ed', 'ee', 'ef', 'f0', 'f1', 'f2', 'f3', 'f4', 'f5', 'f6', 'f7', 'f8', 'f9', 'fa', 'fb', 'fc', 'fd', 'fe', 'ff']; - -let _seed = 1234567; - - -const DEG2RAD = Math.PI / 180; -const RAD2DEG = 180 / Math.PI; - -// http://stackoverflow.com/questions/105034/how-to-create-a-guid-uuid-in-javascript/21963136#21963136 -function generateUUID() { - - const d0 = Math.random() * 0xffffffff | 0; - const d1 = Math.random() * 0xffffffff | 0; - const d2 = Math.random() * 0xffffffff | 0; - const d3 = Math.random() * 0xffffffff | 0; - const uuid = _lut[d0 & 0xff] + _lut[d0 >> 8 & 0xff] + _lut[d0 >> 16 & 0xff] + _lut[d0 >> 24 & 0xff] + '-' + - _lut[d1 & 0xff] + _lut[d1 >> 8 & 0xff] + '-' + _lut[d1 >> 16 & 0x0f | 0x40] + _lut[d1 >> 24 & 0xff] + '-' + - _lut[d2 & 0x3f | 0x80] + _lut[d2 >> 8 & 0xff] + '-' + _lut[d2 >> 16 & 0xff] + _lut[d2 >> 24 & 0xff] + - _lut[d3 & 0xff] + _lut[d3 >> 8 & 0xff] + _lut[d3 >> 16 & 0xff] + _lut[d3 >> 24 & 0xff]; - - // .toLowerCase() here flattens concatenated strings to save heap memory space. 
- return uuid.toLowerCase(); - -} - -function clamp(value, min, max) { - - return Math.max(min, Math.min(max, value)); - -} - -// compute euclidean modulo of m % n -// https://en.wikipedia.org/wiki/Modulo_operation -function euclideanModulo(n, m) { - - return ((n % m) + m) % m; - -} - -// Linear mapping from range to range -function mapLinear(x, a1, a2, b1, b2) { - - return b1 + (x - a1) * (b2 - b1) / (a2 - a1); - -} - -// https://www.gamedev.net/tutorials/programming/general-and-gameplay-programming/inverse-lerp-a-super-useful-yet-often-overlooked-function-r5230/ -function inverseLerp(x, y, value) { - - if (x !== y) { - - return (value - x) / (y - x); - - } else { - - return 0; - - } - -} - -// https://en.wikipedia.org/wiki/Linear_interpolation -function lerp(x, y, t) { - - return (1 - t) * x + t * y; - -} - -// http://www.rorydriscoll.com/2016/03/07/frame-rate-independent-damping-using-lerp/ -function damp(x, y, lambda, dt) { - - return lerp(x, y, 1 - Math.exp(- lambda * dt)); - -} - -// https://www.desmos.com/calculator/vcsjnyz7x4 -function pingpong(x, length = 1) { - - return length - Math.abs(euclideanModulo(x, length * 2) - length); - -} - -// http://en.wikipedia.org/wiki/Smoothstep -function smoothstep(x, min, max) { - - if (x <= min) return 0; - if (x >= max) return 1; - - x = (x - min) / (max - min); - - return x * x * (3 - 2 * x); - -} - -function smootherstep(x, min, max) { - - if (x <= min) return 0; - if (x >= max) return 1; - - x = (x - min) / (max - min); - - return x * x * x * (x * (x * 6 - 15) + 10); - -} - -// Random integer from interval -function randInt(low, high) { - - return low + Math.floor(Math.random() * (high - low + 1)); - -} - -// Random float from interval -function randFloat(low, high) { - - return low + Math.random() * (high - low); - -} - -// Random float from <-range/2, range/2> interval -function randFloatSpread(range) { - - return range * (0.5 - Math.random()); - -} - -// Deterministic pseudo-random float in the interval [ 0, 1 ] -function seededRandom(s) { - - if (s !== undefined) _seed = s; - - // Mulberry32 generator - - let t = _seed += 0x6D2B79F5; - - t = Math.imul(t ^ t >>> 15, t | 1); - - t ^= t + Math.imul(t ^ t >>> 7, t | 61); - - return ((t ^ t >>> 14) >>> 0) / 4294967296; - -} - -function degToRad(degrees) { - - return degrees * DEG2RAD; - -} - -function radToDeg(radians) { - - return radians * RAD2DEG; - -} - -function isPowerOfTwo(value) { - - return (value & (value - 1)) === 0 && value !== 0; - -} - -function ceilPowerOfTwo(value) { - - return Math.pow(2, Math.ceil(Math.log(value) / Math.LN2)); - -} - -function floorPowerOfTwo(value) { - - return Math.pow(2, Math.floor(Math.log(value) / Math.LN2)); - -} - -function setQuaternionFromProperEuler(q, a, b, c, order) { - - // Intrinsic Proper Euler Angles - see https://en.wikipedia.org/wiki/Euler_angles - - // rotations are applied to the axes in the order specified by 'order' - // rotation by angle 'a' is applied first, then by angle 'b', then by angle 'c' - // angles are in radians - - const cos = Math.cos; - const sin = Math.sin; - - const c2 = cos(b / 2); - const s2 = sin(b / 2); - - const c13 = cos((a + c) / 2); - const s13 = sin((a + c) / 2); - - const c1_3 = cos((a - c) / 2); - const s1_3 = sin((a - c) / 2); - - const c3_1 = cos((c - a) / 2); - const s3_1 = sin((c - a) / 2); - - switch (order) { - - case 'XYX': - q.set(c2 * s13, s2 * c1_3, s2 * s1_3, c2 * c13); - break; - - case 'YZY': - q.set(s2 * s1_3, c2 * s13, s2 * c1_3, c2 * c13); - break; - - case 'ZXZ': - q.set(s2 * c1_3, s2 * 
s1_3, c2 * s13, c2 * c13); - break; - - case 'XZX': - q.set(c2 * s13, s2 * s3_1, s2 * c3_1, c2 * c13); - break; - - case 'YXY': - q.set(s2 * c3_1, c2 * s13, s2 * s3_1, c2 * c13); - break; - - case 'ZYZ': - q.set(s2 * s3_1, s2 * c3_1, c2 * s13, c2 * c13); - break; - - default: - console.warn('THREE.MathUtils: .setQuaternionFromProperEuler() encountered an unknown order: ' + order); - - } - -} - -function denormalize(value, array) { - - switch (array.constructor) { - - case Float32Array: - - return value; - - case Uint16Array: - - return value / 65535.0; - - case Uint8Array: - - return value / 255.0; - - case Int16Array: - - return Math.max(value / 32767.0, - 1.0); - - case Int8Array: - - return Math.max(value / 127.0, - 1.0); - - default: - - throw new Error('Invalid component type.'); - - } - -} - -function normalize(value, array) { - - switch (array.constructor) { - - case Float32Array: - - return value; - - case Uint16Array: - - return Math.round(value * 65535.0); - - case Uint8Array: - - return Math.round(value * 255.0); - - case Int16Array: - - return Math.round(value * 32767.0); - - case Int8Array: - - return Math.round(value * 127.0); - - default: - - throw new Error('Invalid component type.'); - - } - -} - -var MathUtils = /*#__PURE__*/Object.freeze({ - __proto__: null, - DEG2RAD: DEG2RAD, - RAD2DEG: RAD2DEG, - ceilPowerOfTwo: ceilPowerOfTwo, - clamp: clamp, - damp: damp, - degToRad: degToRad, - denormalize: denormalize, - euclideanModulo: euclideanModulo, - floorPowerOfTwo: floorPowerOfTwo, - generateUUID: generateUUID, - inverseLerp: inverseLerp, - isPowerOfTwo: isPowerOfTwo, - lerp: lerp, - mapLinear: mapLinear, - normalize: normalize, - pingpong: pingpong, - radToDeg: radToDeg, - randFloat: randFloat, - randFloatSpread: randFloatSpread, - randInt: randInt, - seededRandom: seededRandom, - setQuaternionFromProperEuler: setQuaternionFromProperEuler, - smootherstep: smootherstep, - smoothstep: smoothstep -}); - -class Vector2 { - - constructor(x = 0, y = 0) { - - Vector2.prototype.isVector2 = true; - - this.x = x; - this.y = y; - - } - - get width() { - - return this.x; - - } - - set width(value) { - - this.x = value; - - } - - get height() { - - return this.y; - - } - - set height(value) { - - this.y = value; - - } - - set(x, y) { - - this.x = x; - this.y = y; - - return this; - - } - - setScalar(scalar) { - - this.x = scalar; - this.y = scalar; - - return this; - - } - - setX(x) { - - this.x = x; - - return this; - - } - - setY(y) { - - this.y = y; - - return this; - - } - - setComponent(index, value) { - - switch (index) { - - case 0: this.x = value; break; - case 1: this.y = value; break; - default: throw new Error('index is out of range: ' + index); - - } - - return this; - - } - - getComponent(index) { - - switch (index) { - - case 0: return this.x; - case 1: return this.y; - default: throw new Error('index is out of range: ' + index); - - } - - } - - clone() { - - return new this.constructor(this.x, this.y); - - } - - copy(v) { - - this.x = v.x; - this.y = v.y; - - return this; - - } - - add(v) { - - this.x += v.x; - this.y += v.y; - - return this; - - } - - addScalar(s) { - - this.x += s; - this.y += s; - - return this; - - } - - addVectors(a, b) { - - this.x = a.x + b.x; - this.y = a.y + b.y; - - return this; - - } - - addScaledVector(v, s) { - - this.x += v.x * s; - this.y += v.y * s; - - return this; - - } - - sub(v) { - - this.x -= v.x; - this.y -= v.y; - - return this; - - } - - subScalar(s) { - - this.x -= s; - this.y -= s; - - return this; - - } - - subVectors(a, b) { 
- - this.x = a.x - b.x; - this.y = a.y - b.y; - - return this; - - } - - multiply(v) { - - this.x *= v.x; - this.y *= v.y; - - return this; - - } - - multiplyScalar(scalar) { - - this.x *= scalar; - this.y *= scalar; - - return this; - - } - - divide(v) { - - this.x /= v.x; - this.y /= v.y; - - return this; - - } - - divideScalar(scalar) { - - return this.multiplyScalar(1 / scalar); - - } - - applyMatrix3(m) { - - const x = this.x, y = this.y; - const e = m.elements; - - this.x = e[0] * x + e[3] * y + e[6]; - this.y = e[1] * x + e[4] * y + e[7]; - - return this; - - } - - min(v) { - - this.x = Math.min(this.x, v.x); - this.y = Math.min(this.y, v.y); - - return this; - - } - - max(v) { - - this.x = Math.max(this.x, v.x); - this.y = Math.max(this.y, v.y); - - return this; - - } - - clamp(min, max) { - - // assumes min < max, componentwise - - this.x = Math.max(min.x, Math.min(max.x, this.x)); - this.y = Math.max(min.y, Math.min(max.y, this.y)); - - return this; - - } - - clampScalar(minVal, maxVal) { - - this.x = Math.max(minVal, Math.min(maxVal, this.x)); - this.y = Math.max(minVal, Math.min(maxVal, this.y)); - - return this; - - } - - clampLength(min, max) { - - const length = this.length(); - - return this.divideScalar(length || 1).multiplyScalar(Math.max(min, Math.min(max, length))); - - } - - floor() { - - this.x = Math.floor(this.x); - this.y = Math.floor(this.y); - - return this; - - } - - ceil() { - - this.x = Math.ceil(this.x); - this.y = Math.ceil(this.y); - - return this; - - } - - round() { - - this.x = Math.round(this.x); - this.y = Math.round(this.y); - - return this; - - } - - roundToZero() { - - this.x = (this.x < 0) ? Math.ceil(this.x) : Math.floor(this.x); - this.y = (this.y < 0) ? Math.ceil(this.y) : Math.floor(this.y); - - return this; - - } - - negate() { - - this.x = - this.x; - this.y = - this.y; - - return this; - - } - - dot(v) { - - return this.x * v.x + this.y * v.y; - - } - - cross(v) { - - return this.x * v.y - this.y * v.x; - - } - - lengthSq() { - - return this.x * this.x + this.y * this.y; - - } - - length() { - - return Math.sqrt(this.x * this.x + this.y * this.y); - - } - - manhattanLength() { - - return Math.abs(this.x) + Math.abs(this.y); - - } - - normalize() { - - return this.divideScalar(this.length() || 1); - - } - - angle() { - - // computes the angle in radians with respect to the positive x-axis - - const angle = Math.atan2(- this.y, - this.x) + Math.PI; - - return angle; - - } - - distanceTo(v) { - - return Math.sqrt(this.distanceToSquared(v)); - - } - - distanceToSquared(v) { - - const dx = this.x - v.x, dy = this.y - v.y; - return dx * dx + dy * dy; - - } - - manhattanDistanceTo(v) { - - return Math.abs(this.x - v.x) + Math.abs(this.y - v.y); - - } - - setLength(length) { - - return this.normalize().multiplyScalar(length); - - } - - lerp(v, alpha) { - - this.x += (v.x - this.x) * alpha; - this.y += (v.y - this.y) * alpha; - - return this; - - } - - lerpVectors(v1, v2, alpha) { - - this.x = v1.x + (v2.x - v1.x) * alpha; - this.y = v1.y + (v2.y - v1.y) * alpha; - - return this; - - } - - equals(v) { - - return ((v.x === this.x) && (v.y === this.y)); - - } - - fromArray(array, offset = 0) { - - this.x = array[offset]; - this.y = array[offset + 1]; - - return this; - - } - - toArray(array = [], offset = 0) { - - array[offset] = this.x; - array[offset + 1] = this.y; - - return array; - - } - - fromBufferAttribute(attribute, index) { - - this.x = attribute.getX(index); - this.y = attribute.getY(index); - - return this; - - } - - rotateAround(center, 
angle) { - - const c = Math.cos(angle), s = Math.sin(angle); - - const x = this.x - center.x; - const y = this.y - center.y; - - this.x = x * c - y * s + center.x; - this.y = x * s + y * c + center.y; - - return this; - - } - - random() { - - this.x = Math.random(); - this.y = Math.random(); - - return this; - - } - - *[Symbol.iterator]() { - - yield this.x; - yield this.y; - - } - -} - -class Matrix3 { - - constructor() { - - Matrix3.prototype.isMatrix3 = true; - - this.elements = [ - - 1, 0, 0, - 0, 1, 0, - 0, 0, 1 - - ]; - - } - - set(n11, n12, n13, n21, n22, n23, n31, n32, n33) { - - const te = this.elements; - - te[0] = n11; te[1] = n21; te[2] = n31; - te[3] = n12; te[4] = n22; te[5] = n32; - te[6] = n13; te[7] = n23; te[8] = n33; - - return this; - - } - - identity() { - - this.set( - - 1, 0, 0, - 0, 1, 0, - 0, 0, 1 - - ); - - return this; - - } - - copy(m) { - - const te = this.elements; - const me = m.elements; - - te[0] = me[0]; te[1] = me[1]; te[2] = me[2]; - te[3] = me[3]; te[4] = me[4]; te[5] = me[5]; - te[6] = me[6]; te[7] = me[7]; te[8] = me[8]; - - return this; - - } - - extractBasis(xAxis, yAxis, zAxis) { - - xAxis.setFromMatrix3Column(this, 0); - yAxis.setFromMatrix3Column(this, 1); - zAxis.setFromMatrix3Column(this, 2); - - return this; - - } - - setFromMatrix4(m) { - - const me = m.elements; - - this.set( - - me[0], me[4], me[8], - me[1], me[5], me[9], - me[2], me[6], me[10] - - ); - - return this; - - } - - multiply(m) { - - return this.multiplyMatrices(this, m); - - } - - premultiply(m) { - - return this.multiplyMatrices(m, this); - - } - - multiplyMatrices(a, b) { - - const ae = a.elements; - const be = b.elements; - const te = this.elements; - - const a11 = ae[0], a12 = ae[3], a13 = ae[6]; - const a21 = ae[1], a22 = ae[4], a23 = ae[7]; - const a31 = ae[2], a32 = ae[5], a33 = ae[8]; - - const b11 = be[0], b12 = be[3], b13 = be[6]; - const b21 = be[1], b22 = be[4], b23 = be[7]; - const b31 = be[2], b32 = be[5], b33 = be[8]; - - te[0] = a11 * b11 + a12 * b21 + a13 * b31; - te[3] = a11 * b12 + a12 * b22 + a13 * b32; - te[6] = a11 * b13 + a12 * b23 + a13 * b33; - - te[1] = a21 * b11 + a22 * b21 + a23 * b31; - te[4] = a21 * b12 + a22 * b22 + a23 * b32; - te[7] = a21 * b13 + a22 * b23 + a23 * b33; - - te[2] = a31 * b11 + a32 * b21 + a33 * b31; - te[5] = a31 * b12 + a32 * b22 + a33 * b32; - te[8] = a31 * b13 + a32 * b23 + a33 * b33; - - return this; - - } - - multiplyScalar(s) { - - const te = this.elements; - - te[0] *= s; te[3] *= s; te[6] *= s; - te[1] *= s; te[4] *= s; te[7] *= s; - te[2] *= s; te[5] *= s; te[8] *= s; - - return this; - - } - - determinant() { - - const te = this.elements; - - const a = te[0], b = te[1], c = te[2], - d = te[3], e = te[4], f = te[5], - g = te[6], h = te[7], i = te[8]; - - return a * e * i - a * f * h - b * d * i + b * f * g + c * d * h - c * e * g; - - } - - invert() { - - const te = this.elements, - - n11 = te[0], n21 = te[1], n31 = te[2], - n12 = te[3], n22 = te[4], n32 = te[5], - n13 = te[6], n23 = te[7], n33 = te[8], - - t11 = n33 * n22 - n32 * n23, - t12 = n32 * n13 - n33 * n12, - t13 = n23 * n12 - n22 * n13, - - det = n11 * t11 + n21 * t12 + n31 * t13; - - if (det === 0) return this.set(0, 0, 0, 0, 0, 0, 0, 0, 0); - - const detInv = 1 / det; - - te[0] = t11 * detInv; - te[1] = (n31 * n23 - n33 * n21) * detInv; - te[2] = (n32 * n21 - n31 * n22) * detInv; - - te[3] = t12 * detInv; - te[4] = (n33 * n11 - n31 * n13) * detInv; - te[5] = (n31 * n12 - n32 * n11) * detInv; - - te[6] = t13 * detInv; - te[7] = (n21 * n13 - n23 * n11) * 
detInv; - te[8] = (n22 * n11 - n21 * n12) * detInv; - - return this; - - } - - transpose() { - - let tmp; - const m = this.elements; - - tmp = m[1]; m[1] = m[3]; m[3] = tmp; - tmp = m[2]; m[2] = m[6]; m[6] = tmp; - tmp = m[5]; m[5] = m[7]; m[7] = tmp; - - return this; - - } - - getNormalMatrix(matrix4) { - - return this.setFromMatrix4(matrix4).invert().transpose(); - - } - - transposeIntoArray(r) { - - const m = this.elements; - - r[0] = m[0]; - r[1] = m[3]; - r[2] = m[6]; - r[3] = m[1]; - r[4] = m[4]; - r[5] = m[7]; - r[6] = m[2]; - r[7] = m[5]; - r[8] = m[8]; - - return this; - - } - - setUvTransform(tx, ty, sx, sy, rotation, cx, cy) { - - const c = Math.cos(rotation); - const s = Math.sin(rotation); - - this.set( - sx * c, sx * s, - sx * (c * cx + s * cy) + cx + tx, - - sy * s, sy * c, - sy * (- s * cx + c * cy) + cy + ty, - 0, 0, 1 - ); - - return this; - - } - - // - - scale(sx, sy) { - - this.premultiply(_m3.makeScale(sx, sy)); - - return this; - - } - - rotate(theta) { - - this.premultiply(_m3.makeRotation(- theta)); - - return this; - - } - - translate(tx, ty) { - - this.premultiply(_m3.makeTranslation(tx, ty)); - - return this; - - } - - // for 2D Transforms - - makeTranslation(x, y) { - - this.set( - - 1, 0, x, - 0, 1, y, - 0, 0, 1 - - ); - - return this; - - } - - makeRotation(theta) { - - // counterclockwise - - const c = Math.cos(theta); - const s = Math.sin(theta); - - this.set( - - c, - s, 0, - s, c, 0, - 0, 0, 1 - - ); - - return this; - - } - - makeScale(x, y) { - - this.set( - - x, 0, 0, - 0, y, 0, - 0, 0, 1 - - ); - - return this; - - } - - // - - equals(matrix) { - - const te = this.elements; - const me = matrix.elements; - - for (let i = 0; i < 9; i++) { - - if (te[i] !== me[i]) return false; - - } - - return true; - - } - - fromArray(array, offset = 0) { - - for (let i = 0; i < 9; i++) { - - this.elements[i] = array[i + offset]; - - } - - return this; - - } - - toArray(array = [], offset = 0) { - - const te = this.elements; - - array[offset] = te[0]; - array[offset + 1] = te[1]; - array[offset + 2] = te[2]; - - array[offset + 3] = te[3]; - array[offset + 4] = te[4]; - array[offset + 5] = te[5]; - - array[offset + 6] = te[6]; - array[offset + 7] = te[7]; - array[offset + 8] = te[8]; - - return array; - - } - - clone() { - - return new this.constructor().fromArray(this.elements); - - } - -} - -const _m3 = /*@__PURE__*/ new Matrix3(); - -function arrayNeedsUint32(array) { - - // assumes larger values usually on last - - for (let i = array.length - 1; i >= 0; --i) { - - if (array[i] >= 65535) return true; // account for PRIMITIVE_RESTART_FIXED_INDEX, #24565 - - } - - return false; - -} - -const TYPED_ARRAYS = { - Int8Array: Int8Array, - Uint8Array: Uint8Array, - Uint8ClampedArray: Uint8ClampedArray, - Int16Array: Int16Array, - Uint16Array: Uint16Array, - Int32Array: Int32Array, - Uint32Array: Uint32Array, - Float32Array: Float32Array, - Float64Array: Float64Array -}; - -function getTypedArray(type, buffer) { - - return new TYPED_ARRAYS[type](buffer); - -} - -function createElementNS(name) { - - return document.createElementNS('http://www.w3.org/1999/xhtml', name); - -} - -function SRGBToLinear(c) { - - return (c < 0.04045) ? c * 0.0773993808 : Math.pow(c * 0.9478672986 + 0.0521327014, 2.4); - -} - -function LinearToSRGB(c) { - - return (c < 0.0031308) ? c * 12.92 : 1.055 * (Math.pow(c, 0.41666)) - 0.055; - -} - -// JavaScript RGB-to-RGB transforms, defined as -// FN[InputColorSpace][OutputColorSpace] callback functions. 
-const FN = { - [SRGBColorSpace]: { [LinearSRGBColorSpace]: SRGBToLinear }, - [LinearSRGBColorSpace]: { [SRGBColorSpace]: LinearToSRGB }, -}; - -const ColorManagement = { - - legacyMode: true, - - get workingColorSpace() { - - return LinearSRGBColorSpace; - - }, - - set workingColorSpace(colorSpace) { - - console.warn('THREE.ColorManagement: .workingColorSpace is readonly.'); - - }, - - convert: function (color, sourceColorSpace, targetColorSpace) { - - if (this.legacyMode || sourceColorSpace === targetColorSpace || !sourceColorSpace || !targetColorSpace) { - - return color; - - } - - if (FN[sourceColorSpace] && FN[sourceColorSpace][targetColorSpace] !== undefined) { - - const fn = FN[sourceColorSpace][targetColorSpace]; - - color.r = fn(color.r); - color.g = fn(color.g); - color.b = fn(color.b); - - return color; - - } - - throw new Error('Unsupported color space conversion.'); - - }, - - fromWorkingColorSpace: function (color, targetColorSpace) { - - return this.convert(color, this.workingColorSpace, targetColorSpace); - - }, - - toWorkingColorSpace: function (color, sourceColorSpace) { - - return this.convert(color, sourceColorSpace, this.workingColorSpace); - - }, - -}; - -const _colorKeywords = { - 'aliceblue': 0xF0F8FF, 'antiquewhite': 0xFAEBD7, 'aqua': 0x00FFFF, 'aquamarine': 0x7FFFD4, 'azure': 0xF0FFFF, - 'beige': 0xF5F5DC, 'bisque': 0xFFE4C4, 'black': 0x000000, 'blanchedalmond': 0xFFEBCD, 'blue': 0x0000FF, 'blueviolet': 0x8A2BE2, - 'brown': 0xA52A2A, 'burlywood': 0xDEB887, 'cadetblue': 0x5F9EA0, 'chartreuse': 0x7FFF00, 'chocolate': 0xD2691E, 'coral': 0xFF7F50, - 'cornflowerblue': 0x6495ED, 'cornsilk': 0xFFF8DC, 'crimson': 0xDC143C, 'cyan': 0x00FFFF, 'darkblue': 0x00008B, 'darkcyan': 0x008B8B, - 'darkgoldenrod': 0xB8860B, 'darkgray': 0xA9A9A9, 'darkgreen': 0x006400, 'darkgrey': 0xA9A9A9, 'darkkhaki': 0xBDB76B, 'darkmagenta': 0x8B008B, - 'darkolivegreen': 0x556B2F, 'darkorange': 0xFF8C00, 'darkorchid': 0x9932CC, 'darkred': 0x8B0000, 'darksalmon': 0xE9967A, 'darkseagreen': 0x8FBC8F, - 'darkslateblue': 0x483D8B, 'darkslategray': 0x2F4F4F, 'darkslategrey': 0x2F4F4F, 'darkturquoise': 0x00CED1, 'darkviolet': 0x9400D3, - 'deeppink': 0xFF1493, 'deepskyblue': 0x00BFFF, 'dimgray': 0x696969, 'dimgrey': 0x696969, 'dodgerblue': 0x1E90FF, 'firebrick': 0xB22222, - 'floralwhite': 0xFFFAF0, 'forestgreen': 0x228B22, 'fuchsia': 0xFF00FF, 'gainsboro': 0xDCDCDC, 'ghostwhite': 0xF8F8FF, 'gold': 0xFFD700, - 'goldenrod': 0xDAA520, 'gray': 0x808080, 'green': 0x008000, 'greenyellow': 0xADFF2F, 'grey': 0x808080, 'honeydew': 0xF0FFF0, 'hotpink': 0xFF69B4, - 'indianred': 0xCD5C5C, 'indigo': 0x4B0082, 'ivory': 0xFFFFF0, 'khaki': 0xF0E68C, 'lavender': 0xE6E6FA, 'lavenderblush': 0xFFF0F5, 'lawngreen': 0x7CFC00, - 'lemonchiffon': 0xFFFACD, 'lightblue': 0xADD8E6, 'lightcoral': 0xF08080, 'lightcyan': 0xE0FFFF, 'lightgoldenrodyellow': 0xFAFAD2, 'lightgray': 0xD3D3D3, - 'lightgreen': 0x90EE90, 'lightgrey': 0xD3D3D3, 'lightpink': 0xFFB6C1, 'lightsalmon': 0xFFA07A, 'lightseagreen': 0x20B2AA, 'lightskyblue': 0x87CEFA, - 'lightslategray': 0x778899, 'lightslategrey': 0x778899, 'lightsteelblue': 0xB0C4DE, 'lightyellow': 0xFFFFE0, 'lime': 0x00FF00, 'limegreen': 0x32CD32, - 'linen': 0xFAF0E6, 'magenta': 0xFF00FF, 'maroon': 0x800000, 'mediumaquamarine': 0x66CDAA, 'mediumblue': 0x0000CD, 'mediumorchid': 0xBA55D3, - 'mediumpurple': 0x9370DB, 'mediumseagreen': 0x3CB371, 'mediumslateblue': 0x7B68EE, 'mediumspringgreen': 0x00FA9A, 'mediumturquoise': 0x48D1CC, - 'mediumvioletred': 0xC71585, 'midnightblue': 0x191970, 'mintcream': 
0xF5FFFA, 'mistyrose': 0xFFE4E1, 'moccasin': 0xFFE4B5, 'navajowhite': 0xFFDEAD, - 'navy': 0x000080, 'oldlace': 0xFDF5E6, 'olive': 0x808000, 'olivedrab': 0x6B8E23, 'orange': 0xFFA500, 'orangered': 0xFF4500, 'orchid': 0xDA70D6, - 'palegoldenrod': 0xEEE8AA, 'palegreen': 0x98FB98, 'paleturquoise': 0xAFEEEE, 'palevioletred': 0xDB7093, 'papayawhip': 0xFFEFD5, 'peachpuff': 0xFFDAB9, - 'peru': 0xCD853F, 'pink': 0xFFC0CB, 'plum': 0xDDA0DD, 'powderblue': 0xB0E0E6, 'purple': 0x800080, 'rebeccapurple': 0x663399, 'red': 0xFF0000, 'rosybrown': 0xBC8F8F, - 'royalblue': 0x4169E1, 'saddlebrown': 0x8B4513, 'salmon': 0xFA8072, 'sandybrown': 0xF4A460, 'seagreen': 0x2E8B57, 'seashell': 0xFFF5EE, - 'sienna': 0xA0522D, 'silver': 0xC0C0C0, 'skyblue': 0x87CEEB, 'slateblue': 0x6A5ACD, 'slategray': 0x708090, 'slategrey': 0x708090, 'snow': 0xFFFAFA, - 'springgreen': 0x00FF7F, 'steelblue': 0x4682B4, 'tan': 0xD2B48C, 'teal': 0x008080, 'thistle': 0xD8BFD8, 'tomato': 0xFF6347, 'turquoise': 0x40E0D0, - 'violet': 0xEE82EE, 'wheat': 0xF5DEB3, 'white': 0xFFFFFF, 'whitesmoke': 0xF5F5F5, 'yellow': 0xFFFF00, 'yellowgreen': 0x9ACD32 -}; - -const _rgb$1 = { r: 0, g: 0, b: 0 }; -const _hslA = { h: 0, s: 0, l: 0 }; -const _hslB = { h: 0, s: 0, l: 0 }; - -function hue2rgb(p, q, t) { - - if (t < 0) t += 1; - if (t > 1) t -= 1; - if (t < 1 / 6) return p + (q - p) * 6 * t; - if (t < 1 / 2) return q; - if (t < 2 / 3) return p + (q - p) * 6 * (2 / 3 - t); - return p; - -} - -function toComponents(source, target) { - - target.r = source.r; - target.g = source.g; - target.b = source.b; - - return target; - -} - -class Color { - - constructor(r, g, b) { - - this.isColor = true; - - this.r = 1; - this.g = 1; - this.b = 1; - - if (g === undefined && b === undefined) { - - // r is THREE.Color, hex or string - return this.set(r); - - } - - return this.setRGB(r, g, b); - - } - - set(value) { - - if (value && value.isColor) { - - this.copy(value); - - } else if (typeof value === 'number') { - - this.setHex(value); - - } else if (typeof value === 'string') { - - this.setStyle(value); - - } - - return this; - - } - - setScalar(scalar) { - - this.r = scalar; - this.g = scalar; - this.b = scalar; - - return this; - - } - - setHex(hex, colorSpace = SRGBColorSpace) { - - hex = Math.floor(hex); - - this.r = (hex >> 16 & 255) / 255; - this.g = (hex >> 8 & 255) / 255; - this.b = (hex & 255) / 255; - - ColorManagement.toWorkingColorSpace(this, colorSpace); - - return this; - - } - - setRGB(r, g, b, colorSpace = ColorManagement.workingColorSpace) { - - this.r = r; - this.g = g; - this.b = b; - - ColorManagement.toWorkingColorSpace(this, colorSpace); - - return this; - - } - - setHSL(h, s, l, colorSpace = ColorManagement.workingColorSpace) { - - // h,s,l ranges are in 0.0 - 1.0 - h = euclideanModulo(h, 1); - s = clamp(s, 0, 1); - l = clamp(l, 0, 1); - - if (s === 0) { - - this.r = this.g = this.b = l; - - } else { - - const p = l <= 0.5 ? 
l * (1 + s) : l + s - (l * s); - const q = (2 * l) - p; - - this.r = hue2rgb(q, p, h + 1 / 3); - this.g = hue2rgb(q, p, h); - this.b = hue2rgb(q, p, h - 1 / 3); - - } - - ColorManagement.toWorkingColorSpace(this, colorSpace); - - return this; - - } - - setStyle(style, colorSpace = SRGBColorSpace) { - - function handleAlpha(string) { - - if (string === undefined) return; - - if (parseFloat(string) < 1) { - - console.warn('THREE.Color: Alpha component of ' + style + ' will be ignored.'); - - } - - } - - - let m; - - if (m = /^((?:rgb|hsl)a?)\(([^\)]*)\)/.exec(style)) { - - // rgb / hsl - - let color; - const name = m[1]; - const components = m[2]; - - switch (name) { - - case 'rgb': - case 'rgba': - - if (color = /^\s*(\d+)\s*,\s*(\d+)\s*,\s*(\d+)\s*(?:,\s*(\d*\.?\d+)\s*)?$/.exec(components)) { - - // rgb(255,0,0) rgba(255,0,0,0.5) - this.r = Math.min(255, parseInt(color[1], 10)) / 255; - this.g = Math.min(255, parseInt(color[2], 10)) / 255; - this.b = Math.min(255, parseInt(color[3], 10)) / 255; - - ColorManagement.toWorkingColorSpace(this, colorSpace); - - handleAlpha(color[4]); - - return this; - - } - - if (color = /^\s*(\d+)\%\s*,\s*(\d+)\%\s*,\s*(\d+)\%\s*(?:,\s*(\d*\.?\d+)\s*)?$/.exec(components)) { - - // rgb(100%,0%,0%) rgba(100%,0%,0%,0.5) - this.r = Math.min(100, parseInt(color[1], 10)) / 100; - this.g = Math.min(100, parseInt(color[2], 10)) / 100; - this.b = Math.min(100, parseInt(color[3], 10)) / 100; - - ColorManagement.toWorkingColorSpace(this, colorSpace); - - handleAlpha(color[4]); - - return this; - - } - - break; - - case 'hsl': - case 'hsla': - - if (color = /^\s*(\d*\.?\d+)\s*,\s*(\d*\.?\d+)\%\s*,\s*(\d*\.?\d+)\%\s*(?:,\s*(\d*\.?\d+)\s*)?$/.exec(components)) { - - // hsl(120,50%,50%) hsla(120,50%,50%,0.5) - const h = parseFloat(color[1]) / 360; - const s = parseFloat(color[2]) / 100; - const l = parseFloat(color[3]) / 100; - - handleAlpha(color[4]); - - return this.setHSL(h, s, l, colorSpace); - - } - - break; - - } - - } else if (m = /^\#([A-Fa-f\d]+)$/.exec(style)) { - - // hex color - - const hex = m[1]; - const size = hex.length; - - if (size === 3) { - - // #ff0 - this.r = parseInt(hex.charAt(0) + hex.charAt(0), 16) / 255; - this.g = parseInt(hex.charAt(1) + hex.charAt(1), 16) / 255; - this.b = parseInt(hex.charAt(2) + hex.charAt(2), 16) / 255; - - ColorManagement.toWorkingColorSpace(this, colorSpace); - - return this; - - } else if (size === 6) { - - // #ff0000 - this.r = parseInt(hex.charAt(0) + hex.charAt(1), 16) / 255; - this.g = parseInt(hex.charAt(2) + hex.charAt(3), 16) / 255; - this.b = parseInt(hex.charAt(4) + hex.charAt(5), 16) / 255; - - ColorManagement.toWorkingColorSpace(this, colorSpace); - - return this; - - } - - } - - if (style && style.length > 0) { - - return this.setColorName(style, colorSpace); - - } - - return this; - - } - - setColorName(style, colorSpace = SRGBColorSpace) { - - // color keywords - const hex = _colorKeywords[style.toLowerCase()]; - - if (hex !== undefined) { - - // red - this.setHex(hex, colorSpace); - - } else { - - // unknown color - console.warn('THREE.Color: Unknown color ' + style); - - } - - return this; - - } - - clone() { - - return new this.constructor(this.r, this.g, this.b); - - } - - copy(color) { - - this.r = color.r; - this.g = color.g; - this.b = color.b; - - return this; - - } - - copySRGBToLinear(color) { - - this.r = SRGBToLinear(color.r); - this.g = SRGBToLinear(color.g); - this.b = SRGBToLinear(color.b); - - return this; - - } - - copyLinearToSRGB(color) { - - this.r = LinearToSRGB(color.r); - this.g = 
LinearToSRGB(color.g); - this.b = LinearToSRGB(color.b); - - return this; - - } - - convertSRGBToLinear() { - - this.copySRGBToLinear(this); - - return this; - - } - - convertLinearToSRGB() { - - this.copyLinearToSRGB(this); - - return this; - - } - - getHex(colorSpace = SRGBColorSpace) { - - ColorManagement.fromWorkingColorSpace(toComponents(this, _rgb$1), colorSpace); - - return clamp(_rgb$1.r * 255, 0, 255) << 16 ^ clamp(_rgb$1.g * 255, 0, 255) << 8 ^ clamp(_rgb$1.b * 255, 0, 255) << 0; - - } - - getHexString(colorSpace = SRGBColorSpace) { - - return ('000000' + this.getHex(colorSpace).toString(16)).slice(- 6); - - } - - getHSL(target, colorSpace = ColorManagement.workingColorSpace) { - - // h,s,l ranges are in 0.0 - 1.0 - - ColorManagement.fromWorkingColorSpace(toComponents(this, _rgb$1), colorSpace); - - const r = _rgb$1.r, g = _rgb$1.g, b = _rgb$1.b; - - const max = Math.max(r, g, b); - const min = Math.min(r, g, b); - - let hue, saturation; - const lightness = (min + max) / 2.0; - - if (min === max) { - - hue = 0; - saturation = 0; - - } else { - - const delta = max - min; - - saturation = lightness <= 0.5 ? delta / (max + min) : delta / (2 - max - min); - - switch (max) { - - case r: hue = (g - b) / delta + (g < b ? 6 : 0); break; - case g: hue = (b - r) / delta + 2; break; - case b: hue = (r - g) / delta + 4; break; - - } - - hue /= 6; - - } - - target.h = hue; - target.s = saturation; - target.l = lightness; - - return target; - - } - - getRGB(target, colorSpace = ColorManagement.workingColorSpace) { - - ColorManagement.fromWorkingColorSpace(toComponents(this, _rgb$1), colorSpace); - - target.r = _rgb$1.r; - target.g = _rgb$1.g; - target.b = _rgb$1.b; - - return target; - - } - - getStyle(colorSpace = SRGBColorSpace) { - - ColorManagement.fromWorkingColorSpace(toComponents(this, _rgb$1), colorSpace); - - if (colorSpace !== SRGBColorSpace) { - - // Requires CSS Color Module Level 4 (https://www.w3.org/TR/css-color-4/). 
- return `color(${colorSpace} ${_rgb$1.r} ${_rgb$1.g} ${_rgb$1.b})`; - - } - - return `rgb(${(_rgb$1.r * 255) | 0},${(_rgb$1.g * 255) | 0},${(_rgb$1.b * 255) | 0})`; - - } - - offsetHSL(h, s, l) { - - this.getHSL(_hslA); - - _hslA.h += h; _hslA.s += s; _hslA.l += l; - - this.setHSL(_hslA.h, _hslA.s, _hslA.l); - - return this; - - } - - add(color) { - - this.r += color.r; - this.g += color.g; - this.b += color.b; - - return this; - - } - - addColors(color1, color2) { - - this.r = color1.r + color2.r; - this.g = color1.g + color2.g; - this.b = color1.b + color2.b; - - return this; - - } - - addScalar(s) { - - this.r += s; - this.g += s; - this.b += s; - - return this; - - } - - sub(color) { - - this.r = Math.max(0, this.r - color.r); - this.g = Math.max(0, this.g - color.g); - this.b = Math.max(0, this.b - color.b); - - return this; - - } - - multiply(color) { - - this.r *= color.r; - this.g *= color.g; - this.b *= color.b; - - return this; - - } - - multiplyScalar(s) { - - this.r *= s; - this.g *= s; - this.b *= s; - - return this; - - } - - lerp(color, alpha) { - - this.r += (color.r - this.r) * alpha; - this.g += (color.g - this.g) * alpha; - this.b += (color.b - this.b) * alpha; - - return this; - - } - - lerpColors(color1, color2, alpha) { - - this.r = color1.r + (color2.r - color1.r) * alpha; - this.g = color1.g + (color2.g - color1.g) * alpha; - this.b = color1.b + (color2.b - color1.b) * alpha; - - return this; - - } - - lerpHSL(color, alpha) { - - this.getHSL(_hslA); - color.getHSL(_hslB); - - const h = lerp(_hslA.h, _hslB.h, alpha); - const s = lerp(_hslA.s, _hslB.s, alpha); - const l = lerp(_hslA.l, _hslB.l, alpha); - - this.setHSL(h, s, l); - - return this; - - } - - equals(c) { - - return (c.r === this.r) && (c.g === this.g) && (c.b === this.b); - - } - - fromArray(array, offset = 0) { - - this.r = array[offset]; - this.g = array[offset + 1]; - this.b = array[offset + 2]; - - return this; - - } - - toArray(array = [], offset = 0) { - - array[offset] = this.r; - array[offset + 1] = this.g; - array[offset + 2] = this.b; - - return array; - - } - - fromBufferAttribute(attribute, index) { - - this.r = attribute.getX(index); - this.g = attribute.getY(index); - this.b = attribute.getZ(index); - - return this; - - } - - toJSON() { - - return this.getHex(); - - } - - *[Symbol.iterator]() { - - yield this.r; - yield this.g; - yield this.b; - - } - -} - -Color.NAMES = _colorKeywords; - -let _canvas; - -class ImageUtils { - - static getDataURL(image) { - - if (/^data:/i.test(image.src)) { - - return image.src; - - } - - if (typeof HTMLCanvasElement == 'undefined') { - - return image.src; - - } - - let canvas; - - if (image instanceof HTMLCanvasElement) { - - canvas = image; - - } else { - - if (_canvas === undefined) _canvas = createElementNS('canvas'); - - _canvas.width = image.width; - _canvas.height = image.height; - - const context = _canvas.getContext('2d'); - - if (image instanceof ImageData) { - - context.putImageData(image, 0, 0); - - } else { - - context.drawImage(image, 0, 0, image.width, image.height); - - } - - canvas = _canvas; - - } - - if (canvas.width > 2048 || canvas.height > 2048) { - - console.warn('THREE.ImageUtils.getDataURL: Image converted to jpg for performance reasons', image); - - return canvas.toDataURL('image/jpeg', 0.6); - - } else { - - return canvas.toDataURL('image/png'); - - } - - } - - static sRGBToLinear(image) { - - if ((typeof HTMLImageElement !== 'undefined' && image instanceof HTMLImageElement) || - (typeof HTMLCanvasElement !== 'undefined' && image 
instanceof HTMLCanvasElement) || - (typeof ImageBitmap !== 'undefined' && image instanceof ImageBitmap)) { - - const canvas = createElementNS('canvas'); - - canvas.width = image.width; - canvas.height = image.height; - - const context = canvas.getContext('2d'); - context.drawImage(image, 0, 0, image.width, image.height); - - const imageData = context.getImageData(0, 0, image.width, image.height); - const data = imageData.data; - - for (let i = 0; i < data.length; i++) { - - data[i] = SRGBToLinear(data[i] / 255) * 255; - - } - - context.putImageData(imageData, 0, 0); - - return canvas; - - } else if (image.data) { - - const data = image.data.slice(0); - - for (let i = 0; i < data.length; i++) { - - if (data instanceof Uint8Array || data instanceof Uint8ClampedArray) { - - data[i] = Math.floor(SRGBToLinear(data[i] / 255) * 255); - - } else { - - // assuming float - - data[i] = SRGBToLinear(data[i]); - - } - - } - - return { - data: data, - width: image.width, - height: image.height - }; - - } else { - - console.warn('THREE.ImageUtils.sRGBToLinear(): Unsupported image type. No color space conversion applied.'); - return image; - - } - - } - -} - -class Source { - - constructor(data = null) { - - this.isSource = true; - - this.uuid = generateUUID(); - - this.data = data; - - this.version = 0; - - } - - set needsUpdate(value) { - - if (value === true) this.version++; - - } - - toJSON(meta) { - - const isRootObject = (meta === undefined || typeof meta === 'string'); - - if (!isRootObject && meta.images[this.uuid] !== undefined) { - - return meta.images[this.uuid]; - - } - - const output = { - uuid: this.uuid, - url: '' - }; - - const data = this.data; - - if (data !== null) { - - let url; - - if (Array.isArray(data)) { - - // cube texture - - url = []; - - for (let i = 0, l = data.length; i < l; i++) { - - if (data[i].isDataTexture) { - - url.push(serializeImage(data[i].image)); - - } else { - - url.push(serializeImage(data[i])); - - } - - } - - } else { - - // texture - - url = serializeImage(data); - - } - - output.url = url; - - } - - if (!isRootObject) { - - meta.images[this.uuid] = output; - - } - - return output; - - } - -} - -function serializeImage(image) { - - if ((typeof HTMLImageElement !== 'undefined' && image instanceof HTMLImageElement) || - (typeof HTMLCanvasElement !== 'undefined' && image instanceof HTMLCanvasElement) || - (typeof ImageBitmap !== 'undefined' && image instanceof ImageBitmap)) { - - // default images - - return ImageUtils.getDataURL(image); - - } else { - - if (image.data) { - - // images of DataTexture - - return { - data: Array.from(image.data), - width: image.width, - height: image.height, - type: image.data.constructor.name - }; - - } else { - - console.warn('THREE.Texture: Unable to serialize Texture.'); - return {}; - - } - - } - -} - -let textureId = 0; - -class Texture extends EventDispatcher { - - constructor(image = Texture.DEFAULT_IMAGE, mapping = Texture.DEFAULT_MAPPING, wrapS = ClampToEdgeWrapping, wrapT = ClampToEdgeWrapping, magFilter = LinearFilter, minFilter = LinearMipmapLinearFilter, format = RGBAFormat, type = UnsignedByteType, anisotropy = Texture.DEFAULT_ANISOTROPY, encoding = LinearEncoding) { - - super(); - - this.isTexture = true; - - Object.defineProperty(this, 'id', { value: textureId++ }); - - this.uuid = generateUUID(); - - this.name = ''; - - this.source = new Source(image); - this.mipmaps = []; - - this.mapping = mapping; - - this.wrapS = wrapS; - this.wrapT = wrapT; - - this.magFilter = magFilter; - this.minFilter = minFilter; - - 
this.anisotropy = anisotropy; - - this.format = format; - this.internalFormat = null; - this.type = type; - - this.offset = new Vector2(0, 0); - this.repeat = new Vector2(1, 1); - this.center = new Vector2(0, 0); - this.rotation = 0; - - this.matrixAutoUpdate = true; - this.matrix = new Matrix3(); - - this.generateMipmaps = true; - this.premultiplyAlpha = false; - this.flipY = true; - this.unpackAlignment = 4; // valid values: 1, 2, 4, 8 (see http://www.khronos.org/opengles/sdk/docs/man/xhtml/glPixelStorei.xml) - - // Values of encoding !== THREE.LinearEncoding only supported on map, envMap and emissiveMap. - // - // Also changing the encoding after already used by a Material will not automatically make the Material - // update. You need to explicitly call Material.needsUpdate to trigger it to recompile. - this.encoding = encoding; - - this.userData = {}; - - this.version = 0; - this.onUpdate = null; - - this.isRenderTargetTexture = false; // indicates whether a texture belongs to a render target or not - this.needsPMREMUpdate = false; // indicates whether this texture should be processed by PMREMGenerator or not (only relevant for render target textures) - - } - - get image() { - - return this.source.data; - - } - - set image(value) { - - this.source.data = value; - - } - - updateMatrix() { - - this.matrix.setUvTransform(this.offset.x, this.offset.y, this.repeat.x, this.repeat.y, this.rotation, this.center.x, this.center.y); - - } - - clone() { - - return new this.constructor().copy(this); - - } - - copy(source) { - - this.name = source.name; - - this.source = source.source; - this.mipmaps = source.mipmaps.slice(0); - - this.mapping = source.mapping; - - this.wrapS = source.wrapS; - this.wrapT = source.wrapT; - - this.magFilter = source.magFilter; - this.minFilter = source.minFilter; - - this.anisotropy = source.anisotropy; - - this.format = source.format; - this.internalFormat = source.internalFormat; - this.type = source.type; - - this.offset.copy(source.offset); - this.repeat.copy(source.repeat); - this.center.copy(source.center); - this.rotation = source.rotation; - - this.matrixAutoUpdate = source.matrixAutoUpdate; - this.matrix.copy(source.matrix); - - this.generateMipmaps = source.generateMipmaps; - this.premultiplyAlpha = source.premultiplyAlpha; - this.flipY = source.flipY; - this.unpackAlignment = source.unpackAlignment; - this.encoding = source.encoding; - - this.userData = JSON.parse(JSON.stringify(source.userData)); - - this.needsUpdate = true; - - return this; - - } - - toJSON(meta) { - - const isRootObject = (meta === undefined || typeof meta === 'string'); - - if (!isRootObject && meta.textures[this.uuid] !== undefined) { - - return meta.textures[this.uuid]; - - } - - const output = { - - metadata: { - version: 4.5, - type: 'Texture', - generator: 'Texture.toJSON' - }, - - uuid: this.uuid, - name: this.name, - - image: this.source.toJSON(meta).uuid, - - mapping: this.mapping, - - repeat: [this.repeat.x, this.repeat.y], - offset: [this.offset.x, this.offset.y], - center: [this.center.x, this.center.y], - rotation: this.rotation, - - wrap: [this.wrapS, this.wrapT], - - format: this.format, - type: this.type, - encoding: this.encoding, - - minFilter: this.minFilter, - magFilter: this.magFilter, - anisotropy: this.anisotropy, - - flipY: this.flipY, - - generateMipmaps: this.generateMipmaps, - premultiplyAlpha: this.premultiplyAlpha, - unpackAlignment: this.unpackAlignment - - }; - - if (Object.keys(this.userData).length > 0) output.userData = this.userData; - - if 
(!isRootObject) { - - meta.textures[this.uuid] = output; - - } - - return output; - - } - - dispose() { - - this.dispatchEvent({ type: 'dispose' }); - - } - - transformUv(uv) { - - if (this.mapping !== UVMapping) return uv; - - uv.applyMatrix3(this.matrix); - - if (uv.x < 0 || uv.x > 1) { - - switch (this.wrapS) { - - case RepeatWrapping: - - uv.x = uv.x - Math.floor(uv.x); - break; - - case ClampToEdgeWrapping: - - uv.x = uv.x < 0 ? 0 : 1; - break; - - case MirroredRepeatWrapping: - - if (Math.abs(Math.floor(uv.x) % 2) === 1) { - - uv.x = Math.ceil(uv.x) - uv.x; - - } else { - - uv.x = uv.x - Math.floor(uv.x); - - } - - break; - - } - - } - - if (uv.y < 0 || uv.y > 1) { - - switch (this.wrapT) { - - case RepeatWrapping: - - uv.y = uv.y - Math.floor(uv.y); - break; - - case ClampToEdgeWrapping: - - uv.y = uv.y < 0 ? 0 : 1; - break; - - case MirroredRepeatWrapping: - - if (Math.abs(Math.floor(uv.y) % 2) === 1) { - - uv.y = Math.ceil(uv.y) - uv.y; - - } else { - - uv.y = uv.y - Math.floor(uv.y); - - } - - break; - - } - - } - - if (this.flipY) { - - uv.y = 1 - uv.y; - - } - - return uv; - - } - - set needsUpdate(value) { - - if (value === true) { - - this.version++; - this.source.needsUpdate = true; - - } - - } - -} - -Texture.DEFAULT_IMAGE = null; -Texture.DEFAULT_MAPPING = UVMapping; -Texture.DEFAULT_ANISOTROPY = 1; - -class Vector4 { - - constructor(x = 0, y = 0, z = 0, w = 1) { - - Vector4.prototype.isVector4 = true; - - this.x = x; - this.y = y; - this.z = z; - this.w = w; - - } - - get width() { - - return this.z; - - } - - set width(value) { - - this.z = value; - - } - - get height() { - - return this.w; - - } - - set height(value) { - - this.w = value; - - } - - set(x, y, z, w) { - - this.x = x; - this.y = y; - this.z = z; - this.w = w; - - return this; - - } - - setScalar(scalar) { - - this.x = scalar; - this.y = scalar; - this.z = scalar; - this.w = scalar; - - return this; - - } - - setX(x) { - - this.x = x; - - return this; - - } - - setY(y) { - - this.y = y; - - return this; - - } - - setZ(z) { - - this.z = z; - - return this; - - } - - setW(w) { - - this.w = w; - - return this; - - } - - setComponent(index, value) { - - switch (index) { - - case 0: this.x = value; break; - case 1: this.y = value; break; - case 2: this.z = value; break; - case 3: this.w = value; break; - default: throw new Error('index is out of range: ' + index); - - } - - return this; - - } - - getComponent(index) { - - switch (index) { - - case 0: return this.x; - case 1: return this.y; - case 2: return this.z; - case 3: return this.w; - default: throw new Error('index is out of range: ' + index); - - } - - } - - clone() { - - return new this.constructor(this.x, this.y, this.z, this.w); - - } - - copy(v) { - - this.x = v.x; - this.y = v.y; - this.z = v.z; - this.w = (v.w !== undefined) ? 
v.w : 1; - - return this; - - } - - add(v) { - - this.x += v.x; - this.y += v.y; - this.z += v.z; - this.w += v.w; - - return this; - - } - - addScalar(s) { - - this.x += s; - this.y += s; - this.z += s; - this.w += s; - - return this; - - } - - addVectors(a, b) { - - this.x = a.x + b.x; - this.y = a.y + b.y; - this.z = a.z + b.z; - this.w = a.w + b.w; - - return this; - - } - - addScaledVector(v, s) { - - this.x += v.x * s; - this.y += v.y * s; - this.z += v.z * s; - this.w += v.w * s; - - return this; - - } - - sub(v) { - - this.x -= v.x; - this.y -= v.y; - this.z -= v.z; - this.w -= v.w; - - return this; - - } - - subScalar(s) { - - this.x -= s; - this.y -= s; - this.z -= s; - this.w -= s; - - return this; - - } - - subVectors(a, b) { - - this.x = a.x - b.x; - this.y = a.y - b.y; - this.z = a.z - b.z; - this.w = a.w - b.w; - - return this; - - } - - multiply(v) { - - this.x *= v.x; - this.y *= v.y; - this.z *= v.z; - this.w *= v.w; - - return this; - - } - - multiplyScalar(scalar) { - - this.x *= scalar; - this.y *= scalar; - this.z *= scalar; - this.w *= scalar; - - return this; - - } - - applyMatrix4(m) { - - const x = this.x, y = this.y, z = this.z, w = this.w; - const e = m.elements; - - this.x = e[0] * x + e[4] * y + e[8] * z + e[12] * w; - this.y = e[1] * x + e[5] * y + e[9] * z + e[13] * w; - this.z = e[2] * x + e[6] * y + e[10] * z + e[14] * w; - this.w = e[3] * x + e[7] * y + e[11] * z + e[15] * w; - - return this; - - } - - divideScalar(scalar) { - - return this.multiplyScalar(1 / scalar); - - } - - setAxisAngleFromQuaternion(q) { - - // http://www.euclideanspace.com/maths/geometry/rotations/conversions/quaternionToAngle/index.htm - - // q is assumed to be normalized - - this.w = 2 * Math.acos(q.w); - - const s = Math.sqrt(1 - q.w * q.w); - - if (s < 0.0001) { - - this.x = 1; - this.y = 0; - this.z = 0; - - } else { - - this.x = q.x / s; - this.y = q.y / s; - this.z = q.z / s; - - } - - return this; - - } - - setAxisAngleFromRotationMatrix(m) { - - // http://www.euclideanspace.com/maths/geometry/rotations/conversions/matrixToAngle/index.htm - - // assumes the upper 3x3 of m is a pure rotation matrix (i.e, unscaled) - - let angle, x, y, z; // variables for result - const epsilon = 0.01, // margin to allow for rounding errors - epsilon2 = 0.1, // margin to distinguish between 0 and 180 degrees - - te = m.elements, - - m11 = te[0], m12 = te[4], m13 = te[8], - m21 = te[1], m22 = te[5], m23 = te[9], - m31 = te[2], m32 = te[6], m33 = te[10]; - - if ((Math.abs(m12 - m21) < epsilon) && - (Math.abs(m13 - m31) < epsilon) && - (Math.abs(m23 - m32) < epsilon)) { - - // singularity found - // first check for identity matrix which must have +1 for all terms - // in leading diagonal and zero in other terms - - if ((Math.abs(m12 + m21) < epsilon2) && - (Math.abs(m13 + m31) < epsilon2) && - (Math.abs(m23 + m32) < epsilon2) && - (Math.abs(m11 + m22 + m33 - 3) < epsilon2)) { - - // this singularity is identity matrix so angle = 0 - - this.set(1, 0, 0, 0); - - return this; // zero angle, arbitrary axis - - } - - // otherwise this singularity is angle = 180 - - angle = Math.PI; - - const xx = (m11 + 1) / 2; - const yy = (m22 + 1) / 2; - const zz = (m33 + 1) / 2; - const xy = (m12 + m21) / 4; - const xz = (m13 + m31) / 4; - const yz = (m23 + m32) / 4; - - if ((xx > yy) && (xx > zz)) { - - // m11 is the largest diagonal term - - if (xx < epsilon) { - - x = 0; - y = 0.707106781; - z = 0.707106781; - - } else { - - x = Math.sqrt(xx); - y = xy / x; - z = xz / x; - - } - - } else if (yy > zz) { - - // 
m22 is the largest diagonal term - - if (yy < epsilon) { - - x = 0.707106781; - y = 0; - z = 0.707106781; - - } else { - - y = Math.sqrt(yy); - x = xy / y; - z = yz / y; - - } - - } else { - - // m33 is the largest diagonal term so base result on this - - if (zz < epsilon) { - - x = 0.707106781; - y = 0.707106781; - z = 0; - - } else { - - z = Math.sqrt(zz); - x = xz / z; - y = yz / z; - - } - - } - - this.set(x, y, z, angle); - - return this; // return 180 deg rotation - - } - - // as we have reached here there are no singularities so we can handle normally - - let s = Math.sqrt((m32 - m23) * (m32 - m23) + - (m13 - m31) * (m13 - m31) + - (m21 - m12) * (m21 - m12)); // used to normalize - - if (Math.abs(s) < 0.001) s = 1; - - // prevent divide by zero, should not happen if matrix is orthogonal and should be - // caught by singularity test above, but I've left it in just in case - - this.x = (m32 - m23) / s; - this.y = (m13 - m31) / s; - this.z = (m21 - m12) / s; - this.w = Math.acos((m11 + m22 + m33 - 1) / 2); - - return this; - - } - - min(v) { - - this.x = Math.min(this.x, v.x); - this.y = Math.min(this.y, v.y); - this.z = Math.min(this.z, v.z); - this.w = Math.min(this.w, v.w); - - return this; - - } - - max(v) { - - this.x = Math.max(this.x, v.x); - this.y = Math.max(this.y, v.y); - this.z = Math.max(this.z, v.z); - this.w = Math.max(this.w, v.w); - - return this; - - } - - clamp(min, max) { - - // assumes min < max, componentwise - - this.x = Math.max(min.x, Math.min(max.x, this.x)); - this.y = Math.max(min.y, Math.min(max.y, this.y)); - this.z = Math.max(min.z, Math.min(max.z, this.z)); - this.w = Math.max(min.w, Math.min(max.w, this.w)); - - return this; - - } - - clampScalar(minVal, maxVal) { - - this.x = Math.max(minVal, Math.min(maxVal, this.x)); - this.y = Math.max(minVal, Math.min(maxVal, this.y)); - this.z = Math.max(minVal, Math.min(maxVal, this.z)); - this.w = Math.max(minVal, Math.min(maxVal, this.w)); - - return this; - - } - - clampLength(min, max) { - - const length = this.length(); - - return this.divideScalar(length || 1).multiplyScalar(Math.max(min, Math.min(max, length))); - - } - - floor() { - - this.x = Math.floor(this.x); - this.y = Math.floor(this.y); - this.z = Math.floor(this.z); - this.w = Math.floor(this.w); - - return this; - - } - - ceil() { - - this.x = Math.ceil(this.x); - this.y = Math.ceil(this.y); - this.z = Math.ceil(this.z); - this.w = Math.ceil(this.w); - - return this; - - } - - round() { - - this.x = Math.round(this.x); - this.y = Math.round(this.y); - this.z = Math.round(this.z); - this.w = Math.round(this.w); - - return this; - - } - - roundToZero() { - - this.x = (this.x < 0) ? Math.ceil(this.x) : Math.floor(this.x); - this.y = (this.y < 0) ? Math.ceil(this.y) : Math.floor(this.y); - this.z = (this.z < 0) ? Math.ceil(this.z) : Math.floor(this.z); - this.w = (this.w < 0) ? 
Math.ceil(this.w) : Math.floor(this.w); - - return this; - - } - - negate() { - - this.x = - this.x; - this.y = - this.y; - this.z = - this.z; - this.w = - this.w; - - return this; - - } - - dot(v) { - - return this.x * v.x + this.y * v.y + this.z * v.z + this.w * v.w; - - } - - lengthSq() { - - return this.x * this.x + this.y * this.y + this.z * this.z + this.w * this.w; - - } - - length() { - - return Math.sqrt(this.x * this.x + this.y * this.y + this.z * this.z + this.w * this.w); - - } - - manhattanLength() { - - return Math.abs(this.x) + Math.abs(this.y) + Math.abs(this.z) + Math.abs(this.w); - - } - - normalize() { - - return this.divideScalar(this.length() || 1); - - } - - setLength(length) { - - return this.normalize().multiplyScalar(length); - - } - - lerp(v, alpha) { - - this.x += (v.x - this.x) * alpha; - this.y += (v.y - this.y) * alpha; - this.z += (v.z - this.z) * alpha; - this.w += (v.w - this.w) * alpha; - - return this; - - } - - lerpVectors(v1, v2, alpha) { - - this.x = v1.x + (v2.x - v1.x) * alpha; - this.y = v1.y + (v2.y - v1.y) * alpha; - this.z = v1.z + (v2.z - v1.z) * alpha; - this.w = v1.w + (v2.w - v1.w) * alpha; - - return this; - - } - - equals(v) { - - return ((v.x === this.x) && (v.y === this.y) && (v.z === this.z) && (v.w === this.w)); - - } - - fromArray(array, offset = 0) { - - this.x = array[offset]; - this.y = array[offset + 1]; - this.z = array[offset + 2]; - this.w = array[offset + 3]; - - return this; - - } - - toArray(array = [], offset = 0) { - - array[offset] = this.x; - array[offset + 1] = this.y; - array[offset + 2] = this.z; - array[offset + 3] = this.w; - - return array; - - } - - fromBufferAttribute(attribute, index) { - - this.x = attribute.getX(index); - this.y = attribute.getY(index); - this.z = attribute.getZ(index); - this.w = attribute.getW(index); - - return this; - - } - - random() { - - this.x = Math.random(); - this.y = Math.random(); - this.z = Math.random(); - this.w = Math.random(); - - return this; - - } - - *[Symbol.iterator]() { - - yield this.x; - yield this.y; - yield this.z; - yield this.w; - - } - -} - -/* - In options, we can specify: - * Texture parameters for an auto-generated target texture - * depthBuffer/stencilBuffer: Booleans to indicate if we should generate these buffers -*/ -class WebGLRenderTarget extends EventDispatcher { - - constructor(width = 1, height = 1, options = {}) { - - super(); - - this.isWebGLRenderTarget = true; - - this.width = width; - this.height = height; - this.depth = 1; - - this.scissor = new Vector4(0, 0, width, height); - this.scissorTest = false; - - this.viewport = new Vector4(0, 0, width, height); - - const image = { width: width, height: height, depth: 1 }; - - this.texture = new Texture(image, options.mapping, options.wrapS, options.wrapT, options.magFilter, options.minFilter, options.format, options.type, options.anisotropy, options.encoding); - this.texture.isRenderTargetTexture = true; - - this.texture.flipY = false; - this.texture.generateMipmaps = options.generateMipmaps !== undefined ? options.generateMipmaps : false; - this.texture.internalFormat = options.internalFormat !== undefined ? options.internalFormat : null; - this.texture.minFilter = options.minFilter !== undefined ? options.minFilter : LinearFilter; - - this.depthBuffer = options.depthBuffer !== undefined ? options.depthBuffer : true; - this.stencilBuffer = options.stencilBuffer !== undefined ? options.stencilBuffer : false; - - this.depthTexture = options.depthTexture !== undefined ? 
options.depthTexture : null; - - this.samples = options.samples !== undefined ? options.samples : 0; - - } - - setSize(width, height, depth = 1) { - - if (this.width !== width || this.height !== height || this.depth !== depth) { - - this.width = width; - this.height = height; - this.depth = depth; - - this.texture.image.width = width; - this.texture.image.height = height; - this.texture.image.depth = depth; - - this.dispose(); - - } - - this.viewport.set(0, 0, width, height); - this.scissor.set(0, 0, width, height); - - } - - clone() { - - return new this.constructor().copy(this); - - } - - copy(source) { - - this.width = source.width; - this.height = source.height; - this.depth = source.depth; - - this.viewport.copy(source.viewport); - - this.texture = source.texture.clone(); - this.texture.isRenderTargetTexture = true; - - // ensure image object is not shared, see #20328 - - const image = Object.assign({}, source.texture.image); - this.texture.source = new Source(image); - - this.depthBuffer = source.depthBuffer; - this.stencilBuffer = source.stencilBuffer; - - if (source.depthTexture !== null) this.depthTexture = source.depthTexture.clone(); - - this.samples = source.samples; - - return this; - - } - - dispose() { - - this.dispatchEvent({ type: 'dispose' }); - - } - -} - -class DataArrayTexture extends Texture { - - constructor(data = null, width = 1, height = 1, depth = 1) { - - super(null); - - this.isDataArrayTexture = true; - - this.image = { data, width, height, depth }; - - this.magFilter = NearestFilter; - this.minFilter = NearestFilter; - - this.wrapR = ClampToEdgeWrapping; - - this.generateMipmaps = false; - this.flipY = false; - this.unpackAlignment = 1; - - } - -} - -class WebGLArrayRenderTarget extends WebGLRenderTarget { - - constructor(width = 1, height = 1, depth = 1) { - - super(width, height); - - this.isWebGLArrayRenderTarget = true; - - this.depth = depth; - - this.texture = new DataArrayTexture(null, width, height, depth); - - this.texture.isRenderTargetTexture = true; - - } - -} - -class Data3DTexture extends Texture { - - constructor(data = null, width = 1, height = 1, depth = 1) { - - // We're going to add .setXXX() methods for setting properties later. - // Users can still set in DataTexture3D directly. 
- // - // const texture = new THREE.DataTexture3D( data, width, height, depth ); - // texture.anisotropy = 16; - // - // See #14839 - - super(null); - - this.isData3DTexture = true; - - this.image = { data, width, height, depth }; - - this.magFilter = NearestFilter; - this.minFilter = NearestFilter; - - this.wrapR = ClampToEdgeWrapping; - - this.generateMipmaps = false; - this.flipY = false; - this.unpackAlignment = 1; - - } - -} - -class WebGL3DRenderTarget extends WebGLRenderTarget { - - constructor(width = 1, height = 1, depth = 1) { - - super(width, height); - - this.isWebGL3DRenderTarget = true; - - this.depth = depth; - - this.texture = new Data3DTexture(null, width, height, depth); - - this.texture.isRenderTargetTexture = true; - - } - -} - -class WebGLMultipleRenderTargets extends WebGLRenderTarget { - - constructor(width = 1, height = 1, count = 1, options = {}) { - - super(width, height, options); - - this.isWebGLMultipleRenderTargets = true; - - const texture = this.texture; - - this.texture = []; - - for (let i = 0; i < count; i++) { - - this.texture[i] = texture.clone(); - this.texture[i].isRenderTargetTexture = true; - - } - - } - - setSize(width, height, depth = 1) { - - if (this.width !== width || this.height !== height || this.depth !== depth) { - - this.width = width; - this.height = height; - this.depth = depth; - - for (let i = 0, il = this.texture.length; i < il; i++) { - - this.texture[i].image.width = width; - this.texture[i].image.height = height; - this.texture[i].image.depth = depth; - - } - - this.dispose(); - - } - - this.viewport.set(0, 0, width, height); - this.scissor.set(0, 0, width, height); - - return this; - - } - - copy(source) { - - this.dispose(); - - this.width = source.width; - this.height = source.height; - this.depth = source.depth; - - this.viewport.set(0, 0, this.width, this.height); - this.scissor.set(0, 0, this.width, this.height); - - this.depthBuffer = source.depthBuffer; - this.stencilBuffer = source.stencilBuffer; - - if (source.depthTexture !== null) this.depthTexture = source.depthTexture.clone(); - - this.texture.length = 0; - - for (let i = 0, il = source.texture.length; i < il; i++) { - - this.texture[i] = source.texture[i].clone(); - this.texture[i].isRenderTargetTexture = true; - - } - - return this; - - } - -} - -class Quaternion { - - constructor(x = 0, y = 0, z = 0, w = 1) { - - this.isQuaternion = true; - - this._x = x; - this._y = y; - this._z = z; - this._w = w; - - } - - static slerpFlat(dst, dstOffset, src0, srcOffset0, src1, srcOffset1, t) { - - // fuzz-free, array-based Quaternion SLERP operation - - let x0 = src0[srcOffset0 + 0], - y0 = src0[srcOffset0 + 1], - z0 = src0[srcOffset0 + 2], - w0 = src0[srcOffset0 + 3]; - - const x1 = src1[srcOffset1 + 0], - y1 = src1[srcOffset1 + 1], - z1 = src1[srcOffset1 + 2], - w1 = src1[srcOffset1 + 3]; - - if (t === 0) { - - dst[dstOffset + 0] = x0; - dst[dstOffset + 1] = y0; - dst[dstOffset + 2] = z0; - dst[dstOffset + 3] = w0; - return; - - } - - if (t === 1) { - - dst[dstOffset + 0] = x1; - dst[dstOffset + 1] = y1; - dst[dstOffset + 2] = z1; - dst[dstOffset + 3] = w1; - return; - - } - - if (w0 !== w1 || x0 !== x1 || y0 !== y1 || z0 !== z1) { - - let s = 1 - t; - const cos = x0 * x1 + y0 * y1 + z0 * z1 + w0 * w1, - dir = (cos >= 0 ? 
1 : - 1), - sqrSin = 1 - cos * cos; - - // Skip the Slerp for tiny steps to avoid numeric problems: - if (sqrSin > Number.EPSILON) { - - const sin = Math.sqrt(sqrSin), - len = Math.atan2(sin, cos * dir); - - s = Math.sin(s * len) / sin; - t = Math.sin(t * len) / sin; - - } - - const tDir = t * dir; - - x0 = x0 * s + x1 * tDir; - y0 = y0 * s + y1 * tDir; - z0 = z0 * s + z1 * tDir; - w0 = w0 * s + w1 * tDir; - - // Normalize in case we just did a lerp: - if (s === 1 - t) { - - const f = 1 / Math.sqrt(x0 * x0 + y0 * y0 + z0 * z0 + w0 * w0); - - x0 *= f; - y0 *= f; - z0 *= f; - w0 *= f; - - } - - } - - dst[dstOffset] = x0; - dst[dstOffset + 1] = y0; - dst[dstOffset + 2] = z0; - dst[dstOffset + 3] = w0; - - } - - static multiplyQuaternionsFlat(dst, dstOffset, src0, srcOffset0, src1, srcOffset1) { - - const x0 = src0[srcOffset0]; - const y0 = src0[srcOffset0 + 1]; - const z0 = src0[srcOffset0 + 2]; - const w0 = src0[srcOffset0 + 3]; - - const x1 = src1[srcOffset1]; - const y1 = src1[srcOffset1 + 1]; - const z1 = src1[srcOffset1 + 2]; - const w1 = src1[srcOffset1 + 3]; - - dst[dstOffset] = x0 * w1 + w0 * x1 + y0 * z1 - z0 * y1; - dst[dstOffset + 1] = y0 * w1 + w0 * y1 + z0 * x1 - x0 * z1; - dst[dstOffset + 2] = z0 * w1 + w0 * z1 + x0 * y1 - y0 * x1; - dst[dstOffset + 3] = w0 * w1 - x0 * x1 - y0 * y1 - z0 * z1; - - return dst; - - } - - get x() { - - return this._x; - - } - - set x(value) { - - this._x = value; - this._onChangeCallback(); - - } - - get y() { - - return this._y; - - } - - set y(value) { - - this._y = value; - this._onChangeCallback(); - - } - - get z() { - - return this._z; - - } - - set z(value) { - - this._z = value; - this._onChangeCallback(); - - } - - get w() { - - return this._w; - - } - - set w(value) { - - this._w = value; - this._onChangeCallback(); - - } - - set(x, y, z, w) { - - this._x = x; - this._y = y; - this._z = z; - this._w = w; - - this._onChangeCallback(); - - return this; - - } - - clone() { - - return new this.constructor(this._x, this._y, this._z, this._w); - - } - - copy(quaternion) { - - this._x = quaternion.x; - this._y = quaternion.y; - this._z = quaternion.z; - this._w = quaternion.w; - - this._onChangeCallback(); - - return this; - - } - - setFromEuler(euler, update) { - - const x = euler._x, y = euler._y, z = euler._z, order = euler._order; - - // http://www.mathworks.com/matlabcentral/fileexchange/ - // 20696-function-to-convert-between-dcm-euler-angles-quaternions-and-euler-vectors/ - // content/SpinCalc.m - - const cos = Math.cos; - const sin = Math.sin; - - const c1 = cos(x / 2); - const c2 = cos(y / 2); - const c3 = cos(z / 2); - - const s1 = sin(x / 2); - const s2 = sin(y / 2); - const s3 = sin(z / 2); - - switch (order) { - - case 'XYZ': - this._x = s1 * c2 * c3 + c1 * s2 * s3; - this._y = c1 * s2 * c3 - s1 * c2 * s3; - this._z = c1 * c2 * s3 + s1 * s2 * c3; - this._w = c1 * c2 * c3 - s1 * s2 * s3; - break; - - case 'YXZ': - this._x = s1 * c2 * c3 + c1 * s2 * s3; - this._y = c1 * s2 * c3 - s1 * c2 * s3; - this._z = c1 * c2 * s3 - s1 * s2 * c3; - this._w = c1 * c2 * c3 + s1 * s2 * s3; - break; - - case 'ZXY': - this._x = s1 * c2 * c3 - c1 * s2 * s3; - this._y = c1 * s2 * c3 + s1 * c2 * s3; - this._z = c1 * c2 * s3 + s1 * s2 * c3; - this._w = c1 * c2 * c3 - s1 * s2 * s3; - break; - - case 'ZYX': - this._x = s1 * c2 * c3 - c1 * s2 * s3; - this._y = c1 * s2 * c3 + s1 * c2 * s3; - this._z = c1 * c2 * s3 - s1 * s2 * c3; - this._w = c1 * c2 * c3 + s1 * s2 * s3; - break; - - case 'YZX': - this._x = s1 * c2 * c3 + c1 * s2 * s3; - this._y = c1 * s2 * c3 + 
s1 * c2 * s3; - this._z = c1 * c2 * s3 - s1 * s2 * c3; - this._w = c1 * c2 * c3 - s1 * s2 * s3; - break; - - case 'XZY': - this._x = s1 * c2 * c3 - c1 * s2 * s3; - this._y = c1 * s2 * c3 - s1 * c2 * s3; - this._z = c1 * c2 * s3 + s1 * s2 * c3; - this._w = c1 * c2 * c3 + s1 * s2 * s3; - break; - - default: - console.warn('THREE.Quaternion: .setFromEuler() encountered an unknown order: ' + order); - - } - - if (update !== false) this._onChangeCallback(); - - return this; - - } - - setFromAxisAngle(axis, angle) { - - // http://www.euclideanspace.com/maths/geometry/rotations/conversions/angleToQuaternion/index.htm - - // assumes axis is normalized - - const halfAngle = angle / 2, s = Math.sin(halfAngle); - - this._x = axis.x * s; - this._y = axis.y * s; - this._z = axis.z * s; - this._w = Math.cos(halfAngle); - - this._onChangeCallback(); - - return this; - - } - - setFromRotationMatrix(m) { - - // http://www.euclideanspace.com/maths/geometry/rotations/conversions/matrixToQuaternion/index.htm - - // assumes the upper 3x3 of m is a pure rotation matrix (i.e, unscaled) - - const te = m.elements, - - m11 = te[0], m12 = te[4], m13 = te[8], - m21 = te[1], m22 = te[5], m23 = te[9], - m31 = te[2], m32 = te[6], m33 = te[10], - - trace = m11 + m22 + m33; - - if (trace > 0) { - - const s = 0.5 / Math.sqrt(trace + 1.0); - - this._w = 0.25 / s; - this._x = (m32 - m23) * s; - this._y = (m13 - m31) * s; - this._z = (m21 - m12) * s; - - } else if (m11 > m22 && m11 > m33) { - - const s = 2.0 * Math.sqrt(1.0 + m11 - m22 - m33); - - this._w = (m32 - m23) / s; - this._x = 0.25 * s; - this._y = (m12 + m21) / s; - this._z = (m13 + m31) / s; - - } else if (m22 > m33) { - - const s = 2.0 * Math.sqrt(1.0 + m22 - m11 - m33); - - this._w = (m13 - m31) / s; - this._x = (m12 + m21) / s; - this._y = 0.25 * s; - this._z = (m23 + m32) / s; - - } else { - - const s = 2.0 * Math.sqrt(1.0 + m33 - m11 - m22); - - this._w = (m21 - m12) / s; - this._x = (m13 + m31) / s; - this._y = (m23 + m32) / s; - this._z = 0.25 * s; - - } - - this._onChangeCallback(); - - return this; - - } - - setFromUnitVectors(vFrom, vTo) { - - // assumes direction vectors vFrom and vTo are normalized - - let r = vFrom.dot(vTo) + 1; - - if (r < Number.EPSILON) { - - // vFrom and vTo point in opposite directions - - r = 0; - - if (Math.abs(vFrom.x) > Math.abs(vFrom.z)) { - - this._x = - vFrom.y; - this._y = vFrom.x; - this._z = 0; - this._w = r; - - } else { - - this._x = 0; - this._y = - vFrom.z; - this._z = vFrom.y; - this._w = r; - - } - - } else { - - // crossVectors( vFrom, vTo ); // inlined to avoid cyclic dependency on Vector3 - - this._x = vFrom.y * vTo.z - vFrom.z * vTo.y; - this._y = vFrom.z * vTo.x - vFrom.x * vTo.z; - this._z = vFrom.x * vTo.y - vFrom.y * vTo.x; - this._w = r; - - } - - return this.normalize(); - - } - - angleTo(q) { - - return 2 * Math.acos(Math.abs(clamp(this.dot(q), - 1, 1))); - - } - - rotateTowards(q, step) { - - const angle = this.angleTo(q); - - if (angle === 0) return this; - - const t = Math.min(1, step / angle); - - this.slerp(q, t); - - return this; - - } - - identity() { - - return this.set(0, 0, 0, 1); - - } - - invert() { - - // quaternion is assumed to have unit length - - return this.conjugate(); - - } - - conjugate() { - - this._x *= - 1; - this._y *= - 1; - this._z *= - 1; - - this._onChangeCallback(); - - return this; - - } - - dot(v) { - - return this._x * v._x + this._y * v._y + this._z * v._z + this._w * v._w; - - } - - lengthSq() { - - return this._x * this._x + this._y * this._y + this._z * this._z + 
this._w * this._w; - - } - - length() { - - return Math.sqrt(this._x * this._x + this._y * this._y + this._z * this._z + this._w * this._w); - - } - - normalize() { - - let l = this.length(); - - if (l === 0) { - - this._x = 0; - this._y = 0; - this._z = 0; - this._w = 1; - - } else { - - l = 1 / l; - - this._x = this._x * l; - this._y = this._y * l; - this._z = this._z * l; - this._w = this._w * l; - - } - - this._onChangeCallback(); - - return this; - - } - - multiply(q) { - - return this.multiplyQuaternions(this, q); - - } - - premultiply(q) { - - return this.multiplyQuaternions(q, this); - - } - - multiplyQuaternions(a, b) { - - // from http://www.euclideanspace.com/maths/algebra/realNormedAlgebra/quaternions/code/index.htm - - const qax = a._x, qay = a._y, qaz = a._z, qaw = a._w; - const qbx = b._x, qby = b._y, qbz = b._z, qbw = b._w; - - this._x = qax * qbw + qaw * qbx + qay * qbz - qaz * qby; - this._y = qay * qbw + qaw * qby + qaz * qbx - qax * qbz; - this._z = qaz * qbw + qaw * qbz + qax * qby - qay * qbx; - this._w = qaw * qbw - qax * qbx - qay * qby - qaz * qbz; - - this._onChangeCallback(); - - return this; - - } - - slerp(qb, t) { - - if (t === 0) return this; - if (t === 1) return this.copy(qb); - - const x = this._x, y = this._y, z = this._z, w = this._w; - - // http://www.euclideanspace.com/maths/algebra/realNormedAlgebra/quaternions/slerp/ - - let cosHalfTheta = w * qb._w + x * qb._x + y * qb._y + z * qb._z; - - if (cosHalfTheta < 0) { - - this._w = - qb._w; - this._x = - qb._x; - this._y = - qb._y; - this._z = - qb._z; - - cosHalfTheta = - cosHalfTheta; - - } else { - - this.copy(qb); - - } - - if (cosHalfTheta >= 1.0) { - - this._w = w; - this._x = x; - this._y = y; - this._z = z; - - return this; - - } - - const sqrSinHalfTheta = 1.0 - cosHalfTheta * cosHalfTheta; - - if (sqrSinHalfTheta <= Number.EPSILON) { - - const s = 1 - t; - this._w = s * w + t * this._w; - this._x = s * x + t * this._x; - this._y = s * y + t * this._y; - this._z = s * z + t * this._z; - - this.normalize(); - this._onChangeCallback(); - - return this; - - } - - const sinHalfTheta = Math.sqrt(sqrSinHalfTheta); - const halfTheta = Math.atan2(sinHalfTheta, cosHalfTheta); - const ratioA = Math.sin((1 - t) * halfTheta) / sinHalfTheta, - ratioB = Math.sin(t * halfTheta) / sinHalfTheta; - - this._w = (w * ratioA + this._w * ratioB); - this._x = (x * ratioA + this._x * ratioB); - this._y = (y * ratioA + this._y * ratioB); - this._z = (z * ratioA + this._z * ratioB); - - this._onChangeCallback(); - - return this; - - } - - slerpQuaternions(qa, qb, t) { - - return this.copy(qa).slerp(qb, t); - - } - - random() { - - // Derived from http://planning.cs.uiuc.edu/node198.html - // Note, this source uses w, x, y, z ordering, - // so we swap the order below. 
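- // Added illustrative note (derived from the code below, not part of the upstream comment):
- // with u1, u2, u3 drawn uniformly from [0, 1), the assignment below corresponds to
- //   w = sqrt(1 - u1) * sin(2*PI*u2),  x = sqrt(1 - u1) * cos(2*PI*u2),
- //   y = sqrt(u1)     * sin(2*PI*u3),  z = sqrt(u1)     * cos(2*PI*u3),
- // which samples a quaternion uniformly over the unit 3-sphere, i.e. a uniformly
- // random rotation.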
- - const u1 = Math.random(); - const sqrt1u1 = Math.sqrt(1 - u1); - const sqrtu1 = Math.sqrt(u1); - - const u2 = 2 * Math.PI * Math.random(); - - const u3 = 2 * Math.PI * Math.random(); - - return this.set( - sqrt1u1 * Math.cos(u2), - sqrtu1 * Math.sin(u3), - sqrtu1 * Math.cos(u3), - sqrt1u1 * Math.sin(u2), - ); - - } - - equals(quaternion) { - - return (quaternion._x === this._x) && (quaternion._y === this._y) && (quaternion._z === this._z) && (quaternion._w === this._w); - - } - - fromArray(array, offset = 0) { - - this._x = array[offset]; - this._y = array[offset + 1]; - this._z = array[offset + 2]; - this._w = array[offset + 3]; - - this._onChangeCallback(); - - return this; - - } - - toArray(array = [], offset = 0) { - - array[offset] = this._x; - array[offset + 1] = this._y; - array[offset + 2] = this._z; - array[offset + 3] = this._w; - - return array; - - } - - fromBufferAttribute(attribute, index) { - - this._x = attribute.getX(index); - this._y = attribute.getY(index); - this._z = attribute.getZ(index); - this._w = attribute.getW(index); - - return this; - - } - - _onChange(callback) { - - this._onChangeCallback = callback; - - return this; - - } - - _onChangeCallback() { } - - *[Symbol.iterator]() { - - yield this._x; - yield this._y; - yield this._z; - yield this._w; - - } - -} - -class Vector3 { - - constructor(x = 0, y = 0, z = 0) { - - Vector3.prototype.isVector3 = true; - - this.x = x; - this.y = y; - this.z = z; - - } - - set(x, y, z) { - - if (z === undefined) z = this.z; // sprite.scale.set(x,y) - - this.x = x; - this.y = y; - this.z = z; - - return this; - - } - - setScalar(scalar) { - - this.x = scalar; - this.y = scalar; - this.z = scalar; - - return this; - - } - - setX(x) { - - this.x = x; - - return this; - - } - - setY(y) { - - this.y = y; - - return this; - - } - - setZ(z) { - - this.z = z; - - return this; - - } - - setComponent(index, value) { - - switch (index) { - - case 0: this.x = value; break; - case 1: this.y = value; break; - case 2: this.z = value; break; - default: throw new Error('index is out of range: ' + index); - - } - - return this; - - } - - getComponent(index) { - - switch (index) { - - case 0: return this.x; - case 1: return this.y; - case 2: return this.z; - default: throw new Error('index is out of range: ' + index); - - } - - } - - clone() { - - return new this.constructor(this.x, this.y, this.z); - - } - - copy(v) { - - this.x = v.x; - this.y = v.y; - this.z = v.z; - - return this; - - } - - add(v) { - - this.x += v.x; - this.y += v.y; - this.z += v.z; - - return this; - - } - - addScalar(s) { - - this.x += s; - this.y += s; - this.z += s; - - return this; - - } - - addVectors(a, b) { - - this.x = a.x + b.x; - this.y = a.y + b.y; - this.z = a.z + b.z; - - return this; - - } - - addScaledVector(v, s) { - - this.x += v.x * s; - this.y += v.y * s; - this.z += v.z * s; - - return this; - - } - - sub(v) { - - this.x -= v.x; - this.y -= v.y; - this.z -= v.z; - - return this; - - } - - subScalar(s) { - - this.x -= s; - this.y -= s; - this.z -= s; - - return this; - - } - - subVectors(a, b) { - - this.x = a.x - b.x; - this.y = a.y - b.y; - this.z = a.z - b.z; - - return this; - - } - - multiply(v) { - - this.x *= v.x; - this.y *= v.y; - this.z *= v.z; - - return this; - - } - - multiplyScalar(scalar) { - - this.x *= scalar; - this.y *= scalar; - this.z *= scalar; - - return this; - - } - - multiplyVectors(a, b) { - - this.x = a.x * b.x; - this.y = a.y * b.y; - this.z = a.z * b.z; - - return this; - - } - - applyEuler(euler) { - - return 
this.applyQuaternion(_quaternion$4.setFromEuler(euler)); - - } - - applyAxisAngle(axis, angle) { - - return this.applyQuaternion(_quaternion$4.setFromAxisAngle(axis, angle)); - - } - - applyMatrix3(m) { - - const x = this.x, y = this.y, z = this.z; - const e = m.elements; - - this.x = e[0] * x + e[3] * y + e[6] * z; - this.y = e[1] * x + e[4] * y + e[7] * z; - this.z = e[2] * x + e[5] * y + e[8] * z; - - return this; - - } - - applyNormalMatrix(m) { - - return this.applyMatrix3(m).normalize(); - - } - - applyMatrix4(m) { - - const x = this.x, y = this.y, z = this.z; - const e = m.elements; - - const w = 1 / (e[3] * x + e[7] * y + e[11] * z + e[15]); - - this.x = (e[0] * x + e[4] * y + e[8] * z + e[12]) * w; - this.y = (e[1] * x + e[5] * y + e[9] * z + e[13]) * w; - this.z = (e[2] * x + e[6] * y + e[10] * z + e[14]) * w; - - return this; - - } - - applyQuaternion(q) { - - const x = this.x, y = this.y, z = this.z; - const qx = q.x, qy = q.y, qz = q.z, qw = q.w; - - // calculate quat * vector - - const ix = qw * x + qy * z - qz * y; - const iy = qw * y + qz * x - qx * z; - const iz = qw * z + qx * y - qy * x; - const iw = - qx * x - qy * y - qz * z; - - // calculate result * inverse quat - - this.x = ix * qw + iw * - qx + iy * - qz - iz * - qy; - this.y = iy * qw + iw * - qy + iz * - qx - ix * - qz; - this.z = iz * qw + iw * - qz + ix * - qy - iy * - qx; - - return this; - - } - - project(camera) { - - return this.applyMatrix4(camera.matrixWorldInverse).applyMatrix4(camera.projectionMatrix); - - } - - unproject(camera) { - - return this.applyMatrix4(camera.projectionMatrixInverse).applyMatrix4(camera.matrixWorld); - - } - - transformDirection(m) { - - // input: THREE.Matrix4 affine matrix - // vector interpreted as a direction - - const x = this.x, y = this.y, z = this.z; - const e = m.elements; - - this.x = e[0] * x + e[4] * y + e[8] * z; - this.y = e[1] * x + e[5] * y + e[9] * z; - this.z = e[2] * x + e[6] * y + e[10] * z; - - return this.normalize(); - - } - - divide(v) { - - this.x /= v.x; - this.y /= v.y; - this.z /= v.z; - - return this; - - } - - divideScalar(scalar) { - - return this.multiplyScalar(1 / scalar); - - } - - min(v) { - - this.x = Math.min(this.x, v.x); - this.y = Math.min(this.y, v.y); - this.z = Math.min(this.z, v.z); - - return this; - - } - - max(v) { - - this.x = Math.max(this.x, v.x); - this.y = Math.max(this.y, v.y); - this.z = Math.max(this.z, v.z); - - return this; - - } - - clamp(min, max) { - - // assumes min < max, componentwise - - this.x = Math.max(min.x, Math.min(max.x, this.x)); - this.y = Math.max(min.y, Math.min(max.y, this.y)); - this.z = Math.max(min.z, Math.min(max.z, this.z)); - - return this; - - } - - clampScalar(minVal, maxVal) { - - this.x = Math.max(minVal, Math.min(maxVal, this.x)); - this.y = Math.max(minVal, Math.min(maxVal, this.y)); - this.z = Math.max(minVal, Math.min(maxVal, this.z)); - - return this; - - } - - clampLength(min, max) { - - const length = this.length(); - - return this.divideScalar(length || 1).multiplyScalar(Math.max(min, Math.min(max, length))); - - } - - floor() { - - this.x = Math.floor(this.x); - this.y = Math.floor(this.y); - this.z = Math.floor(this.z); - - return this; - - } - - ceil() { - - this.x = Math.ceil(this.x); - this.y = Math.ceil(this.y); - this.z = Math.ceil(this.z); - - return this; - - } - - round() { - - this.x = Math.round(this.x); - this.y = Math.round(this.y); - this.z = Math.round(this.z); - - return this; - - } - - roundToZero() { - - this.x = (this.x < 0) ? 
Math.ceil(this.x) : Math.floor(this.x); - this.y = (this.y < 0) ? Math.ceil(this.y) : Math.floor(this.y); - this.z = (this.z < 0) ? Math.ceil(this.z) : Math.floor(this.z); - - return this; - - } - - negate() { - - this.x = - this.x; - this.y = - this.y; - this.z = - this.z; - - return this; - - } - - dot(v) { - - return this.x * v.x + this.y * v.y + this.z * v.z; - - } - - // TODO lengthSquared? - - lengthSq() { - - return this.x * this.x + this.y * this.y + this.z * this.z; - - } - - length() { - - return Math.sqrt(this.x * this.x + this.y * this.y + this.z * this.z); - - } - - manhattanLength() { - - return Math.abs(this.x) + Math.abs(this.y) + Math.abs(this.z); - - } - - normalize() { - - return this.divideScalar(this.length() || 1); - - } - - setLength(length) { - - return this.normalize().multiplyScalar(length); - - } - - lerp(v, alpha) { - - this.x += (v.x - this.x) * alpha; - this.y += (v.y - this.y) * alpha; - this.z += (v.z - this.z) * alpha; - - return this; - - } - - lerpVectors(v1, v2, alpha) { - - this.x = v1.x + (v2.x - v1.x) * alpha; - this.y = v1.y + (v2.y - v1.y) * alpha; - this.z = v1.z + (v2.z - v1.z) * alpha; - - return this; - - } - - cross(v) { - - return this.crossVectors(this, v); - - } - - crossVectors(a, b) { - - const ax = a.x, ay = a.y, az = a.z; - const bx = b.x, by = b.y, bz = b.z; - - this.x = ay * bz - az * by; - this.y = az * bx - ax * bz; - this.z = ax * by - ay * bx; - - return this; - - } - - projectOnVector(v) { - - const denominator = v.lengthSq(); - - if (denominator === 0) return this.set(0, 0, 0); - - const scalar = v.dot(this) / denominator; - - return this.copy(v).multiplyScalar(scalar); - - } - - projectOnPlane(planeNormal) { - - _vector$c.copy(this).projectOnVector(planeNormal); - - return this.sub(_vector$c); - - } - - reflect(normal) { - - // reflect incident vector off plane orthogonal to normal - // normal is assumed to have unit length - - return this.sub(_vector$c.copy(normal).multiplyScalar(2 * this.dot(normal))); - - } - - angleTo(v) { - - const denominator = Math.sqrt(this.lengthSq() * v.lengthSq()); - - if (denominator === 0) return Math.PI / 2; - - const theta = this.dot(v) / denominator; - - // clamp, to handle numerical problems - - return Math.acos(clamp(theta, - 1, 1)); - - } - - distanceTo(v) { - - return Math.sqrt(this.distanceToSquared(v)); - - } - - distanceToSquared(v) { - - const dx = this.x - v.x, dy = this.y - v.y, dz = this.z - v.z; - - return dx * dx + dy * dy + dz * dz; - - } - - manhattanDistanceTo(v) { - - return Math.abs(this.x - v.x) + Math.abs(this.y - v.y) + Math.abs(this.z - v.z); - - } - - setFromSpherical(s) { - - return this.setFromSphericalCoords(s.radius, s.phi, s.theta); - - } - - setFromSphericalCoords(radius, phi, theta) { - - const sinPhiRadius = Math.sin(phi) * radius; - - this.x = sinPhiRadius * Math.sin(theta); - this.y = Math.cos(phi) * radius; - this.z = sinPhiRadius * Math.cos(theta); - - return this; - - } - - setFromCylindrical(c) { - - return this.setFromCylindricalCoords(c.radius, c.theta, c.y); - - } - - setFromCylindricalCoords(radius, theta, y) { - - this.x = radius * Math.sin(theta); - this.y = y; - this.z = radius * Math.cos(theta); - - return this; - - } - - setFromMatrixPosition(m) { - - const e = m.elements; - - this.x = e[12]; - this.y = e[13]; - this.z = e[14]; - - return this; - - } - - setFromMatrixScale(m) { - - const sx = this.setFromMatrixColumn(m, 0).length(); - const sy = this.setFromMatrixColumn(m, 1).length(); - const sz = this.setFromMatrixColumn(m, 2).length(); - - this.x = 
sx; - this.y = sy; - this.z = sz; - - return this; - - } - - setFromMatrixColumn(m, index) { - - return this.fromArray(m.elements, index * 4); - - } - - setFromMatrix3Column(m, index) { - - return this.fromArray(m.elements, index * 3); - - } - - setFromEuler(e) { - - this.x = e._x; - this.y = e._y; - this.z = e._z; - - return this; - - } - - equals(v) { - - return ((v.x === this.x) && (v.y === this.y) && (v.z === this.z)); - - } - - fromArray(array, offset = 0) { - - this.x = array[offset]; - this.y = array[offset + 1]; - this.z = array[offset + 2]; - - return this; - - } - - toArray(array = [], offset = 0) { - - array[offset] = this.x; - array[offset + 1] = this.y; - array[offset + 2] = this.z; - - return array; - - } - - fromBufferAttribute(attribute, index) { - - this.x = attribute.getX(index); - this.y = attribute.getY(index); - this.z = attribute.getZ(index); - - return this; - - } - - random() { - - this.x = Math.random(); - this.y = Math.random(); - this.z = Math.random(); - - return this; - - } - - randomDirection() { - - // Derived from https://mathworld.wolfram.com/SpherePointPicking.html - - const u = (Math.random() - 0.5) * 2; - const t = Math.random() * Math.PI * 2; - const f = Math.sqrt(1 - u ** 2); - - this.x = f * Math.cos(t); - this.y = f * Math.sin(t); - this.z = u; - - return this; - - } - - *[Symbol.iterator]() { - - yield this.x; - yield this.y; - yield this.z; - - } - -} - -const _vector$c = /*@__PURE__*/ new Vector3(); -const _quaternion$4 = /*@__PURE__*/ new Quaternion(); - -class Box3 { - - constructor(min = new Vector3(+ Infinity, + Infinity, + Infinity), max = new Vector3(- Infinity, - Infinity, - Infinity)) { - - this.isBox3 = true; - - this.min = min; - this.max = max; - - } - - set(min, max) { - - this.min.copy(min); - this.max.copy(max); - - return this; - - } - - setFromArray(array) { - - let minX = + Infinity; - let minY = + Infinity; - let minZ = + Infinity; - - let maxX = - Infinity; - let maxY = - Infinity; - let maxZ = - Infinity; - - for (let i = 0, l = array.length; i < l; i += 3) { - - const x = array[i]; - const y = array[i + 1]; - const z = array[i + 2]; - - if (x < minX) minX = x; - if (y < minY) minY = y; - if (z < minZ) minZ = z; - - if (x > maxX) maxX = x; - if (y > maxY) maxY = y; - if (z > maxZ) maxZ = z; - - } - - this.min.set(minX, minY, minZ); - this.max.set(maxX, maxY, maxZ); - - return this; - - } - - setFromBufferAttribute(attribute) { - - let minX = + Infinity; - let minY = + Infinity; - let minZ = + Infinity; - - let maxX = - Infinity; - let maxY = - Infinity; - let maxZ = - Infinity; - - for (let i = 0, l = attribute.count; i < l; i++) { - - const x = attribute.getX(i); - const y = attribute.getY(i); - const z = attribute.getZ(i); - - if (x < minX) minX = x; - if (y < minY) minY = y; - if (z < minZ) minZ = z; - - if (x > maxX) maxX = x; - if (y > maxY) maxY = y; - if (z > maxZ) maxZ = z; - - } - - this.min.set(minX, minY, minZ); - this.max.set(maxX, maxY, maxZ); - - return this; - - } - - setFromPoints(points) { - - this.makeEmpty(); - - for (let i = 0, il = points.length; i < il; i++) { - - this.expandByPoint(points[i]); - - } - - return this; - - } - - setFromCenterAndSize(center, size) { - - const halfSize = _vector$b.copy(size).multiplyScalar(0.5); - - this.min.copy(center).sub(halfSize); - this.max.copy(center).add(halfSize); - - return this; - - } - - setFromObject(object, precise = false) { - - this.makeEmpty(); - - return this.expandByObject(object, precise); - - } - - clone() { - - return new this.constructor().copy(this); - 
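- // Illustrative note: clone() delegates to copy(), so `box.clone()` behaves like
- // `new Box3().copy( box )` and returns an independent Box3 with the same
- // min/max values (the min/max vectors are copied, not shared).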
- } - - copy(box) { - - this.min.copy(box.min); - this.max.copy(box.max); - - return this; - - } - - makeEmpty() { - - this.min.x = this.min.y = this.min.z = + Infinity; - this.max.x = this.max.y = this.max.z = - Infinity; - - return this; - - } - - isEmpty() { - - // this is a more robust check for empty than ( volume <= 0 ) because volume can get positive with two negative axes - - return (this.max.x < this.min.x) || (this.max.y < this.min.y) || (this.max.z < this.min.z); - - } - - getCenter(target) { - - return this.isEmpty() ? target.set(0, 0, 0) : target.addVectors(this.min, this.max).multiplyScalar(0.5); - - } - - getSize(target) { - - return this.isEmpty() ? target.set(0, 0, 0) : target.subVectors(this.max, this.min); - - } - - expandByPoint(point) { - - this.min.min(point); - this.max.max(point); - - return this; - - } - - expandByVector(vector) { - - this.min.sub(vector); - this.max.add(vector); - - return this; - - } - - expandByScalar(scalar) { - - this.min.addScalar(- scalar); - this.max.addScalar(scalar); - - return this; - - } - - expandByObject(object, precise = false) { - - // Computes the world-axis-aligned bounding box of an object (including its children), - // accounting for both the object's, and children's, world transforms - - object.updateWorldMatrix(false, false); - - const geometry = object.geometry; - - if (geometry !== undefined) { - - if (precise && geometry.attributes != undefined && geometry.attributes.position !== undefined) { - - const position = geometry.attributes.position; - for (let i = 0, l = position.count; i < l; i++) { - - _vector$b.fromBufferAttribute(position, i).applyMatrix4(object.matrixWorld); - this.expandByPoint(_vector$b); - - } - - } else { - - if (geometry.boundingBox === null) { - - geometry.computeBoundingBox(); - - } - - _box$3.copy(geometry.boundingBox); - _box$3.applyMatrix4(object.matrixWorld); - - this.union(_box$3); - - } - - } - - const children = object.children; - - for (let i = 0, l = children.length; i < l; i++) { - - this.expandByObject(children[i], precise); - - } - - return this; - - } - - containsPoint(point) { - - return point.x < this.min.x || point.x > this.max.x || - point.y < this.min.y || point.y > this.max.y || - point.z < this.min.z || point.z > this.max.z ? false : true; - - } - - containsBox(box) { - - return this.min.x <= box.min.x && box.max.x <= this.max.x && - this.min.y <= box.min.y && box.max.y <= this.max.y && - this.min.z <= box.min.z && box.max.z <= this.max.z; - - } - - getParameter(point, target) { - - // This can potentially have a divide by zero if the box - // has a size dimension of 0. - - return target.set( - (point.x - this.min.x) / (this.max.x - this.min.x), - (point.y - this.min.y) / (this.max.y - this.min.y), - (point.z - this.min.z) / (this.max.z - this.min.z) - ); - - } - - intersectsBox(box) { - - // using 6 splitting planes to rule out intersections. - return box.max.x < this.min.x || box.min.x > this.max.x || - box.max.y < this.min.y || box.min.y > this.max.y || - box.max.z < this.min.z || box.min.z > this.max.z ? false : true; - - } - - intersectsSphere(sphere) { - - // Find the point on the AABB closest to the sphere center. - this.clampPoint(sphere.center, _vector$b); - - // If that point is inside the sphere, the AABB and sphere intersect. - return _vector$b.distanceToSquared(sphere.center) <= (sphere.radius * sphere.radius); - - } - - intersectsPlane(plane) { - - // We compute the minimum and maximum dot product values. 
If those values - // are on the same side (back or front) of the plane, then there is no intersection. - - let min, max; - - if (plane.normal.x > 0) { - - min = plane.normal.x * this.min.x; - max = plane.normal.x * this.max.x; - - } else { - - min = plane.normal.x * this.max.x; - max = plane.normal.x * this.min.x; - - } - - if (plane.normal.y > 0) { - - min += plane.normal.y * this.min.y; - max += plane.normal.y * this.max.y; - - } else { - - min += plane.normal.y * this.max.y; - max += plane.normal.y * this.min.y; - - } - - if (plane.normal.z > 0) { - - min += plane.normal.z * this.min.z; - max += plane.normal.z * this.max.z; - - } else { - - min += plane.normal.z * this.max.z; - max += plane.normal.z * this.min.z; - - } - - return (min <= - plane.constant && max >= - plane.constant); - - } - - intersectsTriangle(triangle) { - - if (this.isEmpty()) { - - return false; - - } - - // compute box center and extents - this.getCenter(_center); - _extents.subVectors(this.max, _center); - - // translate triangle to aabb origin - _v0$2.subVectors(triangle.a, _center); - _v1$7.subVectors(triangle.b, _center); - _v2$4.subVectors(triangle.c, _center); - - // compute edge vectors for triangle - _f0.subVectors(_v1$7, _v0$2); - _f1.subVectors(_v2$4, _v1$7); - _f2.subVectors(_v0$2, _v2$4); - - // test against axes that are given by cross product combinations of the edges of the triangle and the edges of the aabb - // make an axis testing of each of the 3 sides of the aabb against each of the 3 sides of the triangle = 9 axis of separation - // axis_ij = u_i x f_j (u0, u1, u2 = face normals of aabb = x,y,z axes vectors since aabb is axis aligned) - let axes = [ - 0, - _f0.z, _f0.y, 0, - _f1.z, _f1.y, 0, - _f2.z, _f2.y, - _f0.z, 0, - _f0.x, _f1.z, 0, - _f1.x, _f2.z, 0, - _f2.x, - - _f0.y, _f0.x, 0, - _f1.y, _f1.x, 0, - _f2.y, _f2.x, 0 - ]; - if (!satForAxes(axes, _v0$2, _v1$7, _v2$4, _extents)) { - - return false; - - } - - // test 3 face normals from the aabb - axes = [1, 0, 0, 0, 1, 0, 0, 0, 1]; - if (!satForAxes(axes, _v0$2, _v1$7, _v2$4, _extents)) { - - return false; - - } - - // finally testing the face normal of the triangle - // use already existing triangle edge vectors here - _triangleNormal.crossVectors(_f0, _f1); - axes = [_triangleNormal.x, _triangleNormal.y, _triangleNormal.z]; - - return satForAxes(axes, _v0$2, _v1$7, _v2$4, _extents); - - } - - clampPoint(point, target) { - - return target.copy(point).clamp(this.min, this.max); - - } - - distanceToPoint(point) { - - const clampedPoint = _vector$b.copy(point).clamp(this.min, this.max); - - return clampedPoint.sub(point).length(); - - } - - getBoundingSphere(target) { - - this.getCenter(target.center); - - target.radius = this.getSize(_vector$b).length() * 0.5; - - return target; - - } - - intersect(box) { - - this.min.max(box.min); - this.max.min(box.max); - - // ensure that if there is no overlap, the result is fully empty, not slightly empty with non-inf/+inf values that will cause subsequence intersects to erroneously return valid values. - if (this.isEmpty()) this.makeEmpty(); - - return this; - - } - - union(box) { - - this.min.min(box.min); - this.max.max(box.max); - - return this; - - } - - applyMatrix4(matrix) { - - // transform of empty box is an empty box. 
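- // Note (illustrative): for a non-empty box, the eight corner points are transformed
- // below and a new axis-aligned box is fitted around them, so under rotation the
- // result can be larger than a tight bound of the rotated geometry.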
- if (this.isEmpty()) return this; - - // NOTE: I am using a binary pattern to specify all 2^3 combinations below - _points[0].set(this.min.x, this.min.y, this.min.z).applyMatrix4(matrix); // 000 - _points[1].set(this.min.x, this.min.y, this.max.z).applyMatrix4(matrix); // 001 - _points[2].set(this.min.x, this.max.y, this.min.z).applyMatrix4(matrix); // 010 - _points[3].set(this.min.x, this.max.y, this.max.z).applyMatrix4(matrix); // 011 - _points[4].set(this.max.x, this.min.y, this.min.z).applyMatrix4(matrix); // 100 - _points[5].set(this.max.x, this.min.y, this.max.z).applyMatrix4(matrix); // 101 - _points[6].set(this.max.x, this.max.y, this.min.z).applyMatrix4(matrix); // 110 - _points[7].set(this.max.x, this.max.y, this.max.z).applyMatrix4(matrix); // 111 - - this.setFromPoints(_points); - - return this; - - } - - translate(offset) { - - this.min.add(offset); - this.max.add(offset); - - return this; - - } - - equals(box) { - - return box.min.equals(this.min) && box.max.equals(this.max); - - } - -} - -const _points = [ - /*@__PURE__*/ new Vector3(), - /*@__PURE__*/ new Vector3(), - /*@__PURE__*/ new Vector3(), - /*@__PURE__*/ new Vector3(), - /*@__PURE__*/ new Vector3(), - /*@__PURE__*/ new Vector3(), - /*@__PURE__*/ new Vector3(), - /*@__PURE__*/ new Vector3() -]; - -const _vector$b = /*@__PURE__*/ new Vector3(); - -const _box$3 = /*@__PURE__*/ new Box3(); - -// triangle centered vertices - -const _v0$2 = /*@__PURE__*/ new Vector3(); -const _v1$7 = /*@__PURE__*/ new Vector3(); -const _v2$4 = /*@__PURE__*/ new Vector3(); - -// triangle edge vectors - -const _f0 = /*@__PURE__*/ new Vector3(); -const _f1 = /*@__PURE__*/ new Vector3(); -const _f2 = /*@__PURE__*/ new Vector3(); - -const _center = /*@__PURE__*/ new Vector3(); -const _extents = /*@__PURE__*/ new Vector3(); -const _triangleNormal = /*@__PURE__*/ new Vector3(); -const _testAxis = /*@__PURE__*/ new Vector3(); - -function satForAxes(axes, v0, v1, v2, extents) { - - for (let i = 0, j = axes.length - 3; i <= j; i += 3) { - - _testAxis.fromArray(axes, i); - // project the aabb onto the separating axis - const r = extents.x * Math.abs(_testAxis.x) + extents.y * Math.abs(_testAxis.y) + extents.z * Math.abs(_testAxis.z); - // project all 3 vertices of the triangle onto the separating axis - const p0 = v0.dot(_testAxis); - const p1 = v1.dot(_testAxis); - const p2 = v2.dot(_testAxis); - // actual test, basically see if either of the most extreme of the triangle points intersects r - if (Math.max(- Math.max(p0, p1, p2), Math.min(p0, p1, p2)) > r) { - - // points of the projected triangle are outside the projected half-length of the aabb - // the axis is separating and we can exit - return false; - - } - - } - - return true; - -} - -const _box$2 = /*@__PURE__*/ new Box3(); -const _v1$6 = /*@__PURE__*/ new Vector3(); -const _v2$3 = /*@__PURE__*/ new Vector3(); - -class Sphere { - - constructor(center = new Vector3(), radius = - 1) { - - this.center = center; - this.radius = radius; - - } - - set(center, radius) { - - this.center.copy(center); - this.radius = radius; - - return this; - - } - - setFromPoints(points, optionalCenter) { - - const center = this.center; - - if (optionalCenter !== undefined) { - - center.copy(optionalCenter); - - } else { - - _box$2.setFromPoints(points).getCenter(center); - - } - - let maxRadiusSq = 0; - - for (let i = 0, il = points.length; i < il; i++) { - - maxRadiusSq = Math.max(maxRadiusSq, center.distanceToSquared(points[i])); - - } - - this.radius = Math.sqrt(maxRadiusSq); - - return this; - - } - - 
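- // Illustrative usage of setFromPoints() above (hypothetical values):
- //   const sphere = new Sphere().setFromPoints( [ new Vector3( 1, 0, 0 ), new Vector3( - 1, 0, 0 ) ] );
- //   // center is the midpoint of the points' bounding box, here ( 0, 0, 0 ),
- //   // and radius is the largest distance from that center, here 1.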
copy(sphere) { - - this.center.copy(sphere.center); - this.radius = sphere.radius; - - return this; - - } - - isEmpty() { - - return (this.radius < 0); - - } - - makeEmpty() { - - this.center.set(0, 0, 0); - this.radius = - 1; - - return this; - - } - - containsPoint(point) { - - return (point.distanceToSquared(this.center) <= (this.radius * this.radius)); - - } - - distanceToPoint(point) { - - return (point.distanceTo(this.center) - this.radius); - - } - - intersectsSphere(sphere) { - - const radiusSum = this.radius + sphere.radius; - - return sphere.center.distanceToSquared(this.center) <= (radiusSum * radiusSum); - - } - - intersectsBox(box) { - - return box.intersectsSphere(this); - - } - - intersectsPlane(plane) { - - return Math.abs(plane.distanceToPoint(this.center)) <= this.radius; - - } - - clampPoint(point, target) { - - const deltaLengthSq = this.center.distanceToSquared(point); - - target.copy(point); - - if (deltaLengthSq > (this.radius * this.radius)) { - - target.sub(this.center).normalize(); - target.multiplyScalar(this.radius).add(this.center); - - } - - return target; - - } - - getBoundingBox(target) { - - if (this.isEmpty()) { - - // Empty sphere produces empty bounding box - target.makeEmpty(); - return target; - - } - - target.set(this.center, this.center); - target.expandByScalar(this.radius); - - return target; - - } - - applyMatrix4(matrix) { - - this.center.applyMatrix4(matrix); - this.radius = this.radius * matrix.getMaxScaleOnAxis(); - - return this; - - } - - translate(offset) { - - this.center.add(offset); - - return this; - - } - - expandByPoint(point) { - - if (this.isEmpty()) { - - this.center.copy(point); - - this.radius = 0; - - return this; - - } - - _v1$6.subVectors(point, this.center); - - const lengthSq = _v1$6.lengthSq(); - - if (lengthSq > (this.radius * this.radius)) { - - // calculate the minimal sphere - - const length = Math.sqrt(lengthSq); - - const delta = (length - this.radius) * 0.5; - - this.center.addScaledVector(_v1$6, delta / length); - - this.radius += delta; - - } - - return this; - - } - - union(sphere) { - - if (sphere.isEmpty()) { - - return this; - - } - - if (this.isEmpty()) { - - this.copy(sphere); - - return this; - - } - - if (this.center.equals(sphere.center) === true) { - - this.radius = Math.max(this.radius, sphere.radius); - - } else { - - _v2$3.subVectors(sphere.center, this.center).setLength(sphere.radius); - - this.expandByPoint(_v1$6.copy(sphere.center).add(_v2$3)); - - this.expandByPoint(_v1$6.copy(sphere.center).sub(_v2$3)); - - } - - return this; - - } - - equals(sphere) { - - return sphere.center.equals(this.center) && (sphere.radius === this.radius); - - } - - clone() { - - return new this.constructor().copy(this); - - } - -} - -const _vector$a = /*@__PURE__*/ new Vector3(); -const _segCenter = /*@__PURE__*/ new Vector3(); -const _segDir = /*@__PURE__*/ new Vector3(); -const _diff = /*@__PURE__*/ new Vector3(); - -const _edge1 = /*@__PURE__*/ new Vector3(); -const _edge2 = /*@__PURE__*/ new Vector3(); -const _normal$1 = /*@__PURE__*/ new Vector3(); - -class Ray { - - constructor(origin = new Vector3(), direction = new Vector3(0, 0, - 1)) { - - this.origin = origin; - this.direction = direction; - - } - - set(origin, direction) { - - this.origin.copy(origin); - this.direction.copy(direction); - - return this; - - } - - copy(ray) { - - this.origin.copy(ray.origin); - this.direction.copy(ray.direction); - - return this; - - } - - at(t, target) { - - return target.copy(this.direction).multiplyScalar(t).add(this.origin); 
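- // Illustrative note: at( t, target ) evaluates the ray parametrically,
- // target = origin + direction * t. With a unit-length direction,
- // ray.at( 5, new Vector3() ) is the point 5 units along the ray from its origin.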
- - } - - lookAt(v) { - - this.direction.copy(v).sub(this.origin).normalize(); - - return this; - - } - - recast(t) { - - this.origin.copy(this.at(t, _vector$a)); - - return this; - - } - - closestPointToPoint(point, target) { - - target.subVectors(point, this.origin); - - const directionDistance = target.dot(this.direction); - - if (directionDistance < 0) { - - return target.copy(this.origin); - - } - - return target.copy(this.direction).multiplyScalar(directionDistance).add(this.origin); - - } - - distanceToPoint(point) { - - return Math.sqrt(this.distanceSqToPoint(point)); - - } - - distanceSqToPoint(point) { - - const directionDistance = _vector$a.subVectors(point, this.origin).dot(this.direction); - - // point behind the ray - - if (directionDistance < 0) { - - return this.origin.distanceToSquared(point); - - } - - _vector$a.copy(this.direction).multiplyScalar(directionDistance).add(this.origin); - - return _vector$a.distanceToSquared(point); - - } - - distanceSqToSegment(v0, v1, optionalPointOnRay, optionalPointOnSegment) { - - // from https://github.com/pmjoniak/GeometricTools/blob/master/GTEngine/Include/Mathematics/GteDistRaySegment.h - // It returns the min distance between the ray and the segment - // defined by v0 and v1 - // It can also set two optional targets : - // - The closest point on the ray - // - The closest point on the segment - - _segCenter.copy(v0).add(v1).multiplyScalar(0.5); - _segDir.copy(v1).sub(v0).normalize(); - _diff.copy(this.origin).sub(_segCenter); - - const segExtent = v0.distanceTo(v1) * 0.5; - const a01 = - this.direction.dot(_segDir); - const b0 = _diff.dot(this.direction); - const b1 = - _diff.dot(_segDir); - const c = _diff.lengthSq(); - const det = Math.abs(1 - a01 * a01); - let s0, s1, sqrDist, extDet; - - if (det > 0) { - - // The ray and segment are not parallel. - - s0 = a01 * b1 - b0; - s1 = a01 * b0 - b1; - extDet = segExtent * det; - - if (s0 >= 0) { - - if (s1 >= - extDet) { - - if (s1 <= extDet) { - - // region 0 - // Minimum at interior points of ray and segment. - - const invDet = 1 / det; - s0 *= invDet; - s1 *= invDet; - sqrDist = s0 * (s0 + a01 * s1 + 2 * b0) + s1 * (a01 * s0 + s1 + 2 * b1) + c; - - } else { - - // region 1 - - s1 = segExtent; - s0 = Math.max(0, - (a01 * s1 + b0)); - sqrDist = - s0 * s0 + s1 * (s1 + 2 * b1) + c; - - } - - } else { - - // region 5 - - s1 = - segExtent; - s0 = Math.max(0, - (a01 * s1 + b0)); - sqrDist = - s0 * s0 + s1 * (s1 + 2 * b1) + c; - - } - - } else { - - if (s1 <= - extDet) { - - // region 4 - - s0 = Math.max(0, - (- a01 * segExtent + b0)); - s1 = (s0 > 0) ? - segExtent : Math.min(Math.max(- segExtent, - b1), segExtent); - sqrDist = - s0 * s0 + s1 * (s1 + 2 * b1) + c; - - } else if (s1 <= extDet) { - - // region 3 - - s0 = 0; - s1 = Math.min(Math.max(- segExtent, - b1), segExtent); - sqrDist = s1 * (s1 + 2 * b1) + c; - - } else { - - // region 2 - - s0 = Math.max(0, - (a01 * segExtent + b0)); - s1 = (s0 > 0) ? segExtent : Math.min(Math.max(- segExtent, - b1), segExtent); - sqrDist = - s0 * s0 + s1 * (s1 + 2 * b1) + c; - - } - - } - - } else { - - // Ray and segment are parallel. - - s1 = (a01 > 0) ? 
- segExtent : segExtent; - s0 = Math.max(0, - (a01 * s1 + b0)); - sqrDist = - s0 * s0 + s1 * (s1 + 2 * b1) + c; - - } - - if (optionalPointOnRay) { - - optionalPointOnRay.copy(this.direction).multiplyScalar(s0).add(this.origin); - - } - - if (optionalPointOnSegment) { - - optionalPointOnSegment.copy(_segDir).multiplyScalar(s1).add(_segCenter); - - } - - return sqrDist; - - } - - intersectSphere(sphere, target) { - - _vector$a.subVectors(sphere.center, this.origin); - const tca = _vector$a.dot(this.direction); - const d2 = _vector$a.dot(_vector$a) - tca * tca; - const radius2 = sphere.radius * sphere.radius; - - if (d2 > radius2) return null; - - const thc = Math.sqrt(radius2 - d2); - - // t0 = first intersect point - entrance on front of sphere - const t0 = tca - thc; - - // t1 = second intersect point - exit point on back of sphere - const t1 = tca + thc; - - // test to see if both t0 and t1 are behind the ray - if so, return null - if (t0 < 0 && t1 < 0) return null; - - // test to see if t0 is behind the ray: - // if it is, the ray is inside the sphere, so return the second exit point scaled by t1, - // in order to always return an intersect point that is in front of the ray. - if (t0 < 0) return this.at(t1, target); - - // else t0 is in front of the ray, so return the first collision point scaled by t0 - return this.at(t0, target); - - } - - intersectsSphere(sphere) { - - return this.distanceSqToPoint(sphere.center) <= (sphere.radius * sphere.radius); - - } - - distanceToPlane(plane) { - - const denominator = plane.normal.dot(this.direction); - - if (denominator === 0) { - - // line is coplanar, return origin - if (plane.distanceToPoint(this.origin) === 0) { - - return 0; - - } - - // Null is preferable to undefined since undefined means.... it is undefined - - return null; - - } - - const t = - (this.origin.dot(plane.normal) + plane.constant) / denominator; - - // Return if the ray never intersects the plane - - return t >= 0 ? 
t : null; - - } - - intersectPlane(plane, target) { - - const t = this.distanceToPlane(plane); - - if (t === null) { - - return null; - - } - - return this.at(t, target); - - } - - intersectsPlane(plane) { - - // check if the ray lies on the plane first - - const distToPoint = plane.distanceToPoint(this.origin); - - if (distToPoint === 0) { - - return true; - - } - - const denominator = plane.normal.dot(this.direction); - - if (denominator * distToPoint < 0) { - - return true; - - } - - // ray origin is behind the plane (and is pointing behind it) - - return false; - - } - - intersectBox(box, target) { - - let tmin, tmax, tymin, tymax, tzmin, tzmax; - - const invdirx = 1 / this.direction.x, - invdiry = 1 / this.direction.y, - invdirz = 1 / this.direction.z; - - const origin = this.origin; - - if (invdirx >= 0) { - - tmin = (box.min.x - origin.x) * invdirx; - tmax = (box.max.x - origin.x) * invdirx; - - } else { - - tmin = (box.max.x - origin.x) * invdirx; - tmax = (box.min.x - origin.x) * invdirx; - - } - - if (invdiry >= 0) { - - tymin = (box.min.y - origin.y) * invdiry; - tymax = (box.max.y - origin.y) * invdiry; - - } else { - - tymin = (box.max.y - origin.y) * invdiry; - tymax = (box.min.y - origin.y) * invdiry; - - } - - if ((tmin > tymax) || (tymin > tmax)) return null; - - if (tymin > tmin || isNaN(tmin)) tmin = tymin; - - if (tymax < tmax || isNaN(tmax)) tmax = tymax; - - if (invdirz >= 0) { - - tzmin = (box.min.z - origin.z) * invdirz; - tzmax = (box.max.z - origin.z) * invdirz; - - } else { - - tzmin = (box.max.z - origin.z) * invdirz; - tzmax = (box.min.z - origin.z) * invdirz; - - } - - if ((tmin > tzmax) || (tzmin > tmax)) return null; - - if (tzmin > tmin || tmin !== tmin) tmin = tzmin; - - if (tzmax < tmax || tmax !== tmax) tmax = tzmax; - - //return point closest to the ray (positive side) - - if (tmax < 0) return null; - - return this.at(tmin >= 0 ? tmin : tmax, target); - - } - - intersectsBox(box) { - - return this.intersectBox(box, _vector$a) !== null; - - } - - intersectTriangle(a, b, c, backfaceCulling, target) { - - // Compute the offset origin, edges, and normal. - - // from https://github.com/pmjoniak/GeometricTools/blob/master/GTEngine/Include/Mathematics/GteIntrRay3Triangle3.h - - _edge1.subVectors(b, a); - _edge2.subVectors(c, a); - _normal$1.crossVectors(_edge1, _edge2); - - // Solve Q + t*D = b1*E1 + b2*E2 (Q = kDiff, D = ray direction, - // E1 = kEdge1, E2 = kEdge2, N = Cross(E1,E2)) by - // |Dot(D,N)|*b1 = sign(Dot(D,N))*Dot(D,Cross(Q,E2)) - // |Dot(D,N)|*b2 = sign(Dot(D,N))*Dot(D,Cross(E1,Q)) - // |Dot(D,N)|*t = -sign(Dot(D,N))*Dot(Q,N) - let DdN = this.direction.dot(_normal$1); - let sign; - - if (DdN > 0) { - - if (backfaceCulling) return null; - sign = 1; - - } else if (DdN < 0) { - - sign = - 1; - DdN = - DdN; - - } else { - - return null; - - } - - _diff.subVectors(this.origin, a); - const DdQxE2 = sign * this.direction.dot(_edge2.crossVectors(_diff, _edge2)); - - // b1 < 0, no intersection - if (DdQxE2 < 0) { - - return null; - - } - - const DdE1xQ = sign * this.direction.dot(_edge1.cross(_diff)); - - // b2 < 0, no intersection - if (DdE1xQ < 0) { - - return null; - - } - - // b1+b2 > 1, no intersection - if (DdQxE2 + DdE1xQ > DdN) { - - return null; - - } - - // Line intersects triangle, check if ray does. - const QdN = - sign * _diff.dot(_normal$1); - - // t < 0, no intersection - if (QdN < 0) { - - return null; - - } - - // Ray intersects triangle. 
- return this.at(QdN / DdN, target); - - } - - applyMatrix4(matrix4) { - - this.origin.applyMatrix4(matrix4); - this.direction.transformDirection(matrix4); - - return this; - - } - - equals(ray) { - - return ray.origin.equals(this.origin) && ray.direction.equals(this.direction); - - } - - clone() { - - return new this.constructor().copy(this); - - } - -} - -class Matrix4 { - - constructor() { - - Matrix4.prototype.isMatrix4 = true; - - this.elements = [ - - 1, 0, 0, 0, - 0, 1, 0, 0, - 0, 0, 1, 0, - 0, 0, 0, 1 - - ]; - - } - - set(n11, n12, n13, n14, n21, n22, n23, n24, n31, n32, n33, n34, n41, n42, n43, n44) { - - const te = this.elements; - - te[0] = n11; te[4] = n12; te[8] = n13; te[12] = n14; - te[1] = n21; te[5] = n22; te[9] = n23; te[13] = n24; - te[2] = n31; te[6] = n32; te[10] = n33; te[14] = n34; - te[3] = n41; te[7] = n42; te[11] = n43; te[15] = n44; - - return this; - - } - - identity() { - - this.set( - - 1, 0, 0, 0, - 0, 1, 0, 0, - 0, 0, 1, 0, - 0, 0, 0, 1 - - ); - - return this; - - } - - clone() { - - return new Matrix4().fromArray(this.elements); - - } - - copy(m) { - - const te = this.elements; - const me = m.elements; - - te[0] = me[0]; te[1] = me[1]; te[2] = me[2]; te[3] = me[3]; - te[4] = me[4]; te[5] = me[5]; te[6] = me[6]; te[7] = me[7]; - te[8] = me[8]; te[9] = me[9]; te[10] = me[10]; te[11] = me[11]; - te[12] = me[12]; te[13] = me[13]; te[14] = me[14]; te[15] = me[15]; - - return this; - - } - - copyPosition(m) { - - const te = this.elements, me = m.elements; - - te[12] = me[12]; - te[13] = me[13]; - te[14] = me[14]; - - return this; - - } - - setFromMatrix3(m) { - - const me = m.elements; - - this.set( - - me[0], me[3], me[6], 0, - me[1], me[4], me[7], 0, - me[2], me[5], me[8], 0, - 0, 0, 0, 1 - - ); - - return this; - - } - - extractBasis(xAxis, yAxis, zAxis) { - - xAxis.setFromMatrixColumn(this, 0); - yAxis.setFromMatrixColumn(this, 1); - zAxis.setFromMatrixColumn(this, 2); - - return this; - - } - - makeBasis(xAxis, yAxis, zAxis) { - - this.set( - xAxis.x, yAxis.x, zAxis.x, 0, - xAxis.y, yAxis.y, zAxis.y, 0, - xAxis.z, yAxis.z, zAxis.z, 0, - 0, 0, 0, 1 - ); - - return this; - - } - - extractRotation(m) { - - // this method does not support reflection matrices - - const te = this.elements; - const me = m.elements; - - const scaleX = 1 / _v1$5.setFromMatrixColumn(m, 0).length(); - const scaleY = 1 / _v1$5.setFromMatrixColumn(m, 1).length(); - const scaleZ = 1 / _v1$5.setFromMatrixColumn(m, 2).length(); - - te[0] = me[0] * scaleX; - te[1] = me[1] * scaleX; - te[2] = me[2] * scaleX; - te[3] = 0; - - te[4] = me[4] * scaleY; - te[5] = me[5] * scaleY; - te[6] = me[6] * scaleY; - te[7] = 0; - - te[8] = me[8] * scaleZ; - te[9] = me[9] * scaleZ; - te[10] = me[10] * scaleZ; - te[11] = 0; - - te[12] = 0; - te[13] = 0; - te[14] = 0; - te[15] = 1; - - return this; - - } - - makeRotationFromEuler(euler) { - - const te = this.elements; - - const x = euler.x, y = euler.y, z = euler.z; - const a = Math.cos(x), b = Math.sin(x); - const c = Math.cos(y), d = Math.sin(y); - const e = Math.cos(z), f = Math.sin(z); - - if (euler.order === 'XYZ') { - - const ae = a * e, af = a * f, be = b * e, bf = b * f; - - te[0] = c * e; - te[4] = - c * f; - te[8] = d; - - te[1] = af + be * d; - te[5] = ae - bf * d; - te[9] = - b * c; - - te[2] = bf - ae * d; - te[6] = be + af * d; - te[10] = a * c; - - } else if (euler.order === 'YXZ') { - - const ce = c * e, cf = c * f, de = d * e, df = d * f; - - te[0] = ce + df * b; - te[4] = de * b - cf; - te[8] = a * d; - - te[1] = a * f; - te[5] = a * e; - 
te[9] = - b; - - te[2] = cf * b - de; - te[6] = df + ce * b; - te[10] = a * c; - - } else if (euler.order === 'ZXY') { - - const ce = c * e, cf = c * f, de = d * e, df = d * f; - - te[0] = ce - df * b; - te[4] = - a * f; - te[8] = de + cf * b; - - te[1] = cf + de * b; - te[5] = a * e; - te[9] = df - ce * b; - - te[2] = - a * d; - te[6] = b; - te[10] = a * c; - - } else if (euler.order === 'ZYX') { - - const ae = a * e, af = a * f, be = b * e, bf = b * f; - - te[0] = c * e; - te[4] = be * d - af; - te[8] = ae * d + bf; - - te[1] = c * f; - te[5] = bf * d + ae; - te[9] = af * d - be; - - te[2] = - d; - te[6] = b * c; - te[10] = a * c; - - } else if (euler.order === 'YZX') { - - const ac = a * c, ad = a * d, bc = b * c, bd = b * d; - - te[0] = c * e; - te[4] = bd - ac * f; - te[8] = bc * f + ad; - - te[1] = f; - te[5] = a * e; - te[9] = - b * e; - - te[2] = - d * e; - te[6] = ad * f + bc; - te[10] = ac - bd * f; - - } else if (euler.order === 'XZY') { - - const ac = a * c, ad = a * d, bc = b * c, bd = b * d; - - te[0] = c * e; - te[4] = - f; - te[8] = d * e; - - te[1] = ac * f + bd; - te[5] = a * e; - te[9] = ad * f - bc; - - te[2] = bc * f - ad; - te[6] = b * e; - te[10] = bd * f + ac; - - } - - // bottom row - te[3] = 0; - te[7] = 0; - te[11] = 0; - - // last column - te[12] = 0; - te[13] = 0; - te[14] = 0; - te[15] = 1; - - return this; - - } - - makeRotationFromQuaternion(q) { - - return this.compose(_zero, q, _one); - - } - - lookAt(eye, target, up) { - - const te = this.elements; - - _z.subVectors(eye, target); - - if (_z.lengthSq() === 0) { - - // eye and target are in the same position - - _z.z = 1; - - } - - _z.normalize(); - _x.crossVectors(up, _z); - - if (_x.lengthSq() === 0) { - - // up and z are parallel - - if (Math.abs(up.z) === 1) { - - _z.x += 0.0001; - - } else { - - _z.z += 0.0001; - - } - - _z.normalize(); - _x.crossVectors(up, _z); - - } - - _x.normalize(); - _y.crossVectors(_z, _x); - - te[0] = _x.x; te[4] = _y.x; te[8] = _z.x; - te[1] = _x.y; te[5] = _y.y; te[9] = _z.y; - te[2] = _x.z; te[6] = _y.z; te[10] = _z.z; - - return this; - - } - - multiply(m) { - - return this.multiplyMatrices(this, m); - - } - - premultiply(m) { - - return this.multiplyMatrices(m, this); - - } - - multiplyMatrices(a, b) { - - const ae = a.elements; - const be = b.elements; - const te = this.elements; - - const a11 = ae[0], a12 = ae[4], a13 = ae[8], a14 = ae[12]; - const a21 = ae[1], a22 = ae[5], a23 = ae[9], a24 = ae[13]; - const a31 = ae[2], a32 = ae[6], a33 = ae[10], a34 = ae[14]; - const a41 = ae[3], a42 = ae[7], a43 = ae[11], a44 = ae[15]; - - const b11 = be[0], b12 = be[4], b13 = be[8], b14 = be[12]; - const b21 = be[1], b22 = be[5], b23 = be[9], b24 = be[13]; - const b31 = be[2], b32 = be[6], b33 = be[10], b34 = be[14]; - const b41 = be[3], b42 = be[7], b43 = be[11], b44 = be[15]; - - te[0] = a11 * b11 + a12 * b21 + a13 * b31 + a14 * b41; - te[4] = a11 * b12 + a12 * b22 + a13 * b32 + a14 * b42; - te[8] = a11 * b13 + a12 * b23 + a13 * b33 + a14 * b43; - te[12] = a11 * b14 + a12 * b24 + a13 * b34 + a14 * b44; - - te[1] = a21 * b11 + a22 * b21 + a23 * b31 + a24 * b41; - te[5] = a21 * b12 + a22 * b22 + a23 * b32 + a24 * b42; - te[9] = a21 * b13 + a22 * b23 + a23 * b33 + a24 * b43; - te[13] = a21 * b14 + a22 * b24 + a23 * b34 + a24 * b44; - - te[2] = a31 * b11 + a32 * b21 + a33 * b31 + a34 * b41; - te[6] = a31 * b12 + a32 * b22 + a33 * b32 + a34 * b42; - te[10] = a31 * b13 + a32 * b23 + a33 * b33 + a34 * b43; - te[14] = a31 * b14 + a32 * b24 + a33 * b34 + a34 * b44; - - te[3] = a41 * b11 + 
a42 * b21 + a43 * b31 + a44 * b41; - te[7] = a41 * b12 + a42 * b22 + a43 * b32 + a44 * b42; - te[11] = a41 * b13 + a42 * b23 + a43 * b33 + a44 * b43; - te[15] = a41 * b14 + a42 * b24 + a43 * b34 + a44 * b44; - - return this; - - } - - multiplyScalar(s) { - - const te = this.elements; - - te[0] *= s; te[4] *= s; te[8] *= s; te[12] *= s; - te[1] *= s; te[5] *= s; te[9] *= s; te[13] *= s; - te[2] *= s; te[6] *= s; te[10] *= s; te[14] *= s; - te[3] *= s; te[7] *= s; te[11] *= s; te[15] *= s; - - return this; - - } - - determinant() { - - const te = this.elements; - - const n11 = te[0], n12 = te[4], n13 = te[8], n14 = te[12]; - const n21 = te[1], n22 = te[5], n23 = te[9], n24 = te[13]; - const n31 = te[2], n32 = te[6], n33 = te[10], n34 = te[14]; - const n41 = te[3], n42 = te[7], n43 = te[11], n44 = te[15]; - - //TODO: make this more efficient - //( based on http://www.euclideanspace.com/maths/algebra/matrix/functions/inverse/fourD/index.htm ) - - return ( - n41 * ( - + n14 * n23 * n32 - - n13 * n24 * n32 - - n14 * n22 * n33 - + n12 * n24 * n33 - + n13 * n22 * n34 - - n12 * n23 * n34 - ) + - n42 * ( - + n11 * n23 * n34 - - n11 * n24 * n33 - + n14 * n21 * n33 - - n13 * n21 * n34 - + n13 * n24 * n31 - - n14 * n23 * n31 - ) + - n43 * ( - + n11 * n24 * n32 - - n11 * n22 * n34 - - n14 * n21 * n32 - + n12 * n21 * n34 - + n14 * n22 * n31 - - n12 * n24 * n31 - ) + - n44 * ( - - n13 * n22 * n31 - - n11 * n23 * n32 - + n11 * n22 * n33 - + n13 * n21 * n32 - - n12 * n21 * n33 - + n12 * n23 * n31 - ) - - ); - - } - - transpose() { - - const te = this.elements; - let tmp; - - tmp = te[1]; te[1] = te[4]; te[4] = tmp; - tmp = te[2]; te[2] = te[8]; te[8] = tmp; - tmp = te[6]; te[6] = te[9]; te[9] = tmp; - - tmp = te[3]; te[3] = te[12]; te[12] = tmp; - tmp = te[7]; te[7] = te[13]; te[13] = tmp; - tmp = te[11]; te[11] = te[14]; te[14] = tmp; - - return this; - - } - - setPosition(x, y, z) { - - const te = this.elements; - - if (x.isVector3) { - - te[12] = x.x; - te[13] = x.y; - te[14] = x.z; - - } else { - - te[12] = x; - te[13] = y; - te[14] = z; - - } - - return this; - - } - - invert() { - - // based on http://www.euclideanspace.com/maths/algebra/matrix/functions/inverse/fourD/index.htm - const te = this.elements, - - n11 = te[0], n21 = te[1], n31 = te[2], n41 = te[3], - n12 = te[4], n22 = te[5], n32 = te[6], n42 = te[7], - n13 = te[8], n23 = te[9], n33 = te[10], n43 = te[11], - n14 = te[12], n24 = te[13], n34 = te[14], n44 = te[15], - - t11 = n23 * n34 * n42 - n24 * n33 * n42 + n24 * n32 * n43 - n22 * n34 * n43 - n23 * n32 * n44 + n22 * n33 * n44, - t12 = n14 * n33 * n42 - n13 * n34 * n42 - n14 * n32 * n43 + n12 * n34 * n43 + n13 * n32 * n44 - n12 * n33 * n44, - t13 = n13 * n24 * n42 - n14 * n23 * n42 + n14 * n22 * n43 - n12 * n24 * n43 - n13 * n22 * n44 + n12 * n23 * n44, - t14 = n14 * n23 * n32 - n13 * n24 * n32 - n14 * n22 * n33 + n12 * n24 * n33 + n13 * n22 * n34 - n12 * n23 * n34; - - const det = n11 * t11 + n21 * t12 + n31 * t13 + n41 * t14; - - if (det === 0) return this.set(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0); - - const detInv = 1 / det; - - te[0] = t11 * detInv; - te[1] = (n24 * n33 * n41 - n23 * n34 * n41 - n24 * n31 * n43 + n21 * n34 * n43 + n23 * n31 * n44 - n21 * n33 * n44) * detInv; - te[2] = (n22 * n34 * n41 - n24 * n32 * n41 + n24 * n31 * n42 - n21 * n34 * n42 - n22 * n31 * n44 + n21 * n32 * n44) * detInv; - te[3] = (n23 * n32 * n41 - n22 * n33 * n41 - n23 * n31 * n42 + n21 * n33 * n42 + n22 * n31 * n43 - n21 * n32 * n43) * detInv; - - te[4] = t12 * detInv; - te[5] = (n13 * n34 * 
n41 - n14 * n33 * n41 + n14 * n31 * n43 - n11 * n34 * n43 - n13 * n31 * n44 + n11 * n33 * n44) * detInv; - te[6] = (n14 * n32 * n41 - n12 * n34 * n41 - n14 * n31 * n42 + n11 * n34 * n42 + n12 * n31 * n44 - n11 * n32 * n44) * detInv; - te[7] = (n12 * n33 * n41 - n13 * n32 * n41 + n13 * n31 * n42 - n11 * n33 * n42 - n12 * n31 * n43 + n11 * n32 * n43) * detInv; - - te[8] = t13 * detInv; - te[9] = (n14 * n23 * n41 - n13 * n24 * n41 - n14 * n21 * n43 + n11 * n24 * n43 + n13 * n21 * n44 - n11 * n23 * n44) * detInv; - te[10] = (n12 * n24 * n41 - n14 * n22 * n41 + n14 * n21 * n42 - n11 * n24 * n42 - n12 * n21 * n44 + n11 * n22 * n44) * detInv; - te[11] = (n13 * n22 * n41 - n12 * n23 * n41 - n13 * n21 * n42 + n11 * n23 * n42 + n12 * n21 * n43 - n11 * n22 * n43) * detInv; - - te[12] = t14 * detInv; - te[13] = (n13 * n24 * n31 - n14 * n23 * n31 + n14 * n21 * n33 - n11 * n24 * n33 - n13 * n21 * n34 + n11 * n23 * n34) * detInv; - te[14] = (n14 * n22 * n31 - n12 * n24 * n31 - n14 * n21 * n32 + n11 * n24 * n32 + n12 * n21 * n34 - n11 * n22 * n34) * detInv; - te[15] = (n12 * n23 * n31 - n13 * n22 * n31 + n13 * n21 * n32 - n11 * n23 * n32 - n12 * n21 * n33 + n11 * n22 * n33) * detInv; - - return this; - - } - - scale(v) { - - const te = this.elements; - const x = v.x, y = v.y, z = v.z; - - te[0] *= x; te[4] *= y; te[8] *= z; - te[1] *= x; te[5] *= y; te[9] *= z; - te[2] *= x; te[6] *= y; te[10] *= z; - te[3] *= x; te[7] *= y; te[11] *= z; - - return this; - - } - - getMaxScaleOnAxis() { - - const te = this.elements; - - const scaleXSq = te[0] * te[0] + te[1] * te[1] + te[2] * te[2]; - const scaleYSq = te[4] * te[4] + te[5] * te[5] + te[6] * te[6]; - const scaleZSq = te[8] * te[8] + te[9] * te[9] + te[10] * te[10]; - - return Math.sqrt(Math.max(scaleXSq, scaleYSq, scaleZSq)); - - } - - makeTranslation(x, y, z) { - - this.set( - - 1, 0, 0, x, - 0, 1, 0, y, - 0, 0, 1, z, - 0, 0, 0, 1 - - ); - - return this; - - } - - makeRotationX(theta) { - - const c = Math.cos(theta), s = Math.sin(theta); - - this.set( - - 1, 0, 0, 0, - 0, c, - s, 0, - 0, s, c, 0, - 0, 0, 0, 1 - - ); - - return this; - - } - - makeRotationY(theta) { - - const c = Math.cos(theta), s = Math.sin(theta); - - this.set( - - c, 0, s, 0, - 0, 1, 0, 0, - - s, 0, c, 0, - 0, 0, 0, 1 - - ); - - return this; - - } - - makeRotationZ(theta) { - - const c = Math.cos(theta), s = Math.sin(theta); - - this.set( - - c, - s, 0, 0, - s, c, 0, 0, - 0, 0, 1, 0, - 0, 0, 0, 1 - - ); - - return this; - - } - - makeRotationAxis(axis, angle) { - - // Based on http://www.gamedev.net/reference/articles/article1199.asp - - const c = Math.cos(angle); - const s = Math.sin(angle); - const t = 1 - c; - const x = axis.x, y = axis.y, z = axis.z; - const tx = t * x, ty = t * y; - - this.set( - - tx * x + c, tx * y - s * z, tx * z + s * y, 0, - tx * y + s * z, ty * y + c, ty * z - s * x, 0, - tx * z - s * y, ty * z + s * x, t * z * z + c, 0, - 0, 0, 0, 1 - - ); - - return this; - - } - - makeScale(x, y, z) { - - this.set( - - x, 0, 0, 0, - 0, y, 0, 0, - 0, 0, z, 0, - 0, 0, 0, 1 - - ); - - return this; - - } - - makeShear(xy, xz, yx, yz, zx, zy) { - - this.set( - - 1, yx, zx, 0, - xy, 1, zy, 0, - xz, yz, 1, 0, - 0, 0, 0, 1 - - ); - - return this; - - } - - compose(position, quaternion, scale) { - - const te = this.elements; - - const x = quaternion._x, y = quaternion._y, z = quaternion._z, w = quaternion._w; - const x2 = x + x, y2 = y + y, z2 = z + z; - const xx = x * x2, xy = x * y2, xz = x * z2; - const yy = y * y2, yz = y * z2, zz = z * z2; - const wx = w * x2, wy = w * y2, wz 
= w * z2; - - const sx = scale.x, sy = scale.y, sz = scale.z; - - te[0] = (1 - (yy + zz)) * sx; - te[1] = (xy + wz) * sx; - te[2] = (xz - wy) * sx; - te[3] = 0; - - te[4] = (xy - wz) * sy; - te[5] = (1 - (xx + zz)) * sy; - te[6] = (yz + wx) * sy; - te[7] = 0; - - te[8] = (xz + wy) * sz; - te[9] = (yz - wx) * sz; - te[10] = (1 - (xx + yy)) * sz; - te[11] = 0; - - te[12] = position.x; - te[13] = position.y; - te[14] = position.z; - te[15] = 1; - - return this; - - } - - decompose(position, quaternion, scale) { - - const te = this.elements; - - let sx = _v1$5.set(te[0], te[1], te[2]).length(); - const sy = _v1$5.set(te[4], te[5], te[6]).length(); - const sz = _v1$5.set(te[8], te[9], te[10]).length(); - - // if determine is negative, we need to invert one scale - const det = this.determinant(); - if (det < 0) sx = - sx; - - position.x = te[12]; - position.y = te[13]; - position.z = te[14]; - - // scale the rotation part - _m1$2.copy(this); - - const invSX = 1 / sx; - const invSY = 1 / sy; - const invSZ = 1 / sz; - - _m1$2.elements[0] *= invSX; - _m1$2.elements[1] *= invSX; - _m1$2.elements[2] *= invSX; - - _m1$2.elements[4] *= invSY; - _m1$2.elements[5] *= invSY; - _m1$2.elements[6] *= invSY; - - _m1$2.elements[8] *= invSZ; - _m1$2.elements[9] *= invSZ; - _m1$2.elements[10] *= invSZ; - - quaternion.setFromRotationMatrix(_m1$2); - - scale.x = sx; - scale.y = sy; - scale.z = sz; - - return this; - - } - - makePerspective(left, right, top, bottom, near, far) { - - const te = this.elements; - const x = 2 * near / (right - left); - const y = 2 * near / (top - bottom); - - const a = (right + left) / (right - left); - const b = (top + bottom) / (top - bottom); - const c = - (far + near) / (far - near); - const d = - 2 * far * near / (far - near); - - te[0] = x; te[4] = 0; te[8] = a; te[12] = 0; - te[1] = 0; te[5] = y; te[9] = b; te[13] = 0; - te[2] = 0; te[6] = 0; te[10] = c; te[14] = d; - te[3] = 0; te[7] = 0; te[11] = - 1; te[15] = 0; - - return this; - - } - - makeOrthographic(left, right, top, bottom, near, far) { - - const te = this.elements; - const w = 1.0 / (right - left); - const h = 1.0 / (top - bottom); - const p = 1.0 / (far - near); - - const x = (right + left) * w; - const y = (top + bottom) * h; - const z = (far + near) * p; - - te[0] = 2 * w; te[4] = 0; te[8] = 0; te[12] = - x; - te[1] = 0; te[5] = 2 * h; te[9] = 0; te[13] = - y; - te[2] = 0; te[6] = 0; te[10] = - 2 * p; te[14] = - z; - te[3] = 0; te[7] = 0; te[11] = 0; te[15] = 1; - - return this; - - } - - equals(matrix) { - - const te = this.elements; - const me = matrix.elements; - - for (let i = 0; i < 16; i++) { - - if (te[i] !== me[i]) return false; - - } - - return true; - - } - - fromArray(array, offset = 0) { - - for (let i = 0; i < 16; i++) { - - this.elements[i] = array[i + offset]; - - } - - return this; - - } - - toArray(array = [], offset = 0) { - - const te = this.elements; - - array[offset] = te[0]; - array[offset + 1] = te[1]; - array[offset + 2] = te[2]; - array[offset + 3] = te[3]; - - array[offset + 4] = te[4]; - array[offset + 5] = te[5]; - array[offset + 6] = te[6]; - array[offset + 7] = te[7]; - - array[offset + 8] = te[8]; - array[offset + 9] = te[9]; - array[offset + 10] = te[10]; - array[offset + 11] = te[11]; - - array[offset + 12] = te[12]; - array[offset + 13] = te[13]; - array[offset + 14] = te[14]; - array[offset + 15] = te[15]; - - return array; - - } - -} - -const _v1$5 = /*@__PURE__*/ new Vector3(); -const _m1$2 = /*@__PURE__*/ new Matrix4(); -const _zero = /*@__PURE__*/ new Vector3(0, 0, 0); -const 
_one = /*@__PURE__*/ new Vector3(1, 1, 1); -const _x = /*@__PURE__*/ new Vector3(); -const _y = /*@__PURE__*/ new Vector3(); -const _z = /*@__PURE__*/ new Vector3(); - -const _matrix$1 = /*@__PURE__*/ new Matrix4(); -const _quaternion$3 = /*@__PURE__*/ new Quaternion(); - -class Euler { - - constructor(x = 0, y = 0, z = 0, order = Euler.DEFAULT_ORDER) { - - this.isEuler = true; - - this._x = x; - this._y = y; - this._z = z; - this._order = order; - - } - - get x() { - - return this._x; - - } - - set x(value) { - - this._x = value; - this._onChangeCallback(); - - } - - get y() { - - return this._y; - - } - - set y(value) { - - this._y = value; - this._onChangeCallback(); - - } - - get z() { - - return this._z; - - } - - set z(value) { - - this._z = value; - this._onChangeCallback(); - - } - - get order() { - - return this._order; - - } - - set order(value) { - - this._order = value; - this._onChangeCallback(); - - } - - set(x, y, z, order = this._order) { - - this._x = x; - this._y = y; - this._z = z; - this._order = order; - - this._onChangeCallback(); - - return this; - - } - - clone() { - - return new this.constructor(this._x, this._y, this._z, this._order); - - } - - copy(euler) { - - this._x = euler._x; - this._y = euler._y; - this._z = euler._z; - this._order = euler._order; - - this._onChangeCallback(); - - return this; - - } - - setFromRotationMatrix(m, order = this._order, update = true) { - - // assumes the upper 3x3 of m is a pure rotation matrix (i.e, unscaled) - - const te = m.elements; - const m11 = te[0], m12 = te[4], m13 = te[8]; - const m21 = te[1], m22 = te[5], m23 = te[9]; - const m31 = te[2], m32 = te[6], m33 = te[10]; - - switch (order) { - - case 'XYZ': - - this._y = Math.asin(clamp(m13, - 1, 1)); - - if (Math.abs(m13) < 0.9999999) { - - this._x = Math.atan2(- m23, m33); - this._z = Math.atan2(- m12, m11); - - } else { - - this._x = Math.atan2(m32, m22); - this._z = 0; - - } - - break; - - case 'YXZ': - - this._x = Math.asin(- clamp(m23, - 1, 1)); - - if (Math.abs(m23) < 0.9999999) { - - this._y = Math.atan2(m13, m33); - this._z = Math.atan2(m21, m22); - - } else { - - this._y = Math.atan2(- m31, m11); - this._z = 0; - - } - - break; - - case 'ZXY': - - this._x = Math.asin(clamp(m32, - 1, 1)); - - if (Math.abs(m32) < 0.9999999) { - - this._y = Math.atan2(- m31, m33); - this._z = Math.atan2(- m12, m22); - - } else { - - this._y = 0; - this._z = Math.atan2(m21, m11); - - } - - break; - - case 'ZYX': - - this._y = Math.asin(- clamp(m31, - 1, 1)); - - if (Math.abs(m31) < 0.9999999) { - - this._x = Math.atan2(m32, m33); - this._z = Math.atan2(m21, m11); - - } else { - - this._x = 0; - this._z = Math.atan2(- m12, m22); - - } - - break; - - case 'YZX': - - this._z = Math.asin(clamp(m21, - 1, 1)); - - if (Math.abs(m21) < 0.9999999) { - - this._x = Math.atan2(- m23, m22); - this._y = Math.atan2(- m31, m11); - - } else { - - this._x = 0; - this._y = Math.atan2(m13, m33); - - } - - break; - - case 'XZY': - - this._z = Math.asin(- clamp(m12, - 1, 1)); - - if (Math.abs(m12) < 0.9999999) { - - this._x = Math.atan2(m32, m22); - this._y = Math.atan2(m13, m11); - - } else { - - this._x = Math.atan2(- m23, m33); - this._y = 0; - - } - - break; - - default: - - console.warn('THREE.Euler: .setFromRotationMatrix() encountered an unknown order: ' + order); - - } - - this._order = order; - - if (update === true) this._onChangeCallback(); - - return this; - - } - - setFromQuaternion(q, order, update) { - - _matrix$1.makeRotationFromQuaternion(q); - - return 
this.setFromRotationMatrix(_matrix$1, order, update); - - } - - setFromVector3(v, order = this._order) { - - return this.set(v.x, v.y, v.z, order); - - } - - reorder(newOrder) { - - // WARNING: this discards revolution information -bhouston - - _quaternion$3.setFromEuler(this); - - return this.setFromQuaternion(_quaternion$3, newOrder); - - } - - equals(euler) { - - return (euler._x === this._x) && (euler._y === this._y) && (euler._z === this._z) && (euler._order === this._order); - - } - - fromArray(array) { - - this._x = array[0]; - this._y = array[1]; - this._z = array[2]; - if (array[3] !== undefined) this._order = array[3]; - - this._onChangeCallback(); - - return this; - - } - - toArray(array = [], offset = 0) { - - array[offset] = this._x; - array[offset + 1] = this._y; - array[offset + 2] = this._z; - array[offset + 3] = this._order; - - return array; - - } - - _onChange(callback) { - - this._onChangeCallback = callback; - - return this; - - } - - _onChangeCallback() { } - - *[Symbol.iterator]() { - - yield this._x; - yield this._y; - yield this._z; - yield this._order; - - } - -} - -Euler.DEFAULT_ORDER = 'XYZ'; - -class Layers { - - constructor() { - - this.mask = 1 | 0; - - } - - set(channel) { - - this.mask = (1 << channel | 0) >>> 0; - - } - - enable(channel) { - - this.mask |= 1 << channel | 0; - - } - - enableAll() { - - this.mask = 0xffffffff | 0; - - } - - toggle(channel) { - - this.mask ^= 1 << channel | 0; - - } - - disable(channel) { - - this.mask &= ~(1 << channel | 0); - - } - - disableAll() { - - this.mask = 0; - - } - - test(layers) { - - return (this.mask & layers.mask) !== 0; - - } - - isEnabled(channel) { - - return (this.mask & (1 << channel | 0)) !== 0; - - } - -} - -let _object3DId = 0; - -const _v1$4 = /*@__PURE__*/ new Vector3(); -const _q1 = /*@__PURE__*/ new Quaternion(); -const _m1$1 = /*@__PURE__*/ new Matrix4(); -const _target = /*@__PURE__*/ new Vector3(); - -const _position$3 = /*@__PURE__*/ new Vector3(); -const _scale$2 = /*@__PURE__*/ new Vector3(); -const _quaternion$2 = /*@__PURE__*/ new Quaternion(); - -const _xAxis = /*@__PURE__*/ new Vector3(1, 0, 0); -const _yAxis = /*@__PURE__*/ new Vector3(0, 1, 0); -const _zAxis = /*@__PURE__*/ new Vector3(0, 0, 1); - -const _addedEvent = { type: 'added' }; -const _removedEvent = { type: 'removed' }; - -class Object3D extends EventDispatcher { - - constructor() { - - super(); - - this.isObject3D = true; - - Object.defineProperty(this, 'id', { value: _object3DId++ }); - - this.uuid = generateUUID(); - - this.name = ''; - this.type = 'Object3D'; - - this.parent = null; - this.children = []; - - this.up = Object3D.DEFAULT_UP.clone(); - - const position = new Vector3(); - const rotation = new Euler(); - const quaternion = new Quaternion(); - const scale = new Vector3(1, 1, 1); - - function onRotationChange() { - - quaternion.setFromEuler(rotation, false); - - } - - function onQuaternionChange() { - - rotation.setFromQuaternion(quaternion, undefined, false); - - } - - rotation._onChange(onRotationChange); - quaternion._onChange(onQuaternionChange); - - Object.defineProperties(this, { - position: { - configurable: true, - enumerable: true, - value: position - }, - rotation: { - configurable: true, - enumerable: true, - value: rotation - }, - quaternion: { - configurable: true, - enumerable: true, - value: quaternion - }, - scale: { - configurable: true, - enumerable: true, - value: scale - }, - modelViewMatrix: { - value: new Matrix4() - }, - normalMatrix: { - value: new Matrix3() - } - }); - - this.matrix = new 
Matrix4(); - this.matrixWorld = new Matrix4(); - - this.matrixAutoUpdate = Object3D.DEFAULT_MATRIX_AUTO_UPDATE; - this.matrixWorldNeedsUpdate = false; - - this.matrixWorldAutoUpdate = Object3D.DEFAULT_MATRIX_WORLD_AUTO_UPDATE; // checked by the renderer - - this.layers = new Layers(); - this.visible = true; - - this.castShadow = false; - this.receiveShadow = false; - - this.frustumCulled = true; - this.renderOrder = 0; - - this.animations = []; - - this.userData = {}; - - } - - onBeforeRender( /* renderer, scene, camera, geometry, material, group */) { } - - onAfterRender( /* renderer, scene, camera, geometry, material, group */) { } - - applyMatrix4(matrix) { - - if (this.matrixAutoUpdate) this.updateMatrix(); - - this.matrix.premultiply(matrix); - - this.matrix.decompose(this.position, this.quaternion, this.scale); - - } - - applyQuaternion(q) { - - this.quaternion.premultiply(q); - - return this; - - } - - setRotationFromAxisAngle(axis, angle) { - - // assumes axis is normalized - - this.quaternion.setFromAxisAngle(axis, angle); - - } - - setRotationFromEuler(euler) { - - this.quaternion.setFromEuler(euler, true); - - } - - setRotationFromMatrix(m) { - - // assumes the upper 3x3 of m is a pure rotation matrix (i.e, unscaled) - - this.quaternion.setFromRotationMatrix(m); - - } - - setRotationFromQuaternion(q) { - - // assumes q is normalized - - this.quaternion.copy(q); - - } - - rotateOnAxis(axis, angle) { - - // rotate object on axis in object space - // axis is assumed to be normalized - - _q1.setFromAxisAngle(axis, angle); - - this.quaternion.multiply(_q1); - - return this; - - } - - rotateOnWorldAxis(axis, angle) { - - // rotate object on axis in world space - // axis is assumed to be normalized - // method assumes no rotated parent - - _q1.setFromAxisAngle(axis, angle); - - this.quaternion.premultiply(_q1); - - return this; - - } - - rotateX(angle) { - - return this.rotateOnAxis(_xAxis, angle); - - } - - rotateY(angle) { - - return this.rotateOnAxis(_yAxis, angle); - - } - - rotateZ(angle) { - - return this.rotateOnAxis(_zAxis, angle); - - } - - translateOnAxis(axis, distance) { - - // translate object by distance along axis in object space - // axis is assumed to be normalized - - _v1$4.copy(axis).applyQuaternion(this.quaternion); - - this.position.add(_v1$4.multiplyScalar(distance)); - - return this; - - } - - translateX(distance) { - - return this.translateOnAxis(_xAxis, distance); - - } - - translateY(distance) { - - return this.translateOnAxis(_yAxis, distance); - - } - - translateZ(distance) { - - return this.translateOnAxis(_zAxis, distance); - - } - - localToWorld(vector) { - - this.updateWorldMatrix(true, false); - - return vector.applyMatrix4(this.matrixWorld); - - } - - worldToLocal(vector) { - - this.updateWorldMatrix(true, false); - - return vector.applyMatrix4(_m1$1.copy(this.matrixWorld).invert()); - - } - - lookAt(x, y, z) { - - // This method does not support objects having non-uniformly-scaled parent(s) - - if (x.isVector3) { - - _target.copy(x); - - } else { - - _target.set(x, y, z); - - } - - const parent = this.parent; - - this.updateWorldMatrix(true, false); - - _position$3.setFromMatrixPosition(this.matrixWorld); - - if (this.isCamera || this.isLight) { - - _m1$1.lookAt(_position$3, _target, this.up); - - } else { - - _m1$1.lookAt(_target, _position$3, this.up); - - } - - this.quaternion.setFromRotationMatrix(_m1$1); - - if (parent) { - - _m1$1.extractRotation(parent.matrixWorld); - _q1.setFromRotationMatrix(_m1$1); - 
this.quaternion.premultiply(_q1.invert()); - - } - - } - - add(object) { - - if (arguments.length > 1) { - - for (let i = 0; i < arguments.length; i++) { - - this.add(arguments[i]); - - } - - return this; - - } - - if (object === this) { - - console.error('THREE.Object3D.add: object can\'t be added as a child of itself.', object); - return this; - - } - - if (object && object.isObject3D) { - - if (object.parent !== null) { - - object.parent.remove(object); - - } - - object.parent = this; - this.children.push(object); - - object.dispatchEvent(_addedEvent); - - } else { - - console.error('THREE.Object3D.add: object not an instance of THREE.Object3D.', object); - - } - - return this; - - } - - remove(object) { - - if (arguments.length > 1) { - - for (let i = 0; i < arguments.length; i++) { - - this.remove(arguments[i]); - - } - - return this; - - } - - const index = this.children.indexOf(object); - - if (index !== - 1) { - - object.parent = null; - this.children.splice(index, 1); - - object.dispatchEvent(_removedEvent); - - } - - return this; - - } - - removeFromParent() { - - const parent = this.parent; - - if (parent !== null) { - - parent.remove(this); - - } - - return this; - - } - - clear() { - - for (let i = 0; i < this.children.length; i++) { - - const object = this.children[i]; - - object.parent = null; - - object.dispatchEvent(_removedEvent); - - } - - this.children.length = 0; - - return this; - - - } - - attach(object) { - - // adds object as a child of this, while maintaining the object's world transform - - // Note: This method does not support scene graphs having non-uniformly-scaled nodes(s) - - this.updateWorldMatrix(true, false); - - _m1$1.copy(this.matrixWorld).invert(); - - if (object.parent !== null) { - - object.parent.updateWorldMatrix(true, false); - - _m1$1.multiply(object.parent.matrixWorld); - - } - - object.applyMatrix4(_m1$1); - - this.add(object); - - object.updateWorldMatrix(false, true); - - return this; - - } - - getObjectById(id) { - - return this.getObjectByProperty('id', id); - - } - - getObjectByName(name) { - - return this.getObjectByProperty('name', name); - - } - - getObjectByProperty(name, value) { - - if (this[name] === value) return this; - - for (let i = 0, l = this.children.length; i < l; i++) { - - const child = this.children[i]; - const object = child.getObjectByProperty(name, value); - - if (object !== undefined) { - - return object; - - } - - } - - return undefined; - - } - - getObjectsByProperty(name, value) { - - let result = []; - - if (this[name] === value) result.push(this); - - for (let i = 0, l = this.children.length; i < l; i++) { - - const childResult = this.children[i].getObjectsByProperty(name, value); - - if (childResult.length > 0) { - - result = result.concat(childResult); - - } - - } - - return result; - - } - - getWorldPosition(target) { - - this.updateWorldMatrix(true, false); - - return target.setFromMatrixPosition(this.matrixWorld); - - } - - getWorldQuaternion(target) { - - this.updateWorldMatrix(true, false); - - this.matrixWorld.decompose(_position$3, target, _scale$2); - - return target; - - } - - getWorldScale(target) { - - this.updateWorldMatrix(true, false); - - this.matrixWorld.decompose(_position$3, _quaternion$2, target); - - return target; - - } - - getWorldDirection(target) { - - this.updateWorldMatrix(true, false); - - const e = this.matrixWorld.elements; - - return target.set(e[8], e[9], e[10]).normalize(); - - } - - raycast( /* raycaster, intersects */) { } - - traverse(callback) { - - callback(this); - - const 
children = this.children; - - for (let i = 0, l = children.length; i < l; i++) { - - children[i].traverse(callback); - - } - - } - - traverseVisible(callback) { - - if (this.visible === false) return; - - callback(this); - - const children = this.children; - - for (let i = 0, l = children.length; i < l; i++) { - - children[i].traverseVisible(callback); - - } - - } - - traverseAncestors(callback) { - - const parent = this.parent; - - if (parent !== null) { - - callback(parent); - - parent.traverseAncestors(callback); - - } - - } - - updateMatrix() { - - this.matrix.compose(this.position, this.quaternion, this.scale); - - this.matrixWorldNeedsUpdate = true; - - } - - updateMatrixWorld(force) { - - if (this.matrixAutoUpdate) this.updateMatrix(); - - if (this.matrixWorldNeedsUpdate || force) { - - if (this.parent === null) { - - this.matrixWorld.copy(this.matrix); - - } else { - - this.matrixWorld.multiplyMatrices(this.parent.matrixWorld, this.matrix); - - } - - this.matrixWorldNeedsUpdate = false; - - force = true; - - } - - // update children - - const children = this.children; - - for (let i = 0, l = children.length; i < l; i++) { - - const child = children[i]; - - if (child.matrixWorldAutoUpdate === true || force === true) { - - child.updateMatrixWorld(force); - - } - - } - - } - - updateWorldMatrix(updateParents, updateChildren) { - - const parent = this.parent; - - if (updateParents === true && parent !== null && parent.matrixWorldAutoUpdate === true) { - - parent.updateWorldMatrix(true, false); - - } - - if (this.matrixAutoUpdate) this.updateMatrix(); - - if (this.parent === null) { - - this.matrixWorld.copy(this.matrix); - - } else { - - this.matrixWorld.multiplyMatrices(this.parent.matrixWorld, this.matrix); - - } - - // update children - - if (updateChildren === true) { - - const children = this.children; - - for (let i = 0, l = children.length; i < l; i++) { - - const child = children[i]; - - if (child.matrixWorldAutoUpdate === true) { - - child.updateWorldMatrix(false, true); - - } - - } - - } - - } - - toJSON(meta) { - - // meta is a string when called from JSON.stringify - const isRootObject = (meta === undefined || typeof meta === 'string'); - - const output = {}; - - // meta is a hash used to collect geometries, materials. - // not providing it implies that this is the root object - // being serialized. 
- if (isRootObject) { - - // initialize meta obj - meta = { - geometries: {}, - materials: {}, - textures: {}, - images: {}, - shapes: {}, - skeletons: {}, - animations: {}, - nodes: {} - }; - - output.metadata = { - version: 4.5, - type: 'Object', - generator: 'Object3D.toJSON' - }; - - } - - // standard Object3D serialization - - const object = {}; - - object.uuid = this.uuid; - object.type = this.type; - - if (this.name !== '') object.name = this.name; - if (this.castShadow === true) object.castShadow = true; - if (this.receiveShadow === true) object.receiveShadow = true; - if (this.visible === false) object.visible = false; - if (this.frustumCulled === false) object.frustumCulled = false; - if (this.renderOrder !== 0) object.renderOrder = this.renderOrder; - if (Object.keys(this.userData).length > 0) object.userData = this.userData; - - object.layers = this.layers.mask; - object.matrix = this.matrix.toArray(); - - if (this.matrixAutoUpdate === false) object.matrixAutoUpdate = false; - - // object specific properties - - if (this.isInstancedMesh) { - - object.type = 'InstancedMesh'; - object.count = this.count; - object.instanceMatrix = this.instanceMatrix.toJSON(); - if (this.instanceColor !== null) object.instanceColor = this.instanceColor.toJSON(); - - } - - // - - function serialize(library, element) { - - if (library[element.uuid] === undefined) { - - library[element.uuid] = element.toJSON(meta); - - } - - return element.uuid; - - } - - if (this.isScene) { - - if (this.background) { - - if (this.background.isColor) { - - object.background = this.background.toJSON(); - - } else if (this.background.isTexture) { - - object.background = this.background.toJSON(meta).uuid; - - } - - } - - if (this.environment && this.environment.isTexture && this.environment.isRenderTargetTexture !== true) { - - object.environment = this.environment.toJSON(meta).uuid; - - } - - } else if (this.isMesh || this.isLine || this.isPoints) { - - object.geometry = serialize(meta.geometries, this.geometry); - - const parameters = this.geometry.parameters; - - if (parameters !== undefined && parameters.shapes !== undefined) { - - const shapes = parameters.shapes; - - if (Array.isArray(shapes)) { - - for (let i = 0, l = shapes.length; i < l; i++) { - - const shape = shapes[i]; - - serialize(meta.shapes, shape); - - } - - } else { - - serialize(meta.shapes, shapes); - - } - - } - - } - - if (this.isSkinnedMesh) { - - object.bindMode = this.bindMode; - object.bindMatrix = this.bindMatrix.toArray(); - - if (this.skeleton !== undefined) { - - serialize(meta.skeletons, this.skeleton); - - object.skeleton = this.skeleton.uuid; - - } - - } - - if (this.material !== undefined) { - - if (Array.isArray(this.material)) { - - const uuids = []; - - for (let i = 0, l = this.material.length; i < l; i++) { - - uuids.push(serialize(meta.materials, this.material[i])); - - } - - object.material = uuids; - - } else { - - object.material = serialize(meta.materials, this.material); - - } - - } - - // - - if (this.children.length > 0) { - - object.children = []; - - for (let i = 0; i < this.children.length; i++) { - - object.children.push(this.children[i].toJSON(meta).object); - - } - - } - - // - - if (this.animations.length > 0) { - - object.animations = []; - - for (let i = 0; i < this.animations.length; i++) { - - const animation = this.animations[i]; - - object.animations.push(serialize(meta.animations, animation)); - - } - - } - - if (isRootObject) { - - const geometries = extractFromCache(meta.geometries); - const materials = 
extractFromCache(meta.materials); - const textures = extractFromCache(meta.textures); - const images = extractFromCache(meta.images); - const shapes = extractFromCache(meta.shapes); - const skeletons = extractFromCache(meta.skeletons); - const animations = extractFromCache(meta.animations); - const nodes = extractFromCache(meta.nodes); - - if (geometries.length > 0) output.geometries = geometries; - if (materials.length > 0) output.materials = materials; - if (textures.length > 0) output.textures = textures; - if (images.length > 0) output.images = images; - if (shapes.length > 0) output.shapes = shapes; - if (skeletons.length > 0) output.skeletons = skeletons; - if (animations.length > 0) output.animations = animations; - if (nodes.length > 0) output.nodes = nodes; - - } - - output.object = object; - - return output; - - // extract data from the cache hash - // remove metadata on each item - // and return as array - function extractFromCache(cache) { - - const values = []; - for (const key in cache) { - - const data = cache[key]; - delete data.metadata; - values.push(data); - - } - - return values; - - } - - } - - clone(recursive) { - - return new this.constructor().copy(this, recursive); - - } - - copy(source, recursive = true) { - - this.name = source.name; - - this.up.copy(source.up); - - this.position.copy(source.position); - this.rotation.order = source.rotation.order; - this.quaternion.copy(source.quaternion); - this.scale.copy(source.scale); - - this.matrix.copy(source.matrix); - this.matrixWorld.copy(source.matrixWorld); - - this.matrixAutoUpdate = source.matrixAutoUpdate; - this.matrixWorldNeedsUpdate = source.matrixWorldNeedsUpdate; - - this.matrixWorldAutoUpdate = source.matrixWorldAutoUpdate; - - this.layers.mask = source.layers.mask; - this.visible = source.visible; - - this.castShadow = source.castShadow; - this.receiveShadow = source.receiveShadow; - - this.frustumCulled = source.frustumCulled; - this.renderOrder = source.renderOrder; - - this.userData = JSON.parse(JSON.stringify(source.userData)); - - if (recursive === true) { - - for (let i = 0; i < source.children.length; i++) { - - const child = source.children[i]; - this.add(child.clone()); - - } - - } - - return this; - - } - -} - -Object3D.DEFAULT_UP = /*@__PURE__*/ new Vector3(0, 1, 0); -Object3D.DEFAULT_MATRIX_AUTO_UPDATE = true; -Object3D.DEFAULT_MATRIX_WORLD_AUTO_UPDATE = true; - -const _v0$1 = /*@__PURE__*/ new Vector3(); -const _v1$3 = /*@__PURE__*/ new Vector3(); -const _v2$2 = /*@__PURE__*/ new Vector3(); -const _v3$1 = /*@__PURE__*/ new Vector3(); - -const _vab = /*@__PURE__*/ new Vector3(); -const _vac = /*@__PURE__*/ new Vector3(); -const _vbc = /*@__PURE__*/ new Vector3(); -const _vap = /*@__PURE__*/ new Vector3(); -const _vbp = /*@__PURE__*/ new Vector3(); -const _vcp = /*@__PURE__*/ new Vector3(); - -class Triangle { - - constructor(a = new Vector3(), b = new Vector3(), c = new Vector3()) { - - this.a = a; - this.b = b; - this.c = c; - - } - - static getNormal(a, b, c, target) { - - target.subVectors(c, b); - _v0$1.subVectors(a, b); - target.cross(_v0$1); - - const targetLengthSq = target.lengthSq(); - if (targetLengthSq > 0) { - - return target.multiplyScalar(1 / Math.sqrt(targetLengthSq)); - - } - - return target.set(0, 0, 0); - - } - - // static/instance method to calculate barycentric coordinates - // based on: http://www.blackpawn.com/texts/pointinpoly/default.html - static getBarycoord(point, a, b, c, target) { - - _v0$1.subVectors(c, a); - _v1$3.subVectors(b, a); - _v2$2.subVectors(point, a); - - 
const dot00 = _v0$1.dot(_v0$1); - const dot01 = _v0$1.dot(_v1$3); - const dot02 = _v0$1.dot(_v2$2); - const dot11 = _v1$3.dot(_v1$3); - const dot12 = _v1$3.dot(_v2$2); - - const denom = (dot00 * dot11 - dot01 * dot01); - - // collinear or singular triangle - if (denom === 0) { - - // arbitrary location outside of triangle? - // not sure if this is the best idea, maybe should be returning undefined - return target.set(- 2, - 1, - 1); - - } - - const invDenom = 1 / denom; - const u = (dot11 * dot02 - dot01 * dot12) * invDenom; - const v = (dot00 * dot12 - dot01 * dot02) * invDenom; - - // barycentric coordinates must always sum to 1 - return target.set(1 - u - v, v, u); - - } - - static containsPoint(point, a, b, c) { - - this.getBarycoord(point, a, b, c, _v3$1); - - return (_v3$1.x >= 0) && (_v3$1.y >= 0) && ((_v3$1.x + _v3$1.y) <= 1); - - } - - static getUV(point, p1, p2, p3, uv1, uv2, uv3, target) { - - this.getBarycoord(point, p1, p2, p3, _v3$1); - - target.set(0, 0); - target.addScaledVector(uv1, _v3$1.x); - target.addScaledVector(uv2, _v3$1.y); - target.addScaledVector(uv3, _v3$1.z); - - return target; - - } - - static isFrontFacing(a, b, c, direction) { - - _v0$1.subVectors(c, b); - _v1$3.subVectors(a, b); - - // strictly front facing - return (_v0$1.cross(_v1$3).dot(direction) < 0) ? true : false; - - } - - set(a, b, c) { - - this.a.copy(a); - this.b.copy(b); - this.c.copy(c); - - return this; - - } - - setFromPointsAndIndices(points, i0, i1, i2) { - - this.a.copy(points[i0]); - this.b.copy(points[i1]); - this.c.copy(points[i2]); - - return this; - - } - - setFromAttributeAndIndices(attribute, i0, i1, i2) { - - this.a.fromBufferAttribute(attribute, i0); - this.b.fromBufferAttribute(attribute, i1); - this.c.fromBufferAttribute(attribute, i2); - - return this; - - } - - clone() { - - return new this.constructor().copy(this); - - } - - copy(triangle) { - - this.a.copy(triangle.a); - this.b.copy(triangle.b); - this.c.copy(triangle.c); - - return this; - - } - - getArea() { - - _v0$1.subVectors(this.c, this.b); - _v1$3.subVectors(this.a, this.b); - - return _v0$1.cross(_v1$3).length() * 0.5; - - } - - getMidpoint(target) { - - return target.addVectors(this.a, this.b).add(this.c).multiplyScalar(1 / 3); - - } - - getNormal(target) { - - return Triangle.getNormal(this.a, this.b, this.c, target); - - } - - getPlane(target) { - - return target.setFromCoplanarPoints(this.a, this.b, this.c); - - } - - getBarycoord(point, target) { - - return Triangle.getBarycoord(point, this.a, this.b, this.c, target); - - } - - getUV(point, uv1, uv2, uv3, target) { - - return Triangle.getUV(point, this.a, this.b, this.c, uv1, uv2, uv3, target); - - } - - containsPoint(point) { - - return Triangle.containsPoint(point, this.a, this.b, this.c); - - } - - isFrontFacing(direction) { - - return Triangle.isFrontFacing(this.a, this.b, this.c, direction); - - } - - intersectsBox(box) { - - return box.intersectsTriangle(this); - - } - - closestPointToPoint(p, target) { - - const a = this.a, b = this.b, c = this.c; - let v, w; - - // algorithm thanks to Real-Time Collision Detection by Christer Ericson, - // published by Morgan Kaufmann Publishers, (c) 2005 Elsevier Inc., - // under the accompanying license; see chapter 5.1.5 for detailed explanation. - // basically, we're distinguishing which of the voronoi regions of the triangle - // the point lies in with the minimum amount of redundant computation. 
- - _vab.subVectors(b, a); - _vac.subVectors(c, a); - _vap.subVectors(p, a); - const d1 = _vab.dot(_vap); - const d2 = _vac.dot(_vap); - if (d1 <= 0 && d2 <= 0) { - - // vertex region of A; barycentric coords (1, 0, 0) - return target.copy(a); - - } - - _vbp.subVectors(p, b); - const d3 = _vab.dot(_vbp); - const d4 = _vac.dot(_vbp); - if (d3 >= 0 && d4 <= d3) { - - // vertex region of B; barycentric coords (0, 1, 0) - return target.copy(b); - - } - - const vc = d1 * d4 - d3 * d2; - if (vc <= 0 && d1 >= 0 && d3 <= 0) { - - v = d1 / (d1 - d3); - // edge region of AB; barycentric coords (1-v, v, 0) - return target.copy(a).addScaledVector(_vab, v); - - } - - _vcp.subVectors(p, c); - const d5 = _vab.dot(_vcp); - const d6 = _vac.dot(_vcp); - if (d6 >= 0 && d5 <= d6) { - - // vertex region of C; barycentric coords (0, 0, 1) - return target.copy(c); - - } - - const vb = d5 * d2 - d1 * d6; - if (vb <= 0 && d2 >= 0 && d6 <= 0) { - - w = d2 / (d2 - d6); - // edge region of AC; barycentric coords (1-w, 0, w) - return target.copy(a).addScaledVector(_vac, w); - - } - - const va = d3 * d6 - d5 * d4; - if (va <= 0 && (d4 - d3) >= 0 && (d5 - d6) >= 0) { - - _vbc.subVectors(c, b); - w = (d4 - d3) / ((d4 - d3) + (d5 - d6)); - // edge region of BC; barycentric coords (0, 1-w, w) - return target.copy(b).addScaledVector(_vbc, w); // edge region of BC - - } - - // face region - const denom = 1 / (va + vb + vc); - // u = va * denom - v = vb * denom; - w = vc * denom; - - return target.copy(a).addScaledVector(_vab, v).addScaledVector(_vac, w); - - } - - equals(triangle) { - - return triangle.a.equals(this.a) && triangle.b.equals(this.b) && triangle.c.equals(this.c); - - } - -} - -let materialId = 0; - -class Material extends EventDispatcher { - - constructor() { - - super(); - - this.isMaterial = true; - - Object.defineProperty(this, 'id', { value: materialId++ }); - - this.uuid = generateUUID(); - - this.name = ''; - this.type = 'Material'; - - this.blending = NormalBlending; - this.side = FrontSide; - this.vertexColors = false; - - this.opacity = 1; - this.transparent = false; - - this.blendSrc = SrcAlphaFactor; - this.blendDst = OneMinusSrcAlphaFactor; - this.blendEquation = AddEquation; - this.blendSrcAlpha = null; - this.blendDstAlpha = null; - this.blendEquationAlpha = null; - - this.depthFunc = LessEqualDepth; - this.depthTest = true; - this.depthWrite = true; - - this.stencilWriteMask = 0xff; - this.stencilFunc = AlwaysStencilFunc; - this.stencilRef = 0; - this.stencilFuncMask = 0xff; - this.stencilFail = KeepStencilOp; - this.stencilZFail = KeepStencilOp; - this.stencilZPass = KeepStencilOp; - this.stencilWrite = false; - - this.clippingPlanes = null; - this.clipIntersection = false; - this.clipShadows = false; - - this.shadowSide = null; - - this.colorWrite = true; - - this.precision = null; // override the renderer's default precision for this material - - this.polygonOffset = false; - this.polygonOffsetFactor = 0; - this.polygonOffsetUnits = 0; - - this.dithering = false; - - this.alphaToCoverage = false; - this.premultipliedAlpha = false; - this.forceSinglePass = false; - - this.visible = true; - - this.toneMapped = true; - - this.userData = {}; - - this.version = 0; - - this._alphaTest = 0; - - } - - get alphaTest() { - - return this._alphaTest; - - } - - set alphaTest(value) { - - if (this._alphaTest > 0 !== value > 0) { - - this.version++; - - } - - this._alphaTest = value; - - } - - onBuild( /* shaderobject, renderer */) { } - - onBeforeRender( /* renderer, scene, camera, geometry, object, group 
*/) { } - - onBeforeCompile( /* shaderobject, renderer */) { } - - customProgramCacheKey() { - - return this.onBeforeCompile.toString(); - - } - - setValues(values) { - - if (values === undefined) return; - - for (const key in values) { - - const newValue = values[key]; - - if (newValue === undefined) { - - console.warn('THREE.Material: \'' + key + '\' parameter is undefined.'); - continue; - - } - - const currentValue = this[key]; - - if (currentValue === undefined) { - - console.warn('THREE.' + this.type + ': \'' + key + '\' is not a property of this material.'); - continue; - - } - - if (currentValue && currentValue.isColor) { - - currentValue.set(newValue); - - } else if ((currentValue && currentValue.isVector3) && (newValue && newValue.isVector3)) { - - currentValue.copy(newValue); - - } else { - - this[key] = newValue; - - } - - } - - } - - toJSON(meta) { - - const isRootObject = (meta === undefined || typeof meta === 'string'); - - if (isRootObject) { - - meta = { - textures: {}, - images: {} - }; - - } - - const data = { - metadata: { - version: 4.5, - type: 'Material', - generator: 'Material.toJSON' - } - }; - - // standard Material serialization - data.uuid = this.uuid; - data.type = this.type; - - if (this.name !== '') data.name = this.name; - - if (this.color && this.color.isColor) data.color = this.color.getHex(); - - if (this.roughness !== undefined) data.roughness = this.roughness; - if (this.metalness !== undefined) data.metalness = this.metalness; - - if (this.sheen !== undefined) data.sheen = this.sheen; - if (this.sheenColor && this.sheenColor.isColor) data.sheenColor = this.sheenColor.getHex(); - if (this.sheenRoughness !== undefined) data.sheenRoughness = this.sheenRoughness; - if (this.emissive && this.emissive.isColor) data.emissive = this.emissive.getHex(); - if (this.emissiveIntensity && this.emissiveIntensity !== 1) data.emissiveIntensity = this.emissiveIntensity; - - if (this.specular && this.specular.isColor) data.specular = this.specular.getHex(); - if (this.specularIntensity !== undefined) data.specularIntensity = this.specularIntensity; - if (this.specularColor && this.specularColor.isColor) data.specularColor = this.specularColor.getHex(); - if (this.shininess !== undefined) data.shininess = this.shininess; - if (this.clearcoat !== undefined) data.clearcoat = this.clearcoat; - if (this.clearcoatRoughness !== undefined) data.clearcoatRoughness = this.clearcoatRoughness; - - if (this.clearcoatMap && this.clearcoatMap.isTexture) { - - data.clearcoatMap = this.clearcoatMap.toJSON(meta).uuid; - - } - - if (this.clearcoatRoughnessMap && this.clearcoatRoughnessMap.isTexture) { - - data.clearcoatRoughnessMap = this.clearcoatRoughnessMap.toJSON(meta).uuid; - - } - - if (this.clearcoatNormalMap && this.clearcoatNormalMap.isTexture) { - - data.clearcoatNormalMap = this.clearcoatNormalMap.toJSON(meta).uuid; - data.clearcoatNormalScale = this.clearcoatNormalScale.toArray(); - - } - - if (this.iridescence !== undefined) data.iridescence = this.iridescence; - if (this.iridescenceIOR !== undefined) data.iridescenceIOR = this.iridescenceIOR; - if (this.iridescenceThicknessRange !== undefined) data.iridescenceThicknessRange = this.iridescenceThicknessRange; - - if (this.iridescenceMap && this.iridescenceMap.isTexture) { - - data.iridescenceMap = this.iridescenceMap.toJSON(meta).uuid; - - } - - if (this.iridescenceThicknessMap && this.iridescenceThicknessMap.isTexture) { - - data.iridescenceThicknessMap = this.iridescenceThicknessMap.toJSON(meta).uuid; - - } - - if (this.map 
&& this.map.isTexture) data.map = this.map.toJSON(meta).uuid; - if (this.matcap && this.matcap.isTexture) data.matcap = this.matcap.toJSON(meta).uuid; - if (this.alphaMap && this.alphaMap.isTexture) data.alphaMap = this.alphaMap.toJSON(meta).uuid; - - if (this.lightMap && this.lightMap.isTexture) { - - data.lightMap = this.lightMap.toJSON(meta).uuid; - data.lightMapIntensity = this.lightMapIntensity; - - } - - if (this.aoMap && this.aoMap.isTexture) { - - data.aoMap = this.aoMap.toJSON(meta).uuid; - data.aoMapIntensity = this.aoMapIntensity; - - } - - if (this.bumpMap && this.bumpMap.isTexture) { - - data.bumpMap = this.bumpMap.toJSON(meta).uuid; - data.bumpScale = this.bumpScale; - - } - - if (this.normalMap && this.normalMap.isTexture) { - - data.normalMap = this.normalMap.toJSON(meta).uuid; - data.normalMapType = this.normalMapType; - data.normalScale = this.normalScale.toArray(); - - } - - if (this.displacementMap && this.displacementMap.isTexture) { - - data.displacementMap = this.displacementMap.toJSON(meta).uuid; - data.displacementScale = this.displacementScale; - data.displacementBias = this.displacementBias; - - } - - if (this.roughnessMap && this.roughnessMap.isTexture) data.roughnessMap = this.roughnessMap.toJSON(meta).uuid; - if (this.metalnessMap && this.metalnessMap.isTexture) data.metalnessMap = this.metalnessMap.toJSON(meta).uuid; - - if (this.emissiveMap && this.emissiveMap.isTexture) data.emissiveMap = this.emissiveMap.toJSON(meta).uuid; - if (this.specularMap && this.specularMap.isTexture) data.specularMap = this.specularMap.toJSON(meta).uuid; - if (this.specularIntensityMap && this.specularIntensityMap.isTexture) data.specularIntensityMap = this.specularIntensityMap.toJSON(meta).uuid; - if (this.specularColorMap && this.specularColorMap.isTexture) data.specularColorMap = this.specularColorMap.toJSON(meta).uuid; - - if (this.envMap && this.envMap.isTexture) { - - data.envMap = this.envMap.toJSON(meta).uuid; - - if (this.combine !== undefined) data.combine = this.combine; - - } - - if (this.envMapIntensity !== undefined) data.envMapIntensity = this.envMapIntensity; - if (this.reflectivity !== undefined) data.reflectivity = this.reflectivity; - if (this.refractionRatio !== undefined) data.refractionRatio = this.refractionRatio; - - if (this.gradientMap && this.gradientMap.isTexture) { - - data.gradientMap = this.gradientMap.toJSON(meta).uuid; - - } - - if (this.transmission !== undefined) data.transmission = this.transmission; - if (this.transmissionMap && this.transmissionMap.isTexture) data.transmissionMap = this.transmissionMap.toJSON(meta).uuid; - if (this.thickness !== undefined) data.thickness = this.thickness; - if (this.thicknessMap && this.thicknessMap.isTexture) data.thicknessMap = this.thicknessMap.toJSON(meta).uuid; - if (this.attenuationDistance !== undefined && this.attenuationDistance !== Infinity) data.attenuationDistance = this.attenuationDistance; - if (this.attenuationColor !== undefined) data.attenuationColor = this.attenuationColor.getHex(); - - if (this.size !== undefined) data.size = this.size; - if (this.shadowSide !== null) data.shadowSide = this.shadowSide; - if (this.sizeAttenuation !== undefined) data.sizeAttenuation = this.sizeAttenuation; - - if (this.blending !== NormalBlending) data.blending = this.blending; - if (this.side !== FrontSide) data.side = this.side; - if (this.vertexColors) data.vertexColors = true; - - if (this.opacity < 1) data.opacity = this.opacity; - if (this.transparent === true) data.transparent = this.transparent; - - 
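// depth, color-write and stencil state below are always serialized, even when they match the defaults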
data.depthFunc = this.depthFunc; - data.depthTest = this.depthTest; - data.depthWrite = this.depthWrite; - data.colorWrite = this.colorWrite; - - data.stencilWrite = this.stencilWrite; - data.stencilWriteMask = this.stencilWriteMask; - data.stencilFunc = this.stencilFunc; - data.stencilRef = this.stencilRef; - data.stencilFuncMask = this.stencilFuncMask; - data.stencilFail = this.stencilFail; - data.stencilZFail = this.stencilZFail; - data.stencilZPass = this.stencilZPass; - - // rotation (SpriteMaterial) - if (this.rotation !== undefined && this.rotation !== 0) data.rotation = this.rotation; - - if (this.polygonOffset === true) data.polygonOffset = true; - if (this.polygonOffsetFactor !== 0) data.polygonOffsetFactor = this.polygonOffsetFactor; - if (this.polygonOffsetUnits !== 0) data.polygonOffsetUnits = this.polygonOffsetUnits; - - if (this.linewidth !== undefined && this.linewidth !== 1) data.linewidth = this.linewidth; - if (this.dashSize !== undefined) data.dashSize = this.dashSize; - if (this.gapSize !== undefined) data.gapSize = this.gapSize; - if (this.scale !== undefined) data.scale = this.scale; - - if (this.dithering === true) data.dithering = true; - - if (this.alphaTest > 0) data.alphaTest = this.alphaTest; - if (this.alphaToCoverage === true) data.alphaToCoverage = this.alphaToCoverage; - if (this.premultipliedAlpha === true) data.premultipliedAlpha = this.premultipliedAlpha; - if (this.forceSinglePass === true) data.forceSinglePass = this.forceSinglePass; - - if (this.wireframe === true) data.wireframe = this.wireframe; - if (this.wireframeLinewidth > 1) data.wireframeLinewidth = this.wireframeLinewidth; - if (this.wireframeLinecap !== 'round') data.wireframeLinecap = this.wireframeLinecap; - if (this.wireframeLinejoin !== 'round') data.wireframeLinejoin = this.wireframeLinejoin; - - if (this.flatShading === true) data.flatShading = this.flatShading; - - if (this.visible === false) data.visible = false; - - if (this.toneMapped === false) data.toneMapped = false; - - if (this.fog === false) data.fog = false; - - if (Object.keys(this.userData).length > 0) data.userData = this.userData; - - // TODO: Copied from Object3D.toJSON - - function extractFromCache(cache) { - - const values = []; - - for (const key in cache) { - - const data = cache[key]; - delete data.metadata; - values.push(data); - - } - - return values; - - } - - if (isRootObject) { - - const textures = extractFromCache(meta.textures); - const images = extractFromCache(meta.images); - - if (textures.length > 0) data.textures = textures; - if (images.length > 0) data.images = images; - - } - - return data; - - } - - clone() { - - return new this.constructor().copy(this); - - } - - copy(source) { - - this.name = source.name; - - this.blending = source.blending; - this.side = source.side; - this.vertexColors = source.vertexColors; - - this.opacity = source.opacity; - this.transparent = source.transparent; - - this.blendSrc = source.blendSrc; - this.blendDst = source.blendDst; - this.blendEquation = source.blendEquation; - this.blendSrcAlpha = source.blendSrcAlpha; - this.blendDstAlpha = source.blendDstAlpha; - this.blendEquationAlpha = source.blendEquationAlpha; - - this.depthFunc = source.depthFunc; - this.depthTest = source.depthTest; - this.depthWrite = source.depthWrite; - - this.stencilWriteMask = source.stencilWriteMask; - this.stencilFunc = source.stencilFunc; - this.stencilRef = source.stencilRef; - this.stencilFuncMask = source.stencilFuncMask; - this.stencilFail = source.stencilFail; - this.stencilZFail = 
source.stencilZFail; - this.stencilZPass = source.stencilZPass; - this.stencilWrite = source.stencilWrite; - - const srcPlanes = source.clippingPlanes; - let dstPlanes = null; - - if (srcPlanes !== null) { - - const n = srcPlanes.length; - dstPlanes = new Array(n); - - for (let i = 0; i !== n; ++i) { - - dstPlanes[i] = srcPlanes[i].clone(); - - } - - } - - this.clippingPlanes = dstPlanes; - this.clipIntersection = source.clipIntersection; - this.clipShadows = source.clipShadows; - - this.shadowSide = source.shadowSide; - - this.colorWrite = source.colorWrite; - - this.precision = source.precision; - - this.polygonOffset = source.polygonOffset; - this.polygonOffsetFactor = source.polygonOffsetFactor; - this.polygonOffsetUnits = source.polygonOffsetUnits; - - this.dithering = source.dithering; - - this.alphaTest = source.alphaTest; - this.alphaToCoverage = source.alphaToCoverage; - this.premultipliedAlpha = source.premultipliedAlpha; - this.forceSinglePass = source.forceSinglePass; - - this.visible = source.visible; - - this.toneMapped = source.toneMapped; - - this.userData = JSON.parse(JSON.stringify(source.userData)); - - return this; - - } - - dispose() { - - this.dispatchEvent({ type: 'dispose' }); - - } - - set needsUpdate(value) { - - if (value === true) this.version++; - - } - -} - -class MeshBasicMaterial extends Material { - - constructor(parameters) { - - super(); - - this.isMeshBasicMaterial = true; - - this.type = 'MeshBasicMaterial'; - - this.color = new Color(0xffffff); // emissive - - this.map = null; - - this.lightMap = null; - this.lightMapIntensity = 1.0; - - this.aoMap = null; - this.aoMapIntensity = 1.0; - - this.specularMap = null; - - this.alphaMap = null; - - this.envMap = null; - this.combine = MultiplyOperation; - this.reflectivity = 1; - this.refractionRatio = 0.98; - - this.wireframe = false; - this.wireframeLinewidth = 1; - this.wireframeLinecap = 'round'; - this.wireframeLinejoin = 'round'; - - this.fog = true; - - this.setValues(parameters); - - } - - copy(source) { - - super.copy(source); - - this.color.copy(source.color); - - this.map = source.map; - - this.lightMap = source.lightMap; - this.lightMapIntensity = source.lightMapIntensity; - - this.aoMap = source.aoMap; - this.aoMapIntensity = source.aoMapIntensity; - - this.specularMap = source.specularMap; - - this.alphaMap = source.alphaMap; - - this.envMap = source.envMap; - this.combine = source.combine; - this.reflectivity = source.reflectivity; - this.refractionRatio = source.refractionRatio; - - this.wireframe = source.wireframe; - this.wireframeLinewidth = source.wireframeLinewidth; - this.wireframeLinecap = source.wireframeLinecap; - this.wireframeLinejoin = source.wireframeLinejoin; - - this.fog = source.fog; - - return this; - - } - -} - -const _vector$9 = /*@__PURE__*/ new Vector3(); -const _vector2$1 = /*@__PURE__*/ new Vector2(); - -class BufferAttribute { - - constructor(array, itemSize, normalized = false) { - - if (Array.isArray(array)) { - - throw new TypeError('THREE.BufferAttribute: array should be a Typed Array.'); - - } - - this.isBufferAttribute = true; - - this.name = ''; - - this.array = array; - this.itemSize = itemSize; - this.count = array !== undefined ? 
array.length / itemSize : 0; - this.normalized = normalized; - - this.usage = StaticDrawUsage; - this.updateRange = { offset: 0, count: - 1 }; - - this.version = 0; - - } - - onUploadCallback() { } - - set needsUpdate(value) { - - if (value === true) this.version++; - - } - - setUsage(value) { - - this.usage = value; - - return this; - - } - - copy(source) { - - this.name = source.name; - this.array = new source.array.constructor(source.array); - this.itemSize = source.itemSize; - this.count = source.count; - this.normalized = source.normalized; - - this.usage = source.usage; - - return this; - - } - - copyAt(index1, attribute, index2) { - - index1 *= this.itemSize; - index2 *= attribute.itemSize; - - for (let i = 0, l = this.itemSize; i < l; i++) { - - this.array[index1 + i] = attribute.array[index2 + i]; - - } - - return this; - - } - - copyArray(array) { - - this.array.set(array); - - return this; - - } - - applyMatrix3(m) { - - if (this.itemSize === 2) { - - for (let i = 0, l = this.count; i < l; i++) { - - _vector2$1.fromBufferAttribute(this, i); - _vector2$1.applyMatrix3(m); - - this.setXY(i, _vector2$1.x, _vector2$1.y); - - } - - } else if (this.itemSize === 3) { - - for (let i = 0, l = this.count; i < l; i++) { - - _vector$9.fromBufferAttribute(this, i); - _vector$9.applyMatrix3(m); - - this.setXYZ(i, _vector$9.x, _vector$9.y, _vector$9.z); - - } - - } - - return this; - - } - - applyMatrix4(m) { - - for (let i = 0, l = this.count; i < l; i++) { - - _vector$9.fromBufferAttribute(this, i); - - _vector$9.applyMatrix4(m); - - this.setXYZ(i, _vector$9.x, _vector$9.y, _vector$9.z); - - } - - return this; - - } - - applyNormalMatrix(m) { - - for (let i = 0, l = this.count; i < l; i++) { - - _vector$9.fromBufferAttribute(this, i); - - _vector$9.applyNormalMatrix(m); - - this.setXYZ(i, _vector$9.x, _vector$9.y, _vector$9.z); - - } - - return this; - - } - - transformDirection(m) { - - for (let i = 0, l = this.count; i < l; i++) { - - _vector$9.fromBufferAttribute(this, i); - - _vector$9.transformDirection(m); - - this.setXYZ(i, _vector$9.x, _vector$9.y, _vector$9.z); - - } - - return this; - - } - - set(value, offset = 0) { - - // Matching BufferAttribute constructor, do not normalize the array. 
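// (the per-component setters further down - setX, setY, setXY, ... - do normalize when .normalized is true)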
- this.array.set(value, offset); - - return this; - - } - - getX(index) { - - let x = this.array[index * this.itemSize]; - - if (this.normalized) x = denormalize(x, this.array); - - return x; - - } - - setX(index, x) { - - if (this.normalized) x = normalize(x, this.array); - - this.array[index * this.itemSize] = x; - - return this; - - } - - getY(index) { - - let y = this.array[index * this.itemSize + 1]; - - if (this.normalized) y = denormalize(y, this.array); - - return y; - - } - - setY(index, y) { - - if (this.normalized) y = normalize(y, this.array); - - this.array[index * this.itemSize + 1] = y; - - return this; - - } - - getZ(index) { - - let z = this.array[index * this.itemSize + 2]; - - if (this.normalized) z = denormalize(z, this.array); - - return z; - - } - - setZ(index, z) { - - if (this.normalized) z = normalize(z, this.array); - - this.array[index * this.itemSize + 2] = z; - - return this; - - } - - getW(index) { - - let w = this.array[index * this.itemSize + 3]; - - if (this.normalized) w = denormalize(w, this.array); - - return w; - - } - - setW(index, w) { - - if (this.normalized) w = normalize(w, this.array); - - this.array[index * this.itemSize + 3] = w; - - return this; - - } - - setXY(index, x, y) { - - index *= this.itemSize; - - if (this.normalized) { - - x = normalize(x, this.array); - y = normalize(y, this.array); - - } - - this.array[index + 0] = x; - this.array[index + 1] = y; - - return this; - - } - - setXYZ(index, x, y, z) { - - index *= this.itemSize; - - if (this.normalized) { - - x = normalize(x, this.array); - y = normalize(y, this.array); - z = normalize(z, this.array); - - } - - this.array[index + 0] = x; - this.array[index + 1] = y; - this.array[index + 2] = z; - - return this; - - } - - setXYZW(index, x, y, z, w) { - - index *= this.itemSize; - - if (this.normalized) { - - x = normalize(x, this.array); - y = normalize(y, this.array); - z = normalize(z, this.array); - w = normalize(w, this.array); - - } - - this.array[index + 0] = x; - this.array[index + 1] = y; - this.array[index + 2] = z; - this.array[index + 3] = w; - - return this; - - } - - onUpload(callback) { - - this.onUploadCallback = callback; - - return this; - - } - - clone() { - - return new this.constructor(this.array, this.itemSize).copy(this); - - } - - toJSON() { - - const data = { - itemSize: this.itemSize, - type: this.array.constructor.name, - array: Array.from(this.array), - normalized: this.normalized - }; - - if (this.name !== '') data.name = this.name; - if (this.usage !== StaticDrawUsage) data.usage = this.usage; - if (this.updateRange.offset !== 0 || this.updateRange.count !== - 1) data.updateRange = this.updateRange; - - return data; - - } - - // @deprecated - - copyColorsArray() { - - console.error('THREE.BufferAttribute: copyColorsArray() was removed in r144.'); - - } - - copyVector2sArray() { - - console.error('THREE.BufferAttribute: copyVector2sArray() was removed in r144.'); - - } - - copyVector3sArray() { - - console.error('THREE.BufferAttribute: copyVector3sArray() was removed in r144.'); - - } - - copyVector4sArray() { - - console.error('THREE.BufferAttribute: copyVector4sArray() was removed in r144.'); - - } - -} - -// - -class Int8BufferAttribute extends BufferAttribute { - - constructor(array, itemSize, normalized) { - - super(new Int8Array(array), itemSize, normalized); - - } - -} - -class Uint8BufferAttribute extends BufferAttribute { - - constructor(array, itemSize, normalized) { - - super(new Uint8Array(array), itemSize, normalized); - - } - -} - -class 
Uint8ClampedBufferAttribute extends BufferAttribute { - - constructor(array, itemSize, normalized) { - - super(new Uint8ClampedArray(array), itemSize, normalized); - - } - -} - -class Int16BufferAttribute extends BufferAttribute { - - constructor(array, itemSize, normalized) { - - super(new Int16Array(array), itemSize, normalized); - - } - -} - -class Uint16BufferAttribute extends BufferAttribute { - - constructor(array, itemSize, normalized) { - - super(new Uint16Array(array), itemSize, normalized); - - } - -} - -class Int32BufferAttribute extends BufferAttribute { - - constructor(array, itemSize, normalized) { - - super(new Int32Array(array), itemSize, normalized); - - } - -} - -class Uint32BufferAttribute extends BufferAttribute { - - constructor(array, itemSize, normalized) { - - super(new Uint32Array(array), itemSize, normalized); - - } - -} - -class Float16BufferAttribute extends BufferAttribute { - - constructor(array, itemSize, normalized) { - - super(new Uint16Array(array), itemSize, normalized); - - this.isFloat16BufferAttribute = true; - - } - -} - - -class Float32BufferAttribute extends BufferAttribute { - - constructor(array, itemSize, normalized) { - - super(new Float32Array(array), itemSize, normalized); - - } - -} - -class Float64BufferAttribute extends BufferAttribute { - - constructor(array, itemSize, normalized) { - - super(new Float64Array(array), itemSize, normalized); - - } - -} - -let _id$1 = 0; - -const _m1 = /*@__PURE__*/ new Matrix4(); -const _obj = /*@__PURE__*/ new Object3D(); -const _offset = /*@__PURE__*/ new Vector3(); -const _box$1 = /*@__PURE__*/ new Box3(); -const _boxMorphTargets = /*@__PURE__*/ new Box3(); -const _vector$8 = /*@__PURE__*/ new Vector3(); - -class BufferGeometry extends EventDispatcher { - - constructor() { - - super(); - - this.isBufferGeometry = true; - - Object.defineProperty(this, 'id', { value: _id$1++ }); - - this.uuid = generateUUID(); - - this.name = ''; - this.type = 'BufferGeometry'; - - this.index = null; - this.attributes = {}; - - this.morphAttributes = {}; - this.morphTargetsRelative = false; - - this.groups = []; - - this.boundingBox = null; - this.boundingSphere = null; - - this.drawRange = { start: 0, count: Infinity }; - - this.userData = {}; - - } - - getIndex() { - - return this.index; - - } - - setIndex(index) { - - if (Array.isArray(index)) { - - this.index = new (arrayNeedsUint32(index) ? 
Uint32BufferAttribute : Uint16BufferAttribute)(index, 1); - - } else { - - this.index = index; - - } - - return this; - - } - - getAttribute(name) { - - return this.attributes[name]; - - } - - setAttribute(name, attribute) { - - this.attributes[name] = attribute; - - return this; - - } - - deleteAttribute(name) { - - delete this.attributes[name]; - - return this; - - } - - hasAttribute(name) { - - return this.attributes[name] !== undefined; - - } - - addGroup(start, count, materialIndex = 0) { - - this.groups.push({ - - start: start, - count: count, - materialIndex: materialIndex - - }); - - } - - clearGroups() { - - this.groups = []; - - } - - setDrawRange(start, count) { - - this.drawRange.start = start; - this.drawRange.count = count; - - } - - applyMatrix4(matrix) { - - const position = this.attributes.position; - - if (position !== undefined) { - - position.applyMatrix4(matrix); - - position.needsUpdate = true; - - } - - const normal = this.attributes.normal; - - if (normal !== undefined) { - - const normalMatrix = new Matrix3().getNormalMatrix(matrix); - - normal.applyNormalMatrix(normalMatrix); - - normal.needsUpdate = true; - - } - - const tangent = this.attributes.tangent; - - if (tangent !== undefined) { - - tangent.transformDirection(matrix); - - tangent.needsUpdate = true; - - } - - if (this.boundingBox !== null) { - - this.computeBoundingBox(); - - } - - if (this.boundingSphere !== null) { - - this.computeBoundingSphere(); - - } - - return this; - - } - - applyQuaternion(q) { - - _m1.makeRotationFromQuaternion(q); - - this.applyMatrix4(_m1); - - return this; - - } - - rotateX(angle) { - - // rotate geometry around world x-axis - - _m1.makeRotationX(angle); - - this.applyMatrix4(_m1); - - return this; - - } - - rotateY(angle) { - - // rotate geometry around world y-axis - - _m1.makeRotationY(angle); - - this.applyMatrix4(_m1); - - return this; - - } - - rotateZ(angle) { - - // rotate geometry around world z-axis - - _m1.makeRotationZ(angle); - - this.applyMatrix4(_m1); - - return this; - - } - - translate(x, y, z) { - - // translate geometry - - _m1.makeTranslation(x, y, z); - - this.applyMatrix4(_m1); - - return this; - - } - - scale(x, y, z) { - - // scale geometry - - _m1.makeScale(x, y, z); - - this.applyMatrix4(_m1); - - return this; - - } - - lookAt(vector) { - - _obj.lookAt(vector); - - _obj.updateMatrix(); - - this.applyMatrix4(_obj.matrix); - - return this; - - } - - center() { - - this.computeBoundingBox(); - - this.boundingBox.getCenter(_offset).negate(); - - this.translate(_offset.x, _offset.y, _offset.z); - - return this; - - } - - setFromPoints(points) { - - const position = []; - - for (let i = 0, l = points.length; i < l; i++) { - - const point = points[i]; - position.push(point.x, point.y, point.z || 0); - - } - - this.setAttribute('position', new Float32BufferAttribute(position, 3)); - - return this; - - } - - computeBoundingBox() { - - if (this.boundingBox === null) { - - this.boundingBox = new Box3(); - - } - - const position = this.attributes.position; - const morphAttributesPosition = this.morphAttributes.position; - - if (position && position.isGLBufferAttribute) { - - console.error('THREE.BufferGeometry.computeBoundingBox(): GLBufferAttribute requires a manual bounding box. 
Alternatively set "mesh.frustumCulled" to "false".', this); - - this.boundingBox.set( - new Vector3(- Infinity, - Infinity, - Infinity), - new Vector3(+ Infinity, + Infinity, + Infinity) - ); - - return; - - } - - if (position !== undefined) { - - this.boundingBox.setFromBufferAttribute(position); - - // process morph attributes if present - - if (morphAttributesPosition) { - - for (let i = 0, il = morphAttributesPosition.length; i < il; i++) { - - const morphAttribute = morphAttributesPosition[i]; - _box$1.setFromBufferAttribute(morphAttribute); - - if (this.morphTargetsRelative) { - - _vector$8.addVectors(this.boundingBox.min, _box$1.min); - this.boundingBox.expandByPoint(_vector$8); - - _vector$8.addVectors(this.boundingBox.max, _box$1.max); - this.boundingBox.expandByPoint(_vector$8); - - } else { - - this.boundingBox.expandByPoint(_box$1.min); - this.boundingBox.expandByPoint(_box$1.max); - - } - - } - - } - - } else { - - this.boundingBox.makeEmpty(); - - } - - if (isNaN(this.boundingBox.min.x) || isNaN(this.boundingBox.min.y) || isNaN(this.boundingBox.min.z)) { - - console.error('THREE.BufferGeometry.computeBoundingBox(): Computed min/max have NaN values. The "position" attribute is likely to have NaN values.', this); - - } - - } - - computeBoundingSphere() { - - if (this.boundingSphere === null) { - - this.boundingSphere = new Sphere(); - - } - - const position = this.attributes.position; - const morphAttributesPosition = this.morphAttributes.position; - - if (position && position.isGLBufferAttribute) { - - console.error('THREE.BufferGeometry.computeBoundingSphere(): GLBufferAttribute requires a manual bounding sphere. Alternatively set "mesh.frustumCulled" to "false".', this); - - this.boundingSphere.set(new Vector3(), Infinity); - - return; - - } - - if (position) { - - // first, find the center of the bounding sphere - - const center = this.boundingSphere.center; - - _box$1.setFromBufferAttribute(position); - - // process morph attributes if present - - if (morphAttributesPosition) { - - for (let i = 0, il = morphAttributesPosition.length; i < il; i++) { - - const morphAttribute = morphAttributesPosition[i]; - _boxMorphTargets.setFromBufferAttribute(morphAttribute); - - if (this.morphTargetsRelative) { - - _vector$8.addVectors(_box$1.min, _boxMorphTargets.min); - _box$1.expandByPoint(_vector$8); - - _vector$8.addVectors(_box$1.max, _boxMorphTargets.max); - _box$1.expandByPoint(_vector$8); - - } else { - - _box$1.expandByPoint(_boxMorphTargets.min); - _box$1.expandByPoint(_boxMorphTargets.max); - - } - - } - - } - - _box$1.getCenter(center); - - // second, try to find a boundingSphere with a radius smaller than the - // boundingSphere of the boundingBox: sqrt(3) smaller in the best case - - let maxRadiusSq = 0; - - for (let i = 0, il = position.count; i < il; i++) { - - _vector$8.fromBufferAttribute(position, i); - - maxRadiusSq = Math.max(maxRadiusSq, center.distanceToSquared(_vector$8)); - - } - - // process morph attributes if present - - if (morphAttributesPosition) { - - for (let i = 0, il = morphAttributesPosition.length; i < il; i++) { - - const morphAttribute = morphAttributesPosition[i]; - const morphTargetsRelative = this.morphTargetsRelative; - - for (let j = 0, jl = morphAttribute.count; j < jl; j++) { - - _vector$8.fromBufferAttribute(morphAttribute, j); - - if (morphTargetsRelative) { - - _offset.fromBufferAttribute(position, j); - _vector$8.add(_offset); - - } - - maxRadiusSq = Math.max(maxRadiusSq, center.distanceToSquared(_vector$8)); - - } - - } - - } - - 
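// maxRadiusSq now holds the squared distance from the center to the farthest vertex, morph targets included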
this.boundingSphere.radius = Math.sqrt(maxRadiusSq); - - if (isNaN(this.boundingSphere.radius)) { - - console.error('THREE.BufferGeometry.computeBoundingSphere(): Computed radius is NaN. The "position" attribute is likely to have NaN values.', this); - - } - - } - - } - - computeTangents() { - - const index = this.index; - const attributes = this.attributes; - - // based on http://www.terathon.com/code/tangent.html - // (per vertex tangents) - - if (index === null || - attributes.position === undefined || - attributes.normal === undefined || - attributes.uv === undefined) { - - console.error('THREE.BufferGeometry: .computeTangents() failed. Missing required attributes (index, position, normal or uv)'); - return; - - } - - const indices = index.array; - const positions = attributes.position.array; - const normals = attributes.normal.array; - const uvs = attributes.uv.array; - - const nVertices = positions.length / 3; - - if (this.hasAttribute('tangent') === false) { - - this.setAttribute('tangent', new BufferAttribute(new Float32Array(4 * nVertices), 4)); - - } - - const tangents = this.getAttribute('tangent').array; - - const tan1 = [], tan2 = []; - - for (let i = 0; i < nVertices; i++) { - - tan1[i] = new Vector3(); - tan2[i] = new Vector3(); - - } - - const vA = new Vector3(), - vB = new Vector3(), - vC = new Vector3(), - - uvA = new Vector2(), - uvB = new Vector2(), - uvC = new Vector2(), - - sdir = new Vector3(), - tdir = new Vector3(); - - function handleTriangle(a, b, c) { - - vA.fromArray(positions, a * 3); - vB.fromArray(positions, b * 3); - vC.fromArray(positions, c * 3); - - uvA.fromArray(uvs, a * 2); - uvB.fromArray(uvs, b * 2); - uvC.fromArray(uvs, c * 2); - - vB.sub(vA); - vC.sub(vA); - - uvB.sub(uvA); - uvC.sub(uvA); - - const r = 1.0 / (uvB.x * uvC.y - uvC.x * uvB.y); - - // silently ignore degenerate uv triangles having coincident or colinear vertices - - if (!isFinite(r)) return; - - sdir.copy(vB).multiplyScalar(uvC.y).addScaledVector(vC, - uvB.y).multiplyScalar(r); - tdir.copy(vC).multiplyScalar(uvB.x).addScaledVector(vB, - uvC.x).multiplyScalar(r); - - tan1[a].add(sdir); - tan1[b].add(sdir); - tan1[c].add(sdir); - - tan2[a].add(tdir); - tan2[b].add(tdir); - tan2[c].add(tdir); - - } - - let groups = this.groups; - - if (groups.length === 0) { - - groups = [{ - start: 0, - count: indices.length - }]; - - } - - for (let i = 0, il = groups.length; i < il; ++i) { - - const group = groups[i]; - - const start = group.start; - const count = group.count; - - for (let j = start, jl = start + count; j < jl; j += 3) { - - handleTriangle( - indices[j + 0], - indices[j + 1], - indices[j + 2] - ); - - } - - } - - const tmp = new Vector3(), tmp2 = new Vector3(); - const n = new Vector3(), n2 = new Vector3(); - - function handleVertex(v) { - - n.fromArray(normals, v * 3); - n2.copy(n); - - const t = tan1[v]; - - // Gram-Schmidt orthogonalize - - tmp.copy(t); - tmp.sub(n.multiplyScalar(n.dot(t))).normalize(); - - // Calculate handedness - - tmp2.crossVectors(n2, t); - const test = tmp2.dot(tan2[v]); - const w = (test < 0.0) ? 
- 1.0 : 1.0; - - tangents[v * 4] = tmp.x; - tangents[v * 4 + 1] = tmp.y; - tangents[v * 4 + 2] = tmp.z; - tangents[v * 4 + 3] = w; - - } - - for (let i = 0, il = groups.length; i < il; ++i) { - - const group = groups[i]; - - const start = group.start; - const count = group.count; - - for (let j = start, jl = start + count; j < jl; j += 3) { - - handleVertex(indices[j + 0]); - handleVertex(indices[j + 1]); - handleVertex(indices[j + 2]); - - } - - } - - } - - computeVertexNormals() { - - const index = this.index; - const positionAttribute = this.getAttribute('position'); - - if (positionAttribute !== undefined) { - - let normalAttribute = this.getAttribute('normal'); - - if (normalAttribute === undefined) { - - normalAttribute = new BufferAttribute(new Float32Array(positionAttribute.count * 3), 3); - this.setAttribute('normal', normalAttribute); - - } else { - - // reset existing normals to zero - - for (let i = 0, il = normalAttribute.count; i < il; i++) { - - normalAttribute.setXYZ(i, 0, 0, 0); - - } - - } - - const pA = new Vector3(), pB = new Vector3(), pC = new Vector3(); - const nA = new Vector3(), nB = new Vector3(), nC = new Vector3(); - const cb = new Vector3(), ab = new Vector3(); - - // indexed elements - - if (index) { - - for (let i = 0, il = index.count; i < il; i += 3) { - - const vA = index.getX(i + 0); - const vB = index.getX(i + 1); - const vC = index.getX(i + 2); - - pA.fromBufferAttribute(positionAttribute, vA); - pB.fromBufferAttribute(positionAttribute, vB); - pC.fromBufferAttribute(positionAttribute, vC); - - cb.subVectors(pC, pB); - ab.subVectors(pA, pB); - cb.cross(ab); - - nA.fromBufferAttribute(normalAttribute, vA); - nB.fromBufferAttribute(normalAttribute, vB); - nC.fromBufferAttribute(normalAttribute, vC); - - nA.add(cb); - nB.add(cb); - nC.add(cb); - - normalAttribute.setXYZ(vA, nA.x, nA.y, nA.z); - normalAttribute.setXYZ(vB, nB.x, nB.y, nB.z); - normalAttribute.setXYZ(vC, nC.x, nC.y, nC.z); - - } - - } else { - - // non-indexed elements (unconnected triangle soup) - - for (let i = 0, il = positionAttribute.count; i < il; i += 3) { - - pA.fromBufferAttribute(positionAttribute, i + 0); - pB.fromBufferAttribute(positionAttribute, i + 1); - pC.fromBufferAttribute(positionAttribute, i + 2); - - cb.subVectors(pC, pB); - ab.subVectors(pA, pB); - cb.cross(ab); - - normalAttribute.setXYZ(i + 0, cb.x, cb.y, cb.z); - normalAttribute.setXYZ(i + 1, cb.x, cb.y, cb.z); - normalAttribute.setXYZ(i + 2, cb.x, cb.y, cb.z); - - } - - } - - this.normalizeNormals(); - - normalAttribute.needsUpdate = true; - - } - - } - - // @deprecated since r144 - - merge() { - - console.error('THREE.BufferGeometry.merge() has been removed. 
Use THREE.BufferGeometryUtils.mergeBufferGeometries() instead.'); - return this; - - } - - normalizeNormals() { - - const normals = this.attributes.normal; - - for (let i = 0, il = normals.count; i < il; i++) { - - _vector$8.fromBufferAttribute(normals, i); - - _vector$8.normalize(); - - normals.setXYZ(i, _vector$8.x, _vector$8.y, _vector$8.z); - - } - - } - - toNonIndexed() { - - function convertBufferAttribute(attribute, indices) { - - const array = attribute.array; - const itemSize = attribute.itemSize; - const normalized = attribute.normalized; - - const array2 = new array.constructor(indices.length * itemSize); - - let index = 0, index2 = 0; - - for (let i = 0, l = indices.length; i < l; i++) { - - if (attribute.isInterleavedBufferAttribute) { - - index = indices[i] * attribute.data.stride + attribute.offset; - - } else { - - index = indices[i] * itemSize; - - } - - for (let j = 0; j < itemSize; j++) { - - array2[index2++] = array[index++]; - - } - - } - - return new BufferAttribute(array2, itemSize, normalized); - - } - - // - - if (this.index === null) { - - console.warn('THREE.BufferGeometry.toNonIndexed(): BufferGeometry is already non-indexed.'); - return this; - - } - - const geometry2 = new BufferGeometry(); - - const indices = this.index.array; - const attributes = this.attributes; - - // attributes - - for (const name in attributes) { - - const attribute = attributes[name]; - - const newAttribute = convertBufferAttribute(attribute, indices); - - geometry2.setAttribute(name, newAttribute); - - } - - // morph attributes - - const morphAttributes = this.morphAttributes; - - for (const name in morphAttributes) { - - const morphArray = []; - const morphAttribute = morphAttributes[name]; // morphAttribute: array of Float32BufferAttributes - - for (let i = 0, il = morphAttribute.length; i < il; i++) { - - const attribute = morphAttribute[i]; - - const newAttribute = convertBufferAttribute(attribute, indices); - - morphArray.push(newAttribute); - - } - - geometry2.morphAttributes[name] = morphArray; - - } - - geometry2.morphTargetsRelative = this.morphTargetsRelative; - - // groups - - const groups = this.groups; - - for (let i = 0, l = groups.length; i < l; i++) { - - const group = groups[i]; - geometry2.addGroup(group.start, group.count, group.materialIndex); - - } - - return geometry2; - - } - - toJSON() { - - const data = { - metadata: { - version: 4.5, - type: 'BufferGeometry', - generator: 'BufferGeometry.toJSON' - } - }; - - // standard BufferGeometry serialization - - data.uuid = this.uuid; - data.type = this.type; - if (this.name !== '') data.name = this.name; - if (Object.keys(this.userData).length > 0) data.userData = this.userData; - - if (this.parameters !== undefined) { - - const parameters = this.parameters; - - for (const key in parameters) { - - if (parameters[key] !== undefined) data[key] = parameters[key]; - - } - - return data; - - } - - // for simplicity the code assumes attributes are not shared across geometries, see #15811 - - data.data = { attributes: {} }; - - const index = this.index; - - if (index !== null) { - - data.data.index = { - type: index.array.constructor.name, - array: Array.prototype.slice.call(index.array) - }; - - } - - const attributes = this.attributes; - - for (const key in attributes) { - - const attribute = attributes[key]; - - data.data.attributes[key] = attribute.toJSON(data.data); - - } - - const morphAttributes = {}; - let hasMorphAttributes = false; - - for (const key in this.morphAttributes) { - - const attributeArray = 
this.morphAttributes[key]; - - const array = []; - - for (let i = 0, il = attributeArray.length; i < il; i++) { - - const attribute = attributeArray[i]; - - array.push(attribute.toJSON(data.data)); - - } - - if (array.length > 0) { - - morphAttributes[key] = array; - - hasMorphAttributes = true; - - } - - } - - if (hasMorphAttributes) { - - data.data.morphAttributes = morphAttributes; - data.data.morphTargetsRelative = this.morphTargetsRelative; - - } - - const groups = this.groups; - - if (groups.length > 0) { - - data.data.groups = JSON.parse(JSON.stringify(groups)); - - } - - const boundingSphere = this.boundingSphere; - - if (boundingSphere !== null) { - - data.data.boundingSphere = { - center: boundingSphere.center.toArray(), - radius: boundingSphere.radius - }; - - } - - return data; - - } - - clone() { - - return new this.constructor().copy(this); - - } - - copy(source) { - - // reset - - this.index = null; - this.attributes = {}; - this.morphAttributes = {}; - this.groups = []; - this.boundingBox = null; - this.boundingSphere = null; - - // used for storing cloned, shared data - - const data = {}; - - // name - - this.name = source.name; - - // index - - const index = source.index; - - if (index !== null) { - - this.setIndex(index.clone(data)); - - } - - // attributes - - const attributes = source.attributes; - - for (const name in attributes) { - - const attribute = attributes[name]; - this.setAttribute(name, attribute.clone(data)); - - } - - // morph attributes - - const morphAttributes = source.morphAttributes; - - for (const name in morphAttributes) { - - const array = []; - const morphAttribute = morphAttributes[name]; // morphAttribute: array of Float32BufferAttributes - - for (let i = 0, l = morphAttribute.length; i < l; i++) { - - array.push(morphAttribute[i].clone(data)); - - } - - this.morphAttributes[name] = array; - - } - - this.morphTargetsRelative = source.morphTargetsRelative; - - // groups - - const groups = source.groups; - - for (let i = 0, l = groups.length; i < l; i++) { - - const group = groups[i]; - this.addGroup(group.start, group.count, group.materialIndex); - - } - - // bounding box - - const boundingBox = source.boundingBox; - - if (boundingBox !== null) { - - this.boundingBox = boundingBox.clone(); - - } - - // bounding sphere - - const boundingSphere = source.boundingSphere; - - if (boundingSphere !== null) { - - this.boundingSphere = boundingSphere.clone(); - - } - - // draw range - - this.drawRange.start = source.drawRange.start; - this.drawRange.count = source.drawRange.count; - - // user data - - this.userData = source.userData; - - // geometry generator parameters - - if (source.parameters !== undefined) this.parameters = Object.assign({}, source.parameters); - - return this; - - } - - dispose() { - - this.dispatchEvent({ type: 'dispose' }); - - } - -} - -const _inverseMatrix$2 = /*@__PURE__*/ new Matrix4(); -const _ray$2 = /*@__PURE__*/ new Ray(); -const _sphere$3 = /*@__PURE__*/ new Sphere(); - -const _vA$1 = /*@__PURE__*/ new Vector3(); -const _vB$1 = /*@__PURE__*/ new Vector3(); -const _vC$1 = /*@__PURE__*/ new Vector3(); - -const _tempA = /*@__PURE__*/ new Vector3(); -const _morphA = /*@__PURE__*/ new Vector3(); - -const _uvA$1 = /*@__PURE__*/ new Vector2(); -const _uvB$1 = /*@__PURE__*/ new Vector2(); -const _uvC$1 = /*@__PURE__*/ new Vector2(); - -const _intersectionPoint = /*@__PURE__*/ new Vector3(); -const _intersectionPointWorld = /*@__PURE__*/ new Vector3(); - -class Mesh extends Object3D { - - constructor(geometry = new 
BufferGeometry(), material = new MeshBasicMaterial()) { - - super(); - - this.isMesh = true; - - this.type = 'Mesh'; - - this.geometry = geometry; - this.material = material; - - this.updateMorphTargets(); - - } - - copy(source, recursive) { - - super.copy(source, recursive); - - if (source.morphTargetInfluences !== undefined) { - - this.morphTargetInfluences = source.morphTargetInfluences.slice(); - - } - - if (source.morphTargetDictionary !== undefined) { - - this.morphTargetDictionary = Object.assign({}, source.morphTargetDictionary); - - } - - this.material = source.material; - this.geometry = source.geometry; - - return this; - - } - - updateMorphTargets() { - - const geometry = this.geometry; - - const morphAttributes = geometry.morphAttributes; - const keys = Object.keys(morphAttributes); - - if (keys.length > 0) { - - const morphAttribute = morphAttributes[keys[0]]; - - if (morphAttribute !== undefined) { - - this.morphTargetInfluences = []; - this.morphTargetDictionary = {}; - - for (let m = 0, ml = morphAttribute.length; m < ml; m++) { - - const name = morphAttribute[m].name || String(m); - - this.morphTargetInfluences.push(0); - this.morphTargetDictionary[name] = m; - - } - - } - - } - - } - - getVertexPosition(index, target) { - - const geometry = this.geometry; - const position = geometry.attributes.position; - const morphPosition = geometry.morphAttributes.position; - const morphTargetsRelative = geometry.morphTargetsRelative; - - target.fromBufferAttribute(position, index); - - const morphInfluences = this.morphTargetInfluences; - - if (morphPosition && morphInfluences) { - - _morphA.set(0, 0, 0); - - for (let i = 0, il = morphPosition.length; i < il; i++) { - - const influence = morphInfluences[i]; - const morphAttribute = morphPosition[i]; - - if (influence === 0) continue; - - _tempA.fromBufferAttribute(morphAttribute, index); - - if (morphTargetsRelative) { - - _morphA.addScaledVector(_tempA, influence); - - } else { - - _morphA.addScaledVector(_tempA.sub(target), influence); - - } - - } - - target.add(_morphA); - - } - - if (this.isSkinnedMesh) { - - this.boneTransform(index, target); - - } - - return target; - - } - - raycast(raycaster, intersects) { - - const geometry = this.geometry; - const material = this.material; - const matrixWorld = this.matrixWorld; - - if (material === undefined) return; - - // Checking boundingSphere distance to ray - - if (geometry.boundingSphere === null) geometry.computeBoundingSphere(); - - _sphere$3.copy(geometry.boundingSphere); - _sphere$3.applyMatrix4(matrixWorld); - - if (raycaster.ray.intersectsSphere(_sphere$3) === false) return; - - // - - _inverseMatrix$2.copy(matrixWorld).invert(); - _ray$2.copy(raycaster.ray).applyMatrix4(_inverseMatrix$2); - - // Check boundingBox before continuing - - if (geometry.boundingBox !== null) { - - if (_ray$2.intersectsBox(geometry.boundingBox) === false) return; - - } - - let intersection; - - const index = geometry.index; - const position = geometry.attributes.position; - const uv = geometry.attributes.uv; - const uv2 = geometry.attributes.uv2; - const groups = geometry.groups; - const drawRange = geometry.drawRange; - - if (index !== null) { - - // indexed buffer geometry - - if (Array.isArray(material)) { - - for (let i = 0, il = groups.length; i < il; i++) { - - const group = groups[i]; - const groupMaterial = material[group.materialIndex]; - - const start = Math.max(group.start, drawRange.start); - const end = Math.min(index.count, Math.min((group.start + group.count), (drawRange.start + 
drawRange.count))); - - for (let j = start, jl = end; j < jl; j += 3) { - - const a = index.getX(j); - const b = index.getX(j + 1); - const c = index.getX(j + 2); - - intersection = checkBufferGeometryIntersection(this, groupMaterial, raycaster, _ray$2, uv, uv2, a, b, c); - - if (intersection) { - - intersection.faceIndex = Math.floor(j / 3); // triangle number in indexed buffer semantics - intersection.face.materialIndex = group.materialIndex; - intersects.push(intersection); - - } - - } - - } - - } else { - - const start = Math.max(0, drawRange.start); - const end = Math.min(index.count, (drawRange.start + drawRange.count)); - - for (let i = start, il = end; i < il; i += 3) { - - const a = index.getX(i); - const b = index.getX(i + 1); - const c = index.getX(i + 2); - - intersection = checkBufferGeometryIntersection(this, material, raycaster, _ray$2, uv, uv2, a, b, c); - - if (intersection) { - - intersection.faceIndex = Math.floor(i / 3); // triangle number in indexed buffer semantics - intersects.push(intersection); - - } - - } - - } - - } else if (position !== undefined) { - - // non-indexed buffer geometry - - if (Array.isArray(material)) { - - for (let i = 0, il = groups.length; i < il; i++) { - - const group = groups[i]; - const groupMaterial = material[group.materialIndex]; - - const start = Math.max(group.start, drawRange.start); - const end = Math.min(position.count, Math.min((group.start + group.count), (drawRange.start + drawRange.count))); - - for (let j = start, jl = end; j < jl; j += 3) { - - const a = j; - const b = j + 1; - const c = j + 2; - - intersection = checkBufferGeometryIntersection(this, groupMaterial, raycaster, _ray$2, uv, uv2, a, b, c); - - if (intersection) { - - intersection.faceIndex = Math.floor(j / 3); // triangle number in non-indexed buffer semantics - intersection.face.materialIndex = group.materialIndex; - intersects.push(intersection); - - } - - } - - } - - } else { - - const start = Math.max(0, drawRange.start); - const end = Math.min(position.count, (drawRange.start + drawRange.count)); - - for (let i = start, il = end; i < il; i += 3) { - - const a = i; - const b = i + 1; - const c = i + 2; - - intersection = checkBufferGeometryIntersection(this, material, raycaster, _ray$2, uv, uv2, a, b, c); - - if (intersection) { - - intersection.faceIndex = Math.floor(i / 3); // triangle number in non-indexed buffer semantics - intersects.push(intersection); - - } - - } - - } - - } - - } - -} - -function checkIntersection(object, material, raycaster, ray, pA, pB, pC, point) { - - let intersect; - - if (material.side === BackSide) { - - intersect = ray.intersectTriangle(pC, pB, pA, true, point); - - } else { - - intersect = ray.intersectTriangle(pA, pB, pC, (material.side === FrontSide), point); - - } - - if (intersect === null) return null; - - _intersectionPointWorld.copy(point); - _intersectionPointWorld.applyMatrix4(object.matrixWorld); - - const distance = raycaster.ray.origin.distanceTo(_intersectionPointWorld); - - if (distance < raycaster.near || distance > raycaster.far) return null; - - return { - distance: distance, - point: _intersectionPointWorld.clone(), - object: object - }; - -} - -function checkBufferGeometryIntersection(object, material, raycaster, ray, uv, uv2, a, b, c) { - - object.getVertexPosition(a, _vA$1); - object.getVertexPosition(b, _vB$1); - object.getVertexPosition(c, _vC$1); - - const intersection = checkIntersection(object, material, raycaster, ray, _vA$1, _vB$1, _vC$1, _intersectionPoint); - - if (intersection) { - - if (uv) { - 
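// interpolate uv coordinates at the hit point from the triangle's barycentric coordinates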
- _uvA$1.fromBufferAttribute(uv, a); - _uvB$1.fromBufferAttribute(uv, b); - _uvC$1.fromBufferAttribute(uv, c); - - intersection.uv = Triangle.getUV(_intersectionPoint, _vA$1, _vB$1, _vC$1, _uvA$1, _uvB$1, _uvC$1, new Vector2()); - - } - - if (uv2) { - - _uvA$1.fromBufferAttribute(uv2, a); - _uvB$1.fromBufferAttribute(uv2, b); - _uvC$1.fromBufferAttribute(uv2, c); - - intersection.uv2 = Triangle.getUV(_intersectionPoint, _vA$1, _vB$1, _vC$1, _uvA$1, _uvB$1, _uvC$1, new Vector2()); - - } - - const face = { - a: a, - b: b, - c: c, - normal: new Vector3(), - materialIndex: 0 - }; - - Triangle.getNormal(_vA$1, _vB$1, _vC$1, face.normal); - - intersection.face = face; - - } - - return intersection; - -} - -class BoxGeometry extends BufferGeometry { - - constructor(width = 1, height = 1, depth = 1, widthSegments = 1, heightSegments = 1, depthSegments = 1) { - - super(); - - this.type = 'BoxGeometry'; - - this.parameters = { - width: width, - height: height, - depth: depth, - widthSegments: widthSegments, - heightSegments: heightSegments, - depthSegments: depthSegments - }; - - const scope = this; - - // segments - - widthSegments = Math.floor(widthSegments); - heightSegments = Math.floor(heightSegments); - depthSegments = Math.floor(depthSegments); - - // buffers - - const indices = []; - const vertices = []; - const normals = []; - const uvs = []; - - // helper variables - - let numberOfVertices = 0; - let groupStart = 0; - - // build each side of the box geometry - - buildPlane('z', 'y', 'x', - 1, - 1, depth, height, width, depthSegments, heightSegments, 0); // px - buildPlane('z', 'y', 'x', 1, - 1, depth, height, - width, depthSegments, heightSegments, 1); // nx - buildPlane('x', 'z', 'y', 1, 1, width, depth, height, widthSegments, depthSegments, 2); // py - buildPlane('x', 'z', 'y', 1, - 1, width, depth, - height, widthSegments, depthSegments, 3); // ny - buildPlane('x', 'y', 'z', 1, - 1, width, height, depth, widthSegments, heightSegments, 4); // pz - buildPlane('x', 'y', 'z', - 1, - 1, width, height, - depth, widthSegments, heightSegments, 5); // nz - - // build geometry - - this.setIndex(indices); - this.setAttribute('position', new Float32BufferAttribute(vertices, 3)); - this.setAttribute('normal', new Float32BufferAttribute(normals, 3)); - this.setAttribute('uv', new Float32BufferAttribute(uvs, 2)); - - function buildPlane(u, v, w, udir, vdir, width, height, depth, gridX, gridY, materialIndex) { - - const segmentWidth = width / gridX; - const segmentHeight = height / gridY; - - const widthHalf = width / 2; - const heightHalf = height / 2; - const depthHalf = depth / 2; - - const gridX1 = gridX + 1; - const gridY1 = gridY + 1; - - let vertexCounter = 0; - let groupCount = 0; - - const vector = new Vector3(); - - // generate vertices, normals and uvs - - for (let iy = 0; iy < gridY1; iy++) { - - const y = iy * segmentHeight - heightHalf; - - for (let ix = 0; ix < gridX1; ix++) { - - const x = ix * segmentWidth - widthHalf; - - // set values to correct vector component - - vector[u] = x * udir; - vector[v] = y * vdir; - vector[w] = depthHalf; - - // now apply vector to vertex buffer - - vertices.push(vector.x, vector.y, vector.z); - - // set values to correct vector component - - vector[u] = 0; - vector[v] = 0; - vector[w] = depth > 0 ? 1 : - 1; - - // now apply vector to normal buffer - - normals.push(vector.x, vector.y, vector.z); - - // uvs - - uvs.push(ix / gridX); - uvs.push(1 - (iy / gridY)); - - // counters - - vertexCounter += 1; - - } - - } - - // indices - - // 1. 
you need three indices to draw a single face - // 2. a single segment consists of two faces - // 3. so we need to generate six (2*3) indices per segment - - for (let iy = 0; iy < gridY; iy++) { - - for (let ix = 0; ix < gridX; ix++) { - - const a = numberOfVertices + ix + gridX1 * iy; - const b = numberOfVertices + ix + gridX1 * (iy + 1); - const c = numberOfVertices + (ix + 1) + gridX1 * (iy + 1); - const d = numberOfVertices + (ix + 1) + gridX1 * iy; - - // faces - - indices.push(a, b, d); - indices.push(b, c, d); - - // increase counter - - groupCount += 6; - - } - - } - - // add a group to the geometry. this will ensure multi material support - - scope.addGroup(groupStart, groupCount, materialIndex); - - // calculate new start value for groups - - groupStart += groupCount; - - // update total number of vertices - - numberOfVertices += vertexCounter; - - } - - } - - static fromJSON(data) { - - return new BoxGeometry(data.width, data.height, data.depth, data.widthSegments, data.heightSegments, data.depthSegments); - - } - -} - -/** - * Uniform Utilities - */ - -function cloneUniforms(src) { - - const dst = {}; - - for (const u in src) { - - dst[u] = {}; - - for (const p in src[u]) { - - const property = src[u][p]; - - if (property && (property.isColor || - property.isMatrix3 || property.isMatrix4 || - property.isVector2 || property.isVector3 || property.isVector4 || - property.isTexture || property.isQuaternion)) { - - dst[u][p] = property.clone(); - - } else if (Array.isArray(property)) { - - dst[u][p] = property.slice(); - - } else { - - dst[u][p] = property; - - } - - } - - } - - return dst; - -} - -function mergeUniforms(uniforms) { - - const merged = {}; - - for (let u = 0; u < uniforms.length; u++) { - - const tmp = cloneUniforms(uniforms[u]); - - for (const p in tmp) { - - merged[p] = tmp[p]; - - } - - } - - return merged; - -} - -function cloneUniformsGroups(src) { - - const dst = []; - - for (let u = 0; u < src.length; u++) { - - dst.push(src[u].clone()); - - } - - return dst; - -} - -function getUnlitUniformColorSpace(renderer) { - - if (renderer.getRenderTarget() === null) { - - // https://github.com/mrdoob/three.js/pull/23937#issuecomment-1111067398 - return renderer.outputEncoding === sRGBEncoding ? 
SRGBColorSpace : LinearSRGBColorSpace; - - } - - return LinearSRGBColorSpace; - -} - -// Legacy - -const UniformsUtils = { clone: cloneUniforms, merge: mergeUniforms }; - -var default_vertex = "void main() {\n\tgl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );\n}"; - -var default_fragment = "void main() {\n\tgl_FragColor = vec4( 1.0, 0.0, 0.0, 1.0 );\n}"; - -class ShaderMaterial extends Material { - - constructor(parameters) { - - super(); - - this.isShaderMaterial = true; - - this.type = 'ShaderMaterial'; - - this.defines = {}; - this.uniforms = {}; - this.uniformsGroups = []; - - this.vertexShader = default_vertex; - this.fragmentShader = default_fragment; - - this.linewidth = 1; - - this.wireframe = false; - this.wireframeLinewidth = 1; - - this.fog = false; // set to use scene fog - this.lights = false; // set to use scene lights - this.clipping = false; // set to use user-defined clipping planes - - this.extensions = { - derivatives: false, // set to use derivatives - fragDepth: false, // set to use fragment depth values - drawBuffers: false, // set to use draw buffers - shaderTextureLOD: false // set to use shader texture LOD - }; - - // When rendered geometry doesn't include these attributes but the material does, - // use these default values in WebGL. This avoids errors when buffer data is missing. - this.defaultAttributeValues = { - 'color': [1, 1, 1], - 'uv': [0, 0], - 'uv2': [0, 0] - }; - - this.index0AttributeName = undefined; - this.uniformsNeedUpdate = false; - - this.glslVersion = null; - - if (parameters !== undefined) { - - this.setValues(parameters); - - } - - } - - copy(source) { - - super.copy(source); - - this.fragmentShader = source.fragmentShader; - this.vertexShader = source.vertexShader; - - this.uniforms = cloneUniforms(source.uniforms); - this.uniformsGroups = cloneUniformsGroups(source.uniformsGroups); - - this.defines = Object.assign({}, source.defines); - - this.wireframe = source.wireframe; - this.wireframeLinewidth = source.wireframeLinewidth; - - this.fog = source.fog; - this.lights = source.lights; - this.clipping = source.clipping; - - this.extensions = Object.assign({}, source.extensions); - - this.glslVersion = source.glslVersion; - - return this; - - } - - toJSON(meta) { - - const data = super.toJSON(meta); - - data.glslVersion = this.glslVersion; - data.uniforms = {}; - - for (const name in this.uniforms) { - - const uniform = this.uniforms[name]; - const value = uniform.value; - - if (value && value.isTexture) { - - data.uniforms[name] = { - type: 't', - value: value.toJSON(meta).uuid - }; - - } else if (value && value.isColor) { - - data.uniforms[name] = { - type: 'c', - value: value.getHex() - }; - - } else if (value && value.isVector2) { - - data.uniforms[name] = { - type: 'v2', - value: value.toArray() - }; - - } else if (value && value.isVector3) { - - data.uniforms[name] = { - type: 'v3', - value: value.toArray() - }; - - } else if (value && value.isVector4) { - - data.uniforms[name] = { - type: 'v4', - value: value.toArray() - }; - - } else if (value && value.isMatrix3) { - - data.uniforms[name] = { - type: 'm3', - value: value.toArray() - }; - - } else if (value && value.isMatrix4) { - - data.uniforms[name] = { - type: 'm4', - value: value.toArray() - }; - - } else { - - data.uniforms[name] = { - value: value - }; - - // note: the array variants v2v, v3v, v4v, m4v and tv are not supported so far - - } - - } - - if (Object.keys(this.defines).length > 0) data.defines = this.defines; - - data.vertexShader = 
this.vertexShader; - data.fragmentShader = this.fragmentShader; - - const extensions = {}; - - for (const key in this.extensions) { - - if (this.extensions[key] === true) extensions[key] = true; - - } - - if (Object.keys(extensions).length > 0) data.extensions = extensions; - - return data; - - } - -} - -class Camera extends Object3D { - - constructor() { - - super(); - - this.isCamera = true; - - this.type = 'Camera'; - - this.matrixWorldInverse = new Matrix4(); - - this.projectionMatrix = new Matrix4(); - this.projectionMatrixInverse = new Matrix4(); - - } - - copy(source, recursive) { - - super.copy(source, recursive); - - this.matrixWorldInverse.copy(source.matrixWorldInverse); - - this.projectionMatrix.copy(source.projectionMatrix); - this.projectionMatrixInverse.copy(source.projectionMatrixInverse); - - return this; - - } - - getWorldDirection(target) { - - this.updateWorldMatrix(true, false); - - const e = this.matrixWorld.elements; - - return target.set(- e[8], - e[9], - e[10]).normalize(); - - } - - updateMatrixWorld(force) { - - super.updateMatrixWorld(force); - - this.matrixWorldInverse.copy(this.matrixWorld).invert(); - - } - - updateWorldMatrix(updateParents, updateChildren) { - - super.updateWorldMatrix(updateParents, updateChildren); - - this.matrixWorldInverse.copy(this.matrixWorld).invert(); - - } - - clone() { - - return new this.constructor().copy(this); - - } - -} - -class PerspectiveCamera extends Camera { - - constructor(fov = 50, aspect = 1, near = 0.1, far = 2000) { - - super(); - - this.isPerspectiveCamera = true; - - this.type = 'PerspectiveCamera'; - - this.fov = fov; - this.zoom = 1; - - this.near = near; - this.far = far; - this.focus = 10; - - this.aspect = aspect; - this.view = null; - - this.filmGauge = 35; // width of the film (default in millimeters) - this.filmOffset = 0; // horizontal film offset (same unit as gauge) - - this.updateProjectionMatrix(); - - } - - copy(source, recursive) { - - super.copy(source, recursive); - - this.fov = source.fov; - this.zoom = source.zoom; - - this.near = source.near; - this.far = source.far; - this.focus = source.focus; - - this.aspect = source.aspect; - this.view = source.view === null ? null : Object.assign({}, source.view); - - this.filmGauge = source.filmGauge; - this.filmOffset = source.filmOffset; - - return this; - - } - - /** - * Sets the FOV by focal length in respect to the current .filmGauge. - * - * The default film gauge is 35, so that the focal length can be specified for - * a 35mm (full frame) camera. - * - * Values for focal length and film gauge must have the same unit. - */ - setFocalLength(focalLength) { - - /** see {@link http://www.bobatkins.com/photography/technical/field_of_view.html} */ - const vExtentSlope = 0.5 * this.getFilmHeight() / focalLength; - - this.fov = RAD2DEG * 2 * Math.atan(vExtentSlope); - this.updateProjectionMatrix(); - - } - - /** - * Calculates the focal length from the current .fov and .filmGauge. 
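- * For example, with the default .filmGauge of 35 and the default .fov of 50 degrees, this returns approximately 37.5 (millimeters).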
- */ - getFocalLength() { - - const vExtentSlope = Math.tan(DEG2RAD * 0.5 * this.fov); - - return 0.5 * this.getFilmHeight() / vExtentSlope; - - } - - getEffectiveFOV() { - - return RAD2DEG * 2 * Math.atan( - Math.tan(DEG2RAD * 0.5 * this.fov) / this.zoom); - - } - - getFilmWidth() { - - // film not completely covered in portrait format (aspect < 1) - return this.filmGauge * Math.min(this.aspect, 1); - - } - - getFilmHeight() { - - // film not completely covered in landscape format (aspect > 1) - return this.filmGauge / Math.max(this.aspect, 1); - - } - - /** - * Sets an offset in a larger frustum. This is useful for multi-window or - * multi-monitor/multi-machine setups. - * - * For example, if you have 3x2 monitors and each monitor is 1920x1080 and - * the monitors are in grid like this - * - * +---+---+---+ - * | A | B | C | - * +---+---+---+ - * | D | E | F | - * +---+---+---+ - * - * then for each monitor you would call it like this - * - * const w = 1920; - * const h = 1080; - * const fullWidth = w * 3; - * const fullHeight = h * 2; - * - * --A-- - * camera.setViewOffset( fullWidth, fullHeight, w * 0, h * 0, w, h ); - * --B-- - * camera.setViewOffset( fullWidth, fullHeight, w * 1, h * 0, w, h ); - * --C-- - * camera.setViewOffset( fullWidth, fullHeight, w * 2, h * 0, w, h ); - * --D-- - * camera.setViewOffset( fullWidth, fullHeight, w * 0, h * 1, w, h ); - * --E-- - * camera.setViewOffset( fullWidth, fullHeight, w * 1, h * 1, w, h ); - * --F-- - * camera.setViewOffset( fullWidth, fullHeight, w * 2, h * 1, w, h ); - * - * Note there is no reason monitors have to be the same size or in a grid. - */ - setViewOffset(fullWidth, fullHeight, x, y, width, height) { - - this.aspect = fullWidth / fullHeight; - - if (this.view === null) { - - this.view = { - enabled: true, - fullWidth: 1, - fullHeight: 1, - offsetX: 0, - offsetY: 0, - width: 1, - height: 1 - }; - - } - - this.view.enabled = true; - this.view.fullWidth = fullWidth; - this.view.fullHeight = fullHeight; - this.view.offsetX = x; - this.view.offsetY = y; - this.view.width = width; - this.view.height = height; - - this.updateProjectionMatrix(); - - } - - clearViewOffset() { - - if (this.view !== null) { - - this.view.enabled = false; - - } - - this.updateProjectionMatrix(); - - } - - updateProjectionMatrix() { - - const near = this.near; - let top = near * Math.tan(DEG2RAD * 0.5 * this.fov) / this.zoom; - let height = 2 * top; - let width = this.aspect * height; - let left = - 0.5 * width; - const view = this.view; - - if (this.view !== null && this.view.enabled) { - - const fullWidth = view.fullWidth, - fullHeight = view.fullHeight; - - left += view.offsetX * width / fullWidth; - top -= view.offsetY * height / fullHeight; - width *= view.width / fullWidth; - height *= view.height / fullHeight; - - } - - const skew = this.filmOffset; - if (skew !== 0) left += near * skew / this.getFilmWidth(); - - this.projectionMatrix.makePerspective(left, left + width, top, top - height, near, this.far); - - this.projectionMatrixInverse.copy(this.projectionMatrix).invert(); - - } - - toJSON(meta) { - - const data = super.toJSON(meta); - - data.object.fov = this.fov; - data.object.zoom = this.zoom; - - data.object.near = this.near; - data.object.far = this.far; - data.object.focus = this.focus; - - data.object.aspect = this.aspect; - - if (this.view !== null) data.object.view = Object.assign({}, this.view); - - data.object.filmGauge = this.filmGauge; - data.object.filmOffset = this.filmOffset; - - return data; - - } - -} - -const fov = - 90; // 
negative fov is not an error -const aspect = 1; - -class CubeCamera extends Object3D { - - constructor(near, far, renderTarget) { - - super(); - - this.type = 'CubeCamera'; - - this.renderTarget = renderTarget; - - const cameraPX = new PerspectiveCamera(fov, aspect, near, far); - cameraPX.layers = this.layers; - cameraPX.up.set(0, 1, 0); - cameraPX.lookAt(1, 0, 0); - this.add(cameraPX); - - const cameraNX = new PerspectiveCamera(fov, aspect, near, far); - cameraNX.layers = this.layers; - cameraNX.up.set(0, 1, 0); - cameraNX.lookAt(- 1, 0, 0); - this.add(cameraNX); - - const cameraPY = new PerspectiveCamera(fov, aspect, near, far); - cameraPY.layers = this.layers; - cameraPY.up.set(0, 0, - 1); - cameraPY.lookAt(0, 1, 0); - this.add(cameraPY); - - const cameraNY = new PerspectiveCamera(fov, aspect, near, far); - cameraNY.layers = this.layers; - cameraNY.up.set(0, 0, 1); - cameraNY.lookAt(0, - 1, 0); - this.add(cameraNY); - - const cameraPZ = new PerspectiveCamera(fov, aspect, near, far); - cameraPZ.layers = this.layers; - cameraPZ.up.set(0, 1, 0); - cameraPZ.lookAt(0, 0, 1); - this.add(cameraPZ); - - const cameraNZ = new PerspectiveCamera(fov, aspect, near, far); - cameraNZ.layers = this.layers; - cameraNZ.up.set(0, 1, 0); - cameraNZ.lookAt(0, 0, - 1); - this.add(cameraNZ); - - } - - update(renderer, scene) { - - if (this.parent === null) this.updateMatrixWorld(); - - const renderTarget = this.renderTarget; - - const [cameraPX, cameraNX, cameraPY, cameraNY, cameraPZ, cameraNZ] = this.children; - - const currentRenderTarget = renderer.getRenderTarget(); - - const currentToneMapping = renderer.toneMapping; - const currentXrEnabled = renderer.xr.enabled; - - renderer.toneMapping = NoToneMapping; - renderer.xr.enabled = false; - - const generateMipmaps = renderTarget.texture.generateMipmaps; - - renderTarget.texture.generateMipmaps = false; - - renderer.setRenderTarget(renderTarget, 0); - renderer.render(scene, cameraPX); - - renderer.setRenderTarget(renderTarget, 1); - renderer.render(scene, cameraNX); - - renderer.setRenderTarget(renderTarget, 2); - renderer.render(scene, cameraPY); - - renderer.setRenderTarget(renderTarget, 3); - renderer.render(scene, cameraNY); - - renderer.setRenderTarget(renderTarget, 4); - renderer.render(scene, cameraPZ); - - renderTarget.texture.generateMipmaps = generateMipmaps; - - renderer.setRenderTarget(renderTarget, 5); - renderer.render(scene, cameraNZ); - - renderer.setRenderTarget(currentRenderTarget); - - renderer.toneMapping = currentToneMapping; - renderer.xr.enabled = currentXrEnabled; - - renderTarget.texture.needsPMREMUpdate = true; - - } - -} - -class CubeTexture extends Texture { - - constructor(images, mapping, wrapS, wrapT, magFilter, minFilter, format, type, anisotropy, encoding) { - - images = images !== undefined ? images : []; - mapping = mapping !== undefined ? 
mapping : CubeReflectionMapping; - - super(images, mapping, wrapS, wrapT, magFilter, minFilter, format, type, anisotropy, encoding); - - this.isCubeTexture = true; - - this.flipY = false; - - } - - get images() { - - return this.image; - - } - - set images(value) { - - this.image = value; - - } - -} - -class WebGLCubeRenderTarget extends WebGLRenderTarget { - - constructor(size = 1, options = {}) { - - super(size, size, options); - - this.isWebGLCubeRenderTarget = true; - - const image = { width: size, height: size, depth: 1 }; - const images = [image, image, image, image, image, image]; - - this.texture = new CubeTexture(images, options.mapping, options.wrapS, options.wrapT, options.magFilter, options.minFilter, options.format, options.type, options.anisotropy, options.encoding); - - // By convention -- likely based on the RenderMan spec from the 1990's -- cube maps are specified by WebGL (and three.js) - // in a coordinate system in which positive-x is to the right when looking up the positive-z axis -- in other words, - // in a left-handed coordinate system. By continuing this convention, preexisting cube maps continued to render correctly. - - // three.js uses a right-handed coordinate system. So environment maps used in three.js appear to have px and nx swapped - // and the flag isRenderTargetTexture controls this conversion. The flip is not required when using WebGLCubeRenderTarget.texture - // as a cube texture (this is detected when isRenderTargetTexture is set to true for cube textures). - - this.texture.isRenderTargetTexture = true; - - this.texture.generateMipmaps = options.generateMipmaps !== undefined ? options.generateMipmaps : false; - this.texture.minFilter = options.minFilter !== undefined ? options.minFilter : LinearFilter; - - } - - fromEquirectangularTexture(renderer, texture) { - - this.texture.type = texture.type; - this.texture.encoding = texture.encoding; - - this.texture.generateMipmaps = texture.generateMipmaps; - this.texture.minFilter = texture.minFilter; - this.texture.magFilter = texture.magFilter; - - const shader = { - - uniforms: { - tEquirect: { value: null }, - }, - - vertexShader: /* glsl */` - - varying vec3 vWorldDirection; - - vec3 transformDirection( in vec3 dir, in mat4 matrix ) { - - return normalize( ( matrix * vec4( dir, 0.0 ) ).xyz ); - - } - - void main() { - - vWorldDirection = transformDirection( position, modelMatrix ); - - #include <begin_vertex> - #include <project_vertex> - - } - `, - - fragmentShader: /* glsl */` - - uniform sampler2D tEquirect; - - varying vec3 vWorldDirection; - - #include <common> - - void main() { - - vec3 direction = normalize( vWorldDirection ); - - vec2 sampleUV = equirectUv( direction ); - - gl_FragColor = texture2D( tEquirect, sampleUV ); - - } - ` - }; - - const geometry = new BoxGeometry(5, 5, 5); - - const material = new ShaderMaterial({ - - name: 'CubemapFromEquirect', - - uniforms: cloneUniforms(shader.uniforms), - vertexShader: shader.vertexShader, - fragmentShader: shader.fragmentShader, - side: BackSide, - blending: NoBlending - - }); - - material.uniforms.tEquirect.value = texture; - - const mesh = new Mesh(geometry, material); - - const currentMinFilter = texture.minFilter; - - // Avoid blurred poles - if (texture.minFilter === LinearMipmapLinearFilter) texture.minFilter = LinearFilter; - - const camera = new CubeCamera(1, 10, this); - camera.update(renderer, mesh); - - texture.minFilter = currentMinFilter; - - mesh.geometry.dispose(); - mesh.material.dispose(); - - return this; - - } - - clear(renderer, color, depth, stencil) { - - const
currentRenderTarget = renderer.getRenderTarget(); - - for (let i = 0; i < 6; i++) { - - renderer.setRenderTarget(this, i); - - renderer.clear(color, depth, stencil); - - } - - renderer.setRenderTarget(currentRenderTarget); - - } - -} - -const _vector1 = /*@__PURE__*/ new Vector3(); -const _vector2 = /*@__PURE__*/ new Vector3(); -const _normalMatrix = /*@__PURE__*/ new Matrix3(); - -class Plane { - - constructor(normal = new Vector3(1, 0, 0), constant = 0) { - - this.isPlane = true; - - // normal is assumed to be normalized - - this.normal = normal; - this.constant = constant; - - } - - set(normal, constant) { - - this.normal.copy(normal); - this.constant = constant; - - return this; - - } - - setComponents(x, y, z, w) { - - this.normal.set(x, y, z); - this.constant = w; - - return this; - - } - - setFromNormalAndCoplanarPoint(normal, point) { - - this.normal.copy(normal); - this.constant = - point.dot(this.normal); - - return this; - - } - - setFromCoplanarPoints(a, b, c) { - - const normal = _vector1.subVectors(c, b).cross(_vector2.subVectors(a, b)).normalize(); - - // Q: should an error be thrown if normal is zero (e.g. degenerate plane)? - - this.setFromNormalAndCoplanarPoint(normal, a); - - return this; - - } - - copy(plane) { - - this.normal.copy(plane.normal); - this.constant = plane.constant; - - return this; - - } - - normalize() { - - // Note: will lead to a divide by zero if the plane is invalid. - - const inverseNormalLength = 1.0 / this.normal.length(); - this.normal.multiplyScalar(inverseNormalLength); - this.constant *= inverseNormalLength; - - return this; - - } - - negate() { - - this.constant *= - 1; - this.normal.negate(); - - return this; - - } - - distanceToPoint(point) { - - return this.normal.dot(point) + this.constant; - - } - - distanceToSphere(sphere) { - - return this.distanceToPoint(sphere.center) - sphere.radius; - - } - - projectPoint(point, target) { - - return target.copy(this.normal).multiplyScalar(- this.distanceToPoint(point)).add(point); - - } - - intersectLine(line, target) { - - const direction = line.delta(_vector1); - - const denominator = this.normal.dot(direction); - - if (denominator === 0) { - - // line is coplanar, return origin - if (this.distanceToPoint(line.start) === 0) { - - return target.copy(line.start); - - } - - // Unsure if this is the correct method to handle this case. - return null; - - } - - const t = - (line.start.dot(this.normal) + this.constant) / denominator; - - if (t < 0 || t > 1) { - - return null; - - } - - return target.copy(direction).multiplyScalar(t).add(line.start); - - } - - intersectsLine(line) { - - // Note: this tests if a line intersects the plane, not whether it (or its end-points) are coplanar with it. 
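- // (the check below treats the segment as crossing the plane only when its two end-points have signed distances of opposite sign)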
- - const startSign = this.distanceToPoint(line.start); - const endSign = this.distanceToPoint(line.end); - - return (startSign < 0 && endSign > 0) || (endSign < 0 && startSign > 0); - - } - - intersectsBox(box) { - - return box.intersectsPlane(this); - - } - - intersectsSphere(sphere) { - - return sphere.intersectsPlane(this); - - } - - coplanarPoint(target) { - - return target.copy(this.normal).multiplyScalar(- this.constant); - - } - - applyMatrix4(matrix, optionalNormalMatrix) { - - const normalMatrix = optionalNormalMatrix || _normalMatrix.getNormalMatrix(matrix); - - const referencePoint = this.coplanarPoint(_vector1).applyMatrix4(matrix); - - const normal = this.normal.applyMatrix3(normalMatrix).normalize(); - - this.constant = - referencePoint.dot(normal); - - return this; - - } - - translate(offset) { - - this.constant -= offset.dot(this.normal); - - return this; - - } - - equals(plane) { - - return plane.normal.equals(this.normal) && (plane.constant === this.constant); - - } - - clone() { - - return new this.constructor().copy(this); - - } - -} - -const _sphere$2 = /*@__PURE__*/ new Sphere(); -const _vector$7 = /*@__PURE__*/ new Vector3(); - -class Frustum { - - constructor(p0 = new Plane(), p1 = new Plane(), p2 = new Plane(), p3 = new Plane(), p4 = new Plane(), p5 = new Plane()) { - - this.planes = [p0, p1, p2, p3, p4, p5]; - - } - - set(p0, p1, p2, p3, p4, p5) { - - const planes = this.planes; - - planes[0].copy(p0); - planes[1].copy(p1); - planes[2].copy(p2); - planes[3].copy(p3); - planes[4].copy(p4); - planes[5].copy(p5); - - return this; - - } - - copy(frustum) { - - const planes = this.planes; - - for (let i = 0; i < 6; i++) { - - planes[i].copy(frustum.planes[i]); - - } - - return this; - - } - - setFromProjectionMatrix(m) { - - const planes = this.planes; - const me = m.elements; - const me0 = me[0], me1 = me[1], me2 = me[2], me3 = me[3]; - const me4 = me[4], me5 = me[5], me6 = me[6], me7 = me[7]; - const me8 = me[8], me9 = me[9], me10 = me[10], me11 = me[11]; - const me12 = me[12], me13 = me[13], me14 = me[14], me15 = me[15]; - - planes[0].setComponents(me3 - me0, me7 - me4, me11 - me8, me15 - me12).normalize(); - planes[1].setComponents(me3 + me0, me7 + me4, me11 + me8, me15 + me12).normalize(); - planes[2].setComponents(me3 + me1, me7 + me5, me11 + me9, me15 + me13).normalize(); - planes[3].setComponents(me3 - me1, me7 - me5, me11 - me9, me15 - me13).normalize(); - planes[4].setComponents(me3 - me2, me7 - me6, me11 - me10, me15 - me14).normalize(); - planes[5].setComponents(me3 + me2, me7 + me6, me11 + me10, me15 + me14).normalize(); - - return this; - - } - - intersectsObject(object) { - - const geometry = object.geometry; - - if (geometry.boundingSphere === null) geometry.computeBoundingSphere(); - - _sphere$2.copy(geometry.boundingSphere).applyMatrix4(object.matrixWorld); - - return this.intersectsSphere(_sphere$2); - - } - - intersectsSprite(sprite) { - - _sphere$2.center.set(0, 0, 0); - _sphere$2.radius = 0.7071067811865476; - _sphere$2.applyMatrix4(sprite.matrixWorld); - - return this.intersectsSphere(_sphere$2); - - } - - intersectsSphere(sphere) { - - const planes = this.planes; - const center = sphere.center; - const negRadius = - sphere.radius; - - for (let i = 0; i < 6; i++) { - - const distance = planes[i].distanceToPoint(center); - - if (distance < negRadius) { - - return false; - - } - - } - - return true; - - } - - intersectsBox(box) { - - const planes = this.planes; - - for (let i = 0; i < 6; i++) { - - const plane = planes[i]; - - // corner at max 
distance - - _vector$7.x = plane.normal.x > 0 ? box.max.x : box.min.x; - _vector$7.y = plane.normal.y > 0 ? box.max.y : box.min.y; - _vector$7.z = plane.normal.z > 0 ? box.max.z : box.min.z; - - if (plane.distanceToPoint(_vector$7) < 0) { - - return false; - - } - - } - - return true; - - } - - containsPoint(point) { - - const planes = this.planes; - - for (let i = 0; i < 6; i++) { - - if (planes[i].distanceToPoint(point) < 0) { - - return false; - - } - - } - - return true; - - } - - clone() { - - return new this.constructor().copy(this); - - } - -} - -function WebGLAnimation() { - - let context = null; - let isAnimating = false; - let animationLoop = null; - let requestId = null; - - function onAnimationFrame(time, frame) { - - animationLoop(time, frame); - - requestId = context.requestAnimationFrame(onAnimationFrame); - - } - - return { - - start: function () { - - if (isAnimating === true) return; - if (animationLoop === null) return; - - requestId = context.requestAnimationFrame(onAnimationFrame); - - isAnimating = true; - - }, - - stop: function () { - - context.cancelAnimationFrame(requestId); - - isAnimating = false; - - }, - - setAnimationLoop: function (callback) { - - animationLoop = callback; - - }, - - setContext: function (value) { - - context = value; - - } - - }; - -} - -function WebGLAttributes(gl, capabilities) { - - const isWebGL2 = capabilities.isWebGL2; - - const buffers = new WeakMap(); - - function createBuffer(attribute, bufferType) { - - const array = attribute.array; - const usage = attribute.usage; - - const buffer = gl.createBuffer(); - - gl.bindBuffer(bufferType, buffer); - gl.bufferData(bufferType, array, usage); - - attribute.onUploadCallback(); - - let type; - - if (array instanceof Float32Array) { - - type = 5126; - - } else if (array instanceof Uint16Array) { - - if (attribute.isFloat16BufferAttribute) { - - if (isWebGL2) { - - type = 5131; - - } else { - - throw new Error('THREE.WebGLAttributes: Usage of Float16BufferAttribute requires WebGL2.'); - - } - - } else { - - type = 5123; - - } - - } else if (array instanceof Int16Array) { - - type = 5122; - - } else if (array instanceof Uint32Array) { - - type = 5125; - - } else if (array instanceof Int32Array) { - - type = 5124; - - } else if (array instanceof Int8Array) { - - type = 5120; - - } else if (array instanceof Uint8Array) { - - type = 5121; - - } else if (array instanceof Uint8ClampedArray) { - - type = 5121; - - } else { - - throw new Error('THREE.WebGLAttributes: Unsupported buffer data format: ' + array); - - } - - return { - buffer: buffer, - type: type, - bytesPerElement: array.BYTES_PER_ELEMENT, - version: attribute.version - }; - - } - - function updateBuffer(buffer, attribute, bufferType) { - - const array = attribute.array; - const updateRange = attribute.updateRange; - - gl.bindBuffer(bufferType, buffer); - - if (updateRange.count === - 1) { - - // Not using update ranges - - gl.bufferSubData(bufferType, 0, array); - - } else { - - if (isWebGL2) { - - gl.bufferSubData(bufferType, updateRange.offset * array.BYTES_PER_ELEMENT, - array, updateRange.offset, updateRange.count); - - } else { - - gl.bufferSubData(bufferType, updateRange.offset * array.BYTES_PER_ELEMENT, - array.subarray(updateRange.offset, updateRange.offset + updateRange.count)); - - } - - updateRange.count = - 1; // reset range - - } - - attribute.onUploadCallback(); - - } - - // - - function get(attribute) { - - if (attribute.isInterleavedBufferAttribute) attribute = attribute.data; - - return buffers.get(attribute); - - } - - 
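- // remove() deletes the WebGLBuffer cached for an attribute, if any, and drops its entry from the WeakMap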
function remove(attribute) { - - if (attribute.isInterleavedBufferAttribute) attribute = attribute.data; - - const data = buffers.get(attribute); - - if (data) { - - gl.deleteBuffer(data.buffer); - - buffers.delete(attribute); - - } - - } - - function update(attribute, bufferType) { - - if (attribute.isGLBufferAttribute) { - - const cached = buffers.get(attribute); - - if (!cached || cached.version < attribute.version) { - - buffers.set(attribute, { - buffer: attribute.buffer, - type: attribute.type, - bytesPerElement: attribute.elementSize, - version: attribute.version - }); - - } - - return; - - } - - if (attribute.isInterleavedBufferAttribute) attribute = attribute.data; - - const data = buffers.get(attribute); - - if (data === undefined) { - - buffers.set(attribute, createBuffer(attribute, bufferType)); - - } else if (data.version < attribute.version) { - - updateBuffer(data.buffer, attribute, bufferType); - - data.version = attribute.version; - - } - - } - - return { - - get: get, - remove: remove, - update: update - - }; - -} - -class PlaneGeometry extends BufferGeometry { - - constructor(width = 1, height = 1, widthSegments = 1, heightSegments = 1) { - - super(); - - this.type = 'PlaneGeometry'; - - this.parameters = { - width: width, - height: height, - widthSegments: widthSegments, - heightSegments: heightSegments - }; - - const width_half = width / 2; - const height_half = height / 2; - - const gridX = Math.floor(widthSegments); - const gridY = Math.floor(heightSegments); - - const gridX1 = gridX + 1; - const gridY1 = gridY + 1; - - const segment_width = width / gridX; - const segment_height = height / gridY; - - // - - const indices = []; - const vertices = []; - const normals = []; - const uvs = []; - - for (let iy = 0; iy < gridY1; iy++) { - - const y = iy * segment_height - height_half; - - for (let ix = 0; ix < gridX1; ix++) { - - const x = ix * segment_width - width_half; - - vertices.push(x, - y, 0); - - normals.push(0, 0, 1); - - uvs.push(ix / gridX); - uvs.push(1 - (iy / gridY)); - - } - - } - - for (let iy = 0; iy < gridY; iy++) { - - for (let ix = 0; ix < gridX; ix++) { - - const a = ix + gridX1 * iy; - const b = ix + gridX1 * (iy + 1); - const c = (ix + 1) + gridX1 * (iy + 1); - const d = (ix + 1) + gridX1 * iy; - - indices.push(a, b, d); - indices.push(b, c, d); - - } - - } - - this.setIndex(indices); - this.setAttribute('position', new Float32BufferAttribute(vertices, 3)); - this.setAttribute('normal', new Float32BufferAttribute(normals, 3)); - this.setAttribute('uv', new Float32BufferAttribute(uvs, 2)); - - } - - static fromJSON(data) { - - return new PlaneGeometry(data.width, data.height, data.widthSegments, data.heightSegments); - - } - -} - -var alphamap_fragment = "#ifdef USE_ALPHAMAP\n\tdiffuseColor.a *= texture2D( alphaMap, vUv ).g;\n#endif"; - -var alphamap_pars_fragment = "#ifdef USE_ALPHAMAP\n\tuniform sampler2D alphaMap;\n#endif"; - -var alphatest_fragment = "#ifdef USE_ALPHATEST\n\tif ( diffuseColor.a < alphaTest ) discard;\n#endif"; - -var alphatest_pars_fragment = "#ifdef USE_ALPHATEST\n\tuniform float alphaTest;\n#endif"; - -var aomap_fragment = "#ifdef USE_AOMAP\n\tfloat ambientOcclusion = ( texture2D( aoMap, vUv2 ).r - 1.0 ) * aoMapIntensity + 1.0;\n\treflectedLight.indirectDiffuse *= ambientOcclusion;\n\t#if defined( USE_ENVMAP ) && defined( STANDARD )\n\t\tfloat dotNV = saturate( dot( geometry.normal, geometry.viewDir ) );\n\t\treflectedLight.indirectSpecular *= computeSpecularOcclusion( dotNV, ambientOcclusion, material.roughness 
);\n\t#endif\n#endif"; - -var aomap_pars_fragment = "#ifdef USE_AOMAP\n\tuniform sampler2D aoMap;\n\tuniform float aoMapIntensity;\n#endif"; - -var begin_vertex = "vec3 transformed = vec3( position );"; - -var beginnormal_vertex = "vec3 objectNormal = vec3( normal );\n#ifdef USE_TANGENT\n\tvec3 objectTangent = vec3( tangent.xyz );\n#endif"; - -var bsdfs = "vec3 BRDF_Lambert( const in vec3 diffuseColor ) {\n\treturn RECIPROCAL_PI * diffuseColor;\n}\nvec3 F_Schlick( const in vec3 f0, const in float f90, const in float dotVH ) {\n\tfloat fresnel = exp2( ( - 5.55473 * dotVH - 6.98316 ) * dotVH );\n\treturn f0 * ( 1.0 - fresnel ) + ( f90 * fresnel );\n}\nfloat F_Schlick( const in float f0, const in float f90, const in float dotVH ) {\n\tfloat fresnel = exp2( ( - 5.55473 * dotVH - 6.98316 ) * dotVH );\n\treturn f0 * ( 1.0 - fresnel ) + ( f90 * fresnel );\n}\nvec3 Schlick_to_F0( const in vec3 f, const in float f90, const in float dotVH ) {\n float x = clamp( 1.0 - dotVH, 0.0, 1.0 );\n float x2 = x * x;\n float x5 = clamp( x * x2 * x2, 0.0, 0.9999 );\n return ( f - vec3( f90 ) * x5 ) / ( 1.0 - x5 );\n}\nfloat V_GGX_SmithCorrelated( const in float alpha, const in float dotNL, const in float dotNV ) {\n\tfloat a2 = pow2( alpha );\n\tfloat gv = dotNL * sqrt( a2 + ( 1.0 - a2 ) * pow2( dotNV ) );\n\tfloat gl = dotNV * sqrt( a2 + ( 1.0 - a2 ) * pow2( dotNL ) );\n\treturn 0.5 / max( gv + gl, EPSILON );\n}\nfloat D_GGX( const in float alpha, const in float dotNH ) {\n\tfloat a2 = pow2( alpha );\n\tfloat denom = pow2( dotNH ) * ( a2 - 1.0 ) + 1.0;\n\treturn RECIPROCAL_PI * a2 / pow2( denom );\n}\nvec3 BRDF_GGX( const in vec3 lightDir, const in vec3 viewDir, const in vec3 normal, const in vec3 f0, const in float f90, const in float roughness ) {\n\tfloat alpha = pow2( roughness );\n\tvec3 halfDir = normalize( lightDir + viewDir );\n\tfloat dotNL = saturate( dot( normal, lightDir ) );\n\tfloat dotNV = saturate( dot( normal, viewDir ) );\n\tfloat dotNH = saturate( dot( normal, halfDir ) );\n\tfloat dotVH = saturate( dot( viewDir, halfDir ) );\n\tvec3 F = F_Schlick( f0, f90, dotVH );\n\tfloat V = V_GGX_SmithCorrelated( alpha, dotNL, dotNV );\n\tfloat D = D_GGX( alpha, dotNH );\n\treturn F * ( V * D );\n}\n#ifdef USE_IRIDESCENCE\n\tvec3 BRDF_GGX_Iridescence( const in vec3 lightDir, const in vec3 viewDir, const in vec3 normal, const in vec3 f0, const in float f90, const in float iridescence, const in vec3 iridescenceFresnel, const in float roughness ) {\n\t\tfloat alpha = pow2( roughness );\n\t\tvec3 halfDir = normalize( lightDir + viewDir );\n\t\tfloat dotNL = saturate( dot( normal, lightDir ) );\n\t\tfloat dotNV = saturate( dot( normal, viewDir ) );\n\t\tfloat dotNH = saturate( dot( normal, halfDir ) );\n\t\tfloat dotVH = saturate( dot( viewDir, halfDir ) );\n\t\tvec3 F = mix( F_Schlick( f0, f90, dotVH ), iridescenceFresnel, iridescence );\n\t\tfloat V = V_GGX_SmithCorrelated( alpha, dotNL, dotNV );\n\t\tfloat D = D_GGX( alpha, dotNH );\n\t\treturn F * ( V * D );\n\t}\n#endif\nvec2 LTC_Uv( const in vec3 N, const in vec3 V, const in float roughness ) {\n\tconst float LUT_SIZE = 64.0;\n\tconst float LUT_SCALE = ( LUT_SIZE - 1.0 ) / LUT_SIZE;\n\tconst float LUT_BIAS = 0.5 / LUT_SIZE;\n\tfloat dotNV = saturate( dot( N, V ) );\n\tvec2 uv = vec2( roughness, sqrt( 1.0 - dotNV ) );\n\tuv = uv * LUT_SCALE + LUT_BIAS;\n\treturn uv;\n}\nfloat LTC_ClippedSphereFormFactor( const in vec3 f ) {\n\tfloat l = length( f );\n\treturn max( ( l * l + f.z ) / ( l + 1.0 ), 0.0 );\n}\nvec3 LTC_EdgeVectorFormFactor( const in vec3 v1, 
const in vec3 v2 ) {\n\tfloat x = dot( v1, v2 );\n\tfloat y = abs( x );\n\tfloat a = 0.8543985 + ( 0.4965155 + 0.0145206 * y ) * y;\n\tfloat b = 3.4175940 + ( 4.1616724 + y ) * y;\n\tfloat v = a / b;\n\tfloat theta_sintheta = ( x > 0.0 ) ? v : 0.5 * inversesqrt( max( 1.0 - x * x, 1e-7 ) ) - v;\n\treturn cross( v1, v2 ) * theta_sintheta;\n}\nvec3 LTC_Evaluate( const in vec3 N, const in vec3 V, const in vec3 P, const in mat3 mInv, const in vec3 rectCoords[ 4 ] ) {\n\tvec3 v1 = rectCoords[ 1 ] - rectCoords[ 0 ];\n\tvec3 v2 = rectCoords[ 3 ] - rectCoords[ 0 ];\n\tvec3 lightNormal = cross( v1, v2 );\n\tif( dot( lightNormal, P - rectCoords[ 0 ] ) < 0.0 ) return vec3( 0.0 );\n\tvec3 T1, T2;\n\tT1 = normalize( V - N * dot( V, N ) );\n\tT2 = - cross( N, T1 );\n\tmat3 mat = mInv * transposeMat3( mat3( T1, T2, N ) );\n\tvec3 coords[ 4 ];\n\tcoords[ 0 ] = mat * ( rectCoords[ 0 ] - P );\n\tcoords[ 1 ] = mat * ( rectCoords[ 1 ] - P );\n\tcoords[ 2 ] = mat * ( rectCoords[ 2 ] - P );\n\tcoords[ 3 ] = mat * ( rectCoords[ 3 ] - P );\n\tcoords[ 0 ] = normalize( coords[ 0 ] );\n\tcoords[ 1 ] = normalize( coords[ 1 ] );\n\tcoords[ 2 ] = normalize( coords[ 2 ] );\n\tcoords[ 3 ] = normalize( coords[ 3 ] );\n\tvec3 vectorFormFactor = vec3( 0.0 );\n\tvectorFormFactor += LTC_EdgeVectorFormFactor( coords[ 0 ], coords[ 1 ] );\n\tvectorFormFactor += LTC_EdgeVectorFormFactor( coords[ 1 ], coords[ 2 ] );\n\tvectorFormFactor += LTC_EdgeVectorFormFactor( coords[ 2 ], coords[ 3 ] );\n\tvectorFormFactor += LTC_EdgeVectorFormFactor( coords[ 3 ], coords[ 0 ] );\n\tfloat result = LTC_ClippedSphereFormFactor( vectorFormFactor );\n\treturn vec3( result );\n}\nfloat G_BlinnPhong_Implicit( ) {\n\treturn 0.25;\n}\nfloat D_BlinnPhong( const in float shininess, const in float dotNH ) {\n\treturn RECIPROCAL_PI * ( shininess * 0.5 + 1.0 ) * pow( dotNH, shininess );\n}\nvec3 BRDF_BlinnPhong( const in vec3 lightDir, const in vec3 viewDir, const in vec3 normal, const in vec3 specularColor, const in float shininess ) {\n\tvec3 halfDir = normalize( lightDir + viewDir );\n\tfloat dotNH = saturate( dot( normal, halfDir ) );\n\tfloat dotVH = saturate( dot( viewDir, halfDir ) );\n\tvec3 F = F_Schlick( specularColor, 1.0, dotVH );\n\tfloat G = G_BlinnPhong_Implicit( );\n\tfloat D = D_BlinnPhong( shininess, dotNH );\n\treturn F * ( G * D );\n}\n#if defined( USE_SHEEN )\nfloat D_Charlie( float roughness, float dotNH ) {\n\tfloat alpha = pow2( roughness );\n\tfloat invAlpha = 1.0 / alpha;\n\tfloat cos2h = dotNH * dotNH;\n\tfloat sin2h = max( 1.0 - cos2h, 0.0078125 );\n\treturn ( 2.0 + invAlpha ) * pow( sin2h, invAlpha * 0.5 ) / ( 2.0 * PI );\n}\nfloat V_Neubelt( float dotNV, float dotNL ) {\n\treturn saturate( 1.0 / ( 4.0 * ( dotNL + dotNV - dotNL * dotNV ) ) );\n}\nvec3 BRDF_Sheen( const in vec3 lightDir, const in vec3 viewDir, const in vec3 normal, vec3 sheenColor, const in float sheenRoughness ) {\n\tvec3 halfDir = normalize( lightDir + viewDir );\n\tfloat dotNL = saturate( dot( normal, lightDir ) );\n\tfloat dotNV = saturate( dot( normal, viewDir ) );\n\tfloat dotNH = saturate( dot( normal, halfDir ) );\n\tfloat D = D_Charlie( sheenRoughness, dotNH );\n\tfloat V = V_Neubelt( dotNV, dotNL );\n\treturn sheenColor * ( D * V );\n}\n#endif"; - -var iridescence_fragment = "#ifdef USE_IRIDESCENCE\n\tconst mat3 XYZ_TO_REC709 = mat3(\n\t\t 3.2404542, -0.9692660, 0.0556434,\n\t\t-1.5371385, 1.8760108, -0.2040259,\n\t\t-0.4985314, 0.0415560, 1.0572252\n\t);\n\tvec3 Fresnel0ToIor( vec3 fresnel0 ) {\n\t\tvec3 sqrtF0 = sqrt( fresnel0 );\n\t\treturn ( vec3( 
1.0 ) + sqrtF0 ) / ( vec3( 1.0 ) - sqrtF0 );\n\t}\n\tvec3 IorToFresnel0( vec3 transmittedIor, float incidentIor ) {\n\t\treturn pow2( ( transmittedIor - vec3( incidentIor ) ) / ( transmittedIor + vec3( incidentIor ) ) );\n\t}\n\tfloat IorToFresnel0( float transmittedIor, float incidentIor ) {\n\t\treturn pow2( ( transmittedIor - incidentIor ) / ( transmittedIor + incidentIor ));\n\t}\n\tvec3 evalSensitivity( float OPD, vec3 shift ) {\n\t\tfloat phase = 2.0 * PI * OPD * 1.0e-9;\n\t\tvec3 val = vec3( 5.4856e-13, 4.4201e-13, 5.2481e-13 );\n\t\tvec3 pos = vec3( 1.6810e+06, 1.7953e+06, 2.2084e+06 );\n\t\tvec3 var = vec3( 4.3278e+09, 9.3046e+09, 6.6121e+09 );\n\t\tvec3 xyz = val * sqrt( 2.0 * PI * var ) * cos( pos * phase + shift ) * exp( - pow2( phase ) * var );\n\t\txyz.x += 9.7470e-14 * sqrt( 2.0 * PI * 4.5282e+09 ) * cos( 2.2399e+06 * phase + shift[ 0 ] ) * exp( - 4.5282e+09 * pow2( phase ) );\n\t\txyz /= 1.0685e-7;\n\t\tvec3 rgb = XYZ_TO_REC709 * xyz;\n\t\treturn rgb;\n\t}\n\tvec3 evalIridescence( float outsideIOR, float eta2, float cosTheta1, float thinFilmThickness, vec3 baseF0 ) {\n\t\tvec3 I;\n\t\tfloat iridescenceIOR = mix( outsideIOR, eta2, smoothstep( 0.0, 0.03, thinFilmThickness ) );\n\t\tfloat sinTheta2Sq = pow2( outsideIOR / iridescenceIOR ) * ( 1.0 - pow2( cosTheta1 ) );\n\t\tfloat cosTheta2Sq = 1.0 - sinTheta2Sq;\n\t\tif ( cosTheta2Sq < 0.0 ) {\n\t\t\t return vec3( 1.0 );\n\t\t}\n\t\tfloat cosTheta2 = sqrt( cosTheta2Sq );\n\t\tfloat R0 = IorToFresnel0( iridescenceIOR, outsideIOR );\n\t\tfloat R12 = F_Schlick( R0, 1.0, cosTheta1 );\n\t\tfloat R21 = R12;\n\t\tfloat T121 = 1.0 - R12;\n\t\tfloat phi12 = 0.0;\n\t\tif ( iridescenceIOR < outsideIOR ) phi12 = PI;\n\t\tfloat phi21 = PI - phi12;\n\t\tvec3 baseIOR = Fresnel0ToIor( clamp( baseF0, 0.0, 0.9999 ) );\t\tvec3 R1 = IorToFresnel0( baseIOR, iridescenceIOR );\n\t\tvec3 R23 = F_Schlick( R1, 1.0, cosTheta2 );\n\t\tvec3 phi23 = vec3( 0.0 );\n\t\tif ( baseIOR[ 0 ] < iridescenceIOR ) phi23[ 0 ] = PI;\n\t\tif ( baseIOR[ 1 ] < iridescenceIOR ) phi23[ 1 ] = PI;\n\t\tif ( baseIOR[ 2 ] < iridescenceIOR ) phi23[ 2 ] = PI;\n\t\tfloat OPD = 2.0 * iridescenceIOR * thinFilmThickness * cosTheta2;\n\t\tvec3 phi = vec3( phi21 ) + phi23;\n\t\tvec3 R123 = clamp( R12 * R23, 1e-5, 0.9999 );\n\t\tvec3 r123 = sqrt( R123 );\n\t\tvec3 Rs = pow2( T121 ) * R23 / ( vec3( 1.0 ) - R123 );\n\t\tvec3 C0 = R12 + Rs;\n\t\tI = C0;\n\t\tvec3 Cm = Rs - T121;\n\t\tfor ( int m = 1; m <= 2; ++ m ) {\n\t\t\tCm *= r123;\n\t\t\tvec3 Sm = 2.0 * evalSensitivity( float( m ) * OPD, float( m ) * phi );\n\t\t\tI += Cm * Sm;\n\t\t}\n\t\treturn max( I, vec3( 0.0 ) );\n\t}\n#endif"; - -var bumpmap_pars_fragment = "#ifdef USE_BUMPMAP\n\tuniform sampler2D bumpMap;\n\tuniform float bumpScale;\n\tvec2 dHdxy_fwd() {\n\t\tvec2 dSTdx = dFdx( vUv );\n\t\tvec2 dSTdy = dFdy( vUv );\n\t\tfloat Hll = bumpScale * texture2D( bumpMap, vUv ).x;\n\t\tfloat dBx = bumpScale * texture2D( bumpMap, vUv + dSTdx ).x - Hll;\n\t\tfloat dBy = bumpScale * texture2D( bumpMap, vUv + dSTdy ).x - Hll;\n\t\treturn vec2( dBx, dBy );\n\t}\n\tvec3 perturbNormalArb( vec3 surf_pos, vec3 surf_norm, vec2 dHdxy, float faceDirection ) {\n\t\tvec3 vSigmaX = dFdx( surf_pos.xyz );\n\t\tvec3 vSigmaY = dFdy( surf_pos.xyz );\n\t\tvec3 vN = surf_norm;\n\t\tvec3 R1 = cross( vSigmaY, vN );\n\t\tvec3 R2 = cross( vN, vSigmaX );\n\t\tfloat fDet = dot( vSigmaX, R1 ) * faceDirection;\n\t\tvec3 vGrad = sign( fDet ) * ( dHdxy.x * R1 + dHdxy.y * R2 );\n\t\treturn normalize( abs( fDet ) * surf_norm - vGrad );\n\t}\n#endif"; - -var 
clipping_planes_fragment = "#if NUM_CLIPPING_PLANES > 0\n\tvec4 plane;\n\t#pragma unroll_loop_start\n\tfor ( int i = 0; i < UNION_CLIPPING_PLANES; i ++ ) {\n\t\tplane = clippingPlanes[ i ];\n\t\tif ( dot( vClipPosition, plane.xyz ) > plane.w ) discard;\n\t}\n\t#pragma unroll_loop_end\n\t#if UNION_CLIPPING_PLANES < NUM_CLIPPING_PLANES\n\t\tbool clipped = true;\n\t\t#pragma unroll_loop_start\n\t\tfor ( int i = UNION_CLIPPING_PLANES; i < NUM_CLIPPING_PLANES; i ++ ) {\n\t\t\tplane = clippingPlanes[ i ];\n\t\t\tclipped = ( dot( vClipPosition, plane.xyz ) > plane.w ) && clipped;\n\t\t}\n\t\t#pragma unroll_loop_end\n\t\tif ( clipped ) discard;\n\t#endif\n#endif"; - -var clipping_planes_pars_fragment = "#if NUM_CLIPPING_PLANES > 0\n\tvarying vec3 vClipPosition;\n\tuniform vec4 clippingPlanes[ NUM_CLIPPING_PLANES ];\n#endif"; - -var clipping_planes_pars_vertex = "#if NUM_CLIPPING_PLANES > 0\n\tvarying vec3 vClipPosition;\n#endif"; - -var clipping_planes_vertex = "#if NUM_CLIPPING_PLANES > 0\n\tvClipPosition = - mvPosition.xyz;\n#endif"; - -var color_fragment = "#if defined( USE_COLOR_ALPHA )\n\tdiffuseColor *= vColor;\n#elif defined( USE_COLOR )\n\tdiffuseColor.rgb *= vColor;\n#endif"; - -var color_pars_fragment = "#if defined( USE_COLOR_ALPHA )\n\tvarying vec4 vColor;\n#elif defined( USE_COLOR )\n\tvarying vec3 vColor;\n#endif"; - -var color_pars_vertex = "#if defined( USE_COLOR_ALPHA )\n\tvarying vec4 vColor;\n#elif defined( USE_COLOR ) || defined( USE_INSTANCING_COLOR )\n\tvarying vec3 vColor;\n#endif"; - -var color_vertex = "#if defined( USE_COLOR_ALPHA )\n\tvColor = vec4( 1.0 );\n#elif defined( USE_COLOR ) || defined( USE_INSTANCING_COLOR )\n\tvColor = vec3( 1.0 );\n#endif\n#ifdef USE_COLOR\n\tvColor *= color;\n#endif\n#ifdef USE_INSTANCING_COLOR\n\tvColor.xyz *= instanceColor.xyz;\n#endif"; - -var common = "#define PI 3.141592653589793\n#define PI2 6.283185307179586\n#define PI_HALF 1.5707963267948966\n#define RECIPROCAL_PI 0.3183098861837907\n#define RECIPROCAL_PI2 0.15915494309189535\n#define EPSILON 1e-6\n#ifndef saturate\n#define saturate( a ) clamp( a, 0.0, 1.0 )\n#endif\n#define whiteComplement( a ) ( 1.0 - saturate( a ) )\nfloat pow2( const in float x ) { return x*x; }\nvec3 pow2( const in vec3 x ) { return x*x; }\nfloat pow3( const in float x ) { return x*x*x; }\nfloat pow4( const in float x ) { float x2 = x*x; return x2*x2; }\nfloat max3( const in vec3 v ) { return max( max( v.x, v.y ), v.z ); }\nfloat average( const in vec3 v ) { return dot( v, vec3( 0.3333333 ) ); }\nhighp float rand( const in vec2 uv ) {\n\tconst highp float a = 12.9898, b = 78.233, c = 43758.5453;\n\thighp float dt = dot( uv.xy, vec2( a,b ) ), sn = mod( dt, PI );\n\treturn fract( sin( sn ) * c );\n}\n#ifdef HIGH_PRECISION\n\tfloat precisionSafeLength( vec3 v ) { return length( v ); }\n#else\n\tfloat precisionSafeLength( vec3 v ) {\n\t\tfloat maxComponent = max3( abs( v ) );\n\t\treturn length( v / maxComponent ) * maxComponent;\n\t}\n#endif\nstruct IncidentLight {\n\tvec3 color;\n\tvec3 direction;\n\tbool visible;\n};\nstruct ReflectedLight {\n\tvec3 directDiffuse;\n\tvec3 directSpecular;\n\tvec3 indirectDiffuse;\n\tvec3 indirectSpecular;\n};\nstruct GeometricContext {\n\tvec3 position;\n\tvec3 normal;\n\tvec3 viewDir;\n#ifdef USE_CLEARCOAT\n\tvec3 clearcoatNormal;\n#endif\n};\nvec3 transformDirection( in vec3 dir, in mat4 matrix ) {\n\treturn normalize( ( matrix * vec4( dir, 0.0 ) ).xyz );\n}\nvec3 inverseTransformDirection( in vec3 dir, in mat4 matrix ) {\n\treturn normalize( ( vec4( dir, 0.0 ) * matrix ).xyz 
);\n}\nmat3 transposeMat3( const in mat3 m ) {\n\tmat3 tmp;\n\ttmp[ 0 ] = vec3( m[ 0 ].x, m[ 1 ].x, m[ 2 ].x );\n\ttmp[ 1 ] = vec3( m[ 0 ].y, m[ 1 ].y, m[ 2 ].y );\n\ttmp[ 2 ] = vec3( m[ 0 ].z, m[ 1 ].z, m[ 2 ].z );\n\treturn tmp;\n}\nfloat luminance( const in vec3 rgb ) {\n\tconst vec3 weights = vec3( 0.2126729, 0.7151522, 0.0721750 );\n\treturn dot( weights, rgb );\n}\nbool isPerspectiveMatrix( mat4 m ) {\n\treturn m[ 2 ][ 3 ] == - 1.0;\n}\nvec2 equirectUv( in vec3 dir ) {\n\tfloat u = atan( dir.z, dir.x ) * RECIPROCAL_PI2 + 0.5;\n\tfloat v = asin( clamp( dir.y, - 1.0, 1.0 ) ) * RECIPROCAL_PI + 0.5;\n\treturn vec2( u, v );\n}"; - -var cube_uv_reflection_fragment = "#ifdef ENVMAP_TYPE_CUBE_UV\n\t#define cubeUV_minMipLevel 4.0\n\t#define cubeUV_minTileSize 16.0\n\tfloat getFace( vec3 direction ) {\n\t\tvec3 absDirection = abs( direction );\n\t\tfloat face = - 1.0;\n\t\tif ( absDirection.x > absDirection.z ) {\n\t\t\tif ( absDirection.x > absDirection.y )\n\t\t\t\tface = direction.x > 0.0 ? 0.0 : 3.0;\n\t\t\telse\n\t\t\t\tface = direction.y > 0.0 ? 1.0 : 4.0;\n\t\t} else {\n\t\t\tif ( absDirection.z > absDirection.y )\n\t\t\t\tface = direction.z > 0.0 ? 2.0 : 5.0;\n\t\t\telse\n\t\t\t\tface = direction.y > 0.0 ? 1.0 : 4.0;\n\t\t}\n\t\treturn face;\n\t}\n\tvec2 getUV( vec3 direction, float face ) {\n\t\tvec2 uv;\n\t\tif ( face == 0.0 ) {\n\t\t\tuv = vec2( direction.z, direction.y ) / abs( direction.x );\n\t\t} else if ( face == 1.0 ) {\n\t\t\tuv = vec2( - direction.x, - direction.z ) / abs( direction.y );\n\t\t} else if ( face == 2.0 ) {\n\t\t\tuv = vec2( - direction.x, direction.y ) / abs( direction.z );\n\t\t} else if ( face == 3.0 ) {\n\t\t\tuv = vec2( - direction.z, direction.y ) / abs( direction.x );\n\t\t} else if ( face == 4.0 ) {\n\t\t\tuv = vec2( - direction.x, direction.z ) / abs( direction.y );\n\t\t} else {\n\t\t\tuv = vec2( direction.x, direction.y ) / abs( direction.z );\n\t\t}\n\t\treturn 0.5 * ( uv + 1.0 );\n\t}\n\tvec3 bilinearCubeUV( sampler2D envMap, vec3 direction, float mipInt ) {\n\t\tfloat face = getFace( direction );\n\t\tfloat filterInt = max( cubeUV_minMipLevel - mipInt, 0.0 );\n\t\tmipInt = max( mipInt, cubeUV_minMipLevel );\n\t\tfloat faceSize = exp2( mipInt );\n\t\thighp vec2 uv = getUV( direction, face ) * ( faceSize - 2.0 ) + 1.0;\n\t\tif ( face > 2.0 ) {\n\t\t\tuv.y += faceSize;\n\t\t\tface -= 3.0;\n\t\t}\n\t\tuv.x += face * faceSize;\n\t\tuv.x += filterInt * 3.0 * cubeUV_minTileSize;\n\t\tuv.y += 4.0 * ( exp2( CUBEUV_MAX_MIP ) - faceSize );\n\t\tuv.x *= CUBEUV_TEXEL_WIDTH;\n\t\tuv.y *= CUBEUV_TEXEL_HEIGHT;\n\t\t#ifdef texture2DGradEXT\n\t\t\treturn texture2DGradEXT( envMap, uv, vec2( 0.0 ), vec2( 0.0 ) ).rgb;\n\t\t#else\n\t\t\treturn texture2D( envMap, uv ).rgb;\n\t\t#endif\n\t}\n\t#define cubeUV_r0 1.0\n\t#define cubeUV_v0 0.339\n\t#define cubeUV_m0 - 2.0\n\t#define cubeUV_r1 0.8\n\t#define cubeUV_v1 0.276\n\t#define cubeUV_m1 - 1.0\n\t#define cubeUV_r4 0.4\n\t#define cubeUV_v4 0.046\n\t#define cubeUV_m4 2.0\n\t#define cubeUV_r5 0.305\n\t#define cubeUV_v5 0.016\n\t#define cubeUV_m5 3.0\n\t#define cubeUV_r6 0.21\n\t#define cubeUV_v6 0.0038\n\t#define cubeUV_m6 4.0\n\tfloat roughnessToMip( float roughness ) {\n\t\tfloat mip = 0.0;\n\t\tif ( roughness >= cubeUV_r1 ) {\n\t\t\tmip = ( cubeUV_r0 - roughness ) * ( cubeUV_m1 - cubeUV_m0 ) / ( cubeUV_r0 - cubeUV_r1 ) + cubeUV_m0;\n\t\t} else if ( roughness >= cubeUV_r4 ) {\n\t\t\tmip = ( cubeUV_r1 - roughness ) * ( cubeUV_m4 - cubeUV_m1 ) / ( cubeUV_r1 - cubeUV_r4 ) + cubeUV_m1;\n\t\t} else if ( roughness >= cubeUV_r5 
) {\n\t\t\tmip = ( cubeUV_r4 - roughness ) * ( cubeUV_m5 - cubeUV_m4 ) / ( cubeUV_r4 - cubeUV_r5 ) + cubeUV_m4;\n\t\t} else if ( roughness >= cubeUV_r6 ) {\n\t\t\tmip = ( cubeUV_r5 - roughness ) * ( cubeUV_m6 - cubeUV_m5 ) / ( cubeUV_r5 - cubeUV_r6 ) + cubeUV_m5;\n\t\t} else {\n\t\t\tmip = - 2.0 * log2( 1.16 * roughness );\t\t}\n\t\treturn mip;\n\t}\n\tvec4 textureCubeUV( sampler2D envMap, vec3 sampleDir, float roughness ) {\n\t\tfloat mip = clamp( roughnessToMip( roughness ), cubeUV_m0, CUBEUV_MAX_MIP );\n\t\tfloat mipF = fract( mip );\n\t\tfloat mipInt = floor( mip );\n\t\tvec3 color0 = bilinearCubeUV( envMap, sampleDir, mipInt );\n\t\tif ( mipF == 0.0 ) {\n\t\t\treturn vec4( color0, 1.0 );\n\t\t} else {\n\t\t\tvec3 color1 = bilinearCubeUV( envMap, sampleDir, mipInt + 1.0 );\n\t\t\treturn vec4( mix( color0, color1, mipF ), 1.0 );\n\t\t}\n\t}\n#endif"; - -var defaultnormal_vertex = "vec3 transformedNormal = objectNormal;\n#ifdef USE_INSTANCING\n\tmat3 m = mat3( instanceMatrix );\n\ttransformedNormal /= vec3( dot( m[ 0 ], m[ 0 ] ), dot( m[ 1 ], m[ 1 ] ), dot( m[ 2 ], m[ 2 ] ) );\n\ttransformedNormal = m * transformedNormal;\n#endif\ntransformedNormal = normalMatrix * transformedNormal;\n#ifdef FLIP_SIDED\n\ttransformedNormal = - transformedNormal;\n#endif\n#ifdef USE_TANGENT\n\tvec3 transformedTangent = ( modelViewMatrix * vec4( objectTangent, 0.0 ) ).xyz;\n\t#ifdef FLIP_SIDED\n\t\ttransformedTangent = - transformedTangent;\n\t#endif\n#endif"; - -var displacementmap_pars_vertex = "#ifdef USE_DISPLACEMENTMAP\n\tuniform sampler2D displacementMap;\n\tuniform float displacementScale;\n\tuniform float displacementBias;\n#endif"; - -var displacementmap_vertex = "#ifdef USE_DISPLACEMENTMAP\n\ttransformed += normalize( objectNormal ) * ( texture2D( displacementMap, vUv ).x * displacementScale + displacementBias );\n#endif"; - -var emissivemap_fragment = "#ifdef USE_EMISSIVEMAP\n\tvec4 emissiveColor = texture2D( emissiveMap, vUv );\n\ttotalEmissiveRadiance *= emissiveColor.rgb;\n#endif"; - -var emissivemap_pars_fragment = "#ifdef USE_EMISSIVEMAP\n\tuniform sampler2D emissiveMap;\n#endif"; - -var encodings_fragment = "gl_FragColor = linearToOutputTexel( gl_FragColor );"; - -var encodings_pars_fragment = "vec4 LinearToLinear( in vec4 value ) {\n\treturn value;\n}\nvec4 LinearTosRGB( in vec4 value ) {\n\treturn vec4( mix( pow( value.rgb, vec3( 0.41666 ) ) * 1.055 - vec3( 0.055 ), value.rgb * 12.92, vec3( lessThanEqual( value.rgb, vec3( 0.0031308 ) ) ) ), value.a );\n}"; - -var envmap_fragment = "#ifdef USE_ENVMAP\n\t#ifdef ENV_WORLDPOS\n\t\tvec3 cameraToFrag;\n\t\tif ( isOrthographic ) {\n\t\t\tcameraToFrag = normalize( vec3( - viewMatrix[ 0 ][ 2 ], - viewMatrix[ 1 ][ 2 ], - viewMatrix[ 2 ][ 2 ] ) );\n\t\t} else {\n\t\t\tcameraToFrag = normalize( vWorldPosition - cameraPosition );\n\t\t}\n\t\tvec3 worldNormal = inverseTransformDirection( normal, viewMatrix );\n\t\t#ifdef ENVMAP_MODE_REFLECTION\n\t\t\tvec3 reflectVec = reflect( cameraToFrag, worldNormal );\n\t\t#else\n\t\t\tvec3 reflectVec = refract( cameraToFrag, worldNormal, refractionRatio );\n\t\t#endif\n\t#else\n\t\tvec3 reflectVec = vReflect;\n\t#endif\n\t#ifdef ENVMAP_TYPE_CUBE\n\t\tvec4 envColor = textureCube( envMap, vec3( flipEnvMap * reflectVec.x, reflectVec.yz ) );\n\t#else\n\t\tvec4 envColor = vec4( 0.0 );\n\t#endif\n\t#ifdef ENVMAP_BLENDING_MULTIPLY\n\t\toutgoingLight = mix( outgoingLight, outgoingLight * envColor.xyz, specularStrength * reflectivity );\n\t#elif defined( ENVMAP_BLENDING_MIX )\n\t\toutgoingLight = mix( outgoingLight, 
envColor.xyz, specularStrength * reflectivity );\n\t#elif defined( ENVMAP_BLENDING_ADD )\n\t\toutgoingLight += envColor.xyz * specularStrength * reflectivity;\n\t#endif\n#endif"; - -var envmap_common_pars_fragment = "#ifdef USE_ENVMAP\n\tuniform float envMapIntensity;\n\tuniform float flipEnvMap;\n\t#ifdef ENVMAP_TYPE_CUBE\n\t\tuniform samplerCube envMap;\n\t#else\n\t\tuniform sampler2D envMap;\n\t#endif\n\t\n#endif"; - -var envmap_pars_fragment = "#ifdef USE_ENVMAP\n\tuniform float reflectivity;\n\t#if defined( USE_BUMPMAP ) || defined( USE_NORMALMAP ) || defined( PHONG ) || defined( LAMBERT )\n\t\t#define ENV_WORLDPOS\n\t#endif\n\t#ifdef ENV_WORLDPOS\n\t\tvarying vec3 vWorldPosition;\n\t\tuniform float refractionRatio;\n\t#else\n\t\tvarying vec3 vReflect;\n\t#endif\n#endif"; - -var envmap_pars_vertex = "#ifdef USE_ENVMAP\n\t#if defined( USE_BUMPMAP ) || defined( USE_NORMALMAP ) || defined( PHONG ) || defined( LAMBERT )\n\t\t#define ENV_WORLDPOS\n\t#endif\n\t#ifdef ENV_WORLDPOS\n\t\t\n\t\tvarying vec3 vWorldPosition;\n\t#else\n\t\tvarying vec3 vReflect;\n\t\tuniform float refractionRatio;\n\t#endif\n#endif"; - -var envmap_vertex = "#ifdef USE_ENVMAP\n\t#ifdef ENV_WORLDPOS\n\t\tvWorldPosition = worldPosition.xyz;\n\t#else\n\t\tvec3 cameraToVertex;\n\t\tif ( isOrthographic ) {\n\t\t\tcameraToVertex = normalize( vec3( - viewMatrix[ 0 ][ 2 ], - viewMatrix[ 1 ][ 2 ], - viewMatrix[ 2 ][ 2 ] ) );\n\t\t} else {\n\t\t\tcameraToVertex = normalize( worldPosition.xyz - cameraPosition );\n\t\t}\n\t\tvec3 worldNormal = inverseTransformDirection( transformedNormal, viewMatrix );\n\t\t#ifdef ENVMAP_MODE_REFLECTION\n\t\t\tvReflect = reflect( cameraToVertex, worldNormal );\n\t\t#else\n\t\t\tvReflect = refract( cameraToVertex, worldNormal, refractionRatio );\n\t\t#endif\n\t#endif\n#endif"; - -var fog_vertex = "#ifdef USE_FOG\n\tvFogDepth = - mvPosition.z;\n#endif"; - -var fog_pars_vertex = "#ifdef USE_FOG\n\tvarying float vFogDepth;\n#endif"; - -var fog_fragment = "#ifdef USE_FOG\n\t#ifdef FOG_EXP2\n\t\tfloat fogFactor = 1.0 - exp( - fogDensity * fogDensity * vFogDepth * vFogDepth );\n\t#else\n\t\tfloat fogFactor = smoothstep( fogNear, fogFar, vFogDepth );\n\t#endif\n\tgl_FragColor.rgb = mix( gl_FragColor.rgb, fogColor, fogFactor );\n#endif"; - -var fog_pars_fragment = "#ifdef USE_FOG\n\tuniform vec3 fogColor;\n\tvarying float vFogDepth;\n\t#ifdef FOG_EXP2\n\t\tuniform float fogDensity;\n\t#else\n\t\tuniform float fogNear;\n\t\tuniform float fogFar;\n\t#endif\n#endif"; - -var gradientmap_pars_fragment = "#ifdef USE_GRADIENTMAP\n\tuniform sampler2D gradientMap;\n#endif\nvec3 getGradientIrradiance( vec3 normal, vec3 lightDirection ) {\n\tfloat dotNL = dot( normal, lightDirection );\n\tvec2 coord = vec2( dotNL * 0.5 + 0.5, 0.0 );\n\t#ifdef USE_GRADIENTMAP\n\t\treturn vec3( texture2D( gradientMap, coord ).r );\n\t#else\n\t\tvec2 fw = fwidth( coord ) * 0.5;\n\t\treturn mix( vec3( 0.7 ), vec3( 1.0 ), smoothstep( 0.7 - fw.x, 0.7 + fw.x, coord.x ) );\n\t#endif\n}"; - -var lightmap_fragment = "#ifdef USE_LIGHTMAP\n\tvec4 lightMapTexel = texture2D( lightMap, vUv2 );\n\tvec3 lightMapIrradiance = lightMapTexel.rgb * lightMapIntensity;\n\treflectedLight.indirectDiffuse += lightMapIrradiance;\n#endif"; - -var lightmap_pars_fragment = "#ifdef USE_LIGHTMAP\n\tuniform sampler2D lightMap;\n\tuniform float lightMapIntensity;\n#endif"; - -var lights_lambert_fragment = "LambertMaterial material;\nmaterial.diffuseColor = diffuseColor.rgb;\nmaterial.specularStrength = specularStrength;"; - -var lights_lambert_pars_fragment = 
"varying vec3 vViewPosition;\nstruct LambertMaterial {\n\tvec3 diffuseColor;\n\tfloat specularStrength;\n};\nvoid RE_Direct_Lambert( const in IncidentLight directLight, const in GeometricContext geometry, const in LambertMaterial material, inout ReflectedLight reflectedLight ) {\n\tfloat dotNL = saturate( dot( geometry.normal, directLight.direction ) );\n\tvec3 irradiance = dotNL * directLight.color;\n\treflectedLight.directDiffuse += irradiance * BRDF_Lambert( material.diffuseColor );\n}\nvoid RE_IndirectDiffuse_Lambert( const in vec3 irradiance, const in GeometricContext geometry, const in LambertMaterial material, inout ReflectedLight reflectedLight ) {\n\treflectedLight.indirectDiffuse += irradiance * BRDF_Lambert( material.diffuseColor );\n}\n#define RE_Direct\t\t\t\tRE_Direct_Lambert\n#define RE_IndirectDiffuse\t\tRE_IndirectDiffuse_Lambert"; - -var lights_pars_begin = "uniform bool receiveShadow;\nuniform vec3 ambientLightColor;\nuniform vec3 lightProbe[ 9 ];\nvec3 shGetIrradianceAt( in vec3 normal, in vec3 shCoefficients[ 9 ] ) {\n\tfloat x = normal.x, y = normal.y, z = normal.z;\n\tvec3 result = shCoefficients[ 0 ] * 0.886227;\n\tresult += shCoefficients[ 1 ] * 2.0 * 0.511664 * y;\n\tresult += shCoefficients[ 2 ] * 2.0 * 0.511664 * z;\n\tresult += shCoefficients[ 3 ] * 2.0 * 0.511664 * x;\n\tresult += shCoefficients[ 4 ] * 2.0 * 0.429043 * x * y;\n\tresult += shCoefficients[ 5 ] * 2.0 * 0.429043 * y * z;\n\tresult += shCoefficients[ 6 ] * ( 0.743125 * z * z - 0.247708 );\n\tresult += shCoefficients[ 7 ] * 2.0 * 0.429043 * x * z;\n\tresult += shCoefficients[ 8 ] * 0.429043 * ( x * x - y * y );\n\treturn result;\n}\nvec3 getLightProbeIrradiance( const in vec3 lightProbe[ 9 ], const in vec3 normal ) {\n\tvec3 worldNormal = inverseTransformDirection( normal, viewMatrix );\n\tvec3 irradiance = shGetIrradianceAt( worldNormal, lightProbe );\n\treturn irradiance;\n}\nvec3 getAmbientLightIrradiance( const in vec3 ambientLightColor ) {\n\tvec3 irradiance = ambientLightColor;\n\treturn irradiance;\n}\nfloat getDistanceAttenuation( const in float lightDistance, const in float cutoffDistance, const in float decayExponent ) {\n\t#if defined ( PHYSICALLY_CORRECT_LIGHTS )\n\t\tfloat distanceFalloff = 1.0 / max( pow( lightDistance, decayExponent ), 0.01 );\n\t\tif ( cutoffDistance > 0.0 ) {\n\t\t\tdistanceFalloff *= pow2( saturate( 1.0 - pow4( lightDistance / cutoffDistance ) ) );\n\t\t}\n\t\treturn distanceFalloff;\n\t#else\n\t\tif ( cutoffDistance > 0.0 && decayExponent > 0.0 ) {\n\t\t\treturn pow( saturate( - lightDistance / cutoffDistance + 1.0 ), decayExponent );\n\t\t}\n\t\treturn 1.0;\n\t#endif\n}\nfloat getSpotAttenuation( const in float coneCosine, const in float penumbraCosine, const in float angleCosine ) {\n\treturn smoothstep( coneCosine, penumbraCosine, angleCosine );\n}\n#if NUM_DIR_LIGHTS > 0\n\tstruct DirectionalLight {\n\t\tvec3 direction;\n\t\tvec3 color;\n\t};\n\tuniform DirectionalLight directionalLights[ NUM_DIR_LIGHTS ];\n\tvoid getDirectionalLightInfo( const in DirectionalLight directionalLight, const in GeometricContext geometry, out IncidentLight light ) {\n\t\tlight.color = directionalLight.color;\n\t\tlight.direction = directionalLight.direction;\n\t\tlight.visible = true;\n\t}\n#endif\n#if NUM_POINT_LIGHTS > 0\n\tstruct PointLight {\n\t\tvec3 position;\n\t\tvec3 color;\n\t\tfloat distance;\n\t\tfloat decay;\n\t};\n\tuniform PointLight pointLights[ NUM_POINT_LIGHTS ];\n\tvoid getPointLightInfo( const in PointLight pointLight, const in GeometricContext geometry, out 
IncidentLight light ) {\n\t\tvec3 lVector = pointLight.position - geometry.position;\n\t\tlight.direction = normalize( lVector );\n\t\tfloat lightDistance = length( lVector );\n\t\tlight.color = pointLight.color;\n\t\tlight.color *= getDistanceAttenuation( lightDistance, pointLight.distance, pointLight.decay );\n\t\tlight.visible = ( light.color != vec3( 0.0 ) );\n\t}\n#endif\n#if NUM_SPOT_LIGHTS > 0\n\tstruct SpotLight {\n\t\tvec3 position;\n\t\tvec3 direction;\n\t\tvec3 color;\n\t\tfloat distance;\n\t\tfloat decay;\n\t\tfloat coneCos;\n\t\tfloat penumbraCos;\n\t};\n\tuniform SpotLight spotLights[ NUM_SPOT_LIGHTS ];\n\tvoid getSpotLightInfo( const in SpotLight spotLight, const in GeometricContext geometry, out IncidentLight light ) {\n\t\tvec3 lVector = spotLight.position - geometry.position;\n\t\tlight.direction = normalize( lVector );\n\t\tfloat angleCos = dot( light.direction, spotLight.direction );\n\t\tfloat spotAttenuation = getSpotAttenuation( spotLight.coneCos, spotLight.penumbraCos, angleCos );\n\t\tif ( spotAttenuation > 0.0 ) {\n\t\t\tfloat lightDistance = length( lVector );\n\t\t\tlight.color = spotLight.color * spotAttenuation;\n\t\t\tlight.color *= getDistanceAttenuation( lightDistance, spotLight.distance, spotLight.decay );\n\t\t\tlight.visible = ( light.color != vec3( 0.0 ) );\n\t\t} else {\n\t\t\tlight.color = vec3( 0.0 );\n\t\t\tlight.visible = false;\n\t\t}\n\t}\n#endif\n#if NUM_RECT_AREA_LIGHTS > 0\n\tstruct RectAreaLight {\n\t\tvec3 color;\n\t\tvec3 position;\n\t\tvec3 halfWidth;\n\t\tvec3 halfHeight;\n\t};\n\tuniform sampler2D ltc_1;\tuniform sampler2D ltc_2;\n\tuniform RectAreaLight rectAreaLights[ NUM_RECT_AREA_LIGHTS ];\n#endif\n#if NUM_HEMI_LIGHTS > 0\n\tstruct HemisphereLight {\n\t\tvec3 direction;\n\t\tvec3 skyColor;\n\t\tvec3 groundColor;\n\t};\n\tuniform HemisphereLight hemisphereLights[ NUM_HEMI_LIGHTS ];\n\tvec3 getHemisphereLightIrradiance( const in HemisphereLight hemiLight, const in vec3 normal ) {\n\t\tfloat dotNL = dot( normal, hemiLight.direction );\n\t\tfloat hemiDiffuseWeight = 0.5 * dotNL + 0.5;\n\t\tvec3 irradiance = mix( hemiLight.groundColor, hemiLight.skyColor, hemiDiffuseWeight );\n\t\treturn irradiance;\n\t}\n#endif"; - -var envmap_physical_pars_fragment = "#if defined( USE_ENVMAP )\n\tvec3 getIBLIrradiance( const in vec3 normal ) {\n\t\t#if defined( ENVMAP_TYPE_CUBE_UV )\n\t\t\tvec3 worldNormal = inverseTransformDirection( normal, viewMatrix );\n\t\t\tvec4 envMapColor = textureCubeUV( envMap, worldNormal, 1.0 );\n\t\t\treturn PI * envMapColor.rgb * envMapIntensity;\n\t\t#else\n\t\t\treturn vec3( 0.0 );\n\t\t#endif\n\t}\n\tvec3 getIBLRadiance( const in vec3 viewDir, const in vec3 normal, const in float roughness ) {\n\t\t#if defined( ENVMAP_TYPE_CUBE_UV )\n\t\t\tvec3 reflectVec = reflect( - viewDir, normal );\n\t\t\treflectVec = normalize( mix( reflectVec, normal, roughness * roughness) );\n\t\t\treflectVec = inverseTransformDirection( reflectVec, viewMatrix );\n\t\t\tvec4 envMapColor = textureCubeUV( envMap, reflectVec, roughness );\n\t\t\treturn envMapColor.rgb * envMapIntensity;\n\t\t#else\n\t\t\treturn vec3( 0.0 );\n\t\t#endif\n\t}\n#endif"; - -var lights_toon_fragment = "ToonMaterial material;\nmaterial.diffuseColor = diffuseColor.rgb;"; - -var lights_toon_pars_fragment = "varying vec3 vViewPosition;\nstruct ToonMaterial {\n\tvec3 diffuseColor;\n};\nvoid RE_Direct_Toon( const in IncidentLight directLight, const in GeometricContext geometry, const in ToonMaterial material, inout ReflectedLight reflectedLight ) {\n\tvec3 irradiance = 
getGradientIrradiance( geometry.normal, directLight.direction ) * directLight.color;\n\treflectedLight.directDiffuse += irradiance * BRDF_Lambert( material.diffuseColor );\n}\nvoid RE_IndirectDiffuse_Toon( const in vec3 irradiance, const in GeometricContext geometry, const in ToonMaterial material, inout ReflectedLight reflectedLight ) {\n\treflectedLight.indirectDiffuse += irradiance * BRDF_Lambert( material.diffuseColor );\n}\n#define RE_Direct\t\t\t\tRE_Direct_Toon\n#define RE_IndirectDiffuse\t\tRE_IndirectDiffuse_Toon"; - -var lights_phong_fragment = "BlinnPhongMaterial material;\nmaterial.diffuseColor = diffuseColor.rgb;\nmaterial.specularColor = specular;\nmaterial.specularShininess = shininess;\nmaterial.specularStrength = specularStrength;"; - -var lights_phong_pars_fragment = "varying vec3 vViewPosition;\nstruct BlinnPhongMaterial {\n\tvec3 diffuseColor;\n\tvec3 specularColor;\n\tfloat specularShininess;\n\tfloat specularStrength;\n};\nvoid RE_Direct_BlinnPhong( const in IncidentLight directLight, const in GeometricContext geometry, const in BlinnPhongMaterial material, inout ReflectedLight reflectedLight ) {\n\tfloat dotNL = saturate( dot( geometry.normal, directLight.direction ) );\n\tvec3 irradiance = dotNL * directLight.color;\n\treflectedLight.directDiffuse += irradiance * BRDF_Lambert( material.diffuseColor );\n\treflectedLight.directSpecular += irradiance * BRDF_BlinnPhong( directLight.direction, geometry.viewDir, geometry.normal, material.specularColor, material.specularShininess ) * material.specularStrength;\n}\nvoid RE_IndirectDiffuse_BlinnPhong( const in vec3 irradiance, const in GeometricContext geometry, const in BlinnPhongMaterial material, inout ReflectedLight reflectedLight ) {\n\treflectedLight.indirectDiffuse += irradiance * BRDF_Lambert( material.diffuseColor );\n}\n#define RE_Direct\t\t\t\tRE_Direct_BlinnPhong\n#define RE_IndirectDiffuse\t\tRE_IndirectDiffuse_BlinnPhong"; - -var lights_physical_fragment = "PhysicalMaterial material;\nmaterial.diffuseColor = diffuseColor.rgb * ( 1.0 - metalnessFactor );\nvec3 dxy = max( abs( dFdx( geometryNormal ) ), abs( dFdy( geometryNormal ) ) );\nfloat geometryRoughness = max( max( dxy.x, dxy.y ), dxy.z );\nmaterial.roughness = max( roughnessFactor, 0.0525 );material.roughness += geometryRoughness;\nmaterial.roughness = min( material.roughness, 1.0 );\n#ifdef IOR\n\tmaterial.ior = ior;\n\t#ifdef SPECULAR\n\t\tfloat specularIntensityFactor = specularIntensity;\n\t\tvec3 specularColorFactor = specularColor;\n\t\t#ifdef USE_SPECULARINTENSITYMAP\n\t\t\tspecularIntensityFactor *= texture2D( specularIntensityMap, vUv ).a;\n\t\t#endif\n\t\t#ifdef USE_SPECULARCOLORMAP\n\t\t\tspecularColorFactor *= texture2D( specularColorMap, vUv ).rgb;\n\t\t#endif\n\t\tmaterial.specularF90 = mix( specularIntensityFactor, 1.0, metalnessFactor );\n\t#else\n\t\tfloat specularIntensityFactor = 1.0;\n\t\tvec3 specularColorFactor = vec3( 1.0 );\n\t\tmaterial.specularF90 = 1.0;\n\t#endif\n\tmaterial.specularColor = mix( min( pow2( ( material.ior - 1.0 ) / ( material.ior + 1.0 ) ) * specularColorFactor, vec3( 1.0 ) ) * specularIntensityFactor, diffuseColor.rgb, metalnessFactor );\n#else\n\tmaterial.specularColor = mix( vec3( 0.04 ), diffuseColor.rgb, metalnessFactor );\n\tmaterial.specularF90 = 1.0;\n#endif\n#ifdef USE_CLEARCOAT\n\tmaterial.clearcoat = clearcoat;\n\tmaterial.clearcoatRoughness = clearcoatRoughness;\n\tmaterial.clearcoatF0 = vec3( 0.04 );\n\tmaterial.clearcoatF90 = 1.0;\n\t#ifdef USE_CLEARCOATMAP\n\t\tmaterial.clearcoat *= texture2D( 
clearcoatMap, vUv ).x;\n\t#endif\n\t#ifdef USE_CLEARCOAT_ROUGHNESSMAP\n\t\tmaterial.clearcoatRoughness *= texture2D( clearcoatRoughnessMap, vUv ).y;\n\t#endif\n\tmaterial.clearcoat = saturate( material.clearcoat );\tmaterial.clearcoatRoughness = max( material.clearcoatRoughness, 0.0525 );\n\tmaterial.clearcoatRoughness += geometryRoughness;\n\tmaterial.clearcoatRoughness = min( material.clearcoatRoughness, 1.0 );\n#endif\n#ifdef USE_IRIDESCENCE\n\tmaterial.iridescence = iridescence;\n\tmaterial.iridescenceIOR = iridescenceIOR;\n\t#ifdef USE_IRIDESCENCEMAP\n\t\tmaterial.iridescence *= texture2D( iridescenceMap, vUv ).r;\n\t#endif\n\t#ifdef USE_IRIDESCENCE_THICKNESSMAP\n\t\tmaterial.iridescenceThickness = (iridescenceThicknessMaximum - iridescenceThicknessMinimum) * texture2D( iridescenceThicknessMap, vUv ).g + iridescenceThicknessMinimum;\n\t#else\n\t\tmaterial.iridescenceThickness = iridescenceThicknessMaximum;\n\t#endif\n#endif\n#ifdef USE_SHEEN\n\tmaterial.sheenColor = sheenColor;\n\t#ifdef USE_SHEENCOLORMAP\n\t\tmaterial.sheenColor *= texture2D( sheenColorMap, vUv ).rgb;\n\t#endif\n\tmaterial.sheenRoughness = clamp( sheenRoughness, 0.07, 1.0 );\n\t#ifdef USE_SHEENROUGHNESSMAP\n\t\tmaterial.sheenRoughness *= texture2D( sheenRoughnessMap, vUv ).a;\n\t#endif\n#endif"; - -var lights_physical_pars_fragment = "struct PhysicalMaterial {\n\tvec3 diffuseColor;\n\tfloat roughness;\n\tvec3 specularColor;\n\tfloat specularF90;\n\t#ifdef USE_CLEARCOAT\n\t\tfloat clearcoat;\n\t\tfloat clearcoatRoughness;\n\t\tvec3 clearcoatF0;\n\t\tfloat clearcoatF90;\n\t#endif\n\t#ifdef USE_IRIDESCENCE\n\t\tfloat iridescence;\n\t\tfloat iridescenceIOR;\n\t\tfloat iridescenceThickness;\n\t\tvec3 iridescenceFresnel;\n\t\tvec3 iridescenceF0;\n\t#endif\n\t#ifdef USE_SHEEN\n\t\tvec3 sheenColor;\n\t\tfloat sheenRoughness;\n\t#endif\n\t#ifdef IOR\n\t\tfloat ior;\n\t#endif\n\t#ifdef USE_TRANSMISSION\n\t\tfloat transmission;\n\t\tfloat transmissionAlpha;\n\t\tfloat thickness;\n\t\tfloat attenuationDistance;\n\t\tvec3 attenuationColor;\n\t#endif\n};\nvec3 clearcoatSpecular = vec3( 0.0 );\nvec3 sheenSpecular = vec3( 0.0 );\nfloat IBLSheenBRDF( const in vec3 normal, const in vec3 viewDir, const in float roughness ) {\n\tfloat dotNV = saturate( dot( normal, viewDir ) );\n\tfloat r2 = roughness * roughness;\n\tfloat a = roughness < 0.25 ? -339.2 * r2 + 161.4 * roughness - 25.9 : -8.48 * r2 + 14.3 * roughness - 9.95;\n\tfloat b = roughness < 0.25 ? 44.0 * r2 - 23.7 * roughness + 3.26 : 1.97 * r2 - 3.27 * roughness + 0.72;\n\tfloat DG = exp( a * dotNV + b ) + ( roughness < 0.25 ? 
0.0 : 0.1 * ( roughness - 0.25 ) );\n\treturn saturate( DG * RECIPROCAL_PI );\n}\nvec2 DFGApprox( const in vec3 normal, const in vec3 viewDir, const in float roughness ) {\n\tfloat dotNV = saturate( dot( normal, viewDir ) );\n\tconst vec4 c0 = vec4( - 1, - 0.0275, - 0.572, 0.022 );\n\tconst vec4 c1 = vec4( 1, 0.0425, 1.04, - 0.04 );\n\tvec4 r = roughness * c0 + c1;\n\tfloat a004 = min( r.x * r.x, exp2( - 9.28 * dotNV ) ) * r.x + r.y;\n\tvec2 fab = vec2( - 1.04, 1.04 ) * a004 + r.zw;\n\treturn fab;\n}\nvec3 EnvironmentBRDF( const in vec3 normal, const in vec3 viewDir, const in vec3 specularColor, const in float specularF90, const in float roughness ) {\n\tvec2 fab = DFGApprox( normal, viewDir, roughness );\n\treturn specularColor * fab.x + specularF90 * fab.y;\n}\n#ifdef USE_IRIDESCENCE\nvoid computeMultiscatteringIridescence( const in vec3 normal, const in vec3 viewDir, const in vec3 specularColor, const in float specularF90, const in float iridescence, const in vec3 iridescenceF0, const in float roughness, inout vec3 singleScatter, inout vec3 multiScatter ) {\n#else\nvoid computeMultiscattering( const in vec3 normal, const in vec3 viewDir, const in vec3 specularColor, const in float specularF90, const in float roughness, inout vec3 singleScatter, inout vec3 multiScatter ) {\n#endif\n\tvec2 fab = DFGApprox( normal, viewDir, roughness );\n\t#ifdef USE_IRIDESCENCE\n\t\tvec3 Fr = mix( specularColor, iridescenceF0, iridescence );\n\t#else\n\t\tvec3 Fr = specularColor;\n\t#endif\n\tvec3 FssEss = Fr * fab.x + specularF90 * fab.y;\n\tfloat Ess = fab.x + fab.y;\n\tfloat Ems = 1.0 - Ess;\n\tvec3 Favg = Fr + ( 1.0 - Fr ) * 0.047619;\tvec3 Fms = FssEss * Favg / ( 1.0 - Ems * Favg );\n\tsingleScatter += FssEss;\n\tmultiScatter += Fms * Ems;\n}\n#if NUM_RECT_AREA_LIGHTS > 0\n\tvoid RE_Direct_RectArea_Physical( const in RectAreaLight rectAreaLight, const in GeometricContext geometry, const in PhysicalMaterial material, inout ReflectedLight reflectedLight ) {\n\t\tvec3 normal = geometry.normal;\n\t\tvec3 viewDir = geometry.viewDir;\n\t\tvec3 position = geometry.position;\n\t\tvec3 lightPos = rectAreaLight.position;\n\t\tvec3 halfWidth = rectAreaLight.halfWidth;\n\t\tvec3 halfHeight = rectAreaLight.halfHeight;\n\t\tvec3 lightColor = rectAreaLight.color;\n\t\tfloat roughness = material.roughness;\n\t\tvec3 rectCoords[ 4 ];\n\t\trectCoords[ 0 ] = lightPos + halfWidth - halfHeight;\t\trectCoords[ 1 ] = lightPos - halfWidth - halfHeight;\n\t\trectCoords[ 2 ] = lightPos - halfWidth + halfHeight;\n\t\trectCoords[ 3 ] = lightPos + halfWidth + halfHeight;\n\t\tvec2 uv = LTC_Uv( normal, viewDir, roughness );\n\t\tvec4 t1 = texture2D( ltc_1, uv );\n\t\tvec4 t2 = texture2D( ltc_2, uv );\n\t\tmat3 mInv = mat3(\n\t\t\tvec3( t1.x, 0, t1.y ),\n\t\t\tvec3( 0, 1, 0 ),\n\t\t\tvec3( t1.z, 0, t1.w )\n\t\t);\n\t\tvec3 fresnel = ( material.specularColor * t2.x + ( vec3( 1.0 ) - material.specularColor ) * t2.y );\n\t\treflectedLight.directSpecular += lightColor * fresnel * LTC_Evaluate( normal, viewDir, position, mInv, rectCoords );\n\t\treflectedLight.directDiffuse += lightColor * material.diffuseColor * LTC_Evaluate( normal, viewDir, position, mat3( 1.0 ), rectCoords );\n\t}\n#endif\nvoid RE_Direct_Physical( const in IncidentLight directLight, const in GeometricContext geometry, const in PhysicalMaterial material, inout ReflectedLight reflectedLight ) {\n\tfloat dotNL = saturate( dot( geometry.normal, directLight.direction ) );\n\tvec3 irradiance = dotNL * directLight.color;\n\t#ifdef USE_CLEARCOAT\n\t\tfloat dotNLcc = 
saturate( dot( geometry.clearcoatNormal, directLight.direction ) );\n\t\tvec3 ccIrradiance = dotNLcc * directLight.color;\n\t\tclearcoatSpecular += ccIrradiance * BRDF_GGX( directLight.direction, geometry.viewDir, geometry.clearcoatNormal, material.clearcoatF0, material.clearcoatF90, material.clearcoatRoughness );\n\t#endif\n\t#ifdef USE_SHEEN\n\t\tsheenSpecular += irradiance * BRDF_Sheen( directLight.direction, geometry.viewDir, geometry.normal, material.sheenColor, material.sheenRoughness );\n\t#endif\n\t#ifdef USE_IRIDESCENCE\n\t\treflectedLight.directSpecular += irradiance * BRDF_GGX_Iridescence( directLight.direction, geometry.viewDir, geometry.normal, material.specularColor, material.specularF90, material.iridescence, material.iridescenceFresnel, material.roughness );\n\t#else\n\t\treflectedLight.directSpecular += irradiance * BRDF_GGX( directLight.direction, geometry.viewDir, geometry.normal, material.specularColor, material.specularF90, material.roughness );\n\t#endif\n\treflectedLight.directDiffuse += irradiance * BRDF_Lambert( material.diffuseColor );\n}\nvoid RE_IndirectDiffuse_Physical( const in vec3 irradiance, const in GeometricContext geometry, const in PhysicalMaterial material, inout ReflectedLight reflectedLight ) {\n\treflectedLight.indirectDiffuse += irradiance * BRDF_Lambert( material.diffuseColor );\n}\nvoid RE_IndirectSpecular_Physical( const in vec3 radiance, const in vec3 irradiance, const in vec3 clearcoatRadiance, const in GeometricContext geometry, const in PhysicalMaterial material, inout ReflectedLight reflectedLight) {\n\t#ifdef USE_CLEARCOAT\n\t\tclearcoatSpecular += clearcoatRadiance * EnvironmentBRDF( geometry.clearcoatNormal, geometry.viewDir, material.clearcoatF0, material.clearcoatF90, material.clearcoatRoughness );\n\t#endif\n\t#ifdef USE_SHEEN\n\t\tsheenSpecular += irradiance * material.sheenColor * IBLSheenBRDF( geometry.normal, geometry.viewDir, material.sheenRoughness );\n\t#endif\n\tvec3 singleScattering = vec3( 0.0 );\n\tvec3 multiScattering = vec3( 0.0 );\n\tvec3 cosineWeightedIrradiance = irradiance * RECIPROCAL_PI;\n\t#ifdef USE_IRIDESCENCE\n\t\tcomputeMultiscatteringIridescence( geometry.normal, geometry.viewDir, material.specularColor, material.specularF90, material.iridescence, material.iridescenceFresnel, material.roughness, singleScattering, multiScattering );\n\t#else\n\t\tcomputeMultiscattering( geometry.normal, geometry.viewDir, material.specularColor, material.specularF90, material.roughness, singleScattering, multiScattering );\n\t#endif\n\tvec3 totalScattering = singleScattering + multiScattering;\n\tvec3 diffuse = material.diffuseColor * ( 1.0 - max( max( totalScattering.r, totalScattering.g ), totalScattering.b ) );\n\treflectedLight.indirectSpecular += radiance * singleScattering;\n\treflectedLight.indirectSpecular += multiScattering * cosineWeightedIrradiance;\n\treflectedLight.indirectDiffuse += diffuse * cosineWeightedIrradiance;\n}\n#define RE_Direct\t\t\t\tRE_Direct_Physical\n#define RE_Direct_RectArea\t\tRE_Direct_RectArea_Physical\n#define RE_IndirectDiffuse\t\tRE_IndirectDiffuse_Physical\n#define RE_IndirectSpecular\t\tRE_IndirectSpecular_Physical\nfloat computeSpecularOcclusion( const in float dotNV, const in float ambientOcclusion, const in float roughness ) {\n\treturn saturate( pow( dotNV + ambientOcclusion, exp2( - 16.0 * roughness - 1.0 ) ) - 1.0 + ambientOcclusion );\n}"; - -var lights_fragment_begin = "\nGeometricContext geometry;\ngeometry.position = - vViewPosition;\ngeometry.normal = normal;\ngeometry.viewDir = 
( isOrthographic ) ? vec3( 0, 0, 1 ) : normalize( vViewPosition );\n#ifdef USE_CLEARCOAT\n\tgeometry.clearcoatNormal = clearcoatNormal;\n#endif\n#ifdef USE_IRIDESCENCE\n\tfloat dotNVi = saturate( dot( normal, geometry.viewDir ) );\n\tif ( material.iridescenceThickness == 0.0 ) {\n\t\tmaterial.iridescence = 0.0;\n\t} else {\n\t\tmaterial.iridescence = saturate( material.iridescence );\n\t}\n\tif ( material.iridescence > 0.0 ) {\n\t\tmaterial.iridescenceFresnel = evalIridescence( 1.0, material.iridescenceIOR, dotNVi, material.iridescenceThickness, material.specularColor );\n\t\tmaterial.iridescenceF0 = Schlick_to_F0( material.iridescenceFresnel, 1.0, dotNVi );\n\t}\n#endif\nIncidentLight directLight;\n#if ( NUM_POINT_LIGHTS > 0 ) && defined( RE_Direct )\n\tPointLight pointLight;\n\t#if defined( USE_SHADOWMAP ) && NUM_POINT_LIGHT_SHADOWS > 0\n\tPointLightShadow pointLightShadow;\n\t#endif\n\t#pragma unroll_loop_start\n\tfor ( int i = 0; i < NUM_POINT_LIGHTS; i ++ ) {\n\t\tpointLight = pointLights[ i ];\n\t\tgetPointLightInfo( pointLight, geometry, directLight );\n\t\t#if defined( USE_SHADOWMAP ) && ( UNROLLED_LOOP_INDEX < NUM_POINT_LIGHT_SHADOWS )\n\t\tpointLightShadow = pointLightShadows[ i ];\n\t\tdirectLight.color *= ( directLight.visible && receiveShadow ) ? getPointShadow( pointShadowMap[ i ], pointLightShadow.shadowMapSize, pointLightShadow.shadowBias, pointLightShadow.shadowRadius, vPointShadowCoord[ i ], pointLightShadow.shadowCameraNear, pointLightShadow.shadowCameraFar ) : 1.0;\n\t\t#endif\n\t\tRE_Direct( directLight, geometry, material, reflectedLight );\n\t}\n\t#pragma unroll_loop_end\n#endif\n#if ( NUM_SPOT_LIGHTS > 0 ) && defined( RE_Direct )\n\tSpotLight spotLight;\n\tvec4 spotColor;\n\tvec3 spotLightCoord;\n\tbool inSpotLightMap;\n\t#if defined( USE_SHADOWMAP ) && NUM_SPOT_LIGHT_SHADOWS > 0\n\tSpotLightShadow spotLightShadow;\n\t#endif\n\t#pragma unroll_loop_start\n\tfor ( int i = 0; i < NUM_SPOT_LIGHTS; i ++ ) {\n\t\tspotLight = spotLights[ i ];\n\t\tgetSpotLightInfo( spotLight, geometry, directLight );\n\t\t#if ( UNROLLED_LOOP_INDEX < NUM_SPOT_LIGHT_SHADOWS_WITH_MAPS )\n\t\t#define SPOT_LIGHT_MAP_INDEX UNROLLED_LOOP_INDEX\n\t\t#elif ( UNROLLED_LOOP_INDEX < NUM_SPOT_LIGHT_SHADOWS )\n\t\t#define SPOT_LIGHT_MAP_INDEX NUM_SPOT_LIGHT_MAPS\n\t\t#else\n\t\t#define SPOT_LIGHT_MAP_INDEX ( UNROLLED_LOOP_INDEX - NUM_SPOT_LIGHT_SHADOWS + NUM_SPOT_LIGHT_SHADOWS_WITH_MAPS )\n\t\t#endif\n\t\t#if ( SPOT_LIGHT_MAP_INDEX < NUM_SPOT_LIGHT_MAPS )\n\t\t\tspotLightCoord = vSpotLightCoord[ i ].xyz / vSpotLightCoord[ i ].w;\n\t\t\tinSpotLightMap = all( lessThan( abs( spotLightCoord * 2. - 1. ), vec3( 1.0 ) ) );\n\t\t\tspotColor = texture2D( spotLightMap[ SPOT_LIGHT_MAP_INDEX ], spotLightCoord.xy );\n\t\t\tdirectLight.color = inSpotLightMap ? directLight.color * spotColor.rgb : directLight.color;\n\t\t#endif\n\t\t#undef SPOT_LIGHT_MAP_INDEX\n\t\t#if defined( USE_SHADOWMAP ) && ( UNROLLED_LOOP_INDEX < NUM_SPOT_LIGHT_SHADOWS )\n\t\tspotLightShadow = spotLightShadows[ i ];\n\t\tdirectLight.color *= ( directLight.visible && receiveShadow ) ? 
getShadow( spotShadowMap[ i ], spotLightShadow.shadowMapSize, spotLightShadow.shadowBias, spotLightShadow.shadowRadius, vSpotLightCoord[ i ] ) : 1.0;\n\t\t#endif\n\t\tRE_Direct( directLight, geometry, material, reflectedLight );\n\t}\n\t#pragma unroll_loop_end\n#endif\n#if ( NUM_DIR_LIGHTS > 0 ) && defined( RE_Direct )\n\tDirectionalLight directionalLight;\n\t#if defined( USE_SHADOWMAP ) && NUM_DIR_LIGHT_SHADOWS > 0\n\tDirectionalLightShadow directionalLightShadow;\n\t#endif\n\t#pragma unroll_loop_start\n\tfor ( int i = 0; i < NUM_DIR_LIGHTS; i ++ ) {\n\t\tdirectionalLight = directionalLights[ i ];\n\t\tgetDirectionalLightInfo( directionalLight, geometry, directLight );\n\t\t#if defined( USE_SHADOWMAP ) && ( UNROLLED_LOOP_INDEX < NUM_DIR_LIGHT_SHADOWS )\n\t\tdirectionalLightShadow = directionalLightShadows[ i ];\n\t\tdirectLight.color *= ( directLight.visible && receiveShadow ) ? getShadow( directionalShadowMap[ i ], directionalLightShadow.shadowMapSize, directionalLightShadow.shadowBias, directionalLightShadow.shadowRadius, vDirectionalShadowCoord[ i ] ) : 1.0;\n\t\t#endif\n\t\tRE_Direct( directLight, geometry, material, reflectedLight );\n\t}\n\t#pragma unroll_loop_end\n#endif\n#if ( NUM_RECT_AREA_LIGHTS > 0 ) && defined( RE_Direct_RectArea )\n\tRectAreaLight rectAreaLight;\n\t#pragma unroll_loop_start\n\tfor ( int i = 0; i < NUM_RECT_AREA_LIGHTS; i ++ ) {\n\t\trectAreaLight = rectAreaLights[ i ];\n\t\tRE_Direct_RectArea( rectAreaLight, geometry, material, reflectedLight );\n\t}\n\t#pragma unroll_loop_end\n#endif\n#if defined( RE_IndirectDiffuse )\n\tvec3 iblIrradiance = vec3( 0.0 );\n\tvec3 irradiance = getAmbientLightIrradiance( ambientLightColor );\n\tirradiance += getLightProbeIrradiance( lightProbe, geometry.normal );\n\t#if ( NUM_HEMI_LIGHTS > 0 )\n\t\t#pragma unroll_loop_start\n\t\tfor ( int i = 0; i < NUM_HEMI_LIGHTS; i ++ ) {\n\t\t\tirradiance += getHemisphereLightIrradiance( hemisphereLights[ i ], geometry.normal );\n\t\t}\n\t\t#pragma unroll_loop_end\n\t#endif\n#endif\n#if defined( RE_IndirectSpecular )\n\tvec3 radiance = vec3( 0.0 );\n\tvec3 clearcoatRadiance = vec3( 0.0 );\n#endif"; - -var lights_fragment_maps = "#if defined( RE_IndirectDiffuse )\n\t#ifdef USE_LIGHTMAP\n\t\tvec4 lightMapTexel = texture2D( lightMap, vUv2 );\n\t\tvec3 lightMapIrradiance = lightMapTexel.rgb * lightMapIntensity;\n\t\tirradiance += lightMapIrradiance;\n\t#endif\n\t#if defined( USE_ENVMAP ) && defined( STANDARD ) && defined( ENVMAP_TYPE_CUBE_UV )\n\t\tiblIrradiance += getIBLIrradiance( geometry.normal );\n\t#endif\n#endif\n#if defined( USE_ENVMAP ) && defined( RE_IndirectSpecular )\n\tradiance += getIBLRadiance( geometry.viewDir, geometry.normal, material.roughness );\n\t#ifdef USE_CLEARCOAT\n\t\tclearcoatRadiance += getIBLRadiance( geometry.viewDir, geometry.clearcoatNormal, material.clearcoatRoughness );\n\t#endif\n#endif"; - -var lights_fragment_end = "#if defined( RE_IndirectDiffuse )\n\tRE_IndirectDiffuse( irradiance, geometry, material, reflectedLight );\n#endif\n#if defined( RE_IndirectSpecular )\n\tRE_IndirectSpecular( radiance, iblIrradiance, clearcoatRadiance, geometry, material, reflectedLight );\n#endif"; - -var logdepthbuf_fragment = "#if defined( USE_LOGDEPTHBUF ) && defined( USE_LOGDEPTHBUF_EXT )\n\tgl_FragDepthEXT = vIsPerspective == 0.0 ? 
gl_FragCoord.z : log2( vFragDepth ) * logDepthBufFC * 0.5;\n#endif"; - -var logdepthbuf_pars_fragment = "#if defined( USE_LOGDEPTHBUF ) && defined( USE_LOGDEPTHBUF_EXT )\n\tuniform float logDepthBufFC;\n\tvarying float vFragDepth;\n\tvarying float vIsPerspective;\n#endif"; - -var logdepthbuf_pars_vertex = "#ifdef USE_LOGDEPTHBUF\n\t#ifdef USE_LOGDEPTHBUF_EXT\n\t\tvarying float vFragDepth;\n\t\tvarying float vIsPerspective;\n\t#else\n\t\tuniform float logDepthBufFC;\n\t#endif\n#endif"; - -var logdepthbuf_vertex = "#ifdef USE_LOGDEPTHBUF\n\t#ifdef USE_LOGDEPTHBUF_EXT\n\t\tvFragDepth = 1.0 + gl_Position.w;\n\t\tvIsPerspective = float( isPerspectiveMatrix( projectionMatrix ) );\n\t#else\n\t\tif ( isPerspectiveMatrix( projectionMatrix ) ) {\n\t\t\tgl_Position.z = log2( max( EPSILON, gl_Position.w + 1.0 ) ) * logDepthBufFC - 1.0;\n\t\t\tgl_Position.z *= gl_Position.w;\n\t\t}\n\t#endif\n#endif"; - -var map_fragment = "#ifdef USE_MAP\n\tvec4 sampledDiffuseColor = texture2D( map, vUv );\n\t#ifdef DECODE_VIDEO_TEXTURE\n\t\tsampledDiffuseColor = vec4( mix( pow( sampledDiffuseColor.rgb * 0.9478672986 + vec3( 0.0521327014 ), vec3( 2.4 ) ), sampledDiffuseColor.rgb * 0.0773993808, vec3( lessThanEqual( sampledDiffuseColor.rgb, vec3( 0.04045 ) ) ) ), sampledDiffuseColor.w );\n\t#endif\n\tdiffuseColor *= sampledDiffuseColor;\n#endif"; - -var map_pars_fragment = "#ifdef USE_MAP\n\tuniform sampler2D map;\n#endif"; - -var map_particle_fragment = "#if defined( USE_MAP ) || defined( USE_ALPHAMAP )\n\tvec2 uv = ( uvTransform * vec3( gl_PointCoord.x, 1.0 - gl_PointCoord.y, 1 ) ).xy;\n#endif\n#ifdef USE_MAP\n\tdiffuseColor *= texture2D( map, uv );\n#endif\n#ifdef USE_ALPHAMAP\n\tdiffuseColor.a *= texture2D( alphaMap, uv ).g;\n#endif"; - -var map_particle_pars_fragment = "#if defined( USE_MAP ) || defined( USE_ALPHAMAP )\n\tuniform mat3 uvTransform;\n#endif\n#ifdef USE_MAP\n\tuniform sampler2D map;\n#endif\n#ifdef USE_ALPHAMAP\n\tuniform sampler2D alphaMap;\n#endif"; - -var metalnessmap_fragment = "float metalnessFactor = metalness;\n#ifdef USE_METALNESSMAP\n\tvec4 texelMetalness = texture2D( metalnessMap, vUv );\n\tmetalnessFactor *= texelMetalness.b;\n#endif"; - -var metalnessmap_pars_fragment = "#ifdef USE_METALNESSMAP\n\tuniform sampler2D metalnessMap;\n#endif"; - -var morphcolor_vertex = "#if defined( USE_MORPHCOLORS ) && defined( MORPHTARGETS_TEXTURE )\n\tvColor *= morphTargetBaseInfluence;\n\tfor ( int i = 0; i < MORPHTARGETS_COUNT; i ++ ) {\n\t\t#if defined( USE_COLOR_ALPHA )\n\t\t\tif ( morphTargetInfluences[ i ] != 0.0 ) vColor += getMorph( gl_VertexID, i, 2 ) * morphTargetInfluences[ i ];\n\t\t#elif defined( USE_COLOR )\n\t\t\tif ( morphTargetInfluences[ i ] != 0.0 ) vColor += getMorph( gl_VertexID, i, 2 ).rgb * morphTargetInfluences[ i ];\n\t\t#endif\n\t}\n#endif"; - -var morphnormal_vertex = "#ifdef USE_MORPHNORMALS\n\tobjectNormal *= morphTargetBaseInfluence;\n\t#ifdef MORPHTARGETS_TEXTURE\n\t\tfor ( int i = 0; i < MORPHTARGETS_COUNT; i ++ ) {\n\t\t\tif ( morphTargetInfluences[ i ] != 0.0 ) objectNormal += getMorph( gl_VertexID, i, 1 ).xyz * morphTargetInfluences[ i ];\n\t\t}\n\t#else\n\t\tobjectNormal += morphNormal0 * morphTargetInfluences[ 0 ];\n\t\tobjectNormal += morphNormal1 * morphTargetInfluences[ 1 ];\n\t\tobjectNormal += morphNormal2 * morphTargetInfluences[ 2 ];\n\t\tobjectNormal += morphNormal3 * morphTargetInfluences[ 3 ];\n\t#endif\n#endif"; - -var morphtarget_pars_vertex = "#ifdef USE_MORPHTARGETS\n\tuniform float morphTargetBaseInfluence;\n\t#ifdef MORPHTARGETS_TEXTURE\n\t\tuniform 
float morphTargetInfluences[ MORPHTARGETS_COUNT ];\n\t\tuniform sampler2DArray morphTargetsTexture;\n\t\tuniform ivec2 morphTargetsTextureSize;\n\t\tvec4 getMorph( const in int vertexIndex, const in int morphTargetIndex, const in int offset ) {\n\t\t\tint texelIndex = vertexIndex * MORPHTARGETS_TEXTURE_STRIDE + offset;\n\t\t\tint y = texelIndex / morphTargetsTextureSize.x;\n\t\t\tint x = texelIndex - y * morphTargetsTextureSize.x;\n\t\t\tivec3 morphUV = ivec3( x, y, morphTargetIndex );\n\t\t\treturn texelFetch( morphTargetsTexture, morphUV, 0 );\n\t\t}\n\t#else\n\t\t#ifndef USE_MORPHNORMALS\n\t\t\tuniform float morphTargetInfluences[ 8 ];\n\t\t#else\n\t\t\tuniform float morphTargetInfluences[ 4 ];\n\t\t#endif\n\t#endif\n#endif"; - -var morphtarget_vertex = "#ifdef USE_MORPHTARGETS\n\ttransformed *= morphTargetBaseInfluence;\n\t#ifdef MORPHTARGETS_TEXTURE\n\t\tfor ( int i = 0; i < MORPHTARGETS_COUNT; i ++ ) {\n\t\t\tif ( morphTargetInfluences[ i ] != 0.0 ) transformed += getMorph( gl_VertexID, i, 0 ).xyz * morphTargetInfluences[ i ];\n\t\t}\n\t#else\n\t\ttransformed += morphTarget0 * morphTargetInfluences[ 0 ];\n\t\ttransformed += morphTarget1 * morphTargetInfluences[ 1 ];\n\t\ttransformed += morphTarget2 * morphTargetInfluences[ 2 ];\n\t\ttransformed += morphTarget3 * morphTargetInfluences[ 3 ];\n\t\t#ifndef USE_MORPHNORMALS\n\t\t\ttransformed += morphTarget4 * morphTargetInfluences[ 4 ];\n\t\t\ttransformed += morphTarget5 * morphTargetInfluences[ 5 ];\n\t\t\ttransformed += morphTarget6 * morphTargetInfluences[ 6 ];\n\t\t\ttransformed += morphTarget7 * morphTargetInfluences[ 7 ];\n\t\t#endif\n\t#endif\n#endif"; - -var normal_fragment_begin = "float faceDirection = gl_FrontFacing ? 1.0 : - 1.0;\n#ifdef FLAT_SHADED\n\tvec3 fdx = dFdx( vViewPosition );\n\tvec3 fdy = dFdy( vViewPosition );\n\tvec3 normal = normalize( cross( fdx, fdy ) );\n#else\n\tvec3 normal = normalize( vNormal );\n\t#ifdef DOUBLE_SIDED\n\t\tnormal = normal * faceDirection;\n\t#endif\n\t#ifdef USE_TANGENT\n\t\tvec3 tangent = normalize( vTangent );\n\t\tvec3 bitangent = normalize( vBitangent );\n\t\t#ifdef DOUBLE_SIDED\n\t\t\ttangent = tangent * faceDirection;\n\t\t\tbitangent = bitangent * faceDirection;\n\t\t#endif\n\t\t#if defined( TANGENTSPACE_NORMALMAP ) || defined( USE_CLEARCOAT_NORMALMAP )\n\t\t\tmat3 vTBN = mat3( tangent, bitangent, normal );\n\t\t#endif\n\t#endif\n#endif\nvec3 geometryNormal = normal;"; - -var normal_fragment_maps = "#ifdef OBJECTSPACE_NORMALMAP\n\tnormal = texture2D( normalMap, vUv ).xyz * 2.0 - 1.0;\n\t#ifdef FLIP_SIDED\n\t\tnormal = - normal;\n\t#endif\n\t#ifdef DOUBLE_SIDED\n\t\tnormal = normal * faceDirection;\n\t#endif\n\tnormal = normalize( normalMatrix * normal );\n#elif defined( TANGENTSPACE_NORMALMAP )\n\tvec3 mapN = texture2D( normalMap, vUv ).xyz * 2.0 - 1.0;\n\tmapN.xy *= normalScale;\n\t#ifdef USE_TANGENT\n\t\tnormal = normalize( vTBN * mapN );\n\t#else\n\t\tnormal = perturbNormal2Arb( - vViewPosition, normal, mapN, faceDirection );\n\t#endif\n#elif defined( USE_BUMPMAP )\n\tnormal = perturbNormalArb( - vViewPosition, normal, dHdxy_fwd(), faceDirection );\n#endif"; - -var normal_pars_fragment = "#ifndef FLAT_SHADED\n\tvarying vec3 vNormal;\n\t#ifdef USE_TANGENT\n\t\tvarying vec3 vTangent;\n\t\tvarying vec3 vBitangent;\n\t#endif\n#endif"; - -var normal_pars_vertex = "#ifndef FLAT_SHADED\n\tvarying vec3 vNormal;\n\t#ifdef USE_TANGENT\n\t\tvarying vec3 vTangent;\n\t\tvarying vec3 vBitangent;\n\t#endif\n#endif"; - -var normal_vertex = "#ifndef FLAT_SHADED\n\tvNormal = normalize( 
transformedNormal );\n\t#ifdef USE_TANGENT\n\t\tvTangent = normalize( transformedTangent );\n\t\tvBitangent = normalize( cross( vNormal, vTangent ) * tangent.w );\n\t#endif\n#endif"; - -var normalmap_pars_fragment = "#ifdef USE_NORMALMAP\n\tuniform sampler2D normalMap;\n\tuniform vec2 normalScale;\n#endif\n#ifdef OBJECTSPACE_NORMALMAP\n\tuniform mat3 normalMatrix;\n#endif\n#if ! defined ( USE_TANGENT ) && ( defined ( TANGENTSPACE_NORMALMAP ) || defined ( USE_CLEARCOAT_NORMALMAP ) )\n\tvec3 perturbNormal2Arb( vec3 eye_pos, vec3 surf_norm, vec3 mapN, float faceDirection ) {\n\t\tvec3 q0 = dFdx( eye_pos.xyz );\n\t\tvec3 q1 = dFdy( eye_pos.xyz );\n\t\tvec2 st0 = dFdx( vUv.st );\n\t\tvec2 st1 = dFdy( vUv.st );\n\t\tvec3 N = surf_norm;\n\t\tvec3 q1perp = cross( q1, N );\n\t\tvec3 q0perp = cross( N, q0 );\n\t\tvec3 T = q1perp * st0.x + q0perp * st1.x;\n\t\tvec3 B = q1perp * st0.y + q0perp * st1.y;\n\t\tfloat det = max( dot( T, T ), dot( B, B ) );\n\t\tfloat scale = ( det == 0.0 ) ? 0.0 : faceDirection * inversesqrt( det );\n\t\treturn normalize( T * ( mapN.x * scale ) + B * ( mapN.y * scale ) + N * mapN.z );\n\t}\n#endif"; - -var clearcoat_normal_fragment_begin = "#ifdef USE_CLEARCOAT\n\tvec3 clearcoatNormal = geometryNormal;\n#endif"; - -var clearcoat_normal_fragment_maps = "#ifdef USE_CLEARCOAT_NORMALMAP\n\tvec3 clearcoatMapN = texture2D( clearcoatNormalMap, vUv ).xyz * 2.0 - 1.0;\n\tclearcoatMapN.xy *= clearcoatNormalScale;\n\t#ifdef USE_TANGENT\n\t\tclearcoatNormal = normalize( vTBN * clearcoatMapN );\n\t#else\n\t\tclearcoatNormal = perturbNormal2Arb( - vViewPosition, clearcoatNormal, clearcoatMapN, faceDirection );\n\t#endif\n#endif"; - -var clearcoat_pars_fragment = "#ifdef USE_CLEARCOATMAP\n\tuniform sampler2D clearcoatMap;\n#endif\n#ifdef USE_CLEARCOAT_ROUGHNESSMAP\n\tuniform sampler2D clearcoatRoughnessMap;\n#endif\n#ifdef USE_CLEARCOAT_NORMALMAP\n\tuniform sampler2D clearcoatNormalMap;\n\tuniform vec2 clearcoatNormalScale;\n#endif"; - -var iridescence_pars_fragment = "#ifdef USE_IRIDESCENCEMAP\n\tuniform sampler2D iridescenceMap;\n#endif\n#ifdef USE_IRIDESCENCE_THICKNESSMAP\n\tuniform sampler2D iridescenceThicknessMap;\n#endif"; - -var output_fragment = "#ifdef OPAQUE\ndiffuseColor.a = 1.0;\n#endif\n#ifdef USE_TRANSMISSION\ndiffuseColor.a *= material.transmissionAlpha + 0.1;\n#endif\ngl_FragColor = vec4( outgoingLight, diffuseColor.a );"; - -var packing = "vec3 packNormalToRGB( const in vec3 normal ) {\n\treturn normalize( normal ) * 0.5 + 0.5;\n}\nvec3 unpackRGBToNormal( const in vec3 rgb ) {\n\treturn 2.0 * rgb.xyz - 1.0;\n}\nconst float PackUpscale = 256. / 255.;const float UnpackDownscale = 255. / 256.;\nconst vec3 PackFactors = vec3( 256. * 256. * 256., 256. * 256., 256. );\nconst vec4 UnpackFactors = UnpackDownscale / vec4( PackFactors, 1. );\nconst float ShiftRight8 = 1. 
/ 256.;\nvec4 packDepthToRGBA( const in float v ) {\n\tvec4 r = vec4( fract( v * PackFactors ), v );\n\tr.yzw -= r.xyz * ShiftRight8;\treturn r * PackUpscale;\n}\nfloat unpackRGBAToDepth( const in vec4 v ) {\n\treturn dot( v, UnpackFactors );\n}\nvec2 packDepthToRG( in highp float v ) {\n\treturn packDepthToRGBA( v ).yx;\n}\nfloat unpackRGToDepth( const in highp vec2 v ) {\n\treturn unpackRGBAToDepth( vec4( v.xy, 0.0, 0.0 ) );\n}\nvec4 pack2HalfToRGBA( vec2 v ) {\n\tvec4 r = vec4( v.x, fract( v.x * 255.0 ), v.y, fract( v.y * 255.0 ) );\n\treturn vec4( r.x - r.y / 255.0, r.y, r.z - r.w / 255.0, r.w );\n}\nvec2 unpackRGBATo2Half( vec4 v ) {\n\treturn vec2( v.x + ( v.y / 255.0 ), v.z + ( v.w / 255.0 ) );\n}\nfloat viewZToOrthographicDepth( const in float viewZ, const in float near, const in float far ) {\n\treturn ( viewZ + near ) / ( near - far );\n}\nfloat orthographicDepthToViewZ( const in float linearClipZ, const in float near, const in float far ) {\n\treturn linearClipZ * ( near - far ) - near;\n}\nfloat viewZToPerspectiveDepth( const in float viewZ, const in float near, const in float far ) {\n\treturn ( ( near + viewZ ) * far ) / ( ( far - near ) * viewZ );\n}\nfloat perspectiveDepthToViewZ( const in float invClipZ, const in float near, const in float far ) {\n\treturn ( near * far ) / ( ( far - near ) * invClipZ - far );\n}"; - -var premultiplied_alpha_fragment = "#ifdef PREMULTIPLIED_ALPHA\n\tgl_FragColor.rgb *= gl_FragColor.a;\n#endif"; - -var project_vertex = "vec4 mvPosition = vec4( transformed, 1.0 );\n#ifdef USE_INSTANCING\n\tmvPosition = instanceMatrix * mvPosition;\n#endif\nmvPosition = modelViewMatrix * mvPosition;\ngl_Position = projectionMatrix * mvPosition;"; - -var dithering_fragment = "#ifdef DITHERING\n\tgl_FragColor.rgb = dithering( gl_FragColor.rgb );\n#endif"; - -var dithering_pars_fragment = "#ifdef DITHERING\n\tvec3 dithering( vec3 color ) {\n\t\tfloat grid_position = rand( gl_FragCoord.xy );\n\t\tvec3 dither_shift_RGB = vec3( 0.25 / 255.0, -0.25 / 255.0, 0.25 / 255.0 );\n\t\tdither_shift_RGB = mix( 2.0 * dither_shift_RGB, -2.0 * dither_shift_RGB, grid_position );\n\t\treturn color + dither_shift_RGB;\n\t}\n#endif"; - -var roughnessmap_fragment = "float roughnessFactor = roughness;\n#ifdef USE_ROUGHNESSMAP\n\tvec4 texelRoughness = texture2D( roughnessMap, vUv );\n\troughnessFactor *= texelRoughness.g;\n#endif"; - -var roughnessmap_pars_fragment = "#ifdef USE_ROUGHNESSMAP\n\tuniform sampler2D roughnessMap;\n#endif"; - -var shadowmap_pars_fragment = "#if NUM_SPOT_LIGHT_COORDS > 0\n varying vec4 vSpotLightCoord[ NUM_SPOT_LIGHT_COORDS ];\n#endif\n#if NUM_SPOT_LIGHT_MAPS > 0\n uniform sampler2D spotLightMap[ NUM_SPOT_LIGHT_MAPS ];\n#endif\n#ifdef USE_SHADOWMAP\n\t#if NUM_DIR_LIGHT_SHADOWS > 0\n\t\tuniform sampler2D directionalShadowMap[ NUM_DIR_LIGHT_SHADOWS ];\n\t\tvarying vec4 vDirectionalShadowCoord[ NUM_DIR_LIGHT_SHADOWS ];\n\t\tstruct DirectionalLightShadow {\n\t\t\tfloat shadowBias;\n\t\t\tfloat shadowNormalBias;\n\t\t\tfloat shadowRadius;\n\t\t\tvec2 shadowMapSize;\n\t\t};\n\t\tuniform DirectionalLightShadow directionalLightShadows[ NUM_DIR_LIGHT_SHADOWS ];\n\t#endif\n\t#if NUM_SPOT_LIGHT_SHADOWS > 0\n\t\tuniform sampler2D spotShadowMap[ NUM_SPOT_LIGHT_SHADOWS ];\n\t\tstruct SpotLightShadow {\n\t\t\tfloat shadowBias;\n\t\t\tfloat shadowNormalBias;\n\t\t\tfloat shadowRadius;\n\t\t\tvec2 shadowMapSize;\n\t\t};\n\t\tuniform SpotLightShadow spotLightShadows[ NUM_SPOT_LIGHT_SHADOWS ];\n\t#endif\n\t#if NUM_POINT_LIGHT_SHADOWS > 0\n\t\tuniform sampler2D pointShadowMap[ 
NUM_POINT_LIGHT_SHADOWS ];\n\t\tvarying vec4 vPointShadowCoord[ NUM_POINT_LIGHT_SHADOWS ];\n\t\tstruct PointLightShadow {\n\t\t\tfloat shadowBias;\n\t\t\tfloat shadowNormalBias;\n\t\t\tfloat shadowRadius;\n\t\t\tvec2 shadowMapSize;\n\t\t\tfloat shadowCameraNear;\n\t\t\tfloat shadowCameraFar;\n\t\t};\n\t\tuniform PointLightShadow pointLightShadows[ NUM_POINT_LIGHT_SHADOWS ];\n\t#endif\n\tfloat texture2DCompare( sampler2D depths, vec2 uv, float compare ) {\n\t\treturn step( compare, unpackRGBAToDepth( texture2D( depths, uv ) ) );\n\t}\n\tvec2 texture2DDistribution( sampler2D shadow, vec2 uv ) {\n\t\treturn unpackRGBATo2Half( texture2D( shadow, uv ) );\n\t}\n\tfloat VSMShadow (sampler2D shadow, vec2 uv, float compare ){\n\t\tfloat occlusion = 1.0;\n\t\tvec2 distribution = texture2DDistribution( shadow, uv );\n\t\tfloat hard_shadow = step( compare , distribution.x );\n\t\tif (hard_shadow != 1.0 ) {\n\t\t\tfloat distance = compare - distribution.x ;\n\t\t\tfloat variance = max( 0.00000, distribution.y * distribution.y );\n\t\t\tfloat softness_probability = variance / (variance + distance * distance );\t\t\tsoftness_probability = clamp( ( softness_probability - 0.3 ) / ( 0.95 - 0.3 ), 0.0, 1.0 );\t\t\tocclusion = clamp( max( hard_shadow, softness_probability ), 0.0, 1.0 );\n\t\t}\n\t\treturn occlusion;\n\t}\n\tfloat getShadow( sampler2D shadowMap, vec2 shadowMapSize, float shadowBias, float shadowRadius, vec4 shadowCoord ) {\n\t\tfloat shadow = 1.0;\n\t\tshadowCoord.xyz /= shadowCoord.w;\n\t\tshadowCoord.z += shadowBias;\n\t\tbool inFrustum = shadowCoord.x >= 0.0 && shadowCoord.x <= 1.0 && shadowCoord.y >= 0.0 && shadowCoord.y <= 1.0;\n\t\tbool frustumTest = inFrustum && shadowCoord.z <= 1.0;\n\t\tif ( frustumTest ) {\n\t\t#if defined( SHADOWMAP_TYPE_PCF )\n\t\t\tvec2 texelSize = vec2( 1.0 ) / shadowMapSize;\n\t\t\tfloat dx0 = - texelSize.x * shadowRadius;\n\t\t\tfloat dy0 = - texelSize.y * shadowRadius;\n\t\t\tfloat dx1 = + texelSize.x * shadowRadius;\n\t\t\tfloat dy1 = + texelSize.y * shadowRadius;\n\t\t\tfloat dx2 = dx0 / 2.0;\n\t\t\tfloat dy2 = dy0 / 2.0;\n\t\t\tfloat dx3 = dx1 / 2.0;\n\t\t\tfloat dy3 = dy1 / 2.0;\n\t\t\tshadow = (\n\t\t\t\ttexture2DCompare( shadowMap, shadowCoord.xy + vec2( dx0, dy0 ), shadowCoord.z ) +\n\t\t\t\ttexture2DCompare( shadowMap, shadowCoord.xy + vec2( 0.0, dy0 ), shadowCoord.z ) +\n\t\t\t\ttexture2DCompare( shadowMap, shadowCoord.xy + vec2( dx1, dy0 ), shadowCoord.z ) +\n\t\t\t\ttexture2DCompare( shadowMap, shadowCoord.xy + vec2( dx2, dy2 ), shadowCoord.z ) +\n\t\t\t\ttexture2DCompare( shadowMap, shadowCoord.xy + vec2( 0.0, dy2 ), shadowCoord.z ) +\n\t\t\t\ttexture2DCompare( shadowMap, shadowCoord.xy + vec2( dx3, dy2 ), shadowCoord.z ) +\n\t\t\t\ttexture2DCompare( shadowMap, shadowCoord.xy + vec2( dx0, 0.0 ), shadowCoord.z ) +\n\t\t\t\ttexture2DCompare( shadowMap, shadowCoord.xy + vec2( dx2, 0.0 ), shadowCoord.z ) +\n\t\t\t\ttexture2DCompare( shadowMap, shadowCoord.xy, shadowCoord.z ) +\n\t\t\t\ttexture2DCompare( shadowMap, shadowCoord.xy + vec2( dx3, 0.0 ), shadowCoord.z ) +\n\t\t\t\ttexture2DCompare( shadowMap, shadowCoord.xy + vec2( dx1, 0.0 ), shadowCoord.z ) +\n\t\t\t\ttexture2DCompare( shadowMap, shadowCoord.xy + vec2( dx2, dy3 ), shadowCoord.z ) +\n\t\t\t\ttexture2DCompare( shadowMap, shadowCoord.xy + vec2( 0.0, dy3 ), shadowCoord.z ) +\n\t\t\t\ttexture2DCompare( shadowMap, shadowCoord.xy + vec2( dx3, dy3 ), shadowCoord.z ) +\n\t\t\t\ttexture2DCompare( shadowMap, shadowCoord.xy + vec2( dx0, dy1 ), shadowCoord.z ) +\n\t\t\t\ttexture2DCompare( shadowMap, 
shadowCoord.xy + vec2( 0.0, dy1 ), shadowCoord.z ) +\n\t\t\t\ttexture2DCompare( shadowMap, shadowCoord.xy + vec2( dx1, dy1 ), shadowCoord.z )\n\t\t\t) * ( 1.0 / 17.0 );\n\t\t#elif defined( SHADOWMAP_TYPE_PCF_SOFT )\n\t\t\tvec2 texelSize = vec2( 1.0 ) / shadowMapSize;\n\t\t\tfloat dx = texelSize.x;\n\t\t\tfloat dy = texelSize.y;\n\t\t\tvec2 uv = shadowCoord.xy;\n\t\t\tvec2 f = fract( uv * shadowMapSize + 0.5 );\n\t\t\tuv -= f * texelSize;\n\t\t\tshadow = (\n\t\t\t\ttexture2DCompare( shadowMap, uv, shadowCoord.z ) +\n\t\t\t\ttexture2DCompare( shadowMap, uv + vec2( dx, 0.0 ), shadowCoord.z ) +\n\t\t\t\ttexture2DCompare( shadowMap, uv + vec2( 0.0, dy ), shadowCoord.z ) +\n\t\t\t\ttexture2DCompare( shadowMap, uv + texelSize, shadowCoord.z ) +\n\t\t\t\tmix( texture2DCompare( shadowMap, uv + vec2( -dx, 0.0 ), shadowCoord.z ),\n\t\t\t\t\t texture2DCompare( shadowMap, uv + vec2( 2.0 * dx, 0.0 ), shadowCoord.z ),\n\t\t\t\t\t f.x ) +\n\t\t\t\tmix( texture2DCompare( shadowMap, uv + vec2( -dx, dy ), shadowCoord.z ),\n\t\t\t\t\t texture2DCompare( shadowMap, uv + vec2( 2.0 * dx, dy ), shadowCoord.z ),\n\t\t\t\t\t f.x ) +\n\t\t\t\tmix( texture2DCompare( shadowMap, uv + vec2( 0.0, -dy ), shadowCoord.z ),\n\t\t\t\t\t texture2DCompare( shadowMap, uv + vec2( 0.0, 2.0 * dy ), shadowCoord.z ),\n\t\t\t\t\t f.y ) +\n\t\t\t\tmix( texture2DCompare( shadowMap, uv + vec2( dx, -dy ), shadowCoord.z ),\n\t\t\t\t\t texture2DCompare( shadowMap, uv + vec2( dx, 2.0 * dy ), shadowCoord.z ),\n\t\t\t\t\t f.y ) +\n\t\t\t\tmix( mix( texture2DCompare( shadowMap, uv + vec2( -dx, -dy ), shadowCoord.z ),\n\t\t\t\t\t\t texture2DCompare( shadowMap, uv + vec2( 2.0 * dx, -dy ), shadowCoord.z ),\n\t\t\t\t\t\t f.x ),\n\t\t\t\t\t mix( texture2DCompare( shadowMap, uv + vec2( -dx, 2.0 * dy ), shadowCoord.z ),\n\t\t\t\t\t\t texture2DCompare( shadowMap, uv + vec2( 2.0 * dx, 2.0 * dy ), shadowCoord.z ),\n\t\t\t\t\t\t f.x ),\n\t\t\t\t\t f.y )\n\t\t\t) * ( 1.0 / 9.0 );\n\t\t#elif defined( SHADOWMAP_TYPE_VSM )\n\t\t\tshadow = VSMShadow( shadowMap, shadowCoord.xy, shadowCoord.z );\n\t\t#else\n\t\t\tshadow = texture2DCompare( shadowMap, shadowCoord.xy, shadowCoord.z );\n\t\t#endif\n\t\t}\n\t\treturn shadow;\n\t}\n\tvec2 cubeToUV( vec3 v, float texelSizeY ) {\n\t\tvec3 absV = abs( v );\n\t\tfloat scaleToCube = 1.0 / max( absV.x, max( absV.y, absV.z ) );\n\t\tabsV *= scaleToCube;\n\t\tv *= scaleToCube * ( 1.0 - 2.0 * texelSizeY );\n\t\tvec2 planar = v.xy;\n\t\tfloat almostATexel = 1.5 * texelSizeY;\n\t\tfloat almostOne = 1.0 - almostATexel;\n\t\tif ( absV.z >= almostOne ) {\n\t\t\tif ( v.z > 0.0 )\n\t\t\t\tplanar.x = 4.0 - v.x;\n\t\t} else if ( absV.x >= almostOne ) {\n\t\t\tfloat signX = sign( v.x );\n\t\t\tplanar.x = v.z * signX + 2.0 * signX;\n\t\t} else if ( absV.y >= almostOne ) {\n\t\t\tfloat signY = sign( v.y );\n\t\t\tplanar.x = v.x + 2.0 * signY + 2.0;\n\t\t\tplanar.y = v.z * signY - 2.0;\n\t\t}\n\t\treturn vec2( 0.125, 0.25 ) * planar + vec2( 0.375, 0.75 );\n\t}\n\tfloat getPointShadow( sampler2D shadowMap, vec2 shadowMapSize, float shadowBias, float shadowRadius, vec4 shadowCoord, float shadowCameraNear, float shadowCameraFar ) {\n\t\tvec2 texelSize = vec2( 1.0 ) / ( shadowMapSize * vec2( 4.0, 2.0 ) );\n\t\tvec3 lightToPosition = shadowCoord.xyz;\n\t\tfloat dp = ( length( lightToPosition ) - shadowCameraNear ) / ( shadowCameraFar - shadowCameraNear );\t\tdp += shadowBias;\n\t\tvec3 bd3D = normalize( lightToPosition );\n\t\t#if defined( SHADOWMAP_TYPE_PCF ) || defined( SHADOWMAP_TYPE_PCF_SOFT ) || defined( SHADOWMAP_TYPE_VSM )\n\t\t\tvec2 
offset = vec2( - 1, 1 ) * shadowRadius * texelSize.y;\n\t\t\treturn (\n\t\t\t\ttexture2DCompare( shadowMap, cubeToUV( bd3D + offset.xyy, texelSize.y ), dp ) +\n\t\t\t\ttexture2DCompare( shadowMap, cubeToUV( bd3D + offset.yyy, texelSize.y ), dp ) +\n\t\t\t\ttexture2DCompare( shadowMap, cubeToUV( bd3D + offset.xyx, texelSize.y ), dp ) +\n\t\t\t\ttexture2DCompare( shadowMap, cubeToUV( bd3D + offset.yyx, texelSize.y ), dp ) +\n\t\t\t\ttexture2DCompare( shadowMap, cubeToUV( bd3D, texelSize.y ), dp ) +\n\t\t\t\ttexture2DCompare( shadowMap, cubeToUV( bd3D + offset.xxy, texelSize.y ), dp ) +\n\t\t\t\ttexture2DCompare( shadowMap, cubeToUV( bd3D + offset.yxy, texelSize.y ), dp ) +\n\t\t\t\ttexture2DCompare( shadowMap, cubeToUV( bd3D + offset.xxx, texelSize.y ), dp ) +\n\t\t\t\ttexture2DCompare( shadowMap, cubeToUV( bd3D + offset.yxx, texelSize.y ), dp )\n\t\t\t) * ( 1.0 / 9.0 );\n\t\t#else\n\t\t\treturn texture2DCompare( shadowMap, cubeToUV( bd3D, texelSize.y ), dp );\n\t\t#endif\n\t}\n#endif"; - -var shadowmap_pars_vertex = "#if NUM_SPOT_LIGHT_COORDS > 0\n uniform mat4 spotLightMatrix[ NUM_SPOT_LIGHT_COORDS ];\n varying vec4 vSpotLightCoord[ NUM_SPOT_LIGHT_COORDS ];\n#endif\n#ifdef USE_SHADOWMAP\n\t#if NUM_DIR_LIGHT_SHADOWS > 0\n\t\tuniform mat4 directionalShadowMatrix[ NUM_DIR_LIGHT_SHADOWS ];\n\t\tvarying vec4 vDirectionalShadowCoord[ NUM_DIR_LIGHT_SHADOWS ];\n\t\tstruct DirectionalLightShadow {\n\t\t\tfloat shadowBias;\n\t\t\tfloat shadowNormalBias;\n\t\t\tfloat shadowRadius;\n\t\t\tvec2 shadowMapSize;\n\t\t};\n\t\tuniform DirectionalLightShadow directionalLightShadows[ NUM_DIR_LIGHT_SHADOWS ];\n\t#endif\n\t#if NUM_SPOT_LIGHT_SHADOWS > 0\n\t\tstruct SpotLightShadow {\n\t\t\tfloat shadowBias;\n\t\t\tfloat shadowNormalBias;\n\t\t\tfloat shadowRadius;\n\t\t\tvec2 shadowMapSize;\n\t\t};\n\t\tuniform SpotLightShadow spotLightShadows[ NUM_SPOT_LIGHT_SHADOWS ];\n\t#endif\n\t#if NUM_POINT_LIGHT_SHADOWS > 0\n\t\tuniform mat4 pointShadowMatrix[ NUM_POINT_LIGHT_SHADOWS ];\n\t\tvarying vec4 vPointShadowCoord[ NUM_POINT_LIGHT_SHADOWS ];\n\t\tstruct PointLightShadow {\n\t\t\tfloat shadowBias;\n\t\t\tfloat shadowNormalBias;\n\t\t\tfloat shadowRadius;\n\t\t\tvec2 shadowMapSize;\n\t\t\tfloat shadowCameraNear;\n\t\t\tfloat shadowCameraFar;\n\t\t};\n\t\tuniform PointLightShadow pointLightShadows[ NUM_POINT_LIGHT_SHADOWS ];\n\t#endif\n#endif"; - -var shadowmap_vertex = "#if ( defined( USE_SHADOWMAP ) && ( NUM_DIR_LIGHT_SHADOWS > 0 || NUM_POINT_LIGHT_SHADOWS > 0 ) ) || ( NUM_SPOT_LIGHT_COORDS > 0 )\n\tvec3 shadowWorldNormal = inverseTransformDirection( transformedNormal, viewMatrix );\n\tvec4 shadowWorldPosition;\n#endif\n#if defined( USE_SHADOWMAP )\n\t#if NUM_DIR_LIGHT_SHADOWS > 0\n\t\t#pragma unroll_loop_start\n\t\tfor ( int i = 0; i < NUM_DIR_LIGHT_SHADOWS; i ++ ) {\n\t\t\tshadowWorldPosition = worldPosition + vec4( shadowWorldNormal * directionalLightShadows[ i ].shadowNormalBias, 0 );\n\t\t\tvDirectionalShadowCoord[ i ] = directionalShadowMatrix[ i ] * shadowWorldPosition;\n\t\t}\n\t\t#pragma unroll_loop_end\n\t#endif\n\t#if NUM_POINT_LIGHT_SHADOWS > 0\n\t\t#pragma unroll_loop_start\n\t\tfor ( int i = 0; i < NUM_POINT_LIGHT_SHADOWS; i ++ ) {\n\t\t\tshadowWorldPosition = worldPosition + vec4( shadowWorldNormal * pointLightShadows[ i ].shadowNormalBias, 0 );\n\t\t\tvPointShadowCoord[ i ] = pointShadowMatrix[ i ] * shadowWorldPosition;\n\t\t}\n\t\t#pragma unroll_loop_end\n\t#endif\n#endif\n#if NUM_SPOT_LIGHT_COORDS > 0\n\t#pragma unroll_loop_start\n\tfor ( int i = 0; i < NUM_SPOT_LIGHT_COORDS; i ++ ) 
{\n\t\tshadowWorldPosition = worldPosition;\n\t\t#if ( defined( USE_SHADOWMAP ) && UNROLLED_LOOP_INDEX < NUM_SPOT_LIGHT_SHADOWS )\n\t\t\tshadowWorldPosition.xyz += shadowWorldNormal * spotLightShadows[ i ].shadowNormalBias;\n\t\t#endif\n\t\tvSpotLightCoord[ i ] = spotLightMatrix[ i ] * shadowWorldPosition;\n\t}\n\t#pragma unroll_loop_end\n#endif"; - -var shadowmask_pars_fragment = "float getShadowMask() {\n\tfloat shadow = 1.0;\n\t#ifdef USE_SHADOWMAP\n\t#if NUM_DIR_LIGHT_SHADOWS > 0\n\tDirectionalLightShadow directionalLight;\n\t#pragma unroll_loop_start\n\tfor ( int i = 0; i < NUM_DIR_LIGHT_SHADOWS; i ++ ) {\n\t\tdirectionalLight = directionalLightShadows[ i ];\n\t\tshadow *= receiveShadow ? getShadow( directionalShadowMap[ i ], directionalLight.shadowMapSize, directionalLight.shadowBias, directionalLight.shadowRadius, vDirectionalShadowCoord[ i ] ) : 1.0;\n\t}\n\t#pragma unroll_loop_end\n\t#endif\n\t#if NUM_SPOT_LIGHT_SHADOWS > 0\n\tSpotLightShadow spotLight;\n\t#pragma unroll_loop_start\n\tfor ( int i = 0; i < NUM_SPOT_LIGHT_SHADOWS; i ++ ) {\n\t\tspotLight = spotLightShadows[ i ];\n\t\tshadow *= receiveShadow ? getShadow( spotShadowMap[ i ], spotLight.shadowMapSize, spotLight.shadowBias, spotLight.shadowRadius, vSpotLightCoord[ i ] ) : 1.0;\n\t}\n\t#pragma unroll_loop_end\n\t#endif\n\t#if NUM_POINT_LIGHT_SHADOWS > 0\n\tPointLightShadow pointLight;\n\t#pragma unroll_loop_start\n\tfor ( int i = 0; i < NUM_POINT_LIGHT_SHADOWS; i ++ ) {\n\t\tpointLight = pointLightShadows[ i ];\n\t\tshadow *= receiveShadow ? getPointShadow( pointShadowMap[ i ], pointLight.shadowMapSize, pointLight.shadowBias, pointLight.shadowRadius, vPointShadowCoord[ i ], pointLight.shadowCameraNear, pointLight.shadowCameraFar ) : 1.0;\n\t}\n\t#pragma unroll_loop_end\n\t#endif\n\t#endif\n\treturn shadow;\n}"; - -var skinbase_vertex = "#ifdef USE_SKINNING\n\tmat4 boneMatX = getBoneMatrix( skinIndex.x );\n\tmat4 boneMatY = getBoneMatrix( skinIndex.y );\n\tmat4 boneMatZ = getBoneMatrix( skinIndex.z );\n\tmat4 boneMatW = getBoneMatrix( skinIndex.w );\n#endif"; - -var skinning_pars_vertex = "#ifdef USE_SKINNING\n\tuniform mat4 bindMatrix;\n\tuniform mat4 bindMatrixInverse;\n\tuniform highp sampler2D boneTexture;\n\tuniform int boneTextureSize;\n\tmat4 getBoneMatrix( const in float i ) {\n\t\tfloat j = i * 4.0;\n\t\tfloat x = mod( j, float( boneTextureSize ) );\n\t\tfloat y = floor( j / float( boneTextureSize ) );\n\t\tfloat dx = 1.0 / float( boneTextureSize );\n\t\tfloat dy = 1.0 / float( boneTextureSize );\n\t\ty = dy * ( y + 0.5 );\n\t\tvec4 v1 = texture2D( boneTexture, vec2( dx * ( x + 0.5 ), y ) );\n\t\tvec4 v2 = texture2D( boneTexture, vec2( dx * ( x + 1.5 ), y ) );\n\t\tvec4 v3 = texture2D( boneTexture, vec2( dx * ( x + 2.5 ), y ) );\n\t\tvec4 v4 = texture2D( boneTexture, vec2( dx * ( x + 3.5 ), y ) );\n\t\tmat4 bone = mat4( v1, v2, v3, v4 );\n\t\treturn bone;\n\t}\n#endif"; - -var skinning_vertex = "#ifdef USE_SKINNING\n\tvec4 skinVertex = bindMatrix * vec4( transformed, 1.0 );\n\tvec4 skinned = vec4( 0.0 );\n\tskinned += boneMatX * skinVertex * skinWeight.x;\n\tskinned += boneMatY * skinVertex * skinWeight.y;\n\tskinned += boneMatZ * skinVertex * skinWeight.z;\n\tskinned += boneMatW * skinVertex * skinWeight.w;\n\ttransformed = ( bindMatrixInverse * skinned ).xyz;\n#endif"; - -var skinnormal_vertex = "#ifdef USE_SKINNING\n\tmat4 skinMatrix = mat4( 0.0 );\n\tskinMatrix += skinWeight.x * boneMatX;\n\tskinMatrix += skinWeight.y * boneMatY;\n\tskinMatrix += skinWeight.z * boneMatZ;\n\tskinMatrix += skinWeight.w * 
boneMatW;\n\tskinMatrix = bindMatrixInverse * skinMatrix * bindMatrix;\n\tobjectNormal = vec4( skinMatrix * vec4( objectNormal, 0.0 ) ).xyz;\n\t#ifdef USE_TANGENT\n\t\tobjectTangent = vec4( skinMatrix * vec4( objectTangent, 0.0 ) ).xyz;\n\t#endif\n#endif"; - -var specularmap_fragment = "float specularStrength;\n#ifdef USE_SPECULARMAP\n\tvec4 texelSpecular = texture2D( specularMap, vUv );\n\tspecularStrength = texelSpecular.r;\n#else\n\tspecularStrength = 1.0;\n#endif"; - -var specularmap_pars_fragment = "#ifdef USE_SPECULARMAP\n\tuniform sampler2D specularMap;\n#endif"; - -var tonemapping_fragment = "#if defined( TONE_MAPPING )\n\tgl_FragColor.rgb = toneMapping( gl_FragColor.rgb );\n#endif"; - -var tonemapping_pars_fragment = "#ifndef saturate\n#define saturate( a ) clamp( a, 0.0, 1.0 )\n#endif\nuniform float toneMappingExposure;\nvec3 LinearToneMapping( vec3 color ) {\n\treturn toneMappingExposure * color;\n}\nvec3 ReinhardToneMapping( vec3 color ) {\n\tcolor *= toneMappingExposure;\n\treturn saturate( color / ( vec3( 1.0 ) + color ) );\n}\nvec3 OptimizedCineonToneMapping( vec3 color ) {\n\tcolor *= toneMappingExposure;\n\tcolor = max( vec3( 0.0 ), color - 0.004 );\n\treturn pow( ( color * ( 6.2 * color + 0.5 ) ) / ( color * ( 6.2 * color + 1.7 ) + 0.06 ), vec3( 2.2 ) );\n}\nvec3 RRTAndODTFit( vec3 v ) {\n\tvec3 a = v * ( v + 0.0245786 ) - 0.000090537;\n\tvec3 b = v * ( 0.983729 * v + 0.4329510 ) + 0.238081;\n\treturn a / b;\n}\nvec3 ACESFilmicToneMapping( vec3 color ) {\n\tconst mat3 ACESInputMat = mat3(\n\t\tvec3( 0.59719, 0.07600, 0.02840 ),\t\tvec3( 0.35458, 0.90834, 0.13383 ),\n\t\tvec3( 0.04823, 0.01566, 0.83777 )\n\t);\n\tconst mat3 ACESOutputMat = mat3(\n\t\tvec3( 1.60475, -0.10208, -0.00327 ),\t\tvec3( -0.53108, 1.10813, -0.07276 ),\n\t\tvec3( -0.07367, -0.00605, 1.07602 )\n\t);\n\tcolor *= toneMappingExposure / 0.6;\n\tcolor = ACESInputMat * color;\n\tcolor = RRTAndODTFit( color );\n\tcolor = ACESOutputMat * color;\n\treturn saturate( color );\n}\nvec3 CustomToneMapping( vec3 color ) { return color; }"; - -var transmission_fragment = "#ifdef USE_TRANSMISSION\n\tmaterial.transmission = transmission;\n\tmaterial.transmissionAlpha = 1.0;\n\tmaterial.thickness = thickness;\n\tmaterial.attenuationDistance = attenuationDistance;\n\tmaterial.attenuationColor = attenuationColor;\n\t#ifdef USE_TRANSMISSIONMAP\n\t\tmaterial.transmission *= texture2D( transmissionMap, vUv ).r;\n\t#endif\n\t#ifdef USE_THICKNESSMAP\n\t\tmaterial.thickness *= texture2D( thicknessMap, vUv ).g;\n\t#endif\n\tvec3 pos = vWorldPosition;\n\tvec3 v = normalize( cameraPosition - pos );\n\tvec3 n = inverseTransformDirection( normal, viewMatrix );\n\tvec4 transmission = getIBLVolumeRefraction(\n\t\tn, v, material.roughness, material.diffuseColor, material.specularColor, material.specularF90,\n\t\tpos, modelMatrix, viewMatrix, projectionMatrix, material.ior, material.thickness,\n\t\tmaterial.attenuationColor, material.attenuationDistance );\n\tmaterial.transmissionAlpha = mix( material.transmissionAlpha, transmission.a, material.transmission );\n\ttotalDiffuse = mix( totalDiffuse, transmission.rgb, material.transmission );\n#endif"; - -var transmission_pars_fragment = "#ifdef USE_TRANSMISSION\n\tuniform float transmission;\n\tuniform float thickness;\n\tuniform float attenuationDistance;\n\tuniform vec3 attenuationColor;\n\t#ifdef USE_TRANSMISSIONMAP\n\t\tuniform sampler2D transmissionMap;\n\t#endif\n\t#ifdef USE_THICKNESSMAP\n\t\tuniform sampler2D thicknessMap;\n\t#endif\n\tuniform vec2 
transmissionSamplerSize;\n\tuniform sampler2D transmissionSamplerMap;\n\tuniform mat4 modelMatrix;\n\tuniform mat4 projectionMatrix;\n\tvarying vec3 vWorldPosition;\n\tvec3 getVolumeTransmissionRay( const in vec3 n, const in vec3 v, const in float thickness, const in float ior, const in mat4 modelMatrix ) {\n\t\tvec3 refractionVector = refract( - v, normalize( n ), 1.0 / ior );\n\t\tvec3 modelScale;\n\t\tmodelScale.x = length( vec3( modelMatrix[ 0 ].xyz ) );\n\t\tmodelScale.y = length( vec3( modelMatrix[ 1 ].xyz ) );\n\t\tmodelScale.z = length( vec3( modelMatrix[ 2 ].xyz ) );\n\t\treturn normalize( refractionVector ) * thickness * modelScale;\n\t}\n\tfloat applyIorToRoughness( const in float roughness, const in float ior ) {\n\t\treturn roughness * clamp( ior * 2.0 - 2.0, 0.0, 1.0 );\n\t}\n\tvec4 getTransmissionSample( const in vec2 fragCoord, const in float roughness, const in float ior ) {\n\t\tfloat framebufferLod = log2( transmissionSamplerSize.x ) * applyIorToRoughness( roughness, ior );\n\t\t#ifdef texture2DLodEXT\n\t\t\treturn texture2DLodEXT( transmissionSamplerMap, fragCoord.xy, framebufferLod );\n\t\t#else\n\t\t\treturn texture2D( transmissionSamplerMap, fragCoord.xy, framebufferLod );\n\t\t#endif\n\t}\n\tvec3 applyVolumeAttenuation( const in vec3 radiance, const in float transmissionDistance, const in vec3 attenuationColor, const in float attenuationDistance ) {\n\t\tif ( isinf( attenuationDistance ) ) {\n\t\t\treturn radiance;\n\t\t} else {\n\t\t\tvec3 attenuationCoefficient = -log( attenuationColor ) / attenuationDistance;\n\t\t\tvec3 transmittance = exp( - attenuationCoefficient * transmissionDistance );\t\t\treturn transmittance * radiance;\n\t\t}\n\t}\n\tvec4 getIBLVolumeRefraction( const in vec3 n, const in vec3 v, const in float roughness, const in vec3 diffuseColor,\n\t\tconst in vec3 specularColor, const in float specularF90, const in vec3 position, const in mat4 modelMatrix,\n\t\tconst in mat4 viewMatrix, const in mat4 projMatrix, const in float ior, const in float thickness,\n\t\tconst in vec3 attenuationColor, const in float attenuationDistance ) {\n\t\tvec3 transmissionRay = getVolumeTransmissionRay( n, v, thickness, ior, modelMatrix );\n\t\tvec3 refractedRayExit = position + transmissionRay;\n\t\tvec4 ndcPos = projMatrix * viewMatrix * vec4( refractedRayExit, 1.0 );\n\t\tvec2 refractionCoords = ndcPos.xy / ndcPos.w;\n\t\trefractionCoords += 1.0;\n\t\trefractionCoords /= 2.0;\n\t\tvec4 transmittedLight = getTransmissionSample( refractionCoords, roughness, ior );\n\t\tvec3 attenuatedColor = applyVolumeAttenuation( transmittedLight.rgb, length( transmissionRay ), attenuationColor, attenuationDistance );\n\t\tvec3 F = EnvironmentBRDF( n, v, specularColor, specularF90, roughness );\n\t\treturn vec4( ( 1.0 - F ) * attenuatedColor * diffuseColor, transmittedLight.a );\n\t}\n#endif"; - -var uv_pars_fragment = "#if ( defined( USE_UV ) && ! 
defined( UVS_VERTEX_ONLY ) )\n\tvarying vec2 vUv;\n#endif"; - -var uv_pars_vertex = "#ifdef USE_UV\n\t#ifdef UVS_VERTEX_ONLY\n\t\tvec2 vUv;\n\t#else\n\t\tvarying vec2 vUv;\n\t#endif\n\tuniform mat3 uvTransform;\n#endif"; - -var uv_vertex = "#ifdef USE_UV\n\tvUv = ( uvTransform * vec3( uv, 1 ) ).xy;\n#endif"; - -var uv2_pars_fragment = "#if defined( USE_LIGHTMAP ) || defined( USE_AOMAP )\n\tvarying vec2 vUv2;\n#endif"; - -var uv2_pars_vertex = "#if defined( USE_LIGHTMAP ) || defined( USE_AOMAP )\n\tattribute vec2 uv2;\n\tvarying vec2 vUv2;\n\tuniform mat3 uv2Transform;\n#endif"; - -var uv2_vertex = "#if defined( USE_LIGHTMAP ) || defined( USE_AOMAP )\n\tvUv2 = ( uv2Transform * vec3( uv2, 1 ) ).xy;\n#endif"; - -var worldpos_vertex = "#if defined( USE_ENVMAP ) || defined( DISTANCE ) || defined ( USE_SHADOWMAP ) || defined ( USE_TRANSMISSION ) || NUM_SPOT_LIGHT_COORDS > 0\n\tvec4 worldPosition = vec4( transformed, 1.0 );\n\t#ifdef USE_INSTANCING\n\t\tworldPosition = instanceMatrix * worldPosition;\n\t#endif\n\tworldPosition = modelMatrix * worldPosition;\n#endif"; - -const vertex$h = "varying vec2 vUv;\nuniform mat3 uvTransform;\nvoid main() {\n\tvUv = ( uvTransform * vec3( uv, 1 ) ).xy;\n\tgl_Position = vec4( position.xy, 1.0, 1.0 );\n}"; - -const fragment$h = "uniform sampler2D t2D;\nuniform float backgroundIntensity;\nvarying vec2 vUv;\nvoid main() {\n\tvec4 texColor = texture2D( t2D, vUv );\n\t#ifdef DECODE_VIDEO_TEXTURE\n\t\ttexColor = vec4( mix( pow( texColor.rgb * 0.9478672986 + vec3( 0.0521327014 ), vec3( 2.4 ) ), texColor.rgb * 0.0773993808, vec3( lessThanEqual( texColor.rgb, vec3( 0.04045 ) ) ) ), texColor.w );\n\t#endif\n\ttexColor.rgb *= backgroundIntensity;\n\tgl_FragColor = texColor;\n\t#include \n\t#include \n}"; - -const vertex$g = "varying vec3 vWorldDirection;\n#include \nvoid main() {\n\tvWorldDirection = transformDirection( position, modelMatrix );\n\t#include \n\t#include \n\tgl_Position.z = gl_Position.w;\n}"; - -const fragment$g = "#ifdef ENVMAP_TYPE_CUBE\n\tuniform samplerCube envMap;\n#elif defined( ENVMAP_TYPE_CUBE_UV )\n\tuniform sampler2D envMap;\n#endif\nuniform float flipEnvMap;\nuniform float backgroundBlurriness;\nuniform float backgroundIntensity;\nvarying vec3 vWorldDirection;\n#include \nvoid main() {\n\t#ifdef ENVMAP_TYPE_CUBE\n\t\tvec4 texColor = textureCube( envMap, vec3( flipEnvMap * vWorldDirection.x, vWorldDirection.yz ) );\n\t#elif defined( ENVMAP_TYPE_CUBE_UV )\n\t\tvec4 texColor = textureCubeUV( envMap, vWorldDirection, backgroundBlurriness );\n\t#else\n\t\tvec4 texColor = vec4( 0.0, 0.0, 0.0, 1.0 );\n\t#endif\n\ttexColor.rgb *= backgroundIntensity;\n\tgl_FragColor = texColor;\n\t#include \n\t#include \n}"; - -const vertex$f = "varying vec3 vWorldDirection;\n#include \nvoid main() {\n\tvWorldDirection = transformDirection( position, modelMatrix );\n\t#include \n\t#include \n\tgl_Position.z = gl_Position.w;\n}"; - -const fragment$f = "uniform samplerCube tCube;\nuniform float tFlip;\nuniform float opacity;\nvarying vec3 vWorldDirection;\nvoid main() {\n\tvec4 texColor = textureCube( tCube, vec3( tFlip * vWorldDirection.x, vWorldDirection.yz ) );\n\tgl_FragColor = texColor;\n\tgl_FragColor.a *= opacity;\n\t#include \n\t#include \n}"; - -const vertex$e = "#include \n#include \n#include \n#include \n#include \n#include \n#include \nvarying vec2 vHighPrecisionZW;\nvoid main() {\n\t#include \n\t#include \n\t#ifdef USE_DISPLACEMENTMAP\n\t\t#include \n\t\t#include \n\t\t#include \n\t#endif\n\t#include \n\t#include \n\t#include \n\t#include \n\t#include 
\n\t#include \n\t#include \n\tvHighPrecisionZW = gl_Position.zw;\n}"; - -const fragment$e = "#if DEPTH_PACKING == 3200\n\tuniform float opacity;\n#endif\n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \nvarying vec2 vHighPrecisionZW;\nvoid main() {\n\t#include \n\tvec4 diffuseColor = vec4( 1.0 );\n\t#if DEPTH_PACKING == 3200\n\t\tdiffuseColor.a = opacity;\n\t#endif\n\t#include \n\t#include \n\t#include \n\t#include \n\tfloat fragCoordZ = 0.5 * vHighPrecisionZW[0] / vHighPrecisionZW[1] + 0.5;\n\t#if DEPTH_PACKING == 3200\n\t\tgl_FragColor = vec4( vec3( 1.0 - fragCoordZ ), opacity );\n\t#elif DEPTH_PACKING == 3201\n\t\tgl_FragColor = packDepthToRGBA( fragCoordZ );\n\t#endif\n}"; - -const vertex$d = "#define DISTANCE\nvarying vec3 vWorldPosition;\n#include \n#include \n#include \n#include \n#include \n#include \nvoid main() {\n\t#include \n\t#include \n\t#ifdef USE_DISPLACEMENTMAP\n\t\t#include \n\t\t#include \n\t\t#include \n\t#endif\n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\tvWorldPosition = worldPosition.xyz;\n}"; - -const fragment$d = "#define DISTANCE\nuniform vec3 referencePosition;\nuniform float nearDistance;\nuniform float farDistance;\nvarying vec3 vWorldPosition;\n#include \n#include \n#include \n#include \n#include \n#include \n#include \nvoid main () {\n\t#include \n\tvec4 diffuseColor = vec4( 1.0 );\n\t#include \n\t#include \n\t#include \n\tfloat dist = length( vWorldPosition - referencePosition );\n\tdist = ( dist - nearDistance ) / ( farDistance - nearDistance );\n\tdist = saturate( dist );\n\tgl_FragColor = packDepthToRGBA( dist );\n}"; - -const vertex$c = "varying vec3 vWorldDirection;\n#include \nvoid main() {\n\tvWorldDirection = transformDirection( position, modelMatrix );\n\t#include \n\t#include \n}"; - -const fragment$c = "uniform sampler2D tEquirect;\nvarying vec3 vWorldDirection;\n#include \nvoid main() {\n\tvec3 direction = normalize( vWorldDirection );\n\tvec2 sampleUV = equirectUv( direction );\n\tgl_FragColor = texture2D( tEquirect, sampleUV );\n\t#include \n\t#include \n}"; - -const vertex$b = "uniform float scale;\nattribute float lineDistance;\nvarying float vLineDistance;\n#include \n#include \n#include \n#include \n#include \n#include \nvoid main() {\n\tvLineDistance = scale * lineDistance;\n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n}"; - -const fragment$b = "uniform vec3 diffuse;\nuniform float opacity;\nuniform float dashSize;\nuniform float totalSize;\nvarying float vLineDistance;\n#include \n#include \n#include \n#include \n#include \nvoid main() {\n\t#include \n\tif ( mod( vLineDistance, totalSize ) > dashSize ) {\n\t\tdiscard;\n\t}\n\tvec3 outgoingLight = vec3( 0.0 );\n\tvec4 diffuseColor = vec4( diffuse, opacity );\n\t#include \n\t#include \n\toutgoingLight = diffuseColor.rgb;\n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n}"; - -const vertex$a = "#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \nvoid main() {\n\t#include \n\t#include \n\t#include \n\t#include \n\t#if defined ( USE_ENVMAP ) || defined ( USE_SKINNING )\n\t\t#include \n\t\t#include \n\t\t#include \n\t\t#include \n\t\t#include \n\t#endif\n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n}"; - -const fragment$a = "uniform vec3 diffuse;\nuniform float opacity;\n#ifndef FLAT_SHADED\n\tvarying vec3 
vNormal;\n#endif\n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \nvoid main() {\n\t#include \n\tvec4 diffuseColor = vec4( diffuse, opacity );\n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\tReflectedLight reflectedLight = ReflectedLight( vec3( 0.0 ), vec3( 0.0 ), vec3( 0.0 ), vec3( 0.0 ) );\n\t#ifdef USE_LIGHTMAP\n\t\tvec4 lightMapTexel = texture2D( lightMap, vUv2 );\n\t\treflectedLight.indirectDiffuse += lightMapTexel.rgb * lightMapIntensity * RECIPROCAL_PI;\n\t#else\n\t\treflectedLight.indirectDiffuse += vec3( 1.0 );\n\t#endif\n\t#include \n\treflectedLight.indirectDiffuse *= diffuseColor.rgb;\n\tvec3 outgoingLight = reflectedLight.indirectDiffuse;\n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n}"; - -const vertex$9 = "#define LAMBERT\nvarying vec3 vViewPosition;\n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \nvoid main() {\n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\tvViewPosition = - mvPosition.xyz;\n\t#include \n\t#include \n\t#include \n\t#include \n}"; - -const fragment$9 = "#define LAMBERT\nuniform vec3 diffuse;\nuniform vec3 emissive;\nuniform float opacity;\n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \nvoid main() {\n\t#include \n\tvec4 diffuseColor = vec4( diffuse, opacity );\n\tReflectedLight reflectedLight = ReflectedLight( vec3( 0.0 ), vec3( 0.0 ), vec3( 0.0 ), vec3( 0.0 ) );\n\tvec3 totalEmissiveRadiance = emissive;\n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\tvec3 outgoingLight = reflectedLight.directDiffuse + reflectedLight.indirectDiffuse + totalEmissiveRadiance;\n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n}"; - -const vertex$8 = "#define MATCAP\nvarying vec3 vViewPosition;\n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \nvoid main() {\n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\tvViewPosition = - mvPosition.xyz;\n}"; - -const fragment$8 = "#define MATCAP\nuniform vec3 diffuse;\nuniform float opacity;\nuniform sampler2D matcap;\nvarying vec3 vViewPosition;\n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \nvoid main() {\n\t#include \n\tvec4 diffuseColor = vec4( diffuse, opacity );\n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\tvec3 viewDir = normalize( vViewPosition );\n\tvec3 x = normalize( vec3( viewDir.z, 0.0, - viewDir.x ) );\n\tvec3 y = cross( viewDir, x );\n\tvec2 uv = vec2( dot( x, normal ), dot( y, normal ) ) 
* 0.495 + 0.5;\n\t#ifdef USE_MATCAP\n\t\tvec4 matcapColor = texture2D( matcap, uv );\n\t#else\n\t\tvec4 matcapColor = vec4( vec3( mix( 0.2, 0.8, uv.y ) ), 1.0 );\n\t#endif\n\tvec3 outgoingLight = diffuseColor.rgb * matcapColor.rgb;\n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n}"; - -const vertex$7 = "#define NORMAL\n#if defined( FLAT_SHADED ) || defined( USE_BUMPMAP ) || defined( TANGENTSPACE_NORMALMAP )\n\tvarying vec3 vViewPosition;\n#endif\n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \nvoid main() {\n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n#if defined( FLAT_SHADED ) || defined( USE_BUMPMAP ) || defined( TANGENTSPACE_NORMALMAP )\n\tvViewPosition = - mvPosition.xyz;\n#endif\n}"; - -const fragment$7 = "#define NORMAL\nuniform float opacity;\n#if defined( FLAT_SHADED ) || defined( USE_BUMPMAP ) || defined( TANGENTSPACE_NORMALMAP )\n\tvarying vec3 vViewPosition;\n#endif\n#include \n#include \n#include \n#include \n#include \n#include \n#include \nvoid main() {\n\t#include \n\t#include \n\t#include \n\t#include \n\tgl_FragColor = vec4( packNormalToRGB( normal ), opacity );\n\t#ifdef OPAQUE\n\t\tgl_FragColor.a = 1.0;\n\t#endif\n}"; - -const vertex$6 = "#define PHONG\nvarying vec3 vViewPosition;\n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \nvoid main() {\n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\tvViewPosition = - mvPosition.xyz;\n\t#include \n\t#include \n\t#include \n\t#include \n}"; - -const fragment$6 = "#define PHONG\nuniform vec3 diffuse;\nuniform vec3 emissive;\nuniform vec3 specular;\nuniform float shininess;\nuniform float opacity;\n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \nvoid main() {\n\t#include \n\tvec4 diffuseColor = vec4( diffuse, opacity );\n\tReflectedLight reflectedLight = ReflectedLight( vec3( 0.0 ), vec3( 0.0 ), vec3( 0.0 ), vec3( 0.0 ) );\n\tvec3 totalEmissiveRadiance = emissive;\n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\tvec3 outgoingLight = reflectedLight.directDiffuse + reflectedLight.indirectDiffuse + reflectedLight.directSpecular + reflectedLight.indirectSpecular + totalEmissiveRadiance;\n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n}"; - -const vertex$5 = "#define STANDARD\nvarying vec3 vViewPosition;\n#ifdef USE_TRANSMISSION\n\tvarying vec3 vWorldPosition;\n#endif\n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \nvoid main() {\n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\tvViewPosition = - 
mvPosition.xyz;\n\t#include \n\t#include \n\t#include \n#ifdef USE_TRANSMISSION\n\tvWorldPosition = worldPosition.xyz;\n#endif\n}"; - -const fragment$5 = "#define STANDARD\n#ifdef PHYSICAL\n\t#define IOR\n\t#define SPECULAR\n#endif\nuniform vec3 diffuse;\nuniform vec3 emissive;\nuniform float roughness;\nuniform float metalness;\nuniform float opacity;\n#ifdef IOR\n\tuniform float ior;\n#endif\n#ifdef SPECULAR\n\tuniform float specularIntensity;\n\tuniform vec3 specularColor;\n\t#ifdef USE_SPECULARINTENSITYMAP\n\t\tuniform sampler2D specularIntensityMap;\n\t#endif\n\t#ifdef USE_SPECULARCOLORMAP\n\t\tuniform sampler2D specularColorMap;\n\t#endif\n#endif\n#ifdef USE_CLEARCOAT\n\tuniform float clearcoat;\n\tuniform float clearcoatRoughness;\n#endif\n#ifdef USE_IRIDESCENCE\n\tuniform float iridescence;\n\tuniform float iridescenceIOR;\n\tuniform float iridescenceThicknessMinimum;\n\tuniform float iridescenceThicknessMaximum;\n#endif\n#ifdef USE_SHEEN\n\tuniform vec3 sheenColor;\n\tuniform float sheenRoughness;\n\t#ifdef USE_SHEENCOLORMAP\n\t\tuniform sampler2D sheenColorMap;\n\t#endif\n\t#ifdef USE_SHEENROUGHNESSMAP\n\t\tuniform sampler2D sheenRoughnessMap;\n\t#endif\n#endif\nvarying vec3 vViewPosition;\n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \nvoid main() {\n\t#include \n\tvec4 diffuseColor = vec4( diffuse, opacity );\n\tReflectedLight reflectedLight = ReflectedLight( vec3( 0.0 ), vec3( 0.0 ), vec3( 0.0 ), vec3( 0.0 ) );\n\tvec3 totalEmissiveRadiance = emissive;\n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\tvec3 totalDiffuse = reflectedLight.directDiffuse + reflectedLight.indirectDiffuse;\n\tvec3 totalSpecular = reflectedLight.directSpecular + reflectedLight.indirectSpecular;\n\t#include \n\tvec3 outgoingLight = totalDiffuse + totalSpecular + totalEmissiveRadiance;\n\t#ifdef USE_SHEEN\n\t\tfloat sheenEnergyComp = 1.0 - 0.157 * max3( material.sheenColor );\n\t\toutgoingLight = outgoingLight * sheenEnergyComp + sheenSpecular;\n\t#endif\n\t#ifdef USE_CLEARCOAT\n\t\tfloat dotNVcc = saturate( dot( geometry.clearcoatNormal, geometry.viewDir ) );\n\t\tvec3 Fcc = F_Schlick( material.clearcoatF0, material.clearcoatF90, dotNVcc );\n\t\toutgoingLight = outgoingLight * ( 1.0 - material.clearcoat * Fcc ) + clearcoatSpecular * material.clearcoat;\n\t#endif\n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n}"; - -const vertex$4 = "#define TOON\nvarying vec3 vViewPosition;\n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \nvoid main() {\n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\tvViewPosition = - mvPosition.xyz;\n\t#include \n\t#include \n\t#include \n}"; - -const fragment$4 = "#define TOON\nuniform vec3 diffuse;\nuniform vec3 emissive;\nuniform float opacity;\n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include 
\n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \nvoid main() {\n\t#include \n\tvec4 diffuseColor = vec4( diffuse, opacity );\n\tReflectedLight reflectedLight = ReflectedLight( vec3( 0.0 ), vec3( 0.0 ), vec3( 0.0 ), vec3( 0.0 ) );\n\tvec3 totalEmissiveRadiance = emissive;\n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\tvec3 outgoingLight = reflectedLight.directDiffuse + reflectedLight.indirectDiffuse + totalEmissiveRadiance;\n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n}"; - -const vertex$3 = "uniform float size;\nuniform float scale;\n#include \n#include \n#include \n#include \n#include \n#include \nvoid main() {\n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\tgl_PointSize = size;\n\t#ifdef USE_SIZEATTENUATION\n\t\tbool isPerspective = isPerspectiveMatrix( projectionMatrix );\n\t\tif ( isPerspective ) gl_PointSize *= ( scale / - mvPosition.z );\n\t#endif\n\t#include \n\t#include \n\t#include \n\t#include \n}"; - -const fragment$3 = "uniform vec3 diffuse;\nuniform float opacity;\n#include \n#include \n#include \n#include \n#include \n#include \n#include \nvoid main() {\n\t#include \n\tvec3 outgoingLight = vec3( 0.0 );\n\tvec4 diffuseColor = vec4( diffuse, opacity );\n\t#include \n\t#include \n\t#include \n\t#include \n\toutgoingLight = diffuseColor.rgb;\n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n}"; - -const vertex$2 = "#include \n#include \n#include \n#include \n#include \nvoid main() {\n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n\t#include \n}"; - -const fragment$2 = "uniform vec3 color;\nuniform float opacity;\n#include \n#include \n#include \n#include \n#include \n#include \n#include \nvoid main() {\n\tgl_FragColor = vec4( color, opacity * ( 1.0 - getShadowMask() ) );\n\t#include \n\t#include \n\t#include \n}"; - -const vertex$1 = "uniform float rotation;\nuniform vec2 center;\n#include \n#include \n#include \n#include \n#include \nvoid main() {\n\t#include \n\tvec4 mvPosition = modelViewMatrix * vec4( 0.0, 0.0, 0.0, 1.0 );\n\tvec2 scale;\n\tscale.x = length( vec3( modelMatrix[ 0 ].x, modelMatrix[ 0 ].y, modelMatrix[ 0 ].z ) );\n\tscale.y = length( vec3( modelMatrix[ 1 ].x, modelMatrix[ 1 ].y, modelMatrix[ 1 ].z ) );\n\t#ifndef USE_SIZEATTENUATION\n\t\tbool isPerspective = isPerspectiveMatrix( projectionMatrix );\n\t\tif ( isPerspective ) scale *= - mvPosition.z;\n\t#endif\n\tvec2 alignedPosition = ( position.xy - ( center - vec2( 0.5 ) ) ) * scale;\n\tvec2 rotatedPosition;\n\trotatedPosition.x = cos( rotation ) * alignedPosition.x - sin( rotation ) * alignedPosition.y;\n\trotatedPosition.y = sin( rotation ) * alignedPosition.x + cos( rotation ) * alignedPosition.y;\n\tmvPosition.xy += rotatedPosition;\n\tgl_Position = projectionMatrix * mvPosition;\n\t#include \n\t#include \n\t#include \n}"; - -const fragment$1 = "uniform vec3 diffuse;\nuniform float opacity;\n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \nvoid main() {\n\t#include \n\tvec3 outgoingLight = vec3( 0.0 );\n\tvec4 diffuseColor = vec4( diffuse, opacity );\n\t#include \n\t#include \n\t#include \n\t#include \n\toutgoingLight = diffuseColor.rgb;\n\t#include \n\t#include 
\n\t#include \n\t#include \n}"; - -const ShaderChunk = { - alphamap_fragment: alphamap_fragment, - alphamap_pars_fragment: alphamap_pars_fragment, - alphatest_fragment: alphatest_fragment, - alphatest_pars_fragment: alphatest_pars_fragment, - aomap_fragment: aomap_fragment, - aomap_pars_fragment: aomap_pars_fragment, - begin_vertex: begin_vertex, - beginnormal_vertex: beginnormal_vertex, - bsdfs: bsdfs, - iridescence_fragment: iridescence_fragment, - bumpmap_pars_fragment: bumpmap_pars_fragment, - clipping_planes_fragment: clipping_planes_fragment, - clipping_planes_pars_fragment: clipping_planes_pars_fragment, - clipping_planes_pars_vertex: clipping_planes_pars_vertex, - clipping_planes_vertex: clipping_planes_vertex, - color_fragment: color_fragment, - color_pars_fragment: color_pars_fragment, - color_pars_vertex: color_pars_vertex, - color_vertex: color_vertex, - common: common, - cube_uv_reflection_fragment: cube_uv_reflection_fragment, - defaultnormal_vertex: defaultnormal_vertex, - displacementmap_pars_vertex: displacementmap_pars_vertex, - displacementmap_vertex: displacementmap_vertex, - emissivemap_fragment: emissivemap_fragment, - emissivemap_pars_fragment: emissivemap_pars_fragment, - encodings_fragment: encodings_fragment, - encodings_pars_fragment: encodings_pars_fragment, - envmap_fragment: envmap_fragment, - envmap_common_pars_fragment: envmap_common_pars_fragment, - envmap_pars_fragment: envmap_pars_fragment, - envmap_pars_vertex: envmap_pars_vertex, - envmap_physical_pars_fragment: envmap_physical_pars_fragment, - envmap_vertex: envmap_vertex, - fog_vertex: fog_vertex, - fog_pars_vertex: fog_pars_vertex, - fog_fragment: fog_fragment, - fog_pars_fragment: fog_pars_fragment, - gradientmap_pars_fragment: gradientmap_pars_fragment, - lightmap_fragment: lightmap_fragment, - lightmap_pars_fragment: lightmap_pars_fragment, - lights_lambert_fragment: lights_lambert_fragment, - lights_lambert_pars_fragment: lights_lambert_pars_fragment, - lights_pars_begin: lights_pars_begin, - lights_toon_fragment: lights_toon_fragment, - lights_toon_pars_fragment: lights_toon_pars_fragment, - lights_phong_fragment: lights_phong_fragment, - lights_phong_pars_fragment: lights_phong_pars_fragment, - lights_physical_fragment: lights_physical_fragment, - lights_physical_pars_fragment: lights_physical_pars_fragment, - lights_fragment_begin: lights_fragment_begin, - lights_fragment_maps: lights_fragment_maps, - lights_fragment_end: lights_fragment_end, - logdepthbuf_fragment: logdepthbuf_fragment, - logdepthbuf_pars_fragment: logdepthbuf_pars_fragment, - logdepthbuf_pars_vertex: logdepthbuf_pars_vertex, - logdepthbuf_vertex: logdepthbuf_vertex, - map_fragment: map_fragment, - map_pars_fragment: map_pars_fragment, - map_particle_fragment: map_particle_fragment, - map_particle_pars_fragment: map_particle_pars_fragment, - metalnessmap_fragment: metalnessmap_fragment, - metalnessmap_pars_fragment: metalnessmap_pars_fragment, - morphcolor_vertex: morphcolor_vertex, - morphnormal_vertex: morphnormal_vertex, - morphtarget_pars_vertex: morphtarget_pars_vertex, - morphtarget_vertex: morphtarget_vertex, - normal_fragment_begin: normal_fragment_begin, - normal_fragment_maps: normal_fragment_maps, - normal_pars_fragment: normal_pars_fragment, - normal_pars_vertex: normal_pars_vertex, - normal_vertex: normal_vertex, - normalmap_pars_fragment: normalmap_pars_fragment, - clearcoat_normal_fragment_begin: clearcoat_normal_fragment_begin, - clearcoat_normal_fragment_maps: clearcoat_normal_fragment_maps, - 
clearcoat_pars_fragment: clearcoat_pars_fragment, - iridescence_pars_fragment: iridescence_pars_fragment, - output_fragment: output_fragment, - packing: packing, - premultiplied_alpha_fragment: premultiplied_alpha_fragment, - project_vertex: project_vertex, - dithering_fragment: dithering_fragment, - dithering_pars_fragment: dithering_pars_fragment, - roughnessmap_fragment: roughnessmap_fragment, - roughnessmap_pars_fragment: roughnessmap_pars_fragment, - shadowmap_pars_fragment: shadowmap_pars_fragment, - shadowmap_pars_vertex: shadowmap_pars_vertex, - shadowmap_vertex: shadowmap_vertex, - shadowmask_pars_fragment: shadowmask_pars_fragment, - skinbase_vertex: skinbase_vertex, - skinning_pars_vertex: skinning_pars_vertex, - skinning_vertex: skinning_vertex, - skinnormal_vertex: skinnormal_vertex, - specularmap_fragment: specularmap_fragment, - specularmap_pars_fragment: specularmap_pars_fragment, - tonemapping_fragment: tonemapping_fragment, - tonemapping_pars_fragment: tonemapping_pars_fragment, - transmission_fragment: transmission_fragment, - transmission_pars_fragment: transmission_pars_fragment, - uv_pars_fragment: uv_pars_fragment, - uv_pars_vertex: uv_pars_vertex, - uv_vertex: uv_vertex, - uv2_pars_fragment: uv2_pars_fragment, - uv2_pars_vertex: uv2_pars_vertex, - uv2_vertex: uv2_vertex, - worldpos_vertex: worldpos_vertex, - - background_vert: vertex$h, - background_frag: fragment$h, - backgroundCube_vert: vertex$g, - backgroundCube_frag: fragment$g, - cube_vert: vertex$f, - cube_frag: fragment$f, - depth_vert: vertex$e, - depth_frag: fragment$e, - distanceRGBA_vert: vertex$d, - distanceRGBA_frag: fragment$d, - equirect_vert: vertex$c, - equirect_frag: fragment$c, - linedashed_vert: vertex$b, - linedashed_frag: fragment$b, - meshbasic_vert: vertex$a, - meshbasic_frag: fragment$a, - meshlambert_vert: vertex$9, - meshlambert_frag: fragment$9, - meshmatcap_vert: vertex$8, - meshmatcap_frag: fragment$8, - meshnormal_vert: vertex$7, - meshnormal_frag: fragment$7, - meshphong_vert: vertex$6, - meshphong_frag: fragment$6, - meshphysical_vert: vertex$5, - meshphysical_frag: fragment$5, - meshtoon_vert: vertex$4, - meshtoon_frag: fragment$4, - points_vert: vertex$3, - points_frag: fragment$3, - shadow_vert: vertex$2, - shadow_frag: fragment$2, - sprite_vert: vertex$1, - sprite_frag: fragment$1 -}; - -/** - * Uniforms library for shared webgl shaders - */ - -const UniformsLib = { - - common: { - - diffuse: { value: /*@__PURE__*/ new Color(0xffffff) }, - opacity: { value: 1.0 }, - - map: { value: null }, - uvTransform: { value: /*@__PURE__*/ new Matrix3() }, - uv2Transform: { value: /*@__PURE__*/ new Matrix3() }, - - alphaMap: { value: null }, - alphaTest: { value: 0 } - - }, - - specularmap: { - - specularMap: { value: null }, - - }, - - envmap: { - - envMap: { value: null }, - flipEnvMap: { value: - 1 }, - reflectivity: { value: 1.0 }, // basic, lambert, phong - ior: { value: 1.5 }, // physical - refractionRatio: { value: 0.98 }, // basic, lambert, phong - - }, - - aomap: { - - aoMap: { value: null }, - aoMapIntensity: { value: 1 } - - }, - - lightmap: { - - lightMap: { value: null }, - lightMapIntensity: { value: 1 } - - }, - - emissivemap: { - - emissiveMap: { value: null } - - }, - - bumpmap: { - - bumpMap: { value: null }, - bumpScale: { value: 1 } - - }, - - normalmap: { - - normalMap: { value: null }, - normalScale: { value: /*@__PURE__*/ new Vector2(1, 1) } - - }, - - displacementmap: { - - displacementMap: { value: null }, - displacementScale: { value: 1 }, - displacementBias: { 
value: 0 } - - }, - - roughnessmap: { - - roughnessMap: { value: null } - - }, - - metalnessmap: { - - metalnessMap: { value: null } - - }, - - gradientmap: { - - gradientMap: { value: null } - - }, - - fog: { - - fogDensity: { value: 0.00025 }, - fogNear: { value: 1 }, - fogFar: { value: 2000 }, - fogColor: { value: /*@__PURE__*/ new Color(0xffffff) } - - }, - - lights: { - - ambientLightColor: { value: [] }, - - lightProbe: { value: [] }, - - directionalLights: { - value: [], properties: { - direction: {}, - color: {} - } - }, - - directionalLightShadows: { - value: [], properties: { - shadowBias: {}, - shadowNormalBias: {}, - shadowRadius: {}, - shadowMapSize: {} - } - }, - - directionalShadowMap: { value: [] }, - directionalShadowMatrix: { value: [] }, - - spotLights: { - value: [], properties: { - color: {}, - position: {}, - direction: {}, - distance: {}, - coneCos: {}, - penumbraCos: {}, - decay: {} - } - }, - - spotLightShadows: { - value: [], properties: { - shadowBias: {}, - shadowNormalBias: {}, - shadowRadius: {}, - shadowMapSize: {} - } - }, - - spotLightMap: { value: [] }, - spotShadowMap: { value: [] }, - spotLightMatrix: { value: [] }, - - pointLights: { - value: [], properties: { - color: {}, - position: {}, - decay: {}, - distance: {} - } - }, - - pointLightShadows: { - value: [], properties: { - shadowBias: {}, - shadowNormalBias: {}, - shadowRadius: {}, - shadowMapSize: {}, - shadowCameraNear: {}, - shadowCameraFar: {} - } - }, - - pointShadowMap: { value: [] }, - pointShadowMatrix: { value: [] }, - - hemisphereLights: { - value: [], properties: { - direction: {}, - skyColor: {}, - groundColor: {} - } - }, - - // TODO (abelnation): RectAreaLight BRDF data needs to be moved from example to main src - rectAreaLights: { - value: [], properties: { - color: {}, - position: {}, - width: {}, - height: {} - } - }, - - ltc_1: { value: null }, - ltc_2: { value: null } - - }, - - points: { - - diffuse: { value: /*@__PURE__*/ new Color(0xffffff) }, - opacity: { value: 1.0 }, - size: { value: 1.0 }, - scale: { value: 1.0 }, - map: { value: null }, - alphaMap: { value: null }, - alphaTest: { value: 0 }, - uvTransform: { value: /*@__PURE__*/ new Matrix3() } - - }, - - sprite: { - - diffuse: { value: /*@__PURE__*/ new Color(0xffffff) }, - opacity: { value: 1.0 }, - center: { value: /*@__PURE__*/ new Vector2(0.5, 0.5) }, - rotation: { value: 0.0 }, - map: { value: null }, - alphaMap: { value: null }, - alphaTest: { value: 0 }, - uvTransform: { value: /*@__PURE__*/ new Matrix3() } - - } - -}; - -const ShaderLib = { - - basic: { - - uniforms: /*@__PURE__*/ mergeUniforms([ - UniformsLib.common, - UniformsLib.specularmap, - UniformsLib.envmap, - UniformsLib.aomap, - UniformsLib.lightmap, - UniformsLib.fog - ]), - - vertexShader: ShaderChunk.meshbasic_vert, - fragmentShader: ShaderChunk.meshbasic_frag - - }, - - lambert: { - - uniforms: /*@__PURE__*/ mergeUniforms([ - UniformsLib.common, - UniformsLib.specularmap, - UniformsLib.envmap, - UniformsLib.aomap, - UniformsLib.lightmap, - UniformsLib.emissivemap, - UniformsLib.bumpmap, - UniformsLib.normalmap, - UniformsLib.displacementmap, - UniformsLib.fog, - UniformsLib.lights, - { - emissive: { value: /*@__PURE__*/ new Color(0x000000) } - } - ]), - - vertexShader: ShaderChunk.meshlambert_vert, - fragmentShader: ShaderChunk.meshlambert_frag - - }, - - phong: { - - uniforms: /*@__PURE__*/ mergeUniforms([ - UniformsLib.common, - UniformsLib.specularmap, - UniformsLib.envmap, - UniformsLib.aomap, - UniformsLib.lightmap, - UniformsLib.emissivemap, - 
UniformsLib.bumpmap, - UniformsLib.normalmap, - UniformsLib.displacementmap, - UniformsLib.fog, - UniformsLib.lights, - { - emissive: { value: /*@__PURE__*/ new Color(0x000000) }, - specular: { value: /*@__PURE__*/ new Color(0x111111) }, - shininess: { value: 30 } - } - ]), - - vertexShader: ShaderChunk.meshphong_vert, - fragmentShader: ShaderChunk.meshphong_frag - - }, - - standard: { - - uniforms: /*@__PURE__*/ mergeUniforms([ - UniformsLib.common, - UniformsLib.envmap, - UniformsLib.aomap, - UniformsLib.lightmap, - UniformsLib.emissivemap, - UniformsLib.bumpmap, - UniformsLib.normalmap, - UniformsLib.displacementmap, - UniformsLib.roughnessmap, - UniformsLib.metalnessmap, - UniformsLib.fog, - UniformsLib.lights, - { - emissive: { value: /*@__PURE__*/ new Color(0x000000) }, - roughness: { value: 1.0 }, - metalness: { value: 0.0 }, - envMapIntensity: { value: 1 } // temporary - } - ]), - - vertexShader: ShaderChunk.meshphysical_vert, - fragmentShader: ShaderChunk.meshphysical_frag - - }, - - toon: { - - uniforms: /*@__PURE__*/ mergeUniforms([ - UniformsLib.common, - UniformsLib.aomap, - UniformsLib.lightmap, - UniformsLib.emissivemap, - UniformsLib.bumpmap, - UniformsLib.normalmap, - UniformsLib.displacementmap, - UniformsLib.gradientmap, - UniformsLib.fog, - UniformsLib.lights, - { - emissive: { value: /*@__PURE__*/ new Color(0x000000) } - } - ]), - - vertexShader: ShaderChunk.meshtoon_vert, - fragmentShader: ShaderChunk.meshtoon_frag - - }, - - matcap: { - - uniforms: /*@__PURE__*/ mergeUniforms([ - UniformsLib.common, - UniformsLib.bumpmap, - UniformsLib.normalmap, - UniformsLib.displacementmap, - UniformsLib.fog, - { - matcap: { value: null } - } - ]), - - vertexShader: ShaderChunk.meshmatcap_vert, - fragmentShader: ShaderChunk.meshmatcap_frag - - }, - - points: { - - uniforms: /*@__PURE__*/ mergeUniforms([ - UniformsLib.points, - UniformsLib.fog - ]), - - vertexShader: ShaderChunk.points_vert, - fragmentShader: ShaderChunk.points_frag - - }, - - dashed: { - - uniforms: /*@__PURE__*/ mergeUniforms([ - UniformsLib.common, - UniformsLib.fog, - { - scale: { value: 1 }, - dashSize: { value: 1 }, - totalSize: { value: 2 } - } - ]), - - vertexShader: ShaderChunk.linedashed_vert, - fragmentShader: ShaderChunk.linedashed_frag - - }, - - depth: { - - uniforms: /*@__PURE__*/ mergeUniforms([ - UniformsLib.common, - UniformsLib.displacementmap - ]), - - vertexShader: ShaderChunk.depth_vert, - fragmentShader: ShaderChunk.depth_frag - - }, - - normal: { - - uniforms: /*@__PURE__*/ mergeUniforms([ - UniformsLib.common, - UniformsLib.bumpmap, - UniformsLib.normalmap, - UniformsLib.displacementmap, - { - opacity: { value: 1.0 } - } - ]), - - vertexShader: ShaderChunk.meshnormal_vert, - fragmentShader: ShaderChunk.meshnormal_frag - - }, - - sprite: { - - uniforms: /*@__PURE__*/ mergeUniforms([ - UniformsLib.sprite, - UniformsLib.fog - ]), - - vertexShader: ShaderChunk.sprite_vert, - fragmentShader: ShaderChunk.sprite_frag - - }, - - background: { - - uniforms: { - uvTransform: { value: /*@__PURE__*/ new Matrix3() }, - t2D: { value: null }, - backgroundIntensity: { value: 1 } - }, - - vertexShader: ShaderChunk.background_vert, - fragmentShader: ShaderChunk.background_frag - - }, - - backgroundCube: { - - uniforms: { - envMap: { value: null }, - flipEnvMap: { value: - 1 }, - backgroundBlurriness: { value: 0 }, - backgroundIntensity: { value: 1 } - }, - - vertexShader: ShaderChunk.backgroundCube_vert, - fragmentShader: ShaderChunk.backgroundCube_frag - - }, - - cube: { - - uniforms: { - tCube: { value: 
null }, - tFlip: { value: - 1 }, - opacity: { value: 1.0 } - }, - - vertexShader: ShaderChunk.cube_vert, - fragmentShader: ShaderChunk.cube_frag - - }, - - equirect: { - - uniforms: { - tEquirect: { value: null }, - }, - - vertexShader: ShaderChunk.equirect_vert, - fragmentShader: ShaderChunk.equirect_frag - - }, - - distanceRGBA: { - - uniforms: /*@__PURE__*/ mergeUniforms([ - UniformsLib.common, - UniformsLib.displacementmap, - { - referencePosition: { value: /*@__PURE__*/ new Vector3() }, - nearDistance: { value: 1 }, - farDistance: { value: 1000 } - } - ]), - - vertexShader: ShaderChunk.distanceRGBA_vert, - fragmentShader: ShaderChunk.distanceRGBA_frag - - }, - - shadow: { - - uniforms: /*@__PURE__*/ mergeUniforms([ - UniformsLib.lights, - UniformsLib.fog, - { - color: { value: /*@__PURE__*/ new Color(0x00000) }, - opacity: { value: 1.0 } - }, - ]), - - vertexShader: ShaderChunk.shadow_vert, - fragmentShader: ShaderChunk.shadow_frag - - } - -}; - -ShaderLib.physical = { - - uniforms: /*@__PURE__*/ mergeUniforms([ - ShaderLib.standard.uniforms, - { - clearcoat: { value: 0 }, - clearcoatMap: { value: null }, - clearcoatRoughness: { value: 0 }, - clearcoatRoughnessMap: { value: null }, - clearcoatNormalScale: { value: /*@__PURE__*/ new Vector2(1, 1) }, - clearcoatNormalMap: { value: null }, - iridescence: { value: 0 }, - iridescenceMap: { value: null }, - iridescenceIOR: { value: 1.3 }, - iridescenceThicknessMinimum: { value: 100 }, - iridescenceThicknessMaximum: { value: 400 }, - iridescenceThicknessMap: { value: null }, - sheen: { value: 0 }, - sheenColor: { value: /*@__PURE__*/ new Color(0x000000) }, - sheenColorMap: { value: null }, - sheenRoughness: { value: 1 }, - sheenRoughnessMap: { value: null }, - transmission: { value: 0 }, - transmissionMap: { value: null }, - transmissionSamplerSize: { value: /*@__PURE__*/ new Vector2() }, - transmissionSamplerMap: { value: null }, - thickness: { value: 0 }, - thicknessMap: { value: null }, - attenuationDistance: { value: 0 }, - attenuationColor: { value: /*@__PURE__*/ new Color(0x000000) }, - specularIntensity: { value: 1 }, - specularIntensityMap: { value: null }, - specularColor: { value: /*@__PURE__*/ new Color(1, 1, 1) }, - specularColorMap: { value: null }, - } - ]), - - vertexShader: ShaderChunk.meshphysical_vert, - fragmentShader: ShaderChunk.meshphysical_frag - -}; - -const _rgb = { r: 0, b: 0, g: 0 }; - -function WebGLBackground(renderer, cubemaps, cubeuvmaps, state, objects, alpha, premultipliedAlpha) { - - const clearColor = new Color(0x000000); - let clearAlpha = alpha === true ? 0 : 1; - - let planeMesh; - let boxMesh; - - let currentBackground = null; - let currentBackgroundVersion = 0; - let currentTonemapping = null; - - function render(renderList, scene) { - - let forceClear = false; - let background = scene.isScene === true ? scene.background : null; - - if (background && background.isTexture) { - - const usePMREM = scene.backgroundBlurriness > 0; // use PMREM if the user wants to blur the background - background = (usePMREM ? cubeuvmaps : cubemaps).get(background); - - } - - // Ignore background in AR - // TODO: Reconsider this. 
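- // Note (descriptive comment, not part of the original source): when an XR session is active and reports the 'additive' environment blend mode (typical for AR passthrough), the background is dropped just below so the real-world camera image is not occluded.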
- - const xr = renderer.xr; - const session = xr.getSession && xr.getSession(); - - if (session && session.environmentBlendMode === 'additive') { - - background = null; - - } - - if (background === null) { - - setClear(clearColor, clearAlpha); - - } else if (background && background.isColor) { - - setClear(background, 1); - forceClear = true; - - } - - if (renderer.autoClear || forceClear) { - - renderer.clear(renderer.autoClearColor, renderer.autoClearDepth, renderer.autoClearStencil); - - } - - if (background && (background.isCubeTexture || background.mapping === CubeUVReflectionMapping)) { - - if (boxMesh === undefined) { - - boxMesh = new Mesh( - new BoxGeometry(1, 1, 1), - new ShaderMaterial({ - name: 'BackgroundCubeMaterial', - uniforms: cloneUniforms(ShaderLib.backgroundCube.uniforms), - vertexShader: ShaderLib.backgroundCube.vertexShader, - fragmentShader: ShaderLib.backgroundCube.fragmentShader, - side: BackSide, - depthTest: false, - depthWrite: false, - fog: false - }) - ); - - boxMesh.geometry.deleteAttribute('normal'); - boxMesh.geometry.deleteAttribute('uv'); - - boxMesh.onBeforeRender = function (renderer, scene, camera) { - - this.matrixWorld.copyPosition(camera.matrixWorld); - - }; - - // add "envMap" material property so the renderer can evaluate it like for built-in materials - Object.defineProperty(boxMesh.material, 'envMap', { - - get: function () { - - return this.uniforms.envMap.value; - - } - - }); - - objects.update(boxMesh); - - } - - boxMesh.material.uniforms.envMap.value = background; - boxMesh.material.uniforms.flipEnvMap.value = (background.isCubeTexture && background.isRenderTargetTexture === false) ? - 1 : 1; - boxMesh.material.uniforms.backgroundBlurriness.value = scene.backgroundBlurriness; - boxMesh.material.uniforms.backgroundIntensity.value = scene.backgroundIntensity; - boxMesh.material.toneMapped = (background.encoding === sRGBEncoding) ? false : true; - - if (currentBackground !== background || - currentBackgroundVersion !== background.version || - currentTonemapping !== renderer.toneMapping) { - - boxMesh.material.needsUpdate = true; - - currentBackground = background; - currentBackgroundVersion = background.version; - currentTonemapping = renderer.toneMapping; - - } - - boxMesh.layers.enableAll(); - - // push to the pre-sorted opaque render list - renderList.unshift(boxMesh, boxMesh.geometry, boxMesh.material, 0, 0, null); - - } else if (background && background.isTexture) { - - if (planeMesh === undefined) { - - planeMesh = new Mesh( - new PlaneGeometry(2, 2), - new ShaderMaterial({ - name: 'BackgroundMaterial', - uniforms: cloneUniforms(ShaderLib.background.uniforms), - vertexShader: ShaderLib.background.vertexShader, - fragmentShader: ShaderLib.background.fragmentShader, - side: FrontSide, - depthTest: false, - depthWrite: false, - fog: false - }) - ); - - planeMesh.geometry.deleteAttribute('normal'); - - // add "map" material property so the renderer can evaluate it like for built-in materials - Object.defineProperty(planeMesh.material, 'map', { - - get: function () { - - return this.uniforms.t2D.value; - - } - - }); - - objects.update(planeMesh); - - } - - planeMesh.material.uniforms.t2D.value = background; - planeMesh.material.uniforms.backgroundIntensity.value = scene.backgroundIntensity; - planeMesh.material.toneMapped = (background.encoding === sRGBEncoding) ? 
false : true; - - if (background.matrixAutoUpdate === true) { - - background.updateMatrix(); - - } - - planeMesh.material.uniforms.uvTransform.value.copy(background.matrix); - - if (currentBackground !== background || - currentBackgroundVersion !== background.version || - currentTonemapping !== renderer.toneMapping) { - - planeMesh.material.needsUpdate = true; - - currentBackground = background; - currentBackgroundVersion = background.version; - currentTonemapping = renderer.toneMapping; - - } - - planeMesh.layers.enableAll(); - - // push to the pre-sorted opaque render list - renderList.unshift(planeMesh, planeMesh.geometry, planeMesh.material, 0, 0, null); - - } - - } - - function setClear(color, alpha) { - - color.getRGB(_rgb, getUnlitUniformColorSpace(renderer)); - - state.buffers.color.setClear(_rgb.r, _rgb.g, _rgb.b, alpha, premultipliedAlpha); - - } - - return { - - getClearColor: function () { - - return clearColor; - - }, - setClearColor: function (color, alpha = 1) { - - clearColor.set(color); - clearAlpha = alpha; - setClear(clearColor, clearAlpha); - - }, - getClearAlpha: function () { - - return clearAlpha; - - }, - setClearAlpha: function (alpha) { - - clearAlpha = alpha; - setClear(clearColor, clearAlpha); - - }, - render: render - - }; - -} - -function WebGLBindingStates(gl, extensions, attributes, capabilities) { - - const maxVertexAttributes = gl.getParameter(34921); - - const extension = capabilities.isWebGL2 ? null : extensions.get('OES_vertex_array_object'); - const vaoAvailable = capabilities.isWebGL2 || extension !== null; - - const bindingStates = {}; - - const defaultState = createBindingState(null); - let currentState = defaultState; - let forceUpdate = false; - - function setup(object, material, program, geometry, index) { - - let updateBuffers = false; - - if (vaoAvailable) { - - const state = getBindingState(geometry, program, material); - - if (currentState !== state) { - - currentState = state; - bindVertexArrayObject(currentState.object); - - } - - updateBuffers = needsUpdate(object, geometry, program, index); - - if (updateBuffers) saveCache(object, geometry, program, index); - - } else { - - const wireframe = (material.wireframe === true); - - if (currentState.geometry !== geometry.id || - currentState.program !== program.id || - currentState.wireframe !== wireframe) { - - currentState.geometry = geometry.id; - currentState.program = program.id; - currentState.wireframe = wireframe; - - updateBuffers = true; - - } - - } - - if (index !== null) { - - attributes.update(index, 34963); - - } - - if (updateBuffers || forceUpdate) { - - forceUpdate = false; - - setupVertexAttributes(object, material, program, geometry); - - if (index !== null) { - - gl.bindBuffer(34963, attributes.get(index).buffer); - - } - - } - - } - - function createVertexArrayObject() { - - if (capabilities.isWebGL2) return gl.createVertexArray(); - - return extension.createVertexArrayOES(); - - } - - function bindVertexArrayObject(vao) { - - if (capabilities.isWebGL2) return gl.bindVertexArray(vao); - - return extension.bindVertexArrayOES(vao); - - } - - function deleteVertexArrayObject(vao) { - - if (capabilities.isWebGL2) return gl.deleteVertexArray(vao); - - return extension.deleteVertexArrayOES(vao); - - } - - function getBindingState(geometry, program, material) { - - const wireframe = (material.wireframe === true); - - let programMap = bindingStates[geometry.id]; - - if (programMap === undefined) { - - programMap = {}; - bindingStates[geometry.id] = programMap; - - } - - let stateMap 
= programMap[program.id]; - - if (stateMap === undefined) { - - stateMap = {}; - programMap[program.id] = stateMap; - - } - - let state = stateMap[wireframe]; - - if (state === undefined) { - - state = createBindingState(createVertexArrayObject()); - stateMap[wireframe] = state; - - } - - return state; - - } - - function createBindingState(vao) { - - const newAttributes = []; - const enabledAttributes = []; - const attributeDivisors = []; - - for (let i = 0; i < maxVertexAttributes; i++) { - - newAttributes[i] = 0; - enabledAttributes[i] = 0; - attributeDivisors[i] = 0; - - } - - return { - - // for backward compatibility on non-VAO support browser - geometry: null, - program: null, - wireframe: false, - - newAttributes: newAttributes, - enabledAttributes: enabledAttributes, - attributeDivisors: attributeDivisors, - object: vao, - attributes: {}, - index: null - - }; - - } - - function needsUpdate(object, geometry, program, index) { - - const cachedAttributes = currentState.attributes; - const geometryAttributes = geometry.attributes; - - let attributesNum = 0; - - const programAttributes = program.getAttributes(); - - for (const name in programAttributes) { - - const programAttribute = programAttributes[name]; - - if (programAttribute.location >= 0) { - - const cachedAttribute = cachedAttributes[name]; - let geometryAttribute = geometryAttributes[name]; - - if (geometryAttribute === undefined) { - - if (name === 'instanceMatrix' && object.instanceMatrix) geometryAttribute = object.instanceMatrix; - if (name === 'instanceColor' && object.instanceColor) geometryAttribute = object.instanceColor; - - } - - if (cachedAttribute === undefined) return true; - - if (cachedAttribute.attribute !== geometryAttribute) return true; - - if (geometryAttribute && cachedAttribute.data !== geometryAttribute.data) return true; - - attributesNum++; - - } - - } - - if (currentState.attributesNum !== attributesNum) return true; - - if (currentState.index !== index) return true; - - return false; - - } - - function saveCache(object, geometry, program, index) { - - const cache = {}; - const attributes = geometry.attributes; - let attributesNum = 0; - - const programAttributes = program.getAttributes(); - - for (const name in programAttributes) { - - const programAttribute = programAttributes[name]; - - if (programAttribute.location >= 0) { - - let attribute = attributes[name]; - - if (attribute === undefined) { - - if (name === 'instanceMatrix' && object.instanceMatrix) attribute = object.instanceMatrix; - if (name === 'instanceColor' && object.instanceColor) attribute = object.instanceColor; - - } - - const data = {}; - data.attribute = attribute; - - if (attribute && attribute.data) { - - data.data = attribute.data; - - } - - cache[name] = data; - - attributesNum++; - - } - - } - - currentState.attributes = cache; - currentState.attributesNum = attributesNum; - - currentState.index = index; - - } - - function initAttributes() { - - const newAttributes = currentState.newAttributes; - - for (let i = 0, il = newAttributes.length; i < il; i++) { - - newAttributes[i] = 0; - - } - - } - - function enableAttribute(attribute) { - - enableAttributeAndDivisor(attribute, 0); - - } - - function enableAttributeAndDivisor(attribute, meshPerAttribute) { - - const newAttributes = currentState.newAttributes; - const enabledAttributes = currentState.enabledAttributes; - const attributeDivisors = currentState.attributeDivisors; - - newAttributes[attribute] = 1; - - if (enabledAttributes[attribute] === 0) { - - 
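- // Note (descriptive comment, not part of the original source): enabled-attribute and divisor state is cached per binding state, so the GL calls below are issued only when the cached value differs from the requested one.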
gl.enableVertexAttribArray(attribute); - enabledAttributes[attribute] = 1; - - } - - if (attributeDivisors[attribute] !== meshPerAttribute) { - - const extension = capabilities.isWebGL2 ? gl : extensions.get('ANGLE_instanced_arrays'); - - extension[capabilities.isWebGL2 ? 'vertexAttribDivisor' : 'vertexAttribDivisorANGLE'](attribute, meshPerAttribute); - attributeDivisors[attribute] = meshPerAttribute; - - } - - } - - function disableUnusedAttributes() { - - const newAttributes = currentState.newAttributes; - const enabledAttributes = currentState.enabledAttributes; - - for (let i = 0, il = enabledAttributes.length; i < il; i++) { - - if (enabledAttributes[i] !== newAttributes[i]) { - - gl.disableVertexAttribArray(i); - enabledAttributes[i] = 0; - - } - - } - - } - - function vertexAttribPointer(index, size, type, normalized, stride, offset) { - - if (capabilities.isWebGL2 === true && (type === 5124 || type === 5125)) { - - gl.vertexAttribIPointer(index, size, type, stride, offset); - - } else { - - gl.vertexAttribPointer(index, size, type, normalized, stride, offset); - - } - - } - - function setupVertexAttributes(object, material, program, geometry) { - - if (capabilities.isWebGL2 === false && (object.isInstancedMesh || geometry.isInstancedBufferGeometry)) { - - if (extensions.get('ANGLE_instanced_arrays') === null) return; - - } - - initAttributes(); - - const geometryAttributes = geometry.attributes; - - const programAttributes = program.getAttributes(); - - const materialDefaultAttributeValues = material.defaultAttributeValues; - - for (const name in programAttributes) { - - const programAttribute = programAttributes[name]; - - if (programAttribute.location >= 0) { - - let geometryAttribute = geometryAttributes[name]; - - if (geometryAttribute === undefined) { - - if (name === 'instanceMatrix' && object.instanceMatrix) geometryAttribute = object.instanceMatrix; - if (name === 'instanceColor' && object.instanceColor) geometryAttribute = object.instanceColor; - - } - - if (geometryAttribute !== undefined) { - - const normalized = geometryAttribute.normalized; - const size = geometryAttribute.itemSize; - - const attribute = attributes.get(geometryAttribute); - - // TODO Attribute may not be available on context restore - - if (attribute === undefined) continue; - - const buffer = attribute.buffer; - const type = attribute.type; - const bytesPerElement = attribute.bytesPerElement; - - if (geometryAttribute.isInterleavedBufferAttribute) { - - const data = geometryAttribute.data; - const stride = data.stride; - const offset = geometryAttribute.offset; - - if (data.isInstancedInterleavedBuffer) { - - for (let i = 0; i < programAttribute.locationSize; i++) { - - enableAttributeAndDivisor(programAttribute.location + i, data.meshPerAttribute); - - } - - if (object.isInstancedMesh !== true && geometry._maxInstanceCount === undefined) { - - geometry._maxInstanceCount = data.meshPerAttribute * data.count; - - } - - } else { - - for (let i = 0; i < programAttribute.locationSize; i++) { - - enableAttribute(programAttribute.location + i); - - } - - } - - gl.bindBuffer(34962, buffer); - - for (let i = 0; i < programAttribute.locationSize; i++) { - - vertexAttribPointer( - programAttribute.location + i, - size / programAttribute.locationSize, - type, - normalized, - stride * bytesPerElement, - (offset + (size / programAttribute.locationSize) * i) * bytesPerElement - ); - - } - - } else { - - if (geometryAttribute.isInstancedBufferAttribute) { - - for (let i = 0; i < programAttribute.locationSize; i++) 
{ - - enableAttributeAndDivisor(programAttribute.location + i, geometryAttribute.meshPerAttribute); - - } - - if (object.isInstancedMesh !== true && geometry._maxInstanceCount === undefined) { - - geometry._maxInstanceCount = geometryAttribute.meshPerAttribute * geometryAttribute.count; - - } - - } else { - - for (let i = 0; i < programAttribute.locationSize; i++) { - - enableAttribute(programAttribute.location + i); - - } - - } - - gl.bindBuffer(34962, buffer); - - for (let i = 0; i < programAttribute.locationSize; i++) { - - vertexAttribPointer( - programAttribute.location + i, - size / programAttribute.locationSize, - type, - normalized, - size * bytesPerElement, - (size / programAttribute.locationSize) * i * bytesPerElement - ); - - } - - } - - } else if (materialDefaultAttributeValues !== undefined) { - - const value = materialDefaultAttributeValues[name]; - - if (value !== undefined) { - - switch (value.length) { - - case 2: - gl.vertexAttrib2fv(programAttribute.location, value); - break; - - case 3: - gl.vertexAttrib3fv(programAttribute.location, value); - break; - - case 4: - gl.vertexAttrib4fv(programAttribute.location, value); - break; - - default: - gl.vertexAttrib1fv(programAttribute.location, value); - - } - - } - - } - - } - - } - - disableUnusedAttributes(); - - } - - function dispose() { - - reset(); - - for (const geometryId in bindingStates) { - - const programMap = bindingStates[geometryId]; - - for (const programId in programMap) { - - const stateMap = programMap[programId]; - - for (const wireframe in stateMap) { - - deleteVertexArrayObject(stateMap[wireframe].object); - - delete stateMap[wireframe]; - - } - - delete programMap[programId]; - - } - - delete bindingStates[geometryId]; - - } - - } - - function releaseStatesOfGeometry(geometry) { - - if (bindingStates[geometry.id] === undefined) return; - - const programMap = bindingStates[geometry.id]; - - for (const programId in programMap) { - - const stateMap = programMap[programId]; - - for (const wireframe in stateMap) { - - deleteVertexArrayObject(stateMap[wireframe].object); - - delete stateMap[wireframe]; - - } - - delete programMap[programId]; - - } - - delete bindingStates[geometry.id]; - - } - - function releaseStatesOfProgram(program) { - - for (const geometryId in bindingStates) { - - const programMap = bindingStates[geometryId]; - - if (programMap[program.id] === undefined) continue; - - const stateMap = programMap[program.id]; - - for (const wireframe in stateMap) { - - deleteVertexArrayObject(stateMap[wireframe].object); - - delete stateMap[wireframe]; - - } - - delete programMap[program.id]; - - } - - } - - function reset() { - - resetDefaultState(); - forceUpdate = true; - - if (currentState === defaultState) return; - - currentState = defaultState; - bindVertexArrayObject(currentState.object); - - } - - // for backward-compatibility - - function resetDefaultState() { - - defaultState.geometry = null; - defaultState.program = null; - defaultState.wireframe = false; - - } - - return { - - setup: setup, - reset: reset, - resetDefaultState: resetDefaultState, - dispose: dispose, - releaseStatesOfGeometry: releaseStatesOfGeometry, - releaseStatesOfProgram: releaseStatesOfProgram, - - initAttributes: initAttributes, - enableAttribute: enableAttribute, - disableUnusedAttributes: disableUnusedAttributes - - }; - -} - -function WebGLBufferRenderer(gl, extensions, info, capabilities) { - - const isWebGL2 = capabilities.isWebGL2; - - let mode; - - function setMode(value) { - - mode = value; - - } - - function 
render(start, count) { - - gl.drawArrays(mode, start, count); - - info.update(count, mode, 1); - - } - - function renderInstances(start, count, primcount) { - - if (primcount === 0) return; - - let extension, methodName; - - if (isWebGL2) { - - extension = gl; - methodName = 'drawArraysInstanced'; - - } else { - - extension = extensions.get('ANGLE_instanced_arrays'); - methodName = 'drawArraysInstancedANGLE'; - - if (extension === null) { - - console.error('THREE.WebGLBufferRenderer: using THREE.InstancedBufferGeometry but hardware does not support extension ANGLE_instanced_arrays.'); - return; - - } - - } - - extension[methodName](mode, start, count, primcount); - - info.update(count, mode, primcount); - - } - - // - - this.setMode = setMode; - this.render = render; - this.renderInstances = renderInstances; - -} - -function WebGLCapabilities(gl, extensions, parameters) { - - let maxAnisotropy; - - function getMaxAnisotropy() { - - if (maxAnisotropy !== undefined) return maxAnisotropy; - - if (extensions.has('EXT_texture_filter_anisotropic') === true) { - - const extension = extensions.get('EXT_texture_filter_anisotropic'); - - maxAnisotropy = gl.getParameter(extension.MAX_TEXTURE_MAX_ANISOTROPY_EXT); - - } else { - - maxAnisotropy = 0; - - } - - return maxAnisotropy; - - } - - function getMaxPrecision(precision) { - - if (precision === 'highp') { - - if (gl.getShaderPrecisionFormat(35633, 36338).precision > 0 && - gl.getShaderPrecisionFormat(35632, 36338).precision > 0) { - - return 'highp'; - - } - - precision = 'mediump'; - - } - - if (precision === 'mediump') { - - if (gl.getShaderPrecisionFormat(35633, 36337).precision > 0 && - gl.getShaderPrecisionFormat(35632, 36337).precision > 0) { - - return 'mediump'; - - } - - } - - return 'lowp'; - - } - - const isWebGL2 = typeof WebGL2RenderingContext !== 'undefined' && gl instanceof WebGL2RenderingContext; - - let precision = parameters.precision !== undefined ? parameters.precision : 'highp'; - const maxPrecision = getMaxPrecision(precision); - - if (maxPrecision !== precision) { - - console.warn('THREE.WebGLRenderer:', precision, 'not supported, using', maxPrecision, 'instead.'); - precision = maxPrecision; - - } - - const drawBuffers = isWebGL2 || extensions.has('WEBGL_draw_buffers'); - - const logarithmicDepthBuffer = parameters.logarithmicDepthBuffer === true; - - const maxTextures = gl.getParameter(34930); - const maxVertexTextures = gl.getParameter(35660); - const maxTextureSize = gl.getParameter(3379); - const maxCubemapSize = gl.getParameter(34076); - - const maxAttributes = gl.getParameter(34921); - const maxVertexUniforms = gl.getParameter(36347); - const maxVaryings = gl.getParameter(36348); - const maxFragmentUniforms = gl.getParameter(36349); - - const vertexTextures = maxVertexTextures > 0; - const floatFragmentTextures = isWebGL2 || extensions.has('OES_texture_float'); - const floatVertexTextures = vertexTextures && floatFragmentTextures; - - const maxSamples = isWebGL2 ? 
gl.getParameter(36183) : 0; - - return { - - isWebGL2: isWebGL2, - - drawBuffers: drawBuffers, - - getMaxAnisotropy: getMaxAnisotropy, - getMaxPrecision: getMaxPrecision, - - precision: precision, - logarithmicDepthBuffer: logarithmicDepthBuffer, - - maxTextures: maxTextures, - maxVertexTextures: maxVertexTextures, - maxTextureSize: maxTextureSize, - maxCubemapSize: maxCubemapSize, - - maxAttributes: maxAttributes, - maxVertexUniforms: maxVertexUniforms, - maxVaryings: maxVaryings, - maxFragmentUniforms: maxFragmentUniforms, - - vertexTextures: vertexTextures, - floatFragmentTextures: floatFragmentTextures, - floatVertexTextures: floatVertexTextures, - - maxSamples: maxSamples - - }; - -} - -function WebGLClipping(properties) { - - const scope = this; - - let globalState = null, - numGlobalPlanes = 0, - localClippingEnabled = false, - renderingShadows = false; - - const plane = new Plane(), - viewNormalMatrix = new Matrix3(), - - uniform = { value: null, needsUpdate: false }; - - this.uniform = uniform; - this.numPlanes = 0; - this.numIntersection = 0; - - this.init = function (planes, enableLocalClipping) { - - const enabled = - planes.length !== 0 || - enableLocalClipping || - // enable state of previous frame - the clipping code has to - // run another frame in order to reset the state: - numGlobalPlanes !== 0 || - localClippingEnabled; - - localClippingEnabled = enableLocalClipping; - - numGlobalPlanes = planes.length; - - return enabled; - - }; - - this.beginShadows = function () { - - renderingShadows = true; - projectPlanes(null); - - }; - - this.endShadows = function () { - - renderingShadows = false; - - }; - - this.setGlobalState = function (planes, camera) { - - globalState = projectPlanes(planes, camera, 0); - - }; - - this.setState = function (material, camera, useCache) { - - const planes = material.clippingPlanes, - clipIntersection = material.clipIntersection, - clipShadows = material.clipShadows; - - const materialProperties = properties.get(material); - - if (!localClippingEnabled || planes === null || planes.length === 0 || renderingShadows && !clipShadows) { - - // there's no local clipping - - if (renderingShadows) { - - // there's no global clipping - - projectPlanes(null); - - } else { - - resetGlobalState(); - - } - - } else { - - const nGlobal = renderingShadows ? 0 : numGlobalPlanes, - lGlobal = nGlobal * 4; - - let dstArray = materialProperties.clippingState || null; - - uniform.value = dstArray; // ensure unique state - - dstArray = projectPlanes(planes, camera, lGlobal, useCache); - - for (let i = 0; i !== lGlobal; ++i) { - - dstArray[i] = globalState[i]; - - } - - materialProperties.clippingState = dstArray; - this.numIntersection = clipIntersection ? this.numPlanes : 0; - this.numPlanes += nGlobal; - - } - - - }; - - function resetGlobalState() { - - if (uniform.value !== globalState) { - - uniform.value = globalState; - uniform.needsUpdate = numGlobalPlanes > 0; - - } - - scope.numPlanes = numGlobalPlanes; - scope.numIntersection = 0; - - } - - function projectPlanes(planes, camera, dstOffset, skipTransform) { - - const nPlanes = planes !== null ? 
planes.length : 0; - let dstArray = null; - - if (nPlanes !== 0) { - - dstArray = uniform.value; - - if (skipTransform !== true || dstArray === null) { - - const flatSize = dstOffset + nPlanes * 4, - viewMatrix = camera.matrixWorldInverse; - - viewNormalMatrix.getNormalMatrix(viewMatrix); - - if (dstArray === null || dstArray.length < flatSize) { - - dstArray = new Float32Array(flatSize); - - } - - for (let i = 0, i4 = dstOffset; i !== nPlanes; ++i, i4 += 4) { - - plane.copy(planes[i]).applyMatrix4(viewMatrix, viewNormalMatrix); - - plane.normal.toArray(dstArray, i4); - dstArray[i4 + 3] = plane.constant; - - } - - } - - uniform.value = dstArray; - uniform.needsUpdate = true; - - } - - scope.numPlanes = nPlanes; - scope.numIntersection = 0; - - return dstArray; - - } - -} - -function WebGLCubeMaps(renderer) { - - let cubemaps = new WeakMap(); - - function mapTextureMapping(texture, mapping) { - - if (mapping === EquirectangularReflectionMapping) { - - texture.mapping = CubeReflectionMapping; - - } else if (mapping === EquirectangularRefractionMapping) { - - texture.mapping = CubeRefractionMapping; - - } - - return texture; - - } - - function get(texture) { - - if (texture && texture.isTexture && texture.isRenderTargetTexture === false) { - - const mapping = texture.mapping; - - if (mapping === EquirectangularReflectionMapping || mapping === EquirectangularRefractionMapping) { - - if (cubemaps.has(texture)) { - - const cubemap = cubemaps.get(texture).texture; - return mapTextureMapping(cubemap, texture.mapping); - - } else { - - const image = texture.image; - - if (image && image.height > 0) { - - const renderTarget = new WebGLCubeRenderTarget(image.height / 2); - renderTarget.fromEquirectangularTexture(renderer, texture); - cubemaps.set(texture, renderTarget); - - texture.addEventListener('dispose', onTextureDispose); - - return mapTextureMapping(renderTarget.texture, texture.mapping); - - } else { - - // image not yet ready. try the conversion next frame - - return null; - - } - - } - - } - - } - - return texture; - - } - - function onTextureDispose(event) { - - const texture = event.target; - - texture.removeEventListener('dispose', onTextureDispose); - - const cubemap = cubemaps.get(texture); - - if (cubemap !== undefined) { - - cubemaps.delete(texture); - cubemap.dispose(); - - } - - } - - function dispose() { - - cubemaps = new WeakMap(); - - } - - return { - get: get, - dispose: dispose - }; - -} - -class OrthographicCamera extends Camera { - - constructor(left = - 1, right = 1, top = 1, bottom = - 1, near = 0.1, far = 2000) { - - super(); - - this.isOrthographicCamera = true; - - this.type = 'OrthographicCamera'; - - this.zoom = 1; - this.view = null; - - this.left = left; - this.right = right; - this.top = top; - this.bottom = bottom; - - this.near = near; - this.far = far; - - this.updateProjectionMatrix(); - - } - - copy(source, recursive) { - - super.copy(source, recursive); - - this.left = source.left; - this.right = source.right; - this.top = source.top; - this.bottom = source.bottom; - this.near = source.near; - this.far = source.far; - - this.zoom = source.zoom; - this.view = source.view === null ? 
null : Object.assign({}, source.view); - - return this; - - } - - setViewOffset(fullWidth, fullHeight, x, y, width, height) { - - if (this.view === null) { - - this.view = { - enabled: true, - fullWidth: 1, - fullHeight: 1, - offsetX: 0, - offsetY: 0, - width: 1, - height: 1 - }; - - } - - this.view.enabled = true; - this.view.fullWidth = fullWidth; - this.view.fullHeight = fullHeight; - this.view.offsetX = x; - this.view.offsetY = y; - this.view.width = width; - this.view.height = height; - - this.updateProjectionMatrix(); - - } - - clearViewOffset() { - - if (this.view !== null) { - - this.view.enabled = false; - - } - - this.updateProjectionMatrix(); - - } - - updateProjectionMatrix() { - - const dx = (this.right - this.left) / (2 * this.zoom); - const dy = (this.top - this.bottom) / (2 * this.zoom); - const cx = (this.right + this.left) / 2; - const cy = (this.top + this.bottom) / 2; - - let left = cx - dx; - let right = cx + dx; - let top = cy + dy; - let bottom = cy - dy; - - if (this.view !== null && this.view.enabled) { - - const scaleW = (this.right - this.left) / this.view.fullWidth / this.zoom; - const scaleH = (this.top - this.bottom) / this.view.fullHeight / this.zoom; - - left += scaleW * this.view.offsetX; - right = left + scaleW * this.view.width; - top -= scaleH * this.view.offsetY; - bottom = top - scaleH * this.view.height; - - } - - this.projectionMatrix.makeOrthographic(left, right, top, bottom, this.near, this.far); - - this.projectionMatrixInverse.copy(this.projectionMatrix).invert(); - - } - - toJSON(meta) { - - const data = super.toJSON(meta); - - data.object.zoom = this.zoom; - data.object.left = this.left; - data.object.right = this.right; - data.object.top = this.top; - data.object.bottom = this.bottom; - data.object.near = this.near; - data.object.far = this.far; - - if (this.view !== null) data.object.view = Object.assign({}, this.view); - - return data; - - } - -} - -const LOD_MIN = 4; - -// The standard deviations (radians) associated with the extra mips. These are -// chosen to approximate a Trowbridge-Reitz distribution function times the -// geometric shadowing function. These sigma values squared must match the -// variance #defines in cube_uv_reflection_fragment.glsl.js. -const EXTRA_LOD_SIGMA = [0.125, 0.215, 0.35, 0.446, 0.526, 0.582]; - -// The maximum length of the blur for loop. Smaller sigmas will use fewer -// samples and exit early, but not recompile the shader. -const MAX_SAMPLES = 20; - -const _flatCamera = /*@__PURE__*/ new OrthographicCamera(); -const _clearColor = /*@__PURE__*/ new Color(); -let _oldTarget = null; - -// Golden Ratio -const PHI = (1 + Math.sqrt(5)) / 2; -const INV_PHI = 1 / PHI; - -// Vertices of a dodecahedron (except the opposites, which represent the -// same axis), used as axis directions evenly spread on a sphere. -const _axisDirections = [ - /*@__PURE__*/ new Vector3(1, 1, 1), - /*@__PURE__*/ new Vector3(- 1, 1, 1), - /*@__PURE__*/ new Vector3(1, 1, - 1), - /*@__PURE__*/ new Vector3(- 1, 1, - 1), - /*@__PURE__*/ new Vector3(0, PHI, INV_PHI), - /*@__PURE__*/ new Vector3(0, PHI, - INV_PHI), - /*@__PURE__*/ new Vector3(INV_PHI, 0, PHI), - /*@__PURE__*/ new Vector3(- INV_PHI, 0, PHI), - /*@__PURE__*/ new Vector3(PHI, INV_PHI, 0), - /*@__PURE__*/ new Vector3(- PHI, INV_PHI, 0)]; - -/** - * This class generates a Prefiltered, Mipmapped Radiance Environment Map - * (PMREM) from a cubeMap environment texture. This allows different levels of - * blur to be quickly accessed based on material roughness. 
It is packed into a - * special CubeUV format that allows us to perform custom interpolation so that - * we can support nonlinear formats such as RGBE. Unlike a traditional mipmap - * chain, it only goes down to the LOD_MIN level (above), and then creates extra - * even more filtered 'mips' at the same LOD_MIN resolution, associated with - * higher roughness levels. In this way we maintain resolution to smoothly - * interpolate diffuse lighting while limiting sampling computation. - * - * Paper: Fast, Accurate Image-Based Lighting - * https://drive.google.com/file/d/15y8r_UpKlU9SvV4ILb0C3qCPecS8pvLz/view -*/ - -class PMREMGenerator { - - constructor(renderer) { - - this._renderer = renderer; - this._pingPongRenderTarget = null; - - this._lodMax = 0; - this._cubeSize = 0; - this._lodPlanes = []; - this._sizeLods = []; - this._sigmas = []; - - this._blurMaterial = null; - this._cubemapMaterial = null; - this._equirectMaterial = null; - - this._compileMaterial(this._blurMaterial); - - } - - /** - * Generates a PMREM from a supplied Scene, which can be faster than using an - * image if networking bandwidth is low. Optional sigma specifies a blur radius - * in radians to be applied to the scene before PMREM generation. Optional near - * and far planes ensure the scene is rendered in its entirety (the cubeCamera - * is placed at the origin). - */ - fromScene(scene, sigma = 0, near = 0.1, far = 100) { - - _oldTarget = this._renderer.getRenderTarget(); - - this._setSize(256); - - const cubeUVRenderTarget = this._allocateTargets(); - cubeUVRenderTarget.depthBuffer = true; - - this._sceneToCubeUV(scene, near, far, cubeUVRenderTarget); - - if (sigma > 0) { - - this._blur(cubeUVRenderTarget, 0, 0, sigma); - - } - - this._applyPMREM(cubeUVRenderTarget); - this._cleanup(cubeUVRenderTarget); - - return cubeUVRenderTarget; - - } - - /** - * Generates a PMREM from an equirectangular texture, which can be either LDR - * or HDR. The ideal input image size is 1k (1024 x 512), - * as this matches best with the 256 x 256 cubemap output. - */ - fromEquirectangular(equirectangular, renderTarget = null) { - - return this._fromTexture(equirectangular, renderTarget); - - } - - /** - * Generates a PMREM from an cubemap texture, which can be either LDR - * or HDR. The ideal input cube size is 256 x 256, - * as this matches best with the 256 x 256 cubemap output. - */ - fromCubemap(cubemap, renderTarget = null) { - - return this._fromTexture(cubemap, renderTarget); - - } - - /** - * Pre-compiles the cubemap shader. You can get faster start-up by invoking this method during - * your texture's network fetch for increased concurrency. - */ - compileCubemapShader() { - - if (this._cubemapMaterial === null) { - - this._cubemapMaterial = _getCubemapMaterial(); - this._compileMaterial(this._cubemapMaterial); - - } - - } - - /** - * Pre-compiles the equirectangular shader. You can get faster start-up by invoking this method during - * your texture's network fetch for increased concurrency. - */ - compileEquirectangularShader() { - - if (this._equirectMaterial === null) { - - this._equirectMaterial = _getEquirectMaterial(); - this._compileMaterial(this._equirectMaterial); - - } - - } - - /** - * Disposes of the PMREMGenerator's internal memory. Note that PMREMGenerator is a static class, - * so you should not need more than one PMREMGenerator object. If you do, calling dispose() on - * one of them will cause any others to also become unusable. 
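 *
 * A minimal usage sketch (illustrative only; it assumes a THREE.WebGLRenderer instance
 * named `renderer` and a Scene named `scene`, and that scene.environment accepts the
 * returned CubeUV texture, as in recent three.js releases):
 *
 *   const pmremGenerator = new PMREMGenerator( renderer );
 *   const cubeUVTarget = pmremGenerator.fromScene( scene, 0.04 ); // 0.04 rad pre-blur, purely illustrative
 *   scene.environment = cubeUVTarget.texture;
 *   pmremGenerator.dispose(); // the PMREM target generated above remains valid afterwards
 *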
- */ - dispose() { - - this._dispose(); - - if (this._cubemapMaterial !== null) this._cubemapMaterial.dispose(); - if (this._equirectMaterial !== null) this._equirectMaterial.dispose(); - - } - - // private interface - - _setSize(cubeSize) { - - this._lodMax = Math.floor(Math.log2(cubeSize)); - this._cubeSize = Math.pow(2, this._lodMax); - - } - - _dispose() { - - if (this._blurMaterial !== null) this._blurMaterial.dispose(); - - if (this._pingPongRenderTarget !== null) this._pingPongRenderTarget.dispose(); - - for (let i = 0; i < this._lodPlanes.length; i++) { - - this._lodPlanes[i].dispose(); - - } - - } - - _cleanup(outputTarget) { - - this._renderer.setRenderTarget(_oldTarget); - outputTarget.scissorTest = false; - _setViewport(outputTarget, 0, 0, outputTarget.width, outputTarget.height); - - } - - _fromTexture(texture, renderTarget) { - - if (texture.mapping === CubeReflectionMapping || texture.mapping === CubeRefractionMapping) { - - this._setSize(texture.image.length === 0 ? 16 : (texture.image[0].width || texture.image[0].image.width)); - - } else { // Equirectangular - - this._setSize(texture.image.width / 4); - - } - - _oldTarget = this._renderer.getRenderTarget(); - - const cubeUVRenderTarget = renderTarget || this._allocateTargets(); - this._textureToCubeUV(texture, cubeUVRenderTarget); - this._applyPMREM(cubeUVRenderTarget); - this._cleanup(cubeUVRenderTarget); - - return cubeUVRenderTarget; - - } - - _allocateTargets() { - - const width = 3 * Math.max(this._cubeSize, 16 * 7); - const height = 4 * this._cubeSize; - - const params = { - magFilter: LinearFilter, - minFilter: LinearFilter, - generateMipmaps: false, - type: HalfFloatType, - format: RGBAFormat, - encoding: LinearEncoding, - depthBuffer: false - }; - - const cubeUVRenderTarget = _createRenderTarget(width, height, params); - - if (this._pingPongRenderTarget === null || this._pingPongRenderTarget.width !== width || this._pingPongRenderTarget.height !== height) { - - if (this._pingPongRenderTarget !== null) { - - this._dispose(); - - } - - this._pingPongRenderTarget = _createRenderTarget(width, height, params); - - const { _lodMax } = this; - ({ sizeLods: this._sizeLods, lodPlanes: this._lodPlanes, sigmas: this._sigmas } = _createPlanes(_lodMax)); - - this._blurMaterial = _getBlurShader(_lodMax, width, height); - - } - - return cubeUVRenderTarget; - - } - - _compileMaterial(material) { - - const tmpMesh = new Mesh(this._lodPlanes[0], material); - this._renderer.compile(tmpMesh, _flatCamera); - - } - - _sceneToCubeUV(scene, near, far, cubeUVRenderTarget) { - - const fov = 90; - const aspect = 1; - const cubeCamera = new PerspectiveCamera(fov, aspect, near, far); - const upSign = [1, - 1, 1, 1, 1, 1]; - const forwardSign = [1, 1, 1, - 1, - 1, - 1]; - const renderer = this._renderer; - - const originalAutoClear = renderer.autoClear; - const toneMapping = renderer.toneMapping; - renderer.getClearColor(_clearColor); - - renderer.toneMapping = NoToneMapping; - renderer.autoClear = false; - - const backgroundMaterial = new MeshBasicMaterial({ - name: 'PMREM.Background', - side: BackSide, - depthWrite: false, - depthTest: false, - }); - - const backgroundBox = new Mesh(new BoxGeometry(), backgroundMaterial); - - let useSolidColor = false; - const background = scene.background; - - if (background) { - - if (background.isColor) { - - backgroundMaterial.color.copy(background); - scene.background = null; - useSolidColor = true; - - } - - } else { - - backgroundMaterial.color.copy(_clearColor); - useSolidColor = true; - - } - - for 
(let i = 0; i < 6; i++) { - - const col = i % 3; - - if (col === 0) { - - cubeCamera.up.set(0, upSign[i], 0); - cubeCamera.lookAt(forwardSign[i], 0, 0); - - } else if (col === 1) { - - cubeCamera.up.set(0, 0, upSign[i]); - cubeCamera.lookAt(0, forwardSign[i], 0); - - } else { - - cubeCamera.up.set(0, upSign[i], 0); - cubeCamera.lookAt(0, 0, forwardSign[i]); - - } - - const size = this._cubeSize; - - _setViewport(cubeUVRenderTarget, col * size, i > 2 ? size : 0, size, size); - - renderer.setRenderTarget(cubeUVRenderTarget); - - if (useSolidColor) { - - renderer.render(backgroundBox, cubeCamera); - - } - - renderer.render(scene, cubeCamera); - - } - - backgroundBox.geometry.dispose(); - backgroundBox.material.dispose(); - - renderer.toneMapping = toneMapping; - renderer.autoClear = originalAutoClear; - scene.background = background; - - } - - _textureToCubeUV(texture, cubeUVRenderTarget) { - - const renderer = this._renderer; - - const isCubeTexture = (texture.mapping === CubeReflectionMapping || texture.mapping === CubeRefractionMapping); - - if (isCubeTexture) { - - if (this._cubemapMaterial === null) { - - this._cubemapMaterial = _getCubemapMaterial(); - - } - - this._cubemapMaterial.uniforms.flipEnvMap.value = (texture.isRenderTargetTexture === false) ? - 1 : 1; - - } else { - - if (this._equirectMaterial === null) { - - this._equirectMaterial = _getEquirectMaterial(); - - } - - } - - const material = isCubeTexture ? this._cubemapMaterial : this._equirectMaterial; - const mesh = new Mesh(this._lodPlanes[0], material); - - const uniforms = material.uniforms; - - uniforms['envMap'].value = texture; - - const size = this._cubeSize; - - _setViewport(cubeUVRenderTarget, 0, 0, 3 * size, 2 * size); - - renderer.setRenderTarget(cubeUVRenderTarget); - renderer.render(mesh, _flatCamera); - - } - - _applyPMREM(cubeUVRenderTarget) { - - const renderer = this._renderer; - const autoClear = renderer.autoClear; - renderer.autoClear = false; - - for (let i = 1; i < this._lodPlanes.length; i++) { - - const sigma = Math.sqrt(this._sigmas[i] * this._sigmas[i] - this._sigmas[i - 1] * this._sigmas[i - 1]); - - const poleAxis = _axisDirections[(i - 1) % _axisDirections.length]; - - this._blur(cubeUVRenderTarget, i - 1, i, sigma, poleAxis); - - } - - renderer.autoClear = autoClear; - - } - - /** - * This is a two-pass Gaussian blur for a cubemap. Normally this is done - * vertically and horizontally, but this breaks down on a cube. Here we apply - * the blur latitudinally (around the poles), and then longitudinally (towards - * the poles) to approximate the orthogonally-separable blur. It is least - * accurate at the poles, but still does a decent job. - */ - _blur(cubeUVRenderTarget, lodIn, lodOut, sigma, poleAxis) { - - const pingPongRenderTarget = this._pingPongRenderTarget; - - this._halfBlur( - cubeUVRenderTarget, - pingPongRenderTarget, - lodIn, - lodOut, - sigma, - 'latitudinal', - poleAxis); - - this._halfBlur( - pingPongRenderTarget, - cubeUVRenderTarget, - lodOut, - lodOut, - sigma, - 'longitudinal', - poleAxis); - - } - - _halfBlur(targetIn, targetOut, lodIn, lodOut, sigmaRadians, direction, poleAxis) { - - const renderer = this._renderer; - const blurMaterial = this._blurMaterial; - - if (direction !== 'latitudinal' && direction !== 'longitudinal') { - - console.error( - 'blur direction must be either latitudinal or longitudinal!'); - - } - - // Number of standard deviations at which to cut off the discrete approximation. 
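// A worked instance of the tap-count arithmetic below (illustrative numbers only):
// if this._sizeLods[ lodIn ] were 16, then pixels = 15 and radiansPerPixel = PI / 30 ≈ 0.105,
// so a finite sigmaRadians of 0.35 would give sigmaPixels ≈ 3.34 and
// samples = 1 + Math.floor( 3 * 3.34 ) = 11, comfortably below MAX_SAMPLES ( 20 ).
// A non-finite sigmaRadians instead falls back to the full MAX_SAMPLES tap count.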
- const STANDARD_DEVIATIONS = 3; - - const blurMesh = new Mesh(this._lodPlanes[lodOut], blurMaterial); - const blurUniforms = blurMaterial.uniforms; - - const pixels = this._sizeLods[lodIn] - 1; - const radiansPerPixel = isFinite(sigmaRadians) ? Math.PI / (2 * pixels) : 2 * Math.PI / (2 * MAX_SAMPLES - 1); - const sigmaPixels = sigmaRadians / radiansPerPixel; - const samples = isFinite(sigmaRadians) ? 1 + Math.floor(STANDARD_DEVIATIONS * sigmaPixels) : MAX_SAMPLES; - - if (samples > MAX_SAMPLES) { - - console.warn(`sigmaRadians, ${sigmaRadians}, is too large and will clip, as it requested ${samples} samples when the maximum is set to ${MAX_SAMPLES}`); - - } - - const weights = []; - let sum = 0; - - for (let i = 0; i < MAX_SAMPLES; ++i) { - - const x = i / sigmaPixels; - const weight = Math.exp(- x * x / 2); - weights.push(weight); - - if (i === 0) { - - sum += weight; - - } else if (i < samples) { - - sum += 2 * weight; - - } - - } - - for (let i = 0; i < weights.length; i++) { - - weights[i] = weights[i] / sum; - - } - - blurUniforms['envMap'].value = targetIn.texture; - blurUniforms['samples'].value = samples; - blurUniforms['weights'].value = weights; - blurUniforms['latitudinal'].value = direction === 'latitudinal'; - - if (poleAxis) { - - blurUniforms['poleAxis'].value = poleAxis; - - } - - const { _lodMax } = this; - blurUniforms['dTheta'].value = radiansPerPixel; - blurUniforms['mipInt'].value = _lodMax - lodIn; - - const outputSize = this._sizeLods[lodOut]; - const x = 3 * outputSize * (lodOut > _lodMax - LOD_MIN ? lodOut - _lodMax + LOD_MIN : 0); - const y = 4 * (this._cubeSize - outputSize); - - _setViewport(targetOut, x, y, 3 * outputSize, 2 * outputSize); - renderer.setRenderTarget(targetOut); - renderer.render(blurMesh, _flatCamera); - - } - -} - - - -function _createPlanes(lodMax) { - - const lodPlanes = []; - const sizeLods = []; - const sigmas = []; - - let lod = lodMax; - - const totalLods = lodMax - LOD_MIN + 1 + EXTRA_LOD_SIGMA.length; - - for (let i = 0; i < totalLods; i++) { - - const sizeLod = Math.pow(2, lod); - sizeLods.push(sizeLod); - let sigma = 1.0 / sizeLod; - - if (i > lodMax - LOD_MIN) { - - sigma = EXTRA_LOD_SIGMA[i - lodMax + LOD_MIN - 1]; - - } else if (i === 0) { - - sigma = 0; - - } - - sigmas.push(sigma); - - const texelSize = 1.0 / (sizeLod - 2); - const min = - texelSize; - const max = 1 + texelSize; - const uv1 = [min, min, max, min, max, max, min, min, max, max, min, max]; - - const cubeFaces = 6; - const vertices = 6; - const positionSize = 3; - const uvSize = 2; - const faceIndexSize = 1; - - const position = new Float32Array(positionSize * vertices * cubeFaces); - const uv = new Float32Array(uvSize * vertices * cubeFaces); - const faceIndex = new Float32Array(faceIndexSize * vertices * cubeFaces); - - for (let face = 0; face < cubeFaces; face++) { - - const x = (face % 3) * 2 / 3 - 1; - const y = face > 2 ? 
0 : - 1; - const coordinates = [ - x, y, 0, - x + 2 / 3, y, 0, - x + 2 / 3, y + 1, 0, - x, y, 0, - x + 2 / 3, y + 1, 0, - x, y + 1, 0 - ]; - position.set(coordinates, positionSize * vertices * face); - uv.set(uv1, uvSize * vertices * face); - const fill = [face, face, face, face, face, face]; - faceIndex.set(fill, faceIndexSize * vertices * face); - - } - - const planes = new BufferGeometry(); - planes.setAttribute('position', new BufferAttribute(position, positionSize)); - planes.setAttribute('uv', new BufferAttribute(uv, uvSize)); - planes.setAttribute('faceIndex', new BufferAttribute(faceIndex, faceIndexSize)); - lodPlanes.push(planes); - - if (lod > LOD_MIN) { - - lod--; - - } - - } - - return { lodPlanes, sizeLods, sigmas }; - -} - -function _createRenderTarget(width, height, params) { - - const cubeUVRenderTarget = new WebGLRenderTarget(width, height, params); - cubeUVRenderTarget.texture.mapping = CubeUVReflectionMapping; - cubeUVRenderTarget.texture.name = 'PMREM.cubeUv'; - cubeUVRenderTarget.scissorTest = true; - return cubeUVRenderTarget; - -} - -function _setViewport(target, x, y, width, height) { - - target.viewport.set(x, y, width, height); - target.scissor.set(x, y, width, height); - -} - -function _getBlurShader(lodMax, width, height) { - - const weights = new Float32Array(MAX_SAMPLES); - const poleAxis = new Vector3(0, 1, 0); - const shaderMaterial = new ShaderMaterial({ - - name: 'SphericalGaussianBlur', - - defines: { - 'n': MAX_SAMPLES, - 'CUBEUV_TEXEL_WIDTH': 1.0 / width, - 'CUBEUV_TEXEL_HEIGHT': 1.0 / height, - 'CUBEUV_MAX_MIP': `${lodMax}.0`, - }, - - uniforms: { - 'envMap': { value: null }, - 'samples': { value: 1 }, - 'weights': { value: weights }, - 'latitudinal': { value: false }, - 'dTheta': { value: 0 }, - 'mipInt': { value: 0 }, - 'poleAxis': { value: poleAxis } - }, - - vertexShader: _getCommonVertexShader(), - - fragmentShader: /* glsl */` - - precision mediump float; - precision mediump int; - - varying vec3 vOutputDirection; - - uniform sampler2D envMap; - uniform int samples; - uniform float weights[ n ]; - uniform bool latitudinal; - uniform float dTheta; - uniform float mipInt; - uniform vec3 poleAxis; - - #define ENVMAP_TYPE_CUBE_UV - #include - - vec3 getSample( float theta, vec3 axis ) { - - float cosTheta = cos( theta ); - // Rodrigues' axis-angle rotation - vec3 sampleDirection = vOutputDirection * cosTheta - + cross( axis, vOutputDirection ) * sin( theta ) - + axis * dot( axis, vOutputDirection ) * ( 1.0 - cosTheta ); - - return bilinearCubeUV( envMap, sampleDirection, mipInt ); - - } - - void main() { - - vec3 axis = latitudinal ? 
poleAxis : cross( poleAxis, vOutputDirection ); - - if ( all( equal( axis, vec3( 0.0 ) ) ) ) { - - axis = vec3( vOutputDirection.z, 0.0, - vOutputDirection.x ); - - } - - axis = normalize( axis ); - - gl_FragColor = vec4( 0.0, 0.0, 0.0, 1.0 ); - gl_FragColor.rgb += weights[ 0 ] * getSample( 0.0, axis ); - - for ( int i = 1; i < n; i++ ) { - - if ( i >= samples ) { - - break; - - } - - float theta = dTheta * float( i ); - gl_FragColor.rgb += weights[ i ] * getSample( -1.0 * theta, axis ); - gl_FragColor.rgb += weights[ i ] * getSample( theta, axis ); - - } - - } - `, - - blending: NoBlending, - depthTest: false, - depthWrite: false - - }); - - return shaderMaterial; - -} - -function _getEquirectMaterial() { - - return new ShaderMaterial({ - - name: 'EquirectangularToCubeUV', - - uniforms: { - 'envMap': { value: null } - }, - - vertexShader: _getCommonVertexShader(), - - fragmentShader: /* glsl */` - - precision mediump float; - precision mediump int; - - varying vec3 vOutputDirection; - - uniform sampler2D envMap; - - #include - - void main() { - - vec3 outputDirection = normalize( vOutputDirection ); - vec2 uv = equirectUv( outputDirection ); - - gl_FragColor = vec4( texture2D ( envMap, uv ).rgb, 1.0 ); - - } - `, - - blending: NoBlending, - depthTest: false, - depthWrite: false - - }); - -} - -function _getCubemapMaterial() { - - return new ShaderMaterial({ - - name: 'CubemapToCubeUV', - - uniforms: { - 'envMap': { value: null }, - 'flipEnvMap': { value: - 1 } - }, - - vertexShader: _getCommonVertexShader(), - - fragmentShader: /* glsl */` - - precision mediump float; - precision mediump int; - - uniform float flipEnvMap; - - varying vec3 vOutputDirection; - - uniform samplerCube envMap; - - void main() { - - gl_FragColor = textureCube( envMap, vec3( flipEnvMap * vOutputDirection.x, vOutputDirection.yz ) ); - - } - `, - - blending: NoBlending, - depthTest: false, - depthWrite: false - - }); - -} - -function _getCommonVertexShader() { - - return /* glsl */` - - precision mediump float; - precision mediump int; - - attribute float faceIndex; - - varying vec3 vOutputDirection; - - // RH coordinate system; PMREM face-indexing convention - vec3 getDirection( vec2 uv, float face ) { - - uv = 2.0 * uv - 1.0; - - vec3 direction = vec3( uv, 1.0 ); - - if ( face == 0.0 ) { - - direction = direction.zyx; // ( 1, v, u ) pos x - - } else if ( face == 1.0 ) { - - direction = direction.xzy; - direction.xz *= -1.0; // ( -u, 1, -v ) pos y - - } else if ( face == 2.0 ) { - - direction.x *= -1.0; // ( -u, v, 1 ) pos z - - } else if ( face == 3.0 ) { - - direction = direction.zyx; - direction.xz *= -1.0; // ( -1, v, -u ) neg x - - } else if ( face == 4.0 ) { - - direction = direction.xzy; - direction.xy *= -1.0; // ( -u, -1, v ) neg y - - } else if ( face == 5.0 ) { - - direction.z *= -1.0; // ( u, v, -1 ) neg z - - } - - return direction; - - } - - void main() { - - vOutputDirection = getDirection( uv, faceIndex ); - gl_Position = vec4( position, 1.0 ); - - } - `; - -} - -function WebGLCubeUVMaps(renderer) { - - let cubeUVmaps = new WeakMap(); - - let pmremGenerator = null; - - function get(texture) { - - if (texture && texture.isTexture) { - - const mapping = texture.mapping; - - const isEquirectMap = (mapping === EquirectangularReflectionMapping || mapping === EquirectangularRefractionMapping); - const isCubeMap = (mapping === CubeReflectionMapping || mapping === CubeRefractionMapping); - - // equirect/cube map to cubeUV conversion - - if (isEquirectMap || isCubeMap) { - - if 
(texture.isRenderTargetTexture && texture.needsPMREMUpdate === true) { - - texture.needsPMREMUpdate = false; - - let renderTarget = cubeUVmaps.get(texture); - - if (pmremGenerator === null) pmremGenerator = new PMREMGenerator(renderer); - - renderTarget = isEquirectMap ? pmremGenerator.fromEquirectangular(texture, renderTarget) : pmremGenerator.fromCubemap(texture, renderTarget); - cubeUVmaps.set(texture, renderTarget); - - return renderTarget.texture; - - } else { - - if (cubeUVmaps.has(texture)) { - - return cubeUVmaps.get(texture).texture; - - } else { - - const image = texture.image; - - if ((isEquirectMap && image && image.height > 0) || (isCubeMap && image && isCubeTextureComplete(image))) { - - if (pmremGenerator === null) pmremGenerator = new PMREMGenerator(renderer); - - const renderTarget = isEquirectMap ? pmremGenerator.fromEquirectangular(texture) : pmremGenerator.fromCubemap(texture); - cubeUVmaps.set(texture, renderTarget); - - texture.addEventListener('dispose', onTextureDispose); - - return renderTarget.texture; - - } else { - - // image not yet ready. try the conversion next frame - - return null; - - } - - } - - } - - } - - } - - return texture; - - } - - function isCubeTextureComplete(image) { - - let count = 0; - const length = 6; - - for (let i = 0; i < length; i++) { - - if (image[i] !== undefined) count++; - - } - - return count === length; - - - } - - function onTextureDispose(event) { - - const texture = event.target; - - texture.removeEventListener('dispose', onTextureDispose); - - const cubemapUV = cubeUVmaps.get(texture); - - if (cubemapUV !== undefined) { - - cubeUVmaps.delete(texture); - cubemapUV.dispose(); - - } - - } - - function dispose() { - - cubeUVmaps = new WeakMap(); - - if (pmremGenerator !== null) { - - pmremGenerator.dispose(); - pmremGenerator = null; - - } - - } - - return { - get: get, - dispose: dispose - }; - -} - -function WebGLExtensions(gl) { - - const extensions = {}; - - function getExtension(name) { - - if (extensions[name] !== undefined) { - - return extensions[name]; - - } - - let extension; - - switch (name) { - - case 'WEBGL_depth_texture': - extension = gl.getExtension('WEBGL_depth_texture') || gl.getExtension('MOZ_WEBGL_depth_texture') || gl.getExtension('WEBKIT_WEBGL_depth_texture'); - break; - - case 'EXT_texture_filter_anisotropic': - extension = gl.getExtension('EXT_texture_filter_anisotropic') || gl.getExtension('MOZ_EXT_texture_filter_anisotropic') || gl.getExtension('WEBKIT_EXT_texture_filter_anisotropic'); - break; - - case 'WEBGL_compressed_texture_s3tc': - extension = gl.getExtension('WEBGL_compressed_texture_s3tc') || gl.getExtension('MOZ_WEBGL_compressed_texture_s3tc') || gl.getExtension('WEBKIT_WEBGL_compressed_texture_s3tc'); - break; - - case 'WEBGL_compressed_texture_pvrtc': - extension = gl.getExtension('WEBGL_compressed_texture_pvrtc') || gl.getExtension('WEBKIT_WEBGL_compressed_texture_pvrtc'); - break; - - default: - extension = gl.getExtension(name); - - } - - extensions[name] = extension; - - return extension; - - } - - return { - - has: function (name) { - - return getExtension(name) !== null; - - }, - - init: function (capabilities) { - - if (capabilities.isWebGL2) { - - getExtension('EXT_color_buffer_float'); - - } else { - - getExtension('WEBGL_depth_texture'); - getExtension('OES_texture_float'); - getExtension('OES_texture_half_float'); - getExtension('OES_texture_half_float_linear'); - getExtension('OES_standard_derivatives'); - getExtension('OES_element_index_uint'); - 
getExtension('OES_vertex_array_object'); - getExtension('ANGLE_instanced_arrays'); - - } - - getExtension('OES_texture_float_linear'); - getExtension('EXT_color_buffer_half_float'); - getExtension('WEBGL_multisampled_render_to_texture'); - - }, - - get: function (name) { - - const extension = getExtension(name); - - if (extension === null) { - - console.warn('THREE.WebGLRenderer: ' + name + ' extension not supported.'); - - } - - return extension; - - } - - }; - -} - -function WebGLGeometries(gl, attributes, info, bindingStates) { - - const geometries = {}; - const wireframeAttributes = new WeakMap(); - - function onGeometryDispose(event) { - - const geometry = event.target; - - if (geometry.index !== null) { - - attributes.remove(geometry.index); - - } - - for (const name in geometry.attributes) { - - attributes.remove(geometry.attributes[name]); - - } - - geometry.removeEventListener('dispose', onGeometryDispose); - - delete geometries[geometry.id]; - - const attribute = wireframeAttributes.get(geometry); - - if (attribute) { - - attributes.remove(attribute); - wireframeAttributes.delete(geometry); - - } - - bindingStates.releaseStatesOfGeometry(geometry); - - if (geometry.isInstancedBufferGeometry === true) { - - delete geometry._maxInstanceCount; - - } - - // - - info.memory.geometries--; - - } - - function get(object, geometry) { - - if (geometries[geometry.id] === true) return geometry; - - geometry.addEventListener('dispose', onGeometryDispose); - - geometries[geometry.id] = true; - - info.memory.geometries++; - - return geometry; - - } - - function update(geometry) { - - const geometryAttributes = geometry.attributes; - - // Updating index buffer in VAO now. See WebGLBindingStates. - - for (const name in geometryAttributes) { - - attributes.update(geometryAttributes[name], 34962); - - } - - // morph targets - - const morphAttributes = geometry.morphAttributes; - - for (const name in morphAttributes) { - - const array = morphAttributes[name]; - - for (let i = 0, l = array.length; i < l; i++) { - - attributes.update(array[i], 34962); - - } - - } - - } - - function updateWireframeAttribute(geometry) { - - const indices = []; - - const geometryIndex = geometry.index; - const geometryPosition = geometry.attributes.position; - let version = 0; - - if (geometryIndex !== null) { - - const array = geometryIndex.array; - version = geometryIndex.version; - - for (let i = 0, l = array.length; i < l; i += 3) { - - const a = array[i + 0]; - const b = array[i + 1]; - const c = array[i + 2]; - - indices.push(a, b, b, c, c, a); - - } - - } else { - - const array = geometryPosition.array; - version = geometryPosition.version; - - for (let i = 0, l = (array.length / 3) - 1; i < l; i += 3) { - - const a = i + 0; - const b = i + 1; - const c = i + 2; - - indices.push(a, b, b, c, c, a); - - } - - } - - const attribute = new (arrayNeedsUint32(indices) ? Uint32BufferAttribute : Uint16BufferAttribute)(indices, 1); - attribute.version = version; - - // Updating index buffer in VAO now. 
See WebGLBindingStates - - // - - const previousAttribute = wireframeAttributes.get(geometry); - - if (previousAttribute) attributes.remove(previousAttribute); - - // - - wireframeAttributes.set(geometry, attribute); - - } - - function getWireframeAttribute(geometry) { - - const currentAttribute = wireframeAttributes.get(geometry); - - if (currentAttribute) { - - const geometryIndex = geometry.index; - - if (geometryIndex !== null) { - - // if the attribute is obsolete, create a new one - - if (currentAttribute.version < geometryIndex.version) { - - updateWireframeAttribute(geometry); - - } - - } - - } else { - - updateWireframeAttribute(geometry); - - } - - return wireframeAttributes.get(geometry); - - } - - return { - - get: get, - update: update, - - getWireframeAttribute: getWireframeAttribute - - }; - -} - -function WebGLIndexedBufferRenderer(gl, extensions, info, capabilities) { - - const isWebGL2 = capabilities.isWebGL2; - - let mode; - - function setMode(value) { - - mode = value; - - } - - let type, bytesPerElement; - - function setIndex(value) { - - type = value.type; - bytesPerElement = value.bytesPerElement; - - } - - function render(start, count) { - - gl.drawElements(mode, count, type, start * bytesPerElement); - - info.update(count, mode, 1); - - } - - function renderInstances(start, count, primcount) { - - if (primcount === 0) return; - - let extension, methodName; - - if (isWebGL2) { - - extension = gl; - methodName = 'drawElementsInstanced'; - - } else { - - extension = extensions.get('ANGLE_instanced_arrays'); - methodName = 'drawElementsInstancedANGLE'; - - if (extension === null) { - - console.error('THREE.WebGLIndexedBufferRenderer: using THREE.InstancedBufferGeometry but hardware does not support extension ANGLE_instanced_arrays.'); - return; - - } - - } - - extension[methodName](mode, count, type, start * bytesPerElement, primcount); - - info.update(count, mode, primcount); - - } - - // - - this.setMode = setMode; - this.setIndex = setIndex; - this.render = render; - this.renderInstances = renderInstances; - -} - -function WebGLInfo(gl) { - - const memory = { - geometries: 0, - textures: 0 - }; - - const render = { - frame: 0, - calls: 0, - triangles: 0, - points: 0, - lines: 0 - }; - - function update(count, mode, instanceCount) { - - render.calls++; - - switch (mode) { - - case 4: - render.triangles += instanceCount * (count / 3); - break; - - case 1: - render.lines += instanceCount * (count / 2); - break; - - case 3: - render.lines += instanceCount * (count - 1); - break; - - case 2: - render.lines += instanceCount * count; - break; - - case 0: - render.points += instanceCount * count; - break; - - default: - console.error('THREE.WebGLInfo: Unknown draw mode:', mode); - break; - - } - - } - - function reset() { - - render.frame++; - render.calls = 0; - render.triangles = 0; - render.points = 0; - render.lines = 0; - - } - - return { - memory: memory, - render: render, - programs: null, - autoReset: true, - reset: reset, - update: update - }; - -} - -function numericalSort(a, b) { - - return a[0] - b[0]; - -} - -function absNumericalSort(a, b) { - - return Math.abs(b[1]) - Math.abs(a[1]); - -} - -function WebGLMorphtargets(gl, capabilities, textures) { - - const influencesList = {}; - const morphInfluences = new Float32Array(8); - const morphTextures = new WeakMap(); - const morph = new Vector4(); - - const workInfluences = []; - - for (let i = 0; i < 8; i++) { - - workInfluences[i] = [i, 0]; - - } - - function update(object, geometry, material, program) { - - 
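// Two code paths follow, selected by capabilities.isWebGL2:
// - WebGL 2: morph position / normal / color data is packed once per geometry into a
//   DataArrayTexture (one layer per morph target) and the object's influence array is
//   uploaded directly as the 'morphTargetInfluences' uniform.
// - WebGL 1: influences are sorted by magnitude and at most the eight strongest nonzero
//   targets are bound as 'morphTarget0..7' / 'morphNormal0..7' geometry attributes.
// Both paths upload 'morphTargetBaseInfluence', so relative and absolute morph targets
// share the same shader formula.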
const objectInfluences = object.morphTargetInfluences; - - if (capabilities.isWebGL2 === true) { - - // instead of using attributes, the WebGL 2 code path encodes morph targets - // into an array of data textures. Each layer represents a single morph target. - - const morphAttribute = geometry.morphAttributes.position || geometry.morphAttributes.normal || geometry.morphAttributes.color; - const morphTargetsCount = (morphAttribute !== undefined) ? morphAttribute.length : 0; - - let entry = morphTextures.get(geometry); - - if (entry === undefined || entry.count !== morphTargetsCount) { - - if (entry !== undefined) entry.texture.dispose(); - - const hasMorphPosition = geometry.morphAttributes.position !== undefined; - const hasMorphNormals = geometry.morphAttributes.normal !== undefined; - const hasMorphColors = geometry.morphAttributes.color !== undefined; - - const morphTargets = geometry.morphAttributes.position || []; - const morphNormals = geometry.morphAttributes.normal || []; - const morphColors = geometry.morphAttributes.color || []; - - let vertexDataCount = 0; - - if (hasMorphPosition === true) vertexDataCount = 1; - if (hasMorphNormals === true) vertexDataCount = 2; - if (hasMorphColors === true) vertexDataCount = 3; - - let width = geometry.attributes.position.count * vertexDataCount; - let height = 1; - - if (width > capabilities.maxTextureSize) { - - height = Math.ceil(width / capabilities.maxTextureSize); - width = capabilities.maxTextureSize; - - } - - const buffer = new Float32Array(width * height * 4 * morphTargetsCount); - - const texture = new DataArrayTexture(buffer, width, height, morphTargetsCount); - texture.type = FloatType; - texture.needsUpdate = true; - - // fill buffer - - const vertexDataStride = vertexDataCount * 4; - - for (let i = 0; i < morphTargetsCount; i++) { - - const morphTarget = morphTargets[i]; - const morphNormal = morphNormals[i]; - const morphColor = morphColors[i]; - - const offset = width * height * 4 * i; - - for (let j = 0; j < morphTarget.count; j++) { - - const stride = j * vertexDataStride; - - if (hasMorphPosition === true) { - - morph.fromBufferAttribute(morphTarget, j); - - buffer[offset + stride + 0] = morph.x; - buffer[offset + stride + 1] = morph.y; - buffer[offset + stride + 2] = morph.z; - buffer[offset + stride + 3] = 0; - - } - - if (hasMorphNormals === true) { - - morph.fromBufferAttribute(morphNormal, j); - - buffer[offset + stride + 4] = morph.x; - buffer[offset + stride + 5] = morph.y; - buffer[offset + stride + 6] = morph.z; - buffer[offset + stride + 7] = 0; - - } - - if (hasMorphColors === true) { - - morph.fromBufferAttribute(morphColor, j); - - buffer[offset + stride + 8] = morph.x; - buffer[offset + stride + 9] = morph.y; - buffer[offset + stride + 10] = morph.z; - buffer[offset + stride + 11] = (morphColor.itemSize === 4) ? morph.w : 1; - - } - - } - - } - - entry = { - count: morphTargetsCount, - texture: texture, - size: new Vector2(width, height) - }; - - morphTextures.set(geometry, entry); - - function disposeTexture() { - - texture.dispose(); - - morphTextures.delete(geometry); - - geometry.removeEventListener('dispose', disposeTexture); - - } - - geometry.addEventListener('dispose', disposeTexture); - - } - - // - - let morphInfluencesSum = 0; - - for (let i = 0; i < objectInfluences.length; i++) { - - morphInfluencesSum += objectInfluences[i]; - - } - - const morphBaseInfluence = geometry.morphTargetsRelative ? 
1 : 1 - morphInfluencesSum; - - program.getUniforms().setValue(gl, 'morphTargetBaseInfluence', morphBaseInfluence); - program.getUniforms().setValue(gl, 'morphTargetInfluences', objectInfluences); - - program.getUniforms().setValue(gl, 'morphTargetsTexture', entry.texture, textures); - program.getUniforms().setValue(gl, 'morphTargetsTextureSize', entry.size); - - - } else { - - // When object doesn't have morph target influences defined, we treat it as a 0-length array - // This is important to make sure we set up morphTargetBaseInfluence / morphTargetInfluences - - const length = objectInfluences === undefined ? 0 : objectInfluences.length; - - let influences = influencesList[geometry.id]; - - if (influences === undefined || influences.length !== length) { - - // initialise list - - influences = []; - - for (let i = 0; i < length; i++) { - - influences[i] = [i, 0]; - - } - - influencesList[geometry.id] = influences; - - } - - // Collect influences - - for (let i = 0; i < length; i++) { - - const influence = influences[i]; - - influence[0] = i; - influence[1] = objectInfluences[i]; - - } - - influences.sort(absNumericalSort); - - for (let i = 0; i < 8; i++) { - - if (i < length && influences[i][1]) { - - workInfluences[i][0] = influences[i][0]; - workInfluences[i][1] = influences[i][1]; - - } else { - - workInfluences[i][0] = Number.MAX_SAFE_INTEGER; - workInfluences[i][1] = 0; - - } - - } - - workInfluences.sort(numericalSort); - - const morphTargets = geometry.morphAttributes.position; - const morphNormals = geometry.morphAttributes.normal; - - let morphInfluencesSum = 0; - - for (let i = 0; i < 8; i++) { - - const influence = workInfluences[i]; - const index = influence[0]; - const value = influence[1]; - - if (index !== Number.MAX_SAFE_INTEGER && value) { - - if (morphTargets && geometry.getAttribute('morphTarget' + i) !== morphTargets[index]) { - - geometry.setAttribute('morphTarget' + i, morphTargets[index]); - - } - - if (morphNormals && geometry.getAttribute('morphNormal' + i) !== morphNormals[index]) { - - geometry.setAttribute('morphNormal' + i, morphNormals[index]); - - } - - morphInfluences[i] = value; - morphInfluencesSum += value; - - } else { - - if (morphTargets && geometry.hasAttribute('morphTarget' + i) === true) { - - geometry.deleteAttribute('morphTarget' + i); - - } - - if (morphNormals && geometry.hasAttribute('morphNormal' + i) === true) { - - geometry.deleteAttribute('morphNormal' + i); - - } - - morphInfluences[i] = 0; - - } - - } - - // GLSL shader uses formula baseinfluence * base + sum(target * influence) - // This allows us to switch between absolute morphs and relative morphs without changing shader code - // When baseinfluence = 1 - sum(influence), the above is equivalent to sum((target - base) * influence) - const morphBaseInfluence = geometry.morphTargetsRelative ? 
1 : 1 - morphInfluencesSum; - - program.getUniforms().setValue(gl, 'morphTargetBaseInfluence', morphBaseInfluence); - program.getUniforms().setValue(gl, 'morphTargetInfluences', morphInfluences); - - } - - } - - return { - - update: update - - }; - -} - -function WebGLObjects(gl, geometries, attributes, info) { - - let updateMap = new WeakMap(); - - function update(object) { - - const frame = info.render.frame; - - const geometry = object.geometry; - const buffergeometry = geometries.get(object, geometry); - - // Update once per frame - - if (updateMap.get(buffergeometry) !== frame) { - - geometries.update(buffergeometry); - - updateMap.set(buffergeometry, frame); - - } - - if (object.isInstancedMesh) { - - if (object.hasEventListener('dispose', onInstancedMeshDispose) === false) { - - object.addEventListener('dispose', onInstancedMeshDispose); - - } - - attributes.update(object.instanceMatrix, 34962); - - if (object.instanceColor !== null) { - - attributes.update(object.instanceColor, 34962); - - } - - } - - return buffergeometry; - - } - - function dispose() { - - updateMap = new WeakMap(); - - } - - function onInstancedMeshDispose(event) { - - const instancedMesh = event.target; - - instancedMesh.removeEventListener('dispose', onInstancedMeshDispose); - - attributes.remove(instancedMesh.instanceMatrix); - - if (instancedMesh.instanceColor !== null) attributes.remove(instancedMesh.instanceColor); - - } - - return { - - update: update, - dispose: dispose - - }; - -} - -/** - * Uniforms of a program. - * Those form a tree structure with a special top-level container for the root, - * which you get by calling 'new WebGLUniforms( gl, program )'. - * - * - * Properties of inner nodes including the top-level container: - * - * .seq - array of nested uniforms - * .map - nested uniforms by name - * - * - * Methods of all nodes except the top-level container: - * - * .setValue( gl, value, [textures] ) - * - * uploads a uniform value(s) - * the 'textures' parameter is needed for sampler uniforms - * - * - * Static methods of the top-level container (textures factorizations): - * - * .upload( gl, seq, values, textures ) - * - * sets uniforms in 'seq' to 'values[id].value' - * - * .seqWithValue( seq, values ) : filteredSeq - * - * filters 'seq' entries with corresponding entry in values - * - * - * Methods of the top-level container (textures factorizations): - * - * .setValue( gl, name, value, textures ) - * - * sets uniform with name 'name' to 'value' - * - * .setOptional( gl, obj, prop ) - * - * like .set for an optional property of the object - * - */ - -const emptyTexture = /*@__PURE__*/ new Texture(); -const emptyArrayTexture = /*@__PURE__*/ new DataArrayTexture(); -const empty3dTexture = /*@__PURE__*/ new Data3DTexture(); -const emptyCubeTexture = /*@__PURE__*/ new CubeTexture(); - -// --- Utilities --- - -// Array Caches (provide typed arrays for temporary by size) - -const arrayCacheF32 = []; -const arrayCacheI32 = []; - -// Float32Array caches used for uploading Matrix uniforms - -const mat4array = new Float32Array(16); -const mat3array = new Float32Array(9); -const mat2array = new Float32Array(4); - -// Flattening for arrays of vectors and matrices - -function flatten(array, nBlocks, blockSize) { - - const firstElem = array[0]; - - if (firstElem <= 0 || firstElem > 0) return array; - // unoptimized: ! 
isNaN( firstElem ) - // see http://jacksondunstan.com/articles/983 - - const n = nBlocks * blockSize; - let r = arrayCacheF32[n]; - - if (r === undefined) { - - r = new Float32Array(n); - arrayCacheF32[n] = r; - - } - - if (nBlocks !== 0) { - - firstElem.toArray(r, 0); - - for (let i = 1, offset = 0; i !== nBlocks; ++i) { - - offset += blockSize; - array[i].toArray(r, offset); - - } - - } - - return r; - -} - -function arraysEqual(a, b) { - - if (a.length !== b.length) return false; - - for (let i = 0, l = a.length; i < l; i++) { - - if (a[i] !== b[i]) return false; - - } - - return true; - -} - -function copyArray(a, b) { - - for (let i = 0, l = b.length; i < l; i++) { - - a[i] = b[i]; - - } - -} - -// Texture unit allocation - -function allocTexUnits(textures, n) { - - let r = arrayCacheI32[n]; - - if (r === undefined) { - - r = new Int32Array(n); - arrayCacheI32[n] = r; - - } - - for (let i = 0; i !== n; ++i) { - - r[i] = textures.allocateTextureUnit(); - - } - - return r; - -} - -// --- Setters --- - -// Note: Defining these methods externally, because they come in a bunch -// and this way their names minify. - -// Single scalar - -function setValueV1f(gl, v) { - - const cache = this.cache; - - if (cache[0] === v) return; - - gl.uniform1f(this.addr, v); - - cache[0] = v; - -} - -// Single float vector (from flat array or THREE.VectorN) - -function setValueV2f(gl, v) { - - const cache = this.cache; - - if (v.x !== undefined) { - - if (cache[0] !== v.x || cache[1] !== v.y) { - - gl.uniform2f(this.addr, v.x, v.y); - - cache[0] = v.x; - cache[1] = v.y; - - } - - } else { - - if (arraysEqual(cache, v)) return; - - gl.uniform2fv(this.addr, v); - - copyArray(cache, v); - - } - -} - -function setValueV3f(gl, v) { - - const cache = this.cache; - - if (v.x !== undefined) { - - if (cache[0] !== v.x || cache[1] !== v.y || cache[2] !== v.z) { - - gl.uniform3f(this.addr, v.x, v.y, v.z); - - cache[0] = v.x; - cache[1] = v.y; - cache[2] = v.z; - - } - - } else if (v.r !== undefined) { - - if (cache[0] !== v.r || cache[1] !== v.g || cache[2] !== v.b) { - - gl.uniform3f(this.addr, v.r, v.g, v.b); - - cache[0] = v.r; - cache[1] = v.g; - cache[2] = v.b; - - } - - } else { - - if (arraysEqual(cache, v)) return; - - gl.uniform3fv(this.addr, v); - - copyArray(cache, v); - - } - -} - -function setValueV4f(gl, v) { - - const cache = this.cache; - - if (v.x !== undefined) { - - if (cache[0] !== v.x || cache[1] !== v.y || cache[2] !== v.z || cache[3] !== v.w) { - - gl.uniform4f(this.addr, v.x, v.y, v.z, v.w); - - cache[0] = v.x; - cache[1] = v.y; - cache[2] = v.z; - cache[3] = v.w; - - } - - } else { - - if (arraysEqual(cache, v)) return; - - gl.uniform4fv(this.addr, v); - - copyArray(cache, v); - - } - -} - -// Single matrix (from flat array or THREE.MatrixN) - -function setValueM2(gl, v) { - - const cache = this.cache; - const elements = v.elements; - - if (elements === undefined) { - - if (arraysEqual(cache, v)) return; - - gl.uniformMatrix2fv(this.addr, false, v); - - copyArray(cache, v); - - } else { - - if (arraysEqual(cache, elements)) return; - - mat2array.set(elements); - - gl.uniformMatrix2fv(this.addr, false, mat2array); - - copyArray(cache, elements); - - } - -} - -function setValueM3(gl, v) { - - const cache = this.cache; - const elements = v.elements; - - if (elements === undefined) { - - if (arraysEqual(cache, v)) return; - - gl.uniformMatrix3fv(this.addr, false, v); - - copyArray(cache, v); - - } else { - - if (arraysEqual(cache, elements)) return; - - mat3array.set(elements); - - 
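// copying into the module-level mat3array scratch buffer (mat2array / mat4array serve the
// other matrix setters) keeps this hot path free of per-call Float32Array allocations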
gl.uniformMatrix3fv(this.addr, false, mat3array); - - copyArray(cache, elements); - - } - -} - -function setValueM4(gl, v) { - - const cache = this.cache; - const elements = v.elements; - - if (elements === undefined) { - - if (arraysEqual(cache, v)) return; - - gl.uniformMatrix4fv(this.addr, false, v); - - copyArray(cache, v); - - } else { - - if (arraysEqual(cache, elements)) return; - - mat4array.set(elements); - - gl.uniformMatrix4fv(this.addr, false, mat4array); - - copyArray(cache, elements); - - } - -} - -// Single integer / boolean - -function setValueV1i(gl, v) { - - const cache = this.cache; - - if (cache[0] === v) return; - - gl.uniform1i(this.addr, v); - - cache[0] = v; - -} - -// Single integer / boolean vector (from flat array or THREE.VectorN) - -function setValueV2i(gl, v) { - - const cache = this.cache; - - if (v.x !== undefined) { - - if (cache[0] !== v.x || cache[1] !== v.y) { - - gl.uniform2i(this.addr, v.x, v.y); - - cache[0] = v.x; - cache[1] = v.y; - - } - - } else { - - if (arraysEqual(cache, v)) return; - - gl.uniform2iv(this.addr, v); - - copyArray(cache, v); - - } - -} - -function setValueV3i(gl, v) { - - const cache = this.cache; - - if (v.x !== undefined) { - - if (cache[0] !== v.x || cache[1] !== v.y || cache[2] !== v.z) { - - gl.uniform3i(this.addr, v.x, v.y, v.z); - - cache[0] = v.x; - cache[1] = v.y; - cache[2] = v.z; - - } - - } else { - - if (arraysEqual(cache, v)) return; - - gl.uniform3iv(this.addr, v); - - copyArray(cache, v); - - } - -} - -function setValueV4i(gl, v) { - - const cache = this.cache; - - if (v.x !== undefined) { - - if (cache[0] !== v.x || cache[1] !== v.y || cache[2] !== v.z || cache[3] !== v.w) { - - gl.uniform4i(this.addr, v.x, v.y, v.z, v.w); - - cache[0] = v.x; - cache[1] = v.y; - cache[2] = v.z; - cache[3] = v.w; - - } - - } else { - - if (arraysEqual(cache, v)) return; - - gl.uniform4iv(this.addr, v); - - copyArray(cache, v); - - } - -} - -// Single unsigned integer - -function setValueV1ui(gl, v) { - - const cache = this.cache; - - if (cache[0] === v) return; - - gl.uniform1ui(this.addr, v); - - cache[0] = v; - -} - -// Single unsigned integer vector (from flat array or THREE.VectorN) - -function setValueV2ui(gl, v) { - - const cache = this.cache; - - if (v.x !== undefined) { - - if (cache[0] !== v.x || cache[1] !== v.y) { - - gl.uniform2ui(this.addr, v.x, v.y); - - cache[0] = v.x; - cache[1] = v.y; - - } - - } else { - - if (arraysEqual(cache, v)) return; - - gl.uniform2uiv(this.addr, v); - - copyArray(cache, v); - - } - -} - -function setValueV3ui(gl, v) { - - const cache = this.cache; - - if (v.x !== undefined) { - - if (cache[0] !== v.x || cache[1] !== v.y || cache[2] !== v.z) { - - gl.uniform3ui(this.addr, v.x, v.y, v.z); - - cache[0] = v.x; - cache[1] = v.y; - cache[2] = v.z; - - } - - } else { - - if (arraysEqual(cache, v)) return; - - gl.uniform3uiv(this.addr, v); - - copyArray(cache, v); - - } - -} - -function setValueV4ui(gl, v) { - - const cache = this.cache; - - if (v.x !== undefined) { - - if (cache[0] !== v.x || cache[1] !== v.y || cache[2] !== v.z || cache[3] !== v.w) { - - gl.uniform4ui(this.addr, v.x, v.y, v.z, v.w); - - cache[0] = v.x; - cache[1] = v.y; - cache[2] = v.z; - cache[3] = v.w; - - } - - } else { - - if (arraysEqual(cache, v)) return; - - gl.uniform4uiv(this.addr, v); - - copyArray(cache, v); - - } - -} - - -// Single texture (2D / Cube) - -function setValueT1(gl, v, textures) { - - const cache = this.cache; - const unit = textures.allocateTextureUnit(); - - if (cache[0] !== unit) { - - 
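// the sampler uniform is just an integer texture-unit index, so it is re-uploaded only when
// the allocated unit changes; the texture itself is still bound to that unit every call below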
gl.uniform1i(this.addr, unit); - cache[0] = unit; - - } - - textures.setTexture2D(v || emptyTexture, unit); - -} - -function setValueT3D1(gl, v, textures) { - - const cache = this.cache; - const unit = textures.allocateTextureUnit(); - - if (cache[0] !== unit) { - - gl.uniform1i(this.addr, unit); - cache[0] = unit; - - } - - textures.setTexture3D(v || empty3dTexture, unit); - -} - -function setValueT6(gl, v, textures) { - - const cache = this.cache; - const unit = textures.allocateTextureUnit(); - - if (cache[0] !== unit) { - - gl.uniform1i(this.addr, unit); - cache[0] = unit; - - } - - textures.setTextureCube(v || emptyCubeTexture, unit); - -} - -function setValueT2DArray1(gl, v, textures) { - - const cache = this.cache; - const unit = textures.allocateTextureUnit(); - - if (cache[0] !== unit) { - - gl.uniform1i(this.addr, unit); - cache[0] = unit; - - } - - textures.setTexture2DArray(v || emptyArrayTexture, unit); - -} - -// Helper to pick the right setter for the singular case - -function getSingularSetter(type) { - - switch (type) { - - case 0x1406: return setValueV1f; // FLOAT - case 0x8b50: return setValueV2f; // _VEC2 - case 0x8b51: return setValueV3f; // _VEC3 - case 0x8b52: return setValueV4f; // _VEC4 - - case 0x8b5a: return setValueM2; // _MAT2 - case 0x8b5b: return setValueM3; // _MAT3 - case 0x8b5c: return setValueM4; // _MAT4 - - case 0x1404: case 0x8b56: return setValueV1i; // INT, BOOL - case 0x8b53: case 0x8b57: return setValueV2i; // _VEC2 - case 0x8b54: case 0x8b58: return setValueV3i; // _VEC3 - case 0x8b55: case 0x8b59: return setValueV4i; // _VEC4 - - case 0x1405: return setValueV1ui; // UINT - case 0x8dc6: return setValueV2ui; // _VEC2 - case 0x8dc7: return setValueV3ui; // _VEC3 - case 0x8dc8: return setValueV4ui; // _VEC4 - - case 0x8b5e: // SAMPLER_2D - case 0x8d66: // SAMPLER_EXTERNAL_OES - case 0x8dca: // INT_SAMPLER_2D - case 0x8dd2: // UNSIGNED_INT_SAMPLER_2D - case 0x8b62: // SAMPLER_2D_SHADOW - return setValueT1; - - case 0x8b5f: // SAMPLER_3D - case 0x8dcb: // INT_SAMPLER_3D - case 0x8dd3: // UNSIGNED_INT_SAMPLER_3D - return setValueT3D1; - - case 0x8b60: // SAMPLER_CUBE - case 0x8dcc: // INT_SAMPLER_CUBE - case 0x8dd4: // UNSIGNED_INT_SAMPLER_CUBE - case 0x8dc5: // SAMPLER_CUBE_SHADOW - return setValueT6; - - case 0x8dc1: // SAMPLER_2D_ARRAY - case 0x8dcf: // INT_SAMPLER_2D_ARRAY - case 0x8dd7: // UNSIGNED_INT_SAMPLER_2D_ARRAY - case 0x8dc4: // SAMPLER_2D_ARRAY_SHADOW - return setValueT2DArray1; - - } - -} - - -// Array of scalars - -function setValueV1fArray(gl, v) { - - gl.uniform1fv(this.addr, v); - -} - -// Array of vectors (from flat array or array of THREE.VectorN) - -function setValueV2fArray(gl, v) { - - const data = flatten(v, this.size, 2); - - gl.uniform2fv(this.addr, data); - -} - -function setValueV3fArray(gl, v) { - - const data = flatten(v, this.size, 3); - - gl.uniform3fv(this.addr, data); - -} - -function setValueV4fArray(gl, v) { - - const data = flatten(v, this.size, 4); - - gl.uniform4fv(this.addr, data); - -} - -// Array of matrices (from flat array or array of THREE.MatrixN) - -function setValueM2Array(gl, v) { - - const data = flatten(v, this.size, 4); - - gl.uniformMatrix2fv(this.addr, false, data); - -} - -function setValueM3Array(gl, v) { - - const data = flatten(v, this.size, 9); - - gl.uniformMatrix3fv(this.addr, false, data); - -} - -function setValueM4Array(gl, v) { - - const data = flatten(v, this.size, 16); - - gl.uniformMatrix4fv(this.addr, false, data); - -} - -// Array of integer / boolean - -function setValueV1iArray(gl, 
v) { - - gl.uniform1iv(this.addr, v); - -} - -// Array of integer / boolean vectors (from flat array) - -function setValueV2iArray(gl, v) { - - gl.uniform2iv(this.addr, v); - -} - -function setValueV3iArray(gl, v) { - - gl.uniform3iv(this.addr, v); - -} - -function setValueV4iArray(gl, v) { - - gl.uniform4iv(this.addr, v); - -} - -// Array of unsigned integer - -function setValueV1uiArray(gl, v) { - - gl.uniform1uiv(this.addr, v); - -} - -// Array of unsigned integer vectors (from flat array) - -function setValueV2uiArray(gl, v) { - - gl.uniform2uiv(this.addr, v); - -} - -function setValueV3uiArray(gl, v) { - - gl.uniform3uiv(this.addr, v); - -} - -function setValueV4uiArray(gl, v) { - - gl.uniform4uiv(this.addr, v); - -} - - -// Array of textures (2D / 3D / Cube / 2DArray) - -function setValueT1Array(gl, v, textures) { - - const cache = this.cache; - - const n = v.length; - - const units = allocTexUnits(textures, n); - - if (!arraysEqual(cache, units)) { - - gl.uniform1iv(this.addr, units); - - copyArray(cache, units); - - } - - for (let i = 0; i !== n; ++i) { - - textures.setTexture2D(v[i] || emptyTexture, units[i]); - - } - -} - -function setValueT3DArray(gl, v, textures) { - - const cache = this.cache; - - const n = v.length; - - const units = allocTexUnits(textures, n); - - if (!arraysEqual(cache, units)) { - - gl.uniform1iv(this.addr, units); - - copyArray(cache, units); - - } - - for (let i = 0; i !== n; ++i) { - - textures.setTexture3D(v[i] || empty3dTexture, units[i]); - - } - -} - -function setValueT6Array(gl, v, textures) { - - const cache = this.cache; - - const n = v.length; - - const units = allocTexUnits(textures, n); - - if (!arraysEqual(cache, units)) { - - gl.uniform1iv(this.addr, units); - - copyArray(cache, units); - - } - - for (let i = 0; i !== n; ++i) { - - textures.setTextureCube(v[i] || emptyCubeTexture, units[i]); - - } - -} - -function setValueT2DArrayArray(gl, v, textures) { - - const cache = this.cache; - - const n = v.length; - - const units = allocTexUnits(textures, n); - - if (!arraysEqual(cache, units)) { - - gl.uniform1iv(this.addr, units); - - copyArray(cache, units); - - } - - for (let i = 0; i !== n; ++i) { - - textures.setTexture2DArray(v[i] || emptyArrayTexture, units[i]); - - } - -} - - -// Helper to pick the right setter for a pure (bottom-level) array - -function getPureArraySetter(type) { - - switch (type) { - - case 0x1406: return setValueV1fArray; // FLOAT - case 0x8b50: return setValueV2fArray; // _VEC2 - case 0x8b51: return setValueV3fArray; // _VEC3 - case 0x8b52: return setValueV4fArray; // _VEC4 - - case 0x8b5a: return setValueM2Array; // _MAT2 - case 0x8b5b: return setValueM3Array; // _MAT3 - case 0x8b5c: return setValueM4Array; // _MAT4 - - case 0x1404: case 0x8b56: return setValueV1iArray; // INT, BOOL - case 0x8b53: case 0x8b57: return setValueV2iArray; // _VEC2 - case 0x8b54: case 0x8b58: return setValueV3iArray; // _VEC3 - case 0x8b55: case 0x8b59: return setValueV4iArray; // _VEC4 - - case 0x1405: return setValueV1uiArray; // UINT - case 0x8dc6: return setValueV2uiArray; // _VEC2 - case 0x8dc7: return setValueV3uiArray; // _VEC3 - case 0x8dc8: return setValueV4uiArray; // _VEC4 - - case 0x8b5e: // SAMPLER_2D - case 0x8d66: // SAMPLER_EXTERNAL_OES - case 0x8dca: // INT_SAMPLER_2D - case 0x8dd2: // UNSIGNED_INT_SAMPLER_2D - case 0x8b62: // SAMPLER_2D_SHADOW - return setValueT1Array; - - case 0x8b5f: // SAMPLER_3D - case 0x8dcb: // INT_SAMPLER_3D - case 0x8dd3: // UNSIGNED_INT_SAMPLER_3D - return setValueT3DArray; - - case 0x8b60: // 
SAMPLER_CUBE - case 0x8dcc: // INT_SAMPLER_CUBE - case 0x8dd4: // UNSIGNED_INT_SAMPLER_CUBE - case 0x8dc5: // SAMPLER_CUBE_SHADOW - return setValueT6Array; - - case 0x8dc1: // SAMPLER_2D_ARRAY - case 0x8dcf: // INT_SAMPLER_2D_ARRAY - case 0x8dd7: // UNSIGNED_INT_SAMPLER_2D_ARRAY - case 0x8dc4: // SAMPLER_2D_ARRAY_SHADOW - return setValueT2DArrayArray; - - } - -} - -// --- Uniform Classes --- - -class SingleUniform { - - constructor(id, activeInfo, addr) { - - this.id = id; - this.addr = addr; - this.cache = []; - this.setValue = getSingularSetter(activeInfo.type); - - // this.path = activeInfo.name; // DEBUG - - } - -} - -class PureArrayUniform { - - constructor(id, activeInfo, addr) { - - this.id = id; - this.addr = addr; - this.cache = []; - this.size = activeInfo.size; - this.setValue = getPureArraySetter(activeInfo.type); - - // this.path = activeInfo.name; // DEBUG - - } - -} - -class StructuredUniform { - - constructor(id) { - - this.id = id; - - this.seq = []; - this.map = {}; - - } - - setValue(gl, value, textures) { - - const seq = this.seq; - - for (let i = 0, n = seq.length; i !== n; ++i) { - - const u = seq[i]; - u.setValue(gl, value[u.id], textures); - - } - - } - -} - -// --- Top-level --- - -// Parser - builds up the property tree from the path strings - -const RePathPart = /(\w+)(\])?(\[|\.)?/g; - -// extracts -// - the identifier (member name or array index) -// - followed by an optional right bracket (found when array index) -// - followed by an optional left bracket or dot (type of subscript) -// -// Note: These portions can be read in a non-overlapping fashion and -// allow straightforward parsing of the hierarchy that WebGL encodes -// in the uniform names. - -function addUniform(container, uniformObject) { - - container.seq.push(uniformObject); - container.map[uniformObject.id] = uniformObject; - -} - -function parseUniform(activeInfo, addr, container) { - - const path = activeInfo.name, - pathLength = path.length; - - // reset RegExp object, because of the early exit of a previous run - RePathPart.lastIndex = 0; - - while (true) { - - const match = RePathPart.exec(path), - matchEnd = RePathPart.lastIndex; - - let id = match[1]; - const idIsIndex = match[2] === ']', - subscript = match[3]; - - if (idIsIndex) id = id | 0; // convert to integer - - if (subscript === undefined || subscript === '[' && matchEnd + 2 === pathLength) { - - // bare name or "pure" bottom-level array "[0]" suffix - - addUniform(container, subscript === undefined ? 
- new SingleUniform(id, activeInfo, addr) : - new PureArrayUniform(id, activeInfo, addr)); - - break; - - } else { - - // step into inner node / create it in case it doesn't exist - - const map = container.map; - let next = map[id]; - - if (next === undefined) { - - next = new StructuredUniform(id); - addUniform(container, next); - - } - - container = next; - - } - - } - -} - -// Root Container - -class WebGLUniforms { - - constructor(gl, program) { - - this.seq = []; - this.map = {}; - - const n = gl.getProgramParameter(program, 35718); - - for (let i = 0; i < n; ++i) { - - const info = gl.getActiveUniform(program, i), - addr = gl.getUniformLocation(program, info.name); - - parseUniform(info, addr, this); - - } - - } - - setValue(gl, name, value, textures) { - - const u = this.map[name]; - - if (u !== undefined) u.setValue(gl, value, textures); - - } - - setOptional(gl, object, name) { - - const v = object[name]; - - if (v !== undefined) this.setValue(gl, name, v); - - } - - static upload(gl, seq, values, textures) { - - for (let i = 0, n = seq.length; i !== n; ++i) { - - const u = seq[i], - v = values[u.id]; - - if (v.needsUpdate !== false) { - - // note: always updating when .needsUpdate is undefined - u.setValue(gl, v.value, textures); - - } - - } - - } - - static seqWithValue(seq, values) { - - const r = []; - - for (let i = 0, n = seq.length; i !== n; ++i) { - - const u = seq[i]; - if (u.id in values) r.push(u); - - } - - return r; - - } - -} - -function WebGLShader(gl, type, string) { - - const shader = gl.createShader(type); - - gl.shaderSource(shader, string); - gl.compileShader(shader); - - return shader; - -} - -let programIdCount = 0; - -function handleSource(string, errorLine) { - - const lines = string.split('\n'); - const lines2 = []; - - const from = Math.max(errorLine - 6, 0); - const to = Math.min(errorLine + 6, lines.length); - - for (let i = from; i < to; i++) { - - const line = i + 1; - lines2.push(`${line === errorLine ? 
'>' : ' '} ${line}: ${lines[i]}`); - - } - - return lines2.join('\n'); - -} - -function getEncodingComponents(encoding) { - - switch (encoding) { - - case LinearEncoding: - return ['Linear', '( value )']; - case sRGBEncoding: - return ['sRGB', '( value )']; - default: - console.warn('THREE.WebGLProgram: Unsupported encoding:', encoding); - return ['Linear', '( value )']; - - } - -} - -function getShaderErrors(gl, shader, type) { - - const status = gl.getShaderParameter(shader, 35713); - const errors = gl.getShaderInfoLog(shader).trim(); - - if (status && errors === '') return ''; - - const errorMatches = /ERROR: 0:(\d+)/.exec(errors); - if (errorMatches) { - - // --enable-privileged-webgl-extension - // console.log( '**' + type + '**', gl.getExtension( 'WEBGL_debug_shaders' ).getTranslatedShaderSource( shader ) ); - - const errorLine = parseInt(errorMatches[1]); - return type.toUpperCase() + '\n\n' + errors + '\n\n' + handleSource(gl.getShaderSource(shader), errorLine); - - } else { - - return errors; - - } - -} - -function getTexelEncodingFunction(functionName, encoding) { - - const components = getEncodingComponents(encoding); - return 'vec4 ' + functionName + '( vec4 value ) { return LinearTo' + components[0] + components[1] + '; }'; - -} - -function getToneMappingFunction(functionName, toneMapping) { - - let toneMappingName; - - switch (toneMapping) { - - case LinearToneMapping: - toneMappingName = 'Linear'; - break; - - case ReinhardToneMapping: - toneMappingName = 'Reinhard'; - break; - - case CineonToneMapping: - toneMappingName = 'OptimizedCineon'; - break; - - case ACESFilmicToneMapping: - toneMappingName = 'ACESFilmic'; - break; - - case CustomToneMapping: - toneMappingName = 'Custom'; - break; - - default: - console.warn('THREE.WebGLProgram: Unsupported toneMapping:', toneMapping); - toneMappingName = 'Linear'; - - } - - return 'vec3 ' + functionName + '( vec3 color ) { return ' + toneMappingName + 'ToneMapping( color ); }'; - -} - -function generateExtensions(parameters) { - - const chunks = [ - (parameters.extensionDerivatives || !!parameters.envMapCubeUVHeight || parameters.bumpMap || parameters.tangentSpaceNormalMap || parameters.clearcoatNormalMap || parameters.flatShading || parameters.shaderID === 'physical') ? '#extension GL_OES_standard_derivatives : enable' : '', - (parameters.extensionFragDepth || parameters.logarithmicDepthBuffer) && parameters.rendererExtensionFragDepth ? '#extension GL_EXT_frag_depth : enable' : '', - (parameters.extensionDrawBuffers && parameters.rendererExtensionDrawBuffers) ? '#extension GL_EXT_draw_buffers : require' : '', - (parameters.extensionShaderTextureLOD || parameters.envMap || parameters.transmission) && parameters.rendererExtensionShaderTextureLod ? 
'#extension GL_EXT_shader_texture_lod : enable' : '' - ]; - - return chunks.filter(filterEmptyLine).join('\n'); - -} - -function generateDefines(defines) { - - const chunks = []; - - for (const name in defines) { - - const value = defines[name]; - - if (value === false) continue; - - chunks.push('#define ' + name + ' ' + value); - - } - - return chunks.join('\n'); - -} - -function fetchAttributeLocations(gl, program) { - - const attributes = {}; - - const n = gl.getProgramParameter(program, 35721); - - for (let i = 0; i < n; i++) { - - const info = gl.getActiveAttrib(program, i); - const name = info.name; - - let locationSize = 1; - if (info.type === 35674) locationSize = 2; - if (info.type === 35675) locationSize = 3; - if (info.type === 35676) locationSize = 4; - - // console.log( 'THREE.WebGLProgram: ACTIVE VERTEX ATTRIBUTE:', name, i ); - - attributes[name] = { - type: info.type, - location: gl.getAttribLocation(program, name), - locationSize: locationSize - }; - - } - - return attributes; - -} - -function filterEmptyLine(string) { - - return string !== ''; - -} - -function replaceLightNums(string, parameters) { - - const numSpotLightCoords = parameters.numSpotLightShadows + parameters.numSpotLightMaps - parameters.numSpotLightShadowsWithMaps; - - return string - .replace(/NUM_DIR_LIGHTS/g, parameters.numDirLights) - .replace(/NUM_SPOT_LIGHTS/g, parameters.numSpotLights) - .replace(/NUM_SPOT_LIGHT_MAPS/g, parameters.numSpotLightMaps) - .replace(/NUM_SPOT_LIGHT_COORDS/g, numSpotLightCoords) - .replace(/NUM_RECT_AREA_LIGHTS/g, parameters.numRectAreaLights) - .replace(/NUM_POINT_LIGHTS/g, parameters.numPointLights) - .replace(/NUM_HEMI_LIGHTS/g, parameters.numHemiLights) - .replace(/NUM_DIR_LIGHT_SHADOWS/g, parameters.numDirLightShadows) - .replace(/NUM_SPOT_LIGHT_SHADOWS_WITH_MAPS/g, parameters.numSpotLightShadowsWithMaps) - .replace(/NUM_SPOT_LIGHT_SHADOWS/g, parameters.numSpotLightShadows) - .replace(/NUM_POINT_LIGHT_SHADOWS/g, parameters.numPointLightShadows); - -} - -function replaceClippingPlaneNums(string, parameters) { - - return string - .replace(/NUM_CLIPPING_PLANES/g, parameters.numClippingPlanes) - .replace(/UNION_CLIPPING_PLANES/g, (parameters.numClippingPlanes - parameters.numClipIntersection)); - -} - -// Resolve Includes - -const includePattern = /^[ \t]*#include +<([\w\d./]+)>/gm; - -function resolveIncludes(string) { - - return string.replace(includePattern, includeReplacer); - -} - -function includeReplacer(match, include) { - - const string = ShaderChunk[include]; - - if (string === undefined) { - - throw new Error('Can not resolve #include <' + include + '>'); - - } - - return resolveIncludes(string); - -} - -// Unroll Loops - -const unrollLoopPattern = /#pragma unroll_loop_start\s+for\s*\(\s*int\s+i\s*=\s*(\d+)\s*;\s*i\s*<\s*(\d+)\s*;\s*i\s*\+\+\s*\)\s*{([\s\S]+?)}\s+#pragma unroll_loop_end/g; - -function unrollLoops(string) { - - return string.replace(unrollLoopPattern, loopReplacer); - -} - -function loopReplacer(match, start, end, snippet) { - - let string = ''; - - for (let i = parseInt(start); i < parseInt(end); i++) { - - string += snippet - .replace(/\[\s*i\s*\]/g, '[ ' + i + ' ]') - .replace(/UNROLLED_LOOP_INDEX/g, i); - - } - - return string; - -} - -// - -function generatePrecision(parameters) { - - let precisionstring = 'precision ' + parameters.precision + ' float;\nprecision ' + parameters.precision + ' int;'; - - if (parameters.precision === 'highp') { - - precisionstring += '\n#define HIGH_PRECISION'; - - } else if (parameters.precision === 'mediump') 
{ - - precisionstring += '\n#define MEDIUM_PRECISION'; - - } else if (parameters.precision === 'lowp') { - - precisionstring += '\n#define LOW_PRECISION'; - - } - - return precisionstring; - -} - -function generateShadowMapTypeDefine(parameters) { - - let shadowMapTypeDefine = 'SHADOWMAP_TYPE_BASIC'; - - if (parameters.shadowMapType === PCFShadowMap) { - - shadowMapTypeDefine = 'SHADOWMAP_TYPE_PCF'; - - } else if (parameters.shadowMapType === PCFSoftShadowMap) { - - shadowMapTypeDefine = 'SHADOWMAP_TYPE_PCF_SOFT'; - - } else if (parameters.shadowMapType === VSMShadowMap) { - - shadowMapTypeDefine = 'SHADOWMAP_TYPE_VSM'; - - } - - return shadowMapTypeDefine; - -} - -function generateEnvMapTypeDefine(parameters) { - - let envMapTypeDefine = 'ENVMAP_TYPE_CUBE'; - - if (parameters.envMap) { - - switch (parameters.envMapMode) { - - case CubeReflectionMapping: - case CubeRefractionMapping: - envMapTypeDefine = 'ENVMAP_TYPE_CUBE'; - break; - - case CubeUVReflectionMapping: - envMapTypeDefine = 'ENVMAP_TYPE_CUBE_UV'; - break; - - } - - } - - return envMapTypeDefine; - -} - -function generateEnvMapModeDefine(parameters) { - - let envMapModeDefine = 'ENVMAP_MODE_REFLECTION'; - - if (parameters.envMap) { - - switch (parameters.envMapMode) { - - case CubeRefractionMapping: - - envMapModeDefine = 'ENVMAP_MODE_REFRACTION'; - break; - - } - - } - - return envMapModeDefine; - -} - -function generateEnvMapBlendingDefine(parameters) { - - let envMapBlendingDefine = 'ENVMAP_BLENDING_NONE'; - - if (parameters.envMap) { - - switch (parameters.combine) { - - case MultiplyOperation: - envMapBlendingDefine = 'ENVMAP_BLENDING_MULTIPLY'; - break; - - case MixOperation: - envMapBlendingDefine = 'ENVMAP_BLENDING_MIX'; - break; - - case AddOperation: - envMapBlendingDefine = 'ENVMAP_BLENDING_ADD'; - break; - - } - - } - - return envMapBlendingDefine; - -} - -function generateCubeUVSize(parameters) { - - const imageHeight = parameters.envMapCubeUVHeight; - - if (imageHeight === null) return null; - - const maxMip = Math.log2(imageHeight) - 2; - - const texelHeight = 1.0 / imageHeight; - - const texelWidth = 1.0 / (3 * Math.max(Math.pow(2, maxMip), 7 * 16)); - - return { texelWidth, texelHeight, maxMip }; - -} - -function WebGLProgram(renderer, cacheKey, parameters, bindingStates) { - - // TODO Send this event to Three.js DevTools - // console.log( 'WebGLProgram', cacheKey ); - - const gl = renderer.getContext(); - - const defines = parameters.defines; - - let vertexShader = parameters.vertexShader; - let fragmentShader = parameters.fragmentShader; - - const shadowMapTypeDefine = generateShadowMapTypeDefine(parameters); - const envMapTypeDefine = generateEnvMapTypeDefine(parameters); - const envMapModeDefine = generateEnvMapModeDefine(parameters); - const envMapBlendingDefine = generateEnvMapBlendingDefine(parameters); - const envMapCubeUVSize = generateCubeUVSize(parameters); - - const customExtensions = parameters.isWebGL2 ? '' : generateExtensions(parameters); - - const customDefines = generateDefines(defines); - - const program = gl.createProgram(); - - let prefixVertex, prefixFragment; - let versionString = parameters.glslVersion ? 
'#version ' + parameters.glslVersion + '\n' : ''; - - if (parameters.isRawShaderMaterial) { - - prefixVertex = [ - - customDefines - - ].filter(filterEmptyLine).join('\n'); - - if (prefixVertex.length > 0) { - - prefixVertex += '\n'; - - } - - prefixFragment = [ - - customExtensions, - customDefines - - ].filter(filterEmptyLine).join('\n'); - - if (prefixFragment.length > 0) { - - prefixFragment += '\n'; - - } - - } else { - - prefixVertex = [ - - generatePrecision(parameters), - - '#define SHADER_NAME ' + parameters.shaderName, - - customDefines, - - parameters.instancing ? '#define USE_INSTANCING' : '', - parameters.instancingColor ? '#define USE_INSTANCING_COLOR' : '', - - parameters.supportsVertexTextures ? '#define VERTEX_TEXTURES' : '', - - (parameters.useFog && parameters.fog) ? '#define USE_FOG' : '', - (parameters.useFog && parameters.fogExp2) ? '#define FOG_EXP2' : '', - - parameters.map ? '#define USE_MAP' : '', - parameters.envMap ? '#define USE_ENVMAP' : '', - parameters.envMap ? '#define ' + envMapModeDefine : '', - parameters.lightMap ? '#define USE_LIGHTMAP' : '', - parameters.aoMap ? '#define USE_AOMAP' : '', - parameters.emissiveMap ? '#define USE_EMISSIVEMAP' : '', - parameters.bumpMap ? '#define USE_BUMPMAP' : '', - parameters.normalMap ? '#define USE_NORMALMAP' : '', - (parameters.normalMap && parameters.objectSpaceNormalMap) ? '#define OBJECTSPACE_NORMALMAP' : '', - (parameters.normalMap && parameters.tangentSpaceNormalMap) ? '#define TANGENTSPACE_NORMALMAP' : '', - - parameters.clearcoatMap ? '#define USE_CLEARCOATMAP' : '', - parameters.clearcoatRoughnessMap ? '#define USE_CLEARCOAT_ROUGHNESSMAP' : '', - parameters.clearcoatNormalMap ? '#define USE_CLEARCOAT_NORMALMAP' : '', - - parameters.iridescenceMap ? '#define USE_IRIDESCENCEMAP' : '', - parameters.iridescenceThicknessMap ? '#define USE_IRIDESCENCE_THICKNESSMAP' : '', - - parameters.displacementMap && parameters.supportsVertexTextures ? '#define USE_DISPLACEMENTMAP' : '', - - parameters.specularMap ? '#define USE_SPECULARMAP' : '', - parameters.specularIntensityMap ? '#define USE_SPECULARINTENSITYMAP' : '', - parameters.specularColorMap ? '#define USE_SPECULARCOLORMAP' : '', - - parameters.roughnessMap ? '#define USE_ROUGHNESSMAP' : '', - parameters.metalnessMap ? '#define USE_METALNESSMAP' : '', - parameters.alphaMap ? '#define USE_ALPHAMAP' : '', - - parameters.transmission ? '#define USE_TRANSMISSION' : '', - parameters.transmissionMap ? '#define USE_TRANSMISSIONMAP' : '', - parameters.thicknessMap ? '#define USE_THICKNESSMAP' : '', - - parameters.sheenColorMap ? '#define USE_SHEENCOLORMAP' : '', - parameters.sheenRoughnessMap ? '#define USE_SHEENROUGHNESSMAP' : '', - - parameters.vertexTangents ? '#define USE_TANGENT' : '', - parameters.vertexColors ? '#define USE_COLOR' : '', - parameters.vertexAlphas ? '#define USE_COLOR_ALPHA' : '', - parameters.vertexUvs ? '#define USE_UV' : '', - parameters.uvsVertexOnly ? '#define UVS_VERTEX_ONLY' : '', - - parameters.flatShading ? '#define FLAT_SHADED' : '', - - parameters.skinning ? '#define USE_SKINNING' : '', - - parameters.morphTargets ? '#define USE_MORPHTARGETS' : '', - parameters.morphNormals && parameters.flatShading === false ? '#define USE_MORPHNORMALS' : '', - (parameters.morphColors && parameters.isWebGL2) ? '#define USE_MORPHCOLORS' : '', - (parameters.morphTargetsCount > 0 && parameters.isWebGL2) ? '#define MORPHTARGETS_TEXTURE' : '', - (parameters.morphTargetsCount > 0 && parameters.isWebGL2) ? 
'#define MORPHTARGETS_TEXTURE_STRIDE ' + parameters.morphTextureStride : '', - (parameters.morphTargetsCount > 0 && parameters.isWebGL2) ? '#define MORPHTARGETS_COUNT ' + parameters.morphTargetsCount : '', - parameters.doubleSided ? '#define DOUBLE_SIDED' : '', - parameters.flipSided ? '#define FLIP_SIDED' : '', - - parameters.shadowMapEnabled ? '#define USE_SHADOWMAP' : '', - parameters.shadowMapEnabled ? '#define ' + shadowMapTypeDefine : '', - - parameters.sizeAttenuation ? '#define USE_SIZEATTENUATION' : '', - - parameters.logarithmicDepthBuffer ? '#define USE_LOGDEPTHBUF' : '', - (parameters.logarithmicDepthBuffer && parameters.rendererExtensionFragDepth) ? '#define USE_LOGDEPTHBUF_EXT' : '', - - 'uniform mat4 modelMatrix;', - 'uniform mat4 modelViewMatrix;', - 'uniform mat4 projectionMatrix;', - 'uniform mat4 viewMatrix;', - 'uniform mat3 normalMatrix;', - 'uniform vec3 cameraPosition;', - 'uniform bool isOrthographic;', - - '#ifdef USE_INSTANCING', - - ' attribute mat4 instanceMatrix;', - - '#endif', - - '#ifdef USE_INSTANCING_COLOR', - - ' attribute vec3 instanceColor;', - - '#endif', - - 'attribute vec3 position;', - 'attribute vec3 normal;', - 'attribute vec2 uv;', - - '#ifdef USE_TANGENT', - - ' attribute vec4 tangent;', - - '#endif', - - '#if defined( USE_COLOR_ALPHA )', - - ' attribute vec4 color;', - - '#elif defined( USE_COLOR )', - - ' attribute vec3 color;', - - '#endif', - - '#if ( defined( USE_MORPHTARGETS ) && ! defined( MORPHTARGETS_TEXTURE ) )', - - ' attribute vec3 morphTarget0;', - ' attribute vec3 morphTarget1;', - ' attribute vec3 morphTarget2;', - ' attribute vec3 morphTarget3;', - - ' #ifdef USE_MORPHNORMALS', - - ' attribute vec3 morphNormal0;', - ' attribute vec3 morphNormal1;', - ' attribute vec3 morphNormal2;', - ' attribute vec3 morphNormal3;', - - ' #else', - - ' attribute vec3 morphTarget4;', - ' attribute vec3 morphTarget5;', - ' attribute vec3 morphTarget6;', - ' attribute vec3 morphTarget7;', - - ' #endif', - - '#endif', - - '#ifdef USE_SKINNING', - - ' attribute vec4 skinIndex;', - ' attribute vec4 skinWeight;', - - '#endif', - - '\n' - - ].filter(filterEmptyLine).join('\n'); - - prefixFragment = [ - - customExtensions, - - generatePrecision(parameters), - - '#define SHADER_NAME ' + parameters.shaderName, - - customDefines, - - (parameters.useFog && parameters.fog) ? '#define USE_FOG' : '', - (parameters.useFog && parameters.fogExp2) ? '#define FOG_EXP2' : '', - - parameters.map ? '#define USE_MAP' : '', - parameters.matcap ? '#define USE_MATCAP' : '', - parameters.envMap ? '#define USE_ENVMAP' : '', - parameters.envMap ? '#define ' + envMapTypeDefine : '', - parameters.envMap ? '#define ' + envMapModeDefine : '', - parameters.envMap ? '#define ' + envMapBlendingDefine : '', - envMapCubeUVSize ? '#define CUBEUV_TEXEL_WIDTH ' + envMapCubeUVSize.texelWidth : '', - envMapCubeUVSize ? '#define CUBEUV_TEXEL_HEIGHT ' + envMapCubeUVSize.texelHeight : '', - envMapCubeUVSize ? '#define CUBEUV_MAX_MIP ' + envMapCubeUVSize.maxMip + '.0' : '', - parameters.lightMap ? '#define USE_LIGHTMAP' : '', - parameters.aoMap ? '#define USE_AOMAP' : '', - parameters.emissiveMap ? '#define USE_EMISSIVEMAP' : '', - parameters.bumpMap ? '#define USE_BUMPMAP' : '', - parameters.normalMap ? '#define USE_NORMALMAP' : '', - (parameters.normalMap && parameters.objectSpaceNormalMap) ? '#define OBJECTSPACE_NORMALMAP' : '', - (parameters.normalMap && parameters.tangentSpaceNormalMap) ? '#define TANGENTSPACE_NORMALMAP' : '', - - parameters.clearcoat ? 
'#define USE_CLEARCOAT' : '', - parameters.clearcoatMap ? '#define USE_CLEARCOATMAP' : '', - parameters.clearcoatRoughnessMap ? '#define USE_CLEARCOAT_ROUGHNESSMAP' : '', - parameters.clearcoatNormalMap ? '#define USE_CLEARCOAT_NORMALMAP' : '', - - parameters.iridescence ? '#define USE_IRIDESCENCE' : '', - parameters.iridescenceMap ? '#define USE_IRIDESCENCEMAP' : '', - parameters.iridescenceThicknessMap ? '#define USE_IRIDESCENCE_THICKNESSMAP' : '', - - parameters.specularMap ? '#define USE_SPECULARMAP' : '', - parameters.specularIntensityMap ? '#define USE_SPECULARINTENSITYMAP' : '', - parameters.specularColorMap ? '#define USE_SPECULARCOLORMAP' : '', - parameters.roughnessMap ? '#define USE_ROUGHNESSMAP' : '', - parameters.metalnessMap ? '#define USE_METALNESSMAP' : '', - - parameters.alphaMap ? '#define USE_ALPHAMAP' : '', - parameters.alphaTest ? '#define USE_ALPHATEST' : '', - - parameters.sheen ? '#define USE_SHEEN' : '', - parameters.sheenColorMap ? '#define USE_SHEENCOLORMAP' : '', - parameters.sheenRoughnessMap ? '#define USE_SHEENROUGHNESSMAP' : '', - - parameters.transmission ? '#define USE_TRANSMISSION' : '', - parameters.transmissionMap ? '#define USE_TRANSMISSIONMAP' : '', - parameters.thicknessMap ? '#define USE_THICKNESSMAP' : '', - - parameters.decodeVideoTexture ? '#define DECODE_VIDEO_TEXTURE' : '', - - parameters.vertexTangents ? '#define USE_TANGENT' : '', - parameters.vertexColors || parameters.instancingColor ? '#define USE_COLOR' : '', - parameters.vertexAlphas ? '#define USE_COLOR_ALPHA' : '', - parameters.vertexUvs ? '#define USE_UV' : '', - parameters.uvsVertexOnly ? '#define UVS_VERTEX_ONLY' : '', - - parameters.gradientMap ? '#define USE_GRADIENTMAP' : '', - - parameters.flatShading ? '#define FLAT_SHADED' : '', - - parameters.doubleSided ? '#define DOUBLE_SIDED' : '', - parameters.flipSided ? '#define FLIP_SIDED' : '', - - parameters.shadowMapEnabled ? '#define USE_SHADOWMAP' : '', - parameters.shadowMapEnabled ? '#define ' + shadowMapTypeDefine : '', - - parameters.premultipliedAlpha ? '#define PREMULTIPLIED_ALPHA' : '', - - parameters.physicallyCorrectLights ? '#define PHYSICALLY_CORRECT_LIGHTS' : '', - - parameters.logarithmicDepthBuffer ? '#define USE_LOGDEPTHBUF' : '', - (parameters.logarithmicDepthBuffer && parameters.rendererExtensionFragDepth) ? '#define USE_LOGDEPTHBUF_EXT' : '', - - 'uniform mat4 viewMatrix;', - 'uniform vec3 cameraPosition;', - 'uniform bool isOrthographic;', - - (parameters.toneMapping !== NoToneMapping) ? '#define TONE_MAPPING' : '', - (parameters.toneMapping !== NoToneMapping) ? ShaderChunk['tonemapping_pars_fragment'] : '', // this code is required here because it is used by the toneMapping() function defined below - (parameters.toneMapping !== NoToneMapping) ? getToneMappingFunction('toneMapping', parameters.toneMapping) : '', - - parameters.dithering ? '#define DITHERING' : '', - parameters.opaque ? '#define OPAQUE' : '', - - ShaderChunk['encodings_pars_fragment'], // this code is required here because it is used by the various encoding/decoding function defined below - getTexelEncodingFunction('linearToOutputTexel', parameters.outputEncoding), - - parameters.useDepthPacking ? 
'#define DEPTH_PACKING ' + parameters.depthPacking : '', - - '\n' - - ].filter(filterEmptyLine).join('\n'); - - } - - vertexShader = resolveIncludes(vertexShader); - vertexShader = replaceLightNums(vertexShader, parameters); - vertexShader = replaceClippingPlaneNums(vertexShader, parameters); - - fragmentShader = resolveIncludes(fragmentShader); - fragmentShader = replaceLightNums(fragmentShader, parameters); - fragmentShader = replaceClippingPlaneNums(fragmentShader, parameters); - - vertexShader = unrollLoops(vertexShader); - fragmentShader = unrollLoops(fragmentShader); - - if (parameters.isWebGL2 && parameters.isRawShaderMaterial !== true) { - - // GLSL 3.0 conversion for built-in materials and ShaderMaterial - - versionString = '#version 300 es\n'; - - prefixVertex = [ - 'precision mediump sampler2DArray;', - '#define attribute in', - '#define varying out', - '#define texture2D texture' - ].join('\n') + '\n' + prefixVertex; - - prefixFragment = [ - '#define varying in', - (parameters.glslVersion === GLSL3) ? '' : 'layout(location = 0) out highp vec4 pc_fragColor;', - (parameters.glslVersion === GLSL3) ? '' : '#define gl_FragColor pc_fragColor', - '#define gl_FragDepthEXT gl_FragDepth', - '#define texture2D texture', - '#define textureCube texture', - '#define texture2DProj textureProj', - '#define texture2DLodEXT textureLod', - '#define texture2DProjLodEXT textureProjLod', - '#define textureCubeLodEXT textureLod', - '#define texture2DGradEXT textureGrad', - '#define texture2DProjGradEXT textureProjGrad', - '#define textureCubeGradEXT textureGrad' - ].join('\n') + '\n' + prefixFragment; - - } - - const vertexGlsl = versionString + prefixVertex + vertexShader; - const fragmentGlsl = versionString + prefixFragment + fragmentShader; - - // console.log( '*VERTEX*', vertexGlsl ); - // console.log( '*FRAGMENT*', fragmentGlsl ); - - const glVertexShader = WebGLShader(gl, 35633, vertexGlsl); - const glFragmentShader = WebGLShader(gl, 35632, fragmentGlsl); - - gl.attachShader(program, glVertexShader); - gl.attachShader(program, glFragmentShader); - - // Force a particular attribute to index 0. 
- - if (parameters.index0AttributeName !== undefined) { - - gl.bindAttribLocation(program, 0, parameters.index0AttributeName); - - } else if (parameters.morphTargets === true) { - - // programs with morphTargets displace position out of attribute 0 - gl.bindAttribLocation(program, 0, 'position'); - - } - - gl.linkProgram(program); - - // check for link errors - if (renderer.debug.checkShaderErrors) { - - const programLog = gl.getProgramInfoLog(program).trim(); - const vertexLog = gl.getShaderInfoLog(glVertexShader).trim(); - const fragmentLog = gl.getShaderInfoLog(glFragmentShader).trim(); - - let runnable = true; - let haveDiagnostics = true; - - if (gl.getProgramParameter(program, 35714) === false) { - - runnable = false; - - const vertexErrors = getShaderErrors(gl, glVertexShader, 'vertex'); - const fragmentErrors = getShaderErrors(gl, glFragmentShader, 'fragment'); - - console.error( - 'THREE.WebGLProgram: Shader Error ' + gl.getError() + ' - ' + - 'VALIDATE_STATUS ' + gl.getProgramParameter(program, 35715) + '\n\n' + - 'Program Info Log: ' + programLog + '\n' + - vertexErrors + '\n' + - fragmentErrors - ); - - } else if (programLog !== '') { - - console.warn('THREE.WebGLProgram: Program Info Log:', programLog); - - } else if (vertexLog === '' || fragmentLog === '') { - - haveDiagnostics = false; - - } - - if (haveDiagnostics) { - - this.diagnostics = { - - runnable: runnable, - - programLog: programLog, - - vertexShader: { - - log: vertexLog, - prefix: prefixVertex - - }, - - fragmentShader: { - - log: fragmentLog, - prefix: prefixFragment - - } - - }; - - } - - } - - // Clean up - - // Crashes in iOS9 and iOS10. #18402 - // gl.detachShader( program, glVertexShader ); - // gl.detachShader( program, glFragmentShader ); - - gl.deleteShader(glVertexShader); - gl.deleteShader(glFragmentShader); - - // set up caching for uniform locations - - let cachedUniforms; - - this.getUniforms = function () { - - if (cachedUniforms === undefined) { - - cachedUniforms = new WebGLUniforms(gl, program); - - } - - return cachedUniforms; - - }; - - // set up caching for attribute locations - - let cachedAttributes; - - this.getAttributes = function () { - - if (cachedAttributes === undefined) { - - cachedAttributes = fetchAttributeLocations(gl, program); - - } - - return cachedAttributes; - - }; - - // free resource - - this.destroy = function () { - - bindingStates.releaseStatesOfProgram(this); - - gl.deleteProgram(program); - this.program = undefined; - - }; - - // - - this.name = parameters.shaderName; - this.id = programIdCount++; - this.cacheKey = cacheKey; - this.usedTimes = 1; - this.program = program; - this.vertexShader = glVertexShader; - this.fragmentShader = glFragmentShader; - - return this; - -} - -let _id = 0; - -class WebGLShaderCache { - - constructor() { - - this.shaderCache = new Map(); - this.materialCache = new Map(); - - } - - update(material) { - - const vertexShader = material.vertexShader; - const fragmentShader = material.fragmentShader; - - const vertexShaderStage = this._getShaderStage(vertexShader); - const fragmentShaderStage = this._getShaderStage(fragmentShader); - - const materialShaders = this._getShaderCacheForMaterial(material); - - if (materialShaders.has(vertexShaderStage) === false) { - - materialShaders.add(vertexShaderStage); - vertexShaderStage.usedTimes++; - - } - - if (materialShaders.has(fragmentShaderStage) === false) { - - materialShaders.add(fragmentShaderStage); - fragmentShaderStage.usedTimes++; - - } - - return this; - - } - - remove(material) { - - const 
materialShaders = this.materialCache.get(material); - - for (const shaderStage of materialShaders) { - - shaderStage.usedTimes--; - - if (shaderStage.usedTimes === 0) this.shaderCache.delete(shaderStage.code); - - } - - this.materialCache.delete(material); - - return this; - - } - - getVertexShaderID(material) { - - return this._getShaderStage(material.vertexShader).id; - - } - - getFragmentShaderID(material) { - - return this._getShaderStage(material.fragmentShader).id; - - } - - dispose() { - - this.shaderCache.clear(); - this.materialCache.clear(); - - } - - _getShaderCacheForMaterial(material) { - - const cache = this.materialCache; - let set = cache.get(material); - - if (set === undefined) { - - set = new Set(); - cache.set(material, set); - - } - - return set; - - } - - _getShaderStage(code) { - - const cache = this.shaderCache; - let stage = cache.get(code); - - if (stage === undefined) { - - stage = new WebGLShaderStage(code); - cache.set(code, stage); - - } - - return stage; - - } - -} - -class WebGLShaderStage { - - constructor(code) { - - this.id = _id++; - - this.code = code; - this.usedTimes = 0; - - } - -} - -function WebGLPrograms(renderer, cubemaps, cubeuvmaps, extensions, capabilities, bindingStates, clipping) { - - const _programLayers = new Layers(); - const _customShaders = new WebGLShaderCache(); - const programs = []; - - const isWebGL2 = capabilities.isWebGL2; - const logarithmicDepthBuffer = capabilities.logarithmicDepthBuffer; - const vertexTextures = capabilities.vertexTextures; - let precision = capabilities.precision; - - const shaderIDs = { - MeshDepthMaterial: 'depth', - MeshDistanceMaterial: 'distanceRGBA', - MeshNormalMaterial: 'normal', - MeshBasicMaterial: 'basic', - MeshLambertMaterial: 'lambert', - MeshPhongMaterial: 'phong', - MeshToonMaterial: 'toon', - MeshStandardMaterial: 'physical', - MeshPhysicalMaterial: 'physical', - MeshMatcapMaterial: 'matcap', - LineBasicMaterial: 'basic', - LineDashedMaterial: 'dashed', - PointsMaterial: 'points', - ShadowMaterial: 'shadow', - SpriteMaterial: 'sprite' - }; - - function getParameters(material, lights, shadows, scene, object) { - - const fog = scene.fog; - const geometry = object.geometry; - const environment = material.isMeshStandardMaterial ? scene.environment : null; - - const envMap = (material.isMeshStandardMaterial ? cubeuvmaps : cubemaps).get(material.envMap || environment); - const envMapCubeUVHeight = (!!envMap) && (envMap.mapping === CubeUVReflectionMapping) ? envMap.image.height : null; - - const shaderID = shaderIDs[material.type]; - - // heuristics to create shader parameters according to lights in the scene - // (not to blow over maxLights budget) - - if (material.precision !== null) { - - precision = capabilities.getMaxPrecision(material.precision); - - if (precision !== material.precision) { - - console.warn('THREE.WebGLProgram.getParameters:', material.precision, 'not supported, using', precision, 'instead.'); - - } - - } - - // - - const morphAttribute = geometry.morphAttributes.position || geometry.morphAttributes.normal || geometry.morphAttributes.color; - const morphTargetsCount = (morphAttribute !== undefined) ? 
morphAttribute.length : 0; - - let morphTextureStride = 0; - - if (geometry.morphAttributes.position !== undefined) morphTextureStride = 1; - if (geometry.morphAttributes.normal !== undefined) morphTextureStride = 2; - if (geometry.morphAttributes.color !== undefined) morphTextureStride = 3; - - // - - let vertexShader, fragmentShader; - let customVertexShaderID, customFragmentShaderID; - - if (shaderID) { - - const shader = ShaderLib[shaderID]; - - vertexShader = shader.vertexShader; - fragmentShader = shader.fragmentShader; - - } else { - - vertexShader = material.vertexShader; - fragmentShader = material.fragmentShader; - - _customShaders.update(material); - - customVertexShaderID = _customShaders.getVertexShaderID(material); - customFragmentShaderID = _customShaders.getFragmentShaderID(material); - - } - - const currentRenderTarget = renderer.getRenderTarget(); - - const useAlphaTest = material.alphaTest > 0; - const useClearcoat = material.clearcoat > 0; - const useIridescence = material.iridescence > 0; - - const parameters = { - - isWebGL2: isWebGL2, - - shaderID: shaderID, - shaderName: material.type, - - vertexShader: vertexShader, - fragmentShader: fragmentShader, - defines: material.defines, - - customVertexShaderID: customVertexShaderID, - customFragmentShaderID: customFragmentShaderID, - - isRawShaderMaterial: material.isRawShaderMaterial === true, - glslVersion: material.glslVersion, - - precision: precision, - - instancing: object.isInstancedMesh === true, - instancingColor: object.isInstancedMesh === true && object.instanceColor !== null, - - supportsVertexTextures: vertexTextures, - outputEncoding: (currentRenderTarget === null) ? renderer.outputEncoding : (currentRenderTarget.isXRRenderTarget === true ? currentRenderTarget.texture.encoding : LinearEncoding), - map: !!material.map, - matcap: !!material.matcap, - envMap: !!envMap, - envMapMode: envMap && envMap.mapping, - envMapCubeUVHeight: envMapCubeUVHeight, - lightMap: !!material.lightMap, - aoMap: !!material.aoMap, - emissiveMap: !!material.emissiveMap, - bumpMap: !!material.bumpMap, - normalMap: !!material.normalMap, - objectSpaceNormalMap: material.normalMapType === ObjectSpaceNormalMap, - tangentSpaceNormalMap: material.normalMapType === TangentSpaceNormalMap, - - decodeVideoTexture: !!material.map && (material.map.isVideoTexture === true) && (material.map.encoding === sRGBEncoding), - - clearcoat: useClearcoat, - clearcoatMap: useClearcoat && !!material.clearcoatMap, - clearcoatRoughnessMap: useClearcoat && !!material.clearcoatRoughnessMap, - clearcoatNormalMap: useClearcoat && !!material.clearcoatNormalMap, - - iridescence: useIridescence, - iridescenceMap: useIridescence && !!material.iridescenceMap, - iridescenceThicknessMap: useIridescence && !!material.iridescenceThicknessMap, - - displacementMap: !!material.displacementMap, - roughnessMap: !!material.roughnessMap, - metalnessMap: !!material.metalnessMap, - specularMap: !!material.specularMap, - specularIntensityMap: !!material.specularIntensityMap, - specularColorMap: !!material.specularColorMap, - - opaque: material.transparent === false && material.blending === NormalBlending, - - alphaMap: !!material.alphaMap, - alphaTest: useAlphaTest, - - gradientMap: !!material.gradientMap, - - sheen: material.sheen > 0, - sheenColorMap: !!material.sheenColorMap, - sheenRoughnessMap: !!material.sheenRoughnessMap, - - transmission: material.transmission > 0, - transmissionMap: !!material.transmissionMap, - thicknessMap: !!material.thicknessMap, - - combine: 
material.combine, - - vertexTangents: (!!material.normalMap && !!geometry.attributes.tangent), - vertexColors: material.vertexColors, - vertexAlphas: material.vertexColors === true && !!geometry.attributes.color && geometry.attributes.color.itemSize === 4, - vertexUvs: !!material.map || !!material.bumpMap || !!material.normalMap || !!material.specularMap || !!material.alphaMap || !!material.emissiveMap || !!material.roughnessMap || !!material.metalnessMap || !!material.clearcoatMap || !!material.clearcoatRoughnessMap || !!material.clearcoatNormalMap || !!material.iridescenceMap || !!material.iridescenceThicknessMap || !!material.displacementMap || !!material.transmissionMap || !!material.thicknessMap || !!material.specularIntensityMap || !!material.specularColorMap || !!material.sheenColorMap || !!material.sheenRoughnessMap, - uvsVertexOnly: !(!!material.map || !!material.bumpMap || !!material.normalMap || !!material.specularMap || !!material.alphaMap || !!material.emissiveMap || !!material.roughnessMap || !!material.metalnessMap || !!material.clearcoatNormalMap || !!material.iridescenceMap || !!material.iridescenceThicknessMap || material.transmission > 0 || !!material.transmissionMap || !!material.thicknessMap || !!material.specularIntensityMap || !!material.specularColorMap || material.sheen > 0 || !!material.sheenColorMap || !!material.sheenRoughnessMap) && !!material.displacementMap, - - fog: !!fog, - useFog: material.fog === true, - fogExp2: (fog && fog.isFogExp2), - - flatShading: !!material.flatShading, - - sizeAttenuation: material.sizeAttenuation, - logarithmicDepthBuffer: logarithmicDepthBuffer, - - skinning: object.isSkinnedMesh === true, - - morphTargets: geometry.morphAttributes.position !== undefined, - morphNormals: geometry.morphAttributes.normal !== undefined, - morphColors: geometry.morphAttributes.color !== undefined, - morphTargetsCount: morphTargetsCount, - morphTextureStride: morphTextureStride, - - numDirLights: lights.directional.length, - numPointLights: lights.point.length, - numSpotLights: lights.spot.length, - numSpotLightMaps: lights.spotLightMap.length, - numRectAreaLights: lights.rectArea.length, - numHemiLights: lights.hemi.length, - - numDirLightShadows: lights.directionalShadowMap.length, - numPointLightShadows: lights.pointShadowMap.length, - numSpotLightShadows: lights.spotShadowMap.length, - numSpotLightShadowsWithMaps: lights.numSpotLightShadowsWithMaps, - - numClippingPlanes: clipping.numPlanes, - numClipIntersection: clipping.numIntersection, - - dithering: material.dithering, - - shadowMapEnabled: renderer.shadowMap.enabled && shadows.length > 0, - shadowMapType: renderer.shadowMap.type, - - toneMapping: material.toneMapped ? 
renderer.toneMapping : NoToneMapping, - physicallyCorrectLights: renderer.physicallyCorrectLights, - - premultipliedAlpha: material.premultipliedAlpha, - - doubleSided: material.side === DoubleSide, - flipSided: material.side === BackSide, - - useDepthPacking: !!material.depthPacking, - depthPacking: material.depthPacking || 0, - - index0AttributeName: material.index0AttributeName, - - extensionDerivatives: material.extensions && material.extensions.derivatives, - extensionFragDepth: material.extensions && material.extensions.fragDepth, - extensionDrawBuffers: material.extensions && material.extensions.drawBuffers, - extensionShaderTextureLOD: material.extensions && material.extensions.shaderTextureLOD, - - rendererExtensionFragDepth: isWebGL2 || extensions.has('EXT_frag_depth'), - rendererExtensionDrawBuffers: isWebGL2 || extensions.has('WEBGL_draw_buffers'), - rendererExtensionShaderTextureLod: isWebGL2 || extensions.has('EXT_shader_texture_lod'), - - customProgramCacheKey: material.customProgramCacheKey() - - }; - - return parameters; - - } - - function getProgramCacheKey(parameters) { - - const array = []; - - if (parameters.shaderID) { - - array.push(parameters.shaderID); - - } else { - - array.push(parameters.customVertexShaderID); - array.push(parameters.customFragmentShaderID); - - } - - if (parameters.defines !== undefined) { - - for (const name in parameters.defines) { - - array.push(name); - array.push(parameters.defines[name]); - - } - - } - - if (parameters.isRawShaderMaterial === false) { - - getProgramCacheKeyParameters(array, parameters); - getProgramCacheKeyBooleans(array, parameters); - array.push(renderer.outputEncoding); - - } - - array.push(parameters.customProgramCacheKey); - - return array.join(); - - } - - function getProgramCacheKeyParameters(array, parameters) { - - array.push(parameters.precision); - array.push(parameters.outputEncoding); - array.push(parameters.envMapMode); - array.push(parameters.envMapCubeUVHeight); - array.push(parameters.combine); - array.push(parameters.vertexUvs); - array.push(parameters.fogExp2); - array.push(parameters.sizeAttenuation); - array.push(parameters.morphTargetsCount); - array.push(parameters.morphAttributeCount); - array.push(parameters.numDirLights); - array.push(parameters.numPointLights); - array.push(parameters.numSpotLights); - array.push(parameters.numSpotLightMaps); - array.push(parameters.numHemiLights); - array.push(parameters.numRectAreaLights); - array.push(parameters.numDirLightShadows); - array.push(parameters.numPointLightShadows); - array.push(parameters.numSpotLightShadows); - array.push(parameters.numSpotLightShadowsWithMaps); - array.push(parameters.shadowMapType); - array.push(parameters.toneMapping); - array.push(parameters.numClippingPlanes); - array.push(parameters.numClipIntersection); - array.push(parameters.depthPacking); - - } - - function getProgramCacheKeyBooleans(array, parameters) { - - _programLayers.disableAll(); - - if (parameters.isWebGL2) - _programLayers.enable(0); - if (parameters.supportsVertexTextures) - _programLayers.enable(1); - if (parameters.instancing) - _programLayers.enable(2); - if (parameters.instancingColor) - _programLayers.enable(3); - if (parameters.map) - _programLayers.enable(4); - if (parameters.matcap) - _programLayers.enable(5); - if (parameters.envMap) - _programLayers.enable(6); - if (parameters.lightMap) - _programLayers.enable(7); - if (parameters.aoMap) - _programLayers.enable(8); - if (parameters.emissiveMap) - _programLayers.enable(9); - if (parameters.bumpMap) 
- _programLayers.enable(10); - if (parameters.normalMap) - _programLayers.enable(11); - if (parameters.objectSpaceNormalMap) - _programLayers.enable(12); - if (parameters.tangentSpaceNormalMap) - _programLayers.enable(13); - if (parameters.clearcoat) - _programLayers.enable(14); - if (parameters.clearcoatMap) - _programLayers.enable(15); - if (parameters.clearcoatRoughnessMap) - _programLayers.enable(16); - if (parameters.clearcoatNormalMap) - _programLayers.enable(17); - if (parameters.iridescence) - _programLayers.enable(18); - if (parameters.iridescenceMap) - _programLayers.enable(19); - if (parameters.iridescenceThicknessMap) - _programLayers.enable(20); - if (parameters.displacementMap) - _programLayers.enable(21); - if (parameters.specularMap) - _programLayers.enable(22); - if (parameters.roughnessMap) - _programLayers.enable(23); - if (parameters.metalnessMap) - _programLayers.enable(24); - if (parameters.gradientMap) - _programLayers.enable(25); - if (parameters.alphaMap) - _programLayers.enable(26); - if (parameters.alphaTest) - _programLayers.enable(27); - if (parameters.vertexColors) - _programLayers.enable(28); - if (parameters.vertexAlphas) - _programLayers.enable(29); - if (parameters.vertexUvs) - _programLayers.enable(30); - if (parameters.vertexTangents) - _programLayers.enable(31); - if (parameters.uvsVertexOnly) - _programLayers.enable(32); - - array.push(_programLayers.mask); - _programLayers.disableAll(); - - if (parameters.fog) - _programLayers.enable(0); - if (parameters.useFog) - _programLayers.enable(1); - if (parameters.flatShading) - _programLayers.enable(2); - if (parameters.logarithmicDepthBuffer) - _programLayers.enable(3); - if (parameters.skinning) - _programLayers.enable(4); - if (parameters.morphTargets) - _programLayers.enable(5); - if (parameters.morphNormals) - _programLayers.enable(6); - if (parameters.morphColors) - _programLayers.enable(7); - if (parameters.premultipliedAlpha) - _programLayers.enable(8); - if (parameters.shadowMapEnabled) - _programLayers.enable(9); - if (parameters.physicallyCorrectLights) - _programLayers.enable(10); - if (parameters.doubleSided) - _programLayers.enable(11); - if (parameters.flipSided) - _programLayers.enable(12); - if (parameters.useDepthPacking) - _programLayers.enable(13); - if (parameters.dithering) - _programLayers.enable(14); - if (parameters.specularIntensityMap) - _programLayers.enable(15); - if (parameters.specularColorMap) - _programLayers.enable(16); - if (parameters.transmission) - _programLayers.enable(17); - if (parameters.transmissionMap) - _programLayers.enable(18); - if (parameters.thicknessMap) - _programLayers.enable(19); - if (parameters.sheen) - _programLayers.enable(20); - if (parameters.sheenColorMap) - _programLayers.enable(21); - if (parameters.sheenRoughnessMap) - _programLayers.enable(22); - if (parameters.decodeVideoTexture) - _programLayers.enable(23); - if (parameters.opaque) - _programLayers.enable(24); - - array.push(_programLayers.mask); - - } - - function getUniforms(material) { - - const shaderID = shaderIDs[material.type]; - let uniforms; - - if (shaderID) { - - const shader = ShaderLib[shaderID]; - uniforms = UniformsUtils.clone(shader.uniforms); - - } else { - - uniforms = material.uniforms; - - } - - return uniforms; - - } - - function acquireProgram(parameters, cacheKey) { - - let program; - - // Check if code has been already compiled - for (let p = 0, pl = programs.length; p < pl; p++) { - - const preexistingProgram = programs[p]; - - if (preexistingProgram.cacheKey === 
cacheKey) { - - program = preexistingProgram; - ++program.usedTimes; - - break; - - } - - } - - if (program === undefined) { - - program = new WebGLProgram(renderer, cacheKey, parameters, bindingStates); - programs.push(program); - - } - - return program; - - } - - function releaseProgram(program) { - - if (--program.usedTimes === 0) { - - // Remove from unordered set - const i = programs.indexOf(program); - programs[i] = programs[programs.length - 1]; - programs.pop(); - - // Free WebGL resources - program.destroy(); - - } - - } - - function releaseShaderCache(material) { - - _customShaders.remove(material); - - } - - function dispose() { - - _customShaders.dispose(); - - } - - return { - getParameters: getParameters, - getProgramCacheKey: getProgramCacheKey, - getUniforms: getUniforms, - acquireProgram: acquireProgram, - releaseProgram: releaseProgram, - releaseShaderCache: releaseShaderCache, - // Exposed for resource monitoring & error feedback via renderer.info: - programs: programs, - dispose: dispose - }; - -} - -function WebGLProperties() { - - let properties = new WeakMap(); - - function get(object) { - - let map = properties.get(object); - - if (map === undefined) { - - map = {}; - properties.set(object, map); - - } - - return map; - - } - - function remove(object) { - - properties.delete(object); - - } - - function update(object, key, value) { - - properties.get(object)[key] = value; - - } - - function dispose() { - - properties = new WeakMap(); - - } - - return { - get: get, - remove: remove, - update: update, - dispose: dispose - }; - -} - -function painterSortStable(a, b) { - - if (a.groupOrder !== b.groupOrder) { - - return a.groupOrder - b.groupOrder; - - } else if (a.renderOrder !== b.renderOrder) { - - return a.renderOrder - b.renderOrder; - - } else if (a.material.id !== b.material.id) { - - return a.material.id - b.material.id; - - } else if (a.z !== b.z) { - - return a.z - b.z; - - } else { - - return a.id - b.id; - - } - -} - -function reversePainterSortStable(a, b) { - - if (a.groupOrder !== b.groupOrder) { - - return a.groupOrder - b.groupOrder; - - } else if (a.renderOrder !== b.renderOrder) { - - return a.renderOrder - b.renderOrder; - - } else if (a.z !== b.z) { - - return b.z - a.z; - - } else { - - return a.id - b.id; - - } - -} - - -function WebGLRenderList() { - - const renderItems = []; - let renderItemsIndex = 0; - - const opaque = []; - const transmissive = []; - const transparent = []; - - function init() { - - renderItemsIndex = 0; - - opaque.length = 0; - transmissive.length = 0; - transparent.length = 0; - - } - - function getNextRenderItem(object, geometry, material, groupOrder, z, group) { - - let renderItem = renderItems[renderItemsIndex]; - - if (renderItem === undefined) { - - renderItem = { - id: object.id, - object: object, - geometry: geometry, - material: material, - groupOrder: groupOrder, - renderOrder: object.renderOrder, - z: z, - group: group - }; - - renderItems[renderItemsIndex] = renderItem; - - } else { - - renderItem.id = object.id; - renderItem.object = object; - renderItem.geometry = geometry; - renderItem.material = material; - renderItem.groupOrder = groupOrder; - renderItem.renderOrder = object.renderOrder; - renderItem.z = z; - renderItem.group = group; - - } - - renderItemsIndex++; - - return renderItem; - - } - - function push(object, geometry, material, groupOrder, z, group) { - - const renderItem = getNextRenderItem(object, geometry, material, groupOrder, z, group); - - if (material.transmission > 0.0) { - - 
transmissive.push(renderItem); - - } else if (material.transparent === true) { - - transparent.push(renderItem); - - } else { - - opaque.push(renderItem); - - } - - } - - function unshift(object, geometry, material, groupOrder, z, group) { - - const renderItem = getNextRenderItem(object, geometry, material, groupOrder, z, group); - - if (material.transmission > 0.0) { - - transmissive.unshift(renderItem); - - } else if (material.transparent === true) { - - transparent.unshift(renderItem); - - } else { - - opaque.unshift(renderItem); - - } - - } - - function sort(customOpaqueSort, customTransparentSort) { - - if (opaque.length > 1) opaque.sort(customOpaqueSort || painterSortStable); - if (transmissive.length > 1) transmissive.sort(customTransparentSort || reversePainterSortStable); - if (transparent.length > 1) transparent.sort(customTransparentSort || reversePainterSortStable); - - } - - function finish() { - - // Clear references from inactive renderItems in the list - - for (let i = renderItemsIndex, il = renderItems.length; i < il; i++) { - - const renderItem = renderItems[i]; - - if (renderItem.id === null) break; - - renderItem.id = null; - renderItem.object = null; - renderItem.geometry = null; - renderItem.material = null; - renderItem.group = null; - - } - - } - - return { - - opaque: opaque, - transmissive: transmissive, - transparent: transparent, - - init: init, - push: push, - unshift: unshift, - finish: finish, - - sort: sort - }; - -} - -function WebGLRenderLists() { - - let lists = new WeakMap(); - - function get(scene, renderCallDepth) { - - const listArray = lists.get(scene); - let list; - - if (listArray === undefined) { - - list = new WebGLRenderList(); - lists.set(scene, [list]); - - } else { - - if (renderCallDepth >= listArray.length) { - - list = new WebGLRenderList(); - listArray.push(list); - - } else { - - list = listArray[renderCallDepth]; - - } - - } - - return list; - - } - - function dispose() { - - lists = new WeakMap(); - - } - - return { - get: get, - dispose: dispose - }; - -} - -function UniformsCache() { - - const lights = {}; - - return { - - get: function (light) { - - if (lights[light.id] !== undefined) { - - return lights[light.id]; - - } - - let uniforms; - - switch (light.type) { - - case 'DirectionalLight': - uniforms = { - direction: new Vector3(), - color: new Color() - }; - break; - - case 'SpotLight': - uniforms = { - position: new Vector3(), - direction: new Vector3(), - color: new Color(), - distance: 0, - coneCos: 0, - penumbraCos: 0, - decay: 0 - }; - break; - - case 'PointLight': - uniforms = { - position: new Vector3(), - color: new Color(), - distance: 0, - decay: 0 - }; - break; - - case 'HemisphereLight': - uniforms = { - direction: new Vector3(), - skyColor: new Color(), - groundColor: new Color() - }; - break; - - case 'RectAreaLight': - uniforms = { - color: new Color(), - position: new Vector3(), - halfWidth: new Vector3(), - halfHeight: new Vector3() - }; - break; - - } - - lights[light.id] = uniforms; - - return uniforms; - - } - - }; - -} - -function ShadowUniformsCache() { - - const lights = {}; - - return { - - get: function (light) { - - if (lights[light.id] !== undefined) { - - return lights[light.id]; - - } - - let uniforms; - - switch (light.type) { - - case 'DirectionalLight': - uniforms = { - shadowBias: 0, - shadowNormalBias: 0, - shadowRadius: 1, - shadowMapSize: new Vector2() - }; - break; - - case 'SpotLight': - uniforms = { - shadowBias: 0, - shadowNormalBias: 0, - shadowRadius: 1, - shadowMapSize: new Vector2() - 
}; - break; - - case 'PointLight': - uniforms = { - shadowBias: 0, - shadowNormalBias: 0, - shadowRadius: 1, - shadowMapSize: new Vector2(), - shadowCameraNear: 1, - shadowCameraFar: 1000 - }; - break; - - // TODO (abelnation): set RectAreaLight shadow uniforms - - } - - lights[light.id] = uniforms; - - return uniforms; - - } - - }; - -} - - - -let nextVersion = 0; - -function shadowCastingAndTexturingLightsFirst(lightA, lightB) { - - return (lightB.castShadow ? 2 : 0) - (lightA.castShadow ? 2 : 0) + (lightB.map ? 1 : 0) - (lightA.map ? 1 : 0); - -} - -function WebGLLights(extensions, capabilities) { - - const cache = new UniformsCache(); - - const shadowCache = ShadowUniformsCache(); - - const state = { - - version: 0, - - hash: { - directionalLength: - 1, - pointLength: - 1, - spotLength: - 1, - rectAreaLength: - 1, - hemiLength: - 1, - - numDirectionalShadows: - 1, - numPointShadows: - 1, - numSpotShadows: - 1, - numSpotMaps: - 1 - }, - - ambient: [0, 0, 0], - probe: [], - directional: [], - directionalShadow: [], - directionalShadowMap: [], - directionalShadowMatrix: [], - spot: [], - spotLightMap: [], - spotShadow: [], - spotShadowMap: [], - spotLightMatrix: [], - rectArea: [], - rectAreaLTC1: null, - rectAreaLTC2: null, - point: [], - pointShadow: [], - pointShadowMap: [], - pointShadowMatrix: [], - hemi: [], - numSpotLightShadowsWithMaps: 0 - - }; - - for (let i = 0; i < 9; i++) state.probe.push(new Vector3()); - - const vector3 = new Vector3(); - const matrix4 = new Matrix4(); - const matrix42 = new Matrix4(); - - function setup(lights, physicallyCorrectLights) { - - let r = 0, g = 0, b = 0; - - for (let i = 0; i < 9; i++) state.probe[i].set(0, 0, 0); - - let directionalLength = 0; - let pointLength = 0; - let spotLength = 0; - let rectAreaLength = 0; - let hemiLength = 0; - - let numDirectionalShadows = 0; - let numPointShadows = 0; - let numSpotShadows = 0; - let numSpotMaps = 0; - let numSpotShadowsWithMaps = 0; - - // ordering : [shadow casting + map texturing, map texturing, shadow casting, none ] - lights.sort(shadowCastingAndTexturingLightsFirst); - - // artist-friendly light intensity scaling factor - const scaleFactor = (physicallyCorrectLights !== true) ? Math.PI : 1; - - for (let i = 0, l = lights.length; i < l; i++) { - - const light = lights[i]; - - const color = light.color; - const intensity = light.intensity; - const distance = light.distance; - - const shadowMap = (light.shadow && light.shadow.map) ? 
light.shadow.map.texture : null; - - if (light.isAmbientLight) { - - r += color.r * intensity * scaleFactor; - g += color.g * intensity * scaleFactor; - b += color.b * intensity * scaleFactor; - - } else if (light.isLightProbe) { - - for (let j = 0; j < 9; j++) { - - state.probe[j].addScaledVector(light.sh.coefficients[j], intensity); - - } - - } else if (light.isDirectionalLight) { - - const uniforms = cache.get(light); - - uniforms.color.copy(light.color).multiplyScalar(light.intensity * scaleFactor); - - if (light.castShadow) { - - const shadow = light.shadow; - - const shadowUniforms = shadowCache.get(light); - - shadowUniforms.shadowBias = shadow.bias; - shadowUniforms.shadowNormalBias = shadow.normalBias; - shadowUniforms.shadowRadius = shadow.radius; - shadowUniforms.shadowMapSize = shadow.mapSize; - - state.directionalShadow[directionalLength] = shadowUniforms; - state.directionalShadowMap[directionalLength] = shadowMap; - state.directionalShadowMatrix[directionalLength] = light.shadow.matrix; - - numDirectionalShadows++; - - } - - state.directional[directionalLength] = uniforms; - - directionalLength++; - - } else if (light.isSpotLight) { - - const uniforms = cache.get(light); - - uniforms.position.setFromMatrixPosition(light.matrixWorld); - - uniforms.color.copy(color).multiplyScalar(intensity * scaleFactor); - uniforms.distance = distance; - - uniforms.coneCos = Math.cos(light.angle); - uniforms.penumbraCos = Math.cos(light.angle * (1 - light.penumbra)); - uniforms.decay = light.decay; - - state.spot[spotLength] = uniforms; - - const shadow = light.shadow; - - if (light.map) { - - state.spotLightMap[numSpotMaps] = light.map; - numSpotMaps++; - - // make sure the lightMatrix is up to date - // TODO : do it if required only - shadow.updateMatrices(light); - - if (light.castShadow) numSpotShadowsWithMaps++; - - } - - state.spotLightMatrix[spotLength] = shadow.matrix; - - if (light.castShadow) { - - const shadowUniforms = shadowCache.get(light); - - shadowUniforms.shadowBias = shadow.bias; - shadowUniforms.shadowNormalBias = shadow.normalBias; - shadowUniforms.shadowRadius = shadow.radius; - shadowUniforms.shadowMapSize = shadow.mapSize; - - state.spotShadow[spotLength] = shadowUniforms; - state.spotShadowMap[spotLength] = shadowMap; - - numSpotShadows++; - - } - - spotLength++; - - } else if (light.isRectAreaLight) { - - const uniforms = cache.get(light); - - uniforms.color.copy(color).multiplyScalar(intensity); - - uniforms.halfWidth.set(light.width * 0.5, 0.0, 0.0); - uniforms.halfHeight.set(0.0, light.height * 0.5, 0.0); - - state.rectArea[rectAreaLength] = uniforms; - - rectAreaLength++; - - } else if (light.isPointLight) { - - const uniforms = cache.get(light); - - uniforms.color.copy(light.color).multiplyScalar(light.intensity * scaleFactor); - uniforms.distance = light.distance; - uniforms.decay = light.decay; - - if (light.castShadow) { - - const shadow = light.shadow; - - const shadowUniforms = shadowCache.get(light); - - shadowUniforms.shadowBias = shadow.bias; - shadowUniforms.shadowNormalBias = shadow.normalBias; - shadowUniforms.shadowRadius = shadow.radius; - shadowUniforms.shadowMapSize = shadow.mapSize; - shadowUniforms.shadowCameraNear = shadow.camera.near; - shadowUniforms.shadowCameraFar = shadow.camera.far; - - state.pointShadow[pointLength] = shadowUniforms; - state.pointShadowMap[pointLength] = shadowMap; - state.pointShadowMatrix[pointLength] = light.shadow.matrix; - - numPointShadows++; - - } - - state.point[pointLength] = uniforms; - - pointLength++; - - } 
else if (light.isHemisphereLight) { - - const uniforms = cache.get(light); - - uniforms.skyColor.copy(light.color).multiplyScalar(intensity * scaleFactor); - uniforms.groundColor.copy(light.groundColor).multiplyScalar(intensity * scaleFactor); - - state.hemi[hemiLength] = uniforms; - - hemiLength++; - - } - - } - - if (rectAreaLength > 0) { - - if (capabilities.isWebGL2) { - - // WebGL 2 - - state.rectAreaLTC1 = UniformsLib.LTC_FLOAT_1; - state.rectAreaLTC2 = UniformsLib.LTC_FLOAT_2; - - } else { - - // WebGL 1 - - if (extensions.has('OES_texture_float_linear') === true) { - - state.rectAreaLTC1 = UniformsLib.LTC_FLOAT_1; - state.rectAreaLTC2 = UniformsLib.LTC_FLOAT_2; - - } else if (extensions.has('OES_texture_half_float_linear') === true) { - - state.rectAreaLTC1 = UniformsLib.LTC_HALF_1; - state.rectAreaLTC2 = UniformsLib.LTC_HALF_2; - - } else { - - console.error('THREE.WebGLRenderer: Unable to use RectAreaLight. Missing WebGL extensions.'); - - } - - } - - } - - state.ambient[0] = r; - state.ambient[1] = g; - state.ambient[2] = b; - - const hash = state.hash; - - if (hash.directionalLength !== directionalLength || - hash.pointLength !== pointLength || - hash.spotLength !== spotLength || - hash.rectAreaLength !== rectAreaLength || - hash.hemiLength !== hemiLength || - hash.numDirectionalShadows !== numDirectionalShadows || - hash.numPointShadows !== numPointShadows || - hash.numSpotShadows !== numSpotShadows || - hash.numSpotMaps !== numSpotMaps) { - - state.directional.length = directionalLength; - state.spot.length = spotLength; - state.rectArea.length = rectAreaLength; - state.point.length = pointLength; - state.hemi.length = hemiLength; - - state.directionalShadow.length = numDirectionalShadows; - state.directionalShadowMap.length = numDirectionalShadows; - state.pointShadow.length = numPointShadows; - state.pointShadowMap.length = numPointShadows; - state.spotShadow.length = numSpotShadows; - state.spotShadowMap.length = numSpotShadows; - state.directionalShadowMatrix.length = numDirectionalShadows; - state.pointShadowMatrix.length = numPointShadows; - state.spotLightMatrix.length = numSpotShadows + numSpotMaps - numSpotShadowsWithMaps; - state.spotLightMap.length = numSpotMaps; - state.numSpotLightShadowsWithMaps = numSpotShadowsWithMaps; - - hash.directionalLength = directionalLength; - hash.pointLength = pointLength; - hash.spotLength = spotLength; - hash.rectAreaLength = rectAreaLength; - hash.hemiLength = hemiLength; - - hash.numDirectionalShadows = numDirectionalShadows; - hash.numPointShadows = numPointShadows; - hash.numSpotShadows = numSpotShadows; - hash.numSpotMaps = numSpotMaps; - - state.version = nextVersion++; - - } - - } - - function setupView(lights, camera) { - - let directionalLength = 0; - let pointLength = 0; - let spotLength = 0; - let rectAreaLength = 0; - let hemiLength = 0; - - const viewMatrix = camera.matrixWorldInverse; - - for (let i = 0, l = lights.length; i < l; i++) { - - const light = lights[i]; - - if (light.isDirectionalLight) { - - const uniforms = state.directional[directionalLength]; - - uniforms.direction.setFromMatrixPosition(light.matrixWorld); - vector3.setFromMatrixPosition(light.target.matrixWorld); - uniforms.direction.sub(vector3); - uniforms.direction.transformDirection(viewMatrix); - - directionalLength++; - - } else if (light.isSpotLight) { - - const uniforms = state.spot[spotLength]; - - uniforms.position.setFromMatrixPosition(light.matrixWorld); - uniforms.position.applyMatrix4(viewMatrix); - - 
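// Light uniforms are uploaded in view space: world-space positions and
// directions are transformed by camera.matrixWorldInverse here so the
// lighting shaders can evaluate all lights in view space.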
uniforms.direction.setFromMatrixPosition(light.matrixWorld); - vector3.setFromMatrixPosition(light.target.matrixWorld); - uniforms.direction.sub(vector3); - uniforms.direction.transformDirection(viewMatrix); - - spotLength++; - - } else if (light.isRectAreaLight) { - - const uniforms = state.rectArea[rectAreaLength]; - - uniforms.position.setFromMatrixPosition(light.matrixWorld); - uniforms.position.applyMatrix4(viewMatrix); - - // extract local rotation of light to derive width/height half vectors - matrix42.identity(); - matrix4.copy(light.matrixWorld); - matrix4.premultiply(viewMatrix); - matrix42.extractRotation(matrix4); - - uniforms.halfWidth.set(light.width * 0.5, 0.0, 0.0); - uniforms.halfHeight.set(0.0, light.height * 0.5, 0.0); - - uniforms.halfWidth.applyMatrix4(matrix42); - uniforms.halfHeight.applyMatrix4(matrix42); - - rectAreaLength++; - - } else if (light.isPointLight) { - - const uniforms = state.point[pointLength]; - - uniforms.position.setFromMatrixPosition(light.matrixWorld); - uniforms.position.applyMatrix4(viewMatrix); - - pointLength++; - - } else if (light.isHemisphereLight) { - - const uniforms = state.hemi[hemiLength]; - - uniforms.direction.setFromMatrixPosition(light.matrixWorld); - uniforms.direction.transformDirection(viewMatrix); - - hemiLength++; - - } - - } - - } - - return { - setup: setup, - setupView: setupView, - state: state - }; - -} - -function WebGLRenderState(extensions, capabilities) { - - const lights = new WebGLLights(extensions, capabilities); - - const lightsArray = []; - const shadowsArray = []; - - function init() { - - lightsArray.length = 0; - shadowsArray.length = 0; - - } - - function pushLight(light) { - - lightsArray.push(light); - - } - - function pushShadow(shadowLight) { - - shadowsArray.push(shadowLight); - - } - - function setupLights(physicallyCorrectLights) { - - lights.setup(lightsArray, physicallyCorrectLights); - - } - - function setupLightsView(camera) { - - lights.setupView(lightsArray, camera); - - } - - const state = { - lightsArray: lightsArray, - shadowsArray: shadowsArray, - - lights: lights - }; - - return { - init: init, - state: state, - setupLights: setupLights, - setupLightsView: setupLightsView, - - pushLight: pushLight, - pushShadow: pushShadow - }; - -} - -function WebGLRenderStates(extensions, capabilities) { - - let renderStates = new WeakMap(); - - function get(scene, renderCallDepth = 0) { - - const renderStateArray = renderStates.get(scene); - let renderState; - - if (renderStateArray === undefined) { - - renderState = new WebGLRenderState(extensions, capabilities); - renderStates.set(scene, [renderState]); - - } else { - - if (renderCallDepth >= renderStateArray.length) { - - renderState = new WebGLRenderState(extensions, capabilities); - renderStateArray.push(renderState); - - } else { - - renderState = renderStateArray[renderCallDepth]; - - } - - } - - return renderState; - - } - - function dispose() { - - renderStates = new WeakMap(); - - } - - return { - get: get, - dispose: dispose - }; - -} - -class MeshDepthMaterial extends Material { - - constructor(parameters) { - - super(); - - this.isMeshDepthMaterial = true; - - this.type = 'MeshDepthMaterial'; - - this.depthPacking = BasicDepthPacking; - - this.map = null; - - this.alphaMap = null; - - this.displacementMap = null; - this.displacementScale = 1; - this.displacementBias = 0; - - this.wireframe = false; - this.wireframeLinewidth = 1; - - this.setValues(parameters); - - } - - copy(source) { - - super.copy(source); - - this.depthPacking = 
source.depthPacking; - - this.map = source.map; - - this.alphaMap = source.alphaMap; - - this.displacementMap = source.displacementMap; - this.displacementScale = source.displacementScale; - this.displacementBias = source.displacementBias; - - this.wireframe = source.wireframe; - this.wireframeLinewidth = source.wireframeLinewidth; - - return this; - - } - -} - -class MeshDistanceMaterial extends Material { - - constructor(parameters) { - - super(); - - this.isMeshDistanceMaterial = true; - - this.type = 'MeshDistanceMaterial'; - - this.referencePosition = new Vector3(); - this.nearDistance = 1; - this.farDistance = 1000; - - this.map = null; - - this.alphaMap = null; - - this.displacementMap = null; - this.displacementScale = 1; - this.displacementBias = 0; - - this.setValues(parameters); - - } - - copy(source) { - - super.copy(source); - - this.referencePosition.copy(source.referencePosition); - this.nearDistance = source.nearDistance; - this.farDistance = source.farDistance; - - this.map = source.map; - - this.alphaMap = source.alphaMap; - - this.displacementMap = source.displacementMap; - this.displacementScale = source.displacementScale; - this.displacementBias = source.displacementBias; - - return this; - - } - -} - -const vertex = "void main() {\n\tgl_Position = vec4( position, 1.0 );\n}"; - -const fragment = "uniform sampler2D shadow_pass;\nuniform vec2 resolution;\nuniform float radius;\n#include \nvoid main() {\n\tconst float samples = float( VSM_SAMPLES );\n\tfloat mean = 0.0;\n\tfloat squared_mean = 0.0;\n\tfloat uvStride = samples <= 1.0 ? 0.0 : 2.0 / ( samples - 1.0 );\n\tfloat uvStart = samples <= 1.0 ? 0.0 : - 1.0;\n\tfor ( float i = 0.0; i < samples; i ++ ) {\n\t\tfloat uvOffset = uvStart + i * uvStride;\n\t\t#ifdef HORIZONTAL_PASS\n\t\t\tvec2 distribution = unpackRGBATo2Half( texture2D( shadow_pass, ( gl_FragCoord.xy + vec2( uvOffset, 0.0 ) * radius ) / resolution ) );\n\t\t\tmean += distribution.x;\n\t\t\tsquared_mean += distribution.y * distribution.y + distribution.x * distribution.x;\n\t\t#else\n\t\t\tfloat depth = unpackRGBAToDepth( texture2D( shadow_pass, ( gl_FragCoord.xy + vec2( 0.0, uvOffset ) * radius ) / resolution ) );\n\t\t\tmean += depth;\n\t\t\tsquared_mean += depth * depth;\n\t\t#endif\n\t}\n\tmean = mean / samples;\n\tsquared_mean = squared_mean / samples;\n\tfloat std_dev = sqrt( squared_mean - mean * mean );\n\tgl_FragColor = pack2HalfToRGBA( vec2( mean, std_dev ) );\n}"; - -function WebGLShadowMap(_renderer, _objects, _capabilities) { - - let _frustum = new Frustum(); - - const _shadowMapSize = new Vector2(), - _viewportSize = new Vector2(), - - _viewport = new Vector4(), - - _depthMaterial = new MeshDepthMaterial({ depthPacking: RGBADepthPacking }), - _distanceMaterial = new MeshDistanceMaterial(), - - _materialCache = {}, - - _maxTextureSize = _capabilities.maxTextureSize; - - const shadowSide = { [FrontSide]: BackSide, [BackSide]: FrontSide, [DoubleSide]: DoubleSide }; - - const shadowMaterialVertical = new ShaderMaterial({ - defines: { - VSM_SAMPLES: 8 - }, - uniforms: { - shadow_pass: { value: null }, - resolution: { value: new Vector2() }, - radius: { value: 4.0 } - }, - - vertexShader: vertex, - fragmentShader: fragment - - }); - - const shadowMaterialHorizontal = shadowMaterialVertical.clone(); - shadowMaterialHorizontal.defines.HORIZONTAL_PASS = 1; - - const fullScreenTri = new BufferGeometry(); - fullScreenTri.setAttribute( - 'position', - new BufferAttribute( - new Float32Array([- 1, - 1, 0.5, 3, - 1, 0.5, - 1, 3, 0.5]), - 3 - ) - ); - - 
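// The three vertices above, (-1,-1), (3,-1) and (-1,3) at z = 0.5, form a single
// oversized triangle that covers the whole viewport; the VSM blur passes render
// it once per pass instead of drawing a two-triangle quad.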
const fullScreenMesh = new Mesh(fullScreenTri, shadowMaterialVertical); - - const scope = this; - - this.enabled = false; - - this.autoUpdate = true; - this.needsUpdate = false; - - this.type = PCFShadowMap; - - this.render = function (lights, scene, camera) { - - if (scope.enabled === false) return; - if (scope.autoUpdate === false && scope.needsUpdate === false) return; - - if (lights.length === 0) return; - - const currentRenderTarget = _renderer.getRenderTarget(); - const activeCubeFace = _renderer.getActiveCubeFace(); - const activeMipmapLevel = _renderer.getActiveMipmapLevel(); - - const _state = _renderer.state; - - // Set GL state for depth map. - _state.setBlending(NoBlending); - _state.buffers.color.setClear(1, 1, 1, 1); - _state.buffers.depth.setTest(true); - _state.setScissorTest(false); - - // render depth map - - for (let i = 0, il = lights.length; i < il; i++) { - - const light = lights[i]; - const shadow = light.shadow; - - if (shadow === undefined) { - - console.warn('THREE.WebGLShadowMap:', light, 'has no shadow.'); - continue; - - } - - if (shadow.autoUpdate === false && shadow.needsUpdate === false) continue; - - _shadowMapSize.copy(shadow.mapSize); - - const shadowFrameExtents = shadow.getFrameExtents(); - - _shadowMapSize.multiply(shadowFrameExtents); - - _viewportSize.copy(shadow.mapSize); - - if (_shadowMapSize.x > _maxTextureSize || _shadowMapSize.y > _maxTextureSize) { - - if (_shadowMapSize.x > _maxTextureSize) { - - _viewportSize.x = Math.floor(_maxTextureSize / shadowFrameExtents.x); - _shadowMapSize.x = _viewportSize.x * shadowFrameExtents.x; - shadow.mapSize.x = _viewportSize.x; - - } - - if (_shadowMapSize.y > _maxTextureSize) { - - _viewportSize.y = Math.floor(_maxTextureSize / shadowFrameExtents.y); - _shadowMapSize.y = _viewportSize.y * shadowFrameExtents.y; - shadow.mapSize.y = _viewportSize.y; - - } - - } - - if (shadow.map === null) { - - const pars = (this.type !== VSMShadowMap) ? 
{ minFilter: NearestFilter, magFilter: NearestFilter } : {}; - - shadow.map = new WebGLRenderTarget(_shadowMapSize.x, _shadowMapSize.y, pars); - shadow.map.texture.name = light.name + '.shadowMap'; - - shadow.camera.updateProjectionMatrix(); - - } - - _renderer.setRenderTarget(shadow.map); - _renderer.clear(); - - const viewportCount = shadow.getViewportCount(); - - for (let vp = 0; vp < viewportCount; vp++) { - - const viewport = shadow.getViewport(vp); - - _viewport.set( - _viewportSize.x * viewport.x, - _viewportSize.y * viewport.y, - _viewportSize.x * viewport.z, - _viewportSize.y * viewport.w - ); - - _state.viewport(_viewport); - - shadow.updateMatrices(light, vp); - - _frustum = shadow.getFrustum(); - - renderObject(scene, camera, shadow.camera, light, this.type); - - } - - // do blur pass for VSM - - if (shadow.isPointLightShadow !== true && this.type === VSMShadowMap) { - - VSMPass(shadow, camera); - - } - - shadow.needsUpdate = false; - - } - - scope.needsUpdate = false; - - _renderer.setRenderTarget(currentRenderTarget, activeCubeFace, activeMipmapLevel); - - }; - - function VSMPass(shadow, camera) { - - const geometry = _objects.update(fullScreenMesh); - - if (shadowMaterialVertical.defines.VSM_SAMPLES !== shadow.blurSamples) { - - shadowMaterialVertical.defines.VSM_SAMPLES = shadow.blurSamples; - shadowMaterialHorizontal.defines.VSM_SAMPLES = shadow.blurSamples; - - shadowMaterialVertical.needsUpdate = true; - shadowMaterialHorizontal.needsUpdate = true; - - } - - if (shadow.mapPass === null) { - - shadow.mapPass = new WebGLRenderTarget(_shadowMapSize.x, _shadowMapSize.y); - - } - - // vertical pass - - shadowMaterialVertical.uniforms.shadow_pass.value = shadow.map.texture; - shadowMaterialVertical.uniforms.resolution.value = shadow.mapSize; - shadowMaterialVertical.uniforms.radius.value = shadow.radius; - _renderer.setRenderTarget(shadow.mapPass); - _renderer.clear(); - _renderer.renderBufferDirect(camera, null, geometry, shadowMaterialVertical, fullScreenMesh, null); - - // horizontal pass - - shadowMaterialHorizontal.uniforms.shadow_pass.value = shadow.mapPass.texture; - shadowMaterialHorizontal.uniforms.resolution.value = shadow.mapSize; - shadowMaterialHorizontal.uniforms.radius.value = shadow.radius; - _renderer.setRenderTarget(shadow.map); - _renderer.clear(); - _renderer.renderBufferDirect(camera, null, geometry, shadowMaterialHorizontal, fullScreenMesh, null); - - } - - function getDepthMaterial(object, material, light, shadowCameraNear, shadowCameraFar, type) { - - let result = null; - - const customMaterial = (light.isPointLight === true) ? object.customDistanceMaterial : object.customDepthMaterial; - - if (customMaterial !== undefined) { - - result = customMaterial; - - } else { - - result = (light.isPointLight === true) ? 
_distanceMaterial : _depthMaterial; - - if ((_renderer.localClippingEnabled && material.clipShadows === true && Array.isArray(material.clippingPlanes) && material.clippingPlanes.length !== 0) || - (material.displacementMap && material.displacementScale !== 0) || - (material.alphaMap && material.alphaTest > 0) || - (material.map && material.alphaTest > 0)) { - - // in this case we need a unique material instance reflecting the - // appropriate state - - const keyA = result.uuid, keyB = material.uuid; - - let materialsForVariant = _materialCache[keyA]; - - if (materialsForVariant === undefined) { - - materialsForVariant = {}; - _materialCache[keyA] = materialsForVariant; - - } - - let cachedMaterial = materialsForVariant[keyB]; - - if (cachedMaterial === undefined) { - - cachedMaterial = result.clone(); - materialsForVariant[keyB] = cachedMaterial; - - } - - result = cachedMaterial; - - } - - } - - result.visible = material.visible; - result.wireframe = material.wireframe; - - if (type === VSMShadowMap) { - - result.side = (material.shadowSide !== null) ? material.shadowSide : material.side; - - } else { - - result.side = (material.shadowSide !== null) ? material.shadowSide : shadowSide[material.side]; - - } - - result.alphaMap = material.alphaMap; - result.alphaTest = material.alphaTest; - result.map = material.map; - - result.clipShadows = material.clipShadows; - result.clippingPlanes = material.clippingPlanes; - result.clipIntersection = material.clipIntersection; - - result.displacementMap = material.displacementMap; - result.displacementScale = material.displacementScale; - result.displacementBias = material.displacementBias; - - result.wireframeLinewidth = material.wireframeLinewidth; - result.linewidth = material.linewidth; - - if (light.isPointLight === true && result.isMeshDistanceMaterial === true) { - - result.referencePosition.setFromMatrixPosition(light.matrixWorld); - result.nearDistance = shadowCameraNear; - result.farDistance = shadowCameraFar; - - } - - return result; - - } - - function renderObject(object, camera, shadowCamera, light, type) { - - if (object.visible === false) return; - - const visible = object.layers.test(camera.layers); - - if (visible && (object.isMesh || object.isLine || object.isPoints)) { - - if ((object.castShadow || (object.receiveShadow && type === VSMShadowMap)) && (!object.frustumCulled || _frustum.intersectsObject(object))) { - - object.modelViewMatrix.multiplyMatrices(shadowCamera.matrixWorldInverse, object.matrixWorld); - - const geometry = _objects.update(object); - const material = object.material; - - if (Array.isArray(material)) { - - const groups = geometry.groups; - - for (let k = 0, kl = groups.length; k < kl; k++) { - - const group = groups[k]; - const groupMaterial = material[group.materialIndex]; - - if (groupMaterial && groupMaterial.visible) { - - const depthMaterial = getDepthMaterial(object, groupMaterial, light, shadowCamera.near, shadowCamera.far, type); - - _renderer.renderBufferDirect(shadowCamera, null, geometry, depthMaterial, object, group); - - } - - } - - } else if (material.visible) { - - const depthMaterial = getDepthMaterial(object, material, light, shadowCamera.near, shadowCamera.far, type); - - _renderer.renderBufferDirect(shadowCamera, null, geometry, depthMaterial, object, null); - - } - - } - - } - - const children = object.children; - - for (let i = 0, l = children.length; i < l; i++) { - - renderObject(children[i], camera, shadowCamera, light, type); - - } - - } - -} - -function WebGLState(gl, extensions, 
capabilities) { - - const isWebGL2 = capabilities.isWebGL2; - - function ColorBuffer() { - - let locked = false; - - const color = new Vector4(); - let currentColorMask = null; - const currentColorClear = new Vector4(0, 0, 0, 0); - - return { - - setMask: function (colorMask) { - - if (currentColorMask !== colorMask && !locked) { - - gl.colorMask(colorMask, colorMask, colorMask, colorMask); - currentColorMask = colorMask; - - } - - }, - - setLocked: function (lock) { - - locked = lock; - - }, - - setClear: function (r, g, b, a, premultipliedAlpha) { - - if (premultipliedAlpha === true) { - - r *= a; g *= a; b *= a; - - } - - color.set(r, g, b, a); - - if (currentColorClear.equals(color) === false) { - - gl.clearColor(r, g, b, a); - currentColorClear.copy(color); - - } - - }, - - reset: function () { - - locked = false; - - currentColorMask = null; - currentColorClear.set(- 1, 0, 0, 0); // set to invalid state - - } - - }; - - } - - function DepthBuffer() { - - let locked = false; - - let currentDepthMask = null; - let currentDepthFunc = null; - let currentDepthClear = null; - - return { - - setTest: function (depthTest) { - - if (depthTest) { - - enable(2929); - - } else { - - disable(2929); - - } - - }, - - setMask: function (depthMask) { - - if (currentDepthMask !== depthMask && !locked) { - - gl.depthMask(depthMask); - currentDepthMask = depthMask; - - } - - }, - - setFunc: function (depthFunc) { - - if (currentDepthFunc !== depthFunc) { - - switch (depthFunc) { - - case NeverDepth: - - gl.depthFunc(512); - break; - - case AlwaysDepth: - - gl.depthFunc(519); - break; - - case LessDepth: - - gl.depthFunc(513); - break; - - case LessEqualDepth: - - gl.depthFunc(515); - break; - - case EqualDepth: - - gl.depthFunc(514); - break; - - case GreaterEqualDepth: - - gl.depthFunc(518); - break; - - case GreaterDepth: - - gl.depthFunc(516); - break; - - case NotEqualDepth: - - gl.depthFunc(517); - break; - - default: - - gl.depthFunc(515); - - } - - currentDepthFunc = depthFunc; - - } - - }, - - setLocked: function (lock) { - - locked = lock; - - }, - - setClear: function (depth) { - - if (currentDepthClear !== depth) { - - gl.clearDepth(depth); - currentDepthClear = depth; - - } - - }, - - reset: function () { - - locked = false; - - currentDepthMask = null; - currentDepthFunc = null; - currentDepthClear = null; - - } - - }; - - } - - function StencilBuffer() { - - let locked = false; - - let currentStencilMask = null; - let currentStencilFunc = null; - let currentStencilRef = null; - let currentStencilFuncMask = null; - let currentStencilFail = null; - let currentStencilZFail = null; - let currentStencilZPass = null; - let currentStencilClear = null; - - return { - - setTest: function (stencilTest) { - - if (!locked) { - - if (stencilTest) { - - enable(2960); - - } else { - - disable(2960); - - } - - } - - }, - - setMask: function (stencilMask) { - - if (currentStencilMask !== stencilMask && !locked) { - - gl.stencilMask(stencilMask); - currentStencilMask = stencilMask; - - } - - }, - - setFunc: function (stencilFunc, stencilRef, stencilMask) { - - if (currentStencilFunc !== stencilFunc || - currentStencilRef !== stencilRef || - currentStencilFuncMask !== stencilMask) { - - gl.stencilFunc(stencilFunc, stencilRef, stencilMask); - - currentStencilFunc = stencilFunc; - currentStencilRef = stencilRef; - currentStencilFuncMask = stencilMask; - - } - - }, - - setOp: function (stencilFail, stencilZFail, stencilZPass) { - - if (currentStencilFail !== stencilFail || - currentStencilZFail !== stencilZFail 
|| - currentStencilZPass !== stencilZPass) { - - gl.stencilOp(stencilFail, stencilZFail, stencilZPass); - - currentStencilFail = stencilFail; - currentStencilZFail = stencilZFail; - currentStencilZPass = stencilZPass; - - } - - }, - - setLocked: function (lock) { - - locked = lock; - - }, - - setClear: function (stencil) { - - if (currentStencilClear !== stencil) { - - gl.clearStencil(stencil); - currentStencilClear = stencil; - - } - - }, - - reset: function () { - - locked = false; - - currentStencilMask = null; - currentStencilFunc = null; - currentStencilRef = null; - currentStencilFuncMask = null; - currentStencilFail = null; - currentStencilZFail = null; - currentStencilZPass = null; - currentStencilClear = null; - - } - - }; - - } - - // - - const colorBuffer = new ColorBuffer(); - const depthBuffer = new DepthBuffer(); - const stencilBuffer = new StencilBuffer(); - - const uboBindings = new WeakMap(); - const uboProgramMap = new WeakMap(); - - let enabledCapabilities = {}; - - let currentBoundFramebuffers = {}; - let currentDrawbuffers = new WeakMap(); - let defaultDrawbuffers = []; - - let currentProgram = null; - - let currentBlendingEnabled = false; - let currentBlending = null; - let currentBlendEquation = null; - let currentBlendSrc = null; - let currentBlendDst = null; - let currentBlendEquationAlpha = null; - let currentBlendSrcAlpha = null; - let currentBlendDstAlpha = null; - let currentPremultipledAlpha = false; - - let currentFlipSided = null; - let currentCullFace = null; - - let currentLineWidth = null; - - let currentPolygonOffsetFactor = null; - let currentPolygonOffsetUnits = null; - - const maxTextures = gl.getParameter(35661); - - let lineWidthAvailable = false; - let version = 0; - const glVersion = gl.getParameter(7938); - - if (glVersion.indexOf('WebGL') !== - 1) { - - version = parseFloat(/^WebGL (\d)/.exec(glVersion)[1]); - lineWidthAvailable = (version >= 1.0); - - } else if (glVersion.indexOf('OpenGL ES') !== - 1) { - - version = parseFloat(/^OpenGL ES (\d)/.exec(glVersion)[1]); - lineWidthAvailable = (version >= 2.0); - - } - - let currentTextureSlot = null; - let currentBoundTextures = {}; - - const scissorParam = gl.getParameter(3088); - const viewportParam = gl.getParameter(2978); - - const currentScissor = new Vector4().fromArray(scissorParam); - const currentViewport = new Vector4().fromArray(viewportParam); - - function createTexture(type, target, count) { - - const data = new Uint8Array(4); // 4 is required to match default unpack alignment of 4. 
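// Builds a 1x1 placeholder texture; bindTexture() below falls back to these
// "empty" textures whenever a slot would otherwise have no texture bound.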
- const texture = gl.createTexture(); - - gl.bindTexture(type, texture); - gl.texParameteri(type, 10241, 9728); - gl.texParameteri(type, 10240, 9728); - - for (let i = 0; i < count; i++) { - - gl.texImage2D(target + i, 0, 6408, 1, 1, 0, 6408, 5121, data); - - } - - return texture; - - } - - const emptyTextures = {}; - emptyTextures[3553] = createTexture(3553, 3553, 1); - emptyTextures[34067] = createTexture(34067, 34069, 6); - - // init - - colorBuffer.setClear(0, 0, 0, 1); - depthBuffer.setClear(1); - stencilBuffer.setClear(0); - - enable(2929); - depthBuffer.setFunc(LessEqualDepth); - - setFlipSided(false); - setCullFace(CullFaceBack); - enable(2884); - - setBlending(NoBlending); - - // - - function enable(id) { - - if (enabledCapabilities[id] !== true) { - - gl.enable(id); - enabledCapabilities[id] = true; - - } - - } - - function disable(id) { - - if (enabledCapabilities[id] !== false) { - - gl.disable(id); - enabledCapabilities[id] = false; - - } - - } - - function bindFramebuffer(target, framebuffer) { - - if (currentBoundFramebuffers[target] !== framebuffer) { - - gl.bindFramebuffer(target, framebuffer); - - currentBoundFramebuffers[target] = framebuffer; - - if (isWebGL2) { - - // 36009 is equivalent to 36160 - - if (target === 36009) { - - currentBoundFramebuffers[36160] = framebuffer; - - } - - if (target === 36160) { - - currentBoundFramebuffers[36009] = framebuffer; - - } - - } - - return true; - - } - - return false; - - } - - function drawBuffers(renderTarget, framebuffer) { - - let drawBuffers = defaultDrawbuffers; - - let needsUpdate = false; - - if (renderTarget) { - - drawBuffers = currentDrawbuffers.get(framebuffer); - - if (drawBuffers === undefined) { - - drawBuffers = []; - currentDrawbuffers.set(framebuffer, drawBuffers); - - } - - if (renderTarget.isWebGLMultipleRenderTargets) { - - const textures = renderTarget.texture; - - if (drawBuffers.length !== textures.length || drawBuffers[0] !== 36064) { - - for (let i = 0, il = textures.length; i < il; i++) { - - drawBuffers[i] = 36064 + i; - - } - - drawBuffers.length = textures.length; - - needsUpdate = true; - - } - - } else { - - if (drawBuffers[0] !== 36064) { - - drawBuffers[0] = 36064; - - needsUpdate = true; - - } - - } - - } else { - - if (drawBuffers[0] !== 1029) { - - drawBuffers[0] = 1029; - - needsUpdate = true; - - } - - } - - if (needsUpdate) { - - if (capabilities.isWebGL2) { - - gl.drawBuffers(drawBuffers); - - } else { - - extensions.get('WEBGL_draw_buffers').drawBuffersWEBGL(drawBuffers); - - } - - } - - - } - - function useProgram(program) { - - if (currentProgram !== program) { - - gl.useProgram(program); - - currentProgram = program; - - return true; - - } - - return false; - - } - - const equationToGL = { - [AddEquation]: 32774, - [SubtractEquation]: 32778, - [ReverseSubtractEquation]: 32779 - }; - - if (isWebGL2) { - - equationToGL[MinEquation] = 32775; - equationToGL[MaxEquation] = 32776; - - } else { - - const extension = extensions.get('EXT_blend_minmax'); - - if (extension !== null) { - - equationToGL[MinEquation] = extension.MIN_EXT; - equationToGL[MaxEquation] = extension.MAX_EXT; - - } - - } - - const factorToGL = { - [ZeroFactor]: 0, - [OneFactor]: 1, - [SrcColorFactor]: 768, - [SrcAlphaFactor]: 770, - [SrcAlphaSaturateFactor]: 776, - [DstColorFactor]: 774, - [DstAlphaFactor]: 772, - [OneMinusSrcColorFactor]: 769, - [OneMinusSrcAlphaFactor]: 771, - [OneMinusDstColorFactor]: 775, - [OneMinusDstAlphaFactor]: 773 - }; - - function setBlending(blending, blendEquation, blendSrc, blendDst, 
blendEquationAlpha, blendSrcAlpha, blendDstAlpha, premultipliedAlpha) { - - if (blending === NoBlending) { - - if (currentBlendingEnabled === true) { - - disable(3042); - currentBlendingEnabled = false; - - } - - return; - - } - - if (currentBlendingEnabled === false) { - - enable(3042); - currentBlendingEnabled = true; - - } - - if (blending !== CustomBlending) { - - if (blending !== currentBlending || premultipliedAlpha !== currentPremultipledAlpha) { - - if (currentBlendEquation !== AddEquation || currentBlendEquationAlpha !== AddEquation) { - - gl.blendEquation(32774); - - currentBlendEquation = AddEquation; - currentBlendEquationAlpha = AddEquation; - - } - - if (premultipliedAlpha) { - - switch (blending) { - - case NormalBlending: - gl.blendFuncSeparate(1, 771, 1, 771); - break; - - case AdditiveBlending: - gl.blendFunc(1, 1); - break; - - case SubtractiveBlending: - gl.blendFuncSeparate(0, 769, 0, 1); - break; - - case MultiplyBlending: - gl.blendFuncSeparate(0, 768, 0, 770); - break; - - default: - console.error('THREE.WebGLState: Invalid blending: ', blending); - break; - - } - - } else { - - switch (blending) { - - case NormalBlending: - gl.blendFuncSeparate(770, 771, 1, 771); - break; - - case AdditiveBlending: - gl.blendFunc(770, 1); - break; - - case SubtractiveBlending: - gl.blendFuncSeparate(0, 769, 0, 1); - break; - - case MultiplyBlending: - gl.blendFunc(0, 768); - break; - - default: - console.error('THREE.WebGLState: Invalid blending: ', blending); - break; - - } - - } - - currentBlendSrc = null; - currentBlendDst = null; - currentBlendSrcAlpha = null; - currentBlendDstAlpha = null; - - currentBlending = blending; - currentPremultipledAlpha = premultipliedAlpha; - - } - - return; - - } - - // custom blending - - blendEquationAlpha = blendEquationAlpha || blendEquation; - blendSrcAlpha = blendSrcAlpha || blendSrc; - blendDstAlpha = blendDstAlpha || blendDst; - - if (blendEquation !== currentBlendEquation || blendEquationAlpha !== currentBlendEquationAlpha) { - - gl.blendEquationSeparate(equationToGL[blendEquation], equationToGL[blendEquationAlpha]); - - currentBlendEquation = blendEquation; - currentBlendEquationAlpha = blendEquationAlpha; - - } - - if (blendSrc !== currentBlendSrc || blendDst !== currentBlendDst || blendSrcAlpha !== currentBlendSrcAlpha || blendDstAlpha !== currentBlendDstAlpha) { - - gl.blendFuncSeparate(factorToGL[blendSrc], factorToGL[blendDst], factorToGL[blendSrcAlpha], factorToGL[blendDstAlpha]); - - currentBlendSrc = blendSrc; - currentBlendDst = blendDst; - currentBlendSrcAlpha = blendSrcAlpha; - currentBlendDstAlpha = blendDstAlpha; - - } - - currentBlending = blending; - currentPremultipledAlpha = false; - - } - - function setMaterial(material, frontFaceCW) { - - material.side === DoubleSide - ? disable(2884) - : enable(2884); - - let flipSided = (material.side === BackSide); - if (frontFaceCW) flipSided = !flipSided; - - setFlipSided(flipSided); - - (material.blending === NormalBlending && material.transparent === false) - ? 
setBlending(NoBlending) - : setBlending(material.blending, material.blendEquation, material.blendSrc, material.blendDst, material.blendEquationAlpha, material.blendSrcAlpha, material.blendDstAlpha, material.premultipliedAlpha); - - depthBuffer.setFunc(material.depthFunc); - depthBuffer.setTest(material.depthTest); - depthBuffer.setMask(material.depthWrite); - colorBuffer.setMask(material.colorWrite); - - const stencilWrite = material.stencilWrite; - stencilBuffer.setTest(stencilWrite); - if (stencilWrite) { - - stencilBuffer.setMask(material.stencilWriteMask); - stencilBuffer.setFunc(material.stencilFunc, material.stencilRef, material.stencilFuncMask); - stencilBuffer.setOp(material.stencilFail, material.stencilZFail, material.stencilZPass); - - } - - setPolygonOffset(material.polygonOffset, material.polygonOffsetFactor, material.polygonOffsetUnits); - - material.alphaToCoverage === true - ? enable(32926) - : disable(32926); - - } - - // - - function setFlipSided(flipSided) { - - if (currentFlipSided !== flipSided) { - - if (flipSided) { - - gl.frontFace(2304); - - } else { - - gl.frontFace(2305); - - } - - currentFlipSided = flipSided; - - } - - } - - function setCullFace(cullFace) { - - if (cullFace !== CullFaceNone) { - - enable(2884); - - if (cullFace !== currentCullFace) { - - if (cullFace === CullFaceBack) { - - gl.cullFace(1029); - - } else if (cullFace === CullFaceFront) { - - gl.cullFace(1028); - - } else { - - gl.cullFace(1032); - - } - - } - - } else { - - disable(2884); - - } - - currentCullFace = cullFace; - - } - - function setLineWidth(width) { - - if (width !== currentLineWidth) { - - if (lineWidthAvailable) gl.lineWidth(width); - - currentLineWidth = width; - - } - - } - - function setPolygonOffset(polygonOffset, factor, units) { - - if (polygonOffset) { - - enable(32823); - - if (currentPolygonOffsetFactor !== factor || currentPolygonOffsetUnits !== units) { - - gl.polygonOffset(factor, units); - - currentPolygonOffsetFactor = factor; - currentPolygonOffsetUnits = units; - - } - - } else { - - disable(32823); - - } - - } - - function setScissorTest(scissorTest) { - - if (scissorTest) { - - enable(3089); - - } else { - - disable(3089); - - } - - } - - // texture - - function activeTexture(webglSlot) { - - if (webglSlot === undefined) webglSlot = 33984 + maxTextures - 1; - - if (currentTextureSlot !== webglSlot) { - - gl.activeTexture(webglSlot); - currentTextureSlot = webglSlot; - - } - - } - - function bindTexture(webglType, webglTexture, webglSlot) { - - if (webglSlot === undefined) { - - if (currentTextureSlot === null) { - - webglSlot = 33984 + maxTextures - 1; - - } else { - - webglSlot = currentTextureSlot; - - } - - } - - let boundTexture = currentBoundTextures[webglSlot]; - - if (boundTexture === undefined) { - - boundTexture = { type: undefined, texture: undefined }; - currentBoundTextures[webglSlot] = boundTexture; - - } - - if (boundTexture.type !== webglType || boundTexture.texture !== webglTexture) { - - if (currentTextureSlot !== webglSlot) { - - gl.activeTexture(webglSlot); - currentTextureSlot = webglSlot; - - } - - gl.bindTexture(webglType, webglTexture || emptyTextures[webglType]); - - boundTexture.type = webglType; - boundTexture.texture = webglTexture; - - } - - } - - function unbindTexture() { - - const boundTexture = currentBoundTextures[currentTextureSlot]; - - if (boundTexture !== undefined && boundTexture.type !== undefined) { - - gl.bindTexture(boundTexture.type, null); - - boundTexture.type = undefined; - boundTexture.texture = undefined; - - } - 
- } - - function compressedTexImage2D() { - - try { - - gl.compressedTexImage2D.apply(gl, arguments); - - } catch (error) { - - console.error('THREE.WebGLState:', error); - - } - - } - - function compressedTexImage3D() { - - try { - - gl.compressedTexImage3D.apply(gl, arguments); - - } catch (error) { - - console.error('THREE.WebGLState:', error); - - } - - } - - function texSubImage2D() { - - try { - - gl.texSubImage2D.apply(gl, arguments); - - } catch (error) { - - console.error('THREE.WebGLState:', error); - - } - - } - - function texSubImage3D() { - - try { - - gl.texSubImage3D.apply(gl, arguments); - - } catch (error) { - - console.error('THREE.WebGLState:', error); - - } - - } - - function compressedTexSubImage2D() { - - try { - - gl.compressedTexSubImage2D.apply(gl, arguments); - - } catch (error) { - - console.error('THREE.WebGLState:', error); - - } - - } - - function compressedTexSubImage3D() { - - try { - - gl.compressedTexSubImage3D.apply(gl, arguments); - - } catch (error) { - - console.error('THREE.WebGLState:', error); - - } - - } - - function texStorage2D() { - - try { - - gl.texStorage2D.apply(gl, arguments); - - } catch (error) { - - console.error('THREE.WebGLState:', error); - - } - - } - - function texStorage3D() { - - try { - - gl.texStorage3D.apply(gl, arguments); - - } catch (error) { - - console.error('THREE.WebGLState:', error); - - } - - } - - function texImage2D() { - - try { - - gl.texImage2D.apply(gl, arguments); - - } catch (error) { - - console.error('THREE.WebGLState:', error); - - } - - } - - function texImage3D() { - - try { - - gl.texImage3D.apply(gl, arguments); - - } catch (error) { - - console.error('THREE.WebGLState:', error); - - } - - } - - // - - function scissor(scissor) { - - if (currentScissor.equals(scissor) === false) { - - gl.scissor(scissor.x, scissor.y, scissor.z, scissor.w); - currentScissor.copy(scissor); - - } - - } - - function viewport(viewport) { - - if (currentViewport.equals(viewport) === false) { - - gl.viewport(viewport.x, viewport.y, viewport.z, viewport.w); - currentViewport.copy(viewport); - - } - - } - - function updateUBOMapping(uniformsGroup, program) { - - let mapping = uboProgramMap.get(program); - - if (mapping === undefined) { - - mapping = new WeakMap(); - - uboProgramMap.set(program, mapping); - - } - - let blockIndex = mapping.get(uniformsGroup); - - if (blockIndex === undefined) { - - blockIndex = gl.getUniformBlockIndex(program, uniformsGroup.name); - - mapping.set(uniformsGroup, blockIndex); - - } - - } - - function uniformBlockBinding(uniformsGroup, program) { - - const mapping = uboProgramMap.get(program); - const blockIndex = mapping.get(uniformsGroup); - - if (uboBindings.get(program) !== blockIndex) { - - // bind shader specific block index to global block point - gl.uniformBlockBinding(program, blockIndex, uniformsGroup.__bindingPointIndex); - - uboBindings.set(program, blockIndex); - - } - - } - - // - - function reset() { - - // reset state - - gl.disable(3042); - gl.disable(2884); - gl.disable(2929); - gl.disable(32823); - gl.disable(3089); - gl.disable(2960); - gl.disable(32926); - - gl.blendEquation(32774); - gl.blendFunc(1, 0); - gl.blendFuncSeparate(1, 0, 1, 0); - - gl.colorMask(true, true, true, true); - gl.clearColor(0, 0, 0, 0); - - gl.depthMask(true); - gl.depthFunc(513); - gl.clearDepth(1); - - gl.stencilMask(0xffffffff); - gl.stencilFunc(519, 0, 0xffffffff); - gl.stencilOp(7680, 7680, 7680); - gl.clearStencil(0); - - gl.cullFace(1029); - gl.frontFace(2305); - - gl.polygonOffset(0, 0); - - 
gl.activeTexture(33984); - - gl.bindFramebuffer(36160, null); - - if (isWebGL2 === true) { - - gl.bindFramebuffer(36009, null); - gl.bindFramebuffer(36008, null); - - } - - gl.useProgram(null); - - gl.lineWidth(1); - - gl.scissor(0, 0, gl.canvas.width, gl.canvas.height); - gl.viewport(0, 0, gl.canvas.width, gl.canvas.height); - - // reset internals - - enabledCapabilities = {}; - - currentTextureSlot = null; - currentBoundTextures = {}; - - currentBoundFramebuffers = {}; - currentDrawbuffers = new WeakMap(); - defaultDrawbuffers = []; - - currentProgram = null; - - currentBlendingEnabled = false; - currentBlending = null; - currentBlendEquation = null; - currentBlendSrc = null; - currentBlendDst = null; - currentBlendEquationAlpha = null; - currentBlendSrcAlpha = null; - currentBlendDstAlpha = null; - currentPremultipledAlpha = false; - - currentFlipSided = null; - currentCullFace = null; - - currentLineWidth = null; - - currentPolygonOffsetFactor = null; - currentPolygonOffsetUnits = null; - - currentScissor.set(0, 0, gl.canvas.width, gl.canvas.height); - currentViewport.set(0, 0, gl.canvas.width, gl.canvas.height); - - colorBuffer.reset(); - depthBuffer.reset(); - stencilBuffer.reset(); - - } - - return { - - buffers: { - color: colorBuffer, - depth: depthBuffer, - stencil: stencilBuffer - }, - - enable: enable, - disable: disable, - - bindFramebuffer: bindFramebuffer, - drawBuffers: drawBuffers, - - useProgram: useProgram, - - setBlending: setBlending, - setMaterial: setMaterial, - - setFlipSided: setFlipSided, - setCullFace: setCullFace, - - setLineWidth: setLineWidth, - setPolygonOffset: setPolygonOffset, - - setScissorTest: setScissorTest, - - activeTexture: activeTexture, - bindTexture: bindTexture, - unbindTexture: unbindTexture, - compressedTexImage2D: compressedTexImage2D, - compressedTexImage3D: compressedTexImage3D, - texImage2D: texImage2D, - texImage3D: texImage3D, - - updateUBOMapping: updateUBOMapping, - uniformBlockBinding: uniformBlockBinding, - - texStorage2D: texStorage2D, - texStorage3D: texStorage3D, - texSubImage2D: texSubImage2D, - texSubImage3D: texSubImage3D, - compressedTexSubImage2D: compressedTexSubImage2D, - compressedTexSubImage3D: compressedTexSubImage3D, - - scissor: scissor, - viewport: viewport, - - reset: reset - - }; - -} - -function WebGLTextures(_gl, extensions, state, properties, capabilities, utils, info) { - - const isWebGL2 = capabilities.isWebGL2; - const maxTextures = capabilities.maxTextures; - const maxCubemapSize = capabilities.maxCubemapSize; - const maxTextureSize = capabilities.maxTextureSize; - const maxSamples = capabilities.maxSamples; - const multisampledRTTExt = extensions.has('WEBGL_multisampled_render_to_texture') ? extensions.get('WEBGL_multisampled_render_to_texture') : null; - const supportsInvalidateFramebuffer = typeof navigator === 'undefined' ? false : /OculusBrowser/g.test(navigator.userAgent); - - const _videoTextures = new WeakMap(); - let _canvas; - - const _sources = new WeakMap(); // maps WebglTexture objects to instances of Source - - // cordova iOS (as of 5.0) still uses UIWebView, which provides OffscreenCanvas, - // also OffscreenCanvas.getContext("webgl"), but not OffscreenCanvas.getContext("2d")! - // Some implementations may only implement OffscreenCanvas partially (e.g. lacking 2d). 
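// OffscreenCanvas is only used when a '2d' context can actually be obtained,
// because resizeImage() below draws into a 2D context when scaling textures down.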
- - let useOffscreenCanvas = false; - - try { - - useOffscreenCanvas = typeof OffscreenCanvas !== 'undefined' - // eslint-disable-next-line compat/compat - && (new OffscreenCanvas(1, 1).getContext('2d')) !== null; - - } catch (err) { - - // Ignore any errors - - } - - function createCanvas(width, height) { - - // Use OffscreenCanvas when available. Specially needed in web workers - - return useOffscreenCanvas ? - // eslint-disable-next-line compat/compat - new OffscreenCanvas(width, height) : createElementNS('canvas'); - - } - - function resizeImage(image, needsPowerOfTwo, needsNewCanvas, maxSize) { - - let scale = 1; - - // handle case if texture exceeds max size - - if (image.width > maxSize || image.height > maxSize) { - - scale = maxSize / Math.max(image.width, image.height); - - } - - // only perform resize if necessary - - if (scale < 1 || needsPowerOfTwo === true) { - - // only perform resize for certain image types - - if ((typeof HTMLImageElement !== 'undefined' && image instanceof HTMLImageElement) || - (typeof HTMLCanvasElement !== 'undefined' && image instanceof HTMLCanvasElement) || - (typeof ImageBitmap !== 'undefined' && image instanceof ImageBitmap)) { - - const floor = needsPowerOfTwo ? floorPowerOfTwo : Math.floor; - - const width = floor(scale * image.width); - const height = floor(scale * image.height); - - if (_canvas === undefined) _canvas = createCanvas(width, height); - - // cube textures can't reuse the same canvas - - const canvas = needsNewCanvas ? createCanvas(width, height) : _canvas; - - canvas.width = width; - canvas.height = height; - - const context = canvas.getContext('2d'); - context.drawImage(image, 0, 0, width, height); - - console.warn('THREE.WebGLRenderer: Texture has been resized from (' + image.width + 'x' + image.height + ') to (' + width + 'x' + height + ').'); - - return canvas; - - } else { - - if ('data' in image) { - - console.warn('THREE.WebGLRenderer: Image in DataTexture is too big (' + image.width + 'x' + image.height + ').'); - - } - - return image; - - } - - } - - return image; - - } - - function isPowerOfTwo$1(image) { - - return isPowerOfTwo(image.width) && isPowerOfTwo(image.height); - - } - - function textureNeedsPowerOfTwo(texture) { - - if (isWebGL2) return false; - - return (texture.wrapS !== ClampToEdgeWrapping || texture.wrapT !== ClampToEdgeWrapping) || - (texture.minFilter !== NearestFilter && texture.minFilter !== LinearFilter); - - } - - function textureNeedsGenerateMipmaps(texture, supportsMips) { - - return texture.generateMipmaps && supportsMips && - texture.minFilter !== NearestFilter && texture.minFilter !== LinearFilter; - - } - - function generateMipmap(target) { - - _gl.generateMipmap(target); - - } - - function getInternalFormat(internalFormatName, glFormat, glType, encoding, forceLinearEncoding = false) { - - if (isWebGL2 === false) return glFormat; - - if (internalFormatName !== null) { - - if (_gl[internalFormatName] !== undefined) return _gl[internalFormatName]; - - console.warn('THREE.WebGLRenderer: Attempt to use non-existing WebGL internal format \'' + internalFormatName + '\''); - - } - - let internalFormat = glFormat; - - if (glFormat === 6403) { - - if (glType === 5126) internalFormat = 33326; - if (glType === 5131) internalFormat = 33325; - if (glType === 5121) internalFormat = 33321; - - } - - if (glFormat === 33319) { - - if (glType === 5126) internalFormat = 33328; - if (glType === 5131) internalFormat = 33327; - if (glType === 5121) internalFormat = 33323; - - } - - if (glFormat === 6408) { - - if 
(glType === 5126) internalFormat = 34836; - if (glType === 5131) internalFormat = 34842; - if (glType === 5121) internalFormat = (encoding === sRGBEncoding && forceLinearEncoding === false) ? 35907 : 32856; - if (glType === 32819) internalFormat = 32854; - if (glType === 32820) internalFormat = 32855; - - } - - if (internalFormat === 33325 || internalFormat === 33326 || - internalFormat === 33327 || internalFormat === 33328 || - internalFormat === 34842 || internalFormat === 34836) { - - extensions.get('EXT_color_buffer_float'); - - } - - return internalFormat; - - } - - function getMipLevels(texture, image, supportsMips) { - - if (textureNeedsGenerateMipmaps(texture, supportsMips) === true || (texture.isFramebufferTexture && texture.minFilter !== NearestFilter && texture.minFilter !== LinearFilter)) { - - return Math.log2(Math.max(image.width, image.height)) + 1; - - } else if (texture.mipmaps !== undefined && texture.mipmaps.length > 0) { - - // user-defined mipmaps - - return texture.mipmaps.length; - - } else if (texture.isCompressedTexture && Array.isArray(texture.image)) { - - return image.mipmaps.length; - - } else { - - // texture without mipmaps (only base level) - - return 1; - - } - - } - - // Fallback filters for non-power-of-2 textures - - function filterFallback(f) { - - if (f === NearestFilter || f === NearestMipmapNearestFilter || f === NearestMipmapLinearFilter) { - - return 9728; - - } - - return 9729; - - } - - // - - function onTextureDispose(event) { - - const texture = event.target; - - texture.removeEventListener('dispose', onTextureDispose); - - deallocateTexture(texture); - - if (texture.isVideoTexture) { - - _videoTextures.delete(texture); - - } - - } - - function onRenderTargetDispose(event) { - - const renderTarget = event.target; - - renderTarget.removeEventListener('dispose', onRenderTargetDispose); - - deallocateRenderTarget(renderTarget); - - } - - // - - function deallocateTexture(texture) { - - const textureProperties = properties.get(texture); - - if (textureProperties.__webglInit === undefined) return; - - // check if it's necessary to remove the WebGLTexture object - - const source = texture.source; - const webglTextures = _sources.get(source); - - if (webglTextures) { - - const webglTexture = webglTextures[textureProperties.__cacheKey]; - webglTexture.usedTimes--; - - // the WebGLTexture object is not used anymore, remove it - - if (webglTexture.usedTimes === 0) { - - deleteTexture(texture); - - } - - // remove the weak map entry if no WebGLTexture uses the source anymore - - if (Object.keys(webglTextures).length === 0) { - - _sources.delete(source); - - } - - } - - properties.remove(texture); - - } - - function deleteTexture(texture) { - - const textureProperties = properties.get(texture); - _gl.deleteTexture(textureProperties.__webglTexture); - - const source = texture.source; - const webglTextures = _sources.get(source); - delete webglTextures[textureProperties.__cacheKey]; - - info.memory.textures--; - - } - - function deallocateRenderTarget(renderTarget) { - - const texture = renderTarget.texture; - - const renderTargetProperties = properties.get(renderTarget); - const textureProperties = properties.get(texture); - - if (textureProperties.__webglTexture !== undefined) { - - _gl.deleteTexture(textureProperties.__webglTexture); - - info.memory.textures--; - - } - - if (renderTarget.depthTexture) { - - renderTarget.depthTexture.dispose(); - - } - - if (renderTarget.isWebGLCubeRenderTarget) { - - for (let i = 0; i < 6; i++) { - - 
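// Cube render targets keep one framebuffer (and, when present, one depth
// renderbuffer) per cube face, so all six entries are released here.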
_gl.deleteFramebuffer(renderTargetProperties.__webglFramebuffer[i]); - if (renderTargetProperties.__webglDepthbuffer) _gl.deleteRenderbuffer(renderTargetProperties.__webglDepthbuffer[i]); - - } - - } else { - - _gl.deleteFramebuffer(renderTargetProperties.__webglFramebuffer); - if (renderTargetProperties.__webglDepthbuffer) _gl.deleteRenderbuffer(renderTargetProperties.__webglDepthbuffer); - if (renderTargetProperties.__webglMultisampledFramebuffer) _gl.deleteFramebuffer(renderTargetProperties.__webglMultisampledFramebuffer); - - if (renderTargetProperties.__webglColorRenderbuffer) { - - for (let i = 0; i < renderTargetProperties.__webglColorRenderbuffer.length; i++) { - - if (renderTargetProperties.__webglColorRenderbuffer[i]) _gl.deleteRenderbuffer(renderTargetProperties.__webglColorRenderbuffer[i]); - - } - - } - - if (renderTargetProperties.__webglDepthRenderbuffer) _gl.deleteRenderbuffer(renderTargetProperties.__webglDepthRenderbuffer); - - } - - if (renderTarget.isWebGLMultipleRenderTargets) { - - for (let i = 0, il = texture.length; i < il; i++) { - - const attachmentProperties = properties.get(texture[i]); - - if (attachmentProperties.__webglTexture) { - - _gl.deleteTexture(attachmentProperties.__webglTexture); - - info.memory.textures--; - - } - - properties.remove(texture[i]); - - } - - } - - properties.remove(texture); - properties.remove(renderTarget); - - } - - // - - let textureUnits = 0; - - function resetTextureUnits() { - - textureUnits = 0; - - } - - function allocateTextureUnit() { - - const textureUnit = textureUnits; - - if (textureUnit >= maxTextures) { - - console.warn('THREE.WebGLTextures: Trying to use ' + textureUnit + ' texture units while this GPU supports only ' + maxTextures); - - } - - textureUnits += 1; - - return textureUnit; - - } - - function getTextureCacheKey(texture) { - - const array = []; - - array.push(texture.wrapS); - array.push(texture.wrapT); - array.push(texture.wrapR || 0); - array.push(texture.magFilter); - array.push(texture.minFilter); - array.push(texture.anisotropy); - array.push(texture.internalFormat); - array.push(texture.format); - array.push(texture.type); - array.push(texture.generateMipmaps); - array.push(texture.premultiplyAlpha); - array.push(texture.flipY); - array.push(texture.unpackAlignment); - array.push(texture.encoding); - - return array.join(); - - } - - // - - function setTexture2D(texture, slot) { - - const textureProperties = properties.get(texture); - - if (texture.isVideoTexture) updateVideoTexture(texture); - - if (texture.isRenderTargetTexture === false && texture.version > 0 && textureProperties.__version !== texture.version) { - - const image = texture.image; - - if (image === null) { - - console.warn('THREE.WebGLRenderer: Texture marked for update but no image data found.'); - - } else if (image.complete === false) { - - console.warn('THREE.WebGLRenderer: Texture marked for update but image is incomplete'); - - } else { - - uploadTexture(textureProperties, texture, slot); - return; - - } - - } - - state.bindTexture(3553, textureProperties.__webglTexture, 33984 + slot); - - } - - function setTexture2DArray(texture, slot) { - - const textureProperties = properties.get(texture); - - if (texture.version > 0 && textureProperties.__version !== texture.version) { - - uploadTexture(textureProperties, texture, slot); - return; - - } - - state.bindTexture(35866, textureProperties.__webglTexture, 33984 + slot); - - } - - function setTexture3D(texture, slot) { - - const textureProperties = properties.get(texture); - - if 
(texture.version > 0 && textureProperties.__version !== texture.version) { - - uploadTexture(textureProperties, texture, slot); - return; - - } - - state.bindTexture(32879, textureProperties.__webglTexture, 33984 + slot); - - } - - function setTextureCube(texture, slot) { - - const textureProperties = properties.get(texture); - - if (texture.version > 0 && textureProperties.__version !== texture.version) { - - uploadCubeTexture(textureProperties, texture, slot); - return; - - } - - state.bindTexture(34067, textureProperties.__webglTexture, 33984 + slot); - - } - - const wrappingToGL = { - [RepeatWrapping]: 10497, - [ClampToEdgeWrapping]: 33071, - [MirroredRepeatWrapping]: 33648 - }; - - const filterToGL = { - [NearestFilter]: 9728, - [NearestMipmapNearestFilter]: 9984, - [NearestMipmapLinearFilter]: 9986, - - [LinearFilter]: 9729, - [LinearMipmapNearestFilter]: 9985, - [LinearMipmapLinearFilter]: 9987 - }; - - function setTextureParameters(textureType, texture, supportsMips) { - - if (supportsMips) { - - _gl.texParameteri(textureType, 10242, wrappingToGL[texture.wrapS]); - _gl.texParameteri(textureType, 10243, wrappingToGL[texture.wrapT]); - - if (textureType === 32879 || textureType === 35866) { - - _gl.texParameteri(textureType, 32882, wrappingToGL[texture.wrapR]); - - } - - _gl.texParameteri(textureType, 10240, filterToGL[texture.magFilter]); - _gl.texParameteri(textureType, 10241, filterToGL[texture.minFilter]); - - } else { - - _gl.texParameteri(textureType, 10242, 33071); - _gl.texParameteri(textureType, 10243, 33071); - - if (textureType === 32879 || textureType === 35866) { - - _gl.texParameteri(textureType, 32882, 33071); - - } - - if (texture.wrapS !== ClampToEdgeWrapping || texture.wrapT !== ClampToEdgeWrapping) { - - console.warn('THREE.WebGLRenderer: Texture is not power of two. Texture.wrapS and Texture.wrapT should be set to THREE.ClampToEdgeWrapping.'); - - } - - _gl.texParameteri(textureType, 10240, filterFallback(texture.magFilter)); - _gl.texParameteri(textureType, 10241, filterFallback(texture.minFilter)); - - if (texture.minFilter !== NearestFilter && texture.minFilter !== LinearFilter) { - - console.warn('THREE.WebGLRenderer: Texture is not power of two. 
Texture.minFilter should be set to THREE.NearestFilter or THREE.LinearFilter.'); - - } - - } - - if (extensions.has('EXT_texture_filter_anisotropic') === true) { - - const extension = extensions.get('EXT_texture_filter_anisotropic'); - - if (texture.magFilter === NearestFilter) return; - if (texture.minFilter !== NearestMipmapLinearFilter && texture.minFilter !== LinearMipmapLinearFilter) return; - if (texture.type === FloatType && extensions.has('OES_texture_float_linear') === false) return; // verify extension for WebGL 1 and WebGL 2 - if (isWebGL2 === false && (texture.type === HalfFloatType && extensions.has('OES_texture_half_float_linear') === false)) return; // verify extension for WebGL 1 only - - if (texture.anisotropy > 1 || properties.get(texture).__currentAnisotropy) { - - _gl.texParameterf(textureType, extension.TEXTURE_MAX_ANISOTROPY_EXT, Math.min(texture.anisotropy, capabilities.getMaxAnisotropy())); - properties.get(texture).__currentAnisotropy = texture.anisotropy; - - } - - } - - } - - function initTexture(textureProperties, texture) { - - let forceUpload = false; - - if (textureProperties.__webglInit === undefined) { - - textureProperties.__webglInit = true; - - texture.addEventListener('dispose', onTextureDispose); - - } - - // create Source <-> WebGLTextures mapping if necessary - - const source = texture.source; - let webglTextures = _sources.get(source); - - if (webglTextures === undefined) { - - webglTextures = {}; - _sources.set(source, webglTextures); - - } - - // check if there is already a WebGLTexture object for the given texture parameters - - const textureCacheKey = getTextureCacheKey(texture); - - if (textureCacheKey !== textureProperties.__cacheKey) { - - // if not, create a new instance of WebGLTexture - - if (webglTextures[textureCacheKey] === undefined) { - - // create new entry - - webglTextures[textureCacheKey] = { - texture: _gl.createTexture(), - usedTimes: 0 - }; - - info.memory.textures++; - - // when a new instance of WebGLTexture was created, a texture upload is required - // even if the image contents are identical - - forceUpload = true; - - } - - webglTextures[textureCacheKey].usedTimes++; - - // every time the texture cache key changes, it's necessary to check if an instance of - // WebGLTexture can be deleted in order to avoid a memory leak. 
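// Each texture Source maps to { cacheKey: { texture: WebGLTexture, usedTimes } };
// the entry for the texture's previous cache key is decremented below and the
// underlying WebGLTexture is deleted once nothing references it.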
- - const webglTexture = webglTextures[textureProperties.__cacheKey]; - - if (webglTexture !== undefined) { - - webglTextures[textureProperties.__cacheKey].usedTimes--; - - if (webglTexture.usedTimes === 0) { - - deleteTexture(texture); - - } - - } - - // store references to cache key and WebGLTexture object - - textureProperties.__cacheKey = textureCacheKey; - textureProperties.__webglTexture = webglTextures[textureCacheKey].texture; - - } - - return forceUpload; - - } - - function uploadTexture(textureProperties, texture, slot) { - - let textureType = 3553; - - if (texture.isDataArrayTexture || texture.isCompressedArrayTexture) textureType = 35866; - if (texture.isData3DTexture) textureType = 32879; - - const forceUpload = initTexture(textureProperties, texture); - const source = texture.source; - - state.bindTexture(textureType, textureProperties.__webglTexture, 33984 + slot); - - const sourceProperties = properties.get(source); - - if (source.version !== sourceProperties.__version || forceUpload === true) { - - state.activeTexture(33984 + slot); - - _gl.pixelStorei(37440, texture.flipY); - _gl.pixelStorei(37441, texture.premultiplyAlpha); - _gl.pixelStorei(3317, texture.unpackAlignment); - _gl.pixelStorei(37443, 0); - - const needsPowerOfTwo = textureNeedsPowerOfTwo(texture) && isPowerOfTwo$1(texture.image) === false; - let image = resizeImage(texture.image, needsPowerOfTwo, false, maxTextureSize); - image = verifyColorSpace(texture, image); - - const supportsMips = isPowerOfTwo$1(image) || isWebGL2, - glFormat = utils.convert(texture.format, texture.encoding); - - let glType = utils.convert(texture.type), - glInternalFormat = getInternalFormat(texture.internalFormat, glFormat, glType, texture.encoding, texture.isVideoTexture); - - setTextureParameters(textureType, texture, supportsMips); - - let mipmap; - const mipmaps = texture.mipmaps; - - const useTexStorage = (isWebGL2 && texture.isVideoTexture !== true); - const allocateMemory = (sourceProperties.__version === undefined) || (forceUpload === true); - const levels = getMipLevels(texture, image, supportsMips); - - if (texture.isDepthTexture) { - - // populate depth texture with dummy data - - glInternalFormat = 6402; - - if (isWebGL2) { - - if (texture.type === FloatType) { - - glInternalFormat = 36012; - - } else if (texture.type === UnsignedIntType) { - - glInternalFormat = 33190; - - } else if (texture.type === UnsignedInt248Type) { - - glInternalFormat = 35056; - - } else { - - glInternalFormat = 33189; // WebGL2 requires sized internalformat for glTexImage2D - - } - - } else { - - if (texture.type === FloatType) { - - console.error('WebGLRenderer: Floating point depth texture requires WebGL2.'); - - } - - } - - // validation checks for WebGL 1 - - if (texture.format === DepthFormat && glInternalFormat === 6402) { - - // The error INVALID_OPERATION is generated by texImage2D if format and internalformat are - // DEPTH_COMPONENT and type is not UNSIGNED_SHORT or UNSIGNED_INT - // (https://www.khronos.org/registry/webgl/extensions/WEBGL_depth_texture/) - if (texture.type !== UnsignedShortType && texture.type !== UnsignedIntType) { - - console.warn('THREE.WebGLRenderer: Use UnsignedShortType or UnsignedIntType for DepthFormat DepthTexture.'); - - texture.type = UnsignedIntType; - glType = utils.convert(texture.type); - - } - - } - - if (texture.format === DepthStencilFormat && glInternalFormat === 6402) { - - // Depth stencil textures need the DEPTH_STENCIL internal format - // 
(https://www.khronos.org/registry/webgl/extensions/WEBGL_depth_texture/) - glInternalFormat = 34041; - - // The error INVALID_OPERATION is generated by texImage2D if format and internalformat are - // DEPTH_STENCIL and type is not UNSIGNED_INT_24_8_WEBGL. - // (https://www.khronos.org/registry/webgl/extensions/WEBGL_depth_texture/) - if (texture.type !== UnsignedInt248Type) { - - console.warn('THREE.WebGLRenderer: Use UnsignedInt248Type for DepthStencilFormat DepthTexture.'); - - texture.type = UnsignedInt248Type; - glType = utils.convert(texture.type); - - } - - } - - // - - if (allocateMemory) { - - if (useTexStorage) { - - state.texStorage2D(3553, 1, glInternalFormat, image.width, image.height); - - } else { - - state.texImage2D(3553, 0, glInternalFormat, image.width, image.height, 0, glFormat, glType, null); - - } - - } - - } else if (texture.isDataTexture) { - - // use manually created mipmaps if available - // if there are no manual mipmaps - // set 0 level mipmap and then use GL to generate other mipmap levels - - if (mipmaps.length > 0 && supportsMips) { - - if (useTexStorage && allocateMemory) { - - state.texStorage2D(3553, levels, glInternalFormat, mipmaps[0].width, mipmaps[0].height); - - } - - for (let i = 0, il = mipmaps.length; i < il; i++) { - - mipmap = mipmaps[i]; - - if (useTexStorage) { - - state.texSubImage2D(3553, i, 0, 0, mipmap.width, mipmap.height, glFormat, glType, mipmap.data); - - } else { - - state.texImage2D(3553, i, glInternalFormat, mipmap.width, mipmap.height, 0, glFormat, glType, mipmap.data); - - } - - } - - texture.generateMipmaps = false; - - } else { - - if (useTexStorage) { - - if (allocateMemory) { - - state.texStorage2D(3553, levels, glInternalFormat, image.width, image.height); - - } - - state.texSubImage2D(3553, 0, 0, 0, image.width, image.height, glFormat, glType, image.data); - - } else { - - state.texImage2D(3553, 0, glInternalFormat, image.width, image.height, 0, glFormat, glType, image.data); - - } - - } - - } else if (texture.isCompressedTexture) { - - if (texture.isCompressedArrayTexture) { - - if (useTexStorage && allocateMemory) { - - state.texStorage3D(35866, levels, glInternalFormat, mipmaps[0].width, mipmaps[0].height, image.depth); - - } - - for (let i = 0, il = mipmaps.length; i < il; i++) { - - mipmap = mipmaps[i]; - - if (texture.format !== RGBAFormat) { - - if (glFormat !== null) { - - if (useTexStorage) { - - state.compressedTexSubImage3D(35866, i, 0, 0, 0, mipmap.width, mipmap.height, image.depth, glFormat, mipmap.data, 0, 0); - - } else { - - state.compressedTexImage3D(35866, i, glInternalFormat, mipmap.width, mipmap.height, image.depth, 0, mipmap.data, 0, 0); - - } - - } else { - - console.warn('THREE.WebGLRenderer: Attempt to load unsupported compressed texture format in .uploadTexture()'); - - } - - } else { - - if (useTexStorage) { - - state.texSubImage3D(35866, i, 0, 0, 0, mipmap.width, mipmap.height, image.depth, glFormat, glType, mipmap.data); - - } else { - - state.texImage3D(35866, i, glInternalFormat, mipmap.width, mipmap.height, image.depth, 0, glFormat, glType, mipmap.data); - - } - - } - - } - - } else { - - if (useTexStorage && allocateMemory) { - - state.texStorage2D(3553, levels, glInternalFormat, mipmaps[0].width, mipmaps[0].height); - - } - - for (let i = 0, il = mipmaps.length; i < il; i++) { - - mipmap = mipmaps[i]; - - if (texture.format !== RGBAFormat) { - - if (glFormat !== null) { - - if (useTexStorage) { - - state.compressedTexSubImage2D(3553, i, 0, 0, mipmap.width, mipmap.height, glFormat, mipmap.data); 
- - } else { - - state.compressedTexImage2D(3553, i, glInternalFormat, mipmap.width, mipmap.height, 0, mipmap.data); - - } - - } else { - - console.warn('THREE.WebGLRenderer: Attempt to load unsupported compressed texture format in .uploadTexture()'); - - } - - } else { - - if (useTexStorage) { - - state.texSubImage2D(3553, i, 0, 0, mipmap.width, mipmap.height, glFormat, glType, mipmap.data); - - } else { - - state.texImage2D(3553, i, glInternalFormat, mipmap.width, mipmap.height, 0, glFormat, glType, mipmap.data); - - } - - } - - } - - } - - } else if (texture.isDataArrayTexture) { - - if (useTexStorage) { - - if (allocateMemory) { - - state.texStorage3D(35866, levels, glInternalFormat, image.width, image.height, image.depth); - - } - - state.texSubImage3D(35866, 0, 0, 0, 0, image.width, image.height, image.depth, glFormat, glType, image.data); - - } else { - - state.texImage3D(35866, 0, glInternalFormat, image.width, image.height, image.depth, 0, glFormat, glType, image.data); - - } - - } else if (texture.isData3DTexture) { - - if (useTexStorage) { - - if (allocateMemory) { - - state.texStorage3D(32879, levels, glInternalFormat, image.width, image.height, image.depth); - - } - - state.texSubImage3D(32879, 0, 0, 0, 0, image.width, image.height, image.depth, glFormat, glType, image.data); - - } else { - - state.texImage3D(32879, 0, glInternalFormat, image.width, image.height, image.depth, 0, glFormat, glType, image.data); - - } - - } else if (texture.isFramebufferTexture) { - - if (allocateMemory) { - - if (useTexStorage) { - - state.texStorage2D(3553, levels, glInternalFormat, image.width, image.height); - - } else { - - let width = image.width, height = image.height; - - for (let i = 0; i < levels; i++) { - - state.texImage2D(3553, i, glInternalFormat, width, height, 0, glFormat, glType, null); - - width >>= 1; - height >>= 1; - - } - - } - - } - - } else { - - // regular Texture (image, video, canvas) - - // use manually created mipmaps if available - // if there are no manual mipmaps - // set 0 level mipmap and then use GL to generate other mipmap levels - - if (mipmaps.length > 0 && supportsMips) { - - if (useTexStorage && allocateMemory) { - - state.texStorage2D(3553, levels, glInternalFormat, mipmaps[0].width, mipmaps[0].height); - - } - - for (let i = 0, il = mipmaps.length; i < il; i++) { - - mipmap = mipmaps[i]; - - if (useTexStorage) { - - state.texSubImage2D(3553, i, 0, 0, glFormat, glType, mipmap); - - } else { - - state.texImage2D(3553, i, glInternalFormat, glFormat, glType, mipmap); - - } - - } - - texture.generateMipmaps = false; - - } else { - - if (useTexStorage) { - - if (allocateMemory) { - - state.texStorage2D(3553, levels, glInternalFormat, image.width, image.height); - - } - - state.texSubImage2D(3553, 0, 0, 0, glFormat, glType, image); - - } else { - - state.texImage2D(3553, 0, glInternalFormat, glFormat, glType, image); - - } - - } - - } - - if (textureNeedsGenerateMipmaps(texture, supportsMips)) { - - generateMipmap(textureType); - - } - - sourceProperties.__version = source.version; - - if (texture.onUpdate) texture.onUpdate(texture); - - } - - textureProperties.__version = texture.version; - - } - - function uploadCubeTexture(textureProperties, texture, slot) { - - if (texture.image.length !== 6) return; - - const forceUpload = initTexture(textureProperties, texture); - const source = texture.source; - - state.bindTexture(34067, textureProperties.__webglTexture, 33984 + slot); - - const sourceProperties = properties.get(source); - - if (source.version !== 
sourceProperties.__version || forceUpload === true) { - - state.activeTexture(33984 + slot); - - _gl.pixelStorei(37440, texture.flipY); - _gl.pixelStorei(37441, texture.premultiplyAlpha); - _gl.pixelStorei(3317, texture.unpackAlignment); - _gl.pixelStorei(37443, 0); - - const isCompressed = (texture.isCompressedTexture || texture.image[0].isCompressedTexture); - const isDataTexture = (texture.image[0] && texture.image[0].isDataTexture); - - const cubeImage = []; - - for (let i = 0; i < 6; i++) { - - if (!isCompressed && !isDataTexture) { - - cubeImage[i] = resizeImage(texture.image[i], false, true, maxCubemapSize); - - } else { - - cubeImage[i] = isDataTexture ? texture.image[i].image : texture.image[i]; - - } - - cubeImage[i] = verifyColorSpace(texture, cubeImage[i]); - - } - - const image = cubeImage[0], - supportsMips = isPowerOfTwo$1(image) || isWebGL2, - glFormat = utils.convert(texture.format, texture.encoding), - glType = utils.convert(texture.type), - glInternalFormat = getInternalFormat(texture.internalFormat, glFormat, glType, texture.encoding); - - const useTexStorage = (isWebGL2 && texture.isVideoTexture !== true); - const allocateMemory = (sourceProperties.__version === undefined) || (forceUpload === true); - let levels = getMipLevels(texture, image, supportsMips); - - setTextureParameters(34067, texture, supportsMips); - - let mipmaps; - - if (isCompressed) { - - if (useTexStorage && allocateMemory) { - - state.texStorage2D(34067, levels, glInternalFormat, image.width, image.height); - - } - - for (let i = 0; i < 6; i++) { - - mipmaps = cubeImage[i].mipmaps; - - for (let j = 0; j < mipmaps.length; j++) { - - const mipmap = mipmaps[j]; - - if (texture.format !== RGBAFormat) { - - if (glFormat !== null) { - - if (useTexStorage) { - - state.compressedTexSubImage2D(34069 + i, j, 0, 0, mipmap.width, mipmap.height, glFormat, mipmap.data); - - } else { - - state.compressedTexImage2D(34069 + i, j, glInternalFormat, mipmap.width, mipmap.height, 0, mipmap.data); - - } - - } else { - - console.warn('THREE.WebGLRenderer: Attempt to load unsupported compressed texture format in .setTextureCube()'); - - } - - } else { - - if (useTexStorage) { - - state.texSubImage2D(34069 + i, j, 0, 0, mipmap.width, mipmap.height, glFormat, glType, mipmap.data); - - } else { - - state.texImage2D(34069 + i, j, glInternalFormat, mipmap.width, mipmap.height, 0, glFormat, glType, mipmap.data); - - } - - } - - } - - } - - } else { - - mipmaps = texture.mipmaps; - - if (useTexStorage && allocateMemory) { - - // TODO: Uniformly handle mipmap definitions - // Normal textures and compressed cube textures define base level + mips with their mipmap array - // Uncompressed cube textures use their mipmap array only for mips (no base level) - - if (mipmaps.length > 0) levels++; - - state.texStorage2D(34067, levels, glInternalFormat, cubeImage[0].width, cubeImage[0].height); - - } - - for (let i = 0; i < 6; i++) { - - if (isDataTexture) { - - if (useTexStorage) { - - state.texSubImage2D(34069 + i, 0, 0, 0, cubeImage[i].width, cubeImage[i].height, glFormat, glType, cubeImage[i].data); - - } else { - - state.texImage2D(34069 + i, 0, glInternalFormat, cubeImage[i].width, cubeImage[i].height, 0, glFormat, glType, cubeImage[i].data); - - } - - for (let j = 0; j < mipmaps.length; j++) { - - const mipmap = mipmaps[j]; - const mipmapImage = mipmap.image[i].image; - - if (useTexStorage) { - - state.texSubImage2D(34069 + i, j + 1, 0, 0, mipmapImage.width, mipmapImage.height, glFormat, glType, mipmapImage.data); - - } else { - - 
state.texImage2D(34069 + i, j + 1, glInternalFormat, mipmapImage.width, mipmapImage.height, 0, glFormat, glType, mipmapImage.data); - - } - - } - - } else { - - if (useTexStorage) { - - state.texSubImage2D(34069 + i, 0, 0, 0, glFormat, glType, cubeImage[i]); - - } else { - - state.texImage2D(34069 + i, 0, glInternalFormat, glFormat, glType, cubeImage[i]); - - } - - for (let j = 0; j < mipmaps.length; j++) { - - const mipmap = mipmaps[j]; - - if (useTexStorage) { - - state.texSubImage2D(34069 + i, j + 1, 0, 0, glFormat, glType, mipmap.image[i]); - - } else { - - state.texImage2D(34069 + i, j + 1, glInternalFormat, glFormat, glType, mipmap.image[i]); - - } - - } - - } - - } - - } - - if (textureNeedsGenerateMipmaps(texture, supportsMips)) { - - // We assume images for cube map have the same size. - generateMipmap(34067); - - } - - sourceProperties.__version = source.version; - - if (texture.onUpdate) texture.onUpdate(texture); - - } - - textureProperties.__version = texture.version; - - } - - // Render targets - - // Setup storage for target texture and bind it to correct framebuffer - function setupFrameBufferTexture(framebuffer, renderTarget, texture, attachment, textureTarget) { - - const glFormat = utils.convert(texture.format, texture.encoding); - const glType = utils.convert(texture.type); - const glInternalFormat = getInternalFormat(texture.internalFormat, glFormat, glType, texture.encoding); - const renderTargetProperties = properties.get(renderTarget); - - if (!renderTargetProperties.__hasExternalTextures) { - - if (textureTarget === 32879 || textureTarget === 35866) { - - state.texImage3D(textureTarget, 0, glInternalFormat, renderTarget.width, renderTarget.height, renderTarget.depth, 0, glFormat, glType, null); - - } else { - - state.texImage2D(textureTarget, 0, glInternalFormat, renderTarget.width, renderTarget.height, 0, glFormat, glType, null); - - } - - } - - state.bindFramebuffer(36160, framebuffer); - - if (useMultisampledRTT(renderTarget)) { - - multisampledRTTExt.framebufferTexture2DMultisampleEXT(36160, attachment, textureTarget, properties.get(texture).__webglTexture, 0, getRenderTargetSamples(renderTarget)); - - } else if (textureTarget === 3553 || (textureTarget >= 34069 && textureTarget <= 34074)) { // see #24753 - - _gl.framebufferTexture2D(36160, attachment, textureTarget, properties.get(texture).__webglTexture, 0); - - } - - state.bindFramebuffer(36160, null); - - } - - - // Setup storage for internal depth/stencil buffers and bind to correct framebuffer - function setupRenderBufferStorage(renderbuffer, renderTarget, isMultisample) { - - _gl.bindRenderbuffer(36161, renderbuffer); - - if (renderTarget.depthBuffer && !renderTarget.stencilBuffer) { - - let glInternalFormat = 33189; - - if (isMultisample || useMultisampledRTT(renderTarget)) { - - const depthTexture = renderTarget.depthTexture; - - if (depthTexture && depthTexture.isDepthTexture) { - - if (depthTexture.type === FloatType) { - - glInternalFormat = 36012; - - } else if (depthTexture.type === UnsignedIntType) { - - glInternalFormat = 33190; - - } - - } - - const samples = getRenderTargetSamples(renderTarget); - - if (useMultisampledRTT(renderTarget)) { - - multisampledRTTExt.renderbufferStorageMultisampleEXT(36161, samples, glInternalFormat, renderTarget.width, renderTarget.height); - - } else { - - _gl.renderbufferStorageMultisample(36161, samples, glInternalFormat, renderTarget.width, renderTarget.height); - - } - - } else { - - _gl.renderbufferStorage(36161, glInternalFormat, renderTarget.width, 
renderTarget.height); - - } - - _gl.framebufferRenderbuffer(36160, 36096, 36161, renderbuffer); - - } else if (renderTarget.depthBuffer && renderTarget.stencilBuffer) { - - const samples = getRenderTargetSamples(renderTarget); - - if (isMultisample && useMultisampledRTT(renderTarget) === false) { - - _gl.renderbufferStorageMultisample(36161, samples, 35056, renderTarget.width, renderTarget.height); - - } else if (useMultisampledRTT(renderTarget)) { - - multisampledRTTExt.renderbufferStorageMultisampleEXT(36161, samples, 35056, renderTarget.width, renderTarget.height); - - } else { - - _gl.renderbufferStorage(36161, 34041, renderTarget.width, renderTarget.height); - - } - - - _gl.framebufferRenderbuffer(36160, 33306, 36161, renderbuffer); - - } else { - - const textures = renderTarget.isWebGLMultipleRenderTargets === true ? renderTarget.texture : [renderTarget.texture]; - - for (let i = 0; i < textures.length; i++) { - - const texture = textures[i]; - - const glFormat = utils.convert(texture.format, texture.encoding); - const glType = utils.convert(texture.type); - const glInternalFormat = getInternalFormat(texture.internalFormat, glFormat, glType, texture.encoding); - const samples = getRenderTargetSamples(renderTarget); - - if (isMultisample && useMultisampledRTT(renderTarget) === false) { - - _gl.renderbufferStorageMultisample(36161, samples, glInternalFormat, renderTarget.width, renderTarget.height); - - } else if (useMultisampledRTT(renderTarget)) { - - multisampledRTTExt.renderbufferStorageMultisampleEXT(36161, samples, glInternalFormat, renderTarget.width, renderTarget.height); - - } else { - - _gl.renderbufferStorage(36161, glInternalFormat, renderTarget.width, renderTarget.height); - - } - - } - - } - - _gl.bindRenderbuffer(36161, null); - - } - - // Setup resources for a Depth Texture for a FBO (needs an extension) - function setupDepthTexture(framebuffer, renderTarget) { - - const isCube = (renderTarget && renderTarget.isWebGLCubeRenderTarget); - if (isCube) throw new Error('Depth Texture with cube render targets is not supported'); - - state.bindFramebuffer(36160, framebuffer); - - if (!(renderTarget.depthTexture && renderTarget.depthTexture.isDepthTexture)) { - - throw new Error('renderTarget.depthTexture must be an instance of THREE.DepthTexture'); - - } - - // upload an empty depth texture with framebuffer size - if (!properties.get(renderTarget.depthTexture).__webglTexture || - renderTarget.depthTexture.image.width !== renderTarget.width || - renderTarget.depthTexture.image.height !== renderTarget.height) { - - renderTarget.depthTexture.image.width = renderTarget.width; - renderTarget.depthTexture.image.height = renderTarget.height; - renderTarget.depthTexture.needsUpdate = true; - - } - - setTexture2D(renderTarget.depthTexture, 0); - - const webglDepthTexture = properties.get(renderTarget.depthTexture).__webglTexture; - const samples = getRenderTargetSamples(renderTarget); - - if (renderTarget.depthTexture.format === DepthFormat) { - - if (useMultisampledRTT(renderTarget)) { - - multisampledRTTExt.framebufferTexture2DMultisampleEXT(36160, 36096, 3553, webglDepthTexture, 0, samples); - - } else { - - _gl.framebufferTexture2D(36160, 36096, 3553, webglDepthTexture, 0); - - } - - } else if (renderTarget.depthTexture.format === DepthStencilFormat) { - - if (useMultisampledRTT(renderTarget)) { - - multisampledRTTExt.framebufferTexture2DMultisampleEXT(36160, 33306, 3553, webglDepthTexture, 0, samples); - - } else { - - _gl.framebufferTexture2D(36160, 33306, 3553, 
webglDepthTexture, 0); - - } - - } else { - - throw new Error('Unknown depthTexture format'); - - } - - } - - // Setup GL resources for a non-texture depth buffer - function setupDepthRenderbuffer(renderTarget) { - - const renderTargetProperties = properties.get(renderTarget); - const isCube = (renderTarget.isWebGLCubeRenderTarget === true); - - if (renderTarget.depthTexture && !renderTargetProperties.__autoAllocateDepthBuffer) { - - if (isCube) throw new Error('target.depthTexture not supported in Cube render targets'); - - setupDepthTexture(renderTargetProperties.__webglFramebuffer, renderTarget); - - } else { - - if (isCube) { - - renderTargetProperties.__webglDepthbuffer = []; - - for (let i = 0; i < 6; i++) { - - state.bindFramebuffer(36160, renderTargetProperties.__webglFramebuffer[i]); - renderTargetProperties.__webglDepthbuffer[i] = _gl.createRenderbuffer(); - setupRenderBufferStorage(renderTargetProperties.__webglDepthbuffer[i], renderTarget, false); - - } - - } else { - - state.bindFramebuffer(36160, renderTargetProperties.__webglFramebuffer); - renderTargetProperties.__webglDepthbuffer = _gl.createRenderbuffer(); - setupRenderBufferStorage(renderTargetProperties.__webglDepthbuffer, renderTarget, false); - - } - - } - - state.bindFramebuffer(36160, null); - - } - - // rebind framebuffer with external textures - function rebindTextures(renderTarget, colorTexture, depthTexture) { - - const renderTargetProperties = properties.get(renderTarget); - - if (colorTexture !== undefined) { - - setupFrameBufferTexture(renderTargetProperties.__webglFramebuffer, renderTarget, renderTarget.texture, 36064, 3553); - - } - - if (depthTexture !== undefined) { - - setupDepthRenderbuffer(renderTarget); - - } - - } - - // Set up GL resources for the render target - function setupRenderTarget(renderTarget) { - - const texture = renderTarget.texture; - - const renderTargetProperties = properties.get(renderTarget); - const textureProperties = properties.get(texture); - - renderTarget.addEventListener('dispose', onRenderTargetDispose); - - if (renderTarget.isWebGLMultipleRenderTargets !== true) { - - if (textureProperties.__webglTexture === undefined) { - - textureProperties.__webglTexture = _gl.createTexture(); - - } - - textureProperties.__version = texture.version; - info.memory.textures++; - - } - - const isCube = (renderTarget.isWebGLCubeRenderTarget === true); - const isMultipleRenderTargets = (renderTarget.isWebGLMultipleRenderTargets === true); - const supportsMips = isPowerOfTwo$1(renderTarget) || isWebGL2; - - // Setup framebuffer - - if (isCube) { - - renderTargetProperties.__webglFramebuffer = []; - - for (let i = 0; i < 6; i++) { - - renderTargetProperties.__webglFramebuffer[i] = _gl.createFramebuffer(); - - } - - } else { - - renderTargetProperties.__webglFramebuffer = _gl.createFramebuffer(); - - if (isMultipleRenderTargets) { - - if (capabilities.drawBuffers) { - - const textures = renderTarget.texture; - - for (let i = 0, il = textures.length; i < il; i++) { - - const attachmentProperties = properties.get(textures[i]); - - if (attachmentProperties.__webglTexture === undefined) { - - attachmentProperties.__webglTexture = _gl.createTexture(); - - info.memory.textures++; - - } - - } - - } else { - - console.warn('THREE.WebGLRenderer: WebGLMultipleRenderTargets can only be used with WebGL2 or WEBGL_draw_buffers extension.'); - - } - - } - - if ((isWebGL2 && renderTarget.samples > 0) && useMultisampledRTT(renderTarget) === false) { - - const textures = isMultipleRenderTargets ? 
texture : [texture]; - - renderTargetProperties.__webglMultisampledFramebuffer = _gl.createFramebuffer(); - renderTargetProperties.__webglColorRenderbuffer = []; - - state.bindFramebuffer(36160, renderTargetProperties.__webglMultisampledFramebuffer); - - for (let i = 0; i < textures.length; i++) { - - const texture = textures[i]; - renderTargetProperties.__webglColorRenderbuffer[i] = _gl.createRenderbuffer(); - - _gl.bindRenderbuffer(36161, renderTargetProperties.__webglColorRenderbuffer[i]); - - const glFormat = utils.convert(texture.format, texture.encoding); - const glType = utils.convert(texture.type); - const glInternalFormat = getInternalFormat(texture.internalFormat, glFormat, glType, texture.encoding, renderTarget.isXRRenderTarget === true); - const samples = getRenderTargetSamples(renderTarget); - _gl.renderbufferStorageMultisample(36161, samples, glInternalFormat, renderTarget.width, renderTarget.height); - - _gl.framebufferRenderbuffer(36160, 36064 + i, 36161, renderTargetProperties.__webglColorRenderbuffer[i]); - - } - - _gl.bindRenderbuffer(36161, null); - - if (renderTarget.depthBuffer) { - - renderTargetProperties.__webglDepthRenderbuffer = _gl.createRenderbuffer(); - setupRenderBufferStorage(renderTargetProperties.__webglDepthRenderbuffer, renderTarget, true); - - } - - state.bindFramebuffer(36160, null); - - } - - } - - // Setup color buffer - - if (isCube) { - - state.bindTexture(34067, textureProperties.__webglTexture); - setTextureParameters(34067, texture, supportsMips); - - for (let i = 0; i < 6; i++) { - - setupFrameBufferTexture(renderTargetProperties.__webglFramebuffer[i], renderTarget, texture, 36064, 34069 + i); - - } - - if (textureNeedsGenerateMipmaps(texture, supportsMips)) { - - generateMipmap(34067); - - } - - state.unbindTexture(); - - } else if (isMultipleRenderTargets) { - - const textures = renderTarget.texture; - - for (let i = 0, il = textures.length; i < il; i++) { - - const attachment = textures[i]; - const attachmentProperties = properties.get(attachment); - - state.bindTexture(3553, attachmentProperties.__webglTexture); - setTextureParameters(3553, attachment, supportsMips); - setupFrameBufferTexture(renderTargetProperties.__webglFramebuffer, renderTarget, attachment, 36064 + i, 3553); - - if (textureNeedsGenerateMipmaps(attachment, supportsMips)) { - - generateMipmap(3553); - - } - - } - - state.unbindTexture(); - - } else { - - let glTextureType = 3553; - - if (renderTarget.isWebGL3DRenderTarget || renderTarget.isWebGLArrayRenderTarget) { - - if (isWebGL2) { - - glTextureType = renderTarget.isWebGL3DRenderTarget ? 32879 : 35866; - - } else { - - console.error('THREE.WebGLTextures: THREE.Data3DTexture and THREE.DataArrayTexture only supported with WebGL2.'); - - } - - } - - state.bindTexture(glTextureType, textureProperties.__webglTexture); - setTextureParameters(glTextureType, texture, supportsMips); - setupFrameBufferTexture(renderTargetProperties.__webglFramebuffer, renderTarget, texture, 36064, glTextureType); - - if (textureNeedsGenerateMipmaps(texture, supportsMips)) { - - generateMipmap(glTextureType); - - } - - state.unbindTexture(); - - } - - // Setup depth and stencil buffers - - if (renderTarget.depthBuffer) { - - setupDepthRenderbuffer(renderTarget); - - } - - } - - function updateRenderTargetMipmap(renderTarget) { - - const supportsMips = isPowerOfTwo$1(renderTarget) || isWebGL2; - - const textures = renderTarget.isWebGLMultipleRenderTargets === true ? 
renderTarget.texture : [renderTarget.texture]; - - for (let i = 0, il = textures.length; i < il; i++) { - - const texture = textures[i]; - - if (textureNeedsGenerateMipmaps(texture, supportsMips)) { - - const target = renderTarget.isWebGLCubeRenderTarget ? 34067 : 3553; - const webglTexture = properties.get(texture).__webglTexture; - - state.bindTexture(target, webglTexture); - generateMipmap(target); - state.unbindTexture(); - - } - - } - - } - - function updateMultisampleRenderTarget(renderTarget) { - - if ((isWebGL2 && renderTarget.samples > 0) && useMultisampledRTT(renderTarget) === false) { - - const textures = renderTarget.isWebGLMultipleRenderTargets ? renderTarget.texture : [renderTarget.texture]; - const width = renderTarget.width; - const height = renderTarget.height; - let mask = 16384; - const invalidationArray = []; - const depthStyle = renderTarget.stencilBuffer ? 33306 : 36096; - const renderTargetProperties = properties.get(renderTarget); - const isMultipleRenderTargets = (renderTarget.isWebGLMultipleRenderTargets === true); - - // If MRT we need to remove FBO attachments - if (isMultipleRenderTargets) { - - for (let i = 0; i < textures.length; i++) { - - state.bindFramebuffer(36160, renderTargetProperties.__webglMultisampledFramebuffer); - _gl.framebufferRenderbuffer(36160, 36064 + i, 36161, null); - - state.bindFramebuffer(36160, renderTargetProperties.__webglFramebuffer); - _gl.framebufferTexture2D(36009, 36064 + i, 3553, null, 0); - - } - - } - - state.bindFramebuffer(36008, renderTargetProperties.__webglMultisampledFramebuffer); - state.bindFramebuffer(36009, renderTargetProperties.__webglFramebuffer); - - for (let i = 0; i < textures.length; i++) { - - invalidationArray.push(36064 + i); - - if (renderTarget.depthBuffer) { - - invalidationArray.push(depthStyle); - - } - - const ignoreDepthValues = (renderTargetProperties.__ignoreDepthValues !== undefined) ? 
renderTargetProperties.__ignoreDepthValues : false; - - if (ignoreDepthValues === false) { - - if (renderTarget.depthBuffer) mask |= 256; - if (renderTarget.stencilBuffer) mask |= 1024; - - } - - if (isMultipleRenderTargets) { - - _gl.framebufferRenderbuffer(36008, 36064, 36161, renderTargetProperties.__webglColorRenderbuffer[i]); - - } - - if (ignoreDepthValues === true) { - - _gl.invalidateFramebuffer(36008, [depthStyle]); - _gl.invalidateFramebuffer(36009, [depthStyle]); - - } - - if (isMultipleRenderTargets) { - - const webglTexture = properties.get(textures[i]).__webglTexture; - _gl.framebufferTexture2D(36009, 36064, 3553, webglTexture, 0); - - } - - _gl.blitFramebuffer(0, 0, width, height, 0, 0, width, height, mask, 9728); - - if (supportsInvalidateFramebuffer) { - - _gl.invalidateFramebuffer(36008, invalidationArray); - - } - - - } - - state.bindFramebuffer(36008, null); - state.bindFramebuffer(36009, null); - - // If MRT since pre-blit we removed the FBO we need to reconstruct the attachments - if (isMultipleRenderTargets) { - - for (let i = 0; i < textures.length; i++) { - - state.bindFramebuffer(36160, renderTargetProperties.__webglMultisampledFramebuffer); - _gl.framebufferRenderbuffer(36160, 36064 + i, 36161, renderTargetProperties.__webglColorRenderbuffer[i]); - - const webglTexture = properties.get(textures[i]).__webglTexture; - - state.bindFramebuffer(36160, renderTargetProperties.__webglFramebuffer); - _gl.framebufferTexture2D(36009, 36064 + i, 3553, webglTexture, 0); - - } - - } - - state.bindFramebuffer(36009, renderTargetProperties.__webglMultisampledFramebuffer); - - } - - } - - function getRenderTargetSamples(renderTarget) { - - return Math.min(maxSamples, renderTarget.samples); - - } - - function useMultisampledRTT(renderTarget) { - - const renderTargetProperties = properties.get(renderTarget); - - return isWebGL2 && renderTarget.samples > 0 && extensions.has('WEBGL_multisampled_render_to_texture') === true && renderTargetProperties.__useRenderToTexture !== false; - - } - - function updateVideoTexture(texture) { - - const frame = info.render.frame; - - // Check the last frame we updated the VideoTexture - - if (_videoTextures.get(texture) !== frame) { - - _videoTextures.set(texture, frame); - texture.update(); - - } - - } - - function verifyColorSpace(texture, image) { - - const encoding = texture.encoding; - const format = texture.format; - const type = texture.type; - - if (texture.isCompressedTexture === true || texture.isVideoTexture === true || texture.format === _SRGBAFormat) return image; - - if (encoding !== LinearEncoding) { - - // sRGB - - if (encoding === sRGBEncoding) { - - if (isWebGL2 === false) { - - // in WebGL 1, try to use EXT_sRGB extension and unsized formats - - if (extensions.has('EXT_sRGB') === true && format === RGBAFormat) { - - texture.format = _SRGBAFormat; - - // it's not possible to generate mips in WebGL 1 with this extension - - texture.minFilter = LinearFilter; - texture.generateMipmaps = false; - - } else { - - // slow fallback (CPU decode) - - image = ImageUtils.sRGBToLinear(image); - - } - - } else { - - // in WebGL 2 uncompressed textures can only be sRGB encoded if they have the RGBA8 format - - if (format !== RGBAFormat || type !== UnsignedByteType) { - - console.warn('THREE.WebGLTextures: sRGB encoded textures have to use RGBAFormat and UnsignedByteType.'); - - } - - } - - } else { - - console.error('THREE.WebGLTextures: Unsupported texture encoding:', encoding); - - } - - } - - return image; - - } - - // - - 
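	// Editor's note: the assignments below form the public surface of this texture manager.
	// What follows is a minimal, illustrative sketch (not part of the original file) of how a
	// caller typically drives that surface once per frame; the names `textures` and
	// `samplerBindings` are assumptions made for the example only.
	function exampleBindSamplerUniforms(textures, samplerBindings) {

		// Texture units are handed out fresh each frame before sampler uniforms are uploaded.
		textures.resetTextureUnits();

		for (const binding of samplerBindings) {

			// allocateTextureUnit() returns the next free unit index (0, 1, 2, ...).
			const unit = textures.allocateTextureUnit();

			// setTexture2D() re-uploads the image only when texture.version has changed,
			// then binds the underlying WebGLTexture to gl.TEXTURE0 + unit.
			// Cube maps would go through setTextureCube() instead.
			textures.setTexture2D(binding.texture, unit);

			// The sampler uniform simply receives the unit index.
			binding.setValue(unit);

		}

	}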
this.allocateTextureUnit = allocateTextureUnit; - this.resetTextureUnits = resetTextureUnits; - - this.setTexture2D = setTexture2D; - this.setTexture2DArray = setTexture2DArray; - this.setTexture3D = setTexture3D; - this.setTextureCube = setTextureCube; - this.rebindTextures = rebindTextures; - this.setupRenderTarget = setupRenderTarget; - this.updateRenderTargetMipmap = updateRenderTargetMipmap; - this.updateMultisampleRenderTarget = updateMultisampleRenderTarget; - this.setupDepthRenderbuffer = setupDepthRenderbuffer; - this.setupFrameBufferTexture = setupFrameBufferTexture; - this.useMultisampledRTT = useMultisampledRTT; - -} - -function WebGLUtils(gl, extensions, capabilities) { - - const isWebGL2 = capabilities.isWebGL2; - - function convert(p, encoding = null) { - - let extension; - - if (p === UnsignedByteType) return 5121; - if (p === UnsignedShort4444Type) return 32819; - if (p === UnsignedShort5551Type) return 32820; - - if (p === ByteType) return 5120; - if (p === ShortType) return 5122; - if (p === UnsignedShortType) return 5123; - if (p === IntType) return 5124; - if (p === UnsignedIntType) return 5125; - if (p === FloatType) return 5126; - - if (p === HalfFloatType) { - - if (isWebGL2) return 5131; - - extension = extensions.get('OES_texture_half_float'); - - if (extension !== null) { - - return extension.HALF_FLOAT_OES; - - } else { - - return null; - - } - - } - - if (p === AlphaFormat) return 6406; - if (p === RGBAFormat) return 6408; - if (p === LuminanceFormat) return 6409; - if (p === LuminanceAlphaFormat) return 6410; - if (p === DepthFormat) return 6402; - if (p === DepthStencilFormat) return 34041; - - // WebGL 1 sRGB fallback - - if (p === _SRGBAFormat) { - - extension = extensions.get('EXT_sRGB'); - - if (extension !== null) { - - return extension.SRGB_ALPHA_EXT; - - } else { - - return null; - - } - - } - - // WebGL2 formats. 
- - if (p === RedFormat) return 6403; - if (p === RedIntegerFormat) return 36244; - if (p === RGFormat) return 33319; - if (p === RGIntegerFormat) return 33320; - if (p === RGBAIntegerFormat) return 36249; - - // S3TC - - if (p === RGB_S3TC_DXT1_Format || p === RGBA_S3TC_DXT1_Format || p === RGBA_S3TC_DXT3_Format || p === RGBA_S3TC_DXT5_Format) { - - if (encoding === sRGBEncoding) { - - extension = extensions.get('WEBGL_compressed_texture_s3tc_srgb'); - - if (extension !== null) { - - if (p === RGB_S3TC_DXT1_Format) return extension.COMPRESSED_SRGB_S3TC_DXT1_EXT; - if (p === RGBA_S3TC_DXT1_Format) return extension.COMPRESSED_SRGB_ALPHA_S3TC_DXT1_EXT; - if (p === RGBA_S3TC_DXT3_Format) return extension.COMPRESSED_SRGB_ALPHA_S3TC_DXT3_EXT; - if (p === RGBA_S3TC_DXT5_Format) return extension.COMPRESSED_SRGB_ALPHA_S3TC_DXT5_EXT; - - } else { - - return null; - - } - - } else { - - extension = extensions.get('WEBGL_compressed_texture_s3tc'); - - if (extension !== null) { - - if (p === RGB_S3TC_DXT1_Format) return extension.COMPRESSED_RGB_S3TC_DXT1_EXT; - if (p === RGBA_S3TC_DXT1_Format) return extension.COMPRESSED_RGBA_S3TC_DXT1_EXT; - if (p === RGBA_S3TC_DXT3_Format) return extension.COMPRESSED_RGBA_S3TC_DXT3_EXT; - if (p === RGBA_S3TC_DXT5_Format) return extension.COMPRESSED_RGBA_S3TC_DXT5_EXT; - - } else { - - return null; - - } - - } - - } - - // PVRTC - - if (p === RGB_PVRTC_4BPPV1_Format || p === RGB_PVRTC_2BPPV1_Format || p === RGBA_PVRTC_4BPPV1_Format || p === RGBA_PVRTC_2BPPV1_Format) { - - extension = extensions.get('WEBGL_compressed_texture_pvrtc'); - - if (extension !== null) { - - if (p === RGB_PVRTC_4BPPV1_Format) return extension.COMPRESSED_RGB_PVRTC_4BPPV1_IMG; - if (p === RGB_PVRTC_2BPPV1_Format) return extension.COMPRESSED_RGB_PVRTC_2BPPV1_IMG; - if (p === RGBA_PVRTC_4BPPV1_Format) return extension.COMPRESSED_RGBA_PVRTC_4BPPV1_IMG; - if (p === RGBA_PVRTC_2BPPV1_Format) return extension.COMPRESSED_RGBA_PVRTC_2BPPV1_IMG; - - } else { - - return null; - - } - - } - - // ETC1 - - if (p === RGB_ETC1_Format) { - - extension = extensions.get('WEBGL_compressed_texture_etc1'); - - if (extension !== null) { - - return extension.COMPRESSED_RGB_ETC1_WEBGL; - - } else { - - return null; - - } - - } - - // ETC2 - - if (p === RGB_ETC2_Format || p === RGBA_ETC2_EAC_Format) { - - extension = extensions.get('WEBGL_compressed_texture_etc'); - - if (extension !== null) { - - if (p === RGB_ETC2_Format) return (encoding === sRGBEncoding) ? extension.COMPRESSED_SRGB8_ETC2 : extension.COMPRESSED_RGB8_ETC2; - if (p === RGBA_ETC2_EAC_Format) return (encoding === sRGBEncoding) ? extension.COMPRESSED_SRGB8_ALPHA8_ETC2_EAC : extension.COMPRESSED_RGBA8_ETC2_EAC; - - } else { - - return null; - - } - - } - - // ASTC - - if (p === RGBA_ASTC_4x4_Format || p === RGBA_ASTC_5x4_Format || p === RGBA_ASTC_5x5_Format || - p === RGBA_ASTC_6x5_Format || p === RGBA_ASTC_6x6_Format || p === RGBA_ASTC_8x5_Format || - p === RGBA_ASTC_8x6_Format || p === RGBA_ASTC_8x8_Format || p === RGBA_ASTC_10x5_Format || - p === RGBA_ASTC_10x6_Format || p === RGBA_ASTC_10x8_Format || p === RGBA_ASTC_10x10_Format || - p === RGBA_ASTC_12x10_Format || p === RGBA_ASTC_12x12_Format) { - - extension = extensions.get('WEBGL_compressed_texture_astc'); - - if (extension !== null) { - - if (p === RGBA_ASTC_4x4_Format) return (encoding === sRGBEncoding) ? extension.COMPRESSED_SRGB8_ALPHA8_ASTC_4x4_KHR : extension.COMPRESSED_RGBA_ASTC_4x4_KHR; - if (p === RGBA_ASTC_5x4_Format) return (encoding === sRGBEncoding) ? 
extension.COMPRESSED_SRGB8_ALPHA8_ASTC_5x4_KHR : extension.COMPRESSED_RGBA_ASTC_5x4_KHR; - if (p === RGBA_ASTC_5x5_Format) return (encoding === sRGBEncoding) ? extension.COMPRESSED_SRGB8_ALPHA8_ASTC_5x5_KHR : extension.COMPRESSED_RGBA_ASTC_5x5_KHR; - if (p === RGBA_ASTC_6x5_Format) return (encoding === sRGBEncoding) ? extension.COMPRESSED_SRGB8_ALPHA8_ASTC_6x5_KHR : extension.COMPRESSED_RGBA_ASTC_6x5_KHR; - if (p === RGBA_ASTC_6x6_Format) return (encoding === sRGBEncoding) ? extension.COMPRESSED_SRGB8_ALPHA8_ASTC_6x6_KHR : extension.COMPRESSED_RGBA_ASTC_6x6_KHR; - if (p === RGBA_ASTC_8x5_Format) return (encoding === sRGBEncoding) ? extension.COMPRESSED_SRGB8_ALPHA8_ASTC_8x5_KHR : extension.COMPRESSED_RGBA_ASTC_8x5_KHR; - if (p === RGBA_ASTC_8x6_Format) return (encoding === sRGBEncoding) ? extension.COMPRESSED_SRGB8_ALPHA8_ASTC_8x6_KHR : extension.COMPRESSED_RGBA_ASTC_8x6_KHR; - if (p === RGBA_ASTC_8x8_Format) return (encoding === sRGBEncoding) ? extension.COMPRESSED_SRGB8_ALPHA8_ASTC_8x8_KHR : extension.COMPRESSED_RGBA_ASTC_8x8_KHR; - if (p === RGBA_ASTC_10x5_Format) return (encoding === sRGBEncoding) ? extension.COMPRESSED_SRGB8_ALPHA8_ASTC_10x5_KHR : extension.COMPRESSED_RGBA_ASTC_10x5_KHR; - if (p === RGBA_ASTC_10x6_Format) return (encoding === sRGBEncoding) ? extension.COMPRESSED_SRGB8_ALPHA8_ASTC_10x6_KHR : extension.COMPRESSED_RGBA_ASTC_10x6_KHR; - if (p === RGBA_ASTC_10x8_Format) return (encoding === sRGBEncoding) ? extension.COMPRESSED_SRGB8_ALPHA8_ASTC_10x8_KHR : extension.COMPRESSED_RGBA_ASTC_10x8_KHR; - if (p === RGBA_ASTC_10x10_Format) return (encoding === sRGBEncoding) ? extension.COMPRESSED_SRGB8_ALPHA8_ASTC_10x10_KHR : extension.COMPRESSED_RGBA_ASTC_10x10_KHR; - if (p === RGBA_ASTC_12x10_Format) return (encoding === sRGBEncoding) ? extension.COMPRESSED_SRGB8_ALPHA8_ASTC_12x10_KHR : extension.COMPRESSED_RGBA_ASTC_12x10_KHR; - if (p === RGBA_ASTC_12x12_Format) return (encoding === sRGBEncoding) ? extension.COMPRESSED_SRGB8_ALPHA8_ASTC_12x12_KHR : extension.COMPRESSED_RGBA_ASTC_12x12_KHR; - - } else { - - return null; - - } - - } - - // BPTC - - if (p === RGBA_BPTC_Format) { - - extension = extensions.get('EXT_texture_compression_bptc'); - - if (extension !== null) { - - if (p === RGBA_BPTC_Format) return (encoding === sRGBEncoding) ? extension.COMPRESSED_SRGB_ALPHA_BPTC_UNORM_EXT : extension.COMPRESSED_RGBA_BPTC_UNORM_EXT; - - } else { - - return null; - - } - - } - - // RGTC - - if (p === RED_RGTC1_Format || p === SIGNED_RED_RGTC1_Format || p === RED_GREEN_RGTC2_Format || p === SIGNED_RED_GREEN_RGTC2_Format) { - - extension = extensions.get('EXT_texture_compression_rgtc'); - - if (extension !== null) { - - if (p === RGBA_BPTC_Format) return extension.COMPRESSED_RED_RGTC1_EXT; - if (p === SIGNED_RED_RGTC1_Format) return extension.COMPRESSED_SIGNED_RED_RGTC1_EXT; - if (p === RED_GREEN_RGTC2_Format) return extension.COMPRESSED_RED_GREEN_RGTC2_EXT; - if (p === SIGNED_RED_GREEN_RGTC2_Format) return extension.COMPRESSED_SIGNED_RED_GREEN_RGTC2_EXT; - - } else { - - return null; - - } - - } - - // - - if (p === UnsignedInt248Type) { - - if (isWebGL2) return 34042; - - extension = extensions.get('WEBGL_depth_texture'); - - if (extension !== null) { - - return extension.UNSIGNED_INT_24_8_WEBGL; - - } else { - - return null; - - } - - } - - // if "p" can't be resolved, assume the user defines a WebGL constant as a string (fallback/workaround for packed RGB formats) - - return (gl[p] !== undefined) ? 
gl[p] : null; - - } - - return { convert: convert }; - -} - -class ArrayCamera extends PerspectiveCamera { - - constructor(array = []) { - - super(); - - this.isArrayCamera = true; - - this.cameras = array; - - } - -} - -class Group extends Object3D { - - constructor() { - - super(); - - this.isGroup = true; - - this.type = 'Group'; - - } - -} - -const _moveEvent = { type: 'move' }; - -class WebXRController { - - constructor() { - - this._targetRay = null; - this._grip = null; - this._hand = null; - - } - - getHandSpace() { - - if (this._hand === null) { - - this._hand = new Group(); - this._hand.matrixAutoUpdate = false; - this._hand.visible = false; - - this._hand.joints = {}; - this._hand.inputState = { pinching: false }; - - } - - return this._hand; - - } - - getTargetRaySpace() { - - if (this._targetRay === null) { - - this._targetRay = new Group(); - this._targetRay.matrixAutoUpdate = false; - this._targetRay.visible = false; - this._targetRay.hasLinearVelocity = false; - this._targetRay.linearVelocity = new Vector3(); - this._targetRay.hasAngularVelocity = false; - this._targetRay.angularVelocity = new Vector3(); - - } - - return this._targetRay; - - } - - getGripSpace() { - - if (this._grip === null) { - - this._grip = new Group(); - this._grip.matrixAutoUpdate = false; - this._grip.visible = false; - this._grip.hasLinearVelocity = false; - this._grip.linearVelocity = new Vector3(); - this._grip.hasAngularVelocity = false; - this._grip.angularVelocity = new Vector3(); - - } - - return this._grip; - - } - - dispatchEvent(event) { - - if (this._targetRay !== null) { - - this._targetRay.dispatchEvent(event); - - } - - if (this._grip !== null) { - - this._grip.dispatchEvent(event); - - } - - if (this._hand !== null) { - - this._hand.dispatchEvent(event); - - } - - return this; - - } - - connect(inputSource) { - - if (inputSource && inputSource.hand) { - - const hand = this._hand; - - if (hand) { - - for (const inputjoint of inputSource.hand.values()) { - - // Initialize hand with joints when connected - this._getHandJoint(hand, inputjoint); - - } - - } - - } - - this.dispatchEvent({ type: 'connected', data: inputSource }); - - return this; - - } - - disconnect(inputSource) { - - this.dispatchEvent({ type: 'disconnected', data: inputSource }); - - if (this._targetRay !== null) { - - this._targetRay.visible = false; - - } - - if (this._grip !== null) { - - this._grip.visible = false; - - } - - if (this._hand !== null) { - - this._hand.visible = false; - - } - - return this; - - } - - update(inputSource, frame, referenceSpace) { - - let inputPose = null; - let gripPose = null; - let handPose = null; - - const targetRay = this._targetRay; - const grip = this._grip; - const hand = this._hand; - - if (inputSource && frame.session.visibilityState !== 'visible-blurred') { - - if (hand && inputSource.hand) { - - handPose = true; - - for (const inputjoint of inputSource.hand.values()) { - - // Update the joints groups with the XRJoint poses - const jointPose = frame.getJointPose(inputjoint, referenceSpace); - - // The transform of this joint will be updated with the joint pose on each frame - const joint = this._getHandJoint(hand, inputjoint); - - if (jointPose !== null) { - - joint.matrix.fromArray(jointPose.transform.matrix); - joint.matrix.decompose(joint.position, joint.rotation, joint.scale); - joint.jointRadius = jointPose.radius; - - } - - joint.visible = jointPose !== null; - - } - - // Custom events - - // Check pinchz - const indexTip = hand.joints['index-finger-tip']; - const thumbTip 
= hand.joints['thumb-tip']; - const distance = indexTip.position.distanceTo(thumbTip.position); - - const distanceToPinch = 0.02; - const threshold = 0.005; - - if (hand.inputState.pinching && distance > distanceToPinch + threshold) { - - hand.inputState.pinching = false; - this.dispatchEvent({ - type: 'pinchend', - handedness: inputSource.handedness, - target: this - }); - - } else if (!hand.inputState.pinching && distance <= distanceToPinch - threshold) { - - hand.inputState.pinching = true; - this.dispatchEvent({ - type: 'pinchstart', - handedness: inputSource.handedness, - target: this - }); - - } - - } else { - - if (grip !== null && inputSource.gripSpace) { - - gripPose = frame.getPose(inputSource.gripSpace, referenceSpace); - - if (gripPose !== null) { - - grip.matrix.fromArray(gripPose.transform.matrix); - grip.matrix.decompose(grip.position, grip.rotation, grip.scale); - - if (gripPose.linearVelocity) { - - grip.hasLinearVelocity = true; - grip.linearVelocity.copy(gripPose.linearVelocity); - - } else { - - grip.hasLinearVelocity = false; - - } - - if (gripPose.angularVelocity) { - - grip.hasAngularVelocity = true; - grip.angularVelocity.copy(gripPose.angularVelocity); - - } else { - - grip.hasAngularVelocity = false; - - } - - } - - } - - } - - if (targetRay !== null) { - - inputPose = frame.getPose(inputSource.targetRaySpace, referenceSpace); - - // Some runtimes (namely Vive Cosmos with Vive OpenXR Runtime) have only grip space and ray space is equal to it - if (inputPose === null && gripPose !== null) { - - inputPose = gripPose; - - } - - if (inputPose !== null) { - - targetRay.matrix.fromArray(inputPose.transform.matrix); - targetRay.matrix.decompose(targetRay.position, targetRay.rotation, targetRay.scale); - - if (inputPose.linearVelocity) { - - targetRay.hasLinearVelocity = true; - targetRay.linearVelocity.copy(inputPose.linearVelocity); - - } else { - - targetRay.hasLinearVelocity = false; - - } - - if (inputPose.angularVelocity) { - - targetRay.hasAngularVelocity = true; - targetRay.angularVelocity.copy(inputPose.angularVelocity); - - } else { - - targetRay.hasAngularVelocity = false; - - } - - this.dispatchEvent(_moveEvent); - - } - - } - - - } - - if (targetRay !== null) { - - targetRay.visible = (inputPose !== null); - - } - - if (grip !== null) { - - grip.visible = (gripPose !== null); - - } - - if (hand !== null) { - - hand.visible = (handPose !== null); - - } - - return this; - - } - - // private method - - _getHandJoint(hand, inputjoint) { - - if (hand.joints[inputjoint.jointName] === undefined) { - - const joint = new Group(); - joint.matrixAutoUpdate = false; - joint.visible = false; - hand.joints[inputjoint.jointName] = joint; - - hand.add(joint); - - } - - return hand.joints[inputjoint.jointName]; - - } - -} - -class DepthTexture extends Texture { - - constructor(width, height, type, mapping, wrapS, wrapT, magFilter, minFilter, anisotropy, format) { - - format = format !== undefined ? format : DepthFormat; - - if (format !== DepthFormat && format !== DepthStencilFormat) { - - throw new Error('DepthTexture format must be either THREE.DepthFormat or THREE.DepthStencilFormat'); - - } - - if (type === undefined && format === DepthFormat) type = UnsignedIntType; - if (type === undefined && format === DepthStencilFormat) type = UnsignedInt248Type; - - super(null, mapping, wrapS, wrapT, magFilter, minFilter, format, type, anisotropy); - - this.isDepthTexture = true; - - this.image = { width: width, height: height }; - - this.magFilter = magFilter !== undefined ? 
magFilter : NearestFilter; - this.minFilter = minFilter !== undefined ? minFilter : NearestFilter; - - this.flipY = false; - this.generateMipmaps = false; - - } - - -} - -class WebXRManager extends EventDispatcher { - - constructor(renderer, gl) { - - super(); - - const scope = this; - - let session = null; - let framebufferScaleFactor = 1.0; - - let referenceSpace = null; - let referenceSpaceType = 'local-floor'; - // Set default foveation to maximum. - let foveation = 1.0; - let customReferenceSpace = null; - - let pose = null; - let glBinding = null; - let glProjLayer = null; - let glBaseLayer = null; - let xrFrame = null; - const attributes = gl.getContextAttributes(); - let initialRenderTarget = null; - let newRenderTarget = null; - - const controllers = []; - const controllerInputSources = []; - - const planes = new Set(); - const planesLastChangedTimes = new Map(); - - // - - const cameraL = new PerspectiveCamera(); - cameraL.layers.enable(1); - cameraL.viewport = new Vector4(); - - const cameraR = new PerspectiveCamera(); - cameraR.layers.enable(2); - cameraR.viewport = new Vector4(); - - const cameras = [cameraL, cameraR]; - - const cameraVR = new ArrayCamera(); - cameraVR.layers.enable(1); - cameraVR.layers.enable(2); - - let _currentDepthNear = null; - let _currentDepthFar = null; - - // - - this.cameraAutoUpdate = true; - this.enabled = false; - - this.isPresenting = false; - - this.getController = function (index) { - - let controller = controllers[index]; - - if (controller === undefined) { - - controller = new WebXRController(); - controllers[index] = controller; - - } - - return controller.getTargetRaySpace(); - - }; - - this.getControllerGrip = function (index) { - - let controller = controllers[index]; - - if (controller === undefined) { - - controller = new WebXRController(); - controllers[index] = controller; - - } - - return controller.getGripSpace(); - - }; - - this.getHand = function (index) { - - let controller = controllers[index]; - - if (controller === undefined) { - - controller = new WebXRController(); - controllers[index] = controller; - - } - - return controller.getHandSpace(); - - }; - - // - - function onSessionEvent(event) { - - const controllerIndex = controllerInputSources.indexOf(event.inputSource); - - if (controllerIndex === - 1) { - - return; - - } - - const controller = controllers[controllerIndex]; - - if (controller !== undefined) { - - controller.dispatchEvent({ type: event.type, data: event.inputSource }); - - } - - } - - function onSessionEnd() { - - session.removeEventListener('select', onSessionEvent); - session.removeEventListener('selectstart', onSessionEvent); - session.removeEventListener('selectend', onSessionEvent); - session.removeEventListener('squeeze', onSessionEvent); - session.removeEventListener('squeezestart', onSessionEvent); - session.removeEventListener('squeezeend', onSessionEvent); - session.removeEventListener('end', onSessionEnd); - session.removeEventListener('inputsourceschange', onInputSourcesChange); - - for (let i = 0; i < controllers.length; i++) { - - const inputSource = controllerInputSources[i]; - - if (inputSource === null) continue; - - controllerInputSources[i] = null; - - controllers[i].disconnect(inputSource); - - } - - _currentDepthNear = null; - _currentDepthFar = null; - - // restore framebuffer/rendering state - - renderer.setRenderTarget(initialRenderTarget); - - glBaseLayer = null; - glProjLayer = null; - glBinding = null; - session = null; - newRenderTarget = null; - - // - - animation.stop(); - - 
scope.isPresenting = false; - - scope.dispatchEvent({ type: 'sessionend' }); - - } - - this.setFramebufferScaleFactor = function (value) { - - framebufferScaleFactor = value; - - if (scope.isPresenting === true) { - - console.warn('THREE.WebXRManager: Cannot change framebuffer scale while presenting.'); - - } - - }; - - this.setReferenceSpaceType = function (value) { - - referenceSpaceType = value; - - if (scope.isPresenting === true) { - - console.warn('THREE.WebXRManager: Cannot change reference space type while presenting.'); - - } - - }; - - this.getReferenceSpace = function () { - - return customReferenceSpace || referenceSpace; - - }; - - this.setReferenceSpace = function (space) { - - customReferenceSpace = space; - - }; - - this.getBaseLayer = function () { - - return glProjLayer !== null ? glProjLayer : glBaseLayer; - - }; - - this.getBinding = function () { - - return glBinding; - - }; - - this.getFrame = function () { - - return xrFrame; - - }; - - this.getSession = function () { - - return session; - - }; - - this.setSession = async function (value) { - - session = value; - - if (session !== null) { - - initialRenderTarget = renderer.getRenderTarget(); - - session.addEventListener('select', onSessionEvent); - session.addEventListener('selectstart', onSessionEvent); - session.addEventListener('selectend', onSessionEvent); - session.addEventListener('squeeze', onSessionEvent); - session.addEventListener('squeezestart', onSessionEvent); - session.addEventListener('squeezeend', onSessionEvent); - session.addEventListener('end', onSessionEnd); - session.addEventListener('inputsourceschange', onInputSourcesChange); - - if (attributes.xrCompatible !== true) { - - await gl.makeXRCompatible(); - - } - - if ((session.renderState.layers === undefined) || (renderer.capabilities.isWebGL2 === false)) { - - const layerInit = { - antialias: (session.renderState.layers === undefined) ? attributes.antialias : true, - alpha: attributes.alpha, - depth: attributes.depth, - stencil: attributes.stencil, - framebufferScaleFactor: framebufferScaleFactor - }; - - glBaseLayer = new XRWebGLLayer(session, gl, layerInit); - - session.updateRenderState({ baseLayer: glBaseLayer }); - - newRenderTarget = new WebGLRenderTarget( - glBaseLayer.framebufferWidth, - glBaseLayer.framebufferHeight, - { - format: RGBAFormat, - type: UnsignedByteType, - encoding: renderer.outputEncoding, - stencilBuffer: attributes.stencil - } - ); - - } else { - - let depthFormat = null; - let depthType = null; - let glDepthFormat = null; - - if (attributes.depth) { - - glDepthFormat = attributes.stencil ? 35056 : 33190; - depthFormat = attributes.stencil ? DepthStencilFormat : DepthFormat; - depthType = attributes.stencil ? UnsignedInt248Type : UnsignedIntType; - - } - - const projectionlayerInit = { - colorFormat: 32856, - depthFormat: glDepthFormat, - scaleFactor: framebufferScaleFactor - }; - - glBinding = new XRWebGLBinding(session, gl); - - glProjLayer = glBinding.createProjectionLayer(projectionlayerInit); - - session.updateRenderState({ layers: [glProjLayer] }); - - newRenderTarget = new WebGLRenderTarget( - glProjLayer.textureWidth, - glProjLayer.textureHeight, - { - format: RGBAFormat, - type: UnsignedByteType, - depthTexture: new DepthTexture(glProjLayer.textureWidth, glProjLayer.textureHeight, depthType, undefined, undefined, undefined, undefined, undefined, undefined, depthFormat), - stencilBuffer: attributes.stencil, - encoding: renderer.outputEncoding, - samples: attributes.antialias ? 
4 : 0 - }); - - const renderTargetProperties = renderer.properties.get(newRenderTarget); - renderTargetProperties.__ignoreDepthValues = glProjLayer.ignoreDepthValues; - - } - - newRenderTarget.isXRRenderTarget = true; // TODO Remove this when possible, see #23278 - - this.setFoveation(foveation); - - customReferenceSpace = null; - referenceSpace = await session.requestReferenceSpace(referenceSpaceType); - - animation.setContext(session); - animation.start(); - - scope.isPresenting = true; - - scope.dispatchEvent({ type: 'sessionstart' }); - - } - - }; - - function onInputSourcesChange(event) { - - // Notify disconnected - - for (let i = 0; i < event.removed.length; i++) { - - const inputSource = event.removed[i]; - const index = controllerInputSources.indexOf(inputSource); - - if (index >= 0) { - - controllerInputSources[index] = null; - controllers[index].disconnect(inputSource); - - } - - } - - // Notify connected - - for (let i = 0; i < event.added.length; i++) { - - const inputSource = event.added[i]; - - let controllerIndex = controllerInputSources.indexOf(inputSource); - - if (controllerIndex === - 1) { - - // Assign input source a controller that currently has no input source - - for (let i = 0; i < controllers.length; i++) { - - if (i >= controllerInputSources.length) { - - controllerInputSources.push(inputSource); - controllerIndex = i; - break; - - } else if (controllerInputSources[i] === null) { - - controllerInputSources[i] = inputSource; - controllerIndex = i; - break; - - } - - } - - // If all controllers do currently receive input we ignore new ones - - if (controllerIndex === - 1) break; - - } - - const controller = controllers[controllerIndex]; - - if (controller) { - - controller.connect(inputSource); - - } - - } - - } - - // - - const cameraLPos = new Vector3(); - const cameraRPos = new Vector3(); - - /** - * Assumes 2 cameras that are parallel and share an X-axis, and that - * the cameras' projection and world matrices have already been set. - * And that near and far planes are identical for both cameras. - * Visualization of this technique: https://computergraphics.stackexchange.com/a/4765 - */ - function setProjectionFromUnion(camera, cameraL, cameraR) { - - cameraLPos.setFromMatrixPosition(cameraL.matrixWorld); - cameraRPos.setFromMatrixPosition(cameraR.matrixWorld); - - const ipd = cameraLPos.distanceTo(cameraRPos); - - const projL = cameraL.projectionMatrix.elements; - const projR = cameraR.projectionMatrix.elements; - - // VR systems will have identical far and near planes, and - // most likely identical top and bottom frustum extents. - // Use the left camera for these values. - const near = projL[14] / (projL[10] - 1); - const far = projL[14] / (projL[10] + 1); - const topFov = (projL[9] + 1) / projL[5]; - const bottomFov = (projL[9] - 1) / projL[5]; - - const leftFov = (projL[8] - 1) / projL[0]; - const rightFov = (projR[8] + 1) / projR[0]; - const left = near * leftFov; - const right = near * rightFov; - - // Calculate the new camera's position offset from the - // left camera. xOffset should be roughly half `ipd`. - const zOffset = ipd / (- leftFov + rightFov); - const xOffset = zOffset * - leftFov; - - // TODO: Better way to apply this offset? 
- cameraL.matrixWorld.decompose(camera.position, camera.quaternion, camera.scale); - camera.translateX(xOffset); - camera.translateZ(zOffset); - camera.matrixWorld.compose(camera.position, camera.quaternion, camera.scale); - camera.matrixWorldInverse.copy(camera.matrixWorld).invert(); - - // Find the union of the frustum values of the cameras and scale - // the values so that the near plane's position does not change in world space, - // although must now be relative to the new union camera. - const near2 = near + zOffset; - const far2 = far + zOffset; - const left2 = left - xOffset; - const right2 = right + (ipd - xOffset); - const top2 = topFov * far / far2 * near2; - const bottom2 = bottomFov * far / far2 * near2; - - camera.projectionMatrix.makePerspective(left2, right2, top2, bottom2, near2, far2); - - } - - function updateCamera(camera, parent) { - - if (parent === null) { - - camera.matrixWorld.copy(camera.matrix); - - } else { - - camera.matrixWorld.multiplyMatrices(parent.matrixWorld, camera.matrix); - - } - - camera.matrixWorldInverse.copy(camera.matrixWorld).invert(); - - } - - this.updateCamera = function (camera) { - - if (session === null) return; - - cameraVR.near = cameraR.near = cameraL.near = camera.near; - cameraVR.far = cameraR.far = cameraL.far = camera.far; - - if (_currentDepthNear !== cameraVR.near || _currentDepthFar !== cameraVR.far) { - - // Note that the new renderState won't apply until the next frame. See #18320 - - session.updateRenderState({ - depthNear: cameraVR.near, - depthFar: cameraVR.far - }); - - _currentDepthNear = cameraVR.near; - _currentDepthFar = cameraVR.far; - - } - - const parent = camera.parent; - const cameras = cameraVR.cameras; - - updateCamera(cameraVR, parent); - - for (let i = 0; i < cameras.length; i++) { - - updateCamera(cameras[i], parent); - - } - - cameraVR.matrixWorld.decompose(cameraVR.position, cameraVR.quaternion, cameraVR.scale); - - // update user camera and its children - - camera.matrix.copy(cameraVR.matrix); - camera.matrix.decompose(camera.position, camera.quaternion, camera.scale); - - const children = camera.children; - - for (let i = 0, l = children.length; i < l; i++) { - - children[i].updateMatrixWorld(true); - - } - - // update projection matrix for proper view frustum culling - - if (cameras.length === 2) { - - setProjectionFromUnion(cameraVR, cameraL, cameraR); - - } else { - - // assume single camera setup (AR) - - cameraVR.projectionMatrix.copy(cameraL.projectionMatrix); - - } - - }; - - this.getCamera = function () { - - return cameraVR; - - }; - - this.getFoveation = function () { - - if (glProjLayer === null && glBaseLayer === null) { - - return undefined; - - } - - return foveation; - - }; - - this.setFoveation = function (value) { - - // 0 = no foveation = full resolution - // 1 = maximum foveation = the edges render at lower resolution - - foveation = value; - - if (glProjLayer !== null) { - - glProjLayer.fixedFoveation = value; - - } - - if (glBaseLayer !== null && glBaseLayer.fixedFoveation !== undefined) { - - glBaseLayer.fixedFoveation = value; - - } - - }; - - this.getPlanes = function () { - - return planes; - - }; - - // Animation Loop - - let onAnimationFrameCallback = null; - - function onAnimationFrame(time, frame) { - - pose = frame.getViewerPose(customReferenceSpace || referenceSpace); - xrFrame = frame; - - if (pose !== null) { - - const views = pose.views; - - if (glBaseLayer !== null) { - - renderer.setRenderTargetFramebuffer(newRenderTarget, glBaseLayer.framebuffer); - 
renderer.setRenderTarget(newRenderTarget); - - } - - let cameraVRNeedsUpdate = false; - - // check if it's necessary to rebuild cameraVR's camera list - - if (views.length !== cameraVR.cameras.length) { - - cameraVR.cameras.length = 0; - cameraVRNeedsUpdate = true; - - } - - for (let i = 0; i < views.length; i++) { - - const view = views[i]; - - let viewport = null; - - if (glBaseLayer !== null) { - - viewport = glBaseLayer.getViewport(view); - - } else { - - const glSubImage = glBinding.getViewSubImage(glProjLayer, view); - viewport = glSubImage.viewport; - - // For side-by-side projection, we only produce a single texture for both eyes. - if (i === 0) { - - renderer.setRenderTargetTextures( - newRenderTarget, - glSubImage.colorTexture, - glProjLayer.ignoreDepthValues ? undefined : glSubImage.depthStencilTexture); - - renderer.setRenderTarget(newRenderTarget); - - } - - } - - let camera = cameras[i]; - - if (camera === undefined) { - - camera = new PerspectiveCamera(); - camera.layers.enable(i); - camera.viewport = new Vector4(); - cameras[i] = camera; - - } - - camera.matrix.fromArray(view.transform.matrix); - camera.projectionMatrix.fromArray(view.projectionMatrix); - camera.viewport.set(viewport.x, viewport.y, viewport.width, viewport.height); - - if (i === 0) { - - cameraVR.matrix.copy(camera.matrix); - - } - - if (cameraVRNeedsUpdate === true) { - - cameraVR.cameras.push(camera); - - } - - } - - } - - // - - for (let i = 0; i < controllers.length; i++) { - - const inputSource = controllerInputSources[i]; - const controller = controllers[i]; - - if (inputSource !== null && controller !== undefined) { - - controller.update(inputSource, frame, customReferenceSpace || referenceSpace); - - } - - } - - if (onAnimationFrameCallback) onAnimationFrameCallback(time, frame); - - if (frame.detectedPlanes) { - - scope.dispatchEvent({ type: 'planesdetected', data: frame.detectedPlanes }); - - let planesToRemove = null; - - for (const plane of planes) { - - if (!frame.detectedPlanes.has(plane)) { - - if (planesToRemove === null) { - - planesToRemove = []; - - } - - planesToRemove.push(plane); - - } - - } - - if (planesToRemove !== null) { - - for (const plane of planesToRemove) { - - planes.delete(plane); - planesLastChangedTimes.delete(plane); - scope.dispatchEvent({ type: 'planeremoved', data: plane }); - - } - - } - - for (const plane of frame.detectedPlanes) { - - if (!planes.has(plane)) { - - planes.add(plane); - planesLastChangedTimes.set(plane, frame.lastChangedTime); - scope.dispatchEvent({ type: 'planeadded', data: plane }); - - } else { - - const lastKnownTime = planesLastChangedTimes.get(plane); - - if (plane.lastChangedTime > lastKnownTime) { - - planesLastChangedTimes.set(plane, plane.lastChangedTime); - scope.dispatchEvent({ type: 'planechanged', data: plane }); - - } - - } - - } - - } - - xrFrame = null; - - } - - const animation = new WebGLAnimation(); - - animation.setAnimationLoop(onAnimationFrame); - - this.setAnimationLoop = function (callback) { - - onAnimationFrameCallback = callback; - - }; - - this.dispose = function () { }; - - } - -} - -function WebGLMaterials(renderer, properties) { - - function refreshFogUniforms(uniforms, fog) { - - fog.color.getRGB(uniforms.fogColor.value, getUnlitUniformColorSpace(renderer)); - - if (fog.isFog) { - - uniforms.fogNear.value = fog.near; - uniforms.fogFar.value = fog.far; - - } else if (fog.isFogExp2) { - - uniforms.fogDensity.value = fog.density; - - } - - } - - function refreshMaterialUniforms(uniforms, material, pixelRatio, height, 
transmissionRenderTarget) { - - if (material.isMeshBasicMaterial) { - - refreshUniformsCommon(uniforms, material); - - } else if (material.isMeshLambertMaterial) { - - refreshUniformsCommon(uniforms, material); - - } else if (material.isMeshToonMaterial) { - - refreshUniformsCommon(uniforms, material); - refreshUniformsToon(uniforms, material); - - } else if (material.isMeshPhongMaterial) { - - refreshUniformsCommon(uniforms, material); - refreshUniformsPhong(uniforms, material); - - } else if (material.isMeshStandardMaterial) { - - refreshUniformsCommon(uniforms, material); - refreshUniformsStandard(uniforms, material); - - if (material.isMeshPhysicalMaterial) { - - refreshUniformsPhysical(uniforms, material, transmissionRenderTarget); - - } - - } else if (material.isMeshMatcapMaterial) { - - refreshUniformsCommon(uniforms, material); - refreshUniformsMatcap(uniforms, material); - - } else if (material.isMeshDepthMaterial) { - - refreshUniformsCommon(uniforms, material); - - } else if (material.isMeshDistanceMaterial) { - - refreshUniformsCommon(uniforms, material); - refreshUniformsDistance(uniforms, material); - - } else if (material.isMeshNormalMaterial) { - - refreshUniformsCommon(uniforms, material); - - } else if (material.isLineBasicMaterial) { - - refreshUniformsLine(uniforms, material); - - if (material.isLineDashedMaterial) { - - refreshUniformsDash(uniforms, material); - - } - - } else if (material.isPointsMaterial) { - - refreshUniformsPoints(uniforms, material, pixelRatio, height); - - } else if (material.isSpriteMaterial) { - - refreshUniformsSprites(uniforms, material); - - } else if (material.isShadowMaterial) { - - uniforms.color.value.copy(material.color); - uniforms.opacity.value = material.opacity; - - } else if (material.isShaderMaterial) { - - material.uniformsNeedUpdate = false; // #15581 - - } - - } - - function refreshUniformsCommon(uniforms, material) { - - uniforms.opacity.value = material.opacity; - - if (material.color) { - - uniforms.diffuse.value.copy(material.color); - - } - - if (material.emissive) { - - uniforms.emissive.value.copy(material.emissive).multiplyScalar(material.emissiveIntensity); - - } - - if (material.map) { - - uniforms.map.value = material.map; - - } - - if (material.alphaMap) { - - uniforms.alphaMap.value = material.alphaMap; - - } - - if (material.bumpMap) { - - uniforms.bumpMap.value = material.bumpMap; - uniforms.bumpScale.value = material.bumpScale; - if (material.side === BackSide) uniforms.bumpScale.value *= - 1; - - } - - if (material.displacementMap) { - - uniforms.displacementMap.value = material.displacementMap; - uniforms.displacementScale.value = material.displacementScale; - uniforms.displacementBias.value = material.displacementBias; - - } - - if (material.emissiveMap) { - - uniforms.emissiveMap.value = material.emissiveMap; - - } - - if (material.normalMap) { - - uniforms.normalMap.value = material.normalMap; - uniforms.normalScale.value.copy(material.normalScale); - if (material.side === BackSide) uniforms.normalScale.value.negate(); - - } - - if (material.specularMap) { - - uniforms.specularMap.value = material.specularMap; - - } - - if (material.alphaTest > 0) { - - uniforms.alphaTest.value = material.alphaTest; - - } - - const envMap = properties.get(material).envMap; - - if (envMap) { - - uniforms.envMap.value = envMap; - - uniforms.flipEnvMap.value = (envMap.isCubeTexture && envMap.isRenderTargetTexture === false) ? 
- 1 : 1; - - uniforms.reflectivity.value = material.reflectivity; - uniforms.ior.value = material.ior; - uniforms.refractionRatio.value = material.refractionRatio; - - } - - if (material.lightMap) { - - uniforms.lightMap.value = material.lightMap; - - // artist-friendly light intensity scaling factor - const scaleFactor = (renderer.physicallyCorrectLights !== true) ? Math.PI : 1; - - uniforms.lightMapIntensity.value = material.lightMapIntensity * scaleFactor; - - } - - if (material.aoMap) { - - uniforms.aoMap.value = material.aoMap; - uniforms.aoMapIntensity.value = material.aoMapIntensity; - - } - - // uv repeat and offset setting priorities - // 1. color map - // 2. specular map - // 3. displacementMap map - // 4. normal map - // 5. bump map - // 6. roughnessMap map - // 7. metalnessMap map - // 8. alphaMap map - // 9. emissiveMap map - // 10. clearcoat map - // 11. clearcoat normal map - // 12. clearcoat roughnessMap map - // 13. iridescence map - // 14. iridescence thickness map - // 15. specular intensity map - // 16. specular tint map - // 17. transmission map - // 18. thickness map - - let uvScaleMap; - - if (material.map) { - - uvScaleMap = material.map; - - } else if (material.specularMap) { - - uvScaleMap = material.specularMap; - - } else if (material.displacementMap) { - - uvScaleMap = material.displacementMap; - - } else if (material.normalMap) { - - uvScaleMap = material.normalMap; - - } else if (material.bumpMap) { - - uvScaleMap = material.bumpMap; - - } else if (material.roughnessMap) { - - uvScaleMap = material.roughnessMap; - - } else if (material.metalnessMap) { - - uvScaleMap = material.metalnessMap; - - } else if (material.alphaMap) { - - uvScaleMap = material.alphaMap; - - } else if (material.emissiveMap) { - - uvScaleMap = material.emissiveMap; - - } else if (material.clearcoatMap) { - - uvScaleMap = material.clearcoatMap; - - } else if (material.clearcoatNormalMap) { - - uvScaleMap = material.clearcoatNormalMap; - - } else if (material.clearcoatRoughnessMap) { - - uvScaleMap = material.clearcoatRoughnessMap; - - } else if (material.iridescenceMap) { - - uvScaleMap = material.iridescenceMap; - - } else if (material.iridescenceThicknessMap) { - - uvScaleMap = material.iridescenceThicknessMap; - - } else if (material.specularIntensityMap) { - - uvScaleMap = material.specularIntensityMap; - - } else if (material.specularColorMap) { - - uvScaleMap = material.specularColorMap; - - } else if (material.transmissionMap) { - - uvScaleMap = material.transmissionMap; - - } else if (material.thicknessMap) { - - uvScaleMap = material.thicknessMap; - - } else if (material.sheenColorMap) { - - uvScaleMap = material.sheenColorMap; - - } else if (material.sheenRoughnessMap) { - - uvScaleMap = material.sheenRoughnessMap; - - } - - if (uvScaleMap !== undefined) { - - // backwards compatibility - if (uvScaleMap.isWebGLRenderTarget) { - - uvScaleMap = uvScaleMap.texture; - - } - - if (uvScaleMap.matrixAutoUpdate === true) { - - uvScaleMap.updateMatrix(); - - } - - uniforms.uvTransform.value.copy(uvScaleMap.matrix); - - } - - // uv repeat and offset setting priorities for uv2 - // 1. ao map - // 2. 
light map - - let uv2ScaleMap; - - if (material.aoMap) { - - uv2ScaleMap = material.aoMap; - - } else if (material.lightMap) { - - uv2ScaleMap = material.lightMap; - - } - - if (uv2ScaleMap !== undefined) { - - // backwards compatibility - if (uv2ScaleMap.isWebGLRenderTarget) { - - uv2ScaleMap = uv2ScaleMap.texture; - - } - - if (uv2ScaleMap.matrixAutoUpdate === true) { - - uv2ScaleMap.updateMatrix(); - - } - - uniforms.uv2Transform.value.copy(uv2ScaleMap.matrix); - - } - - } - - function refreshUniformsLine(uniforms, material) { - - uniforms.diffuse.value.copy(material.color); - uniforms.opacity.value = material.opacity; - - } - - function refreshUniformsDash(uniforms, material) { - - uniforms.dashSize.value = material.dashSize; - uniforms.totalSize.value = material.dashSize + material.gapSize; - uniforms.scale.value = material.scale; - - } - - function refreshUniformsPoints(uniforms, material, pixelRatio, height) { - - uniforms.diffuse.value.copy(material.color); - uniforms.opacity.value = material.opacity; - uniforms.size.value = material.size * pixelRatio; - uniforms.scale.value = height * 0.5; - - if (material.map) { - - uniforms.map.value = material.map; - - } - - if (material.alphaMap) { - - uniforms.alphaMap.value = material.alphaMap; - - } - - if (material.alphaTest > 0) { - - uniforms.alphaTest.value = material.alphaTest; - - } - - // uv repeat and offset setting priorities - // 1. color map - // 2. alpha map - - let uvScaleMap; - - if (material.map) { - - uvScaleMap = material.map; - - } else if (material.alphaMap) { - - uvScaleMap = material.alphaMap; - - } - - if (uvScaleMap !== undefined) { - - if (uvScaleMap.matrixAutoUpdate === true) { - - uvScaleMap.updateMatrix(); - - } - - uniforms.uvTransform.value.copy(uvScaleMap.matrix); - - } - - } - - function refreshUniformsSprites(uniforms, material) { - - uniforms.diffuse.value.copy(material.color); - uniforms.opacity.value = material.opacity; - uniforms.rotation.value = material.rotation; - - if (material.map) { - - uniforms.map.value = material.map; - - } - - if (material.alphaMap) { - - uniforms.alphaMap.value = material.alphaMap; - - } - - if (material.alphaTest > 0) { - - uniforms.alphaTest.value = material.alphaTest; - - } - - // uv repeat and offset setting priorities - // 1. color map - // 2. 
alpha map - - let uvScaleMap; - - if (material.map) { - - uvScaleMap = material.map; - - } else if (material.alphaMap) { - - uvScaleMap = material.alphaMap; - - } - - if (uvScaleMap !== undefined) { - - if (uvScaleMap.matrixAutoUpdate === true) { - - uvScaleMap.updateMatrix(); - - } - - uniforms.uvTransform.value.copy(uvScaleMap.matrix); - - } - - } - - function refreshUniformsPhong(uniforms, material) { - - uniforms.specular.value.copy(material.specular); - uniforms.shininess.value = Math.max(material.shininess, 1e-4); // to prevent pow( 0.0, 0.0 ) - - } - - function refreshUniformsToon(uniforms, material) { - - if (material.gradientMap) { - - uniforms.gradientMap.value = material.gradientMap; - - } - - } - - function refreshUniformsStandard(uniforms, material) { - - uniforms.roughness.value = material.roughness; - uniforms.metalness.value = material.metalness; - - if (material.roughnessMap) { - - uniforms.roughnessMap.value = material.roughnessMap; - - } - - if (material.metalnessMap) { - - uniforms.metalnessMap.value = material.metalnessMap; - - } - - const envMap = properties.get(material).envMap; - - if (envMap) { - - //uniforms.envMap.value = material.envMap; // part of uniforms common - uniforms.envMapIntensity.value = material.envMapIntensity; - - } - - } - - function refreshUniformsPhysical(uniforms, material, transmissionRenderTarget) { - - uniforms.ior.value = material.ior; // also part of uniforms common - - if (material.sheen > 0) { - - uniforms.sheenColor.value.copy(material.sheenColor).multiplyScalar(material.sheen); - - uniforms.sheenRoughness.value = material.sheenRoughness; - - if (material.sheenColorMap) { - - uniforms.sheenColorMap.value = material.sheenColorMap; - - } - - if (material.sheenRoughnessMap) { - - uniforms.sheenRoughnessMap.value = material.sheenRoughnessMap; - - } - - } - - if (material.clearcoat > 0) { - - uniforms.clearcoat.value = material.clearcoat; - uniforms.clearcoatRoughness.value = material.clearcoatRoughness; - - if (material.clearcoatMap) { - - uniforms.clearcoatMap.value = material.clearcoatMap; - - } - - if (material.clearcoatRoughnessMap) { - - uniforms.clearcoatRoughnessMap.value = material.clearcoatRoughnessMap; - - } - - if (material.clearcoatNormalMap) { - - uniforms.clearcoatNormalScale.value.copy(material.clearcoatNormalScale); - uniforms.clearcoatNormalMap.value = material.clearcoatNormalMap; - - if (material.side === BackSide) { - - uniforms.clearcoatNormalScale.value.negate(); - - } - - } - - } - - if (material.iridescence > 0) { - - uniforms.iridescence.value = material.iridescence; - uniforms.iridescenceIOR.value = material.iridescenceIOR; - uniforms.iridescenceThicknessMinimum.value = material.iridescenceThicknessRange[0]; - uniforms.iridescenceThicknessMaximum.value = material.iridescenceThicknessRange[1]; - - if (material.iridescenceMap) { - - uniforms.iridescenceMap.value = material.iridescenceMap; - - } - - if (material.iridescenceThicknessMap) { - - uniforms.iridescenceThicknessMap.value = material.iridescenceThicknessMap; - - } - - } - - if (material.transmission > 0) { - - uniforms.transmission.value = material.transmission; - uniforms.transmissionSamplerMap.value = transmissionRenderTarget.texture; - uniforms.transmissionSamplerSize.value.set(transmissionRenderTarget.width, transmissionRenderTarget.height); - - if (material.transmissionMap) { - - uniforms.transmissionMap.value = material.transmissionMap; - - } - - uniforms.thickness.value = material.thickness; - - if (material.thicknessMap) { - - 
uniforms.thicknessMap.value = material.thicknessMap; - - } - - uniforms.attenuationDistance.value = material.attenuationDistance; - uniforms.attenuationColor.value.copy(material.attenuationColor); - - } - - uniforms.specularIntensity.value = material.specularIntensity; - uniforms.specularColor.value.copy(material.specularColor); - - if (material.specularIntensityMap) { - - uniforms.specularIntensityMap.value = material.specularIntensityMap; - - } - - if (material.specularColorMap) { - - uniforms.specularColorMap.value = material.specularColorMap; - - } - - } - - function refreshUniformsMatcap(uniforms, material) { - - if (material.matcap) { - - uniforms.matcap.value = material.matcap; - - } - - } - - function refreshUniformsDistance(uniforms, material) { - - uniforms.referencePosition.value.copy(material.referencePosition); - uniforms.nearDistance.value = material.nearDistance; - uniforms.farDistance.value = material.farDistance; - - } - - return { - refreshFogUniforms: refreshFogUniforms, - refreshMaterialUniforms: refreshMaterialUniforms - }; - -} - -function WebGLUniformsGroups(gl, info, capabilities, state) { - - let buffers = {}; - let updateList = {}; - let allocatedBindingPoints = []; - - const maxBindingPoints = (capabilities.isWebGL2) ? gl.getParameter(35375) : 0; // binding points are global whereas block indices are per shader program - - function bind(uniformsGroup, program) { - - const webglProgram = program.program; - state.uniformBlockBinding(uniformsGroup, webglProgram); - - } - - function update(uniformsGroup, program) { - - let buffer = buffers[uniformsGroup.id]; - - if (buffer === undefined) { - - prepareUniformsGroup(uniformsGroup); - - buffer = createBuffer(uniformsGroup); - buffers[uniformsGroup.id] = buffer; - - uniformsGroup.addEventListener('dispose', onUniformsGroupsDispose); - - } - - // ensure to update the binding points/block indices mapping for this program - - const webglProgram = program.program; - state.updateUBOMapping(uniformsGroup, webglProgram); - - // update UBO once per frame - - const frame = info.render.frame; - - if (updateList[uniformsGroup.id] !== frame) { - - updateBufferData(uniformsGroup); - - updateList[uniformsGroup.id] = frame; - - } - - } - - function createBuffer(uniformsGroup) { - - // the setup of an UBO is independent of a particular shader program but global - - const bindingPointIndex = allocateBindingPointIndex(); - uniformsGroup.__bindingPointIndex = bindingPointIndex; - - const buffer = gl.createBuffer(); - const size = uniformsGroup.__size; - const usage = uniformsGroup.usage; - - gl.bindBuffer(35345, buffer); - gl.bufferData(35345, size, usage); - gl.bindBuffer(35345, null); - gl.bindBufferBase(35345, bindingPointIndex, buffer); - - return buffer; - - } - - function allocateBindingPointIndex() { - - for (let i = 0; i < maxBindingPoints; i++) { - - if (allocatedBindingPoints.indexOf(i) === - 1) { - - allocatedBindingPoints.push(i); - return i; - - } - - } - - console.error('THREE.WebGLRenderer: Maximum number of simultaneously usable uniforms groups reached.'); - - return 0; - - } - - function updateBufferData(uniformsGroup) { - - const buffer = buffers[uniformsGroup.id]; - const uniforms = uniformsGroup.uniforms; - const cache = uniformsGroup.__cache; - - gl.bindBuffer(35345, buffer); - - for (let i = 0, il = uniforms.length; i < il; i++) { - - const uniform = uniforms[i]; - - // partly update the buffer if necessary - - if (hasUniformChanged(uniform, i, cache) === true) { - - const offset = uniform.__offset; - - const values = 
Array.isArray(uniform.value) ? uniform.value : [uniform.value]; - - let arrayOffset = 0; - - for (let i = 0; i < values.length; i++) { - - const value = values[i]; - - const info = getUniformSize(value); - - if (typeof value === 'number') { - - uniform.__data[0] = value; - gl.bufferSubData(35345, offset + arrayOffset, uniform.__data); - - } else if (value.isMatrix3) { - - // manually converting 3x3 to 3x4 - - uniform.__data[0] = value.elements[0]; - uniform.__data[1] = value.elements[1]; - uniform.__data[2] = value.elements[2]; - uniform.__data[3] = value.elements[0]; - uniform.__data[4] = value.elements[3]; - uniform.__data[5] = value.elements[4]; - uniform.__data[6] = value.elements[5]; - uniform.__data[7] = value.elements[0]; - uniform.__data[8] = value.elements[6]; - uniform.__data[9] = value.elements[7]; - uniform.__data[10] = value.elements[8]; - uniform.__data[11] = value.elements[0]; - - } else { - - value.toArray(uniform.__data, arrayOffset); - - arrayOffset += info.storage / Float32Array.BYTES_PER_ELEMENT; - - } - - } - - gl.bufferSubData(35345, offset, uniform.__data); - - } - - } - - gl.bindBuffer(35345, null); - - } - - function hasUniformChanged(uniform, index, cache) { - - const value = uniform.value; - - if (cache[index] === undefined) { - - // cache entry does not exist so far - - if (typeof value === 'number') { - - cache[index] = value; - - } else { - - const values = Array.isArray(value) ? value : [value]; - - const tempValues = []; - - for (let i = 0; i < values.length; i++) { - - tempValues.push(values[i].clone()); - - } - - cache[index] = tempValues; - - } - - return true; - - } else { - - // compare current value with cached entry - - if (typeof value === 'number') { - - if (cache[index] !== value) { - - cache[index] = value; - return true; - - } - - } else { - - const cachedObjects = Array.isArray(cache[index]) ? cache[index] : [cache[index]]; - const values = Array.isArray(value) ? value : [value]; - - for (let i = 0; i < cachedObjects.length; i++) { - - const cachedObject = cachedObjects[i]; - - if (cachedObject.equals(values[i]) === false) { - - cachedObject.copy(values[i]); - return true; - - } - - } - - } - - } - - return false; - - } - - function prepareUniformsGroup(uniformsGroup) { - - // determine total buffer size according to the STD140 layout - // Hint: STD140 is the only supported layout in WebGL 2 - - const uniforms = uniformsGroup.uniforms; - - let offset = 0; // global buffer offset in bytes - const chunkSize = 16; // size of a chunk in bytes - let chunkOffset = 0; // offset within a single chunk in bytes - - for (let i = 0, l = uniforms.length; i < l; i++) { - - const uniform = uniforms[i]; - - const infos = { - boundary: 0, // bytes - storage: 0 // bytes - }; - - const values = Array.isArray(uniform.value) ? 
uniform.value : [uniform.value]; - - for (let j = 0, jl = values.length; j < jl; j++) { - - const value = values[j]; - - const info = getUniformSize(value); - - infos.boundary += info.boundary; - infos.storage += info.storage; - - } - - // the following two properties will be used for partial buffer updates - - uniform.__data = new Float32Array(infos.storage / Float32Array.BYTES_PER_ELEMENT); - uniform.__offset = offset; - - // - - if (i > 0) { - - chunkOffset = offset % chunkSize; - - const remainingSizeInChunk = chunkSize - chunkOffset; - - // check for chunk overflow - - if (chunkOffset !== 0 && (remainingSizeInChunk - infos.boundary) < 0) { - - // add padding and adjust offset - - offset += (chunkSize - chunkOffset); - uniform.__offset = offset; - - } - - } - - offset += infos.storage; - - } - - // ensure correct final padding - - chunkOffset = offset % chunkSize; - - if (chunkOffset > 0) offset += (chunkSize - chunkOffset); - - // - - uniformsGroup.__size = offset; - uniformsGroup.__cache = {}; - - return this; - - } - - function getUniformSize(value) { - - const info = { - boundary: 0, // bytes - storage: 0 // bytes - }; - - // determine sizes according to STD140 - - if (typeof value === 'number') { - - // float/int - - info.boundary = 4; - info.storage = 4; - - } else if (value.isVector2) { - - // vec2 - - info.boundary = 8; - info.storage = 8; - - } else if (value.isVector3 || value.isColor) { - - // vec3 - - info.boundary = 16; - info.storage = 12; // evil: vec3 must start on a 16-byte boundary but it only consumes 12 bytes - - } else if (value.isVector4) { - - // vec4 - - info.boundary = 16; - info.storage = 16; - - } else if (value.isMatrix3) { - - // mat3 (in STD140 a 3x3 matrix is represented as 3x4) - - info.boundary = 48; - info.storage = 48; - - } else if (value.isMatrix4) { - - // mat4 - - info.boundary = 64; - info.storage = 64; - - } else if (value.isTexture) { - - console.warn('THREE.WebGLRenderer: Texture samplers can not be part of an uniforms group.'); - - } else { - - console.warn('THREE.WebGLRenderer: Unsupported uniform value type.', value); - - } - - return info; - - } - - function onUniformsGroupsDispose(event) { - - const uniformsGroup = event.target; - - uniformsGroup.removeEventListener('dispose', onUniformsGroupsDispose); - - const index = allocatedBindingPoints.indexOf(uniformsGroup.__bindingPointIndex); - allocatedBindingPoints.splice(index, 1); - - gl.deleteBuffer(buffers[uniformsGroup.id]); - - delete buffers[uniformsGroup.id]; - delete updateList[uniformsGroup.id]; - - } - - function dispose() { - - for (const id in buffers) { - - gl.deleteBuffer(buffers[id]); - - } - - allocatedBindingPoints = []; - buffers = {}; - updateList = {}; - - } - - return { - - bind: bind, - update: update, - - dispose: dispose - - }; - -} - -function createCanvasElement() { - - const canvas = createElementNS('canvas'); - canvas.style.display = 'block'; - return canvas; - -} - -function WebGLRenderer(parameters = {}) { - - this.isWebGLRenderer = true; - - const _canvas = parameters.canvas !== undefined ? parameters.canvas : createCanvasElement(), - _context = parameters.context !== undefined ? parameters.context : null, - - _depth = parameters.depth !== undefined ? parameters.depth : true, - _stencil = parameters.stencil !== undefined ? parameters.stencil : true, - _antialias = parameters.antialias !== undefined ? parameters.antialias : false, - _premultipliedAlpha = parameters.premultipliedAlpha !== undefined ? 
parameters.premultipliedAlpha : true, - _preserveDrawingBuffer = parameters.preserveDrawingBuffer !== undefined ? parameters.preserveDrawingBuffer : false, - _powerPreference = parameters.powerPreference !== undefined ? parameters.powerPreference : 'default', - _failIfMajorPerformanceCaveat = parameters.failIfMajorPerformanceCaveat !== undefined ? parameters.failIfMajorPerformanceCaveat : false; - - let _alpha; - - if (_context !== null) { - - _alpha = _context.getContextAttributes().alpha; - - } else { - - _alpha = parameters.alpha !== undefined ? parameters.alpha : false; - - } - - let currentRenderList = null; - let currentRenderState = null; - - // render() can be called from within a callback triggered by another render. - // We track this so that the nested render call gets its list and state isolated from the parent render call. - - const renderListStack = []; - const renderStateStack = []; - - // public properties - - this.domElement = _canvas; - - // Debug configuration container - this.debug = { - - /** - * Enables error checking and reporting when shader programs are being compiled - * @type {boolean} - */ - checkShaderErrors: true - }; - - // clearing - - this.autoClear = true; - this.autoClearColor = true; - this.autoClearDepth = true; - this.autoClearStencil = true; - - // scene graph - - this.sortObjects = true; - - // user-defined clipping - - this.clippingPlanes = []; - this.localClippingEnabled = false; - - // physically based shading - - this.outputEncoding = LinearEncoding; - - // physical lights - - this.physicallyCorrectLights = false; - - // tone mapping - - this.toneMapping = NoToneMapping; - this.toneMappingExposure = 1.0; - - // internal properties - - const _this = this; - - let _isContextLost = false; - - // internal state cache - - let _currentActiveCubeFace = 0; - let _currentActiveMipmapLevel = 0; - let _currentRenderTarget = null; - let _currentMaterialId = - 1; - - let _currentCamera = null; - - const _currentViewport = new Vector4(); - const _currentScissor = new Vector4(); - let _currentScissorTest = null; - - // - - let _width = _canvas.width; - let _height = _canvas.height; - - let _pixelRatio = 1; - let _opaqueSort = null; - let _transparentSort = null; - - const _viewport = new Vector4(0, 0, _width, _height); - const _scissor = new Vector4(0, 0, _width, _height); - let _scissorTest = false; - - // frustum - - const _frustum = new Frustum(); - - // clipping - - let _clippingEnabled = false; - let _localClippingEnabled = false; - - // transmission - - let _transmissionRenderTarget = null; - - // camera matrices cache - - const _projScreenMatrix = new Matrix4(); - - const _vector2 = new Vector2(); - const _vector3 = new Vector3(); - - const _emptyScene = { background: null, fog: null, environment: null, overrideMaterial: null, isScene: true }; - - function getTargetPixelRatio() { - - return _currentRenderTarget === null ? 
_pixelRatio : 1; - - } - - // initialize - - let _gl = _context; - - function getContext(contextNames, contextAttributes) { - - for (let i = 0; i < contextNames.length; i++) { - - const contextName = contextNames[i]; - const context = _canvas.getContext(contextName, contextAttributes); - if (context !== null) return context; - - } - - return null; - - } - - try { - - const contextAttributes = { - alpha: true, - depth: _depth, - stencil: _stencil, - antialias: _antialias, - premultipliedAlpha: _premultipliedAlpha, - preserveDrawingBuffer: _preserveDrawingBuffer, - powerPreference: _powerPreference, - failIfMajorPerformanceCaveat: _failIfMajorPerformanceCaveat - }; - - // OffscreenCanvas does not have setAttribute, see #22811 - if ('setAttribute' in _canvas) _canvas.setAttribute('data-engine', `three.js r${REVISION}`); - - // event listeners must be registered before WebGL context is created, see #12753 - _canvas.addEventListener('webglcontextlost', onContextLost, false); - _canvas.addEventListener('webglcontextrestored', onContextRestore, false); - _canvas.addEventListener('webglcontextcreationerror', onContextCreationError, false); - - if (_gl === null) { - - const contextNames = ['webgl2', 'webgl', 'experimental-webgl']; - - if (_this.isWebGL1Renderer === true) { - - contextNames.shift(); - - } - - _gl = getContext(contextNames, contextAttributes); - - if (_gl === null) { - - if (getContext(contextNames)) { - - throw new Error('Error creating WebGL context with your selected attributes.'); - - } else { - - throw new Error('Error creating WebGL context.'); - - } - - } - - } - - // Some experimental-webgl implementations do not have getShaderPrecisionFormat - - if (_gl.getShaderPrecisionFormat === undefined) { - - _gl.getShaderPrecisionFormat = function () { - - return { 'rangeMin': 1, 'rangeMax': 1, 'precision': 1 }; - - }; - - } - - } catch (error) { - - console.error('THREE.WebGLRenderer: ' + error.message); - throw error; - - } - - let extensions, capabilities, state, info; - let properties, textures, cubemaps, cubeuvmaps, attributes, geometries, objects; - let programCache, materials, renderLists, renderStates, clipping, shadowMap; - - let background, morphtargets, bufferRenderer, indexedBufferRenderer; - - let utils, bindingStates, uniformsGroups; - - function initGLContext() { - - extensions = new WebGLExtensions(_gl); - - capabilities = new WebGLCapabilities(_gl, extensions, parameters); - - extensions.init(capabilities); - - utils = new WebGLUtils(_gl, extensions, capabilities); - - state = new WebGLState(_gl, extensions, capabilities); - - info = new WebGLInfo(); - properties = new WebGLProperties(); - textures = new WebGLTextures(_gl, extensions, state, properties, capabilities, utils, info); - cubemaps = new WebGLCubeMaps(_this); - cubeuvmaps = new WebGLCubeUVMaps(_this); - attributes = new WebGLAttributes(_gl, capabilities); - bindingStates = new WebGLBindingStates(_gl, extensions, attributes, capabilities); - geometries = new WebGLGeometries(_gl, attributes, info, bindingStates); - objects = new WebGLObjects(_gl, geometries, attributes, info); - morphtargets = new WebGLMorphtargets(_gl, capabilities, textures); - clipping = new WebGLClipping(properties); - programCache = new WebGLPrograms(_this, cubemaps, cubeuvmaps, extensions, capabilities, bindingStates, clipping); - materials = new WebGLMaterials(_this, properties); - renderLists = new WebGLRenderLists(); - renderStates = new WebGLRenderStates(extensions, capabilities); - background = new WebGLBackground(_this, cubemaps, 
cubeuvmaps, state, objects, _alpha, _premultipliedAlpha); - shadowMap = new WebGLShadowMap(_this, objects, capabilities); - uniformsGroups = new WebGLUniformsGroups(_gl, info, capabilities, state); - - bufferRenderer = new WebGLBufferRenderer(_gl, extensions, info, capabilities); - indexedBufferRenderer = new WebGLIndexedBufferRenderer(_gl, extensions, info, capabilities); - - info.programs = programCache.programs; - - _this.capabilities = capabilities; - _this.extensions = extensions; - _this.properties = properties; - _this.renderLists = renderLists; - _this.shadowMap = shadowMap; - _this.state = state; - _this.info = info; - - } - - initGLContext(); - - // xr - - const xr = new WebXRManager(_this, _gl); - - this.xr = xr; - - // API - - this.getContext = function () { - - return _gl; - - }; - - this.getContextAttributes = function () { - - return _gl.getContextAttributes(); - - }; - - this.forceContextLoss = function () { - - const extension = extensions.get('WEBGL_lose_context'); - if (extension) extension.loseContext(); - - }; - - this.forceContextRestore = function () { - - const extension = extensions.get('WEBGL_lose_context'); - if (extension) extension.restoreContext(); - - }; - - this.getPixelRatio = function () { - - return _pixelRatio; - - }; - - this.setPixelRatio = function (value) { - - if (value === undefined) return; - - _pixelRatio = value; - - this.setSize(_width, _height, false); - - }; - - this.getSize = function (target) { - - return target.set(_width, _height); - - }; - - this.setSize = function (width, height, updateStyle) { - - if (xr.isPresenting) { - - console.warn('THREE.WebGLRenderer: Can\'t change size while VR device is presenting.'); - return; - - } - - _width = width; - _height = height; - - _canvas.width = Math.floor(width * _pixelRatio); - _canvas.height = Math.floor(height * _pixelRatio); - - if (updateStyle !== false) { - - _canvas.style.width = width + 'px'; - _canvas.style.height = height + 'px'; - - } - - this.setViewport(0, 0, width, height); - - }; - - this.getDrawingBufferSize = function (target) { - - return target.set(_width * _pixelRatio, _height * _pixelRatio).floor(); - - }; - - this.setDrawingBufferSize = function (width, height, pixelRatio) { - - _width = width; - _height = height; - - _pixelRatio = pixelRatio; - - _canvas.width = Math.floor(width * pixelRatio); - _canvas.height = Math.floor(height * pixelRatio); - - this.setViewport(0, 0, width, height); - - }; - - this.getCurrentViewport = function (target) { - - return target.copy(_currentViewport); - - }; - - this.getViewport = function (target) { - - return target.copy(_viewport); - - }; - - this.setViewport = function (x, y, width, height) { - - if (x.isVector4) { - - _viewport.set(x.x, x.y, x.z, x.w); - - } else { - - _viewport.set(x, y, width, height); - - } - - state.viewport(_currentViewport.copy(_viewport).multiplyScalar(_pixelRatio).floor()); - - }; - - this.getScissor = function (target) { - - return target.copy(_scissor); - - }; - - this.setScissor = function (x, y, width, height) { - - if (x.isVector4) { - - _scissor.set(x.x, x.y, x.z, x.w); - - } else { - - _scissor.set(x, y, width, height); - - } - - state.scissor(_currentScissor.copy(_scissor).multiplyScalar(_pixelRatio).floor()); - - }; - - this.getScissorTest = function () { - - return _scissorTest; - - }; - - this.setScissorTest = function (boolean) { - - state.setScissorTest(_scissorTest = boolean); - - }; - - this.setOpaqueSort = function (method) { - - _opaqueSort = method; - - }; - - this.setTransparentSort = 
function (method) { - - _transparentSort = method; - - }; - - // Clearing - - this.getClearColor = function (target) { - - return target.copy(background.getClearColor()); - - }; - - this.setClearColor = function () { - - background.setClearColor.apply(background, arguments); - - }; - - this.getClearAlpha = function () { - - return background.getClearAlpha(); - - }; - - this.setClearAlpha = function () { - - background.setClearAlpha.apply(background, arguments); - - }; - - this.clear = function (color = true, depth = true, stencil = true) { - - let bits = 0; - - if (color) bits |= 16384; - if (depth) bits |= 256; - if (stencil) bits |= 1024; - - _gl.clear(bits); - - }; - - this.clearColor = function () { - - this.clear(true, false, false); - - }; - - this.clearDepth = function () { - - this.clear(false, true, false); - - }; - - this.clearStencil = function () { - - this.clear(false, false, true); - - }; - - // - - this.dispose = function () { - - _canvas.removeEventListener('webglcontextlost', onContextLost, false); - _canvas.removeEventListener('webglcontextrestored', onContextRestore, false); - _canvas.removeEventListener('webglcontextcreationerror', onContextCreationError, false); - - renderLists.dispose(); - renderStates.dispose(); - properties.dispose(); - cubemaps.dispose(); - cubeuvmaps.dispose(); - objects.dispose(); - bindingStates.dispose(); - uniformsGroups.dispose(); - programCache.dispose(); - - xr.dispose(); - - xr.removeEventListener('sessionstart', onXRSessionStart); - xr.removeEventListener('sessionend', onXRSessionEnd); - - if (_transmissionRenderTarget) { - - _transmissionRenderTarget.dispose(); - _transmissionRenderTarget = null; - - } - - animation.stop(); - - }; - - // Events - - function onContextLost(event) { - - event.preventDefault(); - - console.log('THREE.WebGLRenderer: Context Lost.'); - - _isContextLost = true; - - } - - function onContextRestore( /* event */) { - - console.log('THREE.WebGLRenderer: Context Restored.'); - - _isContextLost = false; - - const infoAutoReset = info.autoReset; - const shadowMapEnabled = shadowMap.enabled; - const shadowMapAutoUpdate = shadowMap.autoUpdate; - const shadowMapNeedsUpdate = shadowMap.needsUpdate; - const shadowMapType = shadowMap.type; - - initGLContext(); - - info.autoReset = infoAutoReset; - shadowMap.enabled = shadowMapEnabled; - shadowMap.autoUpdate = shadowMapAutoUpdate; - shadowMap.needsUpdate = shadowMapNeedsUpdate; - shadowMap.type = shadowMapType; - - } - - function onContextCreationError(event) { - - console.error('THREE.WebGLRenderer: A WebGL context could not be created. 
Reason: ', event.statusMessage); - - } - - function onMaterialDispose(event) { - - const material = event.target; - - material.removeEventListener('dispose', onMaterialDispose); - - deallocateMaterial(material); - - } - - // Buffer deallocation - - function deallocateMaterial(material) { - - releaseMaterialProgramReferences(material); - - properties.remove(material); - - } - - - function releaseMaterialProgramReferences(material) { - - const programs = properties.get(material).programs; - - if (programs !== undefined) { - - programs.forEach(function (program) { - - programCache.releaseProgram(program); - - }); - - if (material.isShaderMaterial) { - - programCache.releaseShaderCache(material); - - } - - } - - } - - // Buffer rendering - - this.renderBufferDirect = function (camera, scene, geometry, material, object, group) { - - if (scene === null) scene = _emptyScene; // renderBufferDirect second parameter used to be fog (could be null) - - const frontFaceCW = (object.isMesh && object.matrixWorld.determinant() < 0); - - const program = setProgram(camera, scene, geometry, material, object); - - state.setMaterial(material, frontFaceCW); - - // - - let index = geometry.index; - let rangeFactor = 1; - - if (material.wireframe === true) { - - index = geometries.getWireframeAttribute(geometry); - rangeFactor = 2; - - } - - // - - const drawRange = geometry.drawRange; - const position = geometry.attributes.position; - - let drawStart = drawRange.start * rangeFactor; - let drawEnd = (drawRange.start + drawRange.count) * rangeFactor; - - if (group !== null) { - - drawStart = Math.max(drawStart, group.start * rangeFactor); - drawEnd = Math.min(drawEnd, (group.start + group.count) * rangeFactor); - - } - - if (index !== null) { - - drawStart = Math.max(drawStart, 0); - drawEnd = Math.min(drawEnd, index.count); - - } else if (position !== undefined && position !== null) { - - drawStart = Math.max(drawStart, 0); - drawEnd = Math.min(drawEnd, position.count); - - } - - const drawCount = drawEnd - drawStart; - - if (drawCount < 0 || drawCount === Infinity) return; - - // - - bindingStates.setup(object, material, program, geometry, index); - - let attribute; - let renderer = bufferRenderer; - - if (index !== null) { - - attribute = attributes.get(index); - - renderer = indexedBufferRenderer; - renderer.setIndex(attribute); - - } - - // - - if (object.isMesh) { - - if (material.wireframe === true) { - - state.setLineWidth(material.wireframeLinewidth * getTargetPixelRatio()); - renderer.setMode(1); - - } else { - - renderer.setMode(4); - - } - - } else if (object.isLine) { - - let lineWidth = material.linewidth; - - if (lineWidth === undefined) lineWidth = 1; // Not using Line*Material - - state.setLineWidth(lineWidth * getTargetPixelRatio()); - - if (object.isLineSegments) { - - renderer.setMode(1); - - } else if (object.isLineLoop) { - - renderer.setMode(2); - - } else { - - renderer.setMode(3); - - } - - } else if (object.isPoints) { - - renderer.setMode(0); - - } else if (object.isSprite) { - - renderer.setMode(4); - - } - - if (object.isInstancedMesh) { - - renderer.renderInstances(drawStart, drawCount, object.count); - - } else if (geometry.isInstancedBufferGeometry) { - - const maxInstanceCount = geometry._maxInstanceCount !== undefined ? 
geometry._maxInstanceCount : Infinity; - const instanceCount = Math.min(geometry.instanceCount, maxInstanceCount); - - renderer.renderInstances(drawStart, drawCount, instanceCount); - - } else { - - renderer.render(drawStart, drawCount); - - } - - }; - - // Compile - - this.compile = function (scene, camera) { - - function prepare(material, scene, object) { - - if (material.transparent === true && material.side === DoubleSide && material.forceSinglePass === false) { - - material.side = BackSide; - material.needsUpdate = true; - getProgram(material, scene, object); - - material.side = FrontSide; - material.needsUpdate = true; - getProgram(material, scene, object); - - material.side = DoubleSide; - - } else { - - getProgram(material, scene, object); - - } - - } - - currentRenderState = renderStates.get(scene); - currentRenderState.init(); - - renderStateStack.push(currentRenderState); - - scene.traverseVisible(function (object) { - - if (object.isLight && object.layers.test(camera.layers)) { - - currentRenderState.pushLight(object); - - if (object.castShadow) { - - currentRenderState.pushShadow(object); - - } - - } - - }); - - currentRenderState.setupLights(_this.physicallyCorrectLights); - - scene.traverse(function (object) { - - const material = object.material; - - if (material) { - - if (Array.isArray(material)) { - - for (let i = 0; i < material.length; i++) { - - const material2 = material[i]; - - prepare(material2, scene, object); - - } - - } else { - - prepare(material, scene, object); - - } - - } - - }); - - renderStateStack.pop(); - currentRenderState = null; - - }; - - // Animation Loop - - let onAnimationFrameCallback = null; - - function onAnimationFrame(time) { - - if (onAnimationFrameCallback) onAnimationFrameCallback(time); - - } - - function onXRSessionStart() { - - animation.stop(); - - } - - function onXRSessionEnd() { - - animation.start(); - - } - - const animation = new WebGLAnimation(); - animation.setAnimationLoop(onAnimationFrame); - - if (typeof self !== 'undefined') animation.setContext(self); - - this.setAnimationLoop = function (callback) { - - onAnimationFrameCallback = callback; - xr.setAnimationLoop(callback); - - (callback === null) ? 
animation.stop() : animation.start(); - - }; - - xr.addEventListener('sessionstart', onXRSessionStart); - xr.addEventListener('sessionend', onXRSessionEnd); - - // Rendering - - this.render = function (scene, camera) { - - if (camera !== undefined && camera.isCamera !== true) { - - console.error('THREE.WebGLRenderer.render: camera is not an instance of THREE.Camera.'); - return; - - } - - if (_isContextLost === true) return; - - // update scene graph - - if (scene.matrixWorldAutoUpdate === true) scene.updateMatrixWorld(); - - // update camera matrices and frustum - - if (camera.parent === null && camera.matrixWorldAutoUpdate === true) camera.updateMatrixWorld(); - - if (xr.enabled === true && xr.isPresenting === true) { - - if (xr.cameraAutoUpdate === true) xr.updateCamera(camera); - - camera = xr.getCamera(); // use XR camera for rendering - - } - - // - if (scene.isScene === true) scene.onBeforeRender(_this, scene, camera, _currentRenderTarget); - - currentRenderState = renderStates.get(scene, renderStateStack.length); - currentRenderState.init(); - - renderStateStack.push(currentRenderState); - - _projScreenMatrix.multiplyMatrices(camera.projectionMatrix, camera.matrixWorldInverse); - _frustum.setFromProjectionMatrix(_projScreenMatrix); - - _localClippingEnabled = this.localClippingEnabled; - _clippingEnabled = clipping.init(this.clippingPlanes, _localClippingEnabled); - - currentRenderList = renderLists.get(scene, renderListStack.length); - currentRenderList.init(); - - renderListStack.push(currentRenderList); - - projectObject(scene, camera, 0, _this.sortObjects); - - currentRenderList.finish(); - - if (_this.sortObjects === true) { - - currentRenderList.sort(_opaqueSort, _transparentSort); - - } - - // - - if (_clippingEnabled === true) clipping.beginShadows(); - - const shadowsArray = currentRenderState.state.shadowsArray; - - shadowMap.render(shadowsArray, scene, camera); - - if (_clippingEnabled === true) clipping.endShadows(); - - // - - if (this.info.autoReset === true) this.info.reset(); - - // - - background.render(currentRenderList, scene); - - // render scene - - currentRenderState.setupLights(_this.physicallyCorrectLights); - - if (camera.isArrayCamera) { - - const cameras = camera.cameras; - - for (let i = 0, l = cameras.length; i < l; i++) { - - const camera2 = cameras[i]; - - renderScene(currentRenderList, scene, camera2, camera2.viewport); - - } - - } else { - - renderScene(currentRenderList, scene, camera); - - } - - // - - if (_currentRenderTarget !== null) { - - // resolve multisample renderbuffers to a single-sample texture if necessary - - textures.updateMultisampleRenderTarget(_currentRenderTarget); - - // Generate mipmap if we're using any kind of mipmap filtering - - textures.updateRenderTargetMipmap(_currentRenderTarget); - - } - - // - - if (scene.isScene === true) scene.onAfterRender(_this, scene, camera); - - // _gl.finish(); - - bindingStates.resetDefaultState(); - _currentMaterialId = - 1; - _currentCamera = null; - - renderStateStack.pop(); - - if (renderStateStack.length > 0) { - - currentRenderState = renderStateStack[renderStateStack.length - 1]; - - } else { - - currentRenderState = null; - - } - - renderListStack.pop(); - - if (renderListStack.length > 0) { - - currentRenderList = renderListStack[renderListStack.length - 1]; - - } else { - - currentRenderList = null; - - } - - }; - - function projectObject(object, camera, groupOrder, sortObjects) { - - if (object.visible === false) return; - - const visible = object.layers.test(camera.layers); - - if 
(visible) { - - if (object.isGroup) { - - groupOrder = object.renderOrder; - - } else if (object.isLOD) { - - if (object.autoUpdate === true) object.update(camera); - - } else if (object.isLight) { - - currentRenderState.pushLight(object); - - if (object.castShadow) { - - currentRenderState.pushShadow(object); - - } - - } else if (object.isSprite) { - - if (!object.frustumCulled || _frustum.intersectsSprite(object)) { - - if (sortObjects) { - - _vector3.setFromMatrixPosition(object.matrixWorld) - .applyMatrix4(_projScreenMatrix); - - } - - const geometry = objects.update(object); - const material = object.material; - - if (material.visible) { - - currentRenderList.push(object, geometry, material, groupOrder, _vector3.z, null); - - } - - } - - } else if (object.isMesh || object.isLine || object.isPoints) { - - if (object.isSkinnedMesh) { - - // update skeleton only once in a frame - - if (object.skeleton.frame !== info.render.frame) { - - object.skeleton.update(); - object.skeleton.frame = info.render.frame; - - } - - } - - if (!object.frustumCulled || _frustum.intersectsObject(object)) { - - if (sortObjects) { - - _vector3.setFromMatrixPosition(object.matrixWorld) - .applyMatrix4(_projScreenMatrix); - - } - - const geometry = objects.update(object); - const material = object.material; - - if (Array.isArray(material)) { - - const groups = geometry.groups; - - for (let i = 0, l = groups.length; i < l; i++) { - - const group = groups[i]; - const groupMaterial = material[group.materialIndex]; - - if (groupMaterial && groupMaterial.visible) { - - currentRenderList.push(object, geometry, groupMaterial, groupOrder, _vector3.z, group); - - } - - } - - } else if (material.visible) { - - currentRenderList.push(object, geometry, material, groupOrder, _vector3.z, null); - - } - - } - - } - - } - - const children = object.children; - - for (let i = 0, l = children.length; i < l; i++) { - - projectObject(children[i], camera, groupOrder, sortObjects); - - } - - } - - function renderScene(currentRenderList, scene, camera, viewport) { - - const opaqueObjects = currentRenderList.opaque; - const transmissiveObjects = currentRenderList.transmissive; - const transparentObjects = currentRenderList.transparent; - - currentRenderState.setupLightsView(camera); - - if (_clippingEnabled === true) clipping.setGlobalState(_this.clippingPlanes, camera); - - if (transmissiveObjects.length > 0) renderTransmissionPass(opaqueObjects, scene, camera); - - if (viewport) state.viewport(_currentViewport.copy(viewport)); - - if (opaqueObjects.length > 0) renderObjects(opaqueObjects, scene, camera); - if (transmissiveObjects.length > 0) renderObjects(transmissiveObjects, scene, camera); - if (transparentObjects.length > 0) renderObjects(transparentObjects, scene, camera); - - // Ensure depth buffer writing is enabled so it can be cleared on next render - - state.buffers.depth.setTest(true); - state.buffers.depth.setMask(true); - state.buffers.color.setMask(true); - - state.setPolygonOffset(false); - - } - - function renderTransmissionPass(opaqueObjects, scene, camera) { - - const isWebGL2 = capabilities.isWebGL2; - - if (_transmissionRenderTarget === null) { - - _transmissionRenderTarget = new WebGLRenderTarget(1, 1, { - generateMipmaps: true, - type: extensions.has('EXT_color_buffer_half_float') ? HalfFloatType : UnsignedByteType, - minFilter: LinearMipmapLinearFilter, - samples: (isWebGL2 && _antialias === true) ? 
4 : 0 - }); - - } - - _this.getDrawingBufferSize(_vector2); - - if (isWebGL2) { - - _transmissionRenderTarget.setSize(_vector2.x, _vector2.y); - - } else { - - _transmissionRenderTarget.setSize(floorPowerOfTwo(_vector2.x), floorPowerOfTwo(_vector2.y)); - - } - - // - - const currentRenderTarget = _this.getRenderTarget(); - _this.setRenderTarget(_transmissionRenderTarget); - _this.clear(); - - // Turn off the features which can affect the frag color for opaque objects pass. - // Otherwise they are applied twice in opaque objects pass and transmission objects pass. - const currentToneMapping = _this.toneMapping; - _this.toneMapping = NoToneMapping; - - renderObjects(opaqueObjects, scene, camera); - - _this.toneMapping = currentToneMapping; - - textures.updateMultisampleRenderTarget(_transmissionRenderTarget); - textures.updateRenderTargetMipmap(_transmissionRenderTarget); - - _this.setRenderTarget(currentRenderTarget); - - } - - function renderObjects(renderList, scene, camera) { - - const overrideMaterial = scene.isScene === true ? scene.overrideMaterial : null; - - for (let i = 0, l = renderList.length; i < l; i++) { - - const renderItem = renderList[i]; - - const object = renderItem.object; - const geometry = renderItem.geometry; - const material = overrideMaterial === null ? renderItem.material : overrideMaterial; - const group = renderItem.group; - - if (object.layers.test(camera.layers)) { - - renderObject(object, scene, camera, geometry, material, group); - - } - - } - - } - - function renderObject(object, scene, camera, geometry, material, group) { - - object.onBeforeRender(_this, scene, camera, geometry, material, group); - - object.modelViewMatrix.multiplyMatrices(camera.matrixWorldInverse, object.matrixWorld); - object.normalMatrix.getNormalMatrix(object.modelViewMatrix); - - material.onBeforeRender(_this, scene, camera, geometry, object, group); - - if (material.transparent === true && material.side === DoubleSide && material.forceSinglePass === false) { - - material.side = BackSide; - material.needsUpdate = true; - _this.renderBufferDirect(camera, scene, geometry, material, object, group); - - material.side = FrontSide; - material.needsUpdate = true; - _this.renderBufferDirect(camera, scene, geometry, material, object, group); - - material.side = DoubleSide; - - } else { - - _this.renderBufferDirect(camera, scene, geometry, material, object, group); - - } - - object.onAfterRender(_this, scene, camera, geometry, material, group); - - } - - function getProgram(material, scene, object) { - - if (scene.isScene !== true) scene = _emptyScene; // scene could be a Mesh, Line, Points, ... - - const materialProperties = properties.get(material); - - const lights = currentRenderState.state.lights; - const shadowsArray = currentRenderState.state.shadowsArray; - - const lightsStateVersion = lights.state.version; - - const parameters = programCache.getParameters(material, lights.state, shadowsArray, scene, object); - const programCacheKey = programCache.getProgramCacheKey(parameters); - - let programs = materialProperties.programs; - - // always update environment and fog - changing these trigger an getProgram call, but it's possible that the program doesn't change - - materialProperties.environment = material.isMeshStandardMaterial ? scene.environment : null; - materialProperties.fog = scene.fog; - materialProperties.envMap = (material.isMeshStandardMaterial ? 
cubeuvmaps : cubemaps).get(material.envMap || materialProperties.environment); - - if (programs === undefined) { - - // new material - - material.addEventListener('dispose', onMaterialDispose); - - programs = new Map(); - materialProperties.programs = programs; - - } - - let program = programs.get(programCacheKey); - - if (program !== undefined) { - - // early out if program and light state is identical - - if (materialProperties.currentProgram === program && materialProperties.lightsStateVersion === lightsStateVersion) { - - updateCommonMaterialProperties(material, parameters); - - return program; - - } - - } else { - - parameters.uniforms = programCache.getUniforms(material); - - material.onBuild(object, parameters, _this); - - material.onBeforeCompile(parameters, _this); - - program = programCache.acquireProgram(parameters, programCacheKey); - programs.set(programCacheKey, program); - - materialProperties.uniforms = parameters.uniforms; - - } - - const uniforms = materialProperties.uniforms; - - if ((!material.isShaderMaterial && !material.isRawShaderMaterial) || material.clipping === true) { - - uniforms.clippingPlanes = clipping.uniform; - - } - - updateCommonMaterialProperties(material, parameters); - - // store the light setup it was created for - - materialProperties.needsLights = materialNeedsLights(material); - materialProperties.lightsStateVersion = lightsStateVersion; - - if (materialProperties.needsLights) { - - // wire up the material to this renderer's lighting state - - uniforms.ambientLightColor.value = lights.state.ambient; - uniforms.lightProbe.value = lights.state.probe; - uniforms.directionalLights.value = lights.state.directional; - uniforms.directionalLightShadows.value = lights.state.directionalShadow; - uniforms.spotLights.value = lights.state.spot; - uniforms.spotLightShadows.value = lights.state.spotShadow; - uniforms.rectAreaLights.value = lights.state.rectArea; - uniforms.ltc_1.value = lights.state.rectAreaLTC1; - uniforms.ltc_2.value = lights.state.rectAreaLTC2; - uniforms.pointLights.value = lights.state.point; - uniforms.pointLightShadows.value = lights.state.pointShadow; - uniforms.hemisphereLights.value = lights.state.hemi; - - uniforms.directionalShadowMap.value = lights.state.directionalShadowMap; - uniforms.directionalShadowMatrix.value = lights.state.directionalShadowMatrix; - uniforms.spotShadowMap.value = lights.state.spotShadowMap; - uniforms.spotLightMatrix.value = lights.state.spotLightMatrix; - uniforms.spotLightMap.value = lights.state.spotLightMap; - uniforms.pointShadowMap.value = lights.state.pointShadowMap; - uniforms.pointShadowMatrix.value = lights.state.pointShadowMatrix; - // TODO (abelnation): add area lights shadow info to uniforms - - } - - const progUniforms = program.getUniforms(); - const uniformsList = WebGLUniforms.seqWithValue(progUniforms.seq, uniforms); - - materialProperties.currentProgram = program; - materialProperties.uniformsList = uniformsList; - - return program; - - } - - function updateCommonMaterialProperties(material, parameters) { - - const materialProperties = properties.get(material); - - materialProperties.outputEncoding = parameters.outputEncoding; - materialProperties.instancing = parameters.instancing; - materialProperties.skinning = parameters.skinning; - materialProperties.morphTargets = parameters.morphTargets; - materialProperties.morphNormals = parameters.morphNormals; - materialProperties.morphColors = parameters.morphColors; - materialProperties.morphTargetsCount = parameters.morphTargetsCount; - 
materialProperties.numClippingPlanes = parameters.numClippingPlanes; - materialProperties.numIntersection = parameters.numClipIntersection; - materialProperties.vertexAlphas = parameters.vertexAlphas; - materialProperties.vertexTangents = parameters.vertexTangents; - materialProperties.toneMapping = parameters.toneMapping; - - } - - function setProgram(camera, scene, geometry, material, object) { - - if (scene.isScene !== true) scene = _emptyScene; // scene could be a Mesh, Line, Points, ... - - textures.resetTextureUnits(); - - const fog = scene.fog; - const environment = material.isMeshStandardMaterial ? scene.environment : null; - const encoding = (_currentRenderTarget === null) ? _this.outputEncoding : (_currentRenderTarget.isXRRenderTarget === true ? _currentRenderTarget.texture.encoding : LinearEncoding); - const envMap = (material.isMeshStandardMaterial ? cubeuvmaps : cubemaps).get(material.envMap || environment); - const vertexAlphas = material.vertexColors === true && !!geometry.attributes.color && geometry.attributes.color.itemSize === 4; - const vertexTangents = !!material.normalMap && !!geometry.attributes.tangent; - const morphTargets = !!geometry.morphAttributes.position; - const morphNormals = !!geometry.morphAttributes.normal; - const morphColors = !!geometry.morphAttributes.color; - const toneMapping = material.toneMapped ? _this.toneMapping : NoToneMapping; - - const morphAttribute = geometry.morphAttributes.position || geometry.morphAttributes.normal || geometry.morphAttributes.color; - const morphTargetsCount = (morphAttribute !== undefined) ? morphAttribute.length : 0; - - const materialProperties = properties.get(material); - const lights = currentRenderState.state.lights; - - if (_clippingEnabled === true) { - - if (_localClippingEnabled === true || camera !== _currentCamera) { - - const useCache = - camera === _currentCamera && - material.id === _currentMaterialId; - - // we might want to call this function with some ClippingGroup - // object instead of the material, once it becomes feasible - // (#8465, #8379) - clipping.setState(material, camera, useCache); - - } - - } - - // - - let needsProgramChange = false; - - if (material.version === materialProperties.__version) { - - if (materialProperties.needsLights && (materialProperties.lightsStateVersion !== lights.state.version)) { - - needsProgramChange = true; - - } else if (materialProperties.outputEncoding !== encoding) { - - needsProgramChange = true; - - } else if (object.isInstancedMesh && materialProperties.instancing === false) { - - needsProgramChange = true; - - } else if (!object.isInstancedMesh && materialProperties.instancing === true) { - - needsProgramChange = true; - - } else if (object.isSkinnedMesh && materialProperties.skinning === false) { - - needsProgramChange = true; - - } else if (!object.isSkinnedMesh && materialProperties.skinning === true) { - - needsProgramChange = true; - - } else if (materialProperties.envMap !== envMap) { - - needsProgramChange = true; - - } else if (material.fog === true && materialProperties.fog !== fog) { - - needsProgramChange = true; - - } else if (materialProperties.numClippingPlanes !== undefined && - (materialProperties.numClippingPlanes !== clipping.numPlanes || - materialProperties.numIntersection !== clipping.numIntersection)) { - - needsProgramChange = true; - - } else if (materialProperties.vertexAlphas !== vertexAlphas) { - - needsProgramChange = true; - - } else if (materialProperties.vertexTangents !== vertexTangents) { - - needsProgramChange = true; - 
- } else if (materialProperties.morphTargets !== morphTargets) { - - needsProgramChange = true; - - } else if (materialProperties.morphNormals !== morphNormals) { - - needsProgramChange = true; - - } else if (materialProperties.morphColors !== morphColors) { - - needsProgramChange = true; - - } else if (materialProperties.toneMapping !== toneMapping) { - - needsProgramChange = true; - - } else if (capabilities.isWebGL2 === true && materialProperties.morphTargetsCount !== morphTargetsCount) { - - needsProgramChange = true; - - } - - } else { - - needsProgramChange = true; - materialProperties.__version = material.version; - - } - - // - - let program = materialProperties.currentProgram; - - if (needsProgramChange === true) { - - program = getProgram(material, scene, object); - - } - - let refreshProgram = false; - let refreshMaterial = false; - let refreshLights = false; - - const p_uniforms = program.getUniforms(), - m_uniforms = materialProperties.uniforms; - - if (state.useProgram(program.program)) { - - refreshProgram = true; - refreshMaterial = true; - refreshLights = true; - - } - - if (material.id !== _currentMaterialId) { - - _currentMaterialId = material.id; - - refreshMaterial = true; - - } - - if (refreshProgram || _currentCamera !== camera) { - - p_uniforms.setValue(_gl, 'projectionMatrix', camera.projectionMatrix); - - if (capabilities.logarithmicDepthBuffer) { - - p_uniforms.setValue(_gl, 'logDepthBufFC', - 2.0 / (Math.log(camera.far + 1.0) / Math.LN2)); - - } - - if (_currentCamera !== camera) { - - _currentCamera = camera; - - // lighting uniforms depend on the camera so enforce an update - // now, in case this material supports lights - or later, when - // the next material that does gets activated: - - refreshMaterial = true; // set to true on material change - refreshLights = true; // remains set until update done - - } - - // load material specific uniforms - // (shader material also gets them for the sake of genericity) - - if (material.isShaderMaterial || - material.isMeshPhongMaterial || - material.isMeshToonMaterial || - material.isMeshStandardMaterial || - material.envMap) { - - const uCamPos = p_uniforms.map.cameraPosition; - - if (uCamPos !== undefined) { - - uCamPos.setValue(_gl, - _vector3.setFromMatrixPosition(camera.matrixWorld)); - - } - - } - - if (material.isMeshPhongMaterial || - material.isMeshToonMaterial || - material.isMeshLambertMaterial || - material.isMeshBasicMaterial || - material.isMeshStandardMaterial || - material.isShaderMaterial) { - - p_uniforms.setValue(_gl, 'isOrthographic', camera.isOrthographicCamera === true); - - } - - if (material.isMeshPhongMaterial || - material.isMeshToonMaterial || - material.isMeshLambertMaterial || - material.isMeshBasicMaterial || - material.isMeshStandardMaterial || - material.isShaderMaterial || - material.isShadowMaterial || - object.isSkinnedMesh) { - - p_uniforms.setValue(_gl, 'viewMatrix', camera.matrixWorldInverse); - - } - - } - - // skinning and morph target uniforms must be set even if material didn't change - // auto-setting of texture unit for bone and morph texture must go before other textures - // otherwise textures used for skinning and morphing can take over texture units reserved for other material textures - - if (object.isSkinnedMesh) { - - p_uniforms.setOptional(_gl, object, 'bindMatrix'); - p_uniforms.setOptional(_gl, object, 'bindMatrixInverse'); - - const skeleton = object.skeleton; - - if (skeleton) { - - if (capabilities.floatVertexTextures) { - - if (skeleton.boneTexture === null) 
skeleton.computeBoneTexture(); - - p_uniforms.setValue(_gl, 'boneTexture', skeleton.boneTexture, textures); - p_uniforms.setValue(_gl, 'boneTextureSize', skeleton.boneTextureSize); - - } else { - - console.warn('THREE.WebGLRenderer: SkinnedMesh can only be used with WebGL 2. With WebGL 1 OES_texture_float and vertex textures support is required.'); - - } - - } - - } - - const morphAttributes = geometry.morphAttributes; - - if (morphAttributes.position !== undefined || morphAttributes.normal !== undefined || (morphAttributes.color !== undefined && capabilities.isWebGL2 === true)) { - - morphtargets.update(object, geometry, material, program); - - } - - if (refreshMaterial || materialProperties.receiveShadow !== object.receiveShadow) { - - materialProperties.receiveShadow = object.receiveShadow; - p_uniforms.setValue(_gl, 'receiveShadow', object.receiveShadow); - - } - - // https://github.com/mrdoob/three.js/pull/24467#issuecomment-1209031512 - - if (material.isMeshGouraudMaterial && material.envMap !== null) { - - m_uniforms.envMap.value = envMap; - - m_uniforms.flipEnvMap.value = (envMap.isCubeTexture && envMap.isRenderTargetTexture === false) ? - 1 : 1; - - } - - if (refreshMaterial) { - - p_uniforms.setValue(_gl, 'toneMappingExposure', _this.toneMappingExposure); - - if (materialProperties.needsLights) { - - // the current material requires lighting info - - // note: all lighting uniforms are always set correctly - // they simply reference the renderer's state for their - // values - // - // use the current material's .needsUpdate flags to set - // the GL state when required - - markUniformsLightsNeedsUpdate(m_uniforms, refreshLights); - - } - - // refresh uniforms common to several materials - - if (fog && material.fog === true) { - - materials.refreshFogUniforms(m_uniforms, fog); - - } - - materials.refreshMaterialUniforms(m_uniforms, material, _pixelRatio, _height, _transmissionRenderTarget); - - WebGLUniforms.upload(_gl, materialProperties.uniformsList, m_uniforms, textures); - - } - - if (material.isShaderMaterial && material.uniformsNeedUpdate === true) { - - WebGLUniforms.upload(_gl, materialProperties.uniformsList, m_uniforms, textures); - material.uniformsNeedUpdate = false; - - } - - if (material.isSpriteMaterial) { - - p_uniforms.setValue(_gl, 'center', object.center); - - } - - // common matrices - - p_uniforms.setValue(_gl, 'modelViewMatrix', object.modelViewMatrix); - p_uniforms.setValue(_gl, 'normalMatrix', object.normalMatrix); - p_uniforms.setValue(_gl, 'modelMatrix', object.matrixWorld); - - // UBOs - - if (material.isShaderMaterial || material.isRawShaderMaterial) { - - const groups = material.uniformsGroups; - - for (let i = 0, l = groups.length; i < l; i++) { - - if (capabilities.isWebGL2) { - - const group = groups[i]; - - uniformsGroups.update(group, program); - uniformsGroups.bind(group, program); - - } else { - - console.warn('THREE.WebGLRenderer: Uniform Buffer Objects can only be used with WebGL 2.'); - - } - - } - - } - - return program; - - } - - // If uniforms are marked as clean, they don't need to be loaded to the GPU. 
- - function markUniformsLightsNeedsUpdate(uniforms, value) { - - uniforms.ambientLightColor.needsUpdate = value; - uniforms.lightProbe.needsUpdate = value; - - uniforms.directionalLights.needsUpdate = value; - uniforms.directionalLightShadows.needsUpdate = value; - uniforms.pointLights.needsUpdate = value; - uniforms.pointLightShadows.needsUpdate = value; - uniforms.spotLights.needsUpdate = value; - uniforms.spotLightShadows.needsUpdate = value; - uniforms.rectAreaLights.needsUpdate = value; - uniforms.hemisphereLights.needsUpdate = value; - - } - - function materialNeedsLights(material) { - - return material.isMeshLambertMaterial || material.isMeshToonMaterial || material.isMeshPhongMaterial || - material.isMeshStandardMaterial || material.isShadowMaterial || - (material.isShaderMaterial && material.lights === true); - - } - - this.getActiveCubeFace = function () { - - return _currentActiveCubeFace; - - }; - - this.getActiveMipmapLevel = function () { - - return _currentActiveMipmapLevel; - - }; - - this.getRenderTarget = function () { - - return _currentRenderTarget; - - }; - - this.setRenderTargetTextures = function (renderTarget, colorTexture, depthTexture) { - - properties.get(renderTarget.texture).__webglTexture = colorTexture; - properties.get(renderTarget.depthTexture).__webglTexture = depthTexture; - - const renderTargetProperties = properties.get(renderTarget); - renderTargetProperties.__hasExternalTextures = true; - - if (renderTargetProperties.__hasExternalTextures) { - - renderTargetProperties.__autoAllocateDepthBuffer = depthTexture === undefined; - - if (!renderTargetProperties.__autoAllocateDepthBuffer) { - - // The multisample_render_to_texture extension doesn't work properly if there - // are midframe flushes and an external depth buffer. Disable use of the extension. - if (extensions.has('WEBGL_multisampled_render_to_texture') === true) { - - console.warn('THREE.WebGLRenderer: Render-to-texture extension was disabled because an external texture was provided'); - renderTargetProperties.__useRenderToTexture = false; - - } - - } - - } - - }; - - this.setRenderTargetFramebuffer = function (renderTarget, defaultFramebuffer) { - - const renderTargetProperties = properties.get(renderTarget); - renderTargetProperties.__webglFramebuffer = defaultFramebuffer; - renderTargetProperties.__useDefaultFramebuffer = defaultFramebuffer === undefined; - - }; - - this.setRenderTarget = function (renderTarget, activeCubeFace = 0, activeMipmapLevel = 0) { - - _currentRenderTarget = renderTarget; - _currentActiveCubeFace = activeCubeFace; - _currentActiveMipmapLevel = activeMipmapLevel; - - let useDefaultFramebuffer = true; - let framebuffer = null; - let isCube = false; - let isRenderTarget3D = false; - - if (renderTarget) { - - const renderTargetProperties = properties.get(renderTarget); - - if (renderTargetProperties.__useDefaultFramebuffer !== undefined) { - - // We need to make sure to rebind the framebuffer. - state.bindFramebuffer(36160, null); - useDefaultFramebuffer = false; - - } else if (renderTargetProperties.__webglFramebuffer === undefined) { - - textures.setupRenderTarget(renderTarget); - - } else if (renderTargetProperties.__hasExternalTextures) { - - // Color and depth texture must be rebound in order for the swapchain to update. 
- textures.rebindTextures(renderTarget, properties.get(renderTarget.texture).__webglTexture, properties.get(renderTarget.depthTexture).__webglTexture); - - } - - const texture = renderTarget.texture; - - if (texture.isData3DTexture || texture.isDataArrayTexture || texture.isCompressedArrayTexture) { - - isRenderTarget3D = true; - - } - - const __webglFramebuffer = properties.get(renderTarget).__webglFramebuffer; - - if (renderTarget.isWebGLCubeRenderTarget) { - - framebuffer = __webglFramebuffer[activeCubeFace]; - isCube = true; - - } else if ((capabilities.isWebGL2 && renderTarget.samples > 0) && textures.useMultisampledRTT(renderTarget) === false) { - - framebuffer = properties.get(renderTarget).__webglMultisampledFramebuffer; - - } else { - - framebuffer = __webglFramebuffer; - - } - - _currentViewport.copy(renderTarget.viewport); - _currentScissor.copy(renderTarget.scissor); - _currentScissorTest = renderTarget.scissorTest; - - } else { - - _currentViewport.copy(_viewport).multiplyScalar(_pixelRatio).floor(); - _currentScissor.copy(_scissor).multiplyScalar(_pixelRatio).floor(); - _currentScissorTest = _scissorTest; - - } - - const framebufferBound = state.bindFramebuffer(36160, framebuffer); - - if (framebufferBound && capabilities.drawBuffers && useDefaultFramebuffer) { - - state.drawBuffers(renderTarget, framebuffer); - - } - - state.viewport(_currentViewport); - state.scissor(_currentScissor); - state.setScissorTest(_currentScissorTest); - - if (isCube) { - - const textureProperties = properties.get(renderTarget.texture); - _gl.framebufferTexture2D(36160, 36064, 34069 + activeCubeFace, textureProperties.__webglTexture, activeMipmapLevel); - - } else if (isRenderTarget3D) { - - const textureProperties = properties.get(renderTarget.texture); - const layer = activeCubeFace || 0; - _gl.framebufferTextureLayer(36160, 36064, textureProperties.__webglTexture, activeMipmapLevel || 0, layer); - - } - - _currentMaterialId = - 1; // reset current material to ensure correct uniform bindings - - }; - - this.readRenderTargetPixels = function (renderTarget, x, y, width, height, buffer, activeCubeFaceIndex) { - - if (!(renderTarget && renderTarget.isWebGLRenderTarget)) { - - console.error('THREE.WebGLRenderer.readRenderTargetPixels: renderTarget is not THREE.WebGLRenderTarget.'); - return; - - } - - let framebuffer = properties.get(renderTarget).__webglFramebuffer; - - if (renderTarget.isWebGLCubeRenderTarget && activeCubeFaceIndex !== undefined) { - - framebuffer = framebuffer[activeCubeFaceIndex]; - - } - - if (framebuffer) { - - state.bindFramebuffer(36160, framebuffer); - - try { - - const texture = renderTarget.texture; - const textureFormat = texture.format; - const textureType = texture.type; - - if (textureFormat !== RGBAFormat && utils.convert(textureFormat) !== _gl.getParameter(35739)) { - - console.error('THREE.WebGLRenderer.readRenderTargetPixels: renderTarget is not in RGBA or implementation defined format.'); - return; - - } - - const halfFloatSupportedByExt = (textureType === HalfFloatType) && (extensions.has('EXT_color_buffer_half_float') || (capabilities.isWebGL2 && extensions.has('EXT_color_buffer_float'))); - - if (textureType !== UnsignedByteType && utils.convert(textureType) !== _gl.getParameter(35738) && // Edge and Chrome Mac < 52 (#9513) - !(textureType === FloatType && (capabilities.isWebGL2 || extensions.has('OES_texture_float') || extensions.has('WEBGL_color_buffer_float'))) && // Chrome Mac >= 52 and Firefox - !halfFloatSupportedByExt) { - - 
console.error('THREE.WebGLRenderer.readRenderTargetPixels: renderTarget is not in UnsignedByteType or implementation defined type.'); - return; - - } - - // the following if statement ensures valid read requests (no out-of-bounds pixels, see #8604) - - if ((x >= 0 && x <= (renderTarget.width - width)) && (y >= 0 && y <= (renderTarget.height - height))) { - - _gl.readPixels(x, y, width, height, utils.convert(textureFormat), utils.convert(textureType), buffer); - - } - - } finally { - - // restore framebuffer of current render target if necessary - - const framebuffer = (_currentRenderTarget !== null) ? properties.get(_currentRenderTarget).__webglFramebuffer : null; - state.bindFramebuffer(36160, framebuffer); - - } - - } - - }; - - this.copyFramebufferToTexture = function (position, texture, level = 0) { - - const levelScale = Math.pow(2, - level); - const width = Math.floor(texture.image.width * levelScale); - const height = Math.floor(texture.image.height * levelScale); - - textures.setTexture2D(texture, 0); - - _gl.copyTexSubImage2D(3553, level, 0, 0, position.x, position.y, width, height); - - state.unbindTexture(); - - }; - - this.copyTextureToTexture = function (position, srcTexture, dstTexture, level = 0) { - - const width = srcTexture.image.width; - const height = srcTexture.image.height; - const glFormat = utils.convert(dstTexture.format); - const glType = utils.convert(dstTexture.type); - - textures.setTexture2D(dstTexture, 0); - - // As another texture upload may have changed pixelStorei - // parameters, make sure they are correct for the dstTexture - _gl.pixelStorei(37440, dstTexture.flipY); - _gl.pixelStorei(37441, dstTexture.premultiplyAlpha); - _gl.pixelStorei(3317, dstTexture.unpackAlignment); - - if (srcTexture.isDataTexture) { - - _gl.texSubImage2D(3553, level, position.x, position.y, width, height, glFormat, glType, srcTexture.image.data); - - } else { - - if (srcTexture.isCompressedTexture) { - - _gl.compressedTexSubImage2D(3553, level, position.x, position.y, srcTexture.mipmaps[0].width, srcTexture.mipmaps[0].height, glFormat, srcTexture.mipmaps[0].data); - - } else { - - _gl.texSubImage2D(3553, level, position.x, position.y, glFormat, glType, srcTexture.image); - - } - - } - - // Generate mipmaps only when copying level 0 - if (level === 0 && dstTexture.generateMipmaps) _gl.generateMipmap(3553); - - state.unbindTexture(); - - }; - - this.copyTextureToTexture3D = function (sourceBox, position, srcTexture, dstTexture, level = 0) { - - if (_this.isWebGL1Renderer) { - - console.warn('THREE.WebGLRenderer.copyTextureToTexture3D: can only be used with WebGL2.'); - return; - - } - - const width = sourceBox.max.x - sourceBox.min.x + 1; - const height = sourceBox.max.y - sourceBox.min.y + 1; - const depth = sourceBox.max.z - sourceBox.min.z + 1; - const glFormat = utils.convert(dstTexture.format); - const glType = utils.convert(dstTexture.type); - let glTarget; - - if (dstTexture.isData3DTexture) { - - textures.setTexture3D(dstTexture, 0); - glTarget = 32879; - - } else if (dstTexture.isDataArrayTexture) { - - textures.setTexture2DArray(dstTexture, 0); - glTarget = 35866; - - } else { - - console.warn('THREE.WebGLRenderer.copyTextureToTexture3D: only supports THREE.DataTexture3D and THREE.DataTexture2DArray.'); - return; - - } - - _gl.pixelStorei(37440, dstTexture.flipY); - _gl.pixelStorei(37441, dstTexture.premultiplyAlpha); - _gl.pixelStorei(3317, dstTexture.unpackAlignment); - - const unpackRowLen = _gl.getParameter(3314); - const unpackImageHeight = _gl.getParameter(32878); 
- const unpackSkipPixels = _gl.getParameter(3316); - const unpackSkipRows = _gl.getParameter(3315); - const unpackSkipImages = _gl.getParameter(32877); - - const image = srcTexture.isCompressedTexture ? srcTexture.mipmaps[0] : srcTexture.image; - - _gl.pixelStorei(3314, image.width); - _gl.pixelStorei(32878, image.height); - _gl.pixelStorei(3316, sourceBox.min.x); - _gl.pixelStorei(3315, sourceBox.min.y); - _gl.pixelStorei(32877, sourceBox.min.z); - - if (srcTexture.isDataTexture || srcTexture.isData3DTexture) { - - _gl.texSubImage3D(glTarget, level, position.x, position.y, position.z, width, height, depth, glFormat, glType, image.data); - - } else { - - if (srcTexture.isCompressedArrayTexture) { - - console.warn('THREE.WebGLRenderer.copyTextureToTexture3D: untested support for compressed srcTexture.'); - _gl.compressedTexSubImage3D(glTarget, level, position.x, position.y, position.z, width, height, depth, glFormat, image.data); - - } else { - - _gl.texSubImage3D(glTarget, level, position.x, position.y, position.z, width, height, depth, glFormat, glType, image); - - } - - } - - _gl.pixelStorei(3314, unpackRowLen); - _gl.pixelStorei(32878, unpackImageHeight); - _gl.pixelStorei(3316, unpackSkipPixels); - _gl.pixelStorei(3315, unpackSkipRows); - _gl.pixelStorei(32877, unpackSkipImages); - - // Generate mipmaps only when copying level 0 - if (level === 0 && dstTexture.generateMipmaps) _gl.generateMipmap(glTarget); - - state.unbindTexture(); - - }; - - this.initTexture = function (texture) { - - if (texture.isCubeTexture) { - - textures.setTextureCube(texture, 0); - - } else if (texture.isData3DTexture) { - - textures.setTexture3D(texture, 0); - - } else if (texture.isDataArrayTexture || texture.isCompressedArrayTexture) { - - textures.setTexture2DArray(texture, 0); - - } else { - - textures.setTexture2D(texture, 0); - - } - - state.unbindTexture(); - - }; - - this.resetState = function () { - - _currentActiveCubeFace = 0; - _currentActiveMipmapLevel = 0; - _currentRenderTarget = null; - - state.reset(); - bindingStates.reset(); - - }; - - if (typeof __THREE_DEVTOOLS__ !== 'undefined') { - - __THREE_DEVTOOLS__.dispatchEvent(new CustomEvent('observe', { detail: this })); - - } - -} - -class WebGL1Renderer extends WebGLRenderer { } - -WebGL1Renderer.prototype.isWebGL1Renderer = true; - -class FogExp2 { - - constructor(color, density = 0.00025) { - - this.isFogExp2 = true; - - this.name = ''; - - this.color = new Color(color); - this.density = density; - - } - - clone() { - - return new FogExp2(this.color, this.density); - - } - - toJSON( /* meta */) { - - return { - type: 'FogExp2', - color: this.color.getHex(), - density: this.density - }; - - } - -} - -class Fog { - - constructor(color, near = 1, far = 1000) { - - this.isFog = true; - - this.name = ''; - - this.color = new Color(color); - - this.near = near; - this.far = far; - - } - - clone() { - - return new Fog(this.color, this.near, this.far); - - } - - toJSON( /* meta */) { - - return { - type: 'Fog', - color: this.color.getHex(), - near: this.near, - far: this.far - }; - - } - -} - -class Scene extends Object3D { - - constructor() { - - super(); - - this.isScene = true; - - this.type = 'Scene'; - - this.background = null; - this.environment = null; - this.fog = null; - - this.backgroundBlurriness = 0; - this.backgroundIntensity = 1; - - this.overrideMaterial = null; - - if (typeof __THREE_DEVTOOLS__ !== 'undefined') { - - __THREE_DEVTOOLS__.dispatchEvent(new CustomEvent('observe', { detail: this })); - - } - - } - - copy(source, 
recursive) { - - super.copy(source, recursive); - - if (source.background !== null) this.background = source.background.clone(); - if (source.environment !== null) this.environment = source.environment.clone(); - if (source.fog !== null) this.fog = source.fog.clone(); - - this.backgroundBlurriness = source.backgroundBlurriness; - this.backgroundIntensity = source.backgroundIntensity; - - if (source.overrideMaterial !== null) this.overrideMaterial = source.overrideMaterial.clone(); - - this.matrixAutoUpdate = source.matrixAutoUpdate; - - return this; - - } - - toJSON(meta) { - - const data = super.toJSON(meta); - - if (this.fog !== null) data.object.fog = this.fog.toJSON(); - if (this.backgroundBlurriness > 0) data.object.backgroundBlurriness = this.backgroundBlurriness; - if (this.backgroundIntensity !== 1) data.object.backgroundIntensity = this.backgroundIntensity; - - return data; - - } - - // @deprecated - - get autoUpdate() { - - console.warn('THREE.Scene: autoUpdate was renamed to matrixWorldAutoUpdate in r144.'); - return this.matrixWorldAutoUpdate; - - } - - set autoUpdate(value) { - - console.warn('THREE.Scene: autoUpdate was renamed to matrixWorldAutoUpdate in r144.'); - this.matrixWorldAutoUpdate = value; - - } - -} - -class InterleavedBuffer { - - constructor(array, stride) { - - this.isInterleavedBuffer = true; - - this.array = array; - this.stride = stride; - this.count = array !== undefined ? array.length / stride : 0; - - this.usage = StaticDrawUsage; - this.updateRange = { offset: 0, count: - 1 }; - - this.version = 0; - - this.uuid = generateUUID(); - - } - - onUploadCallback() { } - - set needsUpdate(value) { - - if (value === true) this.version++; - - } - - setUsage(value) { - - this.usage = value; - - return this; - - } - - copy(source) { - - this.array = new source.array.constructor(source.array); - this.count = source.count; - this.stride = source.stride; - this.usage = source.usage; - - return this; - - } - - copyAt(index1, attribute, index2) { - - index1 *= this.stride; - index2 *= attribute.stride; - - for (let i = 0, l = this.stride; i < l; i++) { - - this.array[index1 + i] = attribute.array[index2 + i]; - - } - - return this; - - } - - set(value, offset = 0) { - - this.array.set(value, offset); - - return this; - - } - - clone(data) { - - if (data.arrayBuffers === undefined) { - - data.arrayBuffers = {}; - - } - - if (this.array.buffer._uuid === undefined) { - - this.array.buffer._uuid = generateUUID(); - - } - - if (data.arrayBuffers[this.array.buffer._uuid] === undefined) { - - data.arrayBuffers[this.array.buffer._uuid] = this.array.slice(0).buffer; - - } - - const array = new this.array.constructor(data.arrayBuffers[this.array.buffer._uuid]); - - const ib = new this.constructor(array, this.stride); - ib.setUsage(this.usage); - - return ib; - - } - - onUpload(callback) { - - this.onUploadCallback = callback; - - return this; - - } - - toJSON(data) { - - if (data.arrayBuffers === undefined) { - - data.arrayBuffers = {}; - - } - - // generate UUID for array buffer if necessary - - if (this.array.buffer._uuid === undefined) { - - this.array.buffer._uuid = generateUUID(); - - } - - if (data.arrayBuffers[this.array.buffer._uuid] === undefined) { - - data.arrayBuffers[this.array.buffer._uuid] = Array.from(new Uint32Array(this.array.buffer)); - - } - - // - - return { - uuid: this.uuid, - buffer: this.array.buffer._uuid, - type: this.array.constructor.name, - stride: this.stride - }; - - } - -} - -const _vector$6 = /*@__PURE__*/ new Vector3(); - -class 
InterleavedBufferAttribute { - - constructor(interleavedBuffer, itemSize, offset, normalized = false) { - - this.isInterleavedBufferAttribute = true; - - this.name = ''; - - this.data = interleavedBuffer; - this.itemSize = itemSize; - this.offset = offset; - - this.normalized = normalized; - - } - - get count() { - - return this.data.count; - - } - - get array() { - - return this.data.array; - - } - - set needsUpdate(value) { - - this.data.needsUpdate = value; - - } - - applyMatrix4(m) { - - for (let i = 0, l = this.data.count; i < l; i++) { - - _vector$6.fromBufferAttribute(this, i); - - _vector$6.applyMatrix4(m); - - this.setXYZ(i, _vector$6.x, _vector$6.y, _vector$6.z); - - } - - return this; - - } - - applyNormalMatrix(m) { - - for (let i = 0, l = this.count; i < l; i++) { - - _vector$6.fromBufferAttribute(this, i); - - _vector$6.applyNormalMatrix(m); - - this.setXYZ(i, _vector$6.x, _vector$6.y, _vector$6.z); - - } - - return this; - - } - - transformDirection(m) { - - for (let i = 0, l = this.count; i < l; i++) { - - _vector$6.fromBufferAttribute(this, i); - - _vector$6.transformDirection(m); - - this.setXYZ(i, _vector$6.x, _vector$6.y, _vector$6.z); - - } - - return this; - - } - - setX(index, x) { - - if (this.normalized) x = normalize(x, this.array); - - this.data.array[index * this.data.stride + this.offset] = x; - - return this; - - } - - setY(index, y) { - - if (this.normalized) y = normalize(y, this.array); - - this.data.array[index * this.data.stride + this.offset + 1] = y; - - return this; - - } - - setZ(index, z) { - - if (this.normalized) z = normalize(z, this.array); - - this.data.array[index * this.data.stride + this.offset + 2] = z; - - return this; - - } - - setW(index, w) { - - if (this.normalized) w = normalize(w, this.array); - - this.data.array[index * this.data.stride + this.offset + 3] = w; - - return this; - - } - - getX(index) { - - let x = this.data.array[index * this.data.stride + this.offset]; - - if (this.normalized) x = denormalize(x, this.array); - - return x; - - } - - getY(index) { - - let y = this.data.array[index * this.data.stride + this.offset + 1]; - - if (this.normalized) y = denormalize(y, this.array); - - return y; - - } - - getZ(index) { - - let z = this.data.array[index * this.data.stride + this.offset + 2]; - - if (this.normalized) z = denormalize(z, this.array); - - return z; - - } - - getW(index) { - - let w = this.data.array[index * this.data.stride + this.offset + 3]; - - if (this.normalized) w = denormalize(w, this.array); - - return w; - - } - - setXY(index, x, y) { - - index = index * this.data.stride + this.offset; - - if (this.normalized) { - - x = normalize(x, this.array); - y = normalize(y, this.array); - - } - - this.data.array[index + 0] = x; - this.data.array[index + 1] = y; - - return this; - - } - - setXYZ(index, x, y, z) { - - index = index * this.data.stride + this.offset; - - if (this.normalized) { - - x = normalize(x, this.array); - y = normalize(y, this.array); - z = normalize(z, this.array); - - } - - this.data.array[index + 0] = x; - this.data.array[index + 1] = y; - this.data.array[index + 2] = z; - - return this; - - } - - setXYZW(index, x, y, z, w) { - - index = index * this.data.stride + this.offset; - - if (this.normalized) { - - x = normalize(x, this.array); - y = normalize(y, this.array); - z = normalize(z, this.array); - w = normalize(w, this.array); - - } - - this.data.array[index + 0] = x; - this.data.array[index + 1] = y; - this.data.array[index + 2] = z; - this.data.array[index + 3] = w; - - return this; - - 
} - - clone(data) { - - if (data === undefined) { - - console.log('THREE.InterleavedBufferAttribute.clone(): Cloning an interleaved buffer attribute will de-interleave buffer data.'); - - const array = []; - - for (let i = 0; i < this.count; i++) { - - const index = i * this.data.stride + this.offset; - - for (let j = 0; j < this.itemSize; j++) { - - array.push(this.data.array[index + j]); - - } - - } - - return new BufferAttribute(new this.array.constructor(array), this.itemSize, this.normalized); - - } else { - - if (data.interleavedBuffers === undefined) { - - data.interleavedBuffers = {}; - - } - - if (data.interleavedBuffers[this.data.uuid] === undefined) { - - data.interleavedBuffers[this.data.uuid] = this.data.clone(data); - - } - - return new InterleavedBufferAttribute(data.interleavedBuffers[this.data.uuid], this.itemSize, this.offset, this.normalized); - - } - - } - - toJSON(data) { - - if (data === undefined) { - - console.log('THREE.InterleavedBufferAttribute.toJSON(): Serializing an interleaved buffer attribute will de-interleave buffer data.'); - - const array = []; - - for (let i = 0; i < this.count; i++) { - - const index = i * this.data.stride + this.offset; - - for (let j = 0; j < this.itemSize; j++) { - - array.push(this.data.array[index + j]); - - } - - } - - // de-interleave data and save it as an ordinary buffer attribute for now - - return { - itemSize: this.itemSize, - type: this.array.constructor.name, - array: array, - normalized: this.normalized - }; - - } else { - - // save as true interleaved attribute - - if (data.interleavedBuffers === undefined) { - - data.interleavedBuffers = {}; - - } - - if (data.interleavedBuffers[this.data.uuid] === undefined) { - - data.interleavedBuffers[this.data.uuid] = this.data.toJSON(data); - - } - - return { - isInterleavedBufferAttribute: true, - itemSize: this.itemSize, - data: this.data.uuid, - offset: this.offset, - normalized: this.normalized - }; - - } - - } - -} - -class SpriteMaterial extends Material { - - constructor(parameters) { - - super(); - - this.isSpriteMaterial = true; - - this.type = 'SpriteMaterial'; - - this.color = new Color(0xffffff); - - this.map = null; - - this.alphaMap = null; - - this.rotation = 0; - - this.sizeAttenuation = true; - - this.transparent = true; - - this.fog = true; - - this.setValues(parameters); - - } - - copy(source) { - - super.copy(source); - - this.color.copy(source.color); - - this.map = source.map; - - this.alphaMap = source.alphaMap; - - this.rotation = source.rotation; - - this.sizeAttenuation = source.sizeAttenuation; - - this.fog = source.fog; - - return this; - - } - -} - -let _geometry; - -const _intersectPoint = /*@__PURE__*/ new Vector3(); -const _worldScale = /*@__PURE__*/ new Vector3(); -const _mvPosition = /*@__PURE__*/ new Vector3(); - -const _alignedPosition = /*@__PURE__*/ new Vector2(); -const _rotatedPosition = /*@__PURE__*/ new Vector2(); -const _viewWorldMatrix = /*@__PURE__*/ new Matrix4(); - -const _vA = /*@__PURE__*/ new Vector3(); -const _vB = /*@__PURE__*/ new Vector3(); -const _vC = /*@__PURE__*/ new Vector3(); - -const _uvA = /*@__PURE__*/ new Vector2(); -const _uvB = /*@__PURE__*/ new Vector2(); -const _uvC = /*@__PURE__*/ new Vector2(); - -class Sprite extends Object3D { - - constructor(material) { - - super(); - - this.isSprite = true; - - this.type = 'Sprite'; - - if (_geometry === undefined) { - - _geometry = new BufferGeometry(); - - const float32Array = new Float32Array([ - - 0.5, - 0.5, 0, 0, 0, - 0.5, - 0.5, 0, 1, 0, - 0.5, 0.5, 0, 1, 1, - - 0.5, 
0.5, 0, 0, 1 - ]); - - const interleavedBuffer = new InterleavedBuffer(float32Array, 5); - - _geometry.setIndex([0, 1, 2, 0, 2, 3]); - _geometry.setAttribute('position', new InterleavedBufferAttribute(interleavedBuffer, 3, 0, false)); - _geometry.setAttribute('uv', new InterleavedBufferAttribute(interleavedBuffer, 2, 3, false)); - - } - - this.geometry = _geometry; - this.material = (material !== undefined) ? material : new SpriteMaterial(); - - this.center = new Vector2(0.5, 0.5); - - } - - raycast(raycaster, intersects) { - - if (raycaster.camera === null) { - - console.error('THREE.Sprite: "Raycaster.camera" needs to be set in order to raycast against sprites.'); - - } - - _worldScale.setFromMatrixScale(this.matrixWorld); - - _viewWorldMatrix.copy(raycaster.camera.matrixWorld); - this.modelViewMatrix.multiplyMatrices(raycaster.camera.matrixWorldInverse, this.matrixWorld); - - _mvPosition.setFromMatrixPosition(this.modelViewMatrix); - - if (raycaster.camera.isPerspectiveCamera && this.material.sizeAttenuation === false) { - - _worldScale.multiplyScalar(- _mvPosition.z); - - } - - const rotation = this.material.rotation; - let sin, cos; - - if (rotation !== 0) { - - cos = Math.cos(rotation); - sin = Math.sin(rotation); - - } - - const center = this.center; - - transformVertex(_vA.set(- 0.5, - 0.5, 0), _mvPosition, center, _worldScale, sin, cos); - transformVertex(_vB.set(0.5, - 0.5, 0), _mvPosition, center, _worldScale, sin, cos); - transformVertex(_vC.set(0.5, 0.5, 0), _mvPosition, center, _worldScale, sin, cos); - - _uvA.set(0, 0); - _uvB.set(1, 0); - _uvC.set(1, 1); - - // check first triangle - let intersect = raycaster.ray.intersectTriangle(_vA, _vB, _vC, false, _intersectPoint); - - if (intersect === null) { - - // check second triangle - transformVertex(_vB.set(- 0.5, 0.5, 0), _mvPosition, center, _worldScale, sin, cos); - _uvB.set(0, 1); - - intersect = raycaster.ray.intersectTriangle(_vA, _vC, _vB, false, _intersectPoint); - if (intersect === null) { - - return; - - } - - } - - const distance = raycaster.ray.origin.distanceTo(_intersectPoint); - - if (distance < raycaster.near || distance > raycaster.far) return; - - intersects.push({ - - distance: distance, - point: _intersectPoint.clone(), - uv: Triangle.getUV(_intersectPoint, _vA, _vB, _vC, _uvA, _uvB, _uvC, new Vector2()), - face: null, - object: this - - }); - - } - - copy(source, recursive) { - - super.copy(source, recursive); - - if (source.center !== undefined) this.center.copy(source.center); - - this.material = source.material; - - return this; - - } - -} - -function transformVertex(vertexPosition, mvPosition, center, scale, sin, cos) { - - // compute position in camera space - _alignedPosition.subVectors(vertexPosition, center).addScalar(0.5).multiply(scale); - - // to check if rotation is not zero - if (sin !== undefined) { - - _rotatedPosition.x = (cos * _alignedPosition.x) - (sin * _alignedPosition.y); - _rotatedPosition.y = (sin * _alignedPosition.x) + (cos * _alignedPosition.y); - - } else { - - _rotatedPosition.copy(_alignedPosition); - - } - - - vertexPosition.copy(mvPosition); - vertexPosition.x += _rotatedPosition.x; - vertexPosition.y += _rotatedPosition.y; - - // transform to world space - vertexPosition.applyMatrix4(_viewWorldMatrix); - -} - -const _v1$2 = /*@__PURE__*/ new Vector3(); -const _v2$1 = /*@__PURE__*/ new Vector3(); - -class LOD extends Object3D { - - constructor() { - - super(); - - this._currentLevel = 0; - - this.type = 'LOD'; - - Object.defineProperties(this, { - levels: { - enumerable: 
true, - value: [] - }, - isLOD: { - value: true, - } - }); - - this.autoUpdate = true; - - } - - copy(source) { - - super.copy(source, false); - - const levels = source.levels; - - for (let i = 0, l = levels.length; i < l; i++) { - - const level = levels[i]; - - this.addLevel(level.object.clone(), level.distance, level.hysteresis); - - } - - this.autoUpdate = source.autoUpdate; - - return this; - - } - - addLevel(object, distance = 0, hysteresis = 0) { - - distance = Math.abs(distance); - - const levels = this.levels; - - let l; - - for (l = 0; l < levels.length; l++) { - - if (distance < levels[l].distance) { - - break; - - } - - } - - levels.splice(l, 0, { distance: distance, hysteresis: hysteresis, object: object }); - - this.add(object); - - return this; - - } - - getCurrentLevel() { - - return this._currentLevel; - - } - - - - getObjectForDistance(distance) { - - const levels = this.levels; - - if (levels.length > 0) { - - let i, l; - - for (i = 1, l = levels.length; i < l; i++) { - - let levelDistance = levels[i].distance; - - if (levels[i].object.visible) { - - levelDistance -= levelDistance * levels[i].hysteresis; - - } - - if (distance < levelDistance) { - - break; - - } - - } - - return levels[i - 1].object; - - } - - return null; - - } - - raycast(raycaster, intersects) { - - const levels = this.levels; - - if (levels.length > 0) { - - _v1$2.setFromMatrixPosition(this.matrixWorld); - - const distance = raycaster.ray.origin.distanceTo(_v1$2); - - this.getObjectForDistance(distance).raycast(raycaster, intersects); - - } - - } - - update(camera) { - - const levels = this.levels; - - if (levels.length > 1) { - - _v1$2.setFromMatrixPosition(camera.matrixWorld); - _v2$1.setFromMatrixPosition(this.matrixWorld); - - const distance = _v1$2.distanceTo(_v2$1) / camera.zoom; - - levels[0].object.visible = true; - - let i, l; - - for (i = 1, l = levels.length; i < l; i++) { - - let levelDistance = levels[i].distance; - - if (levels[i].object.visible) { - - levelDistance -= levelDistance * levels[i].hysteresis; - - } - - if (distance >= levelDistance) { - - levels[i - 1].object.visible = false; - levels[i].object.visible = true; - - } else { - - break; - - } - - } - - this._currentLevel = i - 1; - - for (; i < l; i++) { - - levels[i].object.visible = false; - - } - - } - - } - - toJSON(meta) { - - const data = super.toJSON(meta); - - if (this.autoUpdate === false) data.object.autoUpdate = false; - - data.object.levels = []; - - const levels = this.levels; - - for (let i = 0, l = levels.length; i < l; i++) { - - const level = levels[i]; - - data.object.levels.push({ - object: level.object.uuid, - distance: level.distance, - hysteresis: level.hysteresis - }); - - } - - return data; - - } - -} - -const _basePosition = /*@__PURE__*/ new Vector3(); - -const _skinIndex = /*@__PURE__*/ new Vector4(); -const _skinWeight = /*@__PURE__*/ new Vector4(); - -const _vector$5 = /*@__PURE__*/ new Vector3(); -const _matrix = /*@__PURE__*/ new Matrix4(); - -class SkinnedMesh extends Mesh { - - constructor(geometry, material) { - - super(geometry, material); - - this.isSkinnedMesh = true; - - this.type = 'SkinnedMesh'; - - this.bindMode = 'attached'; - this.bindMatrix = new Matrix4(); - this.bindMatrixInverse = new Matrix4(); - - } - - copy(source, recursive) { - - super.copy(source, recursive); - - this.bindMode = source.bindMode; - this.bindMatrix.copy(source.bindMatrix); - this.bindMatrixInverse.copy(source.bindMatrixInverse); - - this.skeleton = source.skeleton; - - return this; - - } - - bind(skeleton, 
bindMatrix) { - - this.skeleton = skeleton; - - if (bindMatrix === undefined) { - - this.updateMatrixWorld(true); - - this.skeleton.calculateInverses(); - - bindMatrix = this.matrixWorld; - - } - - this.bindMatrix.copy(bindMatrix); - this.bindMatrixInverse.copy(bindMatrix).invert(); - - } - - pose() { - - this.skeleton.pose(); - - } - - normalizeSkinWeights() { - - const vector = new Vector4(); - - const skinWeight = this.geometry.attributes.skinWeight; - - for (let i = 0, l = skinWeight.count; i < l; i++) { - - vector.fromBufferAttribute(skinWeight, i); - - const scale = 1.0 / vector.manhattanLength(); - - if (scale !== Infinity) { - - vector.multiplyScalar(scale); - - } else { - - vector.set(1, 0, 0, 0); // do something reasonable - - } - - skinWeight.setXYZW(i, vector.x, vector.y, vector.z, vector.w); - - } - - } - - updateMatrixWorld(force) { - - super.updateMatrixWorld(force); - - if (this.bindMode === 'attached') { - - this.bindMatrixInverse.copy(this.matrixWorld).invert(); - - } else if (this.bindMode === 'detached') { - - this.bindMatrixInverse.copy(this.bindMatrix).invert(); - - } else { - - console.warn('THREE.SkinnedMesh: Unrecognized bindMode: ' + this.bindMode); - - } - - } - - boneTransform(index, target) { - - const skeleton = this.skeleton; - const geometry = this.geometry; - - _skinIndex.fromBufferAttribute(geometry.attributes.skinIndex, index); - _skinWeight.fromBufferAttribute(geometry.attributes.skinWeight, index); - - _basePosition.copy(target).applyMatrix4(this.bindMatrix); - - target.set(0, 0, 0); - - for (let i = 0; i < 4; i++) { - - const weight = _skinWeight.getComponent(i); - - if (weight !== 0) { - - const boneIndex = _skinIndex.getComponent(i); - - _matrix.multiplyMatrices(skeleton.bones[boneIndex].matrixWorld, skeleton.boneInverses[boneIndex]); - - target.addScaledVector(_vector$5.copy(_basePosition).applyMatrix4(_matrix), weight); - - } - - } - - return target.applyMatrix4(this.bindMatrixInverse); - - } - -} - -class Bone extends Object3D { - - constructor() { - - super(); - - this.isBone = true; - - this.type = 'Bone'; - - } - -} - -class DataTexture extends Texture { - - constructor(data = null, width = 1, height = 1, format, type, mapping, wrapS, wrapT, magFilter = NearestFilter, minFilter = NearestFilter, anisotropy, encoding) { - - super(null, mapping, wrapS, wrapT, magFilter, minFilter, format, type, anisotropy, encoding); - - this.isDataTexture = true; - - this.image = { data: data, width: width, height: height }; - - this.generateMipmaps = false; - this.flipY = false; - this.unpackAlignment = 1; - - } - -} - -const _offsetMatrix = /*@__PURE__*/ new Matrix4(); -const _identityMatrix = /*@__PURE__*/ new Matrix4(); - -class Skeleton { - - constructor(bones = [], boneInverses = []) { - - this.uuid = generateUUID(); - - this.bones = bones.slice(0); - this.boneInverses = boneInverses; - this.boneMatrices = null; - - this.boneTexture = null; - this.boneTextureSize = 0; - - this.frame = - 1; - - this.init(); - - } - - init() { - - const bones = this.bones; - const boneInverses = this.boneInverses; - - this.boneMatrices = new Float32Array(bones.length * 16); - - // calculate inverse bone matrices if necessary - - if (boneInverses.length === 0) { - - this.calculateInverses(); - - } else { - - // handle special case - - if (bones.length !== boneInverses.length) { - - console.warn('THREE.Skeleton: Number of inverse bone matrices does not match amount of bones.'); - - this.boneInverses = []; - - for (let i = 0, il = this.bones.length; i < il; i++) { - - 
this.boneInverses.push(new Matrix4()); - - } - - } - - } - - } - - calculateInverses() { - - this.boneInverses.length = 0; - - for (let i = 0, il = this.bones.length; i < il; i++) { - - const inverse = new Matrix4(); - - if (this.bones[i]) { - - inverse.copy(this.bones[i].matrixWorld).invert(); - - } - - this.boneInverses.push(inverse); - - } - - } - - pose() { - - // recover the bind-time world matrices - - for (let i = 0, il = this.bones.length; i < il; i++) { - - const bone = this.bones[i]; - - if (bone) { - - bone.matrixWorld.copy(this.boneInverses[i]).invert(); - - } - - } - - // compute the local matrices, positions, rotations and scales - - for (let i = 0, il = this.bones.length; i < il; i++) { - - const bone = this.bones[i]; - - if (bone) { - - if (bone.parent && bone.parent.isBone) { - - bone.matrix.copy(bone.parent.matrixWorld).invert(); - bone.matrix.multiply(bone.matrixWorld); - - } else { - - bone.matrix.copy(bone.matrixWorld); - - } - - bone.matrix.decompose(bone.position, bone.quaternion, bone.scale); - - } - - } - - } - - update() { - - const bones = this.bones; - const boneInverses = this.boneInverses; - const boneMatrices = this.boneMatrices; - const boneTexture = this.boneTexture; - - // flatten bone matrices to array - - for (let i = 0, il = bones.length; i < il; i++) { - - // compute the offset between the current and the original transform - - const matrix = bones[i] ? bones[i].matrixWorld : _identityMatrix; - - _offsetMatrix.multiplyMatrices(matrix, boneInverses[i]); - _offsetMatrix.toArray(boneMatrices, i * 16); - - } - - if (boneTexture !== null) { - - boneTexture.needsUpdate = true; - - } - - } - - clone() { - - return new Skeleton(this.bones, this.boneInverses); - - } - - computeBoneTexture() { - - // layout (1 matrix = 4 pixels) - // RGBA RGBA RGBA RGBA (=> column1, column2, column3, column4) - // with 8x8 pixel texture max 16 bones * 4 pixels = (8 * 8) - // 16x16 pixel texture max 64 bones * 4 pixels = (16 * 16) - // 32x32 pixel texture max 256 bones * 4 pixels = (32 * 32) - // 64x64 pixel texture max 1024 bones * 4 pixels = (64 * 64) - - let size = Math.sqrt(this.bones.length * 4); // 4 pixels needed for 1 matrix - size = ceilPowerOfTwo(size); - size = Math.max(size, 4); - - const boneMatrices = new Float32Array(size * size * 4); // 4 floats per RGBA pixel - boneMatrices.set(this.boneMatrices); // copy current values - - const boneTexture = new DataTexture(boneMatrices, size, size, RGBAFormat, FloatType); - boneTexture.needsUpdate = true; - - this.boneMatrices = boneMatrices; - this.boneTexture = boneTexture; - this.boneTextureSize = size; - - return this; - - } - - getBoneByName(name) { - - for (let i = 0, il = this.bones.length; i < il; i++) { - - const bone = this.bones[i]; - - if (bone.name === name) { - - return bone; - - } - - } - - return undefined; - - } - - dispose() { - - if (this.boneTexture !== null) { - - this.boneTexture.dispose(); - - this.boneTexture = null; - - } - - } - - fromJSON(json, bones) { - - this.uuid = json.uuid; - - for (let i = 0, l = json.bones.length; i < l; i++) { - - const uuid = json.bones[i]; - let bone = bones[uuid]; - - if (bone === undefined) { - - console.warn('THREE.Skeleton: No bone found with UUID:', uuid); - bone = new Bone(); - - } - - this.bones.push(bone); - this.boneInverses.push(new Matrix4().fromArray(json.boneInverses[i])); - - } - - this.init(); - - return this; - - } - - toJSON() { - - const data = { - metadata: { - version: 4.5, - type: 'Skeleton', - generator: 'Skeleton.toJSON' - }, - bones: [], - 
boneInverses: [] - }; - - data.uuid = this.uuid; - - const bones = this.bones; - const boneInverses = this.boneInverses; - - for (let i = 0, l = bones.length; i < l; i++) { - - const bone = bones[i]; - data.bones.push(bone.uuid); - - const boneInverse = boneInverses[i]; - data.boneInverses.push(boneInverse.toArray()); - - } - - return data; - - } - -} - -class InstancedBufferAttribute extends BufferAttribute { - - constructor(array, itemSize, normalized, meshPerAttribute = 1) { - - super(array, itemSize, normalized); - - this.isInstancedBufferAttribute = true; - - this.meshPerAttribute = meshPerAttribute; - - } - - copy(source) { - - super.copy(source); - - this.meshPerAttribute = source.meshPerAttribute; - - return this; - - } - - toJSON() { - - const data = super.toJSON(); - - data.meshPerAttribute = this.meshPerAttribute; - - data.isInstancedBufferAttribute = true; - - return data; - - } - -} - -const _instanceLocalMatrix = /*@__PURE__*/ new Matrix4(); -const _instanceWorldMatrix = /*@__PURE__*/ new Matrix4(); - -const _instanceIntersects = []; - -const _identity = /*@__PURE__*/ new Matrix4(); -const _mesh = /*@__PURE__*/ new Mesh(); - -class InstancedMesh extends Mesh { - - constructor(geometry, material, count) { - - super(geometry, material); - - this.isInstancedMesh = true; - - this.instanceMatrix = new InstancedBufferAttribute(new Float32Array(count * 16), 16); - this.instanceColor = null; - - this.count = count; - - this.frustumCulled = false; - - for (let i = 0; i < count; i++) { - - this.setMatrixAt(i, _identity); - - } - - } - - copy(source, recursive) { - - super.copy(source, recursive); - - this.instanceMatrix.copy(source.instanceMatrix); - - if (source.instanceColor !== null) this.instanceColor = source.instanceColor.clone(); - - this.count = source.count; - - return this; - - } - - getColorAt(index, color) { - - color.fromArray(this.instanceColor.array, index * 3); - - } - - getMatrixAt(index, matrix) { - - matrix.fromArray(this.instanceMatrix.array, index * 16); - - } - - raycast(raycaster, intersects) { - - const matrixWorld = this.matrixWorld; - const raycastTimes = this.count; - - _mesh.geometry = this.geometry; - _mesh.material = this.material; - - if (_mesh.material === undefined) return; - - for (let instanceId = 0; instanceId < raycastTimes; instanceId++) { - - // calculate the world matrix for each instance - - this.getMatrixAt(instanceId, _instanceLocalMatrix); - - _instanceWorldMatrix.multiplyMatrices(matrixWorld, _instanceLocalMatrix); - - // the mesh represents this single instance - - _mesh.matrixWorld = _instanceWorldMatrix; - - _mesh.raycast(raycaster, _instanceIntersects); - - // process the result of raycast - - for (let i = 0, l = _instanceIntersects.length; i < l; i++) { - - const intersect = _instanceIntersects[i]; - intersect.instanceId = instanceId; - intersect.object = this; - intersects.push(intersect); - - } - - _instanceIntersects.length = 0; - - } - - } - - setColorAt(index, color) { - - if (this.instanceColor === null) { - - this.instanceColor = new InstancedBufferAttribute(new Float32Array(this.instanceMatrix.count * 3), 3); - - } - - color.toArray(this.instanceColor.array, index * 3); - - } - - setMatrixAt(index, matrix) { - - matrix.toArray(this.instanceMatrix.array, index * 16); - - } - - updateMorphTargets() { - - } - - dispose() { - - this.dispatchEvent({ type: 'dispose' }); - - } - -} - -class LineBasicMaterial extends Material { - - constructor(parameters) { - - super(); - - this.isLineBasicMaterial = true; - - this.type = 
'LineBasicMaterial'; - - this.color = new Color(0xffffff); - - this.linewidth = 1; - this.linecap = 'round'; - this.linejoin = 'round'; - - this.fog = true; - - this.setValues(parameters); - - } - - - copy(source) { - - super.copy(source); - - this.color.copy(source.color); - - this.linewidth = source.linewidth; - this.linecap = source.linecap; - this.linejoin = source.linejoin; - - this.fog = source.fog; - - return this; - - } - -} - -const _start$1 = /*@__PURE__*/ new Vector3(); -const _end$1 = /*@__PURE__*/ new Vector3(); -const _inverseMatrix$1 = /*@__PURE__*/ new Matrix4(); -const _ray$1 = /*@__PURE__*/ new Ray(); -const _sphere$1 = /*@__PURE__*/ new Sphere(); - -class Line extends Object3D { - - constructor(geometry = new BufferGeometry(), material = new LineBasicMaterial()) { - - super(); - - this.isLine = true; - - this.type = 'Line'; - - this.geometry = geometry; - this.material = material; - - this.updateMorphTargets(); - - } - - copy(source, recursive) { - - super.copy(source, recursive); - - this.material = source.material; - this.geometry = source.geometry; - - return this; - - } - - computeLineDistances() { - - const geometry = this.geometry; - - // we assume non-indexed geometry - - if (geometry.index === null) { - - const positionAttribute = geometry.attributes.position; - const lineDistances = [0]; - - for (let i = 1, l = positionAttribute.count; i < l; i++) { - - _start$1.fromBufferAttribute(positionAttribute, i - 1); - _end$1.fromBufferAttribute(positionAttribute, i); - - lineDistances[i] = lineDistances[i - 1]; - lineDistances[i] += _start$1.distanceTo(_end$1); - - } - - geometry.setAttribute('lineDistance', new Float32BufferAttribute(lineDistances, 1)); - - } else { - - console.warn('THREE.Line.computeLineDistances(): Computation only possible with non-indexed BufferGeometry.'); - - } - - return this; - - } - - raycast(raycaster, intersects) { - - const geometry = this.geometry; - const matrixWorld = this.matrixWorld; - const threshold = raycaster.params.Line.threshold; - const drawRange = geometry.drawRange; - - // Checking boundingSphere distance to ray - - if (geometry.boundingSphere === null) geometry.computeBoundingSphere(); - - _sphere$1.copy(geometry.boundingSphere); - _sphere$1.applyMatrix4(matrixWorld); - _sphere$1.radius += threshold; - - if (raycaster.ray.intersectsSphere(_sphere$1) === false) return; - - // - - _inverseMatrix$1.copy(matrixWorld).invert(); - _ray$1.copy(raycaster.ray).applyMatrix4(_inverseMatrix$1); - - const localThreshold = threshold / ((this.scale.x + this.scale.y + this.scale.z) / 3); - const localThresholdSq = localThreshold * localThreshold; - - const vStart = new Vector3(); - const vEnd = new Vector3(); - const interSegment = new Vector3(); - const interRay = new Vector3(); - const step = this.isLineSegments ? 
2 : 1; - - const index = geometry.index; - const attributes = geometry.attributes; - const positionAttribute = attributes.position; - - if (index !== null) { - - const start = Math.max(0, drawRange.start); - const end = Math.min(index.count, (drawRange.start + drawRange.count)); - - for (let i = start, l = end - 1; i < l; i += step) { - - const a = index.getX(i); - const b = index.getX(i + 1); - - vStart.fromBufferAttribute(positionAttribute, a); - vEnd.fromBufferAttribute(positionAttribute, b); - - const distSq = _ray$1.distanceSqToSegment(vStart, vEnd, interRay, interSegment); - - if (distSq > localThresholdSq) continue; - - interRay.applyMatrix4(this.matrixWorld); //Move back to world space for distance calculation - - const distance = raycaster.ray.origin.distanceTo(interRay); - - if (distance < raycaster.near || distance > raycaster.far) continue; - - intersects.push({ - - distance: distance, - // What do we want? intersection point on the ray or on the segment?? - // point: raycaster.ray.at( distance ), - point: interSegment.clone().applyMatrix4(this.matrixWorld), - index: i, - face: null, - faceIndex: null, - object: this - - }); - - } - - } else { - - const start = Math.max(0, drawRange.start); - const end = Math.min(positionAttribute.count, (drawRange.start + drawRange.count)); - - for (let i = start, l = end - 1; i < l; i += step) { - - vStart.fromBufferAttribute(positionAttribute, i); - vEnd.fromBufferAttribute(positionAttribute, i + 1); - - const distSq = _ray$1.distanceSqToSegment(vStart, vEnd, interRay, interSegment); - - if (distSq > localThresholdSq) continue; - - interRay.applyMatrix4(this.matrixWorld); //Move back to world space for distance calculation - - const distance = raycaster.ray.origin.distanceTo(interRay); - - if (distance < raycaster.near || distance > raycaster.far) continue; - - intersects.push({ - - distance: distance, - // What do we want? intersection point on the ray or on the segment?? - // point: raycaster.ray.at( distance ), - point: interSegment.clone().applyMatrix4(this.matrixWorld), - index: i, - face: null, - faceIndex: null, - object: this - - }); - - } - - } - - } - - updateMorphTargets() { - - const geometry = this.geometry; - - const morphAttributes = geometry.morphAttributes; - const keys = Object.keys(morphAttributes); - - if (keys.length > 0) { - - const morphAttribute = morphAttributes[keys[0]]; - - if (morphAttribute !== undefined) { - - this.morphTargetInfluences = []; - this.morphTargetDictionary = {}; - - for (let m = 0, ml = morphAttribute.length; m < ml; m++) { - - const name = morphAttribute[m].name || String(m); - - this.morphTargetInfluences.push(0); - this.morphTargetDictionary[name] = m; - - } - - } - - } - - } - -} - -const _start = /*@__PURE__*/ new Vector3(); -const _end = /*@__PURE__*/ new Vector3(); - -class LineSegments extends Line { - - constructor(geometry, material) { - - super(geometry, material); - - this.isLineSegments = true; - - this.type = 'LineSegments'; - - } - - computeLineDistances() { - - const geometry = this.geometry; - - // we assume non-indexed geometry - - if (geometry.index === null) { - - const positionAttribute = geometry.attributes.position; - const lineDistances = []; - - for (let i = 0, l = positionAttribute.count; i < l; i += 2) { - - _start.fromBufferAttribute(positionAttribute, i); - _end.fromBufferAttribute(positionAttribute, i + 1); - - lineDistances[i] = (i === 0) ? 
0 : lineDistances[i - 1]; - lineDistances[i + 1] = lineDistances[i] + _start.distanceTo(_end); - - } - - geometry.setAttribute('lineDistance', new Float32BufferAttribute(lineDistances, 1)); - - } else { - - console.warn('THREE.LineSegments.computeLineDistances(): Computation only possible with non-indexed BufferGeometry.'); - - } - - return this; - - } - -} - -class LineLoop extends Line { - - constructor(geometry, material) { - - super(geometry, material); - - this.isLineLoop = true; - - this.type = 'LineLoop'; - - } - -} - -class PointsMaterial extends Material { - - constructor(parameters) { - - super(); - - this.isPointsMaterial = true; - - this.type = 'PointsMaterial'; - - this.color = new Color(0xffffff); - - this.map = null; - - this.alphaMap = null; - - this.size = 1; - this.sizeAttenuation = true; - - this.fog = true; - - this.setValues(parameters); - - } - - copy(source) { - - super.copy(source); - - this.color.copy(source.color); - - this.map = source.map; - - this.alphaMap = source.alphaMap; - - this.size = source.size; - this.sizeAttenuation = source.sizeAttenuation; - - this.fog = source.fog; - - return this; - - } - -} - -const _inverseMatrix = /*@__PURE__*/ new Matrix4(); -const _ray = /*@__PURE__*/ new Ray(); -const _sphere = /*@__PURE__*/ new Sphere(); -const _position$2 = /*@__PURE__*/ new Vector3(); - -class Points extends Object3D { - - constructor(geometry = new BufferGeometry(), material = new PointsMaterial()) { - - super(); - - this.isPoints = true; - - this.type = 'Points'; - - this.geometry = geometry; - this.material = material; - - this.updateMorphTargets(); - - } - - copy(source, recursive) { - - super.copy(source, recursive); - - this.material = source.material; - this.geometry = source.geometry; - - return this; - - } - - raycast(raycaster, intersects) { - - const geometry = this.geometry; - const matrixWorld = this.matrixWorld; - const threshold = raycaster.params.Points.threshold; - const drawRange = geometry.drawRange; - - // Checking boundingSphere distance to ray - - if (geometry.boundingSphere === null) geometry.computeBoundingSphere(); - - _sphere.copy(geometry.boundingSphere); - _sphere.applyMatrix4(matrixWorld); - _sphere.radius += threshold; - - if (raycaster.ray.intersectsSphere(_sphere) === false) return; - - // - - _inverseMatrix.copy(matrixWorld).invert(); - _ray.copy(raycaster.ray).applyMatrix4(_inverseMatrix); - - const localThreshold = threshold / ((this.scale.x + this.scale.y + this.scale.z) / 3); - const localThresholdSq = localThreshold * localThreshold; - - const index = geometry.index; - const attributes = geometry.attributes; - const positionAttribute = attributes.position; - - if (index !== null) { - - const start = Math.max(0, drawRange.start); - const end = Math.min(index.count, (drawRange.start + drawRange.count)); - - for (let i = start, il = end; i < il; i++) { - - const a = index.getX(i); - - _position$2.fromBufferAttribute(positionAttribute, a); - - testPoint(_position$2, a, localThresholdSq, matrixWorld, raycaster, intersects, this); - - } - - } else { - - const start = Math.max(0, drawRange.start); - const end = Math.min(positionAttribute.count, (drawRange.start + drawRange.count)); - - for (let i = start, l = end; i < l; i++) { - - _position$2.fromBufferAttribute(positionAttribute, i); - - testPoint(_position$2, i, localThresholdSq, matrixWorld, raycaster, intersects, this); - - } - - } - - } - - updateMorphTargets() { - - const geometry = this.geometry; - - const morphAttributes = geometry.morphAttributes; - const keys = 
Object.keys(morphAttributes); - - if (keys.length > 0) { - - const morphAttribute = morphAttributes[keys[0]]; - - if (morphAttribute !== undefined) { - - this.morphTargetInfluences = []; - this.morphTargetDictionary = {}; - - for (let m = 0, ml = morphAttribute.length; m < ml; m++) { - - const name = morphAttribute[m].name || String(m); - - this.morphTargetInfluences.push(0); - this.morphTargetDictionary[name] = m; - - } - - } - - } - - } - -} - -function testPoint(point, index, localThresholdSq, matrixWorld, raycaster, intersects, object) { - - const rayPointDistanceSq = _ray.distanceSqToPoint(point); - - if (rayPointDistanceSq < localThresholdSq) { - - const intersectPoint = new Vector3(); - - _ray.closestPointToPoint(point, intersectPoint); - intersectPoint.applyMatrix4(matrixWorld); - - const distance = raycaster.ray.origin.distanceTo(intersectPoint); - - if (distance < raycaster.near || distance > raycaster.far) return; - - intersects.push({ - - distance: distance, - distanceToRay: Math.sqrt(rayPointDistanceSq), - point: intersectPoint, - index: index, - face: null, - object: object - - }); - - } - -} - -class VideoTexture extends Texture { - - constructor(video, mapping, wrapS, wrapT, magFilter, minFilter, format, type, anisotropy) { - - super(video, mapping, wrapS, wrapT, magFilter, minFilter, format, type, anisotropy); - - this.isVideoTexture = true; - - this.minFilter = minFilter !== undefined ? minFilter : LinearFilter; - this.magFilter = magFilter !== undefined ? magFilter : LinearFilter; - - this.generateMipmaps = false; - - const scope = this; - - function updateVideo() { - - scope.needsUpdate = true; - video.requestVideoFrameCallback(updateVideo); - - } - - if ('requestVideoFrameCallback' in video) { - - video.requestVideoFrameCallback(updateVideo); - - } - - } - - clone() { - - return new this.constructor(this.image).copy(this); - - } - - update() { - - const video = this.image; - const hasVideoFrameCallback = 'requestVideoFrameCallback' in video; - - if (hasVideoFrameCallback === false && video.readyState >= video.HAVE_CURRENT_DATA) { - - this.needsUpdate = true; - - } - - } - -} - -class FramebufferTexture extends Texture { - - constructor(width, height, format) { - - super({ width, height }); - - this.isFramebufferTexture = true; - - this.format = format; - - this.magFilter = NearestFilter; - this.minFilter = NearestFilter; - - this.generateMipmaps = false; - - this.needsUpdate = true; - - } - -} - -class CompressedTexture extends Texture { - - constructor(mipmaps, width, height, format, type, mapping, wrapS, wrapT, magFilter, minFilter, anisotropy, encoding) { - - super(null, mapping, wrapS, wrapT, magFilter, minFilter, format, type, anisotropy, encoding); - - this.isCompressedTexture = true; - - this.image = { width: width, height: height }; - this.mipmaps = mipmaps; - - // no flipping for cube textures - // (also flipping doesn't work for compressed textures ) - - this.flipY = false; - - // can't generate mipmaps for compressed textures - // mips must be embedded in DDS files - - this.generateMipmaps = false; - - } - -} - -class CompressedArrayTexture extends CompressedTexture { - - constructor(mipmaps, width, height, depth, format, type) { - - super(mipmaps, width, height, format, type); - - this.isCompressedArrayTexture = true; - this.image.depth = depth; - this.wrapR = ClampToEdgeWrapping; - - } - -} - -class CanvasTexture extends Texture { - - constructor(canvas, mapping, wrapS, wrapT, magFilter, minFilter, format, type, anisotropy) { - - super(canvas, mapping, 
wrapS, wrapT, magFilter, minFilter, format, type, anisotropy); - - this.isCanvasTexture = true; - - this.needsUpdate = true; - - } - -} - -/** - * Extensible curve object. - * - * Some common of curve methods: - * .getPoint( t, optionalTarget ), .getTangent( t, optionalTarget ) - * .getPointAt( u, optionalTarget ), .getTangentAt( u, optionalTarget ) - * .getPoints(), .getSpacedPoints() - * .getLength() - * .updateArcLengths() - * - * This following curves inherit from THREE.Curve: - * - * -- 2D curves -- - * THREE.ArcCurve - * THREE.CubicBezierCurve - * THREE.EllipseCurve - * THREE.LineCurve - * THREE.QuadraticBezierCurve - * THREE.SplineCurve - * - * -- 3D curves -- - * THREE.CatmullRomCurve3 - * THREE.CubicBezierCurve3 - * THREE.LineCurve3 - * THREE.QuadraticBezierCurve3 - * - * A series of curves can be represented as a THREE.CurvePath. - * - **/ - -class Curve { - - constructor() { - - this.type = 'Curve'; - - this.arcLengthDivisions = 200; - - } - - // Virtual base class method to overwrite and implement in subclasses - // - t [0 .. 1] - - getPoint( /* t, optionalTarget */) { - - console.warn('THREE.Curve: .getPoint() not implemented.'); - return null; - - } - - // Get point at relative position in curve according to arc length - // - u [0 .. 1] - - getPointAt(u, optionalTarget) { - - const t = this.getUtoTmapping(u); - return this.getPoint(t, optionalTarget); - - } - - // Get sequence of points using getPoint( t ) - - getPoints(divisions = 5) { - - const points = []; - - for (let d = 0; d <= divisions; d++) { - - points.push(this.getPoint(d / divisions)); - - } - - return points; - - } - - // Get sequence of points using getPointAt( u ) - - getSpacedPoints(divisions = 5) { - - const points = []; - - for (let d = 0; d <= divisions; d++) { - - points.push(this.getPointAt(d / divisions)); - - } - - return points; - - } - - // Get total curve arc length - - getLength() { - - const lengths = this.getLengths(); - return lengths[lengths.length - 1]; - - } - - // Get list of cumulative segment lengths - - getLengths(divisions = this.arcLengthDivisions) { - - if (this.cacheArcLengths && - (this.cacheArcLengths.length === divisions + 1) && - !this.needsUpdate) { - - return this.cacheArcLengths; - - } - - this.needsUpdate = false; - - const cache = []; - let current, last = this.getPoint(0); - let sum = 0; - - cache.push(0); - - for (let p = 1; p <= divisions; p++) { - - current = this.getPoint(p / divisions); - sum += current.distanceTo(last); - cache.push(sum); - last = current; - - } - - this.cacheArcLengths = cache; - - return cache; // { sums: cache, sum: sum }; Sum is in the last element. - - } - - updateArcLengths() { - - this.needsUpdate = true; - this.getLengths(); - - } - - // Given u ( 0 .. 1 ), get a t to find p. 
This gives you points which are equidistant - - getUtoTmapping(u, distance) { - - const arcLengths = this.getLengths(); - - let i = 0; - const il = arcLengths.length; - - let targetArcLength; // The targeted u distance value to get - - if (distance) { - - targetArcLength = distance; - - } else { - - targetArcLength = u * arcLengths[il - 1]; - - } - - // binary search for the index with largest value smaller than target u distance - - let low = 0, high = il - 1, comparison; - - while (low <= high) { - - i = Math.floor(low + (high - low) / 2); // less likely to overflow, though probably not issue here, JS doesn't really have integers, all numbers are floats - - comparison = arcLengths[i] - targetArcLength; - - if (comparison < 0) { - - low = i + 1; - - } else if (comparison > 0) { - - high = i - 1; - - } else { - - high = i; - break; - - // DONE - - } - - } - - i = high; - - if (arcLengths[i] === targetArcLength) { - - return i / (il - 1); - - } - - // we could get finer grain at lengths, or use simple interpolation between two points - - const lengthBefore = arcLengths[i]; - const lengthAfter = arcLengths[i + 1]; - - const segmentLength = lengthAfter - lengthBefore; - - // determine where we are between the 'before' and 'after' points - - const segmentFraction = (targetArcLength - lengthBefore) / segmentLength; - - // add that fractional amount to t - - const t = (i + segmentFraction) / (il - 1); - - return t; - - } - - // Returns a unit vector tangent at t - // In case any sub curve does not implement its tangent derivation, - // 2 points a small delta apart will be used to find its gradient - // which seems to give a reasonable approximation - - getTangent(t, optionalTarget) { - - const delta = 0.0001; - let t1 = t - delta; - let t2 = t + delta; - - // Capping in case of danger - - if (t1 < 0) t1 = 0; - if (t2 > 1) t2 = 1; - - const pt1 = this.getPoint(t1); - const pt2 = this.getPoint(t2); - - const tangent = optionalTarget || ((pt1.isVector2) ? 
new Vector2() : new Vector3()); - - tangent.copy(pt2).sub(pt1).normalize(); - - return tangent; - - } - - getTangentAt(u, optionalTarget) { - - const t = this.getUtoTmapping(u); - return this.getTangent(t, optionalTarget); - - } - - computeFrenetFrames(segments, closed) { - - // see http://www.cs.indiana.edu/pub/techreports/TR425.pdf - - const normal = new Vector3(); - - const tangents = []; - const normals = []; - const binormals = []; - - const vec = new Vector3(); - const mat = new Matrix4(); - - // compute the tangent vectors for each segment on the curve - - for (let i = 0; i <= segments; i++) { - - const u = i / segments; - - tangents[i] = this.getTangentAt(u, new Vector3()); - - } - - // select an initial normal vector perpendicular to the first tangent vector, - // and in the direction of the minimum tangent xyz component - - normals[0] = new Vector3(); - binormals[0] = new Vector3(); - let min = Number.MAX_VALUE; - const tx = Math.abs(tangents[0].x); - const ty = Math.abs(tangents[0].y); - const tz = Math.abs(tangents[0].z); - - if (tx <= min) { - - min = tx; - normal.set(1, 0, 0); - - } - - if (ty <= min) { - - min = ty; - normal.set(0, 1, 0); - - } - - if (tz <= min) { - - normal.set(0, 0, 1); - - } - - vec.crossVectors(tangents[0], normal).normalize(); - - normals[0].crossVectors(tangents[0], vec); - binormals[0].crossVectors(tangents[0], normals[0]); - - - // compute the slowly-varying normal and binormal vectors for each segment on the curve - - for (let i = 1; i <= segments; i++) { - - normals[i] = normals[i - 1].clone(); - - binormals[i] = binormals[i - 1].clone(); - - vec.crossVectors(tangents[i - 1], tangents[i]); - - if (vec.length() > Number.EPSILON) { - - vec.normalize(); - - const theta = Math.acos(clamp(tangents[i - 1].dot(tangents[i]), - 1, 1)); // clamp for floating pt errors - - normals[i].applyMatrix4(mat.makeRotationAxis(vec, theta)); - - } - - binormals[i].crossVectors(tangents[i], normals[i]); - - } - - // if the curve is closed, postprocess the vectors so the first and last normal vectors are the same - - if (closed === true) { - - let theta = Math.acos(clamp(normals[0].dot(normals[segments]), - 1, 1)); - theta /= segments; - - if (tangents[0].dot(vec.crossVectors(normals[0], normals[segments])) > 0) { - - theta = - theta; - - } - - for (let i = 1; i <= segments; i++) { - - // twist a little... 
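// Descriptive note: the loop below spreads the residual angle between normals[0]
// and normals[segments] evenly along the curve, rotating each frame about its own
// tangent by theta * i, so the moving frame at t = 1 lines up with the frame at
// t = 0 and a closed sweep shows no seam.
// Illustrative usage sketch (here `curve` stands for any Curve subclass instance):
// const frames = curve.computeFrenetFrames(64, true);
// frames.tangents[i], frames.normals[i] and frames.binormals[i] then orient the
// cross-section at sample i when sweeping geometry along the curve.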
- normals[i].applyMatrix4(mat.makeRotationAxis(tangents[i], theta * i)); - binormals[i].crossVectors(tangents[i], normals[i]); - - } - - } - - return { - tangents: tangents, - normals: normals, - binormals: binormals - }; - - } - - clone() { - - return new this.constructor().copy(this); - - } - - copy(source) { - - this.arcLengthDivisions = source.arcLengthDivisions; - - return this; - - } - - toJSON() { - - const data = { - metadata: { - version: 4.5, - type: 'Curve', - generator: 'Curve.toJSON' - } - }; - - data.arcLengthDivisions = this.arcLengthDivisions; - data.type = this.type; - - return data; - - } - - fromJSON(json) { - - this.arcLengthDivisions = json.arcLengthDivisions; - - return this; - - } - -} - -class EllipseCurve extends Curve { - - constructor(aX = 0, aY = 0, xRadius = 1, yRadius = 1, aStartAngle = 0, aEndAngle = Math.PI * 2, aClockwise = false, aRotation = 0) { - - super(); - - this.isEllipseCurve = true; - - this.type = 'EllipseCurve'; - - this.aX = aX; - this.aY = aY; - - this.xRadius = xRadius; - this.yRadius = yRadius; - - this.aStartAngle = aStartAngle; - this.aEndAngle = aEndAngle; - - this.aClockwise = aClockwise; - - this.aRotation = aRotation; - - } - - getPoint(t, optionalTarget) { - - const point = optionalTarget || new Vector2(); - - const twoPi = Math.PI * 2; - let deltaAngle = this.aEndAngle - this.aStartAngle; - const samePoints = Math.abs(deltaAngle) < Number.EPSILON; - - // ensures that deltaAngle is 0 .. 2 PI - while (deltaAngle < 0) deltaAngle += twoPi; - while (deltaAngle > twoPi) deltaAngle -= twoPi; - - if (deltaAngle < Number.EPSILON) { - - if (samePoints) { - - deltaAngle = 0; - - } else { - - deltaAngle = twoPi; - - } - - } - - if (this.aClockwise === true && !samePoints) { - - if (deltaAngle === twoPi) { - - deltaAngle = - twoPi; - - } else { - - deltaAngle = deltaAngle - twoPi; - - } - - } - - const angle = this.aStartAngle + t * deltaAngle; - let x = this.aX + this.xRadius * Math.cos(angle); - let y = this.aY + this.yRadius * Math.sin(angle); - - if (this.aRotation !== 0) { - - const cos = Math.cos(this.aRotation); - const sin = Math.sin(this.aRotation); - - const tx = x - this.aX; - const ty = y - this.aY; - - // Rotate the point about the center of the ellipse. 
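// This is the standard 2D rotation of the offset (tx, ty) about the ellipse centre (aX, aY):
// x' = tx * cos(aRotation) - ty * sin(aRotation) + aX
// y' = tx * sin(aRotation) + ty * cos(aRotation) + aY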
- x = tx * cos - ty * sin + this.aX; - y = tx * sin + ty * cos + this.aY; - - } - - return point.set(x, y); - - } - - copy(source) { - - super.copy(source); - - this.aX = source.aX; - this.aY = source.aY; - - this.xRadius = source.xRadius; - this.yRadius = source.yRadius; - - this.aStartAngle = source.aStartAngle; - this.aEndAngle = source.aEndAngle; - - this.aClockwise = source.aClockwise; - - this.aRotation = source.aRotation; - - return this; - - } - - toJSON() { - - const data = super.toJSON(); - - data.aX = this.aX; - data.aY = this.aY; - - data.xRadius = this.xRadius; - data.yRadius = this.yRadius; - - data.aStartAngle = this.aStartAngle; - data.aEndAngle = this.aEndAngle; - - data.aClockwise = this.aClockwise; - - data.aRotation = this.aRotation; - - return data; - - } - - fromJSON(json) { - - super.fromJSON(json); - - this.aX = json.aX; - this.aY = json.aY; - - this.xRadius = json.xRadius; - this.yRadius = json.yRadius; - - this.aStartAngle = json.aStartAngle; - this.aEndAngle = json.aEndAngle; - - this.aClockwise = json.aClockwise; - - this.aRotation = json.aRotation; - - return this; - - } - -} - -class ArcCurve extends EllipseCurve { - - constructor(aX, aY, aRadius, aStartAngle, aEndAngle, aClockwise) { - - super(aX, aY, aRadius, aRadius, aStartAngle, aEndAngle, aClockwise); - - this.isArcCurve = true; - - this.type = 'ArcCurve'; - - } - -} - -/** - * Centripetal CatmullRom Curve - which is useful for avoiding - * cusps and self-intersections in non-uniform catmull rom curves. - * http://www.cemyuksel.com/research/catmullrom_param/catmullrom.pdf - * - * curve.type accepts centripetal(default), chordal and catmullrom - * curve.tension is used for catmullrom which defaults to 0.5 - */ - - -/* -Based on an optimized c++ solution in - - http://stackoverflow.com/questions/9489736/catmull-rom-curve-with-no-cusps-and-no-self-intersections/ - - http://ideone.com/NoEbVM - -This CubicPoly class could be used for reusing some variables and calculations, -but for three.js curve use, it could be possible inlined and flatten into a single function call -which can be placed in CurveUtils. -*/ - -function CubicPoly() { - - let c0 = 0, c1 = 0, c2 = 0, c3 = 0; - - /* - * Compute coefficients for a cubic polynomial - * p(s) = c0 + c1*s + c2*s^2 + c3*s^3 - * such that - * p(0) = x0, p(1) = x1 - * and - * p'(0) = t0, p'(1) = t1. 
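 *
 * Solving those four constraints yields the coefficients computed in init():
 * c0 = x0, c1 = t0, c2 = 3 * (x1 - x0) - 2 * t0 - t1, c3 = 2 * (x0 - x1) + t0 + t1.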
- */ - function init(x0, x1, t0, t1) { - - c0 = x0; - c1 = t0; - c2 = - 3 * x0 + 3 * x1 - 2 * t0 - t1; - c3 = 2 * x0 - 2 * x1 + t0 + t1; - - } - - return { - - initCatmullRom: function (x0, x1, x2, x3, tension) { - - init(x1, x2, tension * (x2 - x0), tension * (x3 - x1)); - - }, - - initNonuniformCatmullRom: function (x0, x1, x2, x3, dt0, dt1, dt2) { - - // compute tangents when parameterized in [t1,t2] - let t1 = (x1 - x0) / dt0 - (x2 - x0) / (dt0 + dt1) + (x2 - x1) / dt1; - let t2 = (x2 - x1) / dt1 - (x3 - x1) / (dt1 + dt2) + (x3 - x2) / dt2; - - // rescale tangents for parametrization in [0,1] - t1 *= dt1; - t2 *= dt1; - - init(x1, x2, t1, t2); - - }, - - calc: function (t) { - - const t2 = t * t; - const t3 = t2 * t; - return c0 + c1 * t + c2 * t2 + c3 * t3; - - } - - }; - -} - -// - -const tmp = /*@__PURE__*/ new Vector3(); -const px = /*@__PURE__*/ new CubicPoly(); -const py = /*@__PURE__*/ new CubicPoly(); -const pz = /*@__PURE__*/ new CubicPoly(); - -class CatmullRomCurve3 extends Curve { - - constructor(points = [], closed = false, curveType = 'centripetal', tension = 0.5) { - - super(); - - this.isCatmullRomCurve3 = true; - - this.type = 'CatmullRomCurve3'; - - this.points = points; - this.closed = closed; - this.curveType = curveType; - this.tension = tension; - - } - - getPoint(t, optionalTarget = new Vector3()) { - - const point = optionalTarget; - - const points = this.points; - const l = points.length; - - const p = (l - (this.closed ? 0 : 1)) * t; - let intPoint = Math.floor(p); - let weight = p - intPoint; - - if (this.closed) { - - intPoint += intPoint > 0 ? 0 : (Math.floor(Math.abs(intPoint) / l) + 1) * l; - - } else if (weight === 0 && intPoint === l - 1) { - - intPoint = l - 2; - weight = 1; - - } - - let p0, p3; // 4 points (p1 & p2 defined below) - - if (this.closed || intPoint > 0) { - - p0 = points[(intPoint - 1) % l]; - - } else { - - // extrapolate first point - tmp.subVectors(points[0], points[1]).add(points[0]); - p0 = tmp; - - } - - const p1 = points[intPoint % l]; - const p2 = points[(intPoint + 1) % l]; - - if (this.closed || intPoint + 2 < l) { - - p3 = points[(intPoint + 2) % l]; - - } else { - - // extrapolate last point - tmp.subVectors(points[l - 1], points[l - 2]).add(points[l - 1]); - p3 = tmp; - - } - - if (this.curveType === 'centripetal' || this.curveType === 'chordal') { - - // init Centripetal / Chordal Catmull-Rom - const pow = this.curveType === 'chordal' ? 
0.5 : 0.25; - let dt0 = Math.pow(p0.distanceToSquared(p1), pow); - let dt1 = Math.pow(p1.distanceToSquared(p2), pow); - let dt2 = Math.pow(p2.distanceToSquared(p3), pow); - - // safety check for repeated points - if (dt1 < 1e-4) dt1 = 1.0; - if (dt0 < 1e-4) dt0 = dt1; - if (dt2 < 1e-4) dt2 = dt1; - - px.initNonuniformCatmullRom(p0.x, p1.x, p2.x, p3.x, dt0, dt1, dt2); - py.initNonuniformCatmullRom(p0.y, p1.y, p2.y, p3.y, dt0, dt1, dt2); - pz.initNonuniformCatmullRom(p0.z, p1.z, p2.z, p3.z, dt0, dt1, dt2); - - } else if (this.curveType === 'catmullrom') { - - px.initCatmullRom(p0.x, p1.x, p2.x, p3.x, this.tension); - py.initCatmullRom(p0.y, p1.y, p2.y, p3.y, this.tension); - pz.initCatmullRom(p0.z, p1.z, p2.z, p3.z, this.tension); - - } - - point.set( - px.calc(weight), - py.calc(weight), - pz.calc(weight) - ); - - return point; - - } - - copy(source) { - - super.copy(source); - - this.points = []; - - for (let i = 0, l = source.points.length; i < l; i++) { - - const point = source.points[i]; - - this.points.push(point.clone()); - - } - - this.closed = source.closed; - this.curveType = source.curveType; - this.tension = source.tension; - - return this; - - } - - toJSON() { - - const data = super.toJSON(); - - data.points = []; - - for (let i = 0, l = this.points.length; i < l; i++) { - - const point = this.points[i]; - data.points.push(point.toArray()); - - } - - data.closed = this.closed; - data.curveType = this.curveType; - data.tension = this.tension; - - return data; - - } - - fromJSON(json) { - - super.fromJSON(json); - - this.points = []; - - for (let i = 0, l = json.points.length; i < l; i++) { - - const point = json.points[i]; - this.points.push(new Vector3().fromArray(point)); - - } - - this.closed = json.closed; - this.curveType = json.curveType; - this.tension = json.tension; - - return this; - - } - -} - -/** - * Bezier Curves formulas obtained from - * https://en.wikipedia.org/wiki/B%C3%A9zier_curve - */ - -function CatmullRom(t, p0, p1, p2, p3) { - - const v0 = (p2 - p0) * 0.5; - const v1 = (p3 - p1) * 0.5; - const t2 = t * t; - const t3 = t * t2; - return (2 * p1 - 2 * p2 + v0 + v1) * t3 + (- 3 * p1 + 3 * p2 - 2 * v0 - v1) * t2 + v0 * t + p1; - -} - -// - -function QuadraticBezierP0(t, p) { - - const k = 1 - t; - return k * k * p; - -} - -function QuadraticBezierP1(t, p) { - - return 2 * (1 - t) * t * p; - -} - -function QuadraticBezierP2(t, p) { - - return t * t * p; - -} - -function QuadraticBezier(t, p0, p1, p2) { - - return QuadraticBezierP0(t, p0) + QuadraticBezierP1(t, p1) + - QuadraticBezierP2(t, p2); - -} - -// - -function CubicBezierP0(t, p) { - - const k = 1 - t; - return k * k * k * p; - -} - -function CubicBezierP1(t, p) { - - const k = 1 - t; - return 3 * k * k * t * p; - -} - -function CubicBezierP2(t, p) { - - return 3 * (1 - t) * t * t * p; - -} - -function CubicBezierP3(t, p) { - - return t * t * t * p; - -} - -function CubicBezier(t, p0, p1, p2, p3) { - - return CubicBezierP0(t, p0) + CubicBezierP1(t, p1) + CubicBezierP2(t, p2) + - CubicBezierP3(t, p3); - -} - -class CubicBezierCurve extends Curve { - - constructor(v0 = new Vector2(), v1 = new Vector2(), v2 = new Vector2(), v3 = new Vector2()) { - - super(); - - this.isCubicBezierCurve = true; - - this.type = 'CubicBezierCurve'; - - this.v0 = v0; - this.v1 = v1; - this.v2 = v2; - this.v3 = v3; - - } - - getPoint(t, optionalTarget = new Vector2()) { - - const point = optionalTarget; - - const v0 = this.v0, v1 = this.v1, v2 = this.v2, v3 = this.v3; - - point.set( - CubicBezier(t, v0.x, v1.x, v2.x, v3.x), - 
CubicBezier(t, v0.y, v1.y, v2.y, v3.y) - ); - - return point; - - } - - copy(source) { - - super.copy(source); - - this.v0.copy(source.v0); - this.v1.copy(source.v1); - this.v2.copy(source.v2); - this.v3.copy(source.v3); - - return this; - - } - - toJSON() { - - const data = super.toJSON(); - - data.v0 = this.v0.toArray(); - data.v1 = this.v1.toArray(); - data.v2 = this.v2.toArray(); - data.v3 = this.v3.toArray(); - - return data; - - } - - fromJSON(json) { - - super.fromJSON(json); - - this.v0.fromArray(json.v0); - this.v1.fromArray(json.v1); - this.v2.fromArray(json.v2); - this.v3.fromArray(json.v3); - - return this; - - } - -} - -class CubicBezierCurve3 extends Curve { - - constructor(v0 = new Vector3(), v1 = new Vector3(), v2 = new Vector3(), v3 = new Vector3()) { - - super(); - - this.isCubicBezierCurve3 = true; - - this.type = 'CubicBezierCurve3'; - - this.v0 = v0; - this.v1 = v1; - this.v2 = v2; - this.v3 = v3; - - } - - getPoint(t, optionalTarget = new Vector3()) { - - const point = optionalTarget; - - const v0 = this.v0, v1 = this.v1, v2 = this.v2, v3 = this.v3; - - point.set( - CubicBezier(t, v0.x, v1.x, v2.x, v3.x), - CubicBezier(t, v0.y, v1.y, v2.y, v3.y), - CubicBezier(t, v0.z, v1.z, v2.z, v3.z) - ); - - return point; - - } - - copy(source) { - - super.copy(source); - - this.v0.copy(source.v0); - this.v1.copy(source.v1); - this.v2.copy(source.v2); - this.v3.copy(source.v3); - - return this; - - } - - toJSON() { - - const data = super.toJSON(); - - data.v0 = this.v0.toArray(); - data.v1 = this.v1.toArray(); - data.v2 = this.v2.toArray(); - data.v3 = this.v3.toArray(); - - return data; - - } - - fromJSON(json) { - - super.fromJSON(json); - - this.v0.fromArray(json.v0); - this.v1.fromArray(json.v1); - this.v2.fromArray(json.v2); - this.v3.fromArray(json.v3); - - return this; - - } - -} - -class LineCurve extends Curve { - - constructor(v1 = new Vector2(), v2 = new Vector2()) { - - super(); - - this.isLineCurve = true; - - this.type = 'LineCurve'; - - this.v1 = v1; - this.v2 = v2; - - } - - getPoint(t, optionalTarget = new Vector2()) { - - const point = optionalTarget; - - if (t === 1) { - - point.copy(this.v2); - - } else { - - point.copy(this.v2).sub(this.v1); - point.multiplyScalar(t).add(this.v1); - - } - - return point; - - } - - // Line curve is linear, so we can overwrite default getPointAt - getPointAt(u, optionalTarget) { - - return this.getPoint(u, optionalTarget); - - } - - getTangent(t, optionalTarget) { - - const tangent = optionalTarget || new Vector2(); - - tangent.copy(this.v2).sub(this.v1).normalize(); - - return tangent; - - } - - copy(source) { - - super.copy(source); - - this.v1.copy(source.v1); - this.v2.copy(source.v2); - - return this; - - } - - toJSON() { - - const data = super.toJSON(); - - data.v1 = this.v1.toArray(); - data.v2 = this.v2.toArray(); - - return data; - - } - - fromJSON(json) { - - super.fromJSON(json); - - this.v1.fromArray(json.v1); - this.v2.fromArray(json.v2); - - return this; - - } - -} - -class LineCurve3 extends Curve { - - constructor(v1 = new Vector3(), v2 = new Vector3()) { - - super(); - - this.isLineCurve3 = true; - - this.type = 'LineCurve3'; - - this.v1 = v1; - this.v2 = v2; - - } - getPoint(t, optionalTarget = new Vector3()) { - - const point = optionalTarget; - - if (t === 1) { - - point.copy(this.v2); - - } else { - - point.copy(this.v2).sub(this.v1); - point.multiplyScalar(t).add(this.v1); - - } - - return point; - - } - // Line curve is linear, so we can overwrite default getPointAt - getPointAt(u, optionalTarget) { - - 
return this.getPoint(u, optionalTarget); - - } - copy(source) { - - super.copy(source); - - this.v1.copy(source.v1); - this.v2.copy(source.v2); - - return this; - - } - toJSON() { - - const data = super.toJSON(); - - data.v1 = this.v1.toArray(); - data.v2 = this.v2.toArray(); - - return data; - - } - fromJSON(json) { - - super.fromJSON(json); - - this.v1.fromArray(json.v1); - this.v2.fromArray(json.v2); - - return this; - - } - -} - -class QuadraticBezierCurve extends Curve { - - constructor(v0 = new Vector2(), v1 = new Vector2(), v2 = new Vector2()) { - - super(); - - this.isQuadraticBezierCurve = true; - - this.type = 'QuadraticBezierCurve'; - - this.v0 = v0; - this.v1 = v1; - this.v2 = v2; - - } - - getPoint(t, optionalTarget = new Vector2()) { - - const point = optionalTarget; - - const v0 = this.v0, v1 = this.v1, v2 = this.v2; - - point.set( - QuadraticBezier(t, v0.x, v1.x, v2.x), - QuadraticBezier(t, v0.y, v1.y, v2.y) - ); - - return point; - - } - - copy(source) { - - super.copy(source); - - this.v0.copy(source.v0); - this.v1.copy(source.v1); - this.v2.copy(source.v2); - - return this; - - } - - toJSON() { - - const data = super.toJSON(); - - data.v0 = this.v0.toArray(); - data.v1 = this.v1.toArray(); - data.v2 = this.v2.toArray(); - - return data; - - } - - fromJSON(json) { - - super.fromJSON(json); - - this.v0.fromArray(json.v0); - this.v1.fromArray(json.v1); - this.v2.fromArray(json.v2); - - return this; - - } - -} - -class QuadraticBezierCurve3 extends Curve { - - constructor(v0 = new Vector3(), v1 = new Vector3(), v2 = new Vector3()) { - - super(); - - this.isQuadraticBezierCurve3 = true; - - this.type = 'QuadraticBezierCurve3'; - - this.v0 = v0; - this.v1 = v1; - this.v2 = v2; - - } - - getPoint(t, optionalTarget = new Vector3()) { - - const point = optionalTarget; - - const v0 = this.v0, v1 = this.v1, v2 = this.v2; - - point.set( - QuadraticBezier(t, v0.x, v1.x, v2.x), - QuadraticBezier(t, v0.y, v1.y, v2.y), - QuadraticBezier(t, v0.z, v1.z, v2.z) - ); - - return point; - - } - - copy(source) { - - super.copy(source); - - this.v0.copy(source.v0); - this.v1.copy(source.v1); - this.v2.copy(source.v2); - - return this; - - } - - toJSON() { - - const data = super.toJSON(); - - data.v0 = this.v0.toArray(); - data.v1 = this.v1.toArray(); - data.v2 = this.v2.toArray(); - - return data; - - } - - fromJSON(json) { - - super.fromJSON(json); - - this.v0.fromArray(json.v0); - this.v1.fromArray(json.v1); - this.v2.fromArray(json.v2); - - return this; - - } - -} - -class SplineCurve extends Curve { - - constructor(points = []) { - - super(); - - this.isSplineCurve = true; - - this.type = 'SplineCurve'; - - this.points = points; - - } - - getPoint(t, optionalTarget = new Vector2()) { - - const point = optionalTarget; - - const points = this.points; - const p = (points.length - 1) * t; - - const intPoint = Math.floor(p); - const weight = p - intPoint; - - const p0 = points[intPoint === 0 ? intPoint : intPoint - 1]; - const p1 = points[intPoint]; - const p2 = points[intPoint > points.length - 2 ? points.length - 1 : intPoint + 1]; - const p3 = points[intPoint > points.length - 3 ? 
points.length - 1 : intPoint + 2]; - - point.set( - CatmullRom(weight, p0.x, p1.x, p2.x, p3.x), - CatmullRom(weight, p0.y, p1.y, p2.y, p3.y) - ); - - return point; - - } - - copy(source) { - - super.copy(source); - - this.points = []; - - for (let i = 0, l = source.points.length; i < l; i++) { - - const point = source.points[i]; - - this.points.push(point.clone()); - - } - - return this; - - } - - toJSON() { - - const data = super.toJSON(); - - data.points = []; - - for (let i = 0, l = this.points.length; i < l; i++) { - - const point = this.points[i]; - data.points.push(point.toArray()); - - } - - return data; - - } - - fromJSON(json) { - - super.fromJSON(json); - - this.points = []; - - for (let i = 0, l = json.points.length; i < l; i++) { - - const point = json.points[i]; - this.points.push(new Vector2().fromArray(point)); - - } - - return this; - - } - -} - -var Curves = /*#__PURE__*/Object.freeze({ - __proto__: null, - ArcCurve: ArcCurve, - CatmullRomCurve3: CatmullRomCurve3, - CubicBezierCurve: CubicBezierCurve, - CubicBezierCurve3: CubicBezierCurve3, - EllipseCurve: EllipseCurve, - LineCurve: LineCurve, - LineCurve3: LineCurve3, - QuadraticBezierCurve: QuadraticBezierCurve, - QuadraticBezierCurve3: QuadraticBezierCurve3, - SplineCurve: SplineCurve -}); - -/************************************************************** - * Curved Path - a curve path is simply a array of connected - * curves, but retains the api of a curve - **************************************************************/ - -class CurvePath extends Curve { - - constructor() { - - super(); - - this.type = 'CurvePath'; - - this.curves = []; - this.autoClose = false; // Automatically closes the path - - } - - add(curve) { - - this.curves.push(curve); - - } - - closePath() { - - // Add a line curve if start and end of lines are not connected - const startPoint = this.curves[0].getPoint(0); - const endPoint = this.curves[this.curves.length - 1].getPoint(1); - - if (!startPoint.equals(endPoint)) { - - this.curves.push(new LineCurve(endPoint, startPoint)); - - } - - } - - // To get accurate point with reference to - // entire path distance at time t, - // following has to be done: - - // 1. Length of each sub path have to be known - // 2. Locate and identify type of curve - // 3. Get t for the curve - // 4. Return curve.getPointAt(t') - - getPoint(t, optionalTarget) { - - const d = t * this.getLength(); - const curveLengths = this.getCurveLengths(); - let i = 0; - - // To think about boundaries points. - - while (i < curveLengths.length) { - - if (curveLengths[i] >= d) { - - const diff = curveLengths[i] - d; - const curve = this.curves[i]; - - const segmentLength = curve.getLength(); - const u = segmentLength === 0 ? 
0 : 1 - diff / segmentLength; - - return curve.getPointAt(u, optionalTarget); - - } - - i++; - - } - - return null; - - // loop where sum != 0, sum > d , sum+1 1 && !points[points.length - 1].equals(points[0])) { - - points.push(points[0]); - - } - - return points; - - } - - copy(source) { - - super.copy(source); - - this.curves = []; - - for (let i = 0, l = source.curves.length; i < l; i++) { - - const curve = source.curves[i]; - - this.curves.push(curve.clone()); - - } - - this.autoClose = source.autoClose; - - return this; - - } - - toJSON() { - - const data = super.toJSON(); - - data.autoClose = this.autoClose; - data.curves = []; - - for (let i = 0, l = this.curves.length; i < l; i++) { - - const curve = this.curves[i]; - data.curves.push(curve.toJSON()); - - } - - return data; - - } - - fromJSON(json) { - - super.fromJSON(json); - - this.autoClose = json.autoClose; - this.curves = []; - - for (let i = 0, l = json.curves.length; i < l; i++) { - - const curve = json.curves[i]; - this.curves.push(new Curves[curve.type]().fromJSON(curve)); - - } - - return this; - - } - -} - -class Path extends CurvePath { - - constructor(points) { - - super(); - - this.type = 'Path'; - - this.currentPoint = new Vector2(); - - if (points) { - - this.setFromPoints(points); - - } - - } - - setFromPoints(points) { - - this.moveTo(points[0].x, points[0].y); - - for (let i = 1, l = points.length; i < l; i++) { - - this.lineTo(points[i].x, points[i].y); - - } - - return this; - - } - - moveTo(x, y) { - - this.currentPoint.set(x, y); // TODO consider referencing vectors instead of copying? - - return this; - - } - - lineTo(x, y) { - - const curve = new LineCurve(this.currentPoint.clone(), new Vector2(x, y)); - this.curves.push(curve); - - this.currentPoint.set(x, y); - - return this; - - } - - quadraticCurveTo(aCPx, aCPy, aX, aY) { - - const curve = new QuadraticBezierCurve( - this.currentPoint.clone(), - new Vector2(aCPx, aCPy), - new Vector2(aX, aY) - ); - - this.curves.push(curve); - - this.currentPoint.set(aX, aY); - - return this; - - } - - bezierCurveTo(aCP1x, aCP1y, aCP2x, aCP2y, aX, aY) { - - const curve = new CubicBezierCurve( - this.currentPoint.clone(), - new Vector2(aCP1x, aCP1y), - new Vector2(aCP2x, aCP2y), - new Vector2(aX, aY) - ); - - this.curves.push(curve); - - this.currentPoint.set(aX, aY); - - return this; - - } - - splineThru(pts /*Array of Vector*/) { - - const npts = [this.currentPoint.clone()].concat(pts); - - const curve = new SplineCurve(npts); - this.curves.push(curve); - - this.currentPoint.copy(pts[pts.length - 1]); - - return this; - - } - - arc(aX, aY, aRadius, aStartAngle, aEndAngle, aClockwise) { - - const x0 = this.currentPoint.x; - const y0 = this.currentPoint.y; - - this.absarc(aX + x0, aY + y0, aRadius, - aStartAngle, aEndAngle, aClockwise); - - return this; - - } - - absarc(aX, aY, aRadius, aStartAngle, aEndAngle, aClockwise) { - - this.absellipse(aX, aY, aRadius, aRadius, aStartAngle, aEndAngle, aClockwise); - - return this; - - } - - ellipse(aX, aY, xRadius, yRadius, aStartAngle, aEndAngle, aClockwise, aRotation) { - - const x0 = this.currentPoint.x; - const y0 = this.currentPoint.y; - - this.absellipse(aX + x0, aY + y0, xRadius, yRadius, aStartAngle, aEndAngle, aClockwise, aRotation); - - return this; - - } - - absellipse(aX, aY, xRadius, yRadius, aStartAngle, aEndAngle, aClockwise, aRotation) { - - const curve = new EllipseCurve(aX, aY, xRadius, yRadius, aStartAngle, aEndAngle, aClockwise, aRotation); - - if (this.curves.length > 0) { - - // if a previous curve is 
present, attempt to join - const firstPoint = curve.getPoint(0); - - if (!firstPoint.equals(this.currentPoint)) { - - this.lineTo(firstPoint.x, firstPoint.y); - - } - - } - - this.curves.push(curve); - - const lastPoint = curve.getPoint(1); - this.currentPoint.copy(lastPoint); - - return this; - - } - - copy(source) { - - super.copy(source); - - this.currentPoint.copy(source.currentPoint); - - return this; - - } - - toJSON() { - - const data = super.toJSON(); - - data.currentPoint = this.currentPoint.toArray(); - - return data; - - } - - fromJSON(json) { - - super.fromJSON(json); - - this.currentPoint.fromArray(json.currentPoint); - - return this; - - } - -} - -class LatheGeometry extends BufferGeometry { - - constructor(points = [new Vector2(0, - 0.5), new Vector2(0.5, 0), new Vector2(0, 0.5)], segments = 12, phiStart = 0, phiLength = Math.PI * 2) { - - super(); - - this.type = 'LatheGeometry'; - - this.parameters = { - points: points, - segments: segments, - phiStart: phiStart, - phiLength: phiLength - }; - - segments = Math.floor(segments); - - // clamp phiLength so it's in range of [ 0, 2PI ] - - phiLength = clamp(phiLength, 0, Math.PI * 2); - - // buffers - - const indices = []; - const vertices = []; - const uvs = []; - const initNormals = []; - const normals = []; - - // helper variables - - const inverseSegments = 1.0 / segments; - const vertex = new Vector3(); - const uv = new Vector2(); - const normal = new Vector3(); - const curNormal = new Vector3(); - const prevNormal = new Vector3(); - let dx = 0; - let dy = 0; - - // pre-compute normals for initial "meridian" - - for (let j = 0; j <= (points.length - 1); j++) { - - switch (j) { - - case 0: // special handling for 1st vertex on path - - dx = points[j + 1].x - points[j].x; - dy = points[j + 1].y - points[j].y; - - normal.x = dy * 1.0; - normal.y = - dx; - normal.z = dy * 0.0; - - prevNormal.copy(normal); - - normal.normalize(); - - initNormals.push(normal.x, normal.y, normal.z); - - break; - - case (points.length - 1): // special handling for last Vertex on path - - initNormals.push(prevNormal.x, prevNormal.y, prevNormal.z); - - break; - - default: // default handling for all vertices in between - - dx = points[j + 1].x - points[j].x; - dy = points[j + 1].y - points[j].y; - - normal.x = dy * 1.0; - normal.y = - dx; - normal.z = dy * 0.0; - - curNormal.copy(normal); - - normal.x += prevNormal.x; - normal.y += prevNormal.y; - normal.z += prevNormal.z; - - normal.normalize(); - - initNormals.push(normal.x, normal.y, normal.z); - - prevNormal.copy(curNormal); - - } - - } - - // generate vertices, uvs and normals - - for (let i = 0; i <= segments; i++) { - - const phi = phiStart + i * inverseSegments * phiLength; - - const sin = Math.sin(phi); - const cos = Math.cos(phi); - - for (let j = 0; j <= (points.length - 1); j++) { - - // vertex - - vertex.x = points[j].x * sin; - vertex.y = points[j].y; - vertex.z = points[j].x * cos; - - vertices.push(vertex.x, vertex.y, vertex.z); - - // uv - - uv.x = i / segments; - uv.y = j / (points.length - 1); - - uvs.push(uv.x, uv.y); - - // normal - - const x = initNormals[3 * j + 0] * sin; - const y = initNormals[3 * j + 1]; - const z = initNormals[3 * j + 0] * cos; - - normals.push(x, y, z); - - } - - } - - // indices - - for (let i = 0; i < segments; i++) { - - for (let j = 0; j < (points.length - 1); j++) { - - const base = j + i * points.length; - - const a = base; - const b = base + points.length; - const c = base + points.length + 1; - const d = base + 1; - - // faces - - indices.push(a, 
b, d); - indices.push(c, d, b); - - } - - } - - // build geometry - - this.setIndex(indices); - this.setAttribute('position', new Float32BufferAttribute(vertices, 3)); - this.setAttribute('uv', new Float32BufferAttribute(uvs, 2)); - this.setAttribute('normal', new Float32BufferAttribute(normals, 3)); - - } - - static fromJSON(data) { - - return new LatheGeometry(data.points, data.segments, data.phiStart, data.phiLength); - - } - -} - -class CapsuleGeometry extends LatheGeometry { - - constructor(radius = 1, length = 1, capSegments = 4, radialSegments = 8) { - - const path = new Path(); - path.absarc(0, - length / 2, radius, Math.PI * 1.5, 0); - path.absarc(0, length / 2, radius, 0, Math.PI * 0.5); - - super(path.getPoints(capSegments), radialSegments); - - this.type = 'CapsuleGeometry'; - - this.parameters = { - radius: radius, - height: length, - capSegments: capSegments, - radialSegments: radialSegments, - }; - - } - - static fromJSON(data) { - - return new CapsuleGeometry(data.radius, data.length, data.capSegments, data.radialSegments); - - } - -} - -class CircleGeometry extends BufferGeometry { - - constructor(radius = 1, segments = 32, thetaStart = 0, thetaLength = Math.PI * 2) { - - super(); - - this.type = 'CircleGeometry'; - - this.parameters = { - radius: radius, - segments: segments, - thetaStart: thetaStart, - thetaLength: thetaLength - }; - - segments = Math.max(3, segments); - - // buffers - - const indices = []; - const vertices = []; - const normals = []; - const uvs = []; - - // helper variables - - const vertex = new Vector3(); - const uv = new Vector2(); - - // center point - - vertices.push(0, 0, 0); - normals.push(0, 0, 1); - uvs.push(0.5, 0.5); - - for (let s = 0, i = 3; s <= segments; s++, i += 3) { - - const segment = thetaStart + s / segments * thetaLength; - - // vertex - - vertex.x = radius * Math.cos(segment); - vertex.y = radius * Math.sin(segment); - - vertices.push(vertex.x, vertex.y, vertex.z); - - // normal - - normals.push(0, 0, 1); - - // uvs - - uv.x = (vertices[i] / radius + 1) / 2; - uv.y = (vertices[i + 1] / radius + 1) / 2; - - uvs.push(uv.x, uv.y); - - } - - // indices - - for (let i = 1; i <= segments; i++) { - - indices.push(i, i + 1, 0); - - } - - // build geometry - - this.setIndex(indices); - this.setAttribute('position', new Float32BufferAttribute(vertices, 3)); - this.setAttribute('normal', new Float32BufferAttribute(normals, 3)); - this.setAttribute('uv', new Float32BufferAttribute(uvs, 2)); - - } - - static fromJSON(data) { - - return new CircleGeometry(data.radius, data.segments, data.thetaStart, data.thetaLength); - - } - -} - -class CylinderGeometry extends BufferGeometry { - - constructor(radiusTop = 1, radiusBottom = 1, height = 1, radialSegments = 32, heightSegments = 1, openEnded = false, thetaStart = 0, thetaLength = Math.PI * 2) { - - super(); - - this.type = 'CylinderGeometry'; - - this.parameters = { - radiusTop: radiusTop, - radiusBottom: radiusBottom, - height: height, - radialSegments: radialSegments, - heightSegments: heightSegments, - openEnded: openEnded, - thetaStart: thetaStart, - thetaLength: thetaLength - }; - - const scope = this; - - radialSegments = Math.floor(radialSegments); - heightSegments = Math.floor(heightSegments); - - // buffers - - const indices = []; - const vertices = []; - const normals = []; - const uvs = []; - - // helper variables - - let index = 0; - const indexArray = []; - const halfHeight = height / 2; - let groupStart = 0; - - // generate geometry - - generateTorso(); - - if (openEnded === false) 
{ - - if (radiusTop > 0) generateCap(true); - if (radiusBottom > 0) generateCap(false); - - } - - // build geometry - - this.setIndex(indices); - this.setAttribute('position', new Float32BufferAttribute(vertices, 3)); - this.setAttribute('normal', new Float32BufferAttribute(normals, 3)); - this.setAttribute('uv', new Float32BufferAttribute(uvs, 2)); - - function generateTorso() { - - const normal = new Vector3(); - const vertex = new Vector3(); - - let groupCount = 0; - - // this will be used to calculate the normal - const slope = (radiusBottom - radiusTop) / height; - - // generate vertices, normals and uvs - - for (let y = 0; y <= heightSegments; y++) { - - const indexRow = []; - - const v = y / heightSegments; - - // calculate the radius of the current row - - const radius = v * (radiusBottom - radiusTop) + radiusTop; - - for (let x = 0; x <= radialSegments; x++) { - - const u = x / radialSegments; - - const theta = u * thetaLength + thetaStart; - - const sinTheta = Math.sin(theta); - const cosTheta = Math.cos(theta); - - // vertex - - vertex.x = radius * sinTheta; - vertex.y = - v * height + halfHeight; - vertex.z = radius * cosTheta; - vertices.push(vertex.x, vertex.y, vertex.z); - - // normal - - normal.set(sinTheta, slope, cosTheta).normalize(); - normals.push(normal.x, normal.y, normal.z); - - // uv - - uvs.push(u, 1 - v); - - // save index of vertex in respective row - - indexRow.push(index++); - - } - - // now save vertices of the row in our index array - - indexArray.push(indexRow); - - } - - // generate indices - - for (let x = 0; x < radialSegments; x++) { - - for (let y = 0; y < heightSegments; y++) { - - // we use the index array to access the correct indices - - const a = indexArray[y][x]; - const b = indexArray[y + 1][x]; - const c = indexArray[y + 1][x + 1]; - const d = indexArray[y][x + 1]; - - // faces - - indices.push(a, b, d); - indices.push(b, c, d); - - // update group counter - - groupCount += 6; - - } - - } - - // add a group to the geometry. this will ensure multi material support - - scope.addGroup(groupStart, groupCount, 0); - - // calculate new start value for groups - - groupStart += groupCount; - - } - - function generateCap(top) { - - // save the index of the first center vertex - const centerIndexStart = index; - - const uv = new Vector2(); - const vertex = new Vector3(); - - let groupCount = 0; - - const radius = (top === true) ? radiusTop : radiusBottom; - const sign = (top === true) ? 1 : - 1; - - // first we generate the center vertex data of the cap. 
- // because the geometry needs one set of uvs per face, - // we must generate a center vertex per face/segment - - for (let x = 1; x <= radialSegments; x++) { - - // vertex - - vertices.push(0, halfHeight * sign, 0); - - // normal - - normals.push(0, sign, 0); - - // uv - - uvs.push(0.5, 0.5); - - // increase index - - index++; - - } - - // save the index of the last center vertex - const centerIndexEnd = index; - - // now we generate the surrounding vertices, normals and uvs - - for (let x = 0; x <= radialSegments; x++) { - - const u = x / radialSegments; - const theta = u * thetaLength + thetaStart; - - const cosTheta = Math.cos(theta); - const sinTheta = Math.sin(theta); - - // vertex - - vertex.x = radius * sinTheta; - vertex.y = halfHeight * sign; - vertex.z = radius * cosTheta; - vertices.push(vertex.x, vertex.y, vertex.z); - - // normal - - normals.push(0, sign, 0); - - // uv - - uv.x = (cosTheta * 0.5) + 0.5; - uv.y = (sinTheta * 0.5 * sign) + 0.5; - uvs.push(uv.x, uv.y); - - // increase index - - index++; - - } - - // generate indices - - for (let x = 0; x < radialSegments; x++) { - - const c = centerIndexStart + x; - const i = centerIndexEnd + x; - - if (top === true) { - - // face top - - indices.push(i, i + 1, c); - - } else { - - // face bottom - - indices.push(i + 1, i, c); - - } - - groupCount += 3; - - } - - // add a group to the geometry. this will ensure multi material support - - scope.addGroup(groupStart, groupCount, top === true ? 1 : 2); - - // calculate new start value for groups - - groupStart += groupCount; - - } - - } - - static fromJSON(data) { - - return new CylinderGeometry(data.radiusTop, data.radiusBottom, data.height, data.radialSegments, data.heightSegments, data.openEnded, data.thetaStart, data.thetaLength); - - } - -} - -class ConeGeometry extends CylinderGeometry { - - constructor(radius = 1, height = 1, radialSegments = 32, heightSegments = 1, openEnded = false, thetaStart = 0, thetaLength = Math.PI * 2) { - - super(0, radius, height, radialSegments, heightSegments, openEnded, thetaStart, thetaLength); - - this.type = 'ConeGeometry'; - - this.parameters = { - radius: radius, - height: height, - radialSegments: radialSegments, - heightSegments: heightSegments, - openEnded: openEnded, - thetaStart: thetaStart, - thetaLength: thetaLength - }; - - } - - static fromJSON(data) { - - return new ConeGeometry(data.radius, data.height, data.radialSegments, data.heightSegments, data.openEnded, data.thetaStart, data.thetaLength); - - } - -} - -class PolyhedronGeometry extends BufferGeometry { - - constructor(vertices = [], indices = [], radius = 1, detail = 0) { - - super(); - - this.type = 'PolyhedronGeometry'; - - this.parameters = { - vertices: vertices, - indices: indices, - radius: radius, - detail: detail - }; - - // default buffer data - - const vertexBuffer = []; - const uvBuffer = []; - - // the subdivision creates the vertex buffer data - - subdivide(detail); - - // all vertices should lie on a conceptual sphere with a given radius - - applyRadius(radius); - - // finally, create the uv data - - generateUVs(); - - // build non-indexed geometry - - this.setAttribute('position', new Float32BufferAttribute(vertexBuffer, 3)); - this.setAttribute('normal', new Float32BufferAttribute(vertexBuffer.slice(), 3)); - this.setAttribute('uv', new Float32BufferAttribute(uvBuffer, 2)); - - if (detail === 0) { - - this.computeVertexNormals(); // flat normals - - } else { - - this.normalizeNormals(); // smooth normals - - } - - // helper functions - - function 
subdivide(detail) { - - const a = new Vector3(); - const b = new Vector3(); - const c = new Vector3(); - - // iterate over all faces and apply a subdivision with the given detail value - - for (let i = 0; i < indices.length; i += 3) { - - // get the vertices of the face - - getVertexByIndex(indices[i + 0], a); - getVertexByIndex(indices[i + 1], b); - getVertexByIndex(indices[i + 2], c); - - // perform subdivision - - subdivideFace(a, b, c, detail); - - } - - } - - function subdivideFace(a, b, c, detail) { - - const cols = detail + 1; - - // we use this multidimensional array as a data structure for creating the subdivision - - const v = []; - - // construct all of the vertices for this subdivision - - for (let i = 0; i <= cols; i++) { - - v[i] = []; - - const aj = a.clone().lerp(c, i / cols); - const bj = b.clone().lerp(c, i / cols); - - const rows = cols - i; - - for (let j = 0; j <= rows; j++) { - - if (j === 0 && i === cols) { - - v[i][j] = aj; - - } else { - - v[i][j] = aj.clone().lerp(bj, j / rows); - - } - - } - - } - - // construct all of the faces - - for (let i = 0; i < cols; i++) { - - for (let j = 0; j < 2 * (cols - i) - 1; j++) { - - const k = Math.floor(j / 2); - - if (j % 2 === 0) { - - pushVertex(v[i][k + 1]); - pushVertex(v[i + 1][k]); - pushVertex(v[i][k]); - - } else { - - pushVertex(v[i][k + 1]); - pushVertex(v[i + 1][k + 1]); - pushVertex(v[i + 1][k]); - - } - - } - - } - - } - - function applyRadius(radius) { - - const vertex = new Vector3(); - - // iterate over the entire buffer and apply the radius to each vertex - - for (let i = 0; i < vertexBuffer.length; i += 3) { - - vertex.x = vertexBuffer[i + 0]; - vertex.y = vertexBuffer[i + 1]; - vertex.z = vertexBuffer[i + 2]; - - vertex.normalize().multiplyScalar(radius); - - vertexBuffer[i + 0] = vertex.x; - vertexBuffer[i + 1] = vertex.y; - vertexBuffer[i + 2] = vertex.z; - - } - - } - - function generateUVs() { - - const vertex = new Vector3(); - - for (let i = 0; i < vertexBuffer.length; i += 3) { - - vertex.x = vertexBuffer[i + 0]; - vertex.y = vertexBuffer[i + 1]; - vertex.z = vertexBuffer[i + 2]; - - const u = azimuth(vertex) / 2 / Math.PI + 0.5; - const v = inclination(vertex) / Math.PI + 0.5; - uvBuffer.push(u, 1 - v); - - } - - correctUVs(); - - correctSeam(); - - } - - function correctSeam() { - - // handle case when face straddles the seam, see #3269 - - for (let i = 0; i < uvBuffer.length; i += 6) { - - // uv data of a single face - - const x0 = uvBuffer[i + 0]; - const x1 = uvBuffer[i + 2]; - const x2 = uvBuffer[i + 4]; - - const max = Math.max(x0, x1, x2); - const min = Math.min(x0, x1, x2); - - // 0.9 is somewhat arbitrary - - if (max > 0.9 && min < 0.1) { - - if (x0 < 0.2) uvBuffer[i + 0] += 1; - if (x1 < 0.2) uvBuffer[i + 2] += 1; - if (x2 < 0.2) uvBuffer[i + 4] += 1; - - } - - } - - } - - function pushVertex(vertex) { - - vertexBuffer.push(vertex.x, vertex.y, vertex.z); - - } - - function getVertexByIndex(index, vertex) { - - const stride = index * 3; - - vertex.x = vertices[stride + 0]; - vertex.y = vertices[stride + 1]; - vertex.z = vertices[stride + 2]; - - } - - function correctUVs() { - - const a = new Vector3(); - const b = new Vector3(); - const c = new Vector3(); - - const centroid = new Vector3(); - - const uvA = new Vector2(); - const uvB = new Vector2(); - const uvC = new Vector2(); - - for (let i = 0, j = 0; i < vertexBuffer.length; i += 9, j += 6) { - - a.set(vertexBuffer[i + 0], vertexBuffer[i + 1], vertexBuffer[i + 2]); - b.set(vertexBuffer[i + 3], vertexBuffer[i + 4], vertexBuffer[i + 5]); 
- c.set(vertexBuffer[i + 6], vertexBuffer[i + 7], vertexBuffer[i + 8]); - - uvA.set(uvBuffer[j + 0], uvBuffer[j + 1]); - uvB.set(uvBuffer[j + 2], uvBuffer[j + 3]); - uvC.set(uvBuffer[j + 4], uvBuffer[j + 5]); - - centroid.copy(a).add(b).add(c).divideScalar(3); - - const azi = azimuth(centroid); - - correctUV(uvA, j + 0, a, azi); - correctUV(uvB, j + 2, b, azi); - correctUV(uvC, j + 4, c, azi); - - } - - } - - function correctUV(uv, stride, vector, azimuth) { - - if ((azimuth < 0) && (uv.x === 1)) { - - uvBuffer[stride] = uv.x - 1; - - } - - if ((vector.x === 0) && (vector.z === 0)) { - - uvBuffer[stride] = azimuth / 2 / Math.PI + 0.5; - - } - - } - - // Angle around the Y axis, counter-clockwise when looking from above. - - function azimuth(vector) { - - return Math.atan2(vector.z, - vector.x); - - } - - - // Angle above the XZ plane. - - function inclination(vector) { - - return Math.atan2(- vector.y, Math.sqrt((vector.x * vector.x) + (vector.z * vector.z))); - - } - - } - - static fromJSON(data) { - - return new PolyhedronGeometry(data.vertices, data.indices, data.radius, data.details); - - } - -} - -class DodecahedronGeometry extends PolyhedronGeometry { - - constructor(radius = 1, detail = 0) { - - const t = (1 + Math.sqrt(5)) / 2; - const r = 1 / t; - - const vertices = [ - - // (±1, ±1, ±1) - - 1, - 1, - 1, - 1, - 1, 1, - - 1, 1, - 1, - 1, 1, 1, - 1, - 1, - 1, 1, - 1, 1, - 1, 1, - 1, 1, 1, 1, - - // (0, ±1/φ, ±φ) - 0, - r, - t, 0, - r, t, - 0, r, - t, 0, r, t, - - // (±1/φ, ±φ, 0) - - r, - t, 0, - r, t, 0, - r, - t, 0, r, t, 0, - - // (±φ, 0, ±1/φ) - - t, 0, - r, t, 0, - r, - - t, 0, r, t, 0, r - ]; - - const indices = [ - 3, 11, 7, 3, 7, 15, 3, 15, 13, - 7, 19, 17, 7, 17, 6, 7, 6, 15, - 17, 4, 8, 17, 8, 10, 17, 10, 6, - 8, 0, 16, 8, 16, 2, 8, 2, 10, - 0, 12, 1, 0, 1, 18, 0, 18, 16, - 6, 10, 2, 6, 2, 13, 6, 13, 15, - 2, 16, 18, 2, 18, 3, 2, 3, 13, - 18, 1, 9, 18, 9, 11, 18, 11, 3, - 4, 14, 12, 4, 12, 0, 4, 0, 8, - 11, 9, 5, 11, 5, 19, 11, 19, 7, - 19, 5, 14, 19, 14, 4, 19, 4, 17, - 1, 12, 14, 1, 14, 5, 1, 5, 9 - ]; - - super(vertices, indices, radius, detail); - - this.type = 'DodecahedronGeometry'; - - this.parameters = { - radius: radius, - detail: detail - }; - - } - - static fromJSON(data) { - - return new DodecahedronGeometry(data.radius, data.detail); - - } - -} - -const _v0 = /*@__PURE__*/ new Vector3(); -const _v1$1 = /*@__PURE__*/ new Vector3(); -const _normal = /*@__PURE__*/ new Vector3(); -const _triangle = /*@__PURE__*/ new Triangle(); - -class EdgesGeometry extends BufferGeometry { - - constructor(geometry = null, thresholdAngle = 1) { - - super(); - - this.type = 'EdgesGeometry'; - - this.parameters = { - geometry: geometry, - thresholdAngle: thresholdAngle - }; - - if (geometry !== null) { - - const precisionPoints = 4; - const precision = Math.pow(10, precisionPoints); - const thresholdDot = Math.cos(DEG2RAD * thresholdAngle); - - const indexAttr = geometry.getIndex(); - const positionAttr = geometry.getAttribute('position'); - const indexCount = indexAttr ? 
indexAttr.count : positionAttr.count; - - const indexArr = [0, 0, 0]; - const vertKeys = ['a', 'b', 'c']; - const hashes = new Array(3); - - const edgeData = {}; - const vertices = []; - for (let i = 0; i < indexCount; i += 3) { - - if (indexAttr) { - - indexArr[0] = indexAttr.getX(i); - indexArr[1] = indexAttr.getX(i + 1); - indexArr[2] = indexAttr.getX(i + 2); - - } else { - - indexArr[0] = i; - indexArr[1] = i + 1; - indexArr[2] = i + 2; - - } - - const { a, b, c } = _triangle; - a.fromBufferAttribute(positionAttr, indexArr[0]); - b.fromBufferAttribute(positionAttr, indexArr[1]); - c.fromBufferAttribute(positionAttr, indexArr[2]); - _triangle.getNormal(_normal); - - // create hashes for the edge from the vertices - hashes[0] = `${Math.round(a.x * precision)},${Math.round(a.y * precision)},${Math.round(a.z * precision)}`; - hashes[1] = `${Math.round(b.x * precision)},${Math.round(b.y * precision)},${Math.round(b.z * precision)}`; - hashes[2] = `${Math.round(c.x * precision)},${Math.round(c.y * precision)},${Math.round(c.z * precision)}`; - - // skip degenerate triangles - if (hashes[0] === hashes[1] || hashes[1] === hashes[2] || hashes[2] === hashes[0]) { - - continue; - - } - - // iterate over every edge - for (let j = 0; j < 3; j++) { - - // get the first and next vertex making up the edge - const jNext = (j + 1) % 3; - const vecHash0 = hashes[j]; - const vecHash1 = hashes[jNext]; - const v0 = _triangle[vertKeys[j]]; - const v1 = _triangle[vertKeys[jNext]]; - - const hash = `${vecHash0}_${vecHash1}`; - const reverseHash = `${vecHash1}_${vecHash0}`; - - if (reverseHash in edgeData && edgeData[reverseHash]) { - - // if we found a sibling edge add it into the vertex array if - // it meets the angle threshold and delete the edge from the map. - if (_normal.dot(edgeData[reverseHash].normal) <= thresholdDot) { - - vertices.push(v0.x, v0.y, v0.z); - vertices.push(v1.x, v1.y, v1.z); - - } - - edgeData[reverseHash] = null; - - } else if (!(hash in edgeData)) { - - // if we've already got an edge here then skip adding a new one - edgeData[hash] = { - - index0: indexArr[j], - index1: indexArr[jNext], - normal: _normal.clone(), - - }; - - } - - } - - } - - // iterate over all remaining, unmatched edges and add them to the vertex array - for (const key in edgeData) { - - if (edgeData[key]) { - - const { index0, index1 } = edgeData[key]; - _v0.fromBufferAttribute(positionAttr, index0); - _v1$1.fromBufferAttribute(positionAttr, index1); - - vertices.push(_v0.x, _v0.y, _v0.z); - vertices.push(_v1$1.x, _v1$1.y, _v1$1.z); - - } - - } - - this.setAttribute('position', new Float32BufferAttribute(vertices, 3)); - - } - - } - -} - -class Shape extends Path { - - constructor(points) { - - super(points); - - this.uuid = generateUUID(); - - this.type = 'Shape'; - - this.holes = []; - - } - - getPointsHoles(divisions) { - - const holesPts = []; - - for (let i = 0, l = this.holes.length; i < l; i++) { - - holesPts[i] = this.holes[i].getPoints(divisions); - - } - - return holesPts; - - } - - // get points of shape and holes (keypoints based on segments parameter) - - extractPoints(divisions) { - - return { - - shape: this.getPoints(divisions), - holes: this.getPointsHoles(divisions) - - }; - - } - - copy(source) { - - super.copy(source); - - this.holes = []; - - for (let i = 0, l = source.holes.length; i < l; i++) { - - const hole = source.holes[i]; - - this.holes.push(hole.clone()); - - } - - return this; - - } - - toJSON() { - - const data = super.toJSON(); - - data.uuid = this.uuid; - data.holes = []; - - for 
(let i = 0, l = this.holes.length; i < l; i++) { - - const hole = this.holes[i]; - data.holes.push(hole.toJSON()); - - } - - return data; - - } - - fromJSON(json) { - - super.fromJSON(json); - - this.uuid = json.uuid; - this.holes = []; - - for (let i = 0, l = json.holes.length; i < l; i++) { - - const hole = json.holes[i]; - this.holes.push(new Path().fromJSON(hole)); - - } - - return this; - - } - -} - -/** - * Port from https://github.com/mapbox/earcut (v2.2.4) - */ - -const Earcut = { - - triangulate: function (data, holeIndices, dim = 2) { - - const hasHoles = holeIndices && holeIndices.length; - const outerLen = hasHoles ? holeIndices[0] * dim : data.length; - let outerNode = linkedList(data, 0, outerLen, dim, true); - const triangles = []; - - if (!outerNode || outerNode.next === outerNode.prev) return triangles; - - let minX, minY, maxX, maxY, x, y, invSize; - - if (hasHoles) outerNode = eliminateHoles(data, holeIndices, outerNode, dim); - - // if the shape is not too simple, we'll use z-order curve hash later; calculate polygon bbox - if (data.length > 80 * dim) { - - minX = maxX = data[0]; - minY = maxY = data[1]; - - for (let i = dim; i < outerLen; i += dim) { - - x = data[i]; - y = data[i + 1]; - if (x < minX) minX = x; - if (y < minY) minY = y; - if (x > maxX) maxX = x; - if (y > maxY) maxY = y; - - } - - // minX, minY and invSize are later used to transform coords into integers for z-order calculation - invSize = Math.max(maxX - minX, maxY - minY); - invSize = invSize !== 0 ? 32767 / invSize : 0; - - } - - earcutLinked(outerNode, triangles, dim, minX, minY, invSize, 0); - - return triangles; - - } - -}; - -// create a circular doubly linked list from polygon points in the specified winding order -function linkedList(data, start, end, dim, clockwise) { - - let i, last; - - if (clockwise === (signedArea(data, start, end, dim) > 0)) { - - for (i = start; i < end; i += dim) last = insertNode(i, data[i], data[i + 1], last); - - } else { - - for (i = end - dim; i >= start; i -= dim) last = insertNode(i, data[i], data[i + 1], last); - - } - - if (last && equals(last, last.next)) { - - removeNode(last); - last = last.next; - - } - - return last; - -} - -// eliminate colinear or duplicate points -function filterPoints(start, end) { - - if (!start) return start; - if (!end) end = start; - - let p = start, - again; - do { - - again = false; - - if (!p.steiner && (equals(p, p.next) || area(p.prev, p, p.next) === 0)) { - - removeNode(p); - p = end = p.prev; - if (p === p.next) break; - again = true; - - } else { - - p = p.next; - - } - - } while (again || p !== end); - - return end; - -} - -// main ear slicing loop which triangulates a polygon (given as a linked list) -function earcutLinked(ear, triangles, dim, minX, minY, invSize, pass) { - - if (!ear) return; - - // interlink polygon nodes in z-order - if (!pass && invSize) indexCurve(ear, minX, minY, invSize); - - let stop = ear, - prev, next; - - // iterate through ears, slicing them one by one - while (ear.prev !== ear.next) { - - prev = ear.prev; - next = ear.next; - - if (invSize ? 
isEarHashed(ear, minX, minY, invSize) : isEar(ear)) { - - // cut off the triangle - triangles.push(prev.i / dim | 0); - triangles.push(ear.i / dim | 0); - triangles.push(next.i / dim | 0); - - removeNode(ear); - - // skipping the next vertex leads to less sliver triangles - ear = next.next; - stop = next.next; - - continue; - - } - - ear = next; - - // if we looped through the whole remaining polygon and can't find any more ears - if (ear === stop) { - - // try filtering points and slicing again - if (!pass) { - - earcutLinked(filterPoints(ear), triangles, dim, minX, minY, invSize, 1); - - // if this didn't work, try curing all small self-intersections locally - - } else if (pass === 1) { - - ear = cureLocalIntersections(filterPoints(ear), triangles, dim); - earcutLinked(ear, triangles, dim, minX, minY, invSize, 2); - - // as a last resort, try splitting the remaining polygon into two - - } else if (pass === 2) { - - splitEarcut(ear, triangles, dim, minX, minY, invSize); - - } - - break; - - } - - } - -} - -// check whether a polygon node forms a valid ear with adjacent nodes -function isEar(ear) { - - const a = ear.prev, - b = ear, - c = ear.next; - - if (area(a, b, c) >= 0) return false; // reflex, can't be an ear - - // now make sure we don't have other points inside the potential ear - const ax = a.x, bx = b.x, cx = c.x, ay = a.y, by = b.y, cy = c.y; - - // triangle bbox; min & max are calculated like this for speed - const x0 = ax < bx ? (ax < cx ? ax : cx) : (bx < cx ? bx : cx), - y0 = ay < by ? (ay < cy ? ay : cy) : (by < cy ? by : cy), - x1 = ax > bx ? (ax > cx ? ax : cx) : (bx > cx ? bx : cx), - y1 = ay > by ? (ay > cy ? ay : cy) : (by > cy ? by : cy); - - let p = c.next; - while (p !== a) { - - if (p.x >= x0 && p.x <= x1 && p.y >= y0 && p.y <= y1 && - pointInTriangle(ax, ay, bx, by, cx, cy, p.x, p.y) && - area(p.prev, p, p.next) >= 0) return false; - p = p.next; - - } - - return true; - -} - -function isEarHashed(ear, minX, minY, invSize) { - - const a = ear.prev, - b = ear, - c = ear.next; - - if (area(a, b, c) >= 0) return false; // reflex, can't be an ear - - const ax = a.x, bx = b.x, cx = c.x, ay = a.y, by = b.y, cy = c.y; - - // triangle bbox; min & max are calculated like this for speed - const x0 = ax < bx ? (ax < cx ? ax : cx) : (bx < cx ? bx : cx), - y0 = ay < by ? (ay < cy ? ay : cy) : (by < cy ? by : cy), - x1 = ax > bx ? (ax > cx ? ax : cx) : (bx > cx ? bx : cx), - y1 = ay > by ? (ay > cy ? ay : cy) : (by > cy ? 
by : cy); - - // z-order range for the current triangle bbox; - const minZ = zOrder(x0, y0, minX, minY, invSize), - maxZ = zOrder(x1, y1, minX, minY, invSize); - - let p = ear.prevZ, - n = ear.nextZ; - - // look for points inside the triangle in both directions - while (p && p.z >= minZ && n && n.z <= maxZ) { - - if (p.x >= x0 && p.x <= x1 && p.y >= y0 && p.y <= y1 && p !== a && p !== c && - pointInTriangle(ax, ay, bx, by, cx, cy, p.x, p.y) && area(p.prev, p, p.next) >= 0) return false; - p = p.prevZ; - - if (n.x >= x0 && n.x <= x1 && n.y >= y0 && n.y <= y1 && n !== a && n !== c && - pointInTriangle(ax, ay, bx, by, cx, cy, n.x, n.y) && area(n.prev, n, n.next) >= 0) return false; - n = n.nextZ; - - } - - // look for remaining points in decreasing z-order - while (p && p.z >= minZ) { - - if (p.x >= x0 && p.x <= x1 && p.y >= y0 && p.y <= y1 && p !== a && p !== c && - pointInTriangle(ax, ay, bx, by, cx, cy, p.x, p.y) && area(p.prev, p, p.next) >= 0) return false; - p = p.prevZ; - - } - - // look for remaining points in increasing z-order - while (n && n.z <= maxZ) { - - if (n.x >= x0 && n.x <= x1 && n.y >= y0 && n.y <= y1 && n !== a && n !== c && - pointInTriangle(ax, ay, bx, by, cx, cy, n.x, n.y) && area(n.prev, n, n.next) >= 0) return false; - n = n.nextZ; - - } - - return true; - -} - -// go through all polygon nodes and cure small local self-intersections -function cureLocalIntersections(start, triangles, dim) { - - let p = start; - do { - - const a = p.prev, - b = p.next.next; - - if (!equals(a, b) && intersects(a, p, p.next, b) && locallyInside(a, b) && locallyInside(b, a)) { - - triangles.push(a.i / dim | 0); - triangles.push(p.i / dim | 0); - triangles.push(b.i / dim | 0); - - // remove two nodes involved - removeNode(p); - removeNode(p.next); - - p = start = b; - - } - - p = p.next; - - } while (p !== start); - - return filterPoints(p); - -} - -// try splitting polygon into two and triangulate them independently -function splitEarcut(start, triangles, dim, minX, minY, invSize) { - - // look for a valid diagonal that divides the polygon into two - let a = start; - do { - - let b = a.next.next; - while (b !== a.prev) { - - if (a.i !== b.i && isValidDiagonal(a, b)) { - - // split the polygon in two by the diagonal - let c = splitPolygon(a, b); - - // filter colinear points around the cuts - a = filterPoints(a, a.next); - c = filterPoints(c, c.next); - - // run earcut on each half - earcutLinked(a, triangles, dim, minX, minY, invSize, 0); - earcutLinked(c, triangles, dim, minX, minY, invSize, 0); - return; - - } - - b = b.next; - - } - - a = a.next; - - } while (a !== start); - -} - -// link every hole into the outer loop, producing a single-ring polygon without holes -function eliminateHoles(data, holeIndices, outerNode, dim) { - - const queue = []; - let i, len, start, end, list; - - for (i = 0, len = holeIndices.length; i < len; i++) { - - start = holeIndices[i] * dim; - end = i < len - 1 ? 
holeIndices[i + 1] * dim : data.length; - list = linkedList(data, start, end, dim, false); - if (list === list.next) list.steiner = true; - queue.push(getLeftmost(list)); - - } - - queue.sort(compareX); - - // process holes from left to right - for (i = 0; i < queue.length; i++) { - - outerNode = eliminateHole(queue[i], outerNode); - - } - - return outerNode; - -} - -function compareX(a, b) { - - return a.x - b.x; - -} - -// find a bridge between vertices that connects hole with an outer ring and link it -function eliminateHole(hole, outerNode) { - - const bridge = findHoleBridge(hole, outerNode); - if (!bridge) { - - return outerNode; - - } - - const bridgeReverse = splitPolygon(bridge, hole); - - // filter collinear points around the cuts - filterPoints(bridgeReverse, bridgeReverse.next); - return filterPoints(bridge, bridge.next); - -} - -// David Eberly's algorithm for finding a bridge between hole and outer polygon -function findHoleBridge(hole, outerNode) { - - let p = outerNode, - qx = - Infinity, - m; - - const hx = hole.x, hy = hole.y; - - // find a segment intersected by a ray from the hole's leftmost point to the left; - // segment's endpoint with lesser x will be potential connection point - do { - - if (hy <= p.y && hy >= p.next.y && p.next.y !== p.y) { - - const x = p.x + (hy - p.y) * (p.next.x - p.x) / (p.next.y - p.y); - if (x <= hx && x > qx) { - - qx = x; - m = p.x < p.next.x ? p : p.next; - if (x === hx) return m; // hole touches outer segment; pick leftmost endpoint - - } - - } - - p = p.next; - - } while (p !== outerNode); - - if (!m) return null; - - // look for points inside the triangle of hole point, segment intersection and endpoint; - // if there are no points found, we have a valid connection; - // otherwise choose the point of the minimum angle with the ray as connection point - - const stop = m, - mx = m.x, - my = m.y; - let tanMin = Infinity, tan; - - p = m; - - do { - - if (hx >= p.x && p.x >= mx && hx !== p.x && - pointInTriangle(hy < my ? hx : qx, hy, mx, my, hy < my ? 
qx : hx, hy, p.x, p.y)) { - - tan = Math.abs(hy - p.y) / (hx - p.x); // tangential - - if (locallyInside(p, hole) && (tan < tanMin || (tan === tanMin && (p.x > m.x || (p.x === m.x && sectorContainsSector(m, p)))))) { - - m = p; - tanMin = tan; - - } - - } - - p = p.next; - - } while (p !== stop); - - return m; - -} - -// whether sector in vertex m contains sector in vertex p in the same coordinates -function sectorContainsSector(m, p) { - - return area(m.prev, m, p.prev) < 0 && area(p.next, m, m.next) < 0; - -} - -// interlink polygon nodes in z-order -function indexCurve(start, minX, minY, invSize) { - - let p = start; - do { - - if (p.z === 0) p.z = zOrder(p.x, p.y, minX, minY, invSize); - p.prevZ = p.prev; - p.nextZ = p.next; - p = p.next; - - } while (p !== start); - - p.prevZ.nextZ = null; - p.prevZ = null; - - sortLinked(p); - -} - -// Simon Tatham's linked list merge sort algorithm -// http://www.chiark.greenend.org.uk/~sgtatham/algorithms/listsort.html -function sortLinked(list) { - - let i, p, q, e, tail, numMerges, pSize, qSize, - inSize = 1; - - do { - - p = list; - list = null; - tail = null; - numMerges = 0; - - while (p) { - - numMerges++; - q = p; - pSize = 0; - for (i = 0; i < inSize; i++) { - - pSize++; - q = q.nextZ; - if (!q) break; - - } - - qSize = inSize; - - while (pSize > 0 || (qSize > 0 && q)) { - - if (pSize !== 0 && (qSize === 0 || !q || p.z <= q.z)) { - - e = p; - p = p.nextZ; - pSize--; - - } else { - - e = q; - q = q.nextZ; - qSize--; - - } - - if (tail) tail.nextZ = e; - else list = e; - - e.prevZ = tail; - tail = e; - - } - - p = q; - - } - - tail.nextZ = null; - inSize *= 2; - - } while (numMerges > 1); - - return list; - -} - -// z-order of a point given coords and inverse of the longer side of data bbox -function zOrder(x, y, minX, minY, invSize) { - - // coords are transformed into non-negative 15-bit integer range - x = (x - minX) * invSize | 0; - y = (y - minY) * invSize | 0; - - x = (x | (x << 8)) & 0x00FF00FF; - x = (x | (x << 4)) & 0x0F0F0F0F; - x = (x | (x << 2)) & 0x33333333; - x = (x | (x << 1)) & 0x55555555; - - y = (y | (y << 8)) & 0x00FF00FF; - y = (y | (y << 4)) & 0x0F0F0F0F; - y = (y | (y << 2)) & 0x33333333; - y = (y | (y << 1)) & 0x55555555; - - return x | (y << 1); - -} - -// find the leftmost node of a polygon ring -function getLeftmost(start) { - - let p = start, - leftmost = start; - do { - - if (p.x < leftmost.x || (p.x === leftmost.x && p.y < leftmost.y)) leftmost = p; - p = p.next; - - } while (p !== start); - - return leftmost; - -} - -// check if a point lies within a convex triangle -function pointInTriangle(ax, ay, bx, by, cx, cy, px, py) { - - return (cx - px) * (ay - py) >= (ax - px) * (cy - py) && - (ax - px) * (by - py) >= (bx - px) * (ay - py) && - (bx - px) * (cy - py) >= (cx - px) * (by - py); - -} - -// check if a diagonal between two polygon nodes is valid (lies in polygon interior) -function isValidDiagonal(a, b) { - - return a.next.i !== b.i && a.prev.i !== b.i && !intersectsPolygon(a, b) && // dones't intersect other edges - (locallyInside(a, b) && locallyInside(b, a) && middleInside(a, b) && // locally visible - (area(a.prev, a, b.prev) || area(a, b.prev, b)) || // does not create opposite-facing sectors - equals(a, b) && area(a.prev, a, a.next) > 0 && area(b.prev, b, b.next) > 0); // special zero-length case - -} - -// signed area of a triangle -function area(p, q, r) { - - return (q.y - p.y) * (r.x - q.x) - (q.x - p.x) * (r.y - q.y); - -} - -// check if two points are equal -function equals(p1, p2) { - - return 
p1.x === p2.x && p1.y === p2.y; - -} - -// check if two segments intersect -function intersects(p1, q1, p2, q2) { - - const o1 = sign(area(p1, q1, p2)); - const o2 = sign(area(p1, q1, q2)); - const o3 = sign(area(p2, q2, p1)); - const o4 = sign(area(p2, q2, q1)); - - if (o1 !== o2 && o3 !== o4) return true; // general case - - if (o1 === 0 && onSegment(p1, p2, q1)) return true; // p1, q1 and p2 are collinear and p2 lies on p1q1 - if (o2 === 0 && onSegment(p1, q2, q1)) return true; // p1, q1 and q2 are collinear and q2 lies on p1q1 - if (o3 === 0 && onSegment(p2, p1, q2)) return true; // p2, q2 and p1 are collinear and p1 lies on p2q2 - if (o4 === 0 && onSegment(p2, q1, q2)) return true; // p2, q2 and q1 are collinear and q1 lies on p2q2 - - return false; - -} - -// for collinear points p, q, r, check if point q lies on segment pr -function onSegment(p, q, r) { - - return q.x <= Math.max(p.x, r.x) && q.x >= Math.min(p.x, r.x) && q.y <= Math.max(p.y, r.y) && q.y >= Math.min(p.y, r.y); - -} - -function sign(num) { - - return num > 0 ? 1 : num < 0 ? - 1 : 0; - -} - -// check if a polygon diagonal intersects any polygon segments -function intersectsPolygon(a, b) { - - let p = a; - do { - - if (p.i !== a.i && p.next.i !== a.i && p.i !== b.i && p.next.i !== b.i && - intersects(p, p.next, a, b)) return true; - p = p.next; - - } while (p !== a); - - return false; - -} - -// check if a polygon diagonal is locally inside the polygon -function locallyInside(a, b) { - - return area(a.prev, a, a.next) < 0 ? - area(a, b, a.next) >= 0 && area(a, a.prev, b) >= 0 : - area(a, b, a.prev) < 0 || area(a, a.next, b) < 0; - -} - -// check if the middle point of a polygon diagonal is inside the polygon -function middleInside(a, b) { - - let p = a, - inside = false; - const px = (a.x + b.x) / 2, - py = (a.y + b.y) / 2; - do { - - if (((p.y > py) !== (p.next.y > py)) && p.next.y !== p.y && - (px < (p.next.x - p.x) * (py - p.y) / (p.next.y - p.y) + p.x)) - inside = !inside; - p = p.next; - - } while (p !== a); - - return inside; - -} - -// link two polygon vertices with a bridge; if the vertices belong to the same ring, it splits polygon into two; -// if one belongs to the outer ring and another to a hole, it merges it into a single ring -function splitPolygon(a, b) { - - const a2 = new Node(a.i, a.x, a.y), - b2 = new Node(b.i, b.x, b.y), - an = a.next, - bp = b.prev; - - a.next = b; - b.prev = a; - - a2.next = an; - an.prev = a2; - - b2.next = a2; - a2.prev = b2; - - bp.next = b2; - b2.prev = bp; - - return b2; - -} - -// create a node and optionally link it with previous one (in a circular doubly linked list) -function insertNode(i, x, y, last) { - - const p = new Node(i, x, y); - - if (!last) { - - p.prev = p; - p.next = p; - - } else { - - p.next = last.next; - p.prev = last; - last.next.prev = p; - last.next = p; - - } - - return p; - -} - -function removeNode(p) { - - p.next.prev = p.prev; - p.prev.next = p.next; - - if (p.prevZ) p.prevZ.nextZ = p.nextZ; - if (p.nextZ) p.nextZ.prevZ = p.prevZ; - -} - -function Node(i, x, y) { - - // vertex index in coordinates array - this.i = i; - - // vertex coordinates - this.x = x; - this.y = y; - - // previous and next vertex nodes in a polygon ring - this.prev = null; - this.next = null; - - // z-order curve value - this.z = 0; - - // previous and next nodes in z-order - this.prevZ = null; - this.nextZ = null; - - // indicates whether this is a steiner point - this.steiner = false; - -} - -function signedArea(data, start, end, dim) { - - let sum = 0; - for (let i = start, 
j = end - dim; i < end; i += dim) { - - sum += (data[j] - data[i]) * (data[i + 1] + data[j + 1]); - j = i; - - } - - return sum; - -} - -class ShapeUtils { - - // calculate area of the contour polygon - - static area(contour) { - - const n = contour.length; - let a = 0.0; - - for (let p = n - 1, q = 0; q < n; p = q++) { - - a += contour[p].x * contour[q].y - contour[q].x * contour[p].y; - - } - - return a * 0.5; - - } - - static isClockWise(pts) { - - return ShapeUtils.area(pts) < 0; - - } - - static triangulateShape(contour, holes) { - - const vertices = []; // flat array of vertices like [ x0,y0, x1,y1, x2,y2, ... ] - const holeIndices = []; // array of hole indices - const faces = []; // final array of vertex indices like [ [ a,b,d ], [ b,c,d ] ] - - removeDupEndPts(contour); - addContour(vertices, contour); - - // - - let holeIndex = contour.length; - - holes.forEach(removeDupEndPts); - - for (let i = 0; i < holes.length; i++) { - - holeIndices.push(holeIndex); - holeIndex += holes[i].length; - addContour(vertices, holes[i]); - - } - - // - - const triangles = Earcut.triangulate(vertices, holeIndices); - - // - - for (let i = 0; i < triangles.length; i += 3) { - - faces.push(triangles.slice(i, i + 3)); - - } - - return faces; - - } - -} - -function removeDupEndPts(points) { - - const l = points.length; - - if (l > 2 && points[l - 1].equals(points[0])) { - - points.pop(); - - } - -} - -function addContour(vertices, contour) { - - for (let i = 0; i < contour.length; i++) { - - vertices.push(contour[i].x); - vertices.push(contour[i].y); - - } - -} - -/** - * Creates extruded geometry from a path shape. - * - * parameters = { - * - * curveSegments: , // number of points on the curves - * steps: , // number of points for z-side extrusions / used for subdividing segments of extrude spline too - * depth: , // Depth to extrude the shape - * - * bevelEnabled: , // turn on bevel - * bevelThickness: , // how deep into the original shape bevel goes - * bevelSize: , // how far from shape outline (including bevelOffset) is bevel - * bevelOffset: , // how far from shape outline does bevel start - * bevelSegments: , // number of bevel layers - * - * extrudePath: // curve to extrude shape along - * - * UVGenerator: // object that provides UV generator functions - * - * } - */ - -class ExtrudeGeometry extends BufferGeometry { - - constructor(shapes = new Shape([new Vector2(0.5, 0.5), new Vector2(- 0.5, 0.5), new Vector2(- 0.5, - 0.5), new Vector2(0.5, - 0.5)]), options = {}) { - - super(); - - this.type = 'ExtrudeGeometry'; - - this.parameters = { - shapes: shapes, - options: options - }; - - shapes = Array.isArray(shapes) ? shapes : [shapes]; - - const scope = this; - - const verticesArray = []; - const uvArray = []; - - for (let i = 0, l = shapes.length; i < l; i++) { - - const shape = shapes[i]; - addShape(shape); - - } - - // build geometry - - this.setAttribute('position', new Float32BufferAttribute(verticesArray, 3)); - this.setAttribute('uv', new Float32BufferAttribute(uvArray, 2)); - - this.computeVertexNormals(); - - // functions - - function addShape(shape) { - - const placeholder = []; - - // options - - const curveSegments = options.curveSegments !== undefined ? options.curveSegments : 12; - const steps = options.steps !== undefined ? options.steps : 1; - const depth = options.depth !== undefined ? options.depth : 1; - - let bevelEnabled = options.bevelEnabled !== undefined ? options.bevelEnabled : true; - let bevelThickness = options.bevelThickness !== undefined ? 
options.bevelThickness : 0.2; - let bevelSize = options.bevelSize !== undefined ? options.bevelSize : bevelThickness - 0.1; - let bevelOffset = options.bevelOffset !== undefined ? options.bevelOffset : 0; - let bevelSegments = options.bevelSegments !== undefined ? options.bevelSegments : 3; - - const extrudePath = options.extrudePath; - - const uvgen = options.UVGenerator !== undefined ? options.UVGenerator : WorldUVGenerator; - - // - - let extrudePts, extrudeByPath = false; - let splineTube, binormal, normal, position2; - - if (extrudePath) { - - extrudePts = extrudePath.getSpacedPoints(steps); - - extrudeByPath = true; - bevelEnabled = false; // bevels not supported for path extrusion - - // SETUP TNB variables - - // TODO1 - have a .isClosed in spline? - - splineTube = extrudePath.computeFrenetFrames(steps, false); - - // console.log(splineTube, 'splineTube', splineTube.normals.length, 'steps', steps, 'extrudePts', extrudePts.length); - - binormal = new Vector3(); - normal = new Vector3(); - position2 = new Vector3(); - - } - - // Safeguards if bevels are not enabled - - if (!bevelEnabled) { - - bevelSegments = 0; - bevelThickness = 0; - bevelSize = 0; - bevelOffset = 0; - - } - - // Variables initialization - - const shapePoints = shape.extractPoints(curveSegments); - - let vertices = shapePoints.shape; - const holes = shapePoints.holes; - - const reverse = !ShapeUtils.isClockWise(vertices); - - if (reverse) { - - vertices = vertices.reverse(); - - // Maybe we should also check if holes are in the opposite direction, just to be safe ... - - for (let h = 0, hl = holes.length; h < hl; h++) { - - const ahole = holes[h]; - - if (ShapeUtils.isClockWise(ahole)) { - - holes[h] = ahole.reverse(); - - } - - } - - } - - - const faces = ShapeUtils.triangulateShape(vertices, holes); - - /* Vertices */ - - const contour = vertices; // vertices has all points but contour has only points of circumference - - for (let h = 0, hl = holes.length; h < hl; h++) { - - const ahole = holes[h]; - - vertices = vertices.concat(ahole); - - } - - - function scalePt2(pt, vec, size) { - - if (!vec) console.error('THREE.ExtrudeGeometry: vec does not exist'); - - return vec.clone().multiplyScalar(size).add(pt); - - } - - const vlen = vertices.length, flen = faces.length; - - - // Find directions for point movement - - - function getBevelVec(inPt, inPrev, inNext) { - - // computes for inPt the corresponding point inPt' on a new contour - // shifted by 1 unit (length of normalized vector) to the left - // if we walk along contour clockwise, this new contour is outside the old one - // - // inPt' is the intersection of the two lines parallel to the two - // adjacent edges of inPt at a distance of 1 unit on the left side. 
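// Concretely: both adjacent edges are shifted one unit to their left using the
// perpendiculars (-dy, dx) of the normalized edge vectors, and inPt' is the
// intersection of the shifted edges; `sf` below is the parameter along the
// previous edge at which that intersection occurs. Worked example (illustrative,
// not from the original source): for inPrev = (0, 0), inPt = (1, 0),
// inNext = (1, 1) the shifted edges are the lines y = 1 and x = 0, which meet
// at (0, 1), so the returned bevel vector is (-1, 1). For (nearly) collinear
// edges the intersection is ill-defined and the code falls back to the edge
// perpendicular instead.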
- - let v_trans_x, v_trans_y, shrink_by; // resulting translation vector for inPt - - // good reading for geometry algorithms (here: line-line intersection) - // http://geomalgorithms.com/a05-_intersect-1.html - - const v_prev_x = inPt.x - inPrev.x, - v_prev_y = inPt.y - inPrev.y; - const v_next_x = inNext.x - inPt.x, - v_next_y = inNext.y - inPt.y; - - const v_prev_lensq = (v_prev_x * v_prev_x + v_prev_y * v_prev_y); - - // check for collinear edges - const collinear0 = (v_prev_x * v_next_y - v_prev_y * v_next_x); - - if (Math.abs(collinear0) > Number.EPSILON) { - - // not collinear - - // length of vectors for normalizing - - const v_prev_len = Math.sqrt(v_prev_lensq); - const v_next_len = Math.sqrt(v_next_x * v_next_x + v_next_y * v_next_y); - - // shift adjacent points by unit vectors to the left - - const ptPrevShift_x = (inPrev.x - v_prev_y / v_prev_len); - const ptPrevShift_y = (inPrev.y + v_prev_x / v_prev_len); - - const ptNextShift_x = (inNext.x - v_next_y / v_next_len); - const ptNextShift_y = (inNext.y + v_next_x / v_next_len); - - // scaling factor for v_prev to intersection point - - const sf = ((ptNextShift_x - ptPrevShift_x) * v_next_y - - (ptNextShift_y - ptPrevShift_y) * v_next_x) / - (v_prev_x * v_next_y - v_prev_y * v_next_x); - - // vector from inPt to intersection point - - v_trans_x = (ptPrevShift_x + v_prev_x * sf - inPt.x); - v_trans_y = (ptPrevShift_y + v_prev_y * sf - inPt.y); - - // Don't normalize!, otherwise sharp corners become ugly - // but prevent crazy spikes - const v_trans_lensq = (v_trans_x * v_trans_x + v_trans_y * v_trans_y); - if (v_trans_lensq <= 2) { - - return new Vector2(v_trans_x, v_trans_y); - - } else { - - shrink_by = Math.sqrt(v_trans_lensq / 2); - - } - - } else { - - // handle special case of collinear edges - - let direction_eq = false; // assumes: opposite - - if (v_prev_x > Number.EPSILON) { - - if (v_next_x > Number.EPSILON) { - - direction_eq = true; - - } - - } else { - - if (v_prev_x < - Number.EPSILON) { - - if (v_next_x < - Number.EPSILON) { - - direction_eq = true; - - } - - } else { - - if (Math.sign(v_prev_y) === Math.sign(v_next_y)) { - - direction_eq = true; - - } - - } - - } - - if (direction_eq) { - - // console.log("Warning: lines are a straight sequence"); - v_trans_x = - v_prev_y; - v_trans_y = v_prev_x; - shrink_by = Math.sqrt(v_prev_lensq); - - } else { - - // console.log("Warning: lines are a straight spike"); - v_trans_x = v_prev_x; - v_trans_y = v_prev_y; - shrink_by = Math.sqrt(v_prev_lensq / 2); - - } - - } - - return new Vector2(v_trans_x / shrink_by, v_trans_y / shrink_by); - - } - - - const contourMovements = []; - - for (let i = 0, il = contour.length, j = il - 1, k = i + 1; i < il; i++, j++, k++) { - - if (j === il) j = 0; - if (k === il) k = 0; - - // (j)---(i)---(k) - // console.log('i,j,k', i, j , k) - - contourMovements[i] = getBevelVec(contour[i], contour[j], contour[k]); - - } - - const holesMovements = []; - let oneHoleMovements, verticesMovements = contourMovements.concat(); - - for (let h = 0, hl = holes.length; h < hl; h++) { - - const ahole = holes[h]; - - oneHoleMovements = []; - - for (let i = 0, il = ahole.length, j = il - 1, k = i + 1; i < il; i++, j++, k++) { - - if (j === il) j = 0; - if (k === il) k = 0; - - // (j)---(i)---(k) - oneHoleMovements[i] = getBevelVec(ahole[i], ahole[j], ahole[k]); - - } - - holesMovements.push(oneHoleMovements); - verticesMovements = verticesMovements.concat(oneHoleMovements); - - } - - - // Loop bevelSegments, 1 for the front, 1 for the back - - for (let b = 0; 
b < bevelSegments; b++) { - - //for ( b = bevelSegments; b > 0; b -- ) { - - const t = b / bevelSegments; - const z = bevelThickness * Math.cos(t * Math.PI / 2); - const bs = bevelSize * Math.sin(t * Math.PI / 2) + bevelOffset; - - // contract shape - - for (let i = 0, il = contour.length; i < il; i++) { - - const vert = scalePt2(contour[i], contourMovements[i], bs); - - v(vert.x, vert.y, - z); - - } - - // expand holes - - for (let h = 0, hl = holes.length; h < hl; h++) { - - const ahole = holes[h]; - oneHoleMovements = holesMovements[h]; - - for (let i = 0, il = ahole.length; i < il; i++) { - - const vert = scalePt2(ahole[i], oneHoleMovements[i], bs); - - v(vert.x, vert.y, - z); - - } - - } - - } - - const bs = bevelSize + bevelOffset; - - // Back facing vertices - - for (let i = 0; i < vlen; i++) { - - const vert = bevelEnabled ? scalePt2(vertices[i], verticesMovements[i], bs) : vertices[i]; - - if (!extrudeByPath) { - - v(vert.x, vert.y, 0); - - } else { - - // v( vert.x, vert.y + extrudePts[ 0 ].y, extrudePts[ 0 ].x ); - - normal.copy(splineTube.normals[0]).multiplyScalar(vert.x); - binormal.copy(splineTube.binormals[0]).multiplyScalar(vert.y); - - position2.copy(extrudePts[0]).add(normal).add(binormal); - - v(position2.x, position2.y, position2.z); - - } - - } - - // Add stepped vertices... - // Including front facing vertices - - for (let s = 1; s <= steps; s++) { - - for (let i = 0; i < vlen; i++) { - - const vert = bevelEnabled ? scalePt2(vertices[i], verticesMovements[i], bs) : vertices[i]; - - if (!extrudeByPath) { - - v(vert.x, vert.y, depth / steps * s); - - } else { - - // v( vert.x, vert.y + extrudePts[ s - 1 ].y, extrudePts[ s - 1 ].x ); - - normal.copy(splineTube.normals[s]).multiplyScalar(vert.x); - binormal.copy(splineTube.binormals[s]).multiplyScalar(vert.y); - - position2.copy(extrudePts[s]).add(normal).add(binormal); - - v(position2.x, position2.y, position2.z); - - } - - } - - } - - - // Add bevel segments planes - - //for ( b = 1; b <= bevelSegments; b ++ ) { - for (let b = bevelSegments - 1; b >= 0; b--) { - - const t = b / bevelSegments; - const z = bevelThickness * Math.cos(t * Math.PI / 2); - const bs = bevelSize * Math.sin(t * Math.PI / 2) + bevelOffset; - - // contract shape - - for (let i = 0, il = contour.length; i < il; i++) { - - const vert = scalePt2(contour[i], contourMovements[i], bs); - v(vert.x, vert.y, depth + z); - - } - - // expand holes - - for (let h = 0, hl = holes.length; h < hl; h++) { - - const ahole = holes[h]; - oneHoleMovements = holesMovements[h]; - - for (let i = 0, il = ahole.length; i < il; i++) { - - const vert = scalePt2(ahole[i], oneHoleMovements[i], bs); - - if (!extrudeByPath) { - - v(vert.x, vert.y, depth + z); - - } else { - - v(vert.x, vert.y + extrudePts[steps - 1].y, extrudePts[steps - 1].x + z); - - } - - } - - } - - } - - /* Faces */ - - // Top and bottom faces - - buildLidFaces(); - - // Sides faces - - buildSideFaces(); - - - ///// Internal functions - - function buildLidFaces() { - - const start = verticesArray.length / 3; - - if (bevelEnabled) { - - let layer = 0; // steps + 1 - let offset = vlen * layer; - - // Bottom faces - - for (let i = 0; i < flen; i++) { - - const face = faces[i]; - f3(face[2] + offset, face[1] + offset, face[0] + offset); - - } - - layer = steps + bevelSegments * 2; - offset = vlen * layer; - - // Top faces - - for (let i = 0; i < flen; i++) { - - const face = faces[i]; - f3(face[0] + offset, face[1] + offset, face[2] + offset); - - } - - } else { - - // Bottom faces - - for (let i = 0; i < 
flen; i++) { - - const face = faces[i]; - f3(face[2], face[1], face[0]); - - } - - // Top faces - - for (let i = 0; i < flen; i++) { - - const face = faces[i]; - f3(face[0] + vlen * steps, face[1] + vlen * steps, face[2] + vlen * steps); - - } - - } - - scope.addGroup(start, verticesArray.length / 3 - start, 0); - - } - - // Create faces for the z-sides of the shape - - function buildSideFaces() { - - const start = verticesArray.length / 3; - let layeroffset = 0; - sidewalls(contour, layeroffset); - layeroffset += contour.length; - - for (let h = 0, hl = holes.length; h < hl; h++) { - - const ahole = holes[h]; - sidewalls(ahole, layeroffset); - - //, true - layeroffset += ahole.length; - - } - - - scope.addGroup(start, verticesArray.length / 3 - start, 1); - - - } - - function sidewalls(contour, layeroffset) { - - let i = contour.length; - - while (--i >= 0) { - - const j = i; - let k = i - 1; - if (k < 0) k = contour.length - 1; - - //console.log('b', i,j, i-1, k,vertices.length); - - for (let s = 0, sl = (steps + bevelSegments * 2); s < sl; s++) { - - const slen1 = vlen * s; - const slen2 = vlen * (s + 1); - - const a = layeroffset + j + slen1, - b = layeroffset + k + slen1, - c = layeroffset + k + slen2, - d = layeroffset + j + slen2; - - f4(a, b, c, d); - - } - - } - - } - - function v(x, y, z) { - - placeholder.push(x); - placeholder.push(y); - placeholder.push(z); - - } - - - function f3(a, b, c) { - - addVertex(a); - addVertex(b); - addVertex(c); - - const nextIndex = verticesArray.length / 3; - const uvs = uvgen.generateTopUV(scope, verticesArray, nextIndex - 3, nextIndex - 2, nextIndex - 1); - - addUV(uvs[0]); - addUV(uvs[1]); - addUV(uvs[2]); - - } - - function f4(a, b, c, d) { - - addVertex(a); - addVertex(b); - addVertex(d); - - addVertex(b); - addVertex(c); - addVertex(d); - - - const nextIndex = verticesArray.length / 3; - const uvs = uvgen.generateSideWallUV(scope, verticesArray, nextIndex - 6, nextIndex - 3, nextIndex - 2, nextIndex - 1); - - addUV(uvs[0]); - addUV(uvs[1]); - addUV(uvs[3]); - - addUV(uvs[1]); - addUV(uvs[2]); - addUV(uvs[3]); - - } - - function addVertex(index) { - - verticesArray.push(placeholder[index * 3 + 0]); - verticesArray.push(placeholder[index * 3 + 1]); - verticesArray.push(placeholder[index * 3 + 2]); - - } - - - function addUV(vector2) { - - uvArray.push(vector2.x); - uvArray.push(vector2.y); - - } - - } - - } - - toJSON() { - - const data = super.toJSON(); - - const shapes = this.parameters.shapes; - const options = this.parameters.options; - - return toJSON$1(shapes, options, data); - - } - - static fromJSON(data, shapes) { - - const geometryShapes = []; - - for (let j = 0, jl = data.shapes.length; j < jl; j++) { - - const shape = shapes[data.shapes[j]]; - - geometryShapes.push(shape); - - } - - const extrudePath = data.options.extrudePath; - - if (extrudePath !== undefined) { - - data.options.extrudePath = new Curves[extrudePath.type]().fromJSON(extrudePath); - - } - - return new ExtrudeGeometry(geometryShapes, data.options); - - } - -} - -const WorldUVGenerator = { - - generateTopUV: function (geometry, vertices, indexA, indexB, indexC) { - - const a_x = vertices[indexA * 3]; - const a_y = vertices[indexA * 3 + 1]; - const b_x = vertices[indexB * 3]; - const b_y = vertices[indexB * 3 + 1]; - const c_x = vertices[indexC * 3]; - const c_y = vertices[indexC * 3 + 1]; - - return [ - new Vector2(a_x, a_y), - new Vector2(b_x, b_y), - new Vector2(c_x, c_y) - ]; - - }, - - generateSideWallUV: function (geometry, vertices, indexA, indexB, indexC, 
indexD) { - - const a_x = vertices[indexA * 3]; - const a_y = vertices[indexA * 3 + 1]; - const a_z = vertices[indexA * 3 + 2]; - const b_x = vertices[indexB * 3]; - const b_y = vertices[indexB * 3 + 1]; - const b_z = vertices[indexB * 3 + 2]; - const c_x = vertices[indexC * 3]; - const c_y = vertices[indexC * 3 + 1]; - const c_z = vertices[indexC * 3 + 2]; - const d_x = vertices[indexD * 3]; - const d_y = vertices[indexD * 3 + 1]; - const d_z = vertices[indexD * 3 + 2]; - - if (Math.abs(a_y - b_y) < Math.abs(a_x - b_x)) { - - return [ - new Vector2(a_x, 1 - a_z), - new Vector2(b_x, 1 - b_z), - new Vector2(c_x, 1 - c_z), - new Vector2(d_x, 1 - d_z) - ]; - - } else { - - return [ - new Vector2(a_y, 1 - a_z), - new Vector2(b_y, 1 - b_z), - new Vector2(c_y, 1 - c_z), - new Vector2(d_y, 1 - d_z) - ]; - - } - - } - -}; - -function toJSON$1(shapes, options, data) { - - data.shapes = []; - - if (Array.isArray(shapes)) { - - for (let i = 0, l = shapes.length; i < l; i++) { - - const shape = shapes[i]; - - data.shapes.push(shape.uuid); - - } - - } else { - - data.shapes.push(shapes.uuid); - - } - - data.options = Object.assign({}, options); - - if (options.extrudePath !== undefined) data.options.extrudePath = options.extrudePath.toJSON(); - - return data; - -} - -class IcosahedronGeometry extends PolyhedronGeometry { - - constructor(radius = 1, detail = 0) { - - const t = (1 + Math.sqrt(5)) / 2; - - const vertices = [ - - 1, t, 0, 1, t, 0, - 1, - t, 0, 1, - t, 0, - 0, - 1, t, 0, 1, t, 0, - 1, - t, 0, 1, - t, - t, 0, - 1, t, 0, 1, - t, 0, - 1, - t, 0, 1 - ]; - - const indices = [ - 0, 11, 5, 0, 5, 1, 0, 1, 7, 0, 7, 10, 0, 10, 11, - 1, 5, 9, 5, 11, 4, 11, 10, 2, 10, 7, 6, 7, 1, 8, - 3, 9, 4, 3, 4, 2, 3, 2, 6, 3, 6, 8, 3, 8, 9, - 4, 9, 5, 2, 4, 11, 6, 2, 10, 8, 6, 7, 9, 8, 1 - ]; - - super(vertices, indices, radius, detail); - - this.type = 'IcosahedronGeometry'; - - this.parameters = { - radius: radius, - detail: detail - }; - - } - - static fromJSON(data) { - - return new IcosahedronGeometry(data.radius, data.detail); - - } - -} - -class OctahedronGeometry extends PolyhedronGeometry { - - constructor(radius = 1, detail = 0) { - - const vertices = [ - 1, 0, 0, - 1, 0, 0, 0, 1, 0, - 0, - 1, 0, 0, 0, 1, 0, 0, - 1 - ]; - - const indices = [ - 0, 2, 4, 0, 4, 3, 0, 3, 5, - 0, 5, 2, 1, 2, 5, 1, 5, 3, - 1, 3, 4, 1, 4, 2 - ]; - - super(vertices, indices, radius, detail); - - this.type = 'OctahedronGeometry'; - - this.parameters = { - radius: radius, - detail: detail - }; - - } - - static fromJSON(data) { - - return new OctahedronGeometry(data.radius, data.detail); - - } - -} - -class RingGeometry extends BufferGeometry { - - constructor(innerRadius = 0.5, outerRadius = 1, thetaSegments = 32, phiSegments = 1, thetaStart = 0, thetaLength = Math.PI * 2) { - - super(); - - this.type = 'RingGeometry'; - - this.parameters = { - innerRadius: innerRadius, - outerRadius: outerRadius, - thetaSegments: thetaSegments, - phiSegments: phiSegments, - thetaStart: thetaStart, - thetaLength: thetaLength - }; - - thetaSegments = Math.max(3, thetaSegments); - phiSegments = Math.max(1, phiSegments); - - // buffers - - const indices = []; - const vertices = []; - const normals = []; - const uvs = []; - - // some helper variables - - let radius = innerRadius; - const radiusStep = ((outerRadius - innerRadius) / phiSegments); - const vertex = new Vector3(); - const uv = new Vector2(); - - // generate vertices, normals and uvs - - for (let j = 0; j <= phiSegments; j++) { - - for (let i = 0; i <= thetaSegments; i++) { - - // values 
are generate from the inside of the ring to the outside - - const segment = thetaStart + i / thetaSegments * thetaLength; - - // vertex - - vertex.x = radius * Math.cos(segment); - vertex.y = radius * Math.sin(segment); - - vertices.push(vertex.x, vertex.y, vertex.z); - - // normal - - normals.push(0, 0, 1); - - // uv - - uv.x = (vertex.x / outerRadius + 1) / 2; - uv.y = (vertex.y / outerRadius + 1) / 2; - - uvs.push(uv.x, uv.y); - - } - - // increase the radius for next row of vertices - - radius += radiusStep; - - } - - // indices - - for (let j = 0; j < phiSegments; j++) { - - const thetaSegmentLevel = j * (thetaSegments + 1); - - for (let i = 0; i < thetaSegments; i++) { - - const segment = i + thetaSegmentLevel; - - const a = segment; - const b = segment + thetaSegments + 1; - const c = segment + thetaSegments + 2; - const d = segment + 1; - - // faces - - indices.push(a, b, d); - indices.push(b, c, d); - - } - - } - - // build geometry - - this.setIndex(indices); - this.setAttribute('position', new Float32BufferAttribute(vertices, 3)); - this.setAttribute('normal', new Float32BufferAttribute(normals, 3)); - this.setAttribute('uv', new Float32BufferAttribute(uvs, 2)); - - } - - static fromJSON(data) { - - return new RingGeometry(data.innerRadius, data.outerRadius, data.thetaSegments, data.phiSegments, data.thetaStart, data.thetaLength); - - } - -} - -class ShapeGeometry extends BufferGeometry { - - constructor(shapes = new Shape([new Vector2(0, 0.5), new Vector2(- 0.5, - 0.5), new Vector2(0.5, - 0.5)]), curveSegments = 12) { - - super(); - - this.type = 'ShapeGeometry'; - - this.parameters = { - shapes: shapes, - curveSegments: curveSegments - }; - - // buffers - - const indices = []; - const vertices = []; - const normals = []; - const uvs = []; - - // helper variables - - let groupStart = 0; - let groupCount = 0; - - // allow single and array values for "shapes" parameter - - if (Array.isArray(shapes) === false) { - - addShape(shapes); - - } else { - - for (let i = 0; i < shapes.length; i++) { - - addShape(shapes[i]); - - this.addGroup(groupStart, groupCount, i); // enables MultiMaterial support - - groupStart += groupCount; - groupCount = 0; - - } - - } - - // build geometry - - this.setIndex(indices); - this.setAttribute('position', new Float32BufferAttribute(vertices, 3)); - this.setAttribute('normal', new Float32BufferAttribute(normals, 3)); - this.setAttribute('uv', new Float32BufferAttribute(uvs, 2)); - - - // helper functions - - function addShape(shape) { - - const indexOffset = vertices.length / 3; - const points = shape.extractPoints(curveSegments); - - let shapeVertices = points.shape; - const shapeHoles = points.holes; - - // check direction of vertices - - if (ShapeUtils.isClockWise(shapeVertices) === false) { - - shapeVertices = shapeVertices.reverse(); - - } - - for (let i = 0, l = shapeHoles.length; i < l; i++) { - - const shapeHole = shapeHoles[i]; - - if (ShapeUtils.isClockWise(shapeHole) === true) { - - shapeHoles[i] = shapeHole.reverse(); - - } - - } - - const faces = ShapeUtils.triangulateShape(shapeVertices, shapeHoles); - - // join vertices of inner and outer paths to a single array - - for (let i = 0, l = shapeHoles.length; i < l; i++) { - - const shapeHole = shapeHoles[i]; - shapeVertices = shapeVertices.concat(shapeHole); - - } - - // vertices, normals, uvs - - for (let i = 0, l = shapeVertices.length; i < l; i++) { - - const vertex = shapeVertices[i]; - - vertices.push(vertex.x, vertex.y, 0); - normals.push(0, 0, 1); - uvs.push(vertex.x, vertex.y); // 
world uvs - - } - - // indices - - for (let i = 0, l = faces.length; i < l; i++) { - - const face = faces[i]; - - const a = face[0] + indexOffset; - const b = face[1] + indexOffset; - const c = face[2] + indexOffset; - - indices.push(a, b, c); - groupCount += 3; - - } - - } - - } - - toJSON() { - - const data = super.toJSON(); - - const shapes = this.parameters.shapes; - - return toJSON(shapes, data); - - } - - static fromJSON(data, shapes) { - - const geometryShapes = []; - - for (let j = 0, jl = data.shapes.length; j < jl; j++) { - - const shape = shapes[data.shapes[j]]; - - geometryShapes.push(shape); - - } - - return new ShapeGeometry(geometryShapes, data.curveSegments); - - } - -} - -function toJSON(shapes, data) { - - data.shapes = []; - - if (Array.isArray(shapes)) { - - for (let i = 0, l = shapes.length; i < l; i++) { - - const shape = shapes[i]; - - data.shapes.push(shape.uuid); - - } - - } else { - - data.shapes.push(shapes.uuid); - - } - - return data; - -} - -class SphereGeometry extends BufferGeometry { - - constructor(radius = 1, widthSegments = 32, heightSegments = 16, phiStart = 0, phiLength = Math.PI * 2, thetaStart = 0, thetaLength = Math.PI) { - - super(); - - this.type = 'SphereGeometry'; - - this.parameters = { - radius: radius, - widthSegments: widthSegments, - heightSegments: heightSegments, - phiStart: phiStart, - phiLength: phiLength, - thetaStart: thetaStart, - thetaLength: thetaLength - }; - - widthSegments = Math.max(3, Math.floor(widthSegments)); - heightSegments = Math.max(2, Math.floor(heightSegments)); - - const thetaEnd = Math.min(thetaStart + thetaLength, Math.PI); - - let index = 0; - const grid = []; - - const vertex = new Vector3(); - const normal = new Vector3(); - - // buffers - - const indices = []; - const vertices = []; - const normals = []; - const uvs = []; - - // generate vertices, normals and uvs - - for (let iy = 0; iy <= heightSegments; iy++) { - - const verticesRow = []; - - const v = iy / heightSegments; - - // special case for the poles - - let uOffset = 0; - - if (iy == 0 && thetaStart == 0) { - - uOffset = 0.5 / widthSegments; - - } else if (iy == heightSegments && thetaEnd == Math.PI) { - - uOffset = - 0.5 / widthSegments; - - } - - for (let ix = 0; ix <= widthSegments; ix++) { - - const u = ix / widthSegments; - - // vertex - - vertex.x = - radius * Math.cos(phiStart + u * phiLength) * Math.sin(thetaStart + v * thetaLength); - vertex.y = radius * Math.cos(thetaStart + v * thetaLength); - vertex.z = radius * Math.sin(phiStart + u * phiLength) * Math.sin(thetaStart + v * thetaLength); - - vertices.push(vertex.x, vertex.y, vertex.z); - - // normal - - normal.copy(vertex).normalize(); - normals.push(normal.x, normal.y, normal.z); - - // uv - - uvs.push(u + uOffset, 1 - v); - - verticesRow.push(index++); - - } - - grid.push(verticesRow); - - } - - // indices - - for (let iy = 0; iy < heightSegments; iy++) { - - for (let ix = 0; ix < widthSegments; ix++) { - - const a = grid[iy][ix + 1]; - const b = grid[iy][ix]; - const c = grid[iy + 1][ix]; - const d = grid[iy + 1][ix + 1]; - - if (iy !== 0 || thetaStart > 0) indices.push(a, b, d); - if (iy !== heightSegments - 1 || thetaEnd < Math.PI) indices.push(b, c, d); - - } - - } - - // build geometry - - this.setIndex(indices); - this.setAttribute('position', new Float32BufferAttribute(vertices, 3)); - this.setAttribute('normal', new Float32BufferAttribute(normals, 3)); - this.setAttribute('uv', new Float32BufferAttribute(uvs, 2)); - - } - - static fromJSON(data) { - - return new 
SphereGeometry(data.radius, data.widthSegments, data.heightSegments, data.phiStart, data.phiLength, data.thetaStart, data.thetaLength); - - } - -} - -class TetrahedronGeometry extends PolyhedronGeometry { - - constructor(radius = 1, detail = 0) { - - const vertices = [ - 1, 1, 1, - 1, - 1, 1, - 1, 1, - 1, 1, - 1, - 1 - ]; - - const indices = [ - 2, 1, 0, 0, 3, 2, 1, 3, 0, 2, 3, 1 - ]; - - super(vertices, indices, radius, detail); - - this.type = 'TetrahedronGeometry'; - - this.parameters = { - radius: radius, - detail: detail - }; - - } - - static fromJSON(data) { - - return new TetrahedronGeometry(data.radius, data.detail); - - } - -} - -class TorusGeometry extends BufferGeometry { - - constructor(radius = 1, tube = 0.4, radialSegments = 12, tubularSegments = 48, arc = Math.PI * 2) { - - super(); - - this.type = 'TorusGeometry'; - - this.parameters = { - radius: radius, - tube: tube, - radialSegments: radialSegments, - tubularSegments: tubularSegments, - arc: arc - }; - - radialSegments = Math.floor(radialSegments); - tubularSegments = Math.floor(tubularSegments); - - // buffers - - const indices = []; - const vertices = []; - const normals = []; - const uvs = []; - - // helper variables - - const center = new Vector3(); - const vertex = new Vector3(); - const normal = new Vector3(); - - // generate vertices, normals and uvs - - for (let j = 0; j <= radialSegments; j++) { - - for (let i = 0; i <= tubularSegments; i++) { - - const u = i / tubularSegments * arc; - const v = j / radialSegments * Math.PI * 2; - - // vertex - - vertex.x = (radius + tube * Math.cos(v)) * Math.cos(u); - vertex.y = (radius + tube * Math.cos(v)) * Math.sin(u); - vertex.z = tube * Math.sin(v); - - vertices.push(vertex.x, vertex.y, vertex.z); - - // normal - - center.x = radius * Math.cos(u); - center.y = radius * Math.sin(u); - normal.subVectors(vertex, center).normalize(); - - normals.push(normal.x, normal.y, normal.z); - - // uv - - uvs.push(i / tubularSegments); - uvs.push(j / radialSegments); - - } - - } - - // generate indices - - for (let j = 1; j <= radialSegments; j++) { - - for (let i = 1; i <= tubularSegments; i++) { - - // indices - - const a = (tubularSegments + 1) * j + i - 1; - const b = (tubularSegments + 1) * (j - 1) + i - 1; - const c = (tubularSegments + 1) * (j - 1) + i; - const d = (tubularSegments + 1) * j + i; - - // faces - - indices.push(a, b, d); - indices.push(b, c, d); - - } - - } - - // build geometry - - this.setIndex(indices); - this.setAttribute('position', new Float32BufferAttribute(vertices, 3)); - this.setAttribute('normal', new Float32BufferAttribute(normals, 3)); - this.setAttribute('uv', new Float32BufferAttribute(uvs, 2)); - - } - - static fromJSON(data) { - - return new TorusGeometry(data.radius, data.tube, data.radialSegments, data.tubularSegments, data.arc); - - } - -} - -class TorusKnotGeometry extends BufferGeometry { - - constructor(radius = 1, tube = 0.4, tubularSegments = 64, radialSegments = 8, p = 2, q = 3) { - - super(); - - this.type = 'TorusKnotGeometry'; - - this.parameters = { - radius: radius, - tube: tube, - tubularSegments: tubularSegments, - radialSegments: radialSegments, - p: p, - q: q - }; - - tubularSegments = Math.floor(tubularSegments); - radialSegments = Math.floor(radialSegments); - - // buffers - - const indices = []; - const vertices = []; - const normals = []; - const uvs = []; - - // helper variables - - const vertex = new Vector3(); - const normal = new Vector3(); - - const P1 = new Vector3(); - const P2 = new Vector3(); - - const B = new 
Vector3(); - const T = new Vector3(); - const N = new Vector3(); - - // generate vertices, normals and uvs - - for (let i = 0; i <= tubularSegments; ++i) { - - // the radian "u" is used to calculate the position on the torus curve of the current tubular segment - - const u = i / tubularSegments * p * Math.PI * 2; - - // now we calculate two points. P1 is our current position on the curve, P2 is a little farther ahead. - // these points are used to create a special "coordinate space", which is necessary to calculate the correct vertex positions - - calculatePositionOnCurve(u, p, q, radius, P1); - calculatePositionOnCurve(u + 0.01, p, q, radius, P2); - - // calculate orthonormal basis - - T.subVectors(P2, P1); - N.addVectors(P2, P1); - B.crossVectors(T, N); - N.crossVectors(B, T); - - // normalize B, N. T can be ignored, we don't use it - - B.normalize(); - N.normalize(); - - for (let j = 0; j <= radialSegments; ++j) { - - // now calculate the vertices. they are nothing more than an extrusion of the torus curve. - // because we extrude a shape in the xy-plane, there is no need to calculate a z-value. - - const v = j / radialSegments * Math.PI * 2; - const cx = - tube * Math.cos(v); - const cy = tube * Math.sin(v); - - // now calculate the final vertex position. - // first we orient the extrusion with our basis vectors, then we add it to the current position on the curve - - vertex.x = P1.x + (cx * N.x + cy * B.x); - vertex.y = P1.y + (cx * N.y + cy * B.y); - vertex.z = P1.z + (cx * N.z + cy * B.z); - - vertices.push(vertex.x, vertex.y, vertex.z); - - // normal (P1 is always the center/origin of the extrusion, thus we can use it to calculate the normal) - - normal.subVectors(vertex, P1).normalize(); - - normals.push(normal.x, normal.y, normal.z); - - // uv - - uvs.push(i / tubularSegments); - uvs.push(j / radialSegments); - - } - - } - - // generate indices - - for (let j = 1; j <= tubularSegments; j++) { - - for (let i = 1; i <= radialSegments; i++) { - - // indices - - const a = (radialSegments + 1) * (j - 1) + (i - 1); - const b = (radialSegments + 1) * j + (i - 1); - const c = (radialSegments + 1) * j + i; - const d = (radialSegments + 1) * (j - 1) + i; - - // faces - - indices.push(a, b, d); - indices.push(b, c, d); - - } - - } - - // build geometry - - this.setIndex(indices); - this.setAttribute('position', new Float32BufferAttribute(vertices, 3)); - this.setAttribute('normal', new Float32BufferAttribute(normals, 3)); - this.setAttribute('uv', new Float32BufferAttribute(uvs, 2)); - - // this function calculates the current position on the torus curve - - function calculatePositionOnCurve(u, p, q, radius, position) { - - const cu = Math.cos(u); - const su = Math.sin(u); - const quOverP = q / p * u; - const cs = Math.cos(quOverP); - - position.x = radius * (2 + cs) * 0.5 * cu; - position.y = radius * (2 + cs) * su * 0.5; - position.z = radius * Math.sin(quOverP) * 0.5; - - } - - } - - static fromJSON(data) { - - return new TorusKnotGeometry(data.radius, data.tube, data.tubularSegments, data.radialSegments, data.p, data.q); - - } - -} - -class TubeGeometry extends BufferGeometry { - - constructor(path = new QuadraticBezierCurve3(new Vector3(- 1, - 1, 0), new Vector3(- 1, 1, 0), new Vector3(1, 1, 0)), tubularSegments = 64, radius = 1, radialSegments = 8, closed = false) { - - super(); - - this.type = 'TubeGeometry'; - - this.parameters = { - path: path, - tubularSegments: tubularSegments, - radius: radius, - radialSegments: radialSegments, - closed: closed - }; - - const frames = 
path.computeFrenetFrames(tubularSegments, closed); - - // expose internals - - this.tangents = frames.tangents; - this.normals = frames.normals; - this.binormals = frames.binormals; - - // helper variables - - const vertex = new Vector3(); - const normal = new Vector3(); - const uv = new Vector2(); - let P = new Vector3(); - - // buffer - - const vertices = []; - const normals = []; - const uvs = []; - const indices = []; - - // create buffer data - - generateBufferData(); - - // build geometry - - this.setIndex(indices); - this.setAttribute('position', new Float32BufferAttribute(vertices, 3)); - this.setAttribute('normal', new Float32BufferAttribute(normals, 3)); - this.setAttribute('uv', new Float32BufferAttribute(uvs, 2)); - - // functions - - function generateBufferData() { - - for (let i = 0; i < tubularSegments; i++) { - - generateSegment(i); - - } - - // if the geometry is not closed, generate the last row of vertices and normals - // at the regular position on the given path - // - // if the geometry is closed, duplicate the first row of vertices and normals (uvs will differ) - - generateSegment((closed === false) ? tubularSegments : 0); - - // uvs are generated in a separate function. - // this makes it easy compute correct values for closed geometries - - generateUVs(); - - // finally create faces - - generateIndices(); - - } - - function generateSegment(i) { - - // we use getPointAt to sample evenly distributed points from the given path - - P = path.getPointAt(i / tubularSegments, P); - - // retrieve corresponding normal and binormal - - const N = frames.normals[i]; - const B = frames.binormals[i]; - - // generate normals and vertices for the current segment - - for (let j = 0; j <= radialSegments; j++) { - - const v = j / radialSegments * Math.PI * 2; - - const sin = Math.sin(v); - const cos = - Math.cos(v); - - // normal - - normal.x = (cos * N.x + sin * B.x); - normal.y = (cos * N.y + sin * B.y); - normal.z = (cos * N.z + sin * B.z); - normal.normalize(); - - normals.push(normal.x, normal.y, normal.z); - - // vertex - - vertex.x = P.x + radius * normal.x; - vertex.y = P.y + radius * normal.y; - vertex.z = P.z + radius * normal.z; - - vertices.push(vertex.x, vertex.y, vertex.z); - - } - - } - - function generateIndices() { - - for (let j = 1; j <= tubularSegments; j++) { - - for (let i = 1; i <= radialSegments; i++) { - - const a = (radialSegments + 1) * (j - 1) + (i - 1); - const b = (radialSegments + 1) * j + (i - 1); - const c = (radialSegments + 1) * j + i; - const d = (radialSegments + 1) * (j - 1) + i; - - // faces - - indices.push(a, b, d); - indices.push(b, c, d); - - } - - } - - } - - function generateUVs() { - - for (let i = 0; i <= tubularSegments; i++) { - - for (let j = 0; j <= radialSegments; j++) { - - uv.x = i / tubularSegments; - uv.y = j / radialSegments; - - uvs.push(uv.x, uv.y); - - } - - } - - } - - } - - toJSON() { - - const data = super.toJSON(); - - data.path = this.parameters.path.toJSON(); - - return data; - - } - - static fromJSON(data) { - - // This only works for built-in curves (e.g. CatmullRomCurve3). - // User defined curves or instances of CurvePath will not be deserialized. 
- return new TubeGeometry( - new Curves[data.path.type]().fromJSON(data.path), - data.tubularSegments, - data.radius, - data.radialSegments, - data.closed - ); - - } - -} - -class WireframeGeometry extends BufferGeometry { - - constructor(geometry = null) { - - super(); - - this.type = 'WireframeGeometry'; - - this.parameters = { - geometry: geometry - }; - - if (geometry !== null) { - - // buffer - - const vertices = []; - const edges = new Set(); - - // helper variables - - const start = new Vector3(); - const end = new Vector3(); - - if (geometry.index !== null) { - - // indexed BufferGeometry - - const position = geometry.attributes.position; - const indices = geometry.index; - let groups = geometry.groups; - - if (groups.length === 0) { - - groups = [{ start: 0, count: indices.count, materialIndex: 0 }]; - - } - - // create a data structure that contains all edges without duplicates - - for (let o = 0, ol = groups.length; o < ol; ++o) { - - const group = groups[o]; - - const groupStart = group.start; - const groupCount = group.count; - - for (let i = groupStart, l = (groupStart + groupCount); i < l; i += 3) { - - for (let j = 0; j < 3; j++) { - - const index1 = indices.getX(i + j); - const index2 = indices.getX(i + (j + 1) % 3); - - start.fromBufferAttribute(position, index1); - end.fromBufferAttribute(position, index2); - - if (isUniqueEdge(start, end, edges) === true) { - - vertices.push(start.x, start.y, start.z); - vertices.push(end.x, end.y, end.z); - - } - - } - - } - - } - - } else { - - // non-indexed BufferGeometry - - const position = geometry.attributes.position; - - for (let i = 0, l = (position.count / 3); i < l; i++) { - - for (let j = 0; j < 3; j++) { - - // three edges per triangle, an edge is represented as (index1, index2) - // e.g. 
the first triangle has the following edges: (0,1),(1,2),(2,0) - - const index1 = 3 * i + j; - const index2 = 3 * i + ((j + 1) % 3); - - start.fromBufferAttribute(position, index1); - end.fromBufferAttribute(position, index2); - - if (isUniqueEdge(start, end, edges) === true) { - - vertices.push(start.x, start.y, start.z); - vertices.push(end.x, end.y, end.z); - - } - - } - - } - - } - - // build geometry - - this.setAttribute('position', new Float32BufferAttribute(vertices, 3)); - - } - - } - -} - -function isUniqueEdge(start, end, edges) { - - const hash1 = `${start.x},${start.y},${start.z}-${end.x},${end.y},${end.z}`; - const hash2 = `${end.x},${end.y},${end.z}-${start.x},${start.y},${start.z}`; // coincident edge - - if (edges.has(hash1) === true || edges.has(hash2) === true) { - - return false; - - } else { - - edges.add(hash1); - edges.add(hash2); - return true; - - } - -} - -var Geometries = /*#__PURE__*/Object.freeze({ - __proto__: null, - BoxGeometry: BoxGeometry, - CapsuleGeometry: CapsuleGeometry, - CircleGeometry: CircleGeometry, - ConeGeometry: ConeGeometry, - CylinderGeometry: CylinderGeometry, - DodecahedronGeometry: DodecahedronGeometry, - EdgesGeometry: EdgesGeometry, - ExtrudeGeometry: ExtrudeGeometry, - IcosahedronGeometry: IcosahedronGeometry, - LatheGeometry: LatheGeometry, - OctahedronGeometry: OctahedronGeometry, - PlaneGeometry: PlaneGeometry, - PolyhedronGeometry: PolyhedronGeometry, - RingGeometry: RingGeometry, - ShapeGeometry: ShapeGeometry, - SphereGeometry: SphereGeometry, - TetrahedronGeometry: TetrahedronGeometry, - TorusGeometry: TorusGeometry, - TorusKnotGeometry: TorusKnotGeometry, - TubeGeometry: TubeGeometry, - WireframeGeometry: WireframeGeometry -}); - -class ShadowMaterial extends Material { - - constructor(parameters) { - - super(); - - this.isShadowMaterial = true; - - this.type = 'ShadowMaterial'; - - this.color = new Color(0x000000); - this.transparent = true; - - this.fog = true; - - this.setValues(parameters); - - } - - copy(source) { - - super.copy(source); - - this.color.copy(source.color); - - this.fog = source.fog; - - return this; - - } - -} - -class RawShaderMaterial extends ShaderMaterial { - - constructor(parameters) { - - super(parameters); - - this.isRawShaderMaterial = true; - - this.type = 'RawShaderMaterial'; - - } - -} - -class MeshStandardMaterial extends Material { - - constructor(parameters) { - - super(); - - this.isMeshStandardMaterial = true; - - this.defines = { 'STANDARD': '' }; - - this.type = 'MeshStandardMaterial'; - - this.color = new Color(0xffffff); // diffuse - this.roughness = 1.0; - this.metalness = 0.0; - - this.map = null; - - this.lightMap = null; - this.lightMapIntensity = 1.0; - - this.aoMap = null; - this.aoMapIntensity = 1.0; - - this.emissive = new Color(0x000000); - this.emissiveIntensity = 1.0; - this.emissiveMap = null; - - this.bumpMap = null; - this.bumpScale = 1; - - this.normalMap = null; - this.normalMapType = TangentSpaceNormalMap; - this.normalScale = new Vector2(1, 1); - - this.displacementMap = null; - this.displacementScale = 1; - this.displacementBias = 0; - - this.roughnessMap = null; - - this.metalnessMap = null; - - this.alphaMap = null; - - this.envMap = null; - this.envMapIntensity = 1.0; - - this.wireframe = false; - this.wireframeLinewidth = 1; - this.wireframeLinecap = 'round'; - this.wireframeLinejoin = 'round'; - - this.flatShading = false; - - this.fog = true; - - this.setValues(parameters); - - } - - copy(source) { - - super.copy(source); - - this.defines = { 'STANDARD': '' }; - - 
this.color.copy(source.color); - this.roughness = source.roughness; - this.metalness = source.metalness; - - this.map = source.map; - - this.lightMap = source.lightMap; - this.lightMapIntensity = source.lightMapIntensity; - - this.aoMap = source.aoMap; - this.aoMapIntensity = source.aoMapIntensity; - - this.emissive.copy(source.emissive); - this.emissiveMap = source.emissiveMap; - this.emissiveIntensity = source.emissiveIntensity; - - this.bumpMap = source.bumpMap; - this.bumpScale = source.bumpScale; - - this.normalMap = source.normalMap; - this.normalMapType = source.normalMapType; - this.normalScale.copy(source.normalScale); - - this.displacementMap = source.displacementMap; - this.displacementScale = source.displacementScale; - this.displacementBias = source.displacementBias; - - this.roughnessMap = source.roughnessMap; - - this.metalnessMap = source.metalnessMap; - - this.alphaMap = source.alphaMap; - - this.envMap = source.envMap; - this.envMapIntensity = source.envMapIntensity; - - this.wireframe = source.wireframe; - this.wireframeLinewidth = source.wireframeLinewidth; - this.wireframeLinecap = source.wireframeLinecap; - this.wireframeLinejoin = source.wireframeLinejoin; - - this.flatShading = source.flatShading; - - this.fog = source.fog; - - return this; - - } - -} - -class MeshPhysicalMaterial extends MeshStandardMaterial { - - constructor(parameters) { - - super(); - - this.isMeshPhysicalMaterial = true; - - this.defines = { - - 'STANDARD': '', - 'PHYSICAL': '' - - }; - - this.type = 'MeshPhysicalMaterial'; - - this.clearcoatMap = null; - this.clearcoatRoughness = 0.0; - this.clearcoatRoughnessMap = null; - this.clearcoatNormalScale = new Vector2(1, 1); - this.clearcoatNormalMap = null; - - this.ior = 1.5; - - Object.defineProperty(this, 'reflectivity', { - get: function () { - - return (clamp(2.5 * (this.ior - 1) / (this.ior + 1), 0, 1)); - - }, - set: function (reflectivity) { - - this.ior = (1 + 0.4 * reflectivity) / (1 - 0.4 * reflectivity); - - } - }); - - this.iridescenceMap = null; - this.iridescenceIOR = 1.3; - this.iridescenceThicknessRange = [100, 400]; - this.iridescenceThicknessMap = null; - - this.sheenColor = new Color(0x000000); - this.sheenColorMap = null; - this.sheenRoughness = 1.0; - this.sheenRoughnessMap = null; - - this.transmissionMap = null; - - this.thickness = 0; - this.thicknessMap = null; - this.attenuationDistance = Infinity; - this.attenuationColor = new Color(1, 1, 1); - - this.specularIntensity = 1.0; - this.specularIntensityMap = null; - this.specularColor = new Color(1, 1, 1); - this.specularColorMap = null; - - this._sheen = 0.0; - this._clearcoat = 0; - this._iridescence = 0; - this._transmission = 0; - - this.setValues(parameters); - - } - - get sheen() { - - return this._sheen; - - } - - set sheen(value) { - - if (this._sheen > 0 !== value > 0) { - - this.version++; - - } - - this._sheen = value; - - } - - get clearcoat() { - - return this._clearcoat; - - } - - set clearcoat(value) { - - if (this._clearcoat > 0 !== value > 0) { - - this.version++; - - } - - this._clearcoat = value; - - } - - get iridescence() { - - return this._iridescence; - - } - - set iridescence(value) { - - if (this._iridescence > 0 !== value > 0) { - - this.version++; - - } - - this._iridescence = value; - - } - - get transmission() { - - return this._transmission; - - } - - set transmission(value) { - - if (this._transmission > 0 !== value > 0) { - - this.version++; - - } - - this._transmission = value; - - } - - copy(source) { - - super.copy(source); - - 
this.defines = { - - 'STANDARD': '', - 'PHYSICAL': '' - - }; - - this.clearcoat = source.clearcoat; - this.clearcoatMap = source.clearcoatMap; - this.clearcoatRoughness = source.clearcoatRoughness; - this.clearcoatRoughnessMap = source.clearcoatRoughnessMap; - this.clearcoatNormalMap = source.clearcoatNormalMap; - this.clearcoatNormalScale.copy(source.clearcoatNormalScale); - - this.ior = source.ior; - - this.iridescence = source.iridescence; - this.iridescenceMap = source.iridescenceMap; - this.iridescenceIOR = source.iridescenceIOR; - this.iridescenceThicknessRange = [...source.iridescenceThicknessRange]; - this.iridescenceThicknessMap = source.iridescenceThicknessMap; - - this.sheen = source.sheen; - this.sheenColor.copy(source.sheenColor); - this.sheenColorMap = source.sheenColorMap; - this.sheenRoughness = source.sheenRoughness; - this.sheenRoughnessMap = source.sheenRoughnessMap; - - this.transmission = source.transmission; - this.transmissionMap = source.transmissionMap; - - this.thickness = source.thickness; - this.thicknessMap = source.thicknessMap; - this.attenuationDistance = source.attenuationDistance; - this.attenuationColor.copy(source.attenuationColor); - - this.specularIntensity = source.specularIntensity; - this.specularIntensityMap = source.specularIntensityMap; - this.specularColor.copy(source.specularColor); - this.specularColorMap = source.specularColorMap; - - return this; - - } - -} - -class MeshPhongMaterial extends Material { - - constructor(parameters) { - - super(); - - this.isMeshPhongMaterial = true; - - this.type = 'MeshPhongMaterial'; - - this.color = new Color(0xffffff); // diffuse - this.specular = new Color(0x111111); - this.shininess = 30; - - this.map = null; - - this.lightMap = null; - this.lightMapIntensity = 1.0; - - this.aoMap = null; - this.aoMapIntensity = 1.0; - - this.emissive = new Color(0x000000); - this.emissiveIntensity = 1.0; - this.emissiveMap = null; - - this.bumpMap = null; - this.bumpScale = 1; - - this.normalMap = null; - this.normalMapType = TangentSpaceNormalMap; - this.normalScale = new Vector2(1, 1); - - this.displacementMap = null; - this.displacementScale = 1; - this.displacementBias = 0; - - this.specularMap = null; - - this.alphaMap = null; - - this.envMap = null; - this.combine = MultiplyOperation; - this.reflectivity = 1; - this.refractionRatio = 0.98; - - this.wireframe = false; - this.wireframeLinewidth = 1; - this.wireframeLinecap = 'round'; - this.wireframeLinejoin = 'round'; - - this.flatShading = false; - - this.fog = true; - - this.setValues(parameters); - - } - - copy(source) { - - super.copy(source); - - this.color.copy(source.color); - this.specular.copy(source.specular); - this.shininess = source.shininess; - - this.map = source.map; - - this.lightMap = source.lightMap; - this.lightMapIntensity = source.lightMapIntensity; - - this.aoMap = source.aoMap; - this.aoMapIntensity = source.aoMapIntensity; - - this.emissive.copy(source.emissive); - this.emissiveMap = source.emissiveMap; - this.emissiveIntensity = source.emissiveIntensity; - - this.bumpMap = source.bumpMap; - this.bumpScale = source.bumpScale; - - this.normalMap = source.normalMap; - this.normalMapType = source.normalMapType; - this.normalScale.copy(source.normalScale); - - this.displacementMap = source.displacementMap; - this.displacementScale = source.displacementScale; - this.displacementBias = source.displacementBias; - - this.specularMap = source.specularMap; - - this.alphaMap = source.alphaMap; - - this.envMap = source.envMap; - this.combine = 
source.combine; - this.reflectivity = source.reflectivity; - this.refractionRatio = source.refractionRatio; - - this.wireframe = source.wireframe; - this.wireframeLinewidth = source.wireframeLinewidth; - this.wireframeLinecap = source.wireframeLinecap; - this.wireframeLinejoin = source.wireframeLinejoin; - - this.flatShading = source.flatShading; - - this.fog = source.fog; - - return this; - - } - -} - -class MeshToonMaterial extends Material { - - constructor(parameters) { - - super(); - - this.isMeshToonMaterial = true; - - this.defines = { 'TOON': '' }; - - this.type = 'MeshToonMaterial'; - - this.color = new Color(0xffffff); - - this.map = null; - this.gradientMap = null; - - this.lightMap = null; - this.lightMapIntensity = 1.0; - - this.aoMap = null; - this.aoMapIntensity = 1.0; - - this.emissive = new Color(0x000000); - this.emissiveIntensity = 1.0; - this.emissiveMap = null; - - this.bumpMap = null; - this.bumpScale = 1; - - this.normalMap = null; - this.normalMapType = TangentSpaceNormalMap; - this.normalScale = new Vector2(1, 1); - - this.displacementMap = null; - this.displacementScale = 1; - this.displacementBias = 0; - - this.alphaMap = null; - - this.wireframe = false; - this.wireframeLinewidth = 1; - this.wireframeLinecap = 'round'; - this.wireframeLinejoin = 'round'; - - this.fog = true; - - this.setValues(parameters); - - } - - copy(source) { - - super.copy(source); - - this.color.copy(source.color); - - this.map = source.map; - this.gradientMap = source.gradientMap; - - this.lightMap = source.lightMap; - this.lightMapIntensity = source.lightMapIntensity; - - this.aoMap = source.aoMap; - this.aoMapIntensity = source.aoMapIntensity; - - this.emissive.copy(source.emissive); - this.emissiveMap = source.emissiveMap; - this.emissiveIntensity = source.emissiveIntensity; - - this.bumpMap = source.bumpMap; - this.bumpScale = source.bumpScale; - - this.normalMap = source.normalMap; - this.normalMapType = source.normalMapType; - this.normalScale.copy(source.normalScale); - - this.displacementMap = source.displacementMap; - this.displacementScale = source.displacementScale; - this.displacementBias = source.displacementBias; - - this.alphaMap = source.alphaMap; - - this.wireframe = source.wireframe; - this.wireframeLinewidth = source.wireframeLinewidth; - this.wireframeLinecap = source.wireframeLinecap; - this.wireframeLinejoin = source.wireframeLinejoin; - - this.fog = source.fog; - - return this; - - } - -} - -class MeshNormalMaterial extends Material { - - constructor(parameters) { - - super(); - - this.isMeshNormalMaterial = true; - - this.type = 'MeshNormalMaterial'; - - this.bumpMap = null; - this.bumpScale = 1; - - this.normalMap = null; - this.normalMapType = TangentSpaceNormalMap; - this.normalScale = new Vector2(1, 1); - - this.displacementMap = null; - this.displacementScale = 1; - this.displacementBias = 0; - - this.wireframe = false; - this.wireframeLinewidth = 1; - - this.flatShading = false; - - this.setValues(parameters); - - } - - copy(source) { - - super.copy(source); - - this.bumpMap = source.bumpMap; - this.bumpScale = source.bumpScale; - - this.normalMap = source.normalMap; - this.normalMapType = source.normalMapType; - this.normalScale.copy(source.normalScale); - - this.displacementMap = source.displacementMap; - this.displacementScale = source.displacementScale; - this.displacementBias = source.displacementBias; - - this.wireframe = source.wireframe; - this.wireframeLinewidth = source.wireframeLinewidth; - - this.flatShading = source.flatShading; - - return this; 
- - } - -} - -class MeshLambertMaterial extends Material { - - constructor(parameters) { - - super(); - - this.isMeshLambertMaterial = true; - - this.type = 'MeshLambertMaterial'; - - this.color = new Color(0xffffff); // diffuse - - this.map = null; - - this.lightMap = null; - this.lightMapIntensity = 1.0; - - this.aoMap = null; - this.aoMapIntensity = 1.0; - - this.emissive = new Color(0x000000); - this.emissiveIntensity = 1.0; - this.emissiveMap = null; - - this.bumpMap = null; - this.bumpScale = 1; - - this.normalMap = null; - this.normalMapType = TangentSpaceNormalMap; - this.normalScale = new Vector2(1, 1); - - this.displacementMap = null; - this.displacementScale = 1; - this.displacementBias = 0; - - this.specularMap = null; - - this.alphaMap = null; - - this.envMap = null; - this.combine = MultiplyOperation; - this.reflectivity = 1; - this.refractionRatio = 0.98; - - this.wireframe = false; - this.wireframeLinewidth = 1; - this.wireframeLinecap = 'round'; - this.wireframeLinejoin = 'round'; - - this.flatShading = false; - - this.fog = true; - - this.setValues(parameters); - - } - - copy(source) { - - super.copy(source); - - this.color.copy(source.color); - - this.map = source.map; - - this.lightMap = source.lightMap; - this.lightMapIntensity = source.lightMapIntensity; - - this.aoMap = source.aoMap; - this.aoMapIntensity = source.aoMapIntensity; - - this.emissive.copy(source.emissive); - this.emissiveMap = source.emissiveMap; - this.emissiveIntensity = source.emissiveIntensity; - - this.bumpMap = source.bumpMap; - this.bumpScale = source.bumpScale; - - this.normalMap = source.normalMap; - this.normalMapType = source.normalMapType; - this.normalScale.copy(source.normalScale); - - this.displacementMap = source.displacementMap; - this.displacementScale = source.displacementScale; - this.displacementBias = source.displacementBias; - - this.specularMap = source.specularMap; - - this.alphaMap = source.alphaMap; - - this.envMap = source.envMap; - this.combine = source.combine; - this.reflectivity = source.reflectivity; - this.refractionRatio = source.refractionRatio; - - this.wireframe = source.wireframe; - this.wireframeLinewidth = source.wireframeLinewidth; - this.wireframeLinecap = source.wireframeLinecap; - this.wireframeLinejoin = source.wireframeLinejoin; - - this.flatShading = source.flatShading; - - this.fog = source.fog; - - return this; - - } - -} - -class MeshMatcapMaterial extends Material { - - constructor(parameters) { - - super(); - - this.isMeshMatcapMaterial = true; - - this.defines = { 'MATCAP': '' }; - - this.type = 'MeshMatcapMaterial'; - - this.color = new Color(0xffffff); // diffuse - - this.matcap = null; - - this.map = null; - - this.bumpMap = null; - this.bumpScale = 1; - - this.normalMap = null; - this.normalMapType = TangentSpaceNormalMap; - this.normalScale = new Vector2(1, 1); - - this.displacementMap = null; - this.displacementScale = 1; - this.displacementBias = 0; - - this.alphaMap = null; - - this.flatShading = false; - - this.fog = true; - - this.setValues(parameters); - - } - - - copy(source) { - - super.copy(source); - - this.defines = { 'MATCAP': '' }; - - this.color.copy(source.color); - - this.matcap = source.matcap; - - this.map = source.map; - - this.bumpMap = source.bumpMap; - this.bumpScale = source.bumpScale; - - this.normalMap = source.normalMap; - this.normalMapType = source.normalMapType; - this.normalScale.copy(source.normalScale); - - this.displacementMap = source.displacementMap; - this.displacementScale = source.displacementScale; - 
this.displacementBias = source.displacementBias; - - this.alphaMap = source.alphaMap; - - this.flatShading = source.flatShading; - - this.fog = source.fog; - - return this; - - } - -} - -class LineDashedMaterial extends LineBasicMaterial { - - constructor(parameters) { - - super(); - - this.isLineDashedMaterial = true; - - this.type = 'LineDashedMaterial'; - - this.scale = 1; - this.dashSize = 3; - this.gapSize = 1; - - this.setValues(parameters); - - } - - copy(source) { - - super.copy(source); - - this.scale = source.scale; - this.dashSize = source.dashSize; - this.gapSize = source.gapSize; - - return this; - - } - -} - -// same as Array.prototype.slice, but also works on typed arrays -function arraySlice(array, from, to) { - - if (isTypedArray(array)) { - - // in ios9 array.subarray(from, undefined) will return empty array - // but array.subarray(from) or array.subarray(from, len) is correct - return new array.constructor(array.subarray(from, to !== undefined ? to : array.length)); - - } - - return array.slice(from, to); - -} - -// converts an array to a specific type -function convertArray(array, type, forceClone) { - - if (!array || // let 'undefined' and 'null' pass - !forceClone && array.constructor === type) return array; - - if (typeof type.BYTES_PER_ELEMENT === 'number') { - - return new type(array); // create typed array - - } - - return Array.prototype.slice.call(array); // create Array - -} - -function isTypedArray(object) { - - return ArrayBuffer.isView(object) && - !(object instanceof DataView); - -} - -// returns an array by which times and values can be sorted -function getKeyframeOrder(times) { - - function compareTime(i, j) { - - return times[i] - times[j]; - - } - - const n = times.length; - const result = new Array(n); - for (let i = 0; i !== n; ++i) result[i] = i; - - result.sort(compareTime); - - return result; - -} - -// uses the array previously returned by 'getKeyframeOrder' to sort data -function sortedArray(values, stride, order) { - - const nValues = values.length; - const result = new values.constructor(nValues); - - for (let i = 0, dstOffset = 0; dstOffset !== nValues; ++i) { - - const srcOffset = order[i] * stride; - - for (let j = 0; j !== stride; ++j) { - - result[dstOffset++] = values[srcOffset + j]; - - } - - } - - return result; - -} - -// function for parsing AOS keyframe formats -function flattenJSON(jsonKeys, times, values, valuePropertyName) { - - let i = 1, key = jsonKeys[0]; - - while (key !== undefined && key[valuePropertyName] === undefined) { - - key = jsonKeys[i++]; - - } - - if (key === undefined) return; // no data - - let value = key[valuePropertyName]; - if (value === undefined) return; // no data - - if (Array.isArray(value)) { - - do { - - value = key[valuePropertyName]; - - if (value !== undefined) { - - times.push(key.time); - values.push.apply(values, value); // push all elements - - } - - key = jsonKeys[i++]; - - } while (key !== undefined); - - } else if (value.toArray !== undefined) { - - // ...assume THREE.Math-ish - - do { - - value = key[valuePropertyName]; - - if (value !== undefined) { - - times.push(key.time); - value.toArray(values, values.length); - - } - - key = jsonKeys[i++]; - - } while (key !== undefined); - - } else { - - // otherwise push as-is - - do { - - value = key[valuePropertyName]; - - if (value !== undefined) { - - times.push(key.time); - values.push(value); - - } - - key = jsonKeys[i++]; - - } while (key !== undefined); - - } - -} - -function subclip(sourceClip, name, startFrame, endFrame, fps = 30) { - - 
const clip = sourceClip.clone(); - - clip.name = name; - - const tracks = []; - - for (let i = 0; i < clip.tracks.length; ++i) { - - const track = clip.tracks[i]; - const valueSize = track.getValueSize(); - - const times = []; - const values = []; - - for (let j = 0; j < track.times.length; ++j) { - - const frame = track.times[j] * fps; - - if (frame < startFrame || frame >= endFrame) continue; - - times.push(track.times[j]); - - for (let k = 0; k < valueSize; ++k) { - - values.push(track.values[j * valueSize + k]); - - } - - } - - if (times.length === 0) continue; - - track.times = convertArray(times, track.times.constructor); - track.values = convertArray(values, track.values.constructor); - - tracks.push(track); - - } - - clip.tracks = tracks; - - // find minimum .times value across all tracks in the trimmed clip - - let minStartTime = Infinity; - - for (let i = 0; i < clip.tracks.length; ++i) { - - if (minStartTime > clip.tracks[i].times[0]) { - - minStartTime = clip.tracks[i].times[0]; - - } - - } - - // shift all tracks such that clip begins at t=0 - - for (let i = 0; i < clip.tracks.length; ++i) { - - clip.tracks[i].shift(- 1 * minStartTime); - - } - - clip.resetDuration(); - - return clip; - -} - -function makeClipAdditive(targetClip, referenceFrame = 0, referenceClip = targetClip, fps = 30) { - - if (fps <= 0) fps = 30; - - const numTracks = referenceClip.tracks.length; - const referenceTime = referenceFrame / fps; - - // Make each track's values relative to the values at the reference frame - for (let i = 0; i < numTracks; ++i) { - - const referenceTrack = referenceClip.tracks[i]; - const referenceTrackType = referenceTrack.ValueTypeName; - - // Skip this track if it's non-numeric - if (referenceTrackType === 'bool' || referenceTrackType === 'string') continue; - - // Find the track in the target clip whose name and type matches the reference track - const targetTrack = targetClip.tracks.find(function (track) { - - return track.name === referenceTrack.name - && track.ValueTypeName === referenceTrackType; - - }); - - if (targetTrack === undefined) continue; - - let referenceOffset = 0; - const referenceValueSize = referenceTrack.getValueSize(); - - if (referenceTrack.createInterpolant.isInterpolantFactoryMethodGLTFCubicSpline) { - - referenceOffset = referenceValueSize / 3; - - } - - let targetOffset = 0; - const targetValueSize = targetTrack.getValueSize(); - - if (targetTrack.createInterpolant.isInterpolantFactoryMethodGLTFCubicSpline) { - - targetOffset = targetValueSize / 3; - - } - - const lastIndex = referenceTrack.times.length - 1; - let referenceValue; - - // Find the value to subtract out of the track - if (referenceTime <= referenceTrack.times[0]) { - - // Reference frame is earlier than the first keyframe, so just use the first keyframe - const startIndex = referenceOffset; - const endIndex = referenceValueSize - referenceOffset; - referenceValue = arraySlice(referenceTrack.values, startIndex, endIndex); - - } else if (referenceTime >= referenceTrack.times[lastIndex]) { - - // Reference frame is after the last keyframe, so just use the last keyframe - const startIndex = lastIndex * referenceValueSize + referenceOffset; - const endIndex = startIndex + referenceValueSize - referenceOffset; - referenceValue = arraySlice(referenceTrack.values, startIndex, endIndex); - - } else { - - // Interpolate to the reference value - const interpolant = referenceTrack.createInterpolant(); - const startIndex = referenceOffset; - const endIndex = referenceValueSize - referenceOffset; - 
interpolant.evaluate(referenceTime); - referenceValue = arraySlice(interpolant.resultBuffer, startIndex, endIndex); - - } - - // Conjugate the quaternion - if (referenceTrackType === 'quaternion') { - - const referenceQuat = new Quaternion().fromArray(referenceValue).normalize().conjugate(); - referenceQuat.toArray(referenceValue); - - } - - // Subtract the reference value from all of the track values - - const numTimes = targetTrack.times.length; - for (let j = 0; j < numTimes; ++j) { - - const valueStart = j * targetValueSize + targetOffset; - - if (referenceTrackType === 'quaternion') { - - // Multiply the conjugate for quaternion track types - Quaternion.multiplyQuaternionsFlat( - targetTrack.values, - valueStart, - referenceValue, - 0, - targetTrack.values, - valueStart - ); - - } else { - - const valueEnd = targetValueSize - targetOffset * 2; - - // Subtract each value for all other numeric track types - for (let k = 0; k < valueEnd; ++k) { - - targetTrack.values[valueStart + k] -= referenceValue[k]; - - } - - } - - } - - } - - targetClip.blendMode = AdditiveAnimationBlendMode; - - return targetClip; - -} - -var AnimationUtils = /*#__PURE__*/Object.freeze({ - __proto__: null, - arraySlice: arraySlice, - convertArray: convertArray, - flattenJSON: flattenJSON, - getKeyframeOrder: getKeyframeOrder, - isTypedArray: isTypedArray, - makeClipAdditive: makeClipAdditive, - sortedArray: sortedArray, - subclip: subclip -}); - -/** - * Abstract base class of interpolants over parametric samples. - * - * The parameter domain is one dimensional, typically the time or a path - * along a curve defined by the data. - * - * The sample values can have any dimensionality and derived classes may - * apply special interpretations to the data. - * - * This class provides the interval seek in a Template Method, deferring - * the actual interpolation to derived classes. - * - * Time complexity is O(1) for linear access crossing at most two points - * and O(log N) for random access, where N is the number of positions. - * - * References: - * - * http://www.oodesign.com/template-method-pattern.html - * - */ - -class Interpolant { - - constructor(parameterPositions, sampleValues, sampleSize, resultBuffer) { - - this.parameterPositions = parameterPositions; - this._cachedIndex = 0; - - this.resultBuffer = resultBuffer !== undefined ? - resultBuffer : new sampleValues.constructor(sampleSize); - this.sampleValues = sampleValues; - this.valueSize = sampleSize; - - this.settings = null; - this.DefaultSettings_ = {}; - - } - - evaluate(t) { - - const pp = this.parameterPositions; - let i1 = this._cachedIndex, - t1 = pp[i1], - t0 = pp[i1 - 1]; - - validate_interval: { - - seek: { - - let right; - - linear_scan: { - - //- See http://jsperf.com/comparison-to-undefined/3 - //- slower code: - //- - //- if ( t >= t1 || t1 === undefined ) { - forward_scan: if (!(t < t1)) { - - for (let giveUpAt = i1 + 2; ;) { - - if (t1 === undefined) { - - if (t < t0) break forward_scan; - - // after end - - i1 = pp.length; - this._cachedIndex = i1; - return this.copySampleValue_(i1 - 1); - - } - - if (i1 === giveUpAt) break; // this loop - - t0 = t1; - t1 = pp[++i1]; - - if (t < t1) { - - // we have arrived at the sought interval - break seek; - - } - - } - - // prepare binary search on the right side of the index - right = pp.length; - break linear_scan; - - } - - //- slower code: - //- if ( t < t0 || t0 === undefined ) { - if (!(t >= t0)) { - - // looping? 
- - const t1global = pp[1]; - - if (t < t1global) { - - i1 = 2; // + 1, using the scan for the details - t0 = t1global; - - } - - // linear reverse scan - - for (let giveUpAt = i1 - 2; ;) { - - if (t0 === undefined) { - - // before start - - this._cachedIndex = 0; - return this.copySampleValue_(0); - - } - - if (i1 === giveUpAt) break; // this loop - - t1 = t0; - t0 = pp[--i1 - 1]; - - if (t >= t0) { - - // we have arrived at the sought interval - break seek; - - } - - } - - // prepare binary search on the left side of the index - right = i1; - i1 = 0; - break linear_scan; - - } - - // the interval is valid - - break validate_interval; - - } // linear scan - - // binary search - - while (i1 < right) { - - const mid = (i1 + right) >>> 1; - - if (t < pp[mid]) { - - right = mid; - - } else { - - i1 = mid + 1; - - } - - } - - t1 = pp[i1]; - t0 = pp[i1 - 1]; - - // check boundary cases, again - - if (t0 === undefined) { - - this._cachedIndex = 0; - return this.copySampleValue_(0); - - } - - if (t1 === undefined) { - - i1 = pp.length; - this._cachedIndex = i1; - return this.copySampleValue_(i1 - 1); - - } - - } // seek - - this._cachedIndex = i1; - - this.intervalChanged_(i1, t0, t1); - - } // validate_interval - - return this.interpolate_(i1, t0, t, t1); - - } - - getSettings_() { - - return this.settings || this.DefaultSettings_; - - } - - copySampleValue_(index) { - - // copies a sample value to the result buffer - - const result = this.resultBuffer, - values = this.sampleValues, - stride = this.valueSize, - offset = index * stride; - - for (let i = 0; i !== stride; ++i) { - - result[i] = values[offset + i]; - - } - - return result; - - } - - // Template methods for derived classes: - - interpolate_( /* i1, t0, t, t1 */) { - - throw new Error('call to abstract method'); - // implementations shall return this.resultBuffer - - } - - intervalChanged_( /* i1, t0, t1 */) { - - // empty - - } - -} - -/** - * Fast and simple cubic spline interpolant. - * - * It was derived from a Hermitian construction setting the first derivative - * at each sample position to the linear slope between neighboring positions - * over their parameter interval. - */ - -class CubicInterpolant extends Interpolant { - - constructor(parameterPositions, sampleValues, sampleSize, resultBuffer) { - - super(parameterPositions, sampleValues, sampleSize, resultBuffer); - - this._weightPrev = - 0; - this._offsetPrev = - 0; - this._weightNext = - 0; - this._offsetNext = - 0; - - this.DefaultSettings_ = { - - endingStart: ZeroCurvatureEnding, - endingEnd: ZeroCurvatureEnding - - }; - - } - - intervalChanged_(i1, t0, t1) { - - const pp = this.parameterPositions; - let iPrev = i1 - 2, - iNext = i1 + 1, - - tPrev = pp[iPrev], - tNext = pp[iNext]; - - if (tPrev === undefined) { - - switch (this.getSettings_().endingStart) { - - case ZeroSlopeEnding: - - // f'(t0) = 0 - iPrev = i1; - tPrev = 2 * t0 - t1; - - break; - - case WrapAroundEnding: - - // use the other end of the curve - iPrev = pp.length - 2; - tPrev = t0 + pp[iPrev] - pp[iPrev + 1]; - - break; - - default: // ZeroCurvatureEnding - - // f''(t0) = 0 a.k.a. Natural Spline - iPrev = i1; - tPrev = t1; - - } - - } - - if (tNext === undefined) { - - switch (this.getSettings_().endingEnd) { - - case ZeroSlopeEnding: - - // f'(tN) = 0 - iNext = i1; - tNext = 2 * t1 - t0; - - break; - - case WrapAroundEnding: - - // use the other end of the curve - iNext = 1; - tNext = t1 + pp[1] - pp[0]; - - break; - - default: // ZeroCurvatureEnding - - // f''(tN) = 0, a.k.a. 
Natural Spline - iNext = i1 - 1; - tNext = t0; - - } - - } - - const halfDt = (t1 - t0) * 0.5, - stride = this.valueSize; - - this._weightPrev = halfDt / (t0 - tPrev); - this._weightNext = halfDt / (tNext - t1); - this._offsetPrev = iPrev * stride; - this._offsetNext = iNext * stride; - - } - - interpolate_(i1, t0, t, t1) { - - const result = this.resultBuffer, - values = this.sampleValues, - stride = this.valueSize, - - o1 = i1 * stride, o0 = o1 - stride, - oP = this._offsetPrev, oN = this._offsetNext, - wP = this._weightPrev, wN = this._weightNext, - - p = (t - t0) / (t1 - t0), - pp = p * p, - ppp = pp * p; - - // evaluate polynomials - - const sP = - wP * ppp + 2 * wP * pp - wP * p; - const s0 = (1 + wP) * ppp + (- 1.5 - 2 * wP) * pp + (- 0.5 + wP) * p + 1; - const s1 = (- 1 - wN) * ppp + (1.5 + wN) * pp + 0.5 * p; - const sN = wN * ppp - wN * pp; - - // combine data linearly - - for (let i = 0; i !== stride; ++i) { - - result[i] = - sP * values[oP + i] + - s0 * values[o0 + i] + - s1 * values[o1 + i] + - sN * values[oN + i]; - - } - - return result; - - } - -} - -class LinearInterpolant extends Interpolant { - - constructor(parameterPositions, sampleValues, sampleSize, resultBuffer) { - - super(parameterPositions, sampleValues, sampleSize, resultBuffer); - - } - - interpolate_(i1, t0, t, t1) { - - const result = this.resultBuffer, - values = this.sampleValues, - stride = this.valueSize, - - offset1 = i1 * stride, - offset0 = offset1 - stride, - - weight1 = (t - t0) / (t1 - t0), - weight0 = 1 - weight1; - - for (let i = 0; i !== stride; ++i) { - - result[i] = - values[offset0 + i] * weight0 + - values[offset1 + i] * weight1; - - } - - return result; - - } - -} - -/** - * - * Interpolant that evaluates to the sample value at the position preceding - * the parameter. 
- */ - -class DiscreteInterpolant extends Interpolant { - - constructor(parameterPositions, sampleValues, sampleSize, resultBuffer) { - - super(parameterPositions, sampleValues, sampleSize, resultBuffer); - - } - - interpolate_(i1 /*, t0, t, t1 */) { - - return this.copySampleValue_(i1 - 1); - - } - -} - -class KeyframeTrack { - - constructor(name, times, values, interpolation) { - - if (name === undefined) throw new Error('THREE.KeyframeTrack: track name is undefined'); - if (times === undefined || times.length === 0) throw new Error('THREE.KeyframeTrack: no keyframes in track named ' + name); - - this.name = name; - - this.times = convertArray(times, this.TimeBufferType); - this.values = convertArray(values, this.ValueBufferType); - - this.setInterpolation(interpolation || this.DefaultInterpolation); - - } - - // Serialization (in static context, because of constructor invocation - // and automatic invocation of .toJSON): - - static toJSON(track) { - - const trackType = track.constructor; - - let json; - - // derived classes can define a static toJSON method - if (trackType.toJSON !== this.toJSON) { - - json = trackType.toJSON(track); - - } else { - - // by default, we assume the data can be serialized as-is - json = { - - 'name': track.name, - 'times': convertArray(track.times, Array), - 'values': convertArray(track.values, Array) - - }; - - const interpolation = track.getInterpolation(); - - if (interpolation !== track.DefaultInterpolation) { - - json.interpolation = interpolation; - - } - - } - - json.type = track.ValueTypeName; // mandatory - - return json; - - } - - InterpolantFactoryMethodDiscrete(result) { - - return new DiscreteInterpolant(this.times, this.values, this.getValueSize(), result); - - } - - InterpolantFactoryMethodLinear(result) { - - return new LinearInterpolant(this.times, this.values, this.getValueSize(), result); - - } - - InterpolantFactoryMethodSmooth(result) { - - return new CubicInterpolant(this.times, this.values, this.getValueSize(), result); - - } - - setInterpolation(interpolation) { - - let factoryMethod; - - switch (interpolation) { - - case InterpolateDiscrete: - - factoryMethod = this.InterpolantFactoryMethodDiscrete; - - break; - - case InterpolateLinear: - - factoryMethod = this.InterpolantFactoryMethodLinear; - - break; - - case InterpolateSmooth: - - factoryMethod = this.InterpolantFactoryMethodSmooth; - - break; - - } - - if (factoryMethod === undefined) { - - const message = 'unsupported interpolation for ' + - this.ValueTypeName + ' keyframe track named ' + this.name; - - if (this.createInterpolant === undefined) { - - // fall back to default, unless the default itself is messed up - if (interpolation !== this.DefaultInterpolation) { - - this.setInterpolation(this.DefaultInterpolation); - - } else { - - throw new Error(message); // fatal, in this case - - } - - } - - console.warn('THREE.KeyframeTrack:', message); - return this; - - } - - this.createInterpolant = factoryMethod; - - return this; - - } - - getInterpolation() { - - switch (this.createInterpolant) { - - case this.InterpolantFactoryMethodDiscrete: - - return InterpolateDiscrete; - - case this.InterpolantFactoryMethodLinear: - - return InterpolateLinear; - - case this.InterpolantFactoryMethodSmooth: - - return InterpolateSmooth; - - } - - } - - getValueSize() { - - return this.values.length / this.times.length; - - } - - // move all keyframes either forwards or backwards in time - shift(timeOffset) { - - if (timeOffset !== 0.0) { - - const times = this.times; - - for (let i = 0, n = 
times.length; i !== n; ++i) { - - times[i] += timeOffset; - - } - - } - - return this; - - } - - // scale all keyframe times by a factor (useful for frame <-> seconds conversions) - scale(timeScale) { - - if (timeScale !== 1.0) { - - const times = this.times; - - for (let i = 0, n = times.length; i !== n; ++i) { - - times[i] *= timeScale; - - } - - } - - return this; - - } - - // removes keyframes before and after animation without changing any values within the range [startTime, endTime]. - // IMPORTANT: We do not shift around keys to the start of the track time, because for interpolated keys this will change their values - trim(startTime, endTime) { - - const times = this.times, - nKeys = times.length; - - let from = 0, - to = nKeys - 1; - - while (from !== nKeys && times[from] < startTime) { - - ++from; - - } - - while (to !== - 1 && times[to] > endTime) { - - --to; - - } - - ++to; // inclusive -> exclusive bound - - if (from !== 0 || to !== nKeys) { - - // empty tracks are forbidden, so keep at least one keyframe - if (from >= to) { - - to = Math.max(to, 1); - from = to - 1; - - } - - const stride = this.getValueSize(); - this.times = arraySlice(times, from, to); - this.values = arraySlice(this.values, from * stride, to * stride); - - } - - return this; - - } - - // ensure we do not get a GarbageInGarbageOut situation, make sure tracks are at least minimally viable - validate() { - - let valid = true; - - const valueSize = this.getValueSize(); - if (valueSize - Math.floor(valueSize) !== 0) { - - console.error('THREE.KeyframeTrack: Invalid value size in track.', this); - valid = false; - - } - - const times = this.times, - values = this.values, - - nKeys = times.length; - - if (nKeys === 0) { - - console.error('THREE.KeyframeTrack: Track is empty.', this); - valid = false; - - } - - let prevTime = null; - - for (let i = 0; i !== nKeys; i++) { - - const currTime = times[i]; - - if (typeof currTime === 'number' && isNaN(currTime)) { - - console.error('THREE.KeyframeTrack: Time is not a valid number.', this, i, currTime); - valid = false; - break; - - } - - if (prevTime !== null && prevTime > currTime) { - - console.error('THREE.KeyframeTrack: Out of order keys.', this, i, currTime, prevTime); - valid = false; - break; - - } - - prevTime = currTime; - - } - - if (values !== undefined) { - - if (isTypedArray(values)) { - - for (let i = 0, n = values.length; i !== n; ++i) { - - const value = values[i]; - - if (isNaN(value)) { - - console.error('THREE.KeyframeTrack: Value is not a valid number.', this, i, value); - valid = false; - break; - - } - - } - - } - - } - - return valid; - - } - - // removes equivalent sequential keys as common in morph target sequences - // (0,0,0,0,1,1,1,0,0,0,0,0,0,0) --> (0,0,1,1,0,0) - optimize() { - - // times or values may be shared with other tracks, so overwriting is unsafe - const times = arraySlice(this.times), - values = arraySlice(this.values), - stride = this.getValueSize(), - - smoothInterpolation = this.getInterpolation() === InterpolateSmooth, - - lastIndex = times.length - 1; - - let writeIndex = 1; - - for (let i = 1; i < lastIndex; ++i) { - - let keep = false; - - const time = times[i]; - const timeNext = times[i + 1]; - - // remove adjacent keyframes scheduled at the same time - - if (time !== timeNext && (i !== 1 || time !== times[0])) { - - if (!smoothInterpolation) { - - // remove unnecessary keyframes same as their neighbors - - const offset = i * stride, - offsetP = offset - stride, - offsetN = offset + stride; - - for (let j = 0; j !== 
stride; ++j) { - - const value = values[offset + j]; - - if (value !== values[offsetP + j] || - value !== values[offsetN + j]) { - - keep = true; - break; - - } - - } - - } else { - - keep = true; - - } - - } - - // in-place compaction - - if (keep) { - - if (i !== writeIndex) { - - times[writeIndex] = times[i]; - - const readOffset = i * stride, - writeOffset = writeIndex * stride; - - for (let j = 0; j !== stride; ++j) { - - values[writeOffset + j] = values[readOffset + j]; - - } - - } - - ++writeIndex; - - } - - } - - // flush last keyframe (compaction looks ahead) - - if (lastIndex > 0) { - - times[writeIndex] = times[lastIndex]; - - for (let readOffset = lastIndex * stride, writeOffset = writeIndex * stride, j = 0; j !== stride; ++j) { - - values[writeOffset + j] = values[readOffset + j]; - - } - - ++writeIndex; - - } - - if (writeIndex !== times.length) { - - this.times = arraySlice(times, 0, writeIndex); - this.values = arraySlice(values, 0, writeIndex * stride); - - } else { - - this.times = times; - this.values = values; - - } - - return this; - - } - - clone() { - - const times = arraySlice(this.times, 0); - const values = arraySlice(this.values, 0); - - const TypedKeyframeTrack = this.constructor; - const track = new TypedKeyframeTrack(this.name, times, values); - - // Interpolant argument to constructor is not saved, so copy the factory method directly. - track.createInterpolant = this.createInterpolant; - - return track; - - } - -} - -KeyframeTrack.prototype.TimeBufferType = Float32Array; -KeyframeTrack.prototype.ValueBufferType = Float32Array; -KeyframeTrack.prototype.DefaultInterpolation = InterpolateLinear; - -/** - * A Track of Boolean keyframe values. - */ -class BooleanKeyframeTrack extends KeyframeTrack { } - -BooleanKeyframeTrack.prototype.ValueTypeName = 'bool'; -BooleanKeyframeTrack.prototype.ValueBufferType = Array; -BooleanKeyframeTrack.prototype.DefaultInterpolation = InterpolateDiscrete; -BooleanKeyframeTrack.prototype.InterpolantFactoryMethodLinear = undefined; -BooleanKeyframeTrack.prototype.InterpolantFactoryMethodSmooth = undefined; - -/** - * A Track of keyframe values that represent color. - */ -class ColorKeyframeTrack extends KeyframeTrack { } - -ColorKeyframeTrack.prototype.ValueTypeName = 'color'; - -/** - * A Track of numeric keyframe values. - */ -class NumberKeyframeTrack extends KeyframeTrack { } - -NumberKeyframeTrack.prototype.ValueTypeName = 'number'; - -/** - * Spherical linear unit quaternion interpolant. - */ - -class QuaternionLinearInterpolant extends Interpolant { - - constructor(parameterPositions, sampleValues, sampleSize, resultBuffer) { - - super(parameterPositions, sampleValues, sampleSize, resultBuffer); - - } - - interpolate_(i1, t0, t, t1) { - - const result = this.resultBuffer, - values = this.sampleValues, - stride = this.valueSize, - - alpha = (t - t0) / (t1 - t0); - - let offset = i1 * stride; - - for (let end = offset + stride; offset !== end; offset += 4) { - - Quaternion.slerpFlat(result, 0, values, offset - stride, values, offset, alpha); - - } - - return result; - - } - -} - -/** - * A Track of quaternion keyframe values. 
- */ -class QuaternionKeyframeTrack extends KeyframeTrack { - - InterpolantFactoryMethodLinear(result) { - - return new QuaternionLinearInterpolant(this.times, this.values, this.getValueSize(), result); - - } - -} - -QuaternionKeyframeTrack.prototype.ValueTypeName = 'quaternion'; -// ValueBufferType is inherited -QuaternionKeyframeTrack.prototype.DefaultInterpolation = InterpolateLinear; -QuaternionKeyframeTrack.prototype.InterpolantFactoryMethodSmooth = undefined; - -/** - * A Track that interpolates Strings - */ -class StringKeyframeTrack extends KeyframeTrack { } - -StringKeyframeTrack.prototype.ValueTypeName = 'string'; -StringKeyframeTrack.prototype.ValueBufferType = Array; -StringKeyframeTrack.prototype.DefaultInterpolation = InterpolateDiscrete; -StringKeyframeTrack.prototype.InterpolantFactoryMethodLinear = undefined; -StringKeyframeTrack.prototype.InterpolantFactoryMethodSmooth = undefined; - -/** - * A Track of vectored keyframe values. - */ -class VectorKeyframeTrack extends KeyframeTrack { } - -VectorKeyframeTrack.prototype.ValueTypeName = 'vector'; - -class AnimationClip { - - constructor(name, duration = - 1, tracks, blendMode = NormalAnimationBlendMode) { - - this.name = name; - this.tracks = tracks; - this.duration = duration; - this.blendMode = blendMode; - - this.uuid = generateUUID(); - - // this means it should figure out its duration by scanning the tracks - if (this.duration < 0) { - - this.resetDuration(); - - } - - } - - - static parse(json) { - - const tracks = [], - jsonTracks = json.tracks, - frameTime = 1.0 / (json.fps || 1.0); - - for (let i = 0, n = jsonTracks.length; i !== n; ++i) { - - tracks.push(parseKeyframeTrack(jsonTracks[i]).scale(frameTime)); - - } - - const clip = new this(json.name, json.duration, tracks, json.blendMode); - clip.uuid = json.uuid; - - return clip; - - } - - static toJSON(clip) { - - const tracks = [], - clipTracks = clip.tracks; - - const json = { - - 'name': clip.name, - 'duration': clip.duration, - 'tracks': tracks, - 'uuid': clip.uuid, - 'blendMode': clip.blendMode - - }; - - for (let i = 0, n = clipTracks.length; i !== n; ++i) { - - tracks.push(KeyframeTrack.toJSON(clipTracks[i])); - - } - - return json; - - } - - static CreateFromMorphTargetSequence(name, morphTargetSequence, fps, noLoop) { - - const numMorphTargets = morphTargetSequence.length; - const tracks = []; - - for (let i = 0; i < numMorphTargets; i++) { - - let times = []; - let values = []; - - times.push( - (i + numMorphTargets - 1) % numMorphTargets, - i, - (i + 1) % numMorphTargets); - - values.push(0, 1, 0); - - const order = getKeyframeOrder(times); - times = sortedArray(times, 1, order); - values = sortedArray(values, 1, order); - - // if there is a key at the first frame, duplicate it as the - // last frame as well for perfect loop. 
- if (!noLoop && times[0] === 0) { - - times.push(numMorphTargets); - values.push(values[0]); - - } - - tracks.push( - new NumberKeyframeTrack( - '.morphTargetInfluences[' + morphTargetSequence[i].name + ']', - times, values - ).scale(1.0 / fps)); - - } - - return new this(name, - 1, tracks); - - } - - static findByName(objectOrClipArray, name) { - - let clipArray = objectOrClipArray; - - if (!Array.isArray(objectOrClipArray)) { - - const o = objectOrClipArray; - clipArray = o.geometry && o.geometry.animations || o.animations; - - } - - for (let i = 0; i < clipArray.length; i++) { - - if (clipArray[i].name === name) { - - return clipArray[i]; - - } - - } - - return null; - - } - - static CreateClipsFromMorphTargetSequences(morphTargets, fps, noLoop) { - - const animationToMorphTargets = {}; - - // tested with https://regex101.com/ on trick sequences - // such flamingo_flyA_003, flamingo_run1_003, crdeath0059 - const pattern = /^([\w-]*?)([\d]+)$/; - - // sort morph target names into animation groups based - // patterns like Walk_001, Walk_002, Run_001, Run_002 - for (let i = 0, il = morphTargets.length; i < il; i++) { - - const morphTarget = morphTargets[i]; - const parts = morphTarget.name.match(pattern); - - if (parts && parts.length > 1) { - - const name = parts[1]; - - let animationMorphTargets = animationToMorphTargets[name]; - - if (!animationMorphTargets) { - - animationToMorphTargets[name] = animationMorphTargets = []; - - } - - animationMorphTargets.push(morphTarget); - - } - - } - - const clips = []; - - for (const name in animationToMorphTargets) { - - clips.push(this.CreateFromMorphTargetSequence(name, animationToMorphTargets[name], fps, noLoop)); - - } - - return clips; - - } - - // parse the animation.hierarchy format - static parseAnimation(animation, bones) { - - if (!animation) { - - console.error('THREE.AnimationClip: No animation in JSONLoader data.'); - return null; - - } - - const addNonemptyTrack = function (trackType, trackName, animationKeys, propertyName, destTracks) { - - // only return track if there are actually keys. - if (animationKeys.length !== 0) { - - const times = []; - const values = []; - - flattenJSON(animationKeys, times, values, propertyName); - - // empty keys are filtered out, so check again - if (times.length !== 0) { - - destTracks.push(new trackType(trackName, times, values)); - - } - - } - - }; - - const tracks = []; - - const clipName = animation.name || 'default'; - const fps = animation.fps || 30; - const blendMode = animation.blendMode; - - // automatic length determination in AnimationClip. - let duration = animation.length || - 1; - - const hierarchyTracks = animation.hierarchy || []; - - for (let h = 0; h < hierarchyTracks.length; h++) { - - const animationKeys = hierarchyTracks[h].keys; - - // skip empty tracks - if (!animationKeys || animationKeys.length === 0) continue; - - // process morph targets - if (animationKeys[0].morphTargets) { - - // figure out all morph targets used in this track - const morphTargetNames = {}; - - let k; - - for (k = 0; k < animationKeys.length; k++) { - - if (animationKeys[k].morphTargets) { - - for (let m = 0; m < animationKeys[k].morphTargets.length; m++) { - - morphTargetNames[animationKeys[k].morphTargets[m]] = - 1; - - } - - } - - } - - // create a track for each morph target with all zero - // morphTargetInfluences except for the keys in which - // the morphTarget is named. 
- for (const morphTargetName in morphTargetNames) { - - const times = []; - const values = []; - - for (let m = 0; m !== animationKeys[k].morphTargets.length; ++m) { - - const animationKey = animationKeys[k]; - - times.push(animationKey.time); - values.push((animationKey.morphTarget === morphTargetName) ? 1 : 0); - - } - - tracks.push(new NumberKeyframeTrack('.morphTargetInfluence[' + morphTargetName + ']', times, values)); - - } - - duration = morphTargetNames.length * fps; - - } else { - - // ...assume skeletal animation - - const boneName = '.bones[' + bones[h].name + ']'; - - addNonemptyTrack( - VectorKeyframeTrack, boneName + '.position', - animationKeys, 'pos', tracks); - - addNonemptyTrack( - QuaternionKeyframeTrack, boneName + '.quaternion', - animationKeys, 'rot', tracks); - - addNonemptyTrack( - VectorKeyframeTrack, boneName + '.scale', - animationKeys, 'scl', tracks); - - } - - } - - if (tracks.length === 0) { - - return null; - - } - - const clip = new this(clipName, duration, tracks, blendMode); - - return clip; - - } - - resetDuration() { - - const tracks = this.tracks; - let duration = 0; - - for (let i = 0, n = tracks.length; i !== n; ++i) { - - const track = this.tracks[i]; - - duration = Math.max(duration, track.times[track.times.length - 1]); - - } - - this.duration = duration; - - return this; - - } - - trim() { - - for (let i = 0; i < this.tracks.length; i++) { - - this.tracks[i].trim(0, this.duration); - - } - - return this; - - } - - validate() { - - let valid = true; - - for (let i = 0; i < this.tracks.length; i++) { - - valid = valid && this.tracks[i].validate(); - - } - - return valid; - - } - - optimize() { - - for (let i = 0; i < this.tracks.length; i++) { - - this.tracks[i].optimize(); - - } - - return this; - - } - - clone() { - - const tracks = []; - - for (let i = 0; i < this.tracks.length; i++) { - - tracks.push(this.tracks[i].clone()); - - } - - return new this.constructor(this.name, this.duration, tracks, this.blendMode); - - } - - toJSON() { - - return this.constructor.toJSON(this); - - } - -} - -function getTrackTypeForValueTypeName(typeName) { - - switch (typeName.toLowerCase()) { - - case 'scalar': - case 'double': - case 'float': - case 'number': - case 'integer': - - return NumberKeyframeTrack; - - case 'vector': - case 'vector2': - case 'vector3': - case 'vector4': - - return VectorKeyframeTrack; - - case 'color': - - return ColorKeyframeTrack; - - case 'quaternion': - - return QuaternionKeyframeTrack; - - case 'bool': - case 'boolean': - - return BooleanKeyframeTrack; - - case 'string': - - return StringKeyframeTrack; - - } - - throw new Error('THREE.KeyframeTrack: Unsupported typeName: ' + typeName); - -} - -function parseKeyframeTrack(json) { - - if (json.type === undefined) { - - throw new Error('THREE.KeyframeTrack: track type undefined, can not parse'); - - } - - const trackType = getTrackTypeForValueTypeName(json.type); - - if (json.times === undefined) { - - const times = [], values = []; - - flattenJSON(json.keys, times, values, 'value'); - - json.times = times; - json.values = values; - - } - - // derived classes can define a static parse method - if (trackType.parse !== undefined) { - - return trackType.parse(json); - - } else { - - // by default, we assume a constructor compatible with the base - return new trackType(json.name, json.times, json.values, json.interpolation); - - } - -} - -const Cache = { - - enabled: false, - - files: {}, - - add: function (key, file) { - - if (this.enabled === false) return; - - // console.log( 
'THREE.Cache', 'Adding key:', key ); - - this.files[key] = file; - - }, - - get: function (key) { - - if (this.enabled === false) return; - - // console.log( 'THREE.Cache', 'Checking key:', key ); - - return this.files[key]; - - }, - - remove: function (key) { - - delete this.files[key]; - - }, - - clear: function () { - - this.files = {}; - - } - -}; - -class LoadingManager { - - constructor(onLoad, onProgress, onError) { - - const scope = this; - - let isLoading = false; - let itemsLoaded = 0; - let itemsTotal = 0; - let urlModifier = undefined; - const handlers = []; - - // Refer to #5689 for the reason why we don't set .onStart - // in the constructor - - this.onStart = undefined; - this.onLoad = onLoad; - this.onProgress = onProgress; - this.onError = onError; - - this.itemStart = function (url) { - - itemsTotal++; - - if (isLoading === false) { - - if (scope.onStart !== undefined) { - - scope.onStart(url, itemsLoaded, itemsTotal); - - } - - } - - isLoading = true; - - }; - - this.itemEnd = function (url) { - - itemsLoaded++; - - if (scope.onProgress !== undefined) { - - scope.onProgress(url, itemsLoaded, itemsTotal); - - } - - if (itemsLoaded === itemsTotal) { - - isLoading = false; - - if (scope.onLoad !== undefined) { - - scope.onLoad(); - - } - - } - - }; - - this.itemError = function (url) { - - if (scope.onError !== undefined) { - - scope.onError(url); - - } - - }; - - this.resolveURL = function (url) { - - if (urlModifier) { - - return urlModifier(url); - - } - - return url; - - }; - - this.setURLModifier = function (transform) { - - urlModifier = transform; - - return this; - - }; - - this.addHandler = function (regex, loader) { - - handlers.push(regex, loader); - - return this; - - }; - - this.removeHandler = function (regex) { - - const index = handlers.indexOf(regex); - - if (index !== - 1) { - - handlers.splice(index, 2); - - } - - return this; - - }; - - this.getHandler = function (file) { - - for (let i = 0, l = handlers.length; i < l; i += 2) { - - const regex = handlers[i]; - const loader = handlers[i + 1]; - - if (regex.global) regex.lastIndex = 0; // see #17920 - - if (regex.test(file)) { - - return loader; - - } - - } - - return null; - - }; - - } - -} - -const DefaultLoadingManager = /*@__PURE__*/ new LoadingManager(); - -class Loader { - - constructor(manager) { - - this.manager = (manager !== undefined) ? 
manager : DefaultLoadingManager; - - this.crossOrigin = 'anonymous'; - this.withCredentials = false; - this.path = ''; - this.resourcePath = ''; - this.requestHeader = {}; - - } - - load( /* url, onLoad, onProgress, onError */) { } - - loadAsync(url, onProgress) { - - const scope = this; - - return new Promise(function (resolve, reject) { - - scope.load(url, resolve, onProgress, reject); - - }); - - } - - parse( /* data */) { } - - setCrossOrigin(crossOrigin) { - - this.crossOrigin = crossOrigin; - return this; - - } - - setWithCredentials(value) { - - this.withCredentials = value; - return this; - - } - - setPath(path) { - - this.path = path; - return this; - - } - - setResourcePath(resourcePath) { - - this.resourcePath = resourcePath; - return this; - - } - - setRequestHeader(requestHeader) { - - this.requestHeader = requestHeader; - return this; - - } - -} - -const loading = {}; - -class HttpError extends Error { - - constructor(message, response) { - - super(message); - this.response = response; - - } - -} - -class FileLoader extends Loader { - - constructor(manager) { - - super(manager); - - } - - load(url, onLoad, onProgress, onError) { - - if (url === undefined) url = ''; - - if (this.path !== undefined) url = this.path + url; - - url = this.manager.resolveURL(url); - - const cached = Cache.get(url); - - if (cached !== undefined) { - - this.manager.itemStart(url); - - setTimeout(() => { - - if (onLoad) onLoad(cached); - - this.manager.itemEnd(url); - - }, 0); - - return cached; - - } - - // Check if request is duplicate - - if (loading[url] !== undefined) { - - loading[url].push({ - - onLoad: onLoad, - onProgress: onProgress, - onError: onError - - }); - - return; - - } - - // Initialise array for duplicate requests - loading[url] = []; - - loading[url].push({ - onLoad: onLoad, - onProgress: onProgress, - onError: onError, - }); - - // create request - const req = new Request(url, { - headers: new Headers(this.requestHeader), - credentials: this.withCredentials ? 'include' : 'same-origin', - // An abort controller could be added within a future PR - }); - - // record states ( avoid data race ) - const mimeType = this.mimeType; - const responseType = this.responseType; - - // start the fetch - fetch(req) - .then(response => { - - if (response.status === 200 || response.status === 0) { - - // Some browsers return HTTP Status 0 when using non-http protocol - // e.g. 'file://' or 'data://'. Handle as success. - - if (response.status === 0) { - - console.warn('THREE.FileLoader: HTTP Status 0 received.'); - - } - - // Workaround: Checking if response.body === undefined for Alipay browser #23548 - - if (typeof ReadableStream === 'undefined' || response.body === undefined || response.body.getReader === undefined) { - - return response; - - } - - const callbacks = loading[url]; - const reader = response.body.getReader(); - - // Nginx needs X-File-Size check - // https://serverfault.com/questions/482875/why-does-nginx-remove-content-length-header-for-chunked-content - const contentLength = response.headers.get('Content-Length') || response.headers.get('X-File-Size'); - const total = contentLength ? 
parseInt(contentLength) : 0; - const lengthComputable = total !== 0; - let loaded = 0; - - // periodically read data into the new stream tracking while download progress - const stream = new ReadableStream({ - start(controller) { - - readData(); - - function readData() { - - reader.read().then(({ done, value }) => { - - if (done) { - - controller.close(); - - } else { - - loaded += value.byteLength; - - const event = new ProgressEvent('progress', { lengthComputable, loaded, total }); - for (let i = 0, il = callbacks.length; i < il; i++) { - - const callback = callbacks[i]; - if (callback.onProgress) callback.onProgress(event); - - } - - controller.enqueue(value); - readData(); - - } - - }); - - } - - } - - }); - - return new Response(stream); - - } else { - - throw new HttpError(`fetch for "${response.url}" responded with ${response.status}: ${response.statusText}`, response); - - } - - }) - .then(response => { - - switch (responseType) { - - case 'arraybuffer': - - return response.arrayBuffer(); - - case 'blob': - - return response.blob(); - - case 'document': - - return response.text() - .then(text => { - - const parser = new DOMParser(); - return parser.parseFromString(text, mimeType); - - }); - - case 'json': - - return response.json(); - - default: - - if (mimeType === undefined) { - - return response.text(); - - } else { - - // sniff encoding - const re = /charset="?([^;"\s]*)"?/i; - const exec = re.exec(mimeType); - const label = exec && exec[1] ? exec[1].toLowerCase() : undefined; - const decoder = new TextDecoder(label); - return response.arrayBuffer().then(ab => decoder.decode(ab)); - - } - - } - - }) - .then(data => { - - // Add to cache only on HTTP success, so that we do not cache - // error response bodies as proper responses to requests. 
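/* The cache below is keyed by the resolved URL and is disabled unless Cache.enabled is
   set to true. Hedged usage sketch (not part of the original file; the URL and callbacks
   are illustrative). The progress events emitted by the ReadableStream wrapper above
   carry lengthComputable / loaded / total, with total taken from Content-Length or
   X-File-Size:

       Cache.enabled = true; // opt in to caching; it is off by default

       const loader = new FileLoader();
       loader.setResponseType( 'arraybuffer' );
       loader.load( 'models/data.bin',
           ( buffer ) => console.log( 'loaded', buffer.byteLength, 'bytes' ),
           ( event ) => {
               if ( event.lengthComputable ) console.log( ( 100 * event.loaded / event.total ).toFixed( 1 ) + ' %' );
           },
           ( err ) => console.error( err )
       );
*/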
- Cache.add(url, data); - - const callbacks = loading[url]; - delete loading[url]; - - for (let i = 0, il = callbacks.length; i < il; i++) { - - const callback = callbacks[i]; - if (callback.onLoad) callback.onLoad(data); - - } - - }) - .catch(err => { - - // Abort errors and other errors are handled the same - - const callbacks = loading[url]; - - if (callbacks === undefined) { - - // When onLoad was called and url was deleted in `loading` - this.manager.itemError(url); - throw err; - - } - - delete loading[url]; - - for (let i = 0, il = callbacks.length; i < il; i++) { - - const callback = callbacks[i]; - if (callback.onError) callback.onError(err); - - } - - this.manager.itemError(url); - - }) - .finally(() => { - - this.manager.itemEnd(url); - - }); - - this.manager.itemStart(url); - - } - - setResponseType(value) { - - this.responseType = value; - return this; - - } - - setMimeType(value) { - - this.mimeType = value; - return this; - - } - -} - -class AnimationLoader extends Loader { - - constructor(manager) { - - super(manager); - - } - - load(url, onLoad, onProgress, onError) { - - const scope = this; - - const loader = new FileLoader(this.manager); - loader.setPath(this.path); - loader.setRequestHeader(this.requestHeader); - loader.setWithCredentials(this.withCredentials); - loader.load(url, function (text) { - - try { - - onLoad(scope.parse(JSON.parse(text))); - - } catch (e) { - - if (onError) { - - onError(e); - - } else { - - console.error(e); - - } - - scope.manager.itemError(url); - - } - - }, onProgress, onError); - - } - - parse(json) { - - const animations = []; - - for (let i = 0; i < json.length; i++) { - - const clip = AnimationClip.parse(json[i]); - - animations.push(clip); - - } - - return animations; - - } - -} - -/** - * Abstract Base class to block based textures loader (dds, pvr, ...) - * - * Sub classes have to implement the parse() method which will be used in load(). 
- */ - -class CompressedTextureLoader extends Loader { - - constructor(manager) { - - super(manager); - - } - - load(url, onLoad, onProgress, onError) { - - const scope = this; - - const images = []; - - const texture = new CompressedTexture(); - - const loader = new FileLoader(this.manager); - loader.setPath(this.path); - loader.setResponseType('arraybuffer'); - loader.setRequestHeader(this.requestHeader); - loader.setWithCredentials(scope.withCredentials); - - let loaded = 0; - - function loadTexture(i) { - - loader.load(url[i], function (buffer) { - - const texDatas = scope.parse(buffer, true); - - images[i] = { - width: texDatas.width, - height: texDatas.height, - format: texDatas.format, - mipmaps: texDatas.mipmaps - }; - - loaded += 1; - - if (loaded === 6) { - - if (texDatas.mipmapCount === 1) texture.minFilter = LinearFilter; - - texture.image = images; - texture.format = texDatas.format; - texture.needsUpdate = true; - - if (onLoad) onLoad(texture); - - } - - }, onProgress, onError); - - } - - if (Array.isArray(url)) { - - for (let i = 0, il = url.length; i < il; ++i) { - - loadTexture(i); - - } - - } else { - - // compressed cubemap texture stored in a single DDS file - - loader.load(url, function (buffer) { - - const texDatas = scope.parse(buffer, true); - - if (texDatas.isCubemap) { - - const faces = texDatas.mipmaps.length / texDatas.mipmapCount; - - for (let f = 0; f < faces; f++) { - - images[f] = { mipmaps: [] }; - - for (let i = 0; i < texDatas.mipmapCount; i++) { - - images[f].mipmaps.push(texDatas.mipmaps[f * texDatas.mipmapCount + i]); - images[f].format = texDatas.format; - images[f].width = texDatas.width; - images[f].height = texDatas.height; - - } - - } - - texture.image = images; - - } else { - - texture.image.width = texDatas.width; - texture.image.height = texDatas.height; - texture.mipmaps = texDatas.mipmaps; - - } - - if (texDatas.mipmapCount === 1) { - - texture.minFilter = LinearFilter; - - } - - texture.format = texDatas.format; - texture.needsUpdate = true; - - if (onLoad) onLoad(texture); - - }, onProgress, onError); - - } - - return texture; - - } - -} - -class ImageLoader extends Loader { - - constructor(manager) { - - super(manager); - - } - - load(url, onLoad, onProgress, onError) { - - if (this.path !== undefined) url = this.path + url; - - url = this.manager.resolveURL(url); - - const scope = this; - - const cached = Cache.get(url); - - if (cached !== undefined) { - - scope.manager.itemStart(url); - - setTimeout(function () { - - if (onLoad) onLoad(cached); - - scope.manager.itemEnd(url); - - }, 0); - - return cached; - - } - - const image = createElementNS('img'); - - function onImageLoad() { - - removeEventListeners(); - - Cache.add(url, this); - - if (onLoad) onLoad(this); - - scope.manager.itemEnd(url); - - } - - function onImageError(event) { - - removeEventListeners(); - - if (onError) onError(event); - - scope.manager.itemError(url); - scope.manager.itemEnd(url); - - } - - function removeEventListeners() { - - image.removeEventListener('load', onImageLoad, false); - image.removeEventListener('error', onImageError, false); - - } - - image.addEventListener('load', onImageLoad, false); - image.addEventListener('error', onImageError, false); - - if (url.slice(0, 5) !== 'data:') { - - if (this.crossOrigin !== undefined) image.crossOrigin = this.crossOrigin; - - } - - scope.manager.itemStart(url); - - image.src = url; - - return image; - - } - -} - -class CubeTextureLoader extends Loader { - - constructor(manager) { - - super(manager); - - } - - 
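/* Hedged usage sketch (not from the original file; the file names and their
   +x, -x, +y, -y, +z, -z ordering are assumptions based on the usual three.js cube map
   convention):

       const texture = new CubeTextureLoader()
           .setPath( 'textures/cube/' )
           .load( [ 'px.jpg', 'nx.jpg', 'py.jpg', 'ny.jpg', 'pz.jpg', 'nz.jpg' ] );

   load() returns the CubeTexture immediately; needsUpdate is only set (and onLoad only
   fired) once all six images have arrived, as the method below shows.
*/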
load(urls, onLoad, onProgress, onError) { - - const texture = new CubeTexture(); - - const loader = new ImageLoader(this.manager); - loader.setCrossOrigin(this.crossOrigin); - loader.setPath(this.path); - - let loaded = 0; - - function loadTexture(i) { - - loader.load(urls[i], function (image) { - - texture.images[i] = image; - - loaded++; - - if (loaded === 6) { - - texture.needsUpdate = true; - - if (onLoad) onLoad(texture); - - } - - }, undefined, onError); - - } - - for (let i = 0; i < urls.length; ++i) { - - loadTexture(i); - - } - - return texture; - - } - -} - -/** - * Abstract Base class to load generic binary textures formats (rgbe, hdr, ...) - * - * Sub classes have to implement the parse() method which will be used in load(). - */ - -class DataTextureLoader extends Loader { - - constructor(manager) { - - super(manager); - - } - - load(url, onLoad, onProgress, onError) { - - const scope = this; - - const texture = new DataTexture(); - - const loader = new FileLoader(this.manager); - loader.setResponseType('arraybuffer'); - loader.setRequestHeader(this.requestHeader); - loader.setPath(this.path); - loader.setWithCredentials(scope.withCredentials); - loader.load(url, function (buffer) { - - const texData = scope.parse(buffer); - - if (!texData) return; - - if (texData.image !== undefined) { - - texture.image = texData.image; - - } else if (texData.data !== undefined) { - - texture.image.width = texData.width; - texture.image.height = texData.height; - texture.image.data = texData.data; - - } - - texture.wrapS = texData.wrapS !== undefined ? texData.wrapS : ClampToEdgeWrapping; - texture.wrapT = texData.wrapT !== undefined ? texData.wrapT : ClampToEdgeWrapping; - - texture.magFilter = texData.magFilter !== undefined ? texData.magFilter : LinearFilter; - texture.minFilter = texData.minFilter !== undefined ? texData.minFilter : LinearFilter; - - texture.anisotropy = texData.anisotropy !== undefined ? texData.anisotropy : 1; - - if (texData.encoding !== undefined) { - - texture.encoding = texData.encoding; - - } - - if (texData.flipY !== undefined) { - - texture.flipY = texData.flipY; - - } - - if (texData.format !== undefined) { - - texture.format = texData.format; - - } - - if (texData.type !== undefined) { - - texture.type = texData.type; - - } - - if (texData.mipmaps !== undefined) { - - texture.mipmaps = texData.mipmaps; - texture.minFilter = LinearMipmapLinearFilter; // presumably... 
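/* Reading of the surrounding code: a subclass' parse( buffer ) is expected to return an
   object shaped roughly like

       {
           image | ( data, width, height ),            // pixel source
           wrapS, wrapT, magFilter, minFilter, anisotropy,
           encoding, flipY, format, type,
           mipmaps, mipmapCount, generateMipmaps
       }

   where everything apart from the pixel source is optional and falls back to the defaults
   applied above (ClampToEdgeWrapping, LinearFilter, anisotropy 1, ...).
*/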
- - } - - if (texData.mipmapCount === 1) { - - texture.minFilter = LinearFilter; - - } - - if (texData.generateMipmaps !== undefined) { - - texture.generateMipmaps = texData.generateMipmaps; - - } - - texture.needsUpdate = true; - - if (onLoad) onLoad(texture, texData); - - }, onProgress, onError); - - - return texture; - - } - -} - -class TextureLoader extends Loader { - - constructor(manager) { - - super(manager); - - } - - load(url, onLoad, onProgress, onError) { - - const texture = new Texture(); - - const loader = new ImageLoader(this.manager); - loader.setCrossOrigin(this.crossOrigin); - loader.setPath(this.path); - - loader.load(url, function (image) { - - texture.image = image; - texture.needsUpdate = true; - - if (onLoad !== undefined) { - - onLoad(texture); - - } - - }, onProgress, onError); - - return texture; - - } - -} - -class Light extends Object3D { - - constructor(color, intensity = 1) { - - super(); - - this.isLight = true; - - this.type = 'Light'; - - this.color = new Color(color); - this.intensity = intensity; - - } - - dispose() { - - // Empty here in base class; some subclasses override. - - } - - copy(source, recursive) { - - super.copy(source, recursive); - - this.color.copy(source.color); - this.intensity = source.intensity; - - return this; - - } - - toJSON(meta) { - - const data = super.toJSON(meta); - - data.object.color = this.color.getHex(); - data.object.intensity = this.intensity; - - if (this.groundColor !== undefined) data.object.groundColor = this.groundColor.getHex(); - - if (this.distance !== undefined) data.object.distance = this.distance; - if (this.angle !== undefined) data.object.angle = this.angle; - if (this.decay !== undefined) data.object.decay = this.decay; - if (this.penumbra !== undefined) data.object.penumbra = this.penumbra; - - if (this.shadow !== undefined) data.object.shadow = this.shadow.toJSON(); - - return data; - - } - -} - -class HemisphereLight extends Light { - - constructor(skyColor, groundColor, intensity) { - - super(skyColor, intensity); - - this.isHemisphereLight = true; - - this.type = 'HemisphereLight'; - - this.position.copy(Object3D.DEFAULT_UP); - this.updateMatrix(); - - this.groundColor = new Color(groundColor); - - } - - copy(source, recursive) { - - super.copy(source, recursive); - - this.groundColor.copy(source.groundColor); - - return this; - - } - -} - -const _projScreenMatrix$1 = /*@__PURE__*/ new Matrix4(); -const _lightPositionWorld$1 = /*@__PURE__*/ new Vector3(); -const _lookTarget$1 = /*@__PURE__*/ new Vector3(); - -class LightShadow { - - constructor(camera) { - - this.camera = camera; - - this.bias = 0; - this.normalBias = 0; - this.radius = 1; - this.blurSamples = 8; - - this.mapSize = new Vector2(512, 512); - - this.map = null; - this.mapPass = null; - this.matrix = new Matrix4(); - - this.autoUpdate = true; - this.needsUpdate = false; - - this._frustum = new Frustum(); - this._frameExtents = new Vector2(1, 1); - - this._viewportCount = 1; - - this._viewports = [ - - new Vector4(0, 0, 1, 1) - - ]; - - } - - getViewportCount() { - - return this._viewportCount; - - } - - getFrustum() { - - return this._frustum; - - } - - updateMatrices(light) { - - const shadowCamera = this.camera; - const shadowMatrix = this.matrix; - - _lightPositionWorld$1.setFromMatrixPosition(light.matrixWorld); - shadowCamera.position.copy(_lightPositionWorld$1); - - _lookTarget$1.setFromMatrixPosition(light.target.matrixWorld); - shadowCamera.lookAt(_lookTarget$1); - shadowCamera.updateMatrixWorld(); - - 
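// The two steps below build the shadow lookup matrix: shadowMatrix is first set to the
// usual NDC-to-texture "bias" matrix ( uv = 0.5 * ndc + 0.5, applied to x, y and z ) and
// then multiplied by projection * view, so that, after the usual homogeneous divide, a
// world-space point mapped through shadowMatrix lands in [ 0, 1 ] shadow-map coordinates
// with a comparable depth value.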
_projScreenMatrix$1.multiplyMatrices(shadowCamera.projectionMatrix, shadowCamera.matrixWorldInverse); - this._frustum.setFromProjectionMatrix(_projScreenMatrix$1); - - shadowMatrix.set( - 0.5, 0.0, 0.0, 0.5, - 0.0, 0.5, 0.0, 0.5, - 0.0, 0.0, 0.5, 0.5, - 0.0, 0.0, 0.0, 1.0 - ); - - shadowMatrix.multiply(_projScreenMatrix$1); - - } - - getViewport(viewportIndex) { - - return this._viewports[viewportIndex]; - - } - - getFrameExtents() { - - return this._frameExtents; - - } - - dispose() { - - if (this.map) { - - this.map.dispose(); - - } - - if (this.mapPass) { - - this.mapPass.dispose(); - - } - - } - - copy(source) { - - this.camera = source.camera.clone(); - - this.bias = source.bias; - this.radius = source.radius; - - this.mapSize.copy(source.mapSize); - - return this; - - } - - clone() { - - return new this.constructor().copy(this); - - } - - toJSON() { - - const object = {}; - - if (this.bias !== 0) object.bias = this.bias; - if (this.normalBias !== 0) object.normalBias = this.normalBias; - if (this.radius !== 1) object.radius = this.radius; - if (this.mapSize.x !== 512 || this.mapSize.y !== 512) object.mapSize = this.mapSize.toArray(); - - object.camera = this.camera.toJSON(false).object; - delete object.camera.matrix; - - return object; - - } - -} - -class SpotLightShadow extends LightShadow { - - constructor() { - - super(new PerspectiveCamera(50, 1, 0.5, 500)); - - this.isSpotLightShadow = true; - - this.focus = 1; - - } - - updateMatrices(light) { - - const camera = this.camera; - - const fov = RAD2DEG * 2 * light.angle * this.focus; - const aspect = this.mapSize.width / this.mapSize.height; - const far = light.distance || camera.far; - - if (fov !== camera.fov || aspect !== camera.aspect || far !== camera.far) { - - camera.fov = fov; - camera.aspect = aspect; - camera.far = far; - camera.updateProjectionMatrix(); - - } - - super.updateMatrices(light); - - } - - copy(source) { - - super.copy(source); - - this.focus = source.focus; - - return this; - - } - -} - -class SpotLight extends Light { - - constructor(color, intensity, distance = 0, angle = Math.PI / 3, penumbra = 0, decay = 2) { - - super(color, intensity); - - this.isSpotLight = true; - - this.type = 'SpotLight'; - - this.position.copy(Object3D.DEFAULT_UP); - this.updateMatrix(); - - this.target = new Object3D(); - - this.distance = distance; - this.angle = angle; - this.penumbra = penumbra; - this.decay = decay; - - this.map = null; - - this.shadow = new SpotLightShadow(); - - } - - get power() { - - // compute the light's luminous power (in lumens) from its intensity (in candela) - // by convention for a spotlight, luminous power (lm) = π * luminous intensity (cd) - return this.intensity * Math.PI; - - } - - set power(power) { - - // set the light's intensity (in candela) from the desired luminous power (in lumens) - this.intensity = power / Math.PI; - - } - - dispose() { - - this.shadow.dispose(); - - } - - copy(source, recursive) { - - super.copy(source, recursive); - - this.distance = source.distance; - this.angle = source.angle; - this.penumbra = source.penumbra; - this.decay = source.decay; - - this.target = source.target.clone(); - - this.shadow = source.shadow.clone(); - - return this; - - } - -} - -const _projScreenMatrix = /*@__PURE__*/ new Matrix4(); -const _lightPositionWorld = /*@__PURE__*/ new Vector3(); -const _lookTarget = /*@__PURE__*/ new Vector3(); - -class PointLightShadow extends LightShadow { - - constructor() { - - super(new PerspectiveCamera(90, 1, 0.5, 500)); - - this.isPointLightShadow = true; - 
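// The six cube faces of a point light shadow are packed side by side into a single 2D
// map, laid out on a 4 x 2 grid of viewports (see the orientation table just below),
// hence the frame extents of ( 4, 2 ): the allocated shadow map is four tiles wide and
// two tiles tall.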
- this._frameExtents = new Vector2(4, 2); - - this._viewportCount = 6; - - this._viewports = [ - // These viewports map a cube-map onto a 2D texture with the - // following orientation: - // - // xzXZ - // y Y - // - // X - Positive x direction - // x - Negative x direction - // Y - Positive y direction - // y - Negative y direction - // Z - Positive z direction - // z - Negative z direction - - // positive X - new Vector4(2, 1, 1, 1), - // negative X - new Vector4(0, 1, 1, 1), - // positive Z - new Vector4(3, 1, 1, 1), - // negative Z - new Vector4(1, 1, 1, 1), - // positive Y - new Vector4(3, 0, 1, 1), - // negative Y - new Vector4(1, 0, 1, 1) - ]; - - this._cubeDirections = [ - new Vector3(1, 0, 0), new Vector3(- 1, 0, 0), new Vector3(0, 0, 1), - new Vector3(0, 0, - 1), new Vector3(0, 1, 0), new Vector3(0, - 1, 0) - ]; - - this._cubeUps = [ - new Vector3(0, 1, 0), new Vector3(0, 1, 0), new Vector3(0, 1, 0), - new Vector3(0, 1, 0), new Vector3(0, 0, 1), new Vector3(0, 0, - 1) - ]; - - } - - updateMatrices(light, viewportIndex = 0) { - - const camera = this.camera; - const shadowMatrix = this.matrix; - - const far = light.distance || camera.far; - - if (far !== camera.far) { - - camera.far = far; - camera.updateProjectionMatrix(); - - } - - _lightPositionWorld.setFromMatrixPosition(light.matrixWorld); - camera.position.copy(_lightPositionWorld); - - _lookTarget.copy(camera.position); - _lookTarget.add(this._cubeDirections[viewportIndex]); - camera.up.copy(this._cubeUps[viewportIndex]); - camera.lookAt(_lookTarget); - camera.updateMatrixWorld(); - - shadowMatrix.makeTranslation(- _lightPositionWorld.x, - _lightPositionWorld.y, - _lightPositionWorld.z); - - _projScreenMatrix.multiplyMatrices(camera.projectionMatrix, camera.matrixWorldInverse); - this._frustum.setFromProjectionMatrix(_projScreenMatrix); - - } - -} - -class PointLight extends Light { - - constructor(color, intensity, distance = 0, decay = 2) { - - super(color, intensity); - - this.isPointLight = true; - - this.type = 'PointLight'; - - this.distance = distance; - this.decay = decay; - - this.shadow = new PointLightShadow(); - - } - - get power() { - - // compute the light's luminous power (in lumens) from its intensity (in candela) - // for an isotropic light source, luminous power (lm) = 4 π luminous intensity (cd) - return this.intensity * 4 * Math.PI; - - } - - set power(power) { - - // set the light's intensity (in candela) from the desired luminous power (in lumens) - this.intensity = power / (4 * Math.PI); - - } - - dispose() { - - this.shadow.dispose(); - - } - - copy(source, recursive) { - - super.copy(source, recursive); - - this.distance = source.distance; - this.decay = source.decay; - - this.shadow = source.shadow.clone(); - - return this; - - } - -} - -class DirectionalLightShadow extends LightShadow { - - constructor() { - - super(new OrthographicCamera(- 5, 5, 5, - 5, 0.5, 500)); - - this.isDirectionalLightShadow = true; - - } - -} - -class DirectionalLight extends Light { - - constructor(color, intensity) { - - super(color, intensity); - - this.isDirectionalLight = true; - - this.type = 'DirectionalLight'; - - this.position.copy(Object3D.DEFAULT_UP); - this.updateMatrix(); - - this.target = new Object3D(); - - this.shadow = new DirectionalLightShadow(); - - } - - dispose() { - - this.shadow.dispose(); - - } - - copy(source) { - - super.copy(source); - - this.target = source.target.clone(); - this.shadow = source.shadow.clone(); - - return this; - - } - -} - -class AmbientLight extends Light { - - 
constructor(color, intensity) { - - super(color, intensity); - - this.isAmbientLight = true; - - this.type = 'AmbientLight'; - - } - -} - -class RectAreaLight extends Light { - - constructor(color, intensity, width = 10, height = 10) { - - super(color, intensity); - - this.isRectAreaLight = true; - - this.type = 'RectAreaLight'; - - this.width = width; - this.height = height; - - } - - get power() { - - // compute the light's luminous power (in lumens) from its intensity (in nits) - return this.intensity * this.width * this.height * Math.PI; - - } - - set power(power) { - - // set the light's intensity (in nits) from the desired luminous power (in lumens) - this.intensity = power / (this.width * this.height * Math.PI); - - } - - copy(source) { - - super.copy(source); - - this.width = source.width; - this.height = source.height; - - return this; - - } - - toJSON(meta) { - - const data = super.toJSON(meta); - - data.object.width = this.width; - data.object.height = this.height; - - return data; - - } - -} - -/** - * Primary reference: - * https://graphics.stanford.edu/papers/envmap/envmap.pdf - * - * Secondary reference: - * https://www.ppsloan.org/publications/StupidSH36.pdf - */ - -// 3-band SH defined by 9 coefficients - -class SphericalHarmonics3 { - - constructor() { - - this.isSphericalHarmonics3 = true; - - this.coefficients = []; - - for (let i = 0; i < 9; i++) { - - this.coefficients.push(new Vector3()); - - } - - } - - set(coefficients) { - - for (let i = 0; i < 9; i++) { - - this.coefficients[i].copy(coefficients[i]); - - } - - return this; - - } - - zero() { - - for (let i = 0; i < 9; i++) { - - this.coefficients[i].set(0, 0, 0); - - } - - return this; - - } - - // get the radiance in the direction of the normal - // target is a Vector3 - getAt(normal, target) { - - // normal is assumed to be unit length - - const x = normal.x, y = normal.y, z = normal.z; - - const coeff = this.coefficients; - - // band 0 - target.copy(coeff[0]).multiplyScalar(0.282095); - - // band 1 - target.addScaledVector(coeff[1], 0.488603 * y); - target.addScaledVector(coeff[2], 0.488603 * z); - target.addScaledVector(coeff[3], 0.488603 * x); - - // band 2 - target.addScaledVector(coeff[4], 1.092548 * (x * y)); - target.addScaledVector(coeff[5], 1.092548 * (y * z)); - target.addScaledVector(coeff[6], 0.315392 * (3.0 * z * z - 1.0)); - target.addScaledVector(coeff[7], 1.092548 * (x * z)); - target.addScaledVector(coeff[8], 0.546274 * (x * x - y * y)); - - return target; - - } - - // get the irradiance (radiance convolved with cosine lobe) in the direction of the normal - // target is a Vector3 - // https://graphics.stanford.edu/papers/envmap/envmap.pdf - getIrradianceAt(normal, target) { - - // normal is assumed to be unit length - - const x = normal.x, y = normal.y, z = normal.z; - - const coeff = this.coefficients; - - // band 0 - target.copy(coeff[0]).multiplyScalar(0.886227); // π * 0.282095 - - // band 1 - target.addScaledVector(coeff[1], 2.0 * 0.511664 * y); // ( 2 * π / 3 ) * 0.488603 - target.addScaledVector(coeff[2], 2.0 * 0.511664 * z); - target.addScaledVector(coeff[3], 2.0 * 0.511664 * x); - - // band 2 - target.addScaledVector(coeff[4], 2.0 * 0.429043 * x * y); // ( π / 4 ) * 1.092548 - target.addScaledVector(coeff[5], 2.0 * 0.429043 * y * z); - target.addScaledVector(coeff[6], 0.743125 * z * z - 0.247708); // ( π / 4 ) * 0.315392 * 3 - target.addScaledVector(coeff[7], 2.0 * 0.429043 * x * z); - target.addScaledVector(coeff[8], 0.429043 * (x * x - y * y)); // ( π / 4 ) * 0.546274 - - return 
target; - - } - - add(sh) { - - for (let i = 0; i < 9; i++) { - - this.coefficients[i].add(sh.coefficients[i]); - - } - - return this; - - } - - addScaledSH(sh, s) { - - for (let i = 0; i < 9; i++) { - - this.coefficients[i].addScaledVector(sh.coefficients[i], s); - - } - - return this; - - } - - scale(s) { - - for (let i = 0; i < 9; i++) { - - this.coefficients[i].multiplyScalar(s); - - } - - return this; - - } - - lerp(sh, alpha) { - - for (let i = 0; i < 9; i++) { - - this.coefficients[i].lerp(sh.coefficients[i], alpha); - - } - - return this; - - } - - equals(sh) { - - for (let i = 0; i < 9; i++) { - - if (!this.coefficients[i].equals(sh.coefficients[i])) { - - return false; - - } - - } - - return true; - - } - - copy(sh) { - - return this.set(sh.coefficients); - - } - - clone() { - - return new this.constructor().copy(this); - - } - - fromArray(array, offset = 0) { - - const coefficients = this.coefficients; - - for (let i = 0; i < 9; i++) { - - coefficients[i].fromArray(array, offset + (i * 3)); - - } - - return this; - - } - - toArray(array = [], offset = 0) { - - const coefficients = this.coefficients; - - for (let i = 0; i < 9; i++) { - - coefficients[i].toArray(array, offset + (i * 3)); - - } - - return array; - - } - - // evaluate the basis functions - // shBasis is an Array[ 9 ] - static getBasisAt(normal, shBasis) { - - // normal is assumed to be unit length - - const x = normal.x, y = normal.y, z = normal.z; - - // band 0 - shBasis[0] = 0.282095; - - // band 1 - shBasis[1] = 0.488603 * y; - shBasis[2] = 0.488603 * z; - shBasis[3] = 0.488603 * x; - - // band 2 - shBasis[4] = 1.092548 * x * y; - shBasis[5] = 1.092548 * y * z; - shBasis[6] = 0.315392 * (3 * z * z - 1); - shBasis[7] = 1.092548 * x * z; - shBasis[8] = 0.546274 * (x * x - y * y); - - } - -} - -class LightProbe extends Light { - - constructor(sh = new SphericalHarmonics3(), intensity = 1) { - - super(undefined, intensity); - - this.isLightProbe = true; - - this.sh = sh; - - } - - copy(source) { - - super.copy(source); - - this.sh.copy(source.sh); - - return this; - - } - - fromJSON(json) { - - this.intensity = json.intensity; // TODO: Move this bit to Light.fromJSON(); - this.sh.fromArray(json.sh); - - return this; - - } - - toJSON(meta) { - - const data = super.toJSON(meta); - - data.object.sh = this.sh.toArray(); - - return data; - - } - -} - -class MaterialLoader extends Loader { - - constructor(manager) { - - super(manager); - this.textures = {}; - - } - - load(url, onLoad, onProgress, onError) { - - const scope = this; - - const loader = new FileLoader(scope.manager); - loader.setPath(scope.path); - loader.setRequestHeader(scope.requestHeader); - loader.setWithCredentials(scope.withCredentials); - loader.load(url, function (text) { - - try { - - onLoad(scope.parse(JSON.parse(text))); - - } catch (e) { - - if (onError) { - - onError(e); - - } else { - - console.error(e); - - } - - scope.manager.itemError(url); - - } - - }, onProgress, onError); - - } - - parse(json) { - - const textures = this.textures; - - function getTexture(name) { - - if (textures[name] === undefined) { - - console.warn('THREE.MaterialLoader: Undefined texture', name); - - } - - return textures[name]; - - } - - const material = MaterialLoader.createMaterialFromType(json.type); - - if (json.uuid !== undefined) material.uuid = json.uuid; - if (json.name !== undefined) material.name = json.name; - if (json.color !== undefined && material.color !== undefined) material.color.setHex(json.color); - if (json.roughness !== undefined) 
material.roughness = json.roughness; - if (json.metalness !== undefined) material.metalness = json.metalness; - if (json.sheen !== undefined) material.sheen = json.sheen; - if (json.sheenColor !== undefined) material.sheenColor = new Color().setHex(json.sheenColor); - if (json.sheenRoughness !== undefined) material.sheenRoughness = json.sheenRoughness; - if (json.emissive !== undefined && material.emissive !== undefined) material.emissive.setHex(json.emissive); - if (json.specular !== undefined && material.specular !== undefined) material.specular.setHex(json.specular); - if (json.specularIntensity !== undefined) material.specularIntensity = json.specularIntensity; - if (json.specularColor !== undefined && material.specularColor !== undefined) material.specularColor.setHex(json.specularColor); - if (json.shininess !== undefined) material.shininess = json.shininess; - if (json.clearcoat !== undefined) material.clearcoat = json.clearcoat; - if (json.clearcoatRoughness !== undefined) material.clearcoatRoughness = json.clearcoatRoughness; - if (json.iridescence !== undefined) material.iridescence = json.iridescence; - if (json.iridescenceIOR !== undefined) material.iridescenceIOR = json.iridescenceIOR; - if (json.iridescenceThicknessRange !== undefined) material.iridescenceThicknessRange = json.iridescenceThicknessRange; - if (json.transmission !== undefined) material.transmission = json.transmission; - if (json.thickness !== undefined) material.thickness = json.thickness; - if (json.attenuationDistance !== undefined) material.attenuationDistance = json.attenuationDistance; - if (json.attenuationColor !== undefined && material.attenuationColor !== undefined) material.attenuationColor.setHex(json.attenuationColor); - if (json.fog !== undefined) material.fog = json.fog; - if (json.flatShading !== undefined) material.flatShading = json.flatShading; - if (json.blending !== undefined) material.blending = json.blending; - if (json.combine !== undefined) material.combine = json.combine; - if (json.side !== undefined) material.side = json.side; - if (json.shadowSide !== undefined) material.shadowSide = json.shadowSide; - if (json.opacity !== undefined) material.opacity = json.opacity; - if (json.transparent !== undefined) material.transparent = json.transparent; - if (json.alphaTest !== undefined) material.alphaTest = json.alphaTest; - if (json.depthTest !== undefined) material.depthTest = json.depthTest; - if (json.depthWrite !== undefined) material.depthWrite = json.depthWrite; - if (json.colorWrite !== undefined) material.colorWrite = json.colorWrite; - - if (json.stencilWrite !== undefined) material.stencilWrite = json.stencilWrite; - if (json.stencilWriteMask !== undefined) material.stencilWriteMask = json.stencilWriteMask; - if (json.stencilFunc !== undefined) material.stencilFunc = json.stencilFunc; - if (json.stencilRef !== undefined) material.stencilRef = json.stencilRef; - if (json.stencilFuncMask !== undefined) material.stencilFuncMask = json.stencilFuncMask; - if (json.stencilFail !== undefined) material.stencilFail = json.stencilFail; - if (json.stencilZFail !== undefined) material.stencilZFail = json.stencilZFail; - if (json.stencilZPass !== undefined) material.stencilZPass = json.stencilZPass; - - if (json.wireframe !== undefined) material.wireframe = json.wireframe; - if (json.wireframeLinewidth !== undefined) material.wireframeLinewidth = json.wireframeLinewidth; - if (json.wireframeLinecap !== undefined) material.wireframeLinecap = json.wireframeLinecap; - if (json.wireframeLinejoin 
!== undefined) material.wireframeLinejoin = json.wireframeLinejoin; - - if (json.rotation !== undefined) material.rotation = json.rotation; - - if (json.linewidth !== 1) material.linewidth = json.linewidth; - if (json.dashSize !== undefined) material.dashSize = json.dashSize; - if (json.gapSize !== undefined) material.gapSize = json.gapSize; - if (json.scale !== undefined) material.scale = json.scale; - - if (json.polygonOffset !== undefined) material.polygonOffset = json.polygonOffset; - if (json.polygonOffsetFactor !== undefined) material.polygonOffsetFactor = json.polygonOffsetFactor; - if (json.polygonOffsetUnits !== undefined) material.polygonOffsetUnits = json.polygonOffsetUnits; - - if (json.dithering !== undefined) material.dithering = json.dithering; - - if (json.alphaToCoverage !== undefined) material.alphaToCoverage = json.alphaToCoverage; - if (json.premultipliedAlpha !== undefined) material.premultipliedAlpha = json.premultipliedAlpha; - if (json.forceSinglePass !== undefined) material.forceSinglePass = json.forceSinglePass; - - if (json.visible !== undefined) material.visible = json.visible; - - if (json.toneMapped !== undefined) material.toneMapped = json.toneMapped; - - if (json.userData !== undefined) material.userData = json.userData; - - if (json.vertexColors !== undefined) { - - if (typeof json.vertexColors === 'number') { - - material.vertexColors = (json.vertexColors > 0) ? true : false; - - } else { - - material.vertexColors = json.vertexColors; - - } - - } - - // Shader Material - - if (json.uniforms !== undefined) { - - for (const name in json.uniforms) { - - const uniform = json.uniforms[name]; - - material.uniforms[name] = {}; - - switch (uniform.type) { - - case 't': - material.uniforms[name].value = getTexture(uniform.value); - break; - - case 'c': - material.uniforms[name].value = new Color().setHex(uniform.value); - break; - - case 'v2': - material.uniforms[name].value = new Vector2().fromArray(uniform.value); - break; - - case 'v3': - material.uniforms[name].value = new Vector3().fromArray(uniform.value); - break; - - case 'v4': - material.uniforms[name].value = new Vector4().fromArray(uniform.value); - break; - - case 'm3': - material.uniforms[name].value = new Matrix3().fromArray(uniform.value); - break; - - case 'm4': - material.uniforms[name].value = new Matrix4().fromArray(uniform.value); - break; - - default: - material.uniforms[name].value = uniform.value; - - } - - } - - } - - if (json.defines !== undefined) material.defines = json.defines; - if (json.vertexShader !== undefined) material.vertexShader = json.vertexShader; - if (json.fragmentShader !== undefined) material.fragmentShader = json.fragmentShader; - if (json.glslVersion !== undefined) material.glslVersion = json.glslVersion; - - if (json.extensions !== undefined) { - - for (const key in json.extensions) { - - material.extensions[key] = json.extensions[key]; - - } - - } - - // for PointsMaterial - - if (json.size !== undefined) material.size = json.size; - if (json.sizeAttenuation !== undefined) material.sizeAttenuation = json.sizeAttenuation; - - // maps - - if (json.map !== undefined) material.map = getTexture(json.map); - if (json.matcap !== undefined) material.matcap = getTexture(json.matcap); - - if (json.alphaMap !== undefined) material.alphaMap = getTexture(json.alphaMap); - - if (json.bumpMap !== undefined) material.bumpMap = getTexture(json.bumpMap); - if (json.bumpScale !== undefined) material.bumpScale = json.bumpScale; - - if (json.normalMap !== undefined) material.normalMap = 
getTexture(json.normalMap); - if (json.normalMapType !== undefined) material.normalMapType = json.normalMapType; - if (json.normalScale !== undefined) { - - let normalScale = json.normalScale; - - if (Array.isArray(normalScale) === false) { - - // Blender exporter used to export a scalar. See #7459 - - normalScale = [normalScale, normalScale]; - - } - - material.normalScale = new Vector2().fromArray(normalScale); - - } - - if (json.displacementMap !== undefined) material.displacementMap = getTexture(json.displacementMap); - if (json.displacementScale !== undefined) material.displacementScale = json.displacementScale; - if (json.displacementBias !== undefined) material.displacementBias = json.displacementBias; - - if (json.roughnessMap !== undefined) material.roughnessMap = getTexture(json.roughnessMap); - if (json.metalnessMap !== undefined) material.metalnessMap = getTexture(json.metalnessMap); - - if (json.emissiveMap !== undefined) material.emissiveMap = getTexture(json.emissiveMap); - if (json.emissiveIntensity !== undefined) material.emissiveIntensity = json.emissiveIntensity; - - if (json.specularMap !== undefined) material.specularMap = getTexture(json.specularMap); - if (json.specularIntensityMap !== undefined) material.specularIntensityMap = getTexture(json.specularIntensityMap); - if (json.specularColorMap !== undefined) material.specularColorMap = getTexture(json.specularColorMap); - - if (json.envMap !== undefined) material.envMap = getTexture(json.envMap); - if (json.envMapIntensity !== undefined) material.envMapIntensity = json.envMapIntensity; - - if (json.reflectivity !== undefined) material.reflectivity = json.reflectivity; - if (json.refractionRatio !== undefined) material.refractionRatio = json.refractionRatio; - - if (json.lightMap !== undefined) material.lightMap = getTexture(json.lightMap); - if (json.lightMapIntensity !== undefined) material.lightMapIntensity = json.lightMapIntensity; - - if (json.aoMap !== undefined) material.aoMap = getTexture(json.aoMap); - if (json.aoMapIntensity !== undefined) material.aoMapIntensity = json.aoMapIntensity; - - if (json.gradientMap !== undefined) material.gradientMap = getTexture(json.gradientMap); - - if (json.clearcoatMap !== undefined) material.clearcoatMap = getTexture(json.clearcoatMap); - if (json.clearcoatRoughnessMap !== undefined) material.clearcoatRoughnessMap = getTexture(json.clearcoatRoughnessMap); - if (json.clearcoatNormalMap !== undefined) material.clearcoatNormalMap = getTexture(json.clearcoatNormalMap); - if (json.clearcoatNormalScale !== undefined) material.clearcoatNormalScale = new Vector2().fromArray(json.clearcoatNormalScale); - - if (json.iridescenceMap !== undefined) material.iridescenceMap = getTexture(json.iridescenceMap); - if (json.iridescenceThicknessMap !== undefined) material.iridescenceThicknessMap = getTexture(json.iridescenceThicknessMap); - - if (json.transmissionMap !== undefined) material.transmissionMap = getTexture(json.transmissionMap); - if (json.thicknessMap !== undefined) material.thicknessMap = getTexture(json.thicknessMap); - - if (json.sheenColorMap !== undefined) material.sheenColorMap = getTexture(json.sheenColorMap); - if (json.sheenRoughnessMap !== undefined) material.sheenRoughnessMap = getTexture(json.sheenRoughnessMap); - - return material; - - } - - setTextures(value) { - - this.textures = value; - return this; - - } - - static createMaterialFromType(type) { - - const materialLib = { - ShadowMaterial, - SpriteMaterial, - RawShaderMaterial, - ShaderMaterial, - PointsMaterial, 
- MeshPhysicalMaterial, - MeshStandardMaterial, - MeshPhongMaterial, - MeshToonMaterial, - MeshNormalMaterial, - MeshLambertMaterial, - MeshDepthMaterial, - MeshDistanceMaterial, - MeshBasicMaterial, - MeshMatcapMaterial, - LineDashedMaterial, - LineBasicMaterial, - Material - }; - - return new materialLib[type](); - - } - -} - -class LoaderUtils { - - static decodeText(array) { - - if (typeof TextDecoder !== 'undefined') { - - return new TextDecoder().decode(array); - - } - - // Avoid the String.fromCharCode.apply(null, array) shortcut, which - // throws a "maximum call stack size exceeded" error for large arrays. - - let s = ''; - - for (let i = 0, il = array.length; i < il; i++) { - - // Implicitly assumes little-endian. - s += String.fromCharCode(array[i]); - - } - - try { - - // merges multi-byte utf-8 characters. - - return decodeURIComponent(escape(s)); - - } catch (e) { // see #16358 - - return s; - - } - - } - - static extractUrlBase(url) { - - const index = url.lastIndexOf('/'); - - if (index === - 1) return './'; - - return url.slice(0, index + 1); - - } - - static resolveURL(url, path) { - - // Invalid URL - if (typeof url !== 'string' || url === '') return ''; - - // Host Relative URL - if (/^https?:\/\//i.test(path) && /^\//.test(url)) { - - path = path.replace(/(^https?:\/\/[^\/]+).*/i, '$1'); - - } - - // Absolute URL http://,https://,// - if (/^(https?:)?\/\//i.test(url)) return url; - - // Data URI - if (/^data:.*,.*$/i.test(url)) return url; - - // Blob URL - if (/^blob:.*$/i.test(url)) return url; - - // Relative URL - return path + url; - - } - -} - -class InstancedBufferGeometry extends BufferGeometry { - - constructor() { - - super(); - - this.isInstancedBufferGeometry = true; - - this.type = 'InstancedBufferGeometry'; - this.instanceCount = Infinity; - - } - - copy(source) { - - super.copy(source); - - this.instanceCount = source.instanceCount; - - return this; - - } - - toJSON() { - - const data = super.toJSON(); - - data.instanceCount = this.instanceCount; - - data.isInstancedBufferGeometry = true; - - return data; - - } - -} - -class BufferGeometryLoader extends Loader { - - constructor(manager) { - - super(manager); - - } - - load(url, onLoad, onProgress, onError) { - - const scope = this; - - const loader = new FileLoader(scope.manager); - loader.setPath(scope.path); - loader.setRequestHeader(scope.requestHeader); - loader.setWithCredentials(scope.withCredentials); - loader.load(url, function (text) { - - try { - - onLoad(scope.parse(JSON.parse(text))); - - } catch (e) { - - if (onError) { - - onError(e); - - } else { - - console.error(e); - - } - - scope.manager.itemError(url); - - } - - }, onProgress, onError); - - } - - parse(json) { - - const interleavedBufferMap = {}; - const arrayBufferMap = {}; - - function getInterleavedBuffer(json, uuid) { - - if (interleavedBufferMap[uuid] !== undefined) return interleavedBufferMap[uuid]; - - const interleavedBuffers = json.interleavedBuffers; - const interleavedBuffer = interleavedBuffers[uuid]; - - const buffer = getArrayBuffer(json, interleavedBuffer.buffer); - - const array = getTypedArray(interleavedBuffer.type, buffer); - const ib = new InterleavedBuffer(array, interleavedBuffer.stride); - ib.uuid = interleavedBuffer.uuid; - - interleavedBufferMap[uuid] = ib; - - return ib; - - } - - function getArrayBuffer(json, uuid) { - - if (arrayBufferMap[uuid] !== undefined) return arrayBufferMap[uuid]; - - const arrayBuffers = json.arrayBuffers; - const arrayBuffer = arrayBuffers[uuid]; - - const ab = new 
Uint32Array(arrayBuffer).buffer; - - arrayBufferMap[uuid] = ab; - - return ab; - - } - - const geometry = json.isInstancedBufferGeometry ? new InstancedBufferGeometry() : new BufferGeometry(); - - const index = json.data.index; - - if (index !== undefined) { - - const typedArray = getTypedArray(index.type, index.array); - geometry.setIndex(new BufferAttribute(typedArray, 1)); - - } - - const attributes = json.data.attributes; - - for (const key in attributes) { - - const attribute = attributes[key]; - let bufferAttribute; - - if (attribute.isInterleavedBufferAttribute) { - - const interleavedBuffer = getInterleavedBuffer(json.data, attribute.data); - bufferAttribute = new InterleavedBufferAttribute(interleavedBuffer, attribute.itemSize, attribute.offset, attribute.normalized); - - } else { - - const typedArray = getTypedArray(attribute.type, attribute.array); - const bufferAttributeConstr = attribute.isInstancedBufferAttribute ? InstancedBufferAttribute : BufferAttribute; - bufferAttribute = new bufferAttributeConstr(typedArray, attribute.itemSize, attribute.normalized); - - } - - if (attribute.name !== undefined) bufferAttribute.name = attribute.name; - if (attribute.usage !== undefined) bufferAttribute.setUsage(attribute.usage); - - if (attribute.updateRange !== undefined) { - - bufferAttribute.updateRange.offset = attribute.updateRange.offset; - bufferAttribute.updateRange.count = attribute.updateRange.count; - - } - - geometry.setAttribute(key, bufferAttribute); - - } - - const morphAttributes = json.data.morphAttributes; - - if (morphAttributes) { - - for (const key in morphAttributes) { - - const attributeArray = morphAttributes[key]; - - const array = []; - - for (let i = 0, il = attributeArray.length; i < il; i++) { - - const attribute = attributeArray[i]; - let bufferAttribute; - - if (attribute.isInterleavedBufferAttribute) { - - const interleavedBuffer = getInterleavedBuffer(json.data, attribute.data); - bufferAttribute = new InterleavedBufferAttribute(interleavedBuffer, attribute.itemSize, attribute.offset, attribute.normalized); - - } else { - - const typedArray = getTypedArray(attribute.type, attribute.array); - bufferAttribute = new BufferAttribute(typedArray, attribute.itemSize, attribute.normalized); - - } - - if (attribute.name !== undefined) bufferAttribute.name = attribute.name; - array.push(bufferAttribute); - - } - - geometry.morphAttributes[key] = array; - - } - - } - - const morphTargetsRelative = json.data.morphTargetsRelative; - - if (morphTargetsRelative) { - - geometry.morphTargetsRelative = true; - - } - - const groups = json.data.groups || json.data.drawcalls || json.data.offsets; - - if (groups !== undefined) { - - for (let i = 0, n = groups.length; i !== n; ++i) { - - const group = groups[i]; - - geometry.addGroup(group.start, group.count, group.materialIndex); - - } - - } - - const boundingSphere = json.data.boundingSphere; - - if (boundingSphere !== undefined) { - - const center = new Vector3(); - - if (boundingSphere.center !== undefined) { - - center.fromArray(boundingSphere.center); - - } - - geometry.boundingSphere = new Sphere(center, boundingSphere.radius); - - } - - if (json.name) geometry.name = json.name; - if (json.userData) geometry.userData = json.userData; - - return geometry; - - } - -} - -class ObjectLoader extends Loader { - - constructor(manager) { - - super(manager); - - } - - load(url, onLoad, onProgress, onError) { - - const scope = this; - - const path = (this.path === '') ? 
LoaderUtils.extractUrlBase(url) : this.path; - this.resourcePath = this.resourcePath || path; - - const loader = new FileLoader(this.manager); - loader.setPath(this.path); - loader.setRequestHeader(this.requestHeader); - loader.setWithCredentials(this.withCredentials); - loader.load(url, function (text) { - - let json = null; - - try { - - json = JSON.parse(text); - - } catch (error) { - - if (onError !== undefined) onError(error); - - console.error('THREE:ObjectLoader: Can\'t parse ' + url + '.', error.message); - - return; - - } - - const metadata = json.metadata; - - if (metadata === undefined || metadata.type === undefined || metadata.type.toLowerCase() === 'geometry') { - - if (onError !== undefined) onError(new Error('THREE.ObjectLoader: Can\'t load ' + url)); - - console.error('THREE.ObjectLoader: Can\'t load ' + url); - return; - - } - - scope.parse(json, onLoad); - - }, onProgress, onError); - - } - - async loadAsync(url, onProgress) { - - const scope = this; - - const path = (this.path === '') ? LoaderUtils.extractUrlBase(url) : this.path; - this.resourcePath = this.resourcePath || path; - - const loader = new FileLoader(this.manager); - loader.setPath(this.path); - loader.setRequestHeader(this.requestHeader); - loader.setWithCredentials(this.withCredentials); - - const text = await loader.loadAsync(url, onProgress); - - const json = JSON.parse(text); - - const metadata = json.metadata; - - if (metadata === undefined || metadata.type === undefined || metadata.type.toLowerCase() === 'geometry') { - - throw new Error('THREE.ObjectLoader: Can\'t load ' + url); - - } - - return await scope.parseAsync(json); - - } - - parse(json, onLoad) { - - const animations = this.parseAnimations(json.animations); - const shapes = this.parseShapes(json.shapes); - const geometries = this.parseGeometries(json.geometries, shapes); - - const images = this.parseImages(json.images, function () { - - if (onLoad !== undefined) onLoad(object); - - }); - - const textures = this.parseTextures(json.textures, images); - const materials = this.parseMaterials(json.materials, textures); - - const object = this.parseObject(json.object, geometries, materials, textures, animations); - const skeletons = this.parseSkeletons(json.skeletons, object); - - this.bindSkeletons(object, skeletons); - - // - - if (onLoad !== undefined) { - - let hasImages = false; - - for (const uuid in images) { - - if (images[uuid].data instanceof HTMLImageElement) { - - hasImages = true; - break; - - } - - } - - if (hasImages === false) onLoad(object); - - } - - return object; - - } - - async parseAsync(json) { - - const animations = this.parseAnimations(json.animations); - const shapes = this.parseShapes(json.shapes); - const geometries = this.parseGeometries(json.geometries, shapes); - - const images = await this.parseImagesAsync(json.images); - - const textures = this.parseTextures(json.textures, images); - const materials = this.parseMaterials(json.materials, textures); - - const object = this.parseObject(json.object, geometries, materials, textures, animations); - const skeletons = this.parseSkeletons(json.skeletons, object); - - this.bindSkeletons(object, skeletons); - - return object; - - } - - parseShapes(json) { - - const shapes = {}; - - if (json !== undefined) { - - for (let i = 0, l = json.length; i < l; i++) { - - const shape = new Shape().fromJSON(json[i]); - - shapes[shape.uuid] = shape; - - } - - } - - return shapes; - - } - - parseSkeletons(json, object) { - - const skeletons = {}; - const bones = {}; - - // generate bone 
lookup table - - object.traverse(function (child) { - - if (child.isBone) bones[child.uuid] = child; - - }); - - // create skeletons - - if (json !== undefined) { - - for (let i = 0, l = json.length; i < l; i++) { - - const skeleton = new Skeleton().fromJSON(json[i], bones); - - skeletons[skeleton.uuid] = skeleton; - - } - - } - - return skeletons; - - } - - parseGeometries(json, shapes) { - - const geometries = {}; - - if (json !== undefined) { - - const bufferGeometryLoader = new BufferGeometryLoader(); - - for (let i = 0, l = json.length; i < l; i++) { - - let geometry; - const data = json[i]; - - switch (data.type) { - - case 'BufferGeometry': - case 'InstancedBufferGeometry': - - geometry = bufferGeometryLoader.parse(data); - break; - - default: - - if (data.type in Geometries) { - - geometry = Geometries[data.type].fromJSON(data, shapes); - - } else { - - console.warn(`THREE.ObjectLoader: Unsupported geometry type "${data.type}"`); - - } - - } - - geometry.uuid = data.uuid; - - if (data.name !== undefined) geometry.name = data.name; - if (geometry.isBufferGeometry === true && data.userData !== undefined) geometry.userData = data.userData; - - geometries[data.uuid] = geometry; - - } - - } - - return geometries; - - } - - parseMaterials(json, textures) { - - const cache = {}; // MultiMaterial - const materials = {}; - - if (json !== undefined) { - - const loader = new MaterialLoader(); - loader.setTextures(textures); - - for (let i = 0, l = json.length; i < l; i++) { - - const data = json[i]; - - if (cache[data.uuid] === undefined) { - - cache[data.uuid] = loader.parse(data); - - } - - materials[data.uuid] = cache[data.uuid]; - - } - - } - - return materials; - - } - - parseAnimations(json) { - - const animations = {}; - - if (json !== undefined) { - - for (let i = 0; i < json.length; i++) { - - const data = json[i]; - - const clip = AnimationClip.parse(data); - - animations[clip.uuid] = clip; - - } - - } - - return animations; - - } - - parseImages(json, onLoad) { - - const scope = this; - const images = {}; - - let loader; - - function loadImage(url) { - - scope.manager.itemStart(url); - - return loader.load(url, function () { - - scope.manager.itemEnd(url); - - }, undefined, function () { - - scope.manager.itemError(url); - scope.manager.itemEnd(url); - - }); - - } - - function deserializeImage(image) { - - if (typeof image === 'string') { - - const url = image; - - const path = /^(\/\/)|([a-z]+:(\/\/)?)/i.test(url) ? 
url : scope.resourcePath + url; - - return loadImage(path); - - } else { - - if (image.data) { - - return { - data: getTypedArray(image.type, image.data), - width: image.width, - height: image.height - }; - - } else { - - return null; - - } - - } - - } - - if (json !== undefined && json.length > 0) { - - const manager = new LoadingManager(onLoad); - - loader = new ImageLoader(manager); - loader.setCrossOrigin(this.crossOrigin); - - for (let i = 0, il = json.length; i < il; i++) { - - const image = json[i]; - const url = image.url; - - if (Array.isArray(url)) { - - // load array of images e.g CubeTexture - - const imageArray = []; - - for (let j = 0, jl = url.length; j < jl; j++) { - - const currentUrl = url[j]; - - const deserializedImage = deserializeImage(currentUrl); - - if (deserializedImage !== null) { - - if (deserializedImage instanceof HTMLImageElement) { - - imageArray.push(deserializedImage); - - } else { - - // special case: handle array of data textures for cube textures - - imageArray.push(new DataTexture(deserializedImage.data, deserializedImage.width, deserializedImage.height)); - - } - - } - - } - - images[image.uuid] = new Source(imageArray); - - } else { - - // load single image - - const deserializedImage = deserializeImage(image.url); - images[image.uuid] = new Source(deserializedImage); - - - } - - } - - } - - return images; - - } - - async parseImagesAsync(json) { - - const scope = this; - const images = {}; - - let loader; - - async function deserializeImage(image) { - - if (typeof image === 'string') { - - const url = image; - - const path = /^(\/\/)|([a-z]+:(\/\/)?)/i.test(url) ? url : scope.resourcePath + url; - - return await loader.loadAsync(path); - - } else { - - if (image.data) { - - return { - data: getTypedArray(image.type, image.data), - width: image.width, - height: image.height - }; - - } else { - - return null; - - } - - } - - } - - if (json !== undefined && json.length > 0) { - - loader = new ImageLoader(this.manager); - loader.setCrossOrigin(this.crossOrigin); - - for (let i = 0, il = json.length; i < il; i++) { - - const image = json[i]; - const url = image.url; - - if (Array.isArray(url)) { - - // load array of images e.g CubeTexture - - const imageArray = []; - - for (let j = 0, jl = url.length; j < jl; j++) { - - const currentUrl = url[j]; - - const deserializedImage = await deserializeImage(currentUrl); - - if (deserializedImage !== null) { - - if (deserializedImage instanceof HTMLImageElement) { - - imageArray.push(deserializedImage); - - } else { - - // special case: handle array of data textures for cube textures - - imageArray.push(new DataTexture(deserializedImage.data, deserializedImage.width, deserializedImage.height)); - - } - - } - - } - - images[image.uuid] = new Source(imageArray); - - } else { - - // load single image - - const deserializedImage = await deserializeImage(image.url); - images[image.uuid] = new Source(deserializedImage); - - } - - } - - } - - return images; - - } - - parseTextures(json, images) { - - function parseConstant(value, type) { - - if (typeof value === 'number') return value; - - console.warn('THREE.ObjectLoader.parseTexture: Constant should be in numeric form.', value); - - return type[value]; - - } - - const textures = {}; - - if (json !== undefined) { - - for (let i = 0, l = json.length; i < l; i++) { - - const data = json[i]; - - if (data.image === undefined) { - - console.warn('THREE.ObjectLoader: No "image" specified for', data.uuid); - - } - - if (images[data.image] === undefined) { - - 
console.warn('THREE.ObjectLoader: Undefined image', data.image); - - } - - const source = images[data.image]; - const image = source.data; - - let texture; - - if (Array.isArray(image)) { - - texture = new CubeTexture(); - - if (image.length === 6) texture.needsUpdate = true; - - } else { - - if (image && image.data) { - - texture = new DataTexture(); - - } else { - - texture = new Texture(); - - } - - if (image) texture.needsUpdate = true; // textures can have undefined image data - - } - - texture.source = source; - - texture.uuid = data.uuid; - - if (data.name !== undefined) texture.name = data.name; - - if (data.mapping !== undefined) texture.mapping = parseConstant(data.mapping, TEXTURE_MAPPING); - - if (data.offset !== undefined) texture.offset.fromArray(data.offset); - if (data.repeat !== undefined) texture.repeat.fromArray(data.repeat); - if (data.center !== undefined) texture.center.fromArray(data.center); - if (data.rotation !== undefined) texture.rotation = data.rotation; - - if (data.wrap !== undefined) { - - texture.wrapS = parseConstant(data.wrap[0], TEXTURE_WRAPPING); - texture.wrapT = parseConstant(data.wrap[1], TEXTURE_WRAPPING); - - } - - if (data.format !== undefined) texture.format = data.format; - if (data.type !== undefined) texture.type = data.type; - if (data.encoding !== undefined) texture.encoding = data.encoding; - - if (data.minFilter !== undefined) texture.minFilter = parseConstant(data.minFilter, TEXTURE_FILTER); - if (data.magFilter !== undefined) texture.magFilter = parseConstant(data.magFilter, TEXTURE_FILTER); - if (data.anisotropy !== undefined) texture.anisotropy = data.anisotropy; - - if (data.flipY !== undefined) texture.flipY = data.flipY; - - if (data.generateMipmaps !== undefined) texture.generateMipmaps = data.generateMipmaps; - if (data.premultiplyAlpha !== undefined) texture.premultiplyAlpha = data.premultiplyAlpha; - if (data.unpackAlignment !== undefined) texture.unpackAlignment = data.unpackAlignment; - - if (data.userData !== undefined) texture.userData = data.userData; - - textures[data.uuid] = texture; - - } - - } - - return textures; - - } - - parseObject(data, geometries, materials, textures, animations) { - - let object; - - function getGeometry(name) { - - if (geometries[name] === undefined) { - - console.warn('THREE.ObjectLoader: Undefined geometry', name); - - } - - return geometries[name]; - - } - - function getMaterial(name) { - - if (name === undefined) return undefined; - - if (Array.isArray(name)) { - - const array = []; - - for (let i = 0, l = name.length; i < l; i++) { - - const uuid = name[i]; - - if (materials[uuid] === undefined) { - - console.warn('THREE.ObjectLoader: Undefined material', uuid); - - } - - array.push(materials[uuid]); - - } - - return array; - - } - - if (materials[name] === undefined) { - - console.warn('THREE.ObjectLoader: Undefined material', name); - - } - - return materials[name]; - - } - - function getTexture(uuid) { - - if (textures[uuid] === undefined) { - - console.warn('THREE.ObjectLoader: Undefined texture', uuid); - - } - - return textures[uuid]; - - } - - let geometry, material; - - switch (data.type) { - - case 'Scene': - - object = new Scene(); - - if (data.background !== undefined) { - - if (Number.isInteger(data.background)) { - - object.background = new Color(data.background); - - } else { - - object.background = getTexture(data.background); - - } - - } - - if (data.environment !== undefined) { - - object.environment = getTexture(data.environment); - - } - - if (data.fog !== undefined) { - 
- if (data.fog.type === 'Fog') { - - object.fog = new Fog(data.fog.color, data.fog.near, data.fog.far); - - } else if (data.fog.type === 'FogExp2') { - - object.fog = new FogExp2(data.fog.color, data.fog.density); - - } - - } - - if (data.backgroundBlurriness !== undefined) object.backgroundBlurriness = data.backgroundBlurriness; - if (data.backgroundIntensity !== undefined) object.backgroundIntensity = data.backgroundIntensity; - - break; - - case 'PerspectiveCamera': - - object = new PerspectiveCamera(data.fov, data.aspect, data.near, data.far); - - if (data.focus !== undefined) object.focus = data.focus; - if (data.zoom !== undefined) object.zoom = data.zoom; - if (data.filmGauge !== undefined) object.filmGauge = data.filmGauge; - if (data.filmOffset !== undefined) object.filmOffset = data.filmOffset; - if (data.view !== undefined) object.view = Object.assign({}, data.view); - - break; - - case 'OrthographicCamera': - - object = new OrthographicCamera(data.left, data.right, data.top, data.bottom, data.near, data.far); - - if (data.zoom !== undefined) object.zoom = data.zoom; - if (data.view !== undefined) object.view = Object.assign({}, data.view); - - break; - - case 'AmbientLight': - - object = new AmbientLight(data.color, data.intensity); - - break; - - case 'DirectionalLight': - - object = new DirectionalLight(data.color, data.intensity); - - break; - - case 'PointLight': - - object = new PointLight(data.color, data.intensity, data.distance, data.decay); - - break; - - case 'RectAreaLight': - - object = new RectAreaLight(data.color, data.intensity, data.width, data.height); - - break; - - case 'SpotLight': - - object = new SpotLight(data.color, data.intensity, data.distance, data.angle, data.penumbra, data.decay); - - break; - - case 'HemisphereLight': - - object = new HemisphereLight(data.color, data.groundColor, data.intensity); - - break; - - case 'LightProbe': - - object = new LightProbe().fromJSON(data); - - break; - - case 'SkinnedMesh': - - geometry = getGeometry(data.geometry); - material = getMaterial(data.material); - - object = new SkinnedMesh(geometry, material); - - if (data.bindMode !== undefined) object.bindMode = data.bindMode; - if (data.bindMatrix !== undefined) object.bindMatrix.fromArray(data.bindMatrix); - if (data.skeleton !== undefined) object.skeleton = data.skeleton; - - break; - - case 'Mesh': - - geometry = getGeometry(data.geometry); - material = getMaterial(data.material); - - object = new Mesh(geometry, material); - - break; - - case 'InstancedMesh': - - geometry = getGeometry(data.geometry); - material = getMaterial(data.material); - const count = data.count; - const instanceMatrix = data.instanceMatrix; - const instanceColor = data.instanceColor; - - object = new InstancedMesh(geometry, material, count); - object.instanceMatrix = new InstancedBufferAttribute(new Float32Array(instanceMatrix.array), 16); - if (instanceColor !== undefined) object.instanceColor = new InstancedBufferAttribute(new Float32Array(instanceColor.array), instanceColor.itemSize); - - break; - - case 'LOD': - - object = new LOD(); - - break; - - case 'Line': - - object = new Line(getGeometry(data.geometry), getMaterial(data.material)); - - break; - - case 'LineLoop': - - object = new LineLoop(getGeometry(data.geometry), getMaterial(data.material)); - - break; - - case 'LineSegments': - - object = new LineSegments(getGeometry(data.geometry), getMaterial(data.material)); - - break; - - case 'PointCloud': - case 'Points': - - object = new Points(getGeometry(data.geometry), 
getMaterial(data.material)); - - break; - - case 'Sprite': - - object = new Sprite(getMaterial(data.material)); - - break; - - case 'Group': - - object = new Group(); - - break; - - case 'Bone': - - object = new Bone(); - - break; - - default: - - object = new Object3D(); - - } - - object.uuid = data.uuid; - - if (data.name !== undefined) object.name = data.name; - - if (data.matrix !== undefined) { - - object.matrix.fromArray(data.matrix); - - if (data.matrixAutoUpdate !== undefined) object.matrixAutoUpdate = data.matrixAutoUpdate; - if (object.matrixAutoUpdate) object.matrix.decompose(object.position, object.quaternion, object.scale); - - } else { - - if (data.position !== undefined) object.position.fromArray(data.position); - if (data.rotation !== undefined) object.rotation.fromArray(data.rotation); - if (data.quaternion !== undefined) object.quaternion.fromArray(data.quaternion); - if (data.scale !== undefined) object.scale.fromArray(data.scale); - - } - - if (data.castShadow !== undefined) object.castShadow = data.castShadow; - if (data.receiveShadow !== undefined) object.receiveShadow = data.receiveShadow; - - if (data.shadow) { - - if (data.shadow.bias !== undefined) object.shadow.bias = data.shadow.bias; - if (data.shadow.normalBias !== undefined) object.shadow.normalBias = data.shadow.normalBias; - if (data.shadow.radius !== undefined) object.shadow.radius = data.shadow.radius; - if (data.shadow.mapSize !== undefined) object.shadow.mapSize.fromArray(data.shadow.mapSize); - if (data.shadow.camera !== undefined) object.shadow.camera = this.parseObject(data.shadow.camera); - - } - - if (data.visible !== undefined) object.visible = data.visible; - if (data.frustumCulled !== undefined) object.frustumCulled = data.frustumCulled; - if (data.renderOrder !== undefined) object.renderOrder = data.renderOrder; - if (data.userData !== undefined) object.userData = data.userData; - if (data.layers !== undefined) object.layers.mask = data.layers; - - if (data.children !== undefined) { - - const children = data.children; - - for (let i = 0; i < children.length; i++) { - - object.add(this.parseObject(children[i], geometries, materials, textures, animations)); - - } - - } - - if (data.animations !== undefined) { - - const objectAnimations = data.animations; - - for (let i = 0; i < objectAnimations.length; i++) { - - const uuid = objectAnimations[i]; - - object.animations.push(animations[uuid]); - - } - - } - - if (data.type === 'LOD') { - - if (data.autoUpdate !== undefined) object.autoUpdate = data.autoUpdate; - - const levels = data.levels; - - for (let l = 0; l < levels.length; l++) { - - const level = levels[l]; - const child = object.getObjectByProperty('uuid', level.object); - - if (child !== undefined) { - - object.addLevel(child, level.distance, level.hysteresis); - - } - - } - - } - - return object; - - } - - bindSkeletons(object, skeletons) { - - if (Object.keys(skeletons).length === 0) return; - - object.traverse(function (child) { - - if (child.isSkinnedMesh === true && child.skeleton !== undefined) { - - const skeleton = skeletons[child.skeleton]; - - if (skeleton === undefined) { - - console.warn('THREE.ObjectLoader: No skeleton found with UUID:', child.skeleton); - - } else { - - child.bind(skeleton, child.bindMatrix); - - } - - } - - }); - - } - -} - -const TEXTURE_MAPPING = { - UVMapping: UVMapping, - CubeReflectionMapping: CubeReflectionMapping, - CubeRefractionMapping: CubeRefractionMapping, - EquirectangularReflectionMapping: EquirectangularReflectionMapping, - 
EquirectangularRefractionMapping: EquirectangularRefractionMapping, - CubeUVReflectionMapping: CubeUVReflectionMapping -}; - -const TEXTURE_WRAPPING = { - RepeatWrapping: RepeatWrapping, - ClampToEdgeWrapping: ClampToEdgeWrapping, - MirroredRepeatWrapping: MirroredRepeatWrapping -}; - -const TEXTURE_FILTER = { - NearestFilter: NearestFilter, - NearestMipmapNearestFilter: NearestMipmapNearestFilter, - NearestMipmapLinearFilter: NearestMipmapLinearFilter, - LinearFilter: LinearFilter, - LinearMipmapNearestFilter: LinearMipmapNearestFilter, - LinearMipmapLinearFilter: LinearMipmapLinearFilter -}; - -class ImageBitmapLoader extends Loader { - - constructor(manager) { - - super(manager); - - this.isImageBitmapLoader = true; - - if (typeof createImageBitmap === 'undefined') { - - console.warn('THREE.ImageBitmapLoader: createImageBitmap() not supported.'); - - } - - if (typeof fetch === 'undefined') { - - console.warn('THREE.ImageBitmapLoader: fetch() not supported.'); - - } - - this.options = { premultiplyAlpha: 'none' }; - - } - - setOptions(options) { - - this.options = options; - - return this; - - } - - load(url, onLoad, onProgress, onError) { - - if (url === undefined) url = ''; - - if (this.path !== undefined) url = this.path + url; - - url = this.manager.resolveURL(url); - - const scope = this; - - const cached = Cache.get(url); - - if (cached !== undefined) { - - scope.manager.itemStart(url); - - setTimeout(function () { - - if (onLoad) onLoad(cached); - - scope.manager.itemEnd(url); - - }, 0); - - return cached; - - } - - const fetchOptions = {}; - fetchOptions.credentials = (this.crossOrigin === 'anonymous') ? 'same-origin' : 'include'; - fetchOptions.headers = this.requestHeader; - - fetch(url, fetchOptions).then(function (res) { - - return res.blob(); - - }).then(function (blob) { - - return createImageBitmap(blob, Object.assign(scope.options, { colorSpaceConversion: 'none' })); - - }).then(function (imageBitmap) { - - Cache.add(url, imageBitmap); - - if (onLoad) onLoad(imageBitmap); - - scope.manager.itemEnd(url); - - }).catch(function (e) { - - if (onError) onError(e); - - scope.manager.itemError(url); - scope.manager.itemEnd(url); - - }); - - scope.manager.itemStart(url); - - } - -} - -let _context; - -class AudioContext { - - static getContext() { - - if (_context === undefined) { - - _context = new (window.AudioContext || window.webkitAudioContext)(); - - } - - return _context; - - } - - static setContext(value) { - - _context = value; - - } - -} - -class AudioLoader extends Loader { - - constructor(manager) { - - super(manager); - - } - - load(url, onLoad, onProgress, onError) { - - const scope = this; - - const loader = new FileLoader(this.manager); - loader.setResponseType('arraybuffer'); - loader.setPath(this.path); - loader.setRequestHeader(this.requestHeader); - loader.setWithCredentials(this.withCredentials); - loader.load(url, function (buffer) { - - try { - - // Create a copy of the buffer. The `decodeAudioData` method - // detaches the buffer when complete, preventing reuse. 
- const bufferCopy = buffer.slice(0); - - const context = AudioContext.getContext(); - context.decodeAudioData(bufferCopy, function (audioBuffer) { - - onLoad(audioBuffer); - - }); - - } catch (e) { - - if (onError) { - - onError(e); - - } else { - - console.error(e); - - } - - scope.manager.itemError(url); - - } - - }, onProgress, onError); - - } - -} - -class HemisphereLightProbe extends LightProbe { - - constructor(skyColor, groundColor, intensity = 1) { - - super(undefined, intensity); - - this.isHemisphereLightProbe = true; - - const color1 = new Color().set(skyColor); - const color2 = new Color().set(groundColor); - - const sky = new Vector3(color1.r, color1.g, color1.b); - const ground = new Vector3(color2.r, color2.g, color2.b); - - // without extra factor of PI in the shader, should = 1 / Math.sqrt( Math.PI ); - const c0 = Math.sqrt(Math.PI); - const c1 = c0 * Math.sqrt(0.75); - - this.sh.coefficients[0].copy(sky).add(ground).multiplyScalar(c0); - this.sh.coefficients[1].copy(sky).sub(ground).multiplyScalar(c1); - - } - -} - -class AmbientLightProbe extends LightProbe { - - constructor(color, intensity = 1) { - - super(undefined, intensity); - - this.isAmbientLightProbe = true; - - const color1 = new Color().set(color); - - // without extra factor of PI in the shader, would be 2 / Math.sqrt( Math.PI ); - this.sh.coefficients[0].set(color1.r, color1.g, color1.b).multiplyScalar(2 * Math.sqrt(Math.PI)); - - } - -} - -const _eyeRight = /*@__PURE__*/ new Matrix4(); -const _eyeLeft = /*@__PURE__*/ new Matrix4(); -const _projectionMatrix = /*@__PURE__*/ new Matrix4(); - -class StereoCamera { - - constructor() { - - this.type = 'StereoCamera'; - - this.aspect = 1; - - this.eyeSep = 0.064; - - this.cameraL = new PerspectiveCamera(); - this.cameraL.layers.enable(1); - this.cameraL.matrixAutoUpdate = false; - - this.cameraR = new PerspectiveCamera(); - this.cameraR.layers.enable(2); - this.cameraR.matrixAutoUpdate = false; - - this._cache = { - focus: null, - fov: null, - aspect: null, - near: null, - far: null, - zoom: null, - eyeSep: null - }; - - } - - update(camera) { - - const cache = this._cache; - - const needsUpdate = cache.focus !== camera.focus || cache.fov !== camera.fov || - cache.aspect !== camera.aspect * this.aspect || cache.near !== camera.near || - cache.far !== camera.far || cache.zoom !== camera.zoom || cache.eyeSep !== this.eyeSep; - - if (needsUpdate) { - - cache.focus = camera.focus; - cache.fov = camera.fov; - cache.aspect = camera.aspect * this.aspect; - cache.near = camera.near; - cache.far = camera.far; - cache.zoom = camera.zoom; - cache.eyeSep = this.eyeSep; - - // Off-axis stereoscopic effect based on - // http://paulbourke.net/stereographics/stereorender/ - - _projectionMatrix.copy(camera.projectionMatrix); - const eyeSepHalf = cache.eyeSep / 2; - const eyeSepOnProjection = eyeSepHalf * cache.near / cache.focus; - const ymax = (cache.near * Math.tan(DEG2RAD * cache.fov * 0.5)) / cache.zoom; - let xmin, xmax; - - // translate xOffset - - _eyeLeft.elements[12] = - eyeSepHalf; - _eyeRight.elements[12] = eyeSepHalf; - - // for left eye - - xmin = - ymax * cache.aspect + eyeSepOnProjection; - xmax = ymax * cache.aspect + eyeSepOnProjection; - - _projectionMatrix.elements[0] = 2 * cache.near / (xmax - xmin); - _projectionMatrix.elements[8] = (xmax + xmin) / (xmax - xmin); - - this.cameraL.projectionMatrix.copy(_projectionMatrix); - - // for right eye - - xmin = - ymax * cache.aspect - eyeSepOnProjection; - xmax = ymax * cache.aspect - eyeSepOnProjection; - - 
_projectionMatrix.elements[0] = 2 * cache.near / (xmax - xmin); - _projectionMatrix.elements[8] = (xmax + xmin) / (xmax - xmin); - - this.cameraR.projectionMatrix.copy(_projectionMatrix); - - } - - this.cameraL.matrixWorld.copy(camera.matrixWorld).multiply(_eyeLeft); - this.cameraR.matrixWorld.copy(camera.matrixWorld).multiply(_eyeRight); - - } - -} - -class Clock { - - constructor(autoStart = true) { - - this.autoStart = autoStart; - - this.startTime = 0; - this.oldTime = 0; - this.elapsedTime = 0; - - this.running = false; - - } - - start() { - - this.startTime = now(); - - this.oldTime = this.startTime; - this.elapsedTime = 0; - this.running = true; - - } - - stop() { - - this.getElapsedTime(); - this.running = false; - this.autoStart = false; - - } - - getElapsedTime() { - - this.getDelta(); - return this.elapsedTime; - - } - - getDelta() { - - let diff = 0; - - if (this.autoStart && !this.running) { - - this.start(); - return 0; - - } - - if (this.running) { - - const newTime = now(); - - diff = (newTime - this.oldTime) / 1000; - this.oldTime = newTime; - - this.elapsedTime += diff; - - } - - return diff; - - } - -} - -function now() { - - return (typeof performance === 'undefined' ? Date : performance).now(); // see #10732 - -} - -const _position$1 = /*@__PURE__*/ new Vector3(); -const _quaternion$1 = /*@__PURE__*/ new Quaternion(); -const _scale$1 = /*@__PURE__*/ new Vector3(); -const _orientation$1 = /*@__PURE__*/ new Vector3(); - -class AudioListener extends Object3D { - - constructor() { - - super(); - - this.type = 'AudioListener'; - - this.context = AudioContext.getContext(); - - this.gain = this.context.createGain(); - this.gain.connect(this.context.destination); - - this.filter = null; - - this.timeDelta = 0; - - // private - - this._clock = new Clock(); - - } - - getInput() { - - return this.gain; - - } - - removeFilter() { - - if (this.filter !== null) { - - this.gain.disconnect(this.filter); - this.filter.disconnect(this.context.destination); - this.gain.connect(this.context.destination); - this.filter = null; - - } - - return this; - - } - - getFilter() { - - return this.filter; - - } - - setFilter(value) { - - if (this.filter !== null) { - - this.gain.disconnect(this.filter); - this.filter.disconnect(this.context.destination); - - } else { - - this.gain.disconnect(this.context.destination); - - } - - this.filter = value; - this.gain.connect(this.filter); - this.filter.connect(this.context.destination); - - return this; - - } - - getMasterVolume() { - - return this.gain.gain.value; - - } - - setMasterVolume(value) { - - this.gain.gain.setTargetAtTime(value, this.context.currentTime, 0.01); - - return this; - - } - - updateMatrixWorld(force) { - - super.updateMatrixWorld(force); - - const listener = this.context.listener; - const up = this.up; - - this.timeDelta = this._clock.getDelta(); - - this.matrixWorld.decompose(_position$1, _quaternion$1, _scale$1); - - _orientation$1.set(0, 0, - 1).applyQuaternion(_quaternion$1); - - if (listener.positionX) { - - // code path for Chrome (see #14393) - - const endTime = this.context.currentTime + this.timeDelta; - - listener.positionX.linearRampToValueAtTime(_position$1.x, endTime); - listener.positionY.linearRampToValueAtTime(_position$1.y, endTime); - listener.positionZ.linearRampToValueAtTime(_position$1.z, endTime); - listener.forwardX.linearRampToValueAtTime(_orientation$1.x, endTime); - listener.forwardY.linearRampToValueAtTime(_orientation$1.y, endTime); - listener.forwardZ.linearRampToValueAtTime(_orientation$1.z, endTime); 
- listener.upX.linearRampToValueAtTime(up.x, endTime); - listener.upY.linearRampToValueAtTime(up.y, endTime); - listener.upZ.linearRampToValueAtTime(up.z, endTime); - - } else { - - listener.setPosition(_position$1.x, _position$1.y, _position$1.z); - listener.setOrientation(_orientation$1.x, _orientation$1.y, _orientation$1.z, up.x, up.y, up.z); - - } - - } - -} - -class Audio extends Object3D { - - constructor(listener) { - - super(); - - this.type = 'Audio'; - - this.listener = listener; - this.context = listener.context; - - this.gain = this.context.createGain(); - this.gain.connect(listener.getInput()); - - this.autoplay = false; - - this.buffer = null; - this.detune = 0; - this.loop = false; - this.loopStart = 0; - this.loopEnd = 0; - this.offset = 0; - this.duration = undefined; - this.playbackRate = 1; - this.isPlaying = false; - this.hasPlaybackControl = true; - this.source = null; - this.sourceType = 'empty'; - - this._startedAt = 0; - this._progress = 0; - this._connected = false; - - this.filters = []; - - } - - getOutput() { - - return this.gain; - - } - - setNodeSource(audioNode) { - - this.hasPlaybackControl = false; - this.sourceType = 'audioNode'; - this.source = audioNode; - this.connect(); - - return this; - - } - - setMediaElementSource(mediaElement) { - - this.hasPlaybackControl = false; - this.sourceType = 'mediaNode'; - this.source = this.context.createMediaElementSource(mediaElement); - this.connect(); - - return this; - - } - - setMediaStreamSource(mediaStream) { - - this.hasPlaybackControl = false; - this.sourceType = 'mediaStreamNode'; - this.source = this.context.createMediaStreamSource(mediaStream); - this.connect(); - - return this; - - } - - setBuffer(audioBuffer) { - - this.buffer = audioBuffer; - this.sourceType = 'buffer'; - - if (this.autoplay) this.play(); - - return this; - - } - - play(delay = 0) { - - if (this.isPlaying === true) { - - console.warn('THREE.Audio: Audio is already playing.'); - return; - - } - - if (this.hasPlaybackControl === false) { - - console.warn('THREE.Audio: this Audio has no playback control.'); - return; - - } - - this._startedAt = this.context.currentTime + delay; - - const source = this.context.createBufferSource(); - source.buffer = this.buffer; - source.loop = this.loop; - source.loopStart = this.loopStart; - source.loopEnd = this.loopEnd; - source.onended = this.onEnded.bind(this); - source.start(this._startedAt, this._progress + this.offset, this.duration); - - this.isPlaying = true; - - this.source = source; - - this.setDetune(this.detune); - this.setPlaybackRate(this.playbackRate); - - return this.connect(); - - } - - pause() { - - if (this.hasPlaybackControl === false) { - - console.warn('THREE.Audio: this Audio has no playback control.'); - return; - - } - - if (this.isPlaying === true) { - - // update current progress - - this._progress += Math.max(this.context.currentTime - this._startedAt, 0) * this.playbackRate; - - if (this.loop === true) { - - // ensure _progress does not exceed duration with looped audios - - this._progress = this._progress % (this.duration || this.buffer.duration); - - } - - this.source.stop(); - this.source.onended = null; - - this.isPlaying = false; - - } - - return this; - - } - - stop() { - - if (this.hasPlaybackControl === false) { - - console.warn('THREE.Audio: this Audio has no playback control.'); - return; - - } - - this._progress = 0; - - this.source.stop(); - this.source.onended = null; - this.isPlaying = false; - - return this; - - } - - connect() { - - if (this.filters.length > 0) 
{ - - this.source.connect(this.filters[0]); - - for (let i = 1, l = this.filters.length; i < l; i++) { - - this.filters[i - 1].connect(this.filters[i]); - - } - - this.filters[this.filters.length - 1].connect(this.getOutput()); - - } else { - - this.source.connect(this.getOutput()); - - } - - this._connected = true; - - return this; - - } - - disconnect() { - - if (this.filters.length > 0) { - - this.source.disconnect(this.filters[0]); - - for (let i = 1, l = this.filters.length; i < l; i++) { - - this.filters[i - 1].disconnect(this.filters[i]); - - } - - this.filters[this.filters.length - 1].disconnect(this.getOutput()); - - } else { - - this.source.disconnect(this.getOutput()); - - } - - this._connected = false; - - return this; - - } - - getFilters() { - - return this.filters; - - } - - setFilters(value) { - - if (!value) value = []; - - if (this._connected === true) { - - this.disconnect(); - this.filters = value.slice(); - this.connect(); - - } else { - - this.filters = value.slice(); - - } - - return this; - - } - - setDetune(value) { - - this.detune = value; - - if (this.source.detune === undefined) return; // only set detune when available - - if (this.isPlaying === true) { - - this.source.detune.setTargetAtTime(this.detune, this.context.currentTime, 0.01); - - } - - return this; - - } - - getDetune() { - - return this.detune; - - } - - getFilter() { - - return this.getFilters()[0]; - - } - - setFilter(filter) { - - return this.setFilters(filter ? [filter] : []); - - } - - setPlaybackRate(value) { - - if (this.hasPlaybackControl === false) { - - console.warn('THREE.Audio: this Audio has no playback control.'); - return; - - } - - this.playbackRate = value; - - if (this.isPlaying === true) { - - this.source.playbackRate.setTargetAtTime(this.playbackRate, this.context.currentTime, 0.01); - - } - - return this; - - } - - getPlaybackRate() { - - return this.playbackRate; - - } - - onEnded() { - - this.isPlaying = false; - - } - - getLoop() { - - if (this.hasPlaybackControl === false) { - - console.warn('THREE.Audio: this Audio has no playback control.'); - return false; - - } - - return this.loop; - - } - - setLoop(value) { - - if (this.hasPlaybackControl === false) { - - console.warn('THREE.Audio: this Audio has no playback control.'); - return; - - } - - this.loop = value; - - if (this.isPlaying === true) { - - this.source.loop = this.loop; - - } - - return this; - - } - - setLoopStart(value) { - - this.loopStart = value; - - return this; - - } - - setLoopEnd(value) { - - this.loopEnd = value; - - return this; - - } - - getVolume() { - - return this.gain.gain.value; - - } - - setVolume(value) { - - this.gain.gain.setTargetAtTime(value, this.context.currentTime, 0.01); - - return this; - - } - -} - -const _position = /*@__PURE__*/ new Vector3(); -const _quaternion = /*@__PURE__*/ new Quaternion(); -const _scale = /*@__PURE__*/ new Vector3(); -const _orientation = /*@__PURE__*/ new Vector3(); - -class PositionalAudio extends Audio { - - constructor(listener) { - - super(listener); - - this.panner = this.context.createPanner(); - this.panner.panningModel = 'HRTF'; - this.panner.connect(this.gain); - - } - - disconnect() { - - super.disconnect(); - - this.panner.disconnect(this.gain); - - } - - getOutput() { - - return this.panner; - - } - - getRefDistance() { - - return this.panner.refDistance; - - } - - setRefDistance(value) { - - this.panner.refDistance = value; - - return this; - - } - - getRolloffFactor() { - - return this.panner.rolloffFactor; - - } - - setRolloffFactor(value) { - 
- this.panner.rolloffFactor = value; - - return this; - - } - - getDistanceModel() { - - return this.panner.distanceModel; - - } - - setDistanceModel(value) { - - this.panner.distanceModel = value; - - return this; - - } - - getMaxDistance() { - - return this.panner.maxDistance; - - } - - setMaxDistance(value) { - - this.panner.maxDistance = value; - - return this; - - } - - setDirectionalCone(coneInnerAngle, coneOuterAngle, coneOuterGain) { - - this.panner.coneInnerAngle = coneInnerAngle; - this.panner.coneOuterAngle = coneOuterAngle; - this.panner.coneOuterGain = coneOuterGain; - - return this; - - } - - updateMatrixWorld(force) { - - super.updateMatrixWorld(force); - - if (this.hasPlaybackControl === true && this.isPlaying === false) return; - - this.matrixWorld.decompose(_position, _quaternion, _scale); - - _orientation.set(0, 0, 1).applyQuaternion(_quaternion); - - const panner = this.panner; - - if (panner.positionX) { - - // code path for Chrome and Firefox (see #14393) - - const endTime = this.context.currentTime + this.listener.timeDelta; - - panner.positionX.linearRampToValueAtTime(_position.x, endTime); - panner.positionY.linearRampToValueAtTime(_position.y, endTime); - panner.positionZ.linearRampToValueAtTime(_position.z, endTime); - panner.orientationX.linearRampToValueAtTime(_orientation.x, endTime); - panner.orientationY.linearRampToValueAtTime(_orientation.y, endTime); - panner.orientationZ.linearRampToValueAtTime(_orientation.z, endTime); - - } else { - - panner.setPosition(_position.x, _position.y, _position.z); - panner.setOrientation(_orientation.x, _orientation.y, _orientation.z); - - } - - } - -} - -class AudioAnalyser { - - constructor(audio, fftSize = 2048) { - - this.analyser = audio.context.createAnalyser(); - this.analyser.fftSize = fftSize; - - this.data = new Uint8Array(this.analyser.frequencyBinCount); - - audio.getOutput().connect(this.analyser); - - } - - - getFrequencyData() { - - this.analyser.getByteFrequencyData(this.data); - - return this.data; - - } - - getAverageFrequency() { - - let value = 0; - const data = this.getFrequencyData(); - - for (let i = 0; i < data.length; i++) { - - value += data[i]; - - } - - return value / data.length; - - } - -} - -class PropertyMixer { - - constructor(binding, typeName, valueSize) { - - this.binding = binding; - this.valueSize = valueSize; - - let mixFunction, - mixFunctionAdditive, - setIdentity; - - // buffer layout: [ incoming | accu0 | accu1 | orig | addAccu | (optional work) ] - // - // interpolators can use .buffer as their .result - // the data then goes to 'incoming' - // - // 'accu0' and 'accu1' are used frame-interleaved for - // the cumulative result and are compared to detect - // changes - // - // 'orig' stores the original state of the property - // - // 'add' is used for additive cumulative results - // - // 'work' is optional and is only present for quaternion types. 
It is used - // to store intermediate quaternion multiplication results - - switch (typeName) { - - case 'quaternion': - mixFunction = this._slerp; - mixFunctionAdditive = this._slerpAdditive; - setIdentity = this._setAdditiveIdentityQuaternion; - - this.buffer = new Float64Array(valueSize * 6); - this._workIndex = 5; - break; - - case 'string': - case 'bool': - mixFunction = this._select; - - // Use the regular mix function and for additive on these types, - // additive is not relevant for non-numeric types - mixFunctionAdditive = this._select; - - setIdentity = this._setAdditiveIdentityOther; - - this.buffer = new Array(valueSize * 5); - break; - - default: - mixFunction = this._lerp; - mixFunctionAdditive = this._lerpAdditive; - setIdentity = this._setAdditiveIdentityNumeric; - - this.buffer = new Float64Array(valueSize * 5); - - } - - this._mixBufferRegion = mixFunction; - this._mixBufferRegionAdditive = mixFunctionAdditive; - this._setIdentity = setIdentity; - this._origIndex = 3; - this._addIndex = 4; - - this.cumulativeWeight = 0; - this.cumulativeWeightAdditive = 0; - - this.useCount = 0; - this.referenceCount = 0; - - } - - // accumulate data in the 'incoming' region into 'accu' - accumulate(accuIndex, weight) { - - // note: happily accumulating nothing when weight = 0, the caller knows - // the weight and shouldn't have made the call in the first place - - const buffer = this.buffer, - stride = this.valueSize, - offset = accuIndex * stride + stride; - - let currentWeight = this.cumulativeWeight; - - if (currentWeight === 0) { - - // accuN := incoming * weight - - for (let i = 0; i !== stride; ++i) { - - buffer[offset + i] = buffer[i]; - - } - - currentWeight = weight; - - } else { - - // accuN := accuN + incoming * weight - - currentWeight += weight; - const mix = weight / currentWeight; - this._mixBufferRegion(buffer, offset, 0, mix, stride); - - } - - this.cumulativeWeight = currentWeight; - - } - - // accumulate data in the 'incoming' region into 'add' - accumulateAdditive(weight) { - - const buffer = this.buffer, - stride = this.valueSize, - offset = stride * this._addIndex; - - if (this.cumulativeWeightAdditive === 0) { - - // add = identity - - this._setIdentity(); - - } - - // add := add + incoming * weight - - this._mixBufferRegionAdditive(buffer, offset, 0, weight, stride); - this.cumulativeWeightAdditive += weight; - - } - - // apply the state of 'accu' to the binding when accus differ - apply(accuIndex) { - - const stride = this.valueSize, - buffer = this.buffer, - offset = accuIndex * stride + stride, - - weight = this.cumulativeWeight, - weightAdditive = this.cumulativeWeightAdditive, - - binding = this.binding; - - this.cumulativeWeight = 0; - this.cumulativeWeightAdditive = 0; - - if (weight < 1) { - - // accuN := accuN + original * ( 1 - cumulativeWeight ) - - const originalValueOffset = stride * this._origIndex; - - this._mixBufferRegion( - buffer, offset, originalValueOffset, 1 - weight, stride); - - } - - if (weightAdditive > 0) { - - // accuN := accuN + additive accuN - - this._mixBufferRegionAdditive(buffer, offset, this._addIndex * stride, 1, stride); - - } - - for (let i = stride, e = stride + stride; i !== e; ++i) { - - if (buffer[i] !== buffer[i + stride]) { - - // value has changed -> update scene graph - - binding.setValue(buffer, offset); - break; - - } - - } - - } - - // remember the state of the bound property and copy it to both accus - saveOriginalState() { - - const binding = this.binding; - - const buffer = this.buffer, - stride = this.valueSize, 
- - originalValueOffset = stride * this._origIndex; - - binding.getValue(buffer, originalValueOffset); - - // accu[0..1] := orig -- initially detect changes against the original - for (let i = stride, e = originalValueOffset; i !== e; ++i) { - - buffer[i] = buffer[originalValueOffset + (i % stride)]; - - } - - // Add to identity for additive - this._setIdentity(); - - this.cumulativeWeight = 0; - this.cumulativeWeightAdditive = 0; - - } - - // apply the state previously taken via 'saveOriginalState' to the binding - restoreOriginalState() { - - const originalValueOffset = this.valueSize * 3; - this.binding.setValue(this.buffer, originalValueOffset); - - } - - _setAdditiveIdentityNumeric() { - - const startIndex = this._addIndex * this.valueSize; - const endIndex = startIndex + this.valueSize; - - for (let i = startIndex; i < endIndex; i++) { - - this.buffer[i] = 0; - - } - - } - - _setAdditiveIdentityQuaternion() { - - this._setAdditiveIdentityNumeric(); - this.buffer[this._addIndex * this.valueSize + 3] = 1; - - } - - _setAdditiveIdentityOther() { - - const startIndex = this._origIndex * this.valueSize; - const targetIndex = this._addIndex * this.valueSize; - - for (let i = 0; i < this.valueSize; i++) { - - this.buffer[targetIndex + i] = this.buffer[startIndex + i]; - - } - - } - - - // mix functions - - _select(buffer, dstOffset, srcOffset, t, stride) { - - if (t >= 0.5) { - - for (let i = 0; i !== stride; ++i) { - - buffer[dstOffset + i] = buffer[srcOffset + i]; - - } - - } - - } - - _slerp(buffer, dstOffset, srcOffset, t) { - - Quaternion.slerpFlat(buffer, dstOffset, buffer, dstOffset, buffer, srcOffset, t); - - } - - _slerpAdditive(buffer, dstOffset, srcOffset, t, stride) { - - const workOffset = this._workIndex * stride; - - // Store result in intermediate buffer offset - Quaternion.multiplyQuaternionsFlat(buffer, workOffset, buffer, dstOffset, buffer, srcOffset); - - // Slerp to the intermediate result - Quaternion.slerpFlat(buffer, dstOffset, buffer, dstOffset, buffer, workOffset, t); - - } - - _lerp(buffer, dstOffset, srcOffset, t, stride) { - - const s = 1 - t; - - for (let i = 0; i !== stride; ++i) { - - const j = dstOffset + i; - - buffer[j] = buffer[j] * s + buffer[srcOffset + i] * t; - - } - - } - - _lerpAdditive(buffer, dstOffset, srcOffset, t, stride) { - - for (let i = 0; i !== stride; ++i) { - - const j = dstOffset + i; - - buffer[j] = buffer[j] + buffer[srcOffset + i] * t; - - } - - } - -} - -// Characters [].:/ are reserved for track binding syntax. -const _RESERVED_CHARS_RE = '\\[\\]\\.:\\/'; -const _reservedRe = new RegExp('[' + _RESERVED_CHARS_RE + ']', 'g'); - -// Attempts to allow node names from any language. ES5's `\w` regexp matches -// only latin characters, and the unicode \p{L} is not yet supported. So -// instead, we exclude reserved characters and match everything else. -const _wordChar = '[^' + _RESERVED_CHARS_RE + ']'; -const _wordCharOrDot = '[^' + _RESERVED_CHARS_RE.replace('\\.', '') + ']'; - -// Parent directories, delimited by '/' or ':'. Currently unused, but must -// be matched to parse the rest of the track name. -const _directoryRe = /*@__PURE__*/ /((?:WC+[\/:])*)/.source.replace('WC', _wordChar); - -// Target node. May contain word characters (a-zA-Z0-9_) and '.' or '-'. -const _nodeRe = /*@__PURE__*/ /(WCOD+)?/.source.replace('WCOD', _wordCharOrDot); - -// Object on target node, and accessor. May not contain reserved -// characters. Accessor may contain any character except closing bracket. 
-const _objectRe = /*@__PURE__*/ /(?:\.(WC+)(?:\[(.+)\])?)?/.source.replace('WC', _wordChar); - -// Property and accessor. May not contain reserved characters. Accessor may -// contain any non-bracket characters. -const _propertyRe = /*@__PURE__*/ /\.(WC+)(?:\[(.+)\])?/.source.replace('WC', _wordChar); - -const _trackRe = new RegExp('' - + '^' - + _directoryRe - + _nodeRe - + _objectRe - + _propertyRe - + '$' -); - -const _supportedObjectNames = ['material', 'materials', 'bones', 'map']; - -class Composite { - - constructor(targetGroup, path, optionalParsedPath) { - - const parsedPath = optionalParsedPath || PropertyBinding.parseTrackName(path); - - this._targetGroup = targetGroup; - this._bindings = targetGroup.subscribe_(path, parsedPath); - - } - - getValue(array, offset) { - - this.bind(); // bind all binding - - const firstValidIndex = this._targetGroup.nCachedObjects_, - binding = this._bindings[firstValidIndex]; - - // and only call .getValue on the first - if (binding !== undefined) binding.getValue(array, offset); - - } - - setValue(array, offset) { - - const bindings = this._bindings; - - for (let i = this._targetGroup.nCachedObjects_, n = bindings.length; i !== n; ++i) { - - bindings[i].setValue(array, offset); - - } - - } - - bind() { - - const bindings = this._bindings; - - for (let i = this._targetGroup.nCachedObjects_, n = bindings.length; i !== n; ++i) { - - bindings[i].bind(); - - } - - } - - unbind() { - - const bindings = this._bindings; - - for (let i = this._targetGroup.nCachedObjects_, n = bindings.length; i !== n; ++i) { - - bindings[i].unbind(); - - } - - } - -} - -// Note: This class uses a State pattern on a per-method basis: -// 'bind' sets 'this.getValue' / 'setValue' and shadows the -// prototype version of these methods with one that represents -// the bound state. When the property is not found, the methods -// become no-ops. -class PropertyBinding { - - constructor(rootNode, path, parsedPath) { - - this.path = path; - this.parsedPath = parsedPath || PropertyBinding.parseTrackName(path); - - this.node = PropertyBinding.findNode(rootNode, this.parsedPath.nodeName) || rootNode; - - this.rootNode = rootNode; - - // initial state of these methods that calls 'bind' - this.getValue = this._getValue_unbound; - this.setValue = this._setValue_unbound; - - } - - - static create(root, path, parsedPath) { - - if (!(root && root.isAnimationObjectGroup)) { - - return new PropertyBinding(root, path, parsedPath); - - } else { - - return new PropertyBinding.Composite(root, path, parsedPath); - - } - - } - - /** - * Replaces spaces with underscores and removes unsupported characters from - * node names, to ensure compatibility with parseTrackName(). - * - * @param {string} name Node name to be sanitized. 
- * @return {string} - */ - static sanitizeNodeName(name) { - - return name.replace(/\s/g, '_').replace(_reservedRe, ''); - - } - - static parseTrackName(trackName) { - - const matches = _trackRe.exec(trackName); - - if (matches === null) { - - throw new Error('PropertyBinding: Cannot parse trackName: ' + trackName); - - } - - const results = { - // directoryName: matches[ 1 ], // (tschw) currently unused - nodeName: matches[2], - objectName: matches[3], - objectIndex: matches[4], - propertyName: matches[5], // required - propertyIndex: matches[6] - }; - - const lastDot = results.nodeName && results.nodeName.lastIndexOf('.'); - - if (lastDot !== undefined && lastDot !== - 1) { - - const objectName = results.nodeName.substring(lastDot + 1); - - // Object names must be checked against an allowlist. Otherwise, there - // is no way to parse 'foo.bar.baz': 'baz' must be a property, but - // 'bar' could be the objectName, or part of a nodeName (which can - // include '.' characters). - if (_supportedObjectNames.indexOf(objectName) !== - 1) { - - results.nodeName = results.nodeName.substring(0, lastDot); - results.objectName = objectName; - - } - - } - - if (results.propertyName === null || results.propertyName.length === 0) { - - throw new Error('PropertyBinding: can not parse propertyName from trackName: ' + trackName); - - } - - return results; - - } - - static findNode(root, nodeName) { - - if (nodeName === undefined || nodeName === '' || nodeName === '.' || nodeName === - 1 || nodeName === root.name || nodeName === root.uuid) { - - return root; - - } - - // search into skeleton bones. - if (root.skeleton) { - - const bone = root.skeleton.getBoneByName(nodeName); - - if (bone !== undefined) { - - return bone; - - } - - } - - // search into node subtree. 
- if (root.children) { - - const searchNodeSubtree = function (children) { - - for (let i = 0; i < children.length; i++) { - - const childNode = children[i]; - - if (childNode.name === nodeName || childNode.uuid === nodeName) { - - return childNode; - - } - - const result = searchNodeSubtree(childNode.children); - - if (result) return result; - - } - - return null; - - }; - - const subTreeNode = searchNodeSubtree(root.children); - - if (subTreeNode) { - - return subTreeNode; - - } - - } - - return null; - - } - - // these are used to "bind" a nonexistent property - _getValue_unavailable() { } - _setValue_unavailable() { } - - // Getters - - _getValue_direct(buffer, offset) { - - buffer[offset] = this.targetObject[this.propertyName]; - - } - - _getValue_array(buffer, offset) { - - const source = this.resolvedProperty; - - for (let i = 0, n = source.length; i !== n; ++i) { - - buffer[offset++] = source[i]; - - } - - } - - _getValue_arrayElement(buffer, offset) { - - buffer[offset] = this.resolvedProperty[this.propertyIndex]; - - } - - _getValue_toArray(buffer, offset) { - - this.resolvedProperty.toArray(buffer, offset); - - } - - // Direct - - _setValue_direct(buffer, offset) { - - this.targetObject[this.propertyName] = buffer[offset]; - - } - - _setValue_direct_setNeedsUpdate(buffer, offset) { - - this.targetObject[this.propertyName] = buffer[offset]; - this.targetObject.needsUpdate = true; - - } - - _setValue_direct_setMatrixWorldNeedsUpdate(buffer, offset) { - - this.targetObject[this.propertyName] = buffer[offset]; - this.targetObject.matrixWorldNeedsUpdate = true; - - } - - // EntireArray - - _setValue_array(buffer, offset) { - - const dest = this.resolvedProperty; - - for (let i = 0, n = dest.length; i !== n; ++i) { - - dest[i] = buffer[offset++]; - - } - - } - - _setValue_array_setNeedsUpdate(buffer, offset) { - - const dest = this.resolvedProperty; - - for (let i = 0, n = dest.length; i !== n; ++i) { - - dest[i] = buffer[offset++]; - - } - - this.targetObject.needsUpdate = true; - - } - - _setValue_array_setMatrixWorldNeedsUpdate(buffer, offset) { - - const dest = this.resolvedProperty; - - for (let i = 0, n = dest.length; i !== n; ++i) { - - dest[i] = buffer[offset++]; - - } - - this.targetObject.matrixWorldNeedsUpdate = true; - - } - - // ArrayElement - - _setValue_arrayElement(buffer, offset) { - - this.resolvedProperty[this.propertyIndex] = buffer[offset]; - - } - - _setValue_arrayElement_setNeedsUpdate(buffer, offset) { - - this.resolvedProperty[this.propertyIndex] = buffer[offset]; - this.targetObject.needsUpdate = true; - - } - - _setValue_arrayElement_setMatrixWorldNeedsUpdate(buffer, offset) { - - this.resolvedProperty[this.propertyIndex] = buffer[offset]; - this.targetObject.matrixWorldNeedsUpdate = true; - - } - - // HasToFromArray - - _setValue_fromArray(buffer, offset) { - - this.resolvedProperty.fromArray(buffer, offset); - - } - - _setValue_fromArray_setNeedsUpdate(buffer, offset) { - - this.resolvedProperty.fromArray(buffer, offset); - this.targetObject.needsUpdate = true; - - } - - _setValue_fromArray_setMatrixWorldNeedsUpdate(buffer, offset) { - - this.resolvedProperty.fromArray(buffer, offset); - this.targetObject.matrixWorldNeedsUpdate = true; - - } - - _getValue_unbound(targetArray, offset) { - - this.bind(); - this.getValue(targetArray, offset); - - } - - _setValue_unbound(sourceArray, offset) { - - this.bind(); - this.setValue(sourceArray, offset); - - } - - // create getter / setter pair for a property in the scene graph - bind() { - - let targetObject = 
this.node; - const parsedPath = this.parsedPath; - - const objectName = parsedPath.objectName; - const propertyName = parsedPath.propertyName; - let propertyIndex = parsedPath.propertyIndex; - - if (!targetObject) { - - targetObject = PropertyBinding.findNode(this.rootNode, parsedPath.nodeName) || this.rootNode; - - this.node = targetObject; - - } - - // set fail state so we can just 'return' on error - this.getValue = this._getValue_unavailable; - this.setValue = this._setValue_unavailable; - - // ensure there is a value node - if (!targetObject) { - - console.error('THREE.PropertyBinding: Trying to update node for track: ' + this.path + ' but it wasn\'t found.'); - return; - - } - - if (objectName) { - - let objectIndex = parsedPath.objectIndex; - - // special cases were we need to reach deeper into the hierarchy to get the face materials.... - switch (objectName) { - - case 'materials': - - if (!targetObject.material) { - - console.error('THREE.PropertyBinding: Can not bind to material as node does not have a material.', this); - return; - - } - - if (!targetObject.material.materials) { - - console.error('THREE.PropertyBinding: Can not bind to material.materials as node.material does not have a materials array.', this); - return; - - } - - targetObject = targetObject.material.materials; - - break; - - case 'bones': - - if (!targetObject.skeleton) { - - console.error('THREE.PropertyBinding: Can not bind to bones as node does not have a skeleton.', this); - return; - - } - - // potential future optimization: skip this if propertyIndex is already an integer - // and convert the integer string to a true integer. - - targetObject = targetObject.skeleton.bones; - - // support resolving morphTarget names into indices. - for (let i = 0; i < targetObject.length; i++) { - - if (targetObject[i].name === objectIndex) { - - objectIndex = i; - break; - - } - - } - - break; - - case 'map': - - if ('map' in targetObject) { - - targetObject = targetObject.map; - break; - - } - - if (!targetObject.material) { - - console.error('THREE.PropertyBinding: Can not bind to material as node does not have a material.', this); - return; - - } - - if (!targetObject.material.map) { - - console.error('THREE.PropertyBinding: Can not bind to material.map as node.material does not have a map.', this); - return; - - } - - targetObject = targetObject.material.map; - break; - - default: - - if (targetObject[objectName] === undefined) { - - console.error('THREE.PropertyBinding: Can not bind to objectName of node undefined.', this); - return; - - } - - targetObject = targetObject[objectName]; - - } - - - if (objectIndex !== undefined) { - - if (targetObject[objectIndex] === undefined) { - - console.error('THREE.PropertyBinding: Trying to bind to objectIndex of objectName, but is undefined.', this, targetObject); - return; - - } - - targetObject = targetObject[objectIndex]; - - } - - } - - // resolve property - const nodeProperty = targetObject[propertyName]; - - if (nodeProperty === undefined) { - - const nodeName = parsedPath.nodeName; - - console.error('THREE.PropertyBinding: Trying to update property for track: ' + nodeName + - '.' 
+ propertyName + ' but it wasn\'t found.', targetObject); - return; - - } - - // determine versioning scheme - let versioning = this.Versioning.None; - - this.targetObject = targetObject; - - if (targetObject.needsUpdate !== undefined) { // material - - versioning = this.Versioning.NeedsUpdate; - - } else if (targetObject.matrixWorldNeedsUpdate !== undefined) { // node transform - - versioning = this.Versioning.MatrixWorldNeedsUpdate; - - } - - // determine how the property gets bound - let bindingType = this.BindingType.Direct; - - if (propertyIndex !== undefined) { - - // access a sub element of the property array (only primitives are supported right now) - - if (propertyName === 'morphTargetInfluences') { - - // potential optimization, skip this if propertyIndex is already an integer, and convert the integer string to a true integer. - - // support resolving morphTarget names into indices. - if (!targetObject.geometry) { - - console.error('THREE.PropertyBinding: Can not bind to morphTargetInfluences because node does not have a geometry.', this); - return; - - } - - if (!targetObject.geometry.morphAttributes) { - - console.error('THREE.PropertyBinding: Can not bind to morphTargetInfluences because node does not have a geometry.morphAttributes.', this); - return; - - } - - if (targetObject.morphTargetDictionary[propertyIndex] !== undefined) { - - propertyIndex = targetObject.morphTargetDictionary[propertyIndex]; - - } - - } - - bindingType = this.BindingType.ArrayElement; - - this.resolvedProperty = nodeProperty; - this.propertyIndex = propertyIndex; - - } else if (nodeProperty.fromArray !== undefined && nodeProperty.toArray !== undefined) { - - // must use copy for Object3D.Euler/Quaternion - - bindingType = this.BindingType.HasFromToArray; - - this.resolvedProperty = nodeProperty; - - } else if (Array.isArray(nodeProperty)) { - - bindingType = this.BindingType.EntireArray; - - this.resolvedProperty = nodeProperty; - - } else { - - this.propertyName = propertyName; - - } - - // select getter / setter - this.getValue = this.GetterByBindingType[bindingType]; - this.setValue = this.SetterByBindingTypeAndVersioning[bindingType][versioning]; - - } - - unbind() { - - this.node = null; - - // back to the prototype version of getValue / setValue - // note: avoiding to mutate the shape of 'this' via 'delete' - this.getValue = this._getValue_unbound; - this.setValue = this._setValue_unbound; - - } - -} - -PropertyBinding.Composite = Composite; - -PropertyBinding.prototype.BindingType = { - Direct: 0, - EntireArray: 1, - ArrayElement: 2, - HasFromToArray: 3 -}; - -PropertyBinding.prototype.Versioning = { - None: 0, - NeedsUpdate: 1, - MatrixWorldNeedsUpdate: 2 -}; - -PropertyBinding.prototype.GetterByBindingType = [ - - PropertyBinding.prototype._getValue_direct, - PropertyBinding.prototype._getValue_array, - PropertyBinding.prototype._getValue_arrayElement, - PropertyBinding.prototype._getValue_toArray, - -]; - -PropertyBinding.prototype.SetterByBindingTypeAndVersioning = [ - - [ - // Direct - PropertyBinding.prototype._setValue_direct, - PropertyBinding.prototype._setValue_direct_setNeedsUpdate, - PropertyBinding.prototype._setValue_direct_setMatrixWorldNeedsUpdate, - - ], [ - - // EntireArray - - PropertyBinding.prototype._setValue_array, - PropertyBinding.prototype._setValue_array_setNeedsUpdate, - PropertyBinding.prototype._setValue_array_setMatrixWorldNeedsUpdate, - - ], [ - - // ArrayElement - PropertyBinding.prototype._setValue_arrayElement, - 
PropertyBinding.prototype._setValue_arrayElement_setNeedsUpdate, - PropertyBinding.prototype._setValue_arrayElement_setMatrixWorldNeedsUpdate, - - ], [ - - // HasToFromArray - PropertyBinding.prototype._setValue_fromArray, - PropertyBinding.prototype._setValue_fromArray_setNeedsUpdate, - PropertyBinding.prototype._setValue_fromArray_setMatrixWorldNeedsUpdate, - - ] - -]; - -/** - * - * A group of objects that receives a shared animation state. - * - * Usage: - * - * - Add objects you would otherwise pass as 'root' to the - * constructor or the .clipAction method of AnimationMixer. - * - * - Instead pass this object as 'root'. - * - * - You can also add and remove objects later when the mixer - * is running. - * - * Note: - * - * Objects of this class appear as one object to the mixer, - * so cache control of the individual objects must be done - * on the group. - * - * Limitation: - * - * - The animated properties must be compatible among the - * all objects in the group. - * - * - A single property can either be controlled through a - * target group or directly, but not both. - */ - -class AnimationObjectGroup { - - constructor() { - - this.isAnimationObjectGroup = true; - - this.uuid = generateUUID(); - - // cached objects followed by the active ones - this._objects = Array.prototype.slice.call(arguments); - - this.nCachedObjects_ = 0; // threshold - // note: read by PropertyBinding.Composite - - const indices = {}; - this._indicesByUUID = indices; // for bookkeeping - - for (let i = 0, n = arguments.length; i !== n; ++i) { - - indices[arguments[i].uuid] = i; - - } - - this._paths = []; // inside: string - this._parsedPaths = []; // inside: { we don't care, here } - this._bindings = []; // inside: Array< PropertyBinding > - this._bindingsIndicesByPath = {}; // inside: indices in these arrays - - const scope = this; - - this.stats = { - - objects: { - get total() { - - return scope._objects.length; - - }, - get inUse() { - - return this.total - scope.nCachedObjects_; - - } - }, - get bindingsPerObject() { - - return scope._bindings.length; - - } - - }; - - } - - add() { - - const objects = this._objects, - indicesByUUID = this._indicesByUUID, - paths = this._paths, - parsedPaths = this._parsedPaths, - bindings = this._bindings, - nBindings = bindings.length; - - let knownObject = undefined, - nObjects = objects.length, - nCachedObjects = this.nCachedObjects_; - - for (let i = 0, n = arguments.length; i !== n; ++i) { - - const object = arguments[i], - uuid = object.uuid; - let index = indicesByUUID[uuid]; - - if (index === undefined) { - - // unknown object -> add it to the ACTIVE region - - index = nObjects++; - indicesByUUID[uuid] = index; - objects.push(object); - - // accounting is done, now do the same for all bindings - - for (let j = 0, m = nBindings; j !== m; ++j) { - - bindings[j].push(new PropertyBinding(object, paths[j], parsedPaths[j])); - - } - - } else if (index < nCachedObjects) { - - knownObject = objects[index]; - - // move existing object to the ACTIVE region - - const firstActiveIndex = --nCachedObjects, - lastCachedObject = objects[firstActiveIndex]; - - indicesByUUID[lastCachedObject.uuid] = index; - objects[index] = lastCachedObject; - - indicesByUUID[uuid] = firstActiveIndex; - objects[firstActiveIndex] = object; - - // accounting is done, now do the same for all bindings - - for (let j = 0, m = nBindings; j !== m; ++j) { - - const bindingsForPath = bindings[j], - lastCached = bindingsForPath[firstActiveIndex]; - - let binding = bindingsForPath[index]; - - 
bindingsForPath[index] = lastCached; - - if (binding === undefined) { - - // since we do not bother to create new bindings - // for objects that are cached, the binding may - // or may not exist - - binding = new PropertyBinding(object, paths[j], parsedPaths[j]); - - } - - bindingsForPath[firstActiveIndex] = binding; - - } - - } else if (objects[index] !== knownObject) { - - console.error('THREE.AnimationObjectGroup: Different objects with the same UUID ' + - 'detected. Clean the caches or recreate your infrastructure when reloading scenes.'); - - } // else the object is already where we want it to be - - } // for arguments - - this.nCachedObjects_ = nCachedObjects; - - } - - remove() { - - const objects = this._objects, - indicesByUUID = this._indicesByUUID, - bindings = this._bindings, - nBindings = bindings.length; - - let nCachedObjects = this.nCachedObjects_; - - for (let i = 0, n = arguments.length; i !== n; ++i) { - - const object = arguments[i], - uuid = object.uuid, - index = indicesByUUID[uuid]; - - if (index !== undefined && index >= nCachedObjects) { - - // move existing object into the CACHED region - - const lastCachedIndex = nCachedObjects++, - firstActiveObject = objects[lastCachedIndex]; - - indicesByUUID[firstActiveObject.uuid] = index; - objects[index] = firstActiveObject; - - indicesByUUID[uuid] = lastCachedIndex; - objects[lastCachedIndex] = object; - - // accounting is done, now do the same for all bindings - - for (let j = 0, m = nBindings; j !== m; ++j) { - - const bindingsForPath = bindings[j], - firstActive = bindingsForPath[lastCachedIndex], - binding = bindingsForPath[index]; - - bindingsForPath[index] = firstActive; - bindingsForPath[lastCachedIndex] = binding; - - } - - } - - } // for arguments - - this.nCachedObjects_ = nCachedObjects; - - } - - // remove & forget - uncache() { - - const objects = this._objects, - indicesByUUID = this._indicesByUUID, - bindings = this._bindings, - nBindings = bindings.length; - - let nCachedObjects = this.nCachedObjects_, - nObjects = objects.length; - - for (let i = 0, n = arguments.length; i !== n; ++i) { - - const object = arguments[i], - uuid = object.uuid, - index = indicesByUUID[uuid]; - - if (index !== undefined) { - - delete indicesByUUID[uuid]; - - if (index < nCachedObjects) { - - // object is cached, shrink the CACHED region - - const firstActiveIndex = --nCachedObjects, - lastCachedObject = objects[firstActiveIndex], - lastIndex = --nObjects, - lastObject = objects[lastIndex]; - - // last cached object takes this object's place - indicesByUUID[lastCachedObject.uuid] = index; - objects[index] = lastCachedObject; - - // last object goes to the activated slot and pop - indicesByUUID[lastObject.uuid] = firstActiveIndex; - objects[firstActiveIndex] = lastObject; - objects.pop(); - - // accounting is done, now do the same for all bindings - - for (let j = 0, m = nBindings; j !== m; ++j) { - - const bindingsForPath = bindings[j], - lastCached = bindingsForPath[firstActiveIndex], - last = bindingsForPath[lastIndex]; - - bindingsForPath[index] = lastCached; - bindingsForPath[firstActiveIndex] = last; - bindingsForPath.pop(); - - } - - } else { - - // object is active, just swap with the last and pop - - const lastIndex = --nObjects, - lastObject = objects[lastIndex]; - - if (lastIndex > 0) { - - indicesByUUID[lastObject.uuid] = index; - - } - - objects[index] = lastObject; - objects.pop(); - - // accounting is done, now do the same for all bindings - - for (let j = 0, m = nBindings; j !== m; ++j) { - - const 
bindingsForPath = bindings[j]; - - bindingsForPath[index] = bindingsForPath[lastIndex]; - bindingsForPath.pop(); - - } - - } // cached or active - - } // if object is known - - } // for arguments - - this.nCachedObjects_ = nCachedObjects; - - } - - // Internal interface used by befriended PropertyBinding.Composite: - - subscribe_(path, parsedPath) { - - // returns an array of bindings for the given path that is changed - // according to the contained objects in the group - - const indicesByPath = this._bindingsIndicesByPath; - let index = indicesByPath[path]; - const bindings = this._bindings; - - if (index !== undefined) return bindings[index]; - - const paths = this._paths, - parsedPaths = this._parsedPaths, - objects = this._objects, - nObjects = objects.length, - nCachedObjects = this.nCachedObjects_, - bindingsForPath = new Array(nObjects); - - index = bindings.length; - - indicesByPath[path] = index; - - paths.push(path); - parsedPaths.push(parsedPath); - bindings.push(bindingsForPath); - - for (let i = nCachedObjects, n = objects.length; i !== n; ++i) { - - const object = objects[i]; - bindingsForPath[i] = new PropertyBinding(object, path, parsedPath); - - } - - return bindingsForPath; - - } - - unsubscribe_(path) { - - // tells the group to forget about a property path and no longer - // update the array previously obtained with 'subscribe_' - - const indicesByPath = this._bindingsIndicesByPath, - index = indicesByPath[path]; - - if (index !== undefined) { - - const paths = this._paths, - parsedPaths = this._parsedPaths, - bindings = this._bindings, - lastBindingsIndex = bindings.length - 1, - lastBindings = bindings[lastBindingsIndex], - lastBindingsPath = path[lastBindingsIndex]; - - indicesByPath[lastBindingsPath] = index; - - bindings[index] = lastBindings; - bindings.pop(); - - parsedPaths[index] = parsedPaths[lastBindingsIndex]; - parsedPaths.pop(); - - paths[index] = paths[lastBindingsIndex]; - paths.pop(); - - } - - } - -} - -class AnimationAction { - - constructor(mixer, clip, localRoot = null, blendMode = clip.blendMode) { - - this._mixer = mixer; - this._clip = clip; - this._localRoot = localRoot; - this.blendMode = blendMode; - - const tracks = clip.tracks, - nTracks = tracks.length, - interpolants = new Array(nTracks); - - const interpolantSettings = { - endingStart: ZeroCurvatureEnding, - endingEnd: ZeroCurvatureEnding - }; - - for (let i = 0; i !== nTracks; ++i) { - - const interpolant = tracks[i].createInterpolant(null); - interpolants[i] = interpolant; - interpolant.settings = interpolantSettings; - - } - - this._interpolantSettings = interpolantSettings; - - this._interpolants = interpolants; // bound by the mixer - - // inside: PropertyMixer (managed by the mixer) - this._propertyBindings = new Array(nTracks); - - this._cacheIndex = null; // for the memory manager - this._byClipCacheIndex = null; // for the memory manager - - this._timeScaleInterpolant = null; - this._weightInterpolant = null; - - this.loop = LoopRepeat; - this._loopCount = - 1; - - // global mixer time when the action is to be started - // it's set back to 'null' upon start of the action - this._startTime = null; - - // scaled local time of the action - // gets clamped or wrapped to 0..clip.duration according to loop - this.time = 0; - - this.timeScale = 1; - this._effectiveTimeScale = 1; - - this.weight = 1; - this._effectiveWeight = 1; - - this.repetitions = Infinity; // no. 
of repetitions when looping - - this.paused = false; // true -> zero effective time scale - this.enabled = true; // false -> zero effective weight - - this.clampWhenFinished = false;// keep feeding the last frame? - - this.zeroSlopeAtStart = true;// for smooth interpolation w/o separate - this.zeroSlopeAtEnd = true;// clips for start, loop and end - - } - - // State & Scheduling - - play() { - - this._mixer._activateAction(this); - - return this; - - } - - stop() { - - this._mixer._deactivateAction(this); - - return this.reset(); - - } - - reset() { - - this.paused = false; - this.enabled = true; - - this.time = 0; // restart clip - this._loopCount = - 1;// forget previous loops - this._startTime = null;// forget scheduling - - return this.stopFading().stopWarping(); - - } - - isRunning() { - - return this.enabled && !this.paused && this.timeScale !== 0 && - this._startTime === null && this._mixer._isActiveAction(this); - - } - - // return true when play has been called - isScheduled() { - - return this._mixer._isActiveAction(this); - - } - - startAt(time) { - - this._startTime = time; - - return this; - - } - - setLoop(mode, repetitions) { - - this.loop = mode; - this.repetitions = repetitions; - - return this; - - } - - // Weight - - // set the weight stopping any scheduled fading - // although .enabled = false yields an effective weight of zero, this - // method does *not* change .enabled, because it would be confusing - setEffectiveWeight(weight) { - - this.weight = weight; - - // note: same logic as when updated at runtime - this._effectiveWeight = this.enabled ? weight : 0; - - return this.stopFading(); - - } - - // return the weight considering fading and .enabled - getEffectiveWeight() { - - return this._effectiveWeight; - - } - - fadeIn(duration) { - - return this._scheduleFading(duration, 0, 1); - - } - - fadeOut(duration) { - - return this._scheduleFading(duration, 1, 0); - - } - - crossFadeFrom(fadeOutAction, duration, warp) { - - fadeOutAction.fadeOut(duration); - this.fadeIn(duration); - - if (warp) { - - const fadeInDuration = this._clip.duration, - fadeOutDuration = fadeOutAction._clip.duration, - - startEndRatio = fadeOutDuration / fadeInDuration, - endStartRatio = fadeInDuration / fadeOutDuration; - - fadeOutAction.warp(1.0, startEndRatio, duration); - this.warp(endStartRatio, 1.0, duration); - - } - - return this; - - } - - crossFadeTo(fadeInAction, duration, warp) { - - return fadeInAction.crossFadeFrom(this, duration, warp); - - } - - stopFading() { - - const weightInterpolant = this._weightInterpolant; - - if (weightInterpolant !== null) { - - this._weightInterpolant = null; - this._mixer._takeBackControlInterpolant(weightInterpolant); - - } - - return this; - - } - - // Time Scale Control - - // set the time scale stopping any scheduled warping - // although .paused = true yields an effective time scale of zero, this - // method does *not* change .paused, because it would be confusing - setEffectiveTimeScale(timeScale) { - - this.timeScale = timeScale; - this._effectiveTimeScale = this.paused ? 
0 : timeScale; - - return this.stopWarping(); - - } - - // return the time scale considering warping and .paused - getEffectiveTimeScale() { - - return this._effectiveTimeScale; - - } - - setDuration(duration) { - - this.timeScale = this._clip.duration / duration; - - return this.stopWarping(); - - } - - syncWith(action) { - - this.time = action.time; - this.timeScale = action.timeScale; - - return this.stopWarping(); - - } - - halt(duration) { - - return this.warp(this._effectiveTimeScale, 0, duration); - - } - - warp(startTimeScale, endTimeScale, duration) { - - const mixer = this._mixer, - now = mixer.time, - timeScale = this.timeScale; - - let interpolant = this._timeScaleInterpolant; - - if (interpolant === null) { - - interpolant = mixer._lendControlInterpolant(); - this._timeScaleInterpolant = interpolant; - - } - - const times = interpolant.parameterPositions, - values = interpolant.sampleValues; - - times[0] = now; - times[1] = now + duration; - - values[0] = startTimeScale / timeScale; - values[1] = endTimeScale / timeScale; - - return this; - - } - - stopWarping() { - - const timeScaleInterpolant = this._timeScaleInterpolant; - - if (timeScaleInterpolant !== null) { - - this._timeScaleInterpolant = null; - this._mixer._takeBackControlInterpolant(timeScaleInterpolant); - - } - - return this; - - } - - // Object Accessors - - getMixer() { - - return this._mixer; - - } - - getClip() { - - return this._clip; - - } - - getRoot() { - - return this._localRoot || this._mixer._root; - - } - - // Interna - - _update(time, deltaTime, timeDirection, accuIndex) { - - // called by the mixer - - if (!this.enabled) { - - // call ._updateWeight() to update ._effectiveWeight - - this._updateWeight(time); - return; - - } - - const startTime = this._startTime; - - if (startTime !== null) { - - // check for scheduled start of action - - const timeRunning = (time - startTime) * timeDirection; - if (timeRunning < 0 || timeDirection === 0) { - - deltaTime = 0; - - } else { - - - this._startTime = null; // unschedule - deltaTime = timeDirection * timeRunning; - - } - - } - - // apply time scale and advance time - - deltaTime *= this._updateTimeScale(time); - const clipTime = this._updateTime(deltaTime); - - // note: _updateTime may disable the action resulting in - // an effective weight of 0 - - const weight = this._updateWeight(time); - - if (weight > 0) { - - const interpolants = this._interpolants; - const propertyMixers = this._propertyBindings; - - switch (this.blendMode) { - - case AdditiveAnimationBlendMode: - - for (let j = 0, m = interpolants.length; j !== m; ++j) { - - interpolants[j].evaluate(clipTime); - propertyMixers[j].accumulateAdditive(weight); - - } - - break; - - case NormalAnimationBlendMode: - default: - - for (let j = 0, m = interpolants.length; j !== m; ++j) { - - interpolants[j].evaluate(clipTime); - propertyMixers[j].accumulate(accuIndex, weight); - - } - - } - - } - - } - - _updateWeight(time) { - - let weight = 0; - - if (this.enabled) { - - weight = this.weight; - const interpolant = this._weightInterpolant; - - if (interpolant !== null) { - - const interpolantValue = interpolant.evaluate(time)[0]; - - weight *= interpolantValue; - - if (time > interpolant.parameterPositions[1]) { - - this.stopFading(); - - if (interpolantValue === 0) { - - // faded out, disable - this.enabled = false; - - } - - } - - } - - } - - this._effectiveWeight = weight; - return weight; - - } - - _updateTimeScale(time) { - - let timeScale = 0; - - if (!this.paused) { - - timeScale = this.timeScale; - 
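// A warp scheduled via .warp() or .halt() is applied below: the control
// interpolant scales the user-set timeScale until the warp interval ends.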
- const interpolant = this._timeScaleInterpolant; - - if (interpolant !== null) { - - const interpolantValue = interpolant.evaluate(time)[0]; - - timeScale *= interpolantValue; - - if (time > interpolant.parameterPositions[1]) { - - this.stopWarping(); - - if (timeScale === 0) { - - // motion has halted, pause - this.paused = true; - - } else { - - // warp done - apply final time scale - this.timeScale = timeScale; - - } - - } - - } - - } - - this._effectiveTimeScale = timeScale; - return timeScale; - - } - - _updateTime(deltaTime) { - - const duration = this._clip.duration; - const loop = this.loop; - - let time = this.time + deltaTime; - let loopCount = this._loopCount; - - const pingPong = (loop === LoopPingPong); - - if (deltaTime === 0) { - - if (loopCount === - 1) return time; - - return (pingPong && (loopCount & 1) === 1) ? duration - time : time; - - } - - if (loop === LoopOnce) { - - if (loopCount === - 1) { - - // just started - - this._loopCount = 0; - this._setEndings(true, true, false); - - } - - handle_stop: { - - if (time >= duration) { - - time = duration; - - } else if (time < 0) { - - time = 0; - - } else { - - this.time = time; - - break handle_stop; - - } - - if (this.clampWhenFinished) this.paused = true; - else this.enabled = false; - - this.time = time; - - this._mixer.dispatchEvent({ - type: 'finished', action: this, - direction: deltaTime < 0 ? - 1 : 1 - }); - - } - - } else { // repetitive Repeat or PingPong - - if (loopCount === - 1) { - - // just started - - if (deltaTime >= 0) { - - loopCount = 0; - - this._setEndings(true, this.repetitions === 0, pingPong); - - } else { - - // when looping in reverse direction, the initial - // transition through zero counts as a repetition, - // so leave loopCount at -1 - - this._setEndings(this.repetitions === 0, true, pingPong); - - } - - } - - if (time >= duration || time < 0) { - - // wrap around - - const loopDelta = Math.floor(time / duration); // signed - time -= duration * loopDelta; - - loopCount += Math.abs(loopDelta); - - const pending = this.repetitions - loopCount; - - if (pending <= 0) { - - // have to stop (switch state, clamp time, fire event) - - if (this.clampWhenFinished) this.paused = true; - else this.enabled = false; - - time = deltaTime > 0 ? duration : 0; - - this.time = time; - - this._mixer.dispatchEvent({ - type: 'finished', action: this, - direction: deltaTime > 0 ? 1 : - 1 - }); - - } else { - - // keep running - - if (pending === 1) { - - // entering the last round - - const atStart = deltaTime < 0; - this._setEndings(atStart, !atStart, pingPong); - - } else { - - this._setEndings(false, false, pingPong); - - } - - this._loopCount = loopCount; - - this.time = time; - - this._mixer.dispatchEvent({ - type: 'loop', action: this, loopDelta: loopDelta - }); - - } - - } else { - - this.time = time; - - } - - if (pingPong && (loopCount & 1) === 1) { - - // invert time for the "pong round" - - return duration - time; - - } - - } - - return time; - - } - - _setEndings(atStart, atEnd, pingPong) { - - const settings = this._interpolantSettings; - - if (pingPong) { - - settings.endingStart = ZeroSlopeEnding; - settings.endingEnd = ZeroSlopeEnding; - - } else { - - // assuming for LoopOnce atStart == atEnd == true - - if (atStart) { - - settings.endingStart = this.zeroSlopeAtStart ? ZeroSlopeEnding : ZeroCurvatureEnding; - - } else { - - settings.endingStart = WrapAroundEnding; - - } - - if (atEnd) { - - settings.endingEnd = this.zeroSlopeAtEnd ? 
ZeroSlopeEnding : ZeroCurvatureEnding; - - } else { - - settings.endingEnd = WrapAroundEnding; - - } - - } - - } - - _scheduleFading(duration, weightNow, weightThen) { - - const mixer = this._mixer, now = mixer.time; - let interpolant = this._weightInterpolant; - - if (interpolant === null) { - - interpolant = mixer._lendControlInterpolant(); - this._weightInterpolant = interpolant; - - } - - const times = interpolant.parameterPositions, - values = interpolant.sampleValues; - - times[0] = now; - values[0] = weightNow; - times[1] = now + duration; - values[1] = weightThen; - - return this; - - } - -} - -const _controlInterpolantsResultBuffer = new Float32Array(1); - - -class AnimationMixer extends EventDispatcher { - - constructor(root) { - - super(); - - this._root = root; - this._initMemoryManager(); - this._accuIndex = 0; - this.time = 0; - this.timeScale = 1.0; - - } - - _bindAction(action, prototypeAction) { - - const root = action._localRoot || this._root, - tracks = action._clip.tracks, - nTracks = tracks.length, - bindings = action._propertyBindings, - interpolants = action._interpolants, - rootUuid = root.uuid, - bindingsByRoot = this._bindingsByRootAndName; - - let bindingsByName = bindingsByRoot[rootUuid]; - - if (bindingsByName === undefined) { - - bindingsByName = {}; - bindingsByRoot[rootUuid] = bindingsByName; - - } - - for (let i = 0; i !== nTracks; ++i) { - - const track = tracks[i], - trackName = track.name; - - let binding = bindingsByName[trackName]; - - if (binding !== undefined) { - - ++binding.referenceCount; - bindings[i] = binding; - - } else { - - binding = bindings[i]; - - if (binding !== undefined) { - - // existing binding, make sure the cache knows - - if (binding._cacheIndex === null) { - - ++binding.referenceCount; - this._addInactiveBinding(binding, rootUuid, trackName); - - } - - continue; - - } - - const path = prototypeAction && prototypeAction. 
- _propertyBindings[i].binding.parsedPath; - - binding = new PropertyMixer( - PropertyBinding.create(root, trackName, path), - track.ValueTypeName, track.getValueSize()); - - ++binding.referenceCount; - this._addInactiveBinding(binding, rootUuid, trackName); - - bindings[i] = binding; - - } - - interpolants[i].resultBuffer = binding.buffer; - - } - - } - - _activateAction(action) { - - if (!this._isActiveAction(action)) { - - if (action._cacheIndex === null) { - - // this action has been forgotten by the cache, but the user - // appears to be still using it -> rebind - - const rootUuid = (action._localRoot || this._root).uuid, - clipUuid = action._clip.uuid, - actionsForClip = this._actionsByClip[clipUuid]; - - this._bindAction(action, - actionsForClip && actionsForClip.knownActions[0]); - - this._addInactiveAction(action, clipUuid, rootUuid); - - } - - const bindings = action._propertyBindings; - - // increment reference counts / sort out state - for (let i = 0, n = bindings.length; i !== n; ++i) { - - const binding = bindings[i]; - - if (binding.useCount++ === 0) { - - this._lendBinding(binding); - binding.saveOriginalState(); - - } - - } - - this._lendAction(action); - - } - - } - - _deactivateAction(action) { - - if (this._isActiveAction(action)) { - - const bindings = action._propertyBindings; - - // decrement reference counts / sort out state - for (let i = 0, n = bindings.length; i !== n; ++i) { - - const binding = bindings[i]; - - if (--binding.useCount === 0) { - - binding.restoreOriginalState(); - this._takeBackBinding(binding); - - } - - } - - this._takeBackAction(action); - - } - - } - - // Memory manager - - _initMemoryManager() { - - this._actions = []; // 'nActiveActions' followed by inactive ones - this._nActiveActions = 0; - - this._actionsByClip = {}; - // inside: - // { - // knownActions: Array< AnimationAction > - used as prototypes - // actionByRoot: AnimationAction - lookup - // } - - - this._bindings = []; // 'nActiveBindings' followed by inactive ones - this._nActiveBindings = 0; - - this._bindingsByRootAndName = {}; // inside: Map< name, PropertyMixer > - - - this._controlInterpolants = []; // same game as above - this._nActiveControlInterpolants = 0; - - const scope = this; - - this.stats = { - - actions: { - get total() { - - return scope._actions.length; - - }, - get inUse() { - - return scope._nActiveActions; - - } - }, - bindings: { - get total() { - - return scope._bindings.length; - - }, - get inUse() { - - return scope._nActiveBindings; - - } - }, - controlInterpolants: { - get total() { - - return scope._controlInterpolants.length; - - }, - get inUse() { - - return scope._nActiveControlInterpolants; - - } - } - - }; - - } - - // Memory management for AnimationAction objects - - _isActiveAction(action) { - - const index = action._cacheIndex; - return index !== null && index < this._nActiveActions; - - } - - _addInactiveAction(action, clipUuid, rootUuid) { - - const actions = this._actions, - actionsByClip = this._actionsByClip; - - let actionsForClip = actionsByClip[clipUuid]; - - if (actionsForClip === undefined) { - - actionsForClip = { - - knownActions: [action], - actionByRoot: {} - - }; - - action._byClipCacheIndex = 0; - - actionsByClip[clipUuid] = actionsForClip; - - } else { - - const knownActions = actionsForClip.knownActions; - - action._byClipCacheIndex = knownActions.length; - knownActions.push(action); - - } - - action._cacheIndex = actions.length; - actions.push(action); - - actionsForClip.actionByRoot[rootUuid] = action; - - } - - 
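/*
 * Usage sketch (illustrative only, not part of the original module): the
 * action/binding bookkeeping above is what clipAction(), play() and
 * update() rely on. Assumes `mesh`, `clip`, `clock`, `renderer`, `scene`
 * and `camera` already exist.
 *
 * const mixer = new AnimationMixer(mesh);
 * const action = mixer.clipAction(clip); // cached per clip/root pair
 * action.setLoop(LoopRepeat, Infinity).play(); // moves the action into the active region
 *
 * function animate() {
 *   requestAnimationFrame(animate);
 *   mixer.update(clock.getDelta()); // advances active actions and applies bindings
 *   renderer.render(scene, camera);
 * }
 */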
_removeInactiveAction(action) { - - const actions = this._actions, - lastInactiveAction = actions[actions.length - 1], - cacheIndex = action._cacheIndex; - - lastInactiveAction._cacheIndex = cacheIndex; - actions[cacheIndex] = lastInactiveAction; - actions.pop(); - - action._cacheIndex = null; - - - const clipUuid = action._clip.uuid, - actionsByClip = this._actionsByClip, - actionsForClip = actionsByClip[clipUuid], - knownActionsForClip = actionsForClip.knownActions, - - lastKnownAction = - knownActionsForClip[knownActionsForClip.length - 1], - - byClipCacheIndex = action._byClipCacheIndex; - - lastKnownAction._byClipCacheIndex = byClipCacheIndex; - knownActionsForClip[byClipCacheIndex] = lastKnownAction; - knownActionsForClip.pop(); - - action._byClipCacheIndex = null; - - - const actionByRoot = actionsForClip.actionByRoot, - rootUuid = (action._localRoot || this._root).uuid; - - delete actionByRoot[rootUuid]; - - if (knownActionsForClip.length === 0) { - - delete actionsByClip[clipUuid]; - - } - - this._removeInactiveBindingsForAction(action); - - } - - _removeInactiveBindingsForAction(action) { - - const bindings = action._propertyBindings; - - for (let i = 0, n = bindings.length; i !== n; ++i) { - - const binding = bindings[i]; - - if (--binding.referenceCount === 0) { - - this._removeInactiveBinding(binding); - - } - - } - - } - - _lendAction(action) { - - // [ active actions | inactive actions ] - // [ active actions >| inactive actions ] - // s a - // <-swap-> - // a s - - const actions = this._actions, - prevIndex = action._cacheIndex, - - lastActiveIndex = this._nActiveActions++, - - firstInactiveAction = actions[lastActiveIndex]; - - action._cacheIndex = lastActiveIndex; - actions[lastActiveIndex] = action; - - firstInactiveAction._cacheIndex = prevIndex; - actions[prevIndex] = firstInactiveAction; - - } - - _takeBackAction(action) { - - // [ active actions | inactive actions ] - // [ active actions |< inactive actions ] - // a s - // <-swap-> - // s a - - const actions = this._actions, - prevIndex = action._cacheIndex, - - firstInactiveIndex = --this._nActiveActions, - - lastActiveAction = actions[firstInactiveIndex]; - - action._cacheIndex = firstInactiveIndex; - actions[firstInactiveIndex] = action; - - lastActiveAction._cacheIndex = prevIndex; - actions[prevIndex] = lastActiveAction; - - } - - // Memory management for PropertyMixer objects - - _addInactiveBinding(binding, rootUuid, trackName) { - - const bindingsByRoot = this._bindingsByRootAndName, - bindings = this._bindings; - - let bindingByName = bindingsByRoot[rootUuid]; - - if (bindingByName === undefined) { - - bindingByName = {}; - bindingsByRoot[rootUuid] = bindingByName; - - } - - bindingByName[trackName] = binding; - - binding._cacheIndex = bindings.length; - bindings.push(binding); - - } - - _removeInactiveBinding(binding) { - - const bindings = this._bindings, - propBinding = binding.binding, - rootUuid = propBinding.rootNode.uuid, - trackName = propBinding.path, - bindingsByRoot = this._bindingsByRootAndName, - bindingByName = bindingsByRoot[rootUuid], - - lastInactiveBinding = bindings[bindings.length - 1], - cacheIndex = binding._cacheIndex; - - lastInactiveBinding._cacheIndex = cacheIndex; - bindings[cacheIndex] = lastInactiveBinding; - bindings.pop(); - - delete bindingByName[trackName]; - - if (Object.keys(bindingByName).length === 0) { - - delete bindingsByRoot[rootUuid]; - - } - - } - - _lendBinding(binding) { - - const bindings = this._bindings, - prevIndex = binding._cacheIndex, - - lastActiveIndex = 
this._nActiveBindings++, - - firstInactiveBinding = bindings[lastActiveIndex]; - - binding._cacheIndex = lastActiveIndex; - bindings[lastActiveIndex] = binding; - - firstInactiveBinding._cacheIndex = prevIndex; - bindings[prevIndex] = firstInactiveBinding; - - } - - _takeBackBinding(binding) { - - const bindings = this._bindings, - prevIndex = binding._cacheIndex, - - firstInactiveIndex = --this._nActiveBindings, - - lastActiveBinding = bindings[firstInactiveIndex]; - - binding._cacheIndex = firstInactiveIndex; - bindings[firstInactiveIndex] = binding; - - lastActiveBinding._cacheIndex = prevIndex; - bindings[prevIndex] = lastActiveBinding; - - } - - - // Memory management of Interpolants for weight and time scale - - _lendControlInterpolant() { - - const interpolants = this._controlInterpolants, - lastActiveIndex = this._nActiveControlInterpolants++; - - let interpolant = interpolants[lastActiveIndex]; - - if (interpolant === undefined) { - - interpolant = new LinearInterpolant( - new Float32Array(2), new Float32Array(2), - 1, _controlInterpolantsResultBuffer); - - interpolant.__cacheIndex = lastActiveIndex; - interpolants[lastActiveIndex] = interpolant; - - } - - return interpolant; - - } - - _takeBackControlInterpolant(interpolant) { - - const interpolants = this._controlInterpolants, - prevIndex = interpolant.__cacheIndex, - - firstInactiveIndex = --this._nActiveControlInterpolants, - - lastActiveInterpolant = interpolants[firstInactiveIndex]; - - interpolant.__cacheIndex = firstInactiveIndex; - interpolants[firstInactiveIndex] = interpolant; - - lastActiveInterpolant.__cacheIndex = prevIndex; - interpolants[prevIndex] = lastActiveInterpolant; - - } - - // return an action for a clip optionally using a custom root target - // object (this method allocates a lot of dynamic memory in case a - // previously unknown clip/root combination is specified) - clipAction(clip, optionalRoot, blendMode) { - - const root = optionalRoot || this._root, - rootUuid = root.uuid; - - let clipObject = typeof clip === 'string' ? AnimationClip.findByName(root, clip) : clip; - - const clipUuid = clipObject !== null ? clipObject.uuid : clip; - - const actionsForClip = this._actionsByClip[clipUuid]; - let prototypeAction = null; - - if (blendMode === undefined) { - - if (clipObject !== null) { - - blendMode = clipObject.blendMode; - - } else { - - blendMode = NormalAnimationBlendMode; - - } - - } - - if (actionsForClip !== undefined) { - - const existingAction = actionsForClip.actionByRoot[rootUuid]; - - if (existingAction !== undefined && existingAction.blendMode === blendMode) { - - return existingAction; - - } - - // we know the clip, so we don't have to parse all - // the bindings again but can just copy - prototypeAction = actionsForClip.knownActions[0]; - - // also, take the clip from the prototype action - if (clipObject === null) - clipObject = prototypeAction._clip; - - } - - // clip must be known when specified via string - if (clipObject === null) return null; - - // allocate all resources required to run it - const newAction = new AnimationAction(this, clipObject, optionalRoot, blendMode); - - this._bindAction(newAction, prototypeAction); - - // and make the action known to the memory manager - this._addInactiveAction(newAction, clipUuid, rootUuid); - - return newAction; - - } - - // get an existing action - existingAction(clip, optionalRoot) { - - const root = optionalRoot || this._root, - rootUuid = root.uuid, - - clipObject = typeof clip === 'string' ? 
- AnimationClip.findByName(root, clip) : clip, - - clipUuid = clipObject ? clipObject.uuid : clip, - - actionsForClip = this._actionsByClip[clipUuid]; - - if (actionsForClip !== undefined) { - - return actionsForClip.actionByRoot[rootUuid] || null; - - } - - return null; - - } - - // deactivates all previously scheduled actions - stopAllAction() { - - const actions = this._actions, - nActions = this._nActiveActions; - - for (let i = nActions - 1; i >= 0; --i) { - - actions[i].stop(); - - } - - return this; - - } - - // advance the time and update apply the animation - update(deltaTime) { - - deltaTime *= this.timeScale; - - const actions = this._actions, - nActions = this._nActiveActions, - - time = this.time += deltaTime, - timeDirection = Math.sign(deltaTime), - - accuIndex = this._accuIndex ^= 1; - - // run active actions - - for (let i = 0; i !== nActions; ++i) { - - const action = actions[i]; - - action._update(time, deltaTime, timeDirection, accuIndex); - - } - - // update scene graph - - const bindings = this._bindings, - nBindings = this._nActiveBindings; - - for (let i = 0; i !== nBindings; ++i) { - - bindings[i].apply(accuIndex); - - } - - return this; - - } - - // Allows you to seek to a specific time in an animation. - setTime(timeInSeconds) { - - this.time = 0; // Zero out time attribute for AnimationMixer object; - for (let i = 0; i < this._actions.length; i++) { - - this._actions[i].time = 0; // Zero out time attribute for all associated AnimationAction objects. - - } - - return this.update(timeInSeconds); // Update used to set exact time. Returns "this" AnimationMixer object. - - } - - // return this mixer's root target object - getRoot() { - - return this._root; - - } - - // free all resources specific to a particular clip - uncacheClip(clip) { - - const actions = this._actions, - clipUuid = clip.uuid, - actionsByClip = this._actionsByClip, - actionsForClip = actionsByClip[clipUuid]; - - if (actionsForClip !== undefined) { - - // note: just calling _removeInactiveAction would mess up the - // iteration state and also require updating the state we can - // just throw away - - const actionsToRemove = actionsForClip.knownActions; - - for (let i = 0, n = actionsToRemove.length; i !== n; ++i) { - - const action = actionsToRemove[i]; - - this._deactivateAction(action); - - const cacheIndex = action._cacheIndex, - lastInactiveAction = actions[actions.length - 1]; - - action._cacheIndex = null; - action._byClipCacheIndex = null; - - lastInactiveAction._cacheIndex = cacheIndex; - actions[cacheIndex] = lastInactiveAction; - actions.pop(); - - this._removeInactiveBindingsForAction(action); - - } - - delete actionsByClip[clipUuid]; - - } - - } - - // free all resources specific to a particular root target object - uncacheRoot(root) { - - const rootUuid = root.uuid, - actionsByClip = this._actionsByClip; - - for (const clipUuid in actionsByClip) { - - const actionByRoot = actionsByClip[clipUuid].actionByRoot, - action = actionByRoot[rootUuid]; - - if (action !== undefined) { - - this._deactivateAction(action); - this._removeInactiveAction(action); - - } - - } - - const bindingsByRoot = this._bindingsByRootAndName, - bindingByName = bindingsByRoot[rootUuid]; - - if (bindingByName !== undefined) { - - for (const trackName in bindingByName) { - - const binding = bindingByName[trackName]; - binding.restoreOriginalState(); - this._removeInactiveBinding(binding); - - } - - } - - } - - // remove a targeted clip from the cache - uncacheAction(clip, optionalRoot) { - - const action = 
this.existingAction(clip, optionalRoot); - - if (action !== null) { - - this._deactivateAction(action); - this._removeInactiveAction(action); - - } - - } - -} - -class Uniform { - - constructor(value) { - - this.value = value; - - } - - clone() { - - return new Uniform(this.value.clone === undefined ? this.value : this.value.clone()); - - } - -} - -let id = 0; - -class UniformsGroup extends EventDispatcher { - - constructor() { - - super(); - - this.isUniformsGroup = true; - - Object.defineProperty(this, 'id', { value: id++ }); - - this.name = ''; - - this.usage = StaticDrawUsage; - this.uniforms = []; - - } - - add(uniform) { - - this.uniforms.push(uniform); - - return this; - - } - - remove(uniform) { - - const index = this.uniforms.indexOf(uniform); - - if (index !== - 1) this.uniforms.splice(index, 1); - - return this; - - } - - setName(name) { - - this.name = name; - - return this; - - } - - setUsage(value) { - - this.usage = value; - - return this; - - } - - dispose() { - - this.dispatchEvent({ type: 'dispose' }); - - return this; - - } - - copy(source) { - - this.name = source.name; - this.usage = source.usage; - - const uniformsSource = source.uniforms; - - this.uniforms.length = 0; - - for (let i = 0, l = uniformsSource.length; i < l; i++) { - - this.uniforms.push(uniformsSource[i].clone()); - - } - - return this; - - } - - clone() { - - return new this.constructor().copy(this); - - } - -} - -class InstancedInterleavedBuffer extends InterleavedBuffer { - - constructor(array, stride, meshPerAttribute = 1) { - - super(array, stride); - - this.isInstancedInterleavedBuffer = true; - - this.meshPerAttribute = meshPerAttribute; - - } - - copy(source) { - - super.copy(source); - - this.meshPerAttribute = source.meshPerAttribute; - - return this; - - } - - clone(data) { - - const ib = super.clone(data); - - ib.meshPerAttribute = this.meshPerAttribute; - - return ib; - - } - - toJSON(data) { - - const json = super.toJSON(data); - - json.isInstancedInterleavedBuffer = true; - json.meshPerAttribute = this.meshPerAttribute; - - return json; - - } - -} - -class GLBufferAttribute { - - constructor(buffer, type, itemSize, elementSize, count) { - - this.isGLBufferAttribute = true; - - this.name = ''; - - this.buffer = buffer; - this.type = type; - this.itemSize = itemSize; - this.elementSize = elementSize; - this.count = count; - - this.version = 0; - - } - - set needsUpdate(value) { - - if (value === true) this.version++; - - } - - setBuffer(buffer) { - - this.buffer = buffer; - - return this; - - } - - setType(type, elementSize) { - - this.type = type; - this.elementSize = elementSize; - - return this; - - } - - setItemSize(itemSize) { - - this.itemSize = itemSize; - - return this; - - } - - setCount(count) { - - this.count = count; - - return this; - - } - -} - -class Raycaster { - - constructor(origin, direction, near = 0, far = Infinity) { - - this.ray = new Ray(origin, direction); - // direction is assumed to be normalized (for accurate distance calculations) - - this.near = near; - this.far = far; - this.camera = null; - this.layers = new Layers(); - - this.params = { - Mesh: {}, - Line: { threshold: 1 }, - LOD: {}, - Points: { threshold: 1 }, - Sprite: {} - }; - - } - - set(origin, direction) { - - // direction is assumed to be normalized (for accurate distance calculations) - - this.ray.set(origin, direction); - - } - - setFromCamera(coords, camera) { - - if (camera.isPerspectiveCamera) { - - this.ray.origin.setFromMatrixPosition(camera.matrixWorld); - this.ray.direction.set(coords.x, 
coords.y, 0.5).unproject(camera).sub(this.ray.origin).normalize(); - this.camera = camera; - - } else if (camera.isOrthographicCamera) { - - this.ray.origin.set(coords.x, coords.y, (camera.near + camera.far) / (camera.near - camera.far)).unproject(camera); // set origin in plane of camera - this.ray.direction.set(0, 0, - 1).transformDirection(camera.matrixWorld); - this.camera = camera; - - } else { - - console.error('THREE.Raycaster: Unsupported camera type: ' + camera.type); - - } - - } - - intersectObject(object, recursive = true, intersects = []) { - - intersectObject(object, this, intersects, recursive); - - intersects.sort(ascSort); - - return intersects; - - } - - intersectObjects(objects, recursive = true, intersects = []) { - - for (let i = 0, l = objects.length; i < l; i++) { - - intersectObject(objects[i], this, intersects, recursive); - - } - - intersects.sort(ascSort); - - return intersects; - - } - -} - -function ascSort(a, b) { - - return a.distance - b.distance; - -} - -function intersectObject(object, raycaster, intersects, recursive) { - - if (object.layers.test(raycaster.layers)) { - - object.raycast(raycaster, intersects); - - } - - if (recursive === true) { - - const children = object.children; - - for (let i = 0, l = children.length; i < l; i++) { - - intersectObject(children[i], raycaster, intersects, true); - - } - - } - -} - -/** - * Ref: https://en.wikipedia.org/wiki/Spherical_coordinate_system - * - * The polar angle (phi) is measured from the positive y-axis. The positive y-axis is up. - * The azimuthal angle (theta) is measured from the positive z-axis. - */ - -class Spherical { - - constructor(radius = 1, phi = 0, theta = 0) { - - this.radius = radius; - this.phi = phi; // polar angle - this.theta = theta; // azimuthal angle - - return this; - - } - - set(radius, phi, theta) { - - this.radius = radius; - this.phi = phi; - this.theta = theta; - - return this; - - } - - copy(other) { - - this.radius = other.radius; - this.phi = other.phi; - this.theta = other.theta; - - return this; - - } - - // restrict phi to be between EPS and PI-EPS - makeSafe() { - - const EPS = 0.000001; - this.phi = Math.max(EPS, Math.min(Math.PI - EPS, this.phi)); - - return this; - - } - - setFromVector3(v) { - - return this.setFromCartesianCoords(v.x, v.y, v.z); - - } - - setFromCartesianCoords(x, y, z) { - - this.radius = Math.sqrt(x * x + y * y + z * z); - - if (this.radius === 0) { - - this.theta = 0; - this.phi = 0; - - } else { - - this.theta = Math.atan2(x, z); - this.phi = Math.acos(clamp(y / this.radius, - 1, 1)); - - } - - return this; - - } - - clone() { - - return new this.constructor().copy(this); - - } - -} - -/** - * Ref: https://en.wikipedia.org/wiki/Cylindrical_coordinate_system - */ - -class Cylindrical { - - constructor(radius = 1, theta = 0, y = 0) { - - this.radius = radius; // distance from the origin to a point in the x-z plane - this.theta = theta; // counterclockwise angle in the x-z plane measured in radians from the positive z-axis - this.y = y; // height above the x-z plane - - return this; - - } - - set(radius, theta, y) { - - this.radius = radius; - this.theta = theta; - this.y = y; - - return this; - - } - - copy(other) { - - this.radius = other.radius; - this.theta = other.theta; - this.y = other.y; - - return this; - - } - - setFromVector3(v) { - - return this.setFromCartesianCoords(v.x, v.y, v.z); - - } - - setFromCartesianCoords(x, y, z) { - - this.radius = Math.sqrt(x * x + z * z); - this.theta = Math.atan2(x, z); - this.y = y; - - return this; - - } 
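/*
 * Conversion sketch (illustrative only): Spherical and Cylindrical
 * round-trip with Vector3.setFromSpherical() / Vector3.setFromCylindrical().
 *
 * const p = new Vector3(1, 1, 1);
 * const s = new Spherical().setFromVector3(p);
 * // s.radius ~ 1.732, s.phi ~ 0.955 (measured down from +y), s.theta ~ 0.785 (from +z)
 * const c = new Cylindrical().setFromVector3(p);
 * // c.radius ~ 1.414 (distance in the x-z plane), c.theta ~ 0.785, c.y === 1
 * const back = new Vector3().setFromSpherical(s); // ~ (1, 1, 1) again
 */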
- - clone() { - - return new this.constructor().copy(this); - - } - -} - -const _vector$4 = /*@__PURE__*/ new Vector2(); - -class Box2 { - - constructor(min = new Vector2(+ Infinity, + Infinity), max = new Vector2(- Infinity, - Infinity)) { - - this.isBox2 = true; - - this.min = min; - this.max = max; - - } - - set(min, max) { - - this.min.copy(min); - this.max.copy(max); - - return this; - - } - - setFromPoints(points) { - - this.makeEmpty(); - - for (let i = 0, il = points.length; i < il; i++) { - - this.expandByPoint(points[i]); - - } - - return this; - - } - - setFromCenterAndSize(center, size) { - - const halfSize = _vector$4.copy(size).multiplyScalar(0.5); - this.min.copy(center).sub(halfSize); - this.max.copy(center).add(halfSize); - - return this; - - } - - clone() { - - return new this.constructor().copy(this); - - } - - copy(box) { - - this.min.copy(box.min); - this.max.copy(box.max); - - return this; - - } - - makeEmpty() { - - this.min.x = this.min.y = + Infinity; - this.max.x = this.max.y = - Infinity; - - return this; - - } - - isEmpty() { - - // this is a more robust check for empty than ( volume <= 0 ) because volume can get positive with two negative axes - - return (this.max.x < this.min.x) || (this.max.y < this.min.y); - - } - - getCenter(target) { - - return this.isEmpty() ? target.set(0, 0) : target.addVectors(this.min, this.max).multiplyScalar(0.5); - - } - - getSize(target) { - - return this.isEmpty() ? target.set(0, 0) : target.subVectors(this.max, this.min); - - } - - expandByPoint(point) { - - this.min.min(point); - this.max.max(point); - - return this; - - } - - expandByVector(vector) { - - this.min.sub(vector); - this.max.add(vector); - - return this; - - } - - expandByScalar(scalar) { - - this.min.addScalar(- scalar); - this.max.addScalar(scalar); - - return this; - - } - - containsPoint(point) { - - return point.x < this.min.x || point.x > this.max.x || - point.y < this.min.y || point.y > this.max.y ? false : true; - - } - - containsBox(box) { - - return this.min.x <= box.min.x && box.max.x <= this.max.x && - this.min.y <= box.min.y && box.max.y <= this.max.y; - - } - - getParameter(point, target) { - - // This can potentially have a divide by zero if the box - // has a size dimension of 0. - - return target.set( - (point.x - this.min.x) / (this.max.x - this.min.x), - (point.y - this.min.y) / (this.max.y - this.min.y) - ); - - } - - intersectsBox(box) { - - // using 4 splitting planes to rule out intersections - - return box.max.x < this.min.x || box.min.x > this.max.x || - box.max.y < this.min.y || box.min.y > this.max.y ? 
false : true; - - } - - clampPoint(point, target) { - - return target.copy(point).clamp(this.min, this.max); - - } - - distanceToPoint(point) { - - const clampedPoint = _vector$4.copy(point).clamp(this.min, this.max); - return clampedPoint.sub(point).length(); - - } - - intersect(box) { - - this.min.max(box.min); - this.max.min(box.max); - - return this; - - } - - union(box) { - - this.min.min(box.min); - this.max.max(box.max); - - return this; - - } - - translate(offset) { - - this.min.add(offset); - this.max.add(offset); - - return this; - - } - - equals(box) { - - return box.min.equals(this.min) && box.max.equals(this.max); - - } - -} - -const _startP = /*@__PURE__*/ new Vector3(); -const _startEnd = /*@__PURE__*/ new Vector3(); - -class Line3 { - - constructor(start = new Vector3(), end = new Vector3()) { - - this.start = start; - this.end = end; - - } - - set(start, end) { - - this.start.copy(start); - this.end.copy(end); - - return this; - - } - - copy(line) { - - this.start.copy(line.start); - this.end.copy(line.end); - - return this; - - } - - getCenter(target) { - - return target.addVectors(this.start, this.end).multiplyScalar(0.5); - - } - - delta(target) { - - return target.subVectors(this.end, this.start); - - } - - distanceSq() { - - return this.start.distanceToSquared(this.end); - - } - - distance() { - - return this.start.distanceTo(this.end); - - } - - at(t, target) { - - return this.delta(target).multiplyScalar(t).add(this.start); - - } - - closestPointToPointParameter(point, clampToLine) { - - _startP.subVectors(point, this.start); - _startEnd.subVectors(this.end, this.start); - - const startEnd2 = _startEnd.dot(_startEnd); - const startEnd_startP = _startEnd.dot(_startP); - - let t = startEnd_startP / startEnd2; - - if (clampToLine) { - - t = clamp(t, 0, 1); - - } - - return t; - - } - - closestPointToPoint(point, clampToLine, target) { - - const t = this.closestPointToPointParameter(point, clampToLine); - - return this.delta(target).multiplyScalar(t).add(this.start); - - } - - applyMatrix4(matrix) { - - this.start.applyMatrix4(matrix); - this.end.applyMatrix4(matrix); - - return this; - - } - - equals(line) { - - return line.start.equals(this.start) && line.end.equals(this.end); - - } - - clone() { - - return new this.constructor().copy(this); - - } - -} - -const _vector$3 = /*@__PURE__*/ new Vector3(); - -class SpotLightHelper extends Object3D { - - constructor(light, color) { - - super(); - - this.light = light; - - this.matrix = light.matrixWorld; - this.matrixAutoUpdate = false; - - this.color = color; - - this.type = 'SpotLightHelper'; - - const geometry = new BufferGeometry(); - - const positions = [ - 0, 0, 0, 0, 0, 1, - 0, 0, 0, 1, 0, 1, - 0, 0, 0, - 1, 0, 1, - 0, 0, 0, 0, 1, 1, - 0, 0, 0, 0, - 1, 1 - ]; - - for (let i = 0, j = 1, l = 32; i < l; i++, j++) { - - const p1 = (i / l) * Math.PI * 2; - const p2 = (j / l) * Math.PI * 2; - - positions.push( - Math.cos(p1), Math.sin(p1), 1, - Math.cos(p2), Math.sin(p2), 1 - ); - - } - - geometry.setAttribute('position', new Float32BufferAttribute(positions, 3)); - - const material = new LineBasicMaterial({ fog: false, toneMapped: false }); - - this.cone = new LineSegments(geometry, material); - this.add(this.cone); - - this.update(); - - } - - dispose() { - - this.cone.geometry.dispose(); - this.cone.material.dispose(); - - } - - update() { - - this.light.updateWorldMatrix(true, false); - this.light.target.updateWorldMatrix(true, false); - - const coneLength = this.light.distance ? 
this.light.distance : 1000; - const coneWidth = coneLength * Math.tan(this.light.angle); - - this.cone.scale.set(coneWidth, coneWidth, coneLength); - - _vector$3.setFromMatrixPosition(this.light.target.matrixWorld); - - this.cone.lookAt(_vector$3); - - if (this.color !== undefined) { - - this.cone.material.color.set(this.color); - - } else { - - this.cone.material.color.copy(this.light.color); - - } - - } - -} - -const _vector$2 = /*@__PURE__*/ new Vector3(); -const _boneMatrix = /*@__PURE__*/ new Matrix4(); -const _matrixWorldInv = /*@__PURE__*/ new Matrix4(); - - -class SkeletonHelper extends LineSegments { - - constructor(object) { - - const bones = getBoneList(object); - - const geometry = new BufferGeometry(); - - const vertices = []; - const colors = []; - - const color1 = new Color(0, 0, 1); - const color2 = new Color(0, 1, 0); - - for (let i = 0; i < bones.length; i++) { - - const bone = bones[i]; - - if (bone.parent && bone.parent.isBone) { - - vertices.push(0, 0, 0); - vertices.push(0, 0, 0); - colors.push(color1.r, color1.g, color1.b); - colors.push(color2.r, color2.g, color2.b); - - } - - } - - geometry.setAttribute('position', new Float32BufferAttribute(vertices, 3)); - geometry.setAttribute('color', new Float32BufferAttribute(colors, 3)); - - const material = new LineBasicMaterial({ vertexColors: true, depthTest: false, depthWrite: false, toneMapped: false, transparent: true }); - - super(geometry, material); - - this.isSkeletonHelper = true; - - this.type = 'SkeletonHelper'; - - this.root = object; - this.bones = bones; - - this.matrix = object.matrixWorld; - this.matrixAutoUpdate = false; - - } - - updateMatrixWorld(force) { - - const bones = this.bones; - - const geometry = this.geometry; - const position = geometry.getAttribute('position'); - - _matrixWorldInv.copy(this.root.matrixWorld).invert(); - - for (let i = 0, j = 0; i < bones.length; i++) { - - const bone = bones[i]; - - if (bone.parent && bone.parent.isBone) { - - _boneMatrix.multiplyMatrices(_matrixWorldInv, bone.matrixWorld); - _vector$2.setFromMatrixPosition(_boneMatrix); - position.setXYZ(j, _vector$2.x, _vector$2.y, _vector$2.z); - - _boneMatrix.multiplyMatrices(_matrixWorldInv, bone.parent.matrixWorld); - _vector$2.setFromMatrixPosition(_boneMatrix); - position.setXYZ(j + 1, _vector$2.x, _vector$2.y, _vector$2.z); - - j += 2; - - } - - } - - geometry.getAttribute('position').needsUpdate = true; - - super.updateMatrixWorld(force); - - } - - dispose() { - - this.geometry.dispose(); - this.material.dispose(); - - } - -} - - -function getBoneList(object) { - - const boneList = []; - - if (object.isBone === true) { - - boneList.push(object); - - } - - for (let i = 0; i < object.children.length; i++) { - - boneList.push.apply(boneList, getBoneList(object.children[i])); - - } - - return boneList; - -} - -class PointLightHelper extends Mesh { - - constructor(light, sphereSize, color) { - - const geometry = new SphereGeometry(sphereSize, 4, 2); - const material = new MeshBasicMaterial({ wireframe: true, fog: false, toneMapped: false }); - - super(geometry, material); - - this.light = light; - - this.color = color; - - this.type = 'PointLightHelper'; - - this.matrix = this.light.matrixWorld; - this.matrixAutoUpdate = false; - - this.update(); - - - /* - // TODO: delete this comment? 
- const distanceGeometry = new THREE.IcosahedronGeometry( 1, 2 ); - const distanceMaterial = new THREE.MeshBasicMaterial( { color: hexColor, fog: false, wireframe: true, opacity: 0.1, transparent: true } ); - - this.lightSphere = new THREE.Mesh( bulbGeometry, bulbMaterial ); - this.lightDistance = new THREE.Mesh( distanceGeometry, distanceMaterial ); - - const d = light.distance; - - if ( d === 0.0 ) { - - this.lightDistance.visible = false; - - } else { - - this.lightDistance.scale.set( d, d, d ); - - } - - this.add( this.lightDistance ); - */ - - } - - dispose() { - - this.geometry.dispose(); - this.material.dispose(); - - } - - update() { - - this.light.updateWorldMatrix(true, false); - - if (this.color !== undefined) { - - this.material.color.set(this.color); - - } else { - - this.material.color.copy(this.light.color); - - } - - /* - const d = this.light.distance; - - if ( d === 0.0 ) { - - this.lightDistance.visible = false; - - } else { - - this.lightDistance.visible = true; - this.lightDistance.scale.set( d, d, d ); - - } - */ - - } - -} - -const _vector$1 = /*@__PURE__*/ new Vector3(); -const _color1 = /*@__PURE__*/ new Color(); -const _color2 = /*@__PURE__*/ new Color(); - -class HemisphereLightHelper extends Object3D { - - constructor(light, size, color) { - - super(); - - this.light = light; - - this.matrix = light.matrixWorld; - this.matrixAutoUpdate = false; - - this.color = color; - - this.type = 'HemisphereLightHelper'; - - const geometry = new OctahedronGeometry(size); - geometry.rotateY(Math.PI * 0.5); - - this.material = new MeshBasicMaterial({ wireframe: true, fog: false, toneMapped: false }); - if (this.color === undefined) this.material.vertexColors = true; - - const position = geometry.getAttribute('position'); - const colors = new Float32Array(position.count * 3); - - geometry.setAttribute('color', new BufferAttribute(colors, 3)); - - this.add(new Mesh(geometry, this.material)); - - this.update(); - - } - - dispose() { - - this.children[0].geometry.dispose(); - this.children[0].material.dispose(); - - } - - update() { - - const mesh = this.children[0]; - - if (this.color !== undefined) { - - this.material.color.set(this.color); - - } else { - - const colors = mesh.geometry.getAttribute('color'); - - _color1.copy(this.light.color); - _color2.copy(this.light.groundColor); - - for (let i = 0, l = colors.count; i < l; i++) { - - const color = (i < (l / 2)) ? _color1 : _color2; - - colors.setXYZ(i, color.r, color.g, color.b); - - } - - colors.needsUpdate = true; - - } - - this.light.updateWorldMatrix(true, false); - - mesh.lookAt(_vector$1.setFromMatrixPosition(this.light.matrixWorld).negate()); - - } - -} - -class GridHelper extends LineSegments { - - constructor(size = 10, divisions = 10, color1 = 0x444444, color2 = 0x888888) { - - color1 = new Color(color1); - color2 = new Color(color2); - - const center = divisions / 2; - const step = size / divisions; - const halfSize = size / 2; - - const vertices = [], colors = []; - - for (let i = 0, j = 0, k = - halfSize; i <= divisions; i++, k += step) { - - vertices.push(- halfSize, 0, k, halfSize, 0, k); - vertices.push(k, 0, - halfSize, k, 0, halfSize); - - const color = i === center ? 
color1 : color2; - - color.toArray(colors, j); j += 3; - color.toArray(colors, j); j += 3; - color.toArray(colors, j); j += 3; - color.toArray(colors, j); j += 3; - - } - - const geometry = new BufferGeometry(); - geometry.setAttribute('position', new Float32BufferAttribute(vertices, 3)); - geometry.setAttribute('color', new Float32BufferAttribute(colors, 3)); - - const material = new LineBasicMaterial({ vertexColors: true, toneMapped: false }); - - super(geometry, material); - - this.type = 'GridHelper'; - - } - - dispose() { - - this.geometry.dispose(); - this.material.dispose(); - - } - -} - -class PolarGridHelper extends LineSegments { - - constructor(radius = 10, sectors = 16, rings = 8, divisions = 64, color1 = 0x444444, color2 = 0x888888) { - - color1 = new Color(color1); - color2 = new Color(color2); - - const vertices = []; - const colors = []; - - // create the sectors - - if (sectors > 1) { - - for (let i = 0; i < sectors; i++) { - - const v = (i / sectors) * (Math.PI * 2); - - const x = Math.sin(v) * radius; - const z = Math.cos(v) * radius; - - vertices.push(0, 0, 0); - vertices.push(x, 0, z); - - const color = (i & 1) ? color1 : color2; - - colors.push(color.r, color.g, color.b); - colors.push(color.r, color.g, color.b); - - } - - } - - // create the rings - - for (let i = 0; i < rings; i++) { - - const color = (i & 1) ? color1 : color2; - - const r = radius - (radius / rings * i); - - for (let j = 0; j < divisions; j++) { - - // first vertex - - let v = (j / divisions) * (Math.PI * 2); - - let x = Math.sin(v) * r; - let z = Math.cos(v) * r; - - vertices.push(x, 0, z); - colors.push(color.r, color.g, color.b); - - // second vertex - - v = ((j + 1) / divisions) * (Math.PI * 2); - - x = Math.sin(v) * r; - z = Math.cos(v) * r; - - vertices.push(x, 0, z); - colors.push(color.r, color.g, color.b); - - } - - } - - const geometry = new BufferGeometry(); - geometry.setAttribute('position', new Float32BufferAttribute(vertices, 3)); - geometry.setAttribute('color', new Float32BufferAttribute(colors, 3)); - - const material = new LineBasicMaterial({ vertexColors: true, toneMapped: false }); - - super(geometry, material); - - this.type = 'PolarGridHelper'; - - } - - dispose() { - - this.geometry.dispose(); - this.material.dispose(); - - } - -} - -const _v1 = /*@__PURE__*/ new Vector3(); -const _v2 = /*@__PURE__*/ new Vector3(); -const _v3 = /*@__PURE__*/ new Vector3(); - -class DirectionalLightHelper extends Object3D { - - constructor(light, size, color) { - - super(); - - this.light = light; - - this.matrix = light.matrixWorld; - this.matrixAutoUpdate = false; - - this.color = color; - - this.type = 'DirectionalLightHelper'; - - if (size === undefined) size = 1; - - let geometry = new BufferGeometry(); - geometry.setAttribute('position', new Float32BufferAttribute([ - - size, size, 0, - size, size, 0, - size, - size, 0, - - size, - size, 0, - - size, size, 0 - ], 3)); - - const material = new LineBasicMaterial({ fog: false, toneMapped: false }); - - this.lightPlane = new Line(geometry, material); - this.add(this.lightPlane); - - geometry = new BufferGeometry(); - geometry.setAttribute('position', new Float32BufferAttribute([0, 0, 0, 0, 0, 1], 3)); - - this.targetLine = new Line(geometry, material); - this.add(this.targetLine); - - this.update(); - - } - - dispose() { - - this.lightPlane.geometry.dispose(); - this.lightPlane.material.dispose(); - this.targetLine.geometry.dispose(); - this.targetLine.material.dispose(); - - } - - update() { - - this.light.updateWorldMatrix(true, false); - 
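// The target's world matrix is refreshed as well; both transforms feed the
// lookAt() and scale.z computations below.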
this.light.target.updateWorldMatrix(true, false); - - _v1.setFromMatrixPosition(this.light.matrixWorld); - _v2.setFromMatrixPosition(this.light.target.matrixWorld); - _v3.subVectors(_v2, _v1); - - this.lightPlane.lookAt(_v2); - - if (this.color !== undefined) { - - this.lightPlane.material.color.set(this.color); - this.targetLine.material.color.set(this.color); - - } else { - - this.lightPlane.material.color.copy(this.light.color); - this.targetLine.material.color.copy(this.light.color); - - } - - this.targetLine.lookAt(_v2); - this.targetLine.scale.z = _v3.length(); - - } - -} - -const _vector = /*@__PURE__*/ new Vector3(); -const _camera = /*@__PURE__*/ new Camera(); - -/** - * - shows frustum, line of sight and up of the camera - * - suitable for fast updates - * - based on frustum visualization in lightgl.js shadowmap example - * https://github.com/evanw/lightgl.js/blob/master/tests/shadowmap.html - */ - -class CameraHelper extends LineSegments { - - constructor(camera) { - - const geometry = new BufferGeometry(); - const material = new LineBasicMaterial({ color: 0xffffff, vertexColors: true, toneMapped: false }); - - const vertices = []; - const colors = []; - - const pointMap = {}; - - // near - - addLine('n1', 'n2'); - addLine('n2', 'n4'); - addLine('n4', 'n3'); - addLine('n3', 'n1'); - - // far - - addLine('f1', 'f2'); - addLine('f2', 'f4'); - addLine('f4', 'f3'); - addLine('f3', 'f1'); - - // sides - - addLine('n1', 'f1'); - addLine('n2', 'f2'); - addLine('n3', 'f3'); - addLine('n4', 'f4'); - - // cone - - addLine('p', 'n1'); - addLine('p', 'n2'); - addLine('p', 'n3'); - addLine('p', 'n4'); - - // up - - addLine('u1', 'u2'); - addLine('u2', 'u3'); - addLine('u3', 'u1'); - - // target - - addLine('c', 't'); - addLine('p', 'c'); - - // cross - - addLine('cn1', 'cn2'); - addLine('cn3', 'cn4'); - - addLine('cf1', 'cf2'); - addLine('cf3', 'cf4'); - - function addLine(a, b) { - - addPoint(a); - addPoint(b); - - } - - function addPoint(id) { - - vertices.push(0, 0, 0); - colors.push(0, 0, 0); - - if (pointMap[id] === undefined) { - - pointMap[id] = []; - - } - - pointMap[id].push((vertices.length / 3) - 1); - - } - - geometry.setAttribute('position', new Float32BufferAttribute(vertices, 3)); - geometry.setAttribute('color', new Float32BufferAttribute(colors, 3)); - - super(geometry, material); - - this.type = 'CameraHelper'; - - this.camera = camera; - if (this.camera.updateProjectionMatrix) this.camera.updateProjectionMatrix(); - - this.matrix = camera.matrixWorld; - this.matrixAutoUpdate = false; - - this.pointMap = pointMap; - - this.update(); - - // colors - - const colorFrustum = new Color(0xffaa00); - const colorCone = new Color(0xff0000); - const colorUp = new Color(0x00aaff); - const colorTarget = new Color(0xffffff); - const colorCross = new Color(0x333333); - - this.setColors(colorFrustum, colorCone, colorUp, colorTarget, colorCross); - - } - - setColors(frustum, cone, up, target, cross) { - - const geometry = this.geometry; - - const colorAttribute = geometry.getAttribute('color'); - - // near - - colorAttribute.setXYZ(0, frustum.r, frustum.g, frustum.b); colorAttribute.setXYZ(1, frustum.r, frustum.g, frustum.b); // n1, n2 - colorAttribute.setXYZ(2, frustum.r, frustum.g, frustum.b); colorAttribute.setXYZ(3, frustum.r, frustum.g, frustum.b); // n2, n4 - colorAttribute.setXYZ(4, frustum.r, frustum.g, frustum.b); colorAttribute.setXYZ(5, frustum.r, frustum.g, frustum.b); // n4, n3 - colorAttribute.setXYZ(6, frustum.r, frustum.g, frustum.b); colorAttribute.setXYZ(7, frustum.r, 
frustum.g, frustum.b); // n3, n1 - - // far - - colorAttribute.setXYZ(8, frustum.r, frustum.g, frustum.b); colorAttribute.setXYZ(9, frustum.r, frustum.g, frustum.b); // f1, f2 - colorAttribute.setXYZ(10, frustum.r, frustum.g, frustum.b); colorAttribute.setXYZ(11, frustum.r, frustum.g, frustum.b); // f2, f4 - colorAttribute.setXYZ(12, frustum.r, frustum.g, frustum.b); colorAttribute.setXYZ(13, frustum.r, frustum.g, frustum.b); // f4, f3 - colorAttribute.setXYZ(14, frustum.r, frustum.g, frustum.b); colorAttribute.setXYZ(15, frustum.r, frustum.g, frustum.b); // f3, f1 - - // sides - - colorAttribute.setXYZ(16, frustum.r, frustum.g, frustum.b); colorAttribute.setXYZ(17, frustum.r, frustum.g, frustum.b); // n1, f1 - colorAttribute.setXYZ(18, frustum.r, frustum.g, frustum.b); colorAttribute.setXYZ(19, frustum.r, frustum.g, frustum.b); // n2, f2 - colorAttribute.setXYZ(20, frustum.r, frustum.g, frustum.b); colorAttribute.setXYZ(21, frustum.r, frustum.g, frustum.b); // n3, f3 - colorAttribute.setXYZ(22, frustum.r, frustum.g, frustum.b); colorAttribute.setXYZ(23, frustum.r, frustum.g, frustum.b); // n4, f4 - - // cone - - colorAttribute.setXYZ(24, cone.r, cone.g, cone.b); colorAttribute.setXYZ(25, cone.r, cone.g, cone.b); // p, n1 - colorAttribute.setXYZ(26, cone.r, cone.g, cone.b); colorAttribute.setXYZ(27, cone.r, cone.g, cone.b); // p, n2 - colorAttribute.setXYZ(28, cone.r, cone.g, cone.b); colorAttribute.setXYZ(29, cone.r, cone.g, cone.b); // p, n3 - colorAttribute.setXYZ(30, cone.r, cone.g, cone.b); colorAttribute.setXYZ(31, cone.r, cone.g, cone.b); // p, n4 - - // up - - colorAttribute.setXYZ(32, up.r, up.g, up.b); colorAttribute.setXYZ(33, up.r, up.g, up.b); // u1, u2 - colorAttribute.setXYZ(34, up.r, up.g, up.b); colorAttribute.setXYZ(35, up.r, up.g, up.b); // u2, u3 - colorAttribute.setXYZ(36, up.r, up.g, up.b); colorAttribute.setXYZ(37, up.r, up.g, up.b); // u3, u1 - - // target - - colorAttribute.setXYZ(38, target.r, target.g, target.b); colorAttribute.setXYZ(39, target.r, target.g, target.b); // c, t - colorAttribute.setXYZ(40, cross.r, cross.g, cross.b); colorAttribute.setXYZ(41, cross.r, cross.g, cross.b); // p, c - - // cross - - colorAttribute.setXYZ(42, cross.r, cross.g, cross.b); colorAttribute.setXYZ(43, cross.r, cross.g, cross.b); // cn1, cn2 - colorAttribute.setXYZ(44, cross.r, cross.g, cross.b); colorAttribute.setXYZ(45, cross.r, cross.g, cross.b); // cn3, cn4 - - colorAttribute.setXYZ(46, cross.r, cross.g, cross.b); colorAttribute.setXYZ(47, cross.r, cross.g, cross.b); // cf1, cf2 - colorAttribute.setXYZ(48, cross.r, cross.g, cross.b); colorAttribute.setXYZ(49, cross.r, cross.g, cross.b); // cf3, cf4 - - colorAttribute.needsUpdate = true; - - } - - update() { - - const geometry = this.geometry; - const pointMap = this.pointMap; - - const w = 1, h = 1; - - // we need just camera projection matrix inverse - // world matrix must be identity - - _camera.projectionMatrixInverse.copy(this.camera.projectionMatrixInverse); - - // center / target - - setPoint('c', pointMap, geometry, _camera, 0, 0, - 1); - setPoint('t', pointMap, geometry, _camera, 0, 0, 1); - - // near - - setPoint('n1', pointMap, geometry, _camera, - w, - h, - 1); - setPoint('n2', pointMap, geometry, _camera, w, - h, - 1); - setPoint('n3', pointMap, geometry, _camera, - w, h, - 1); - setPoint('n4', pointMap, geometry, _camera, w, h, - 1); - - // far - - setPoint('f1', pointMap, geometry, _camera, - w, - h, 1); - setPoint('f2', pointMap, geometry, _camera, w, - h, 1); - setPoint('f3', pointMap, geometry, _camera, - 
w, h, 1); - setPoint('f4', pointMap, geometry, _camera, w, h, 1); - - // up - - setPoint('u1', pointMap, geometry, _camera, w * 0.7, h * 1.1, - 1); - setPoint('u2', pointMap, geometry, _camera, - w * 0.7, h * 1.1, - 1); - setPoint('u3', pointMap, geometry, _camera, 0, h * 2, - 1); - - // cross - - setPoint('cf1', pointMap, geometry, _camera, - w, 0, 1); - setPoint('cf2', pointMap, geometry, _camera, w, 0, 1); - setPoint('cf3', pointMap, geometry, _camera, 0, - h, 1); - setPoint('cf4', pointMap, geometry, _camera, 0, h, 1); - - setPoint('cn1', pointMap, geometry, _camera, - w, 0, - 1); - setPoint('cn2', pointMap, geometry, _camera, w, 0, - 1); - setPoint('cn3', pointMap, geometry, _camera, 0, - h, - 1); - setPoint('cn4', pointMap, geometry, _camera, 0, h, - 1); - - geometry.getAttribute('position').needsUpdate = true; - - } - - dispose() { - - this.geometry.dispose(); - this.material.dispose(); - - } - -} - - -function setPoint(point, pointMap, geometry, camera, x, y, z) { - - _vector.set(x, y, z).unproject(camera); - - const points = pointMap[point]; - - if (points !== undefined) { - - const position = geometry.getAttribute('position'); - - for (let i = 0, l = points.length; i < l; i++) { - - position.setXYZ(points[i], _vector.x, _vector.y, _vector.z); - - } - - } - -} - -const _box = /*@__PURE__*/ new Box3(); - -class BoxHelper extends LineSegments { - - constructor(object, color = 0xffff00) { - - const indices = new Uint16Array([0, 1, 1, 2, 2, 3, 3, 0, 4, 5, 5, 6, 6, 7, 7, 4, 0, 4, 1, 5, 2, 6, 3, 7]); - const positions = new Float32Array(8 * 3); - - const geometry = new BufferGeometry(); - geometry.setIndex(new BufferAttribute(indices, 1)); - geometry.setAttribute('position', new BufferAttribute(positions, 3)); - - super(geometry, new LineBasicMaterial({ color: color, toneMapped: false })); - - this.object = object; - this.type = 'BoxHelper'; - - this.matrixAutoUpdate = false; - - this.update(); - - } - - update(object) { - - if (object !== undefined) { - - console.warn('THREE.BoxHelper: .update() has no longer arguments.'); - - } - - if (this.object !== undefined) { - - _box.setFromObject(this.object); - - } - - if (_box.isEmpty()) return; - - const min = _box.min; - const max = _box.max; - - /* - 5____4 - 1/___0/| - | 6__|_7 - 2/___3/ - - 0: max.x, max.y, max.z - 1: min.x, max.y, max.z - 2: min.x, min.y, max.z - 3: max.x, min.y, max.z - 4: max.x, max.y, min.z - 5: min.x, max.y, min.z - 6: min.x, min.y, min.z - 7: max.x, min.y, min.z - */ - - const position = this.geometry.attributes.position; - const array = position.array; - - array[0] = max.x; array[1] = max.y; array[2] = max.z; - array[3] = min.x; array[4] = max.y; array[5] = max.z; - array[6] = min.x; array[7] = min.y; array[8] = max.z; - array[9] = max.x; array[10] = min.y; array[11] = max.z; - array[12] = max.x; array[13] = max.y; array[14] = min.z; - array[15] = min.x; array[16] = max.y; array[17] = min.z; - array[18] = min.x; array[19] = min.y; array[20] = min.z; - array[21] = max.x; array[22] = min.y; array[23] = min.z; - - position.needsUpdate = true; - - this.geometry.computeBoundingSphere(); - - } - - setFromObject(object) { - - this.object = object; - this.update(); - - return this; - - } - - copy(source, recursive) { - - super.copy(source, recursive); - - this.object = source.object; - - return this; - - } - - dispose() { - - this.geometry.dispose(); - this.material.dispose(); - - } - -} - -class Box3Helper extends LineSegments { - - constructor(box, color = 0xffff00) { - - const indices = new Uint16Array([0, 1, 1, 2, 2, 
3, 3, 0, 4, 5, 5, 6, 6, 7, 7, 4, 0, 4, 1, 5, 2, 6, 3, 7]); - - const positions = [1, 1, 1, - 1, 1, 1, - 1, - 1, 1, 1, - 1, 1, 1, 1, - 1, - 1, 1, - 1, - 1, - 1, - 1, 1, - 1, - 1]; - - const geometry = new BufferGeometry(); - - geometry.setIndex(new BufferAttribute(indices, 1)); - - geometry.setAttribute('position', new Float32BufferAttribute(positions, 3)); - - super(geometry, new LineBasicMaterial({ color: color, toneMapped: false })); - - this.box = box; - - this.type = 'Box3Helper'; - - this.geometry.computeBoundingSphere(); - - } - - updateMatrixWorld(force) { - - const box = this.box; - - if (box.isEmpty()) return; - - box.getCenter(this.position); - - box.getSize(this.scale); - - this.scale.multiplyScalar(0.5); - - super.updateMatrixWorld(force); - - } - - dispose() { - - this.geometry.dispose(); - this.material.dispose(); - - } - -} - -class PlaneHelper extends Line { - - constructor(plane, size = 1, hex = 0xffff00) { - - const color = hex; - - const positions = [1, - 1, 0, - 1, 1, 0, - 1, - 1, 0, 1, 1, 0, - 1, 1, 0, - 1, - 1, 0, 1, - 1, 0, 1, 1, 0]; - - const geometry = new BufferGeometry(); - geometry.setAttribute('position', new Float32BufferAttribute(positions, 3)); - geometry.computeBoundingSphere(); - - super(geometry, new LineBasicMaterial({ color: color, toneMapped: false })); - - this.type = 'PlaneHelper'; - - this.plane = plane; - - this.size = size; - - const positions2 = [1, 1, 0, - 1, 1, 0, - 1, - 1, 0, 1, 1, 0, - 1, - 1, 0, 1, - 1, 0]; - - const geometry2 = new BufferGeometry(); - geometry2.setAttribute('position', new Float32BufferAttribute(positions2, 3)); - geometry2.computeBoundingSphere(); - - this.add(new Mesh(geometry2, new MeshBasicMaterial({ color: color, opacity: 0.2, transparent: true, depthWrite: false, toneMapped: false }))); - - } - - updateMatrixWorld(force) { - - this.position.set(0, 0, 0); - - this.scale.set(0.5 * this.size, 0.5 * this.size, 1); - - this.lookAt(this.plane.normal); - - this.translateZ(- this.plane.constant); - - super.updateMatrixWorld(force); - - } - - dispose() { - - this.geometry.dispose(); - this.material.dispose(); - this.children[0].geometry.dispose(); - this.children[0].material.dispose(); - - } - -} - -const _axis = /*@__PURE__*/ new Vector3(); -let _lineGeometry, _coneGeometry; - -class ArrowHelper extends Object3D { - - // dir is assumed to be normalized - - constructor(dir = new Vector3(0, 0, 1), origin = new Vector3(0, 0, 0), length = 1, color = 0xffff00, headLength = length * 0.2, headWidth = headLength * 0.2) { - - super(); - - this.type = 'ArrowHelper'; - - if (_lineGeometry === undefined) { - - _lineGeometry = new BufferGeometry(); - _lineGeometry.setAttribute('position', new Float32BufferAttribute([0, 0, 0, 0, 1, 0], 3)); - - _coneGeometry = new CylinderGeometry(0, 0.5, 1, 5, 1); - _coneGeometry.translate(0, - 0.5, 0); - - } - - this.position.copy(origin); - - this.line = new Line(_lineGeometry, new LineBasicMaterial({ color: color, toneMapped: false })); - this.line.matrixAutoUpdate = false; - this.add(this.line); - - this.cone = new Mesh(_coneGeometry, new MeshBasicMaterial({ color: color, toneMapped: false })); - this.cone.matrixAutoUpdate = false; - this.add(this.cone); - - this.setDirection(dir); - this.setLength(length, headLength, headWidth); - - } - - setDirection(dir) { - - // dir is assumed to be normalized - - if (dir.y > 0.99999) { - - this.quaternion.set(0, 0, 0, 1); - - } else if (dir.y < - 0.99999) { - - this.quaternion.set(1, 0, 0, 0); - - } else { - - _axis.set(dir.z, 0, - dir.x).normalize(); - - const 
radians = Math.acos(dir.y); - - this.quaternion.setFromAxisAngle(_axis, radians); - - } - - } - - setLength(length, headLength = length * 0.2, headWidth = headLength * 0.2) { - - this.line.scale.set(1, Math.max(0.0001, length - headLength), 1); // see #17458 - this.line.updateMatrix(); - - this.cone.scale.set(headWidth, headLength, headWidth); - this.cone.position.y = length; - this.cone.updateMatrix(); - - } - - setColor(color) { - - this.line.material.color.set(color); - this.cone.material.color.set(color); - - } - - copy(source) { - - super.copy(source, false); - - this.line.copy(source.line); - this.cone.copy(source.cone); - - return this; - - } - - dispose() { - - this.line.geometry.dispose(); - this.line.material.dispose(); - this.cone.geometry.dispose(); - this.cone.material.dispose(); - - } - -} - -class AxesHelper extends LineSegments { - - constructor(size = 1) { - - const vertices = [ - 0, 0, 0, size, 0, 0, - 0, 0, 0, 0, size, 0, - 0, 0, 0, 0, 0, size - ]; - - const colors = [ - 1, 0, 0, 1, 0.6, 0, - 0, 1, 0, 0.6, 1, 0, - 0, 0, 1, 0, 0.6, 1 - ]; - - const geometry = new BufferGeometry(); - geometry.setAttribute('position', new Float32BufferAttribute(vertices, 3)); - geometry.setAttribute('color', new Float32BufferAttribute(colors, 3)); - - const material = new LineBasicMaterial({ vertexColors: true, toneMapped: false }); - - super(geometry, material); - - this.type = 'AxesHelper'; - - } - - setColors(xAxisColor, yAxisColor, zAxisColor) { - - const color = new Color(); - const array = this.geometry.attributes.color.array; - - color.set(xAxisColor); - color.toArray(array, 0); - color.toArray(array, 3); - - color.set(yAxisColor); - color.toArray(array, 6); - color.toArray(array, 9); - - color.set(zAxisColor); - color.toArray(array, 12); - color.toArray(array, 15); - - this.geometry.attributes.color.needsUpdate = true; - - return this; - - } - - dispose() { - - this.geometry.dispose(); - this.material.dispose(); - - } - -} - -class ShapePath { - - constructor() { - - this.type = 'ShapePath'; - - this.color = new Color(); - - this.subPaths = []; - this.currentPath = null; - - } - - moveTo(x, y) { - - this.currentPath = new Path(); - this.subPaths.push(this.currentPath); - this.currentPath.moveTo(x, y); - - return this; - - } - - lineTo(x, y) { - - this.currentPath.lineTo(x, y); - - return this; - - } - - quadraticCurveTo(aCPx, aCPy, aX, aY) { - - this.currentPath.quadraticCurveTo(aCPx, aCPy, aX, aY); - - return this; - - } - - bezierCurveTo(aCP1x, aCP1y, aCP2x, aCP2y, aX, aY) { - - this.currentPath.bezierCurveTo(aCP1x, aCP1y, aCP2x, aCP2y, aX, aY); - - return this; - - } - - splineThru(pts) { - - this.currentPath.splineThru(pts); - - return this; - - } - - toShapes(isCCW) { - - function toShapesNoHoles(inSubpaths) { - - const shapes = []; - - for (let i = 0, l = inSubpaths.length; i < l; i++) { - - const tmpPath = inSubpaths[i]; - - const tmpShape = new Shape(); - tmpShape.curves = tmpPath.curves; - - shapes.push(tmpShape); - - } - - return shapes; - - } - - function isPointInsidePolygon(inPt, inPolygon) { - - const polyLen = inPolygon.length; - - // inPt on polygon contour => immediate success or - // toggling of inside/outside at every single! 
intersection point of an edge - // with the horizontal line through inPt, left of inPt - // not counting lowerY endpoints of edges and whole edges on that line - let inside = false; - for (let p = polyLen - 1, q = 0; q < polyLen; p = q++) { - - let edgeLowPt = inPolygon[p]; - let edgeHighPt = inPolygon[q]; - - let edgeDx = edgeHighPt.x - edgeLowPt.x; - let edgeDy = edgeHighPt.y - edgeLowPt.y; - - if (Math.abs(edgeDy) > Number.EPSILON) { - - // not parallel - if (edgeDy < 0) { - - edgeLowPt = inPolygon[q]; edgeDx = - edgeDx; - edgeHighPt = inPolygon[p]; edgeDy = - edgeDy; - - } - - if ((inPt.y < edgeLowPt.y) || (inPt.y > edgeHighPt.y)) continue; - - if (inPt.y === edgeLowPt.y) { - - if (inPt.x === edgeLowPt.x) return true; // inPt is on contour ? - // continue; // no intersection or edgeLowPt => doesn't count !!! - - } else { - - const perpEdge = edgeDy * (inPt.x - edgeLowPt.x) - edgeDx * (inPt.y - edgeLowPt.y); - if (perpEdge === 0) return true; // inPt is on contour ? - if (perpEdge < 0) continue; - inside = !inside; // true intersection left of inPt - - } - - } else { - - // parallel or collinear - if (inPt.y !== edgeLowPt.y) continue; // parallel - // edge lies on the same horizontal line as inPt - if (((edgeHighPt.x <= inPt.x) && (inPt.x <= edgeLowPt.x)) || - ((edgeLowPt.x <= inPt.x) && (inPt.x <= edgeHighPt.x))) return true; // inPt: Point on contour ! - // continue; - - } - - } - - return inside; - - } - - const isClockWise = ShapeUtils.isClockWise; - - const subPaths = this.subPaths; - if (subPaths.length === 0) return []; - - let solid, tmpPath, tmpShape; - const shapes = []; - - if (subPaths.length === 1) { - - tmpPath = subPaths[0]; - tmpShape = new Shape(); - tmpShape.curves = tmpPath.curves; - shapes.push(tmpShape); - return shapes; - - } - - let holesFirst = !isClockWise(subPaths[0].getPoints()); - holesFirst = isCCW ? !holesFirst : holesFirst; - - // console.log("Holes first", holesFirst); - - const betterShapeHoles = []; - const newShapes = []; - let newShapeHoles = []; - let mainIdx = 0; - let tmpPoints; - - newShapes[mainIdx] = undefined; - newShapeHoles[mainIdx] = []; - - for (let i = 0, l = subPaths.length; i < l; i++) { - - tmpPath = subPaths[i]; - tmpPoints = tmpPath.getPoints(); - solid = isClockWise(tmpPoints); - solid = isCCW ? !solid : solid; - - if (solid) { - - if ((!holesFirst) && (newShapes[mainIdx])) mainIdx++; - - newShapes[mainIdx] = { s: new Shape(), p: tmpPoints }; - newShapes[mainIdx].s.curves = tmpPath.curves; - - if (holesFirst) mainIdx++; - newShapeHoles[mainIdx] = []; - - //console.log('cw', i); - - } else { - - newShapeHoles[mainIdx].push({ h: tmpPath, p: tmpPoints[0] }); - - //console.log('ccw', i); - - } - - } - - // only Holes? 
-> probably all Shapes with wrong orientation - if (!newShapes[0]) return toShapesNoHoles(subPaths); - - - if (newShapes.length > 1) { - - let ambiguous = false; - let toChange = 0; - - for (let sIdx = 0, sLen = newShapes.length; sIdx < sLen; sIdx++) { - - betterShapeHoles[sIdx] = []; - - } - - for (let sIdx = 0, sLen = newShapes.length; sIdx < sLen; sIdx++) { - - const sho = newShapeHoles[sIdx]; - - for (let hIdx = 0; hIdx < sho.length; hIdx++) { - - const ho = sho[hIdx]; - let hole_unassigned = true; - - for (let s2Idx = 0; s2Idx < newShapes.length; s2Idx++) { - - if (isPointInsidePolygon(ho.p, newShapes[s2Idx].p)) { - - if (sIdx !== s2Idx) toChange++; - - if (hole_unassigned) { - - hole_unassigned = false; - betterShapeHoles[s2Idx].push(ho); - - } else { - - ambiguous = true; - - } - - } - - } - - if (hole_unassigned) { - - betterShapeHoles[sIdx].push(ho); - - } - - } - - } - - if (toChange > 0 && ambiguous === false) { - - newShapeHoles = betterShapeHoles; - - } - - } - - let tmpHoles; - - for (let i = 0, il = newShapes.length; i < il; i++) { - - tmpShape = newShapes[i].s; - shapes.push(tmpShape); - tmpHoles = newShapeHoles[i]; - - for (let j = 0, jl = tmpHoles.length; j < jl; j++) { - - tmpShape.holes.push(tmpHoles[j].h); - - } - - } - - //console.log("shape", shapes); - - return shapes; - - } - -} - -// Fast Half Float Conversions, http://www.fox-toolkit.org/ftp/fasthalffloatconversion.pdf - -const _tables = /*@__PURE__*/ _generateTables(); - -function _generateTables() { - - // float32 to float16 helpers - - const buffer = new ArrayBuffer(4); - const floatView = new Float32Array(buffer); - const uint32View = new Uint32Array(buffer); - - const baseTable = new Uint32Array(512); - const shiftTable = new Uint32Array(512); - - for (let i = 0; i < 256; ++i) { - - const e = i - 127; - - // very small number (0, -0) - - if (e < - 27) { - - baseTable[i] = 0x0000; - baseTable[i | 0x100] = 0x8000; - shiftTable[i] = 24; - shiftTable[i | 0x100] = 24; - - // small number (denorm) - - } else if (e < - 14) { - - baseTable[i] = 0x0400 >> (- e - 14); - baseTable[i | 0x100] = (0x0400 >> (- e - 14)) | 0x8000; - shiftTable[i] = - e - 1; - shiftTable[i | 0x100] = - e - 1; - - // normal number - - } else if (e <= 15) { - - baseTable[i] = (e + 15) << 10; - baseTable[i | 0x100] = ((e + 15) << 10) | 0x8000; - shiftTable[i] = 13; - shiftTable[i | 0x100] = 13; - - // large number (Infinity, -Infinity) - - } else if (e < 128) { - - baseTable[i] = 0x7c00; - baseTable[i | 0x100] = 0xfc00; - shiftTable[i] = 24; - shiftTable[i | 0x100] = 24; - - // stay (NaN, Infinity, -Infinity) - - } else { - - baseTable[i] = 0x7c00; - baseTable[i | 0x100] = 0xfc00; - shiftTable[i] = 13; - shiftTable[i | 0x100] = 13; - - } - - } - - // float16 to float32 helpers - - const mantissaTable = new Uint32Array(2048); - const exponentTable = new Uint32Array(64); - const offsetTable = new Uint32Array(64); - - for (let i = 1; i < 1024; ++i) { - - let m = i << 13; // zero pad mantissa bits - let e = 0; // zero exponent - - // normalized - while ((m & 0x00800000) === 0) { - - m <<= 1; - e -= 0x00800000; // decrement exponent - - } - - m &= ~0x00800000; // clear leading 1 bit - e += 0x38800000; // adjust bias - - mantissaTable[i] = m | e; - - } - - for (let i = 1024; i < 2048; ++i) { - - mantissaTable[i] = 0x38000000 + ((i - 1024) << 13); - - } - - for (let i = 1; i < 31; ++i) { - - exponentTable[i] = i << 23; - - } - - exponentTable[31] = 0x47800000; - exponentTable[32] = 0x80000000; - - for (let i = 33; i < 63; ++i) { - - exponentTable[i] = 
0x80000000 + ((i - 32) << 23); - - } - - exponentTable[63] = 0xc7800000; - - for (let i = 1; i < 64; ++i) { - - if (i !== 32) { - - offsetTable[i] = 1024; - - } - - } - - return { - floatView: floatView, - uint32View: uint32View, - baseTable: baseTable, - shiftTable: shiftTable, - mantissaTable: mantissaTable, - exponentTable: exponentTable, - offsetTable: offsetTable - }; - -} - -// float32 to float16 - -function toHalfFloat(val) { - - if (Math.abs(val) > 65504) console.warn('THREE.DataUtils.toHalfFloat(): Value out of range.'); - - val = clamp(val, - 65504, 65504); - - _tables.floatView[0] = val; - const f = _tables.uint32View[0]; - const e = (f >> 23) & 0x1ff; - return _tables.baseTable[e] + ((f & 0x007fffff) >> _tables.shiftTable[e]); - -} - -// float16 to float32 - -function fromHalfFloat(val) { - - const m = val >> 10; - _tables.uint32View[0] = _tables.mantissaTable[_tables.offsetTable[m] + (val & 0x3ff)] + _tables.exponentTable[m]; - return _tables.floatView[0]; - -} - -var DataUtils = /*#__PURE__*/Object.freeze({ - __proto__: null, - fromHalfFloat: fromHalfFloat, - toHalfFloat: toHalfFloat -}); - -// r144 - -class BoxBufferGeometry extends BoxGeometry { - - constructor(width, height, depth, widthSegments, heightSegments, depthSegments) { - - console.warn('THREE.BoxBufferGeometry has been renamed to THREE.BoxGeometry.'); - super(width, height, depth, widthSegments, heightSegments, depthSegments); - - - } - -} - -// r144 - -class CapsuleBufferGeometry extends CapsuleGeometry { - - constructor(radius, length, capSegments, radialSegments) { - - console.warn('THREE.CapsuleBufferGeometry has been renamed to THREE.CapsuleGeometry.'); - super(radius, length, capSegments, radialSegments); - - } - -} - -// r144 - -class CircleBufferGeometry extends CircleGeometry { - - constructor(radius, segments, thetaStart, thetaLength) { - - console.warn('THREE.CircleBufferGeometry has been renamed to THREE.CircleGeometry.'); - super(radius, segments, thetaStart, thetaLength); - - } - -} - -// r144 - -class ConeBufferGeometry extends ConeGeometry { - - constructor(radius, height, radialSegments, heightSegments, openEnded, thetaStart, thetaLength) { - - console.warn('THREE.ConeBufferGeometry has been renamed to THREE.ConeGeometry.'); - super(radius, height, radialSegments, heightSegments, openEnded, thetaStart, thetaLength); - - } - -} - -// r144 - -class CylinderBufferGeometry extends CylinderGeometry { - - constructor(radiusTop, radiusBottom, height, radialSegments, heightSegments, openEnded, thetaStart, thetaLength) { - - console.warn('THREE.CylinderBufferGeometry has been renamed to THREE.CylinderGeometry.'); - super(radiusTop, radiusBottom, height, radialSegments, heightSegments, openEnded, thetaStart, thetaLength); - - } - -} - -// r144 - -class DodecahedronBufferGeometry extends DodecahedronGeometry { - - constructor(radius, detail) { - - console.warn('THREE.DodecahedronBufferGeometry has been renamed to THREE.DodecahedronGeometry.'); - super(radius, detail); - - } - -} - -// r144 - -class ExtrudeBufferGeometry extends ExtrudeGeometry { - - constructor(shapes, options) { - - console.warn('THREE.ExtrudeBufferGeometry has been renamed to THREE.ExtrudeGeometry.'); - super(shapes, options); - - } - -} - -// r144 - -class IcosahedronBufferGeometry extends IcosahedronGeometry { - - constructor(radius, detail) { - - console.warn('THREE.IcosahedronBufferGeometry has been renamed to THREE.IcosahedronGeometry.'); - super(radius, detail); - - } - -} - -// r144 - -class LatheBufferGeometry extends 
LatheGeometry { - - constructor(points, segments, phiStart, phiLength) { - - console.warn('THREE.LatheBufferGeometry has been renamed to THREE.LatheGeometry.'); - super(points, segments, phiStart, phiLength); - - } - -} - -// r144 - -class OctahedronBufferGeometry extends OctahedronGeometry { - - constructor(radius, detail) { - - console.warn('THREE.OctahedronBufferGeometry has been renamed to THREE.OctahedronGeometry.'); - super(radius, detail); - - } - -} - -// r144 - -class PlaneBufferGeometry extends PlaneGeometry { - - constructor(width, height, widthSegments, heightSegments) { - - console.warn('THREE.PlaneBufferGeometry has been renamed to THREE.PlaneGeometry.'); - super(width, height, widthSegments, heightSegments); - - } - -} - -// r144 - -class PolyhedronBufferGeometry extends PolyhedronGeometry { - - constructor(vertices, indices, radius, detail) { - - console.warn('THREE.PolyhedronBufferGeometry has been renamed to THREE.PolyhedronGeometry.'); - super(vertices, indices, radius, detail); - - } - -} - -// r144 - -class RingBufferGeometry extends RingGeometry { - - constructor(innerRadius, outerRadius, thetaSegments, phiSegments, thetaStart, thetaLength) { - - console.warn('THREE.RingBufferGeometry has been renamed to THREE.RingGeometry.'); - super(innerRadius, outerRadius, thetaSegments, phiSegments, thetaStart, thetaLength); - - } - -} - -// r144 - -class ShapeBufferGeometry extends ShapeGeometry { - - constructor(shapes, curveSegments) { - - console.warn('THREE.ShapeBufferGeometry has been renamed to THREE.ShapeGeometry.'); - super(shapes, curveSegments); - - } - -} - -// r144 - -class SphereBufferGeometry extends SphereGeometry { - - constructor(radius, widthSegments, heightSegments, phiStart, phiLength, thetaStart, thetaLength) { - - console.warn('THREE.SphereBufferGeometry has been renamed to THREE.SphereGeometry.'); - super(radius, widthSegments, heightSegments, phiStart, phiLength, thetaStart, thetaLength); - - } - -} - -// r144 - -class TetrahedronBufferGeometry extends TetrahedronGeometry { - - constructor(radius, detail) { - - console.warn('THREE.TetrahedronBufferGeometry has been renamed to THREE.TetrahedronGeometry.'); - super(radius, detail); - - } - -} - -// r144 - -class TorusBufferGeometry extends TorusGeometry { - - constructor(radius, tube, radialSegments, tubularSegments, arc) { - - console.warn('THREE.TorusBufferGeometry has been renamed to THREE.TorusGeometry.'); - super(radius, tube, radialSegments, tubularSegments, arc); - - } - -} - -// r144 - -class TorusKnotBufferGeometry extends TorusKnotGeometry { - - constructor(radius, tube, tubularSegments, radialSegments, p, q) { - - console.warn('THREE.TorusKnotBufferGeometry has been renamed to THREE.TorusKnotGeometry.'); - super(radius, tube, tubularSegments, radialSegments, p, q); - - } - -} - -// r144 - -class TubeBufferGeometry extends TubeGeometry { - - constructor(path, tubularSegments, radius, radialSegments, closed) { - - console.warn('THREE.TubeBufferGeometry has been renamed to THREE.TubeGeometry.'); - super(path, tubularSegments, radius, radialSegments, closed); - - } - -} - -if (typeof __THREE_DEVTOOLS__ !== 'undefined') { - - __THREE_DEVTOOLS__.dispatchEvent(new CustomEvent('register', { - detail: { - revision: REVISION, - } - })); - -} - -if (typeof window !== 'undefined') { - - if (window.__THREE__) { - - console.warn('WARNING: Multiple instances of Three.js being imported.'); - - } else { - - window.__THREE__ = REVISION; - - } - -} - -export { ACESFilmicToneMapping, AddEquation, AddOperation, 
AdditiveAnimationBlendMode, AdditiveBlending, AlphaFormat, AlwaysDepth, AlwaysStencilFunc, AmbientLight, AmbientLightProbe, AnimationClip, AnimationLoader, AnimationMixer, AnimationObjectGroup, AnimationUtils, ArcCurve, ArrayCamera, ArrowHelper, Audio, AudioAnalyser, AudioContext, AudioListener, AudioLoader, AxesHelper, BackSide, BasicDepthPacking, BasicShadowMap, Bone, BooleanKeyframeTrack, Box2, Box3, Box3Helper, BoxBufferGeometry, BoxGeometry, BoxHelper, BufferAttribute, BufferGeometry, BufferGeometryLoader, ByteType, Cache, Camera, CameraHelper, CanvasTexture, CapsuleBufferGeometry, CapsuleGeometry, CatmullRomCurve3, CineonToneMapping, CircleBufferGeometry, CircleGeometry, ClampToEdgeWrapping, Clock, Color, ColorKeyframeTrack, ColorManagement, CompressedArrayTexture, CompressedTexture, CompressedTextureLoader, ConeBufferGeometry, ConeGeometry, CubeCamera, CubeReflectionMapping, CubeRefractionMapping, CubeTexture, CubeTextureLoader, CubeUVReflectionMapping, CubicBezierCurve, CubicBezierCurve3, CubicInterpolant, CullFaceBack, CullFaceFront, CullFaceFrontBack, CullFaceNone, Curve, CurvePath, CustomBlending, CustomToneMapping, CylinderBufferGeometry, CylinderGeometry, Cylindrical, Data3DTexture, DataArrayTexture, DataTexture, DataTextureLoader, DataUtils, DecrementStencilOp, DecrementWrapStencilOp, DefaultLoadingManager, DepthFormat, DepthStencilFormat, DepthTexture, DirectionalLight, DirectionalLightHelper, DiscreteInterpolant, DodecahedronBufferGeometry, DodecahedronGeometry, DoubleSide, DstAlphaFactor, DstColorFactor, DynamicCopyUsage, DynamicDrawUsage, DynamicReadUsage, EdgesGeometry, EllipseCurve, EqualDepth, EqualStencilFunc, EquirectangularReflectionMapping, EquirectangularRefractionMapping, Euler, EventDispatcher, ExtrudeBufferGeometry, ExtrudeGeometry, FileLoader, Float16BufferAttribute, Float32BufferAttribute, Float64BufferAttribute, FloatType, Fog, FogExp2, FramebufferTexture, FrontSide, Frustum, GLBufferAttribute, GLSL1, GLSL3, GreaterDepth, GreaterEqualDepth, GreaterEqualStencilFunc, GreaterStencilFunc, GridHelper, Group, HalfFloatType, HemisphereLight, HemisphereLightHelper, HemisphereLightProbe, IcosahedronBufferGeometry, IcosahedronGeometry, ImageBitmapLoader, ImageLoader, ImageUtils, IncrementStencilOp, IncrementWrapStencilOp, InstancedBufferAttribute, InstancedBufferGeometry, InstancedInterleavedBuffer, InstancedMesh, Int16BufferAttribute, Int32BufferAttribute, Int8BufferAttribute, IntType, InterleavedBuffer, InterleavedBufferAttribute, Interpolant, InterpolateDiscrete, InterpolateLinear, InterpolateSmooth, InvertStencilOp, KeepStencilOp, KeyframeTrack, LOD, LatheBufferGeometry, LatheGeometry, Layers, LessDepth, LessEqualDepth, LessEqualStencilFunc, LessStencilFunc, Light, LightProbe, Line, Line3, LineBasicMaterial, LineCurve, LineCurve3, LineDashedMaterial, LineLoop, LineSegments, LinearEncoding, LinearFilter, LinearInterpolant, LinearMipMapLinearFilter, LinearMipMapNearestFilter, LinearMipmapLinearFilter, LinearMipmapNearestFilter, LinearSRGBColorSpace, LinearToneMapping, Loader, LoaderUtils, LoadingManager, LoopOnce, LoopPingPong, LoopRepeat, LuminanceAlphaFormat, LuminanceFormat, MOUSE, Material, MaterialLoader, MathUtils, Matrix3, Matrix4, MaxEquation, Mesh, MeshBasicMaterial, MeshDepthMaterial, MeshDistanceMaterial, MeshLambertMaterial, MeshMatcapMaterial, MeshNormalMaterial, MeshPhongMaterial, MeshPhysicalMaterial, MeshStandardMaterial, MeshToonMaterial, MinEquation, MirroredRepeatWrapping, MixOperation, MultiplyBlending, MultiplyOperation, NearestFilter, 
NearestMipMapLinearFilter, NearestMipMapNearestFilter, NearestMipmapLinearFilter, NearestMipmapNearestFilter, NeverDepth, NeverStencilFunc, NoBlending, NoColorSpace, NoToneMapping, NormalAnimationBlendMode, NormalBlending, NotEqualDepth, NotEqualStencilFunc, NumberKeyframeTrack, Object3D, ObjectLoader, ObjectSpaceNormalMap, OctahedronBufferGeometry, OctahedronGeometry, OneFactor, OneMinusDstAlphaFactor, OneMinusDstColorFactor, OneMinusSrcAlphaFactor, OneMinusSrcColorFactor, OrthographicCamera, PCFShadowMap, PCFSoftShadowMap, PMREMGenerator, Path, PerspectiveCamera, Plane, PlaneBufferGeometry, PlaneGeometry, PlaneHelper, PointLight, PointLightHelper, Points, PointsMaterial, PolarGridHelper, PolyhedronBufferGeometry, PolyhedronGeometry, PositionalAudio, PropertyBinding, PropertyMixer, QuadraticBezierCurve, QuadraticBezierCurve3, Quaternion, QuaternionKeyframeTrack, QuaternionLinearInterpolant, RED_GREEN_RGTC2_Format, RED_RGTC1_Format, REVISION, RGBADepthPacking, RGBAFormat, RGBAIntegerFormat, RGBA_ASTC_10x10_Format, RGBA_ASTC_10x5_Format, RGBA_ASTC_10x6_Format, RGBA_ASTC_10x8_Format, RGBA_ASTC_12x10_Format, RGBA_ASTC_12x12_Format, RGBA_ASTC_4x4_Format, RGBA_ASTC_5x4_Format, RGBA_ASTC_5x5_Format, RGBA_ASTC_6x5_Format, RGBA_ASTC_6x6_Format, RGBA_ASTC_8x5_Format, RGBA_ASTC_8x6_Format, RGBA_ASTC_8x8_Format, RGBA_BPTC_Format, RGBA_ETC2_EAC_Format, RGBA_PVRTC_2BPPV1_Format, RGBA_PVRTC_4BPPV1_Format, RGBA_S3TC_DXT1_Format, RGBA_S3TC_DXT3_Format, RGBA_S3TC_DXT5_Format, RGB_ETC1_Format, RGB_ETC2_Format, RGB_PVRTC_2BPPV1_Format, RGB_PVRTC_4BPPV1_Format, RGB_S3TC_DXT1_Format, RGFormat, RGIntegerFormat, RawShaderMaterial, Ray, Raycaster, RectAreaLight, RedFormat, RedIntegerFormat, ReinhardToneMapping, RepeatWrapping, ReplaceStencilOp, ReverseSubtractEquation, RingBufferGeometry, RingGeometry, SIGNED_RED_GREEN_RGTC2_Format, SIGNED_RED_RGTC1_Format, SRGBColorSpace, Scene, ShaderChunk, ShaderLib, ShaderMaterial, ShadowMaterial, Shape, ShapeBufferGeometry, ShapeGeometry, ShapePath, ShapeUtils, ShortType, Skeleton, SkeletonHelper, SkinnedMesh, Source, Sphere, SphereBufferGeometry, SphereGeometry, Spherical, SphericalHarmonics3, SplineCurve, SpotLight, SpotLightHelper, Sprite, SpriteMaterial, SrcAlphaFactor, SrcAlphaSaturateFactor, SrcColorFactor, StaticCopyUsage, StaticDrawUsage, StaticReadUsage, StereoCamera, StreamCopyUsage, StreamDrawUsage, StreamReadUsage, StringKeyframeTrack, SubtractEquation, SubtractiveBlending, TOUCH, TangentSpaceNormalMap, TetrahedronBufferGeometry, TetrahedronGeometry, Texture, TextureLoader, TorusBufferGeometry, TorusGeometry, TorusKnotBufferGeometry, TorusKnotGeometry, Triangle, TriangleFanDrawMode, TriangleStripDrawMode, TrianglesDrawMode, TubeBufferGeometry, TubeGeometry, TwoPassDoubleSide, UVMapping, Uint16BufferAttribute, Uint32BufferAttribute, Uint8BufferAttribute, Uint8ClampedBufferAttribute, Uniform, UniformsGroup, UniformsLib, UniformsUtils, UnsignedByteType, UnsignedInt248Type, UnsignedIntType, UnsignedShort4444Type, UnsignedShort5551Type, UnsignedShortType, VSMShadowMap, Vector2, Vector3, Vector4, VectorKeyframeTrack, VideoTexture, WebGL1Renderer, WebGL3DRenderTarget, WebGLArrayRenderTarget, WebGLCubeRenderTarget, WebGLMultipleRenderTargets, WebGLRenderTarget, WebGLRenderer, WebGLUtils, WireframeGeometry, WrapAroundEnding, ZeroCurvatureEnding, ZeroFactor, ZeroSlopeEnding, ZeroStencilOp, _SRGBAFormat, sRGBEncoding }; \ No newline at end of file diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/cascade_stuff/predict_next_stage.py 
b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/cascade_stuff/predict_next_stage.py deleted file mode 100644 index 2c760cd048d5f0b003d7fdd86b457307ef608c24..0000000000000000000000000000000000000000 --- a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/cascade_stuff/predict_next_stage.py +++ /dev/null @@ -1,135 +0,0 @@ -# Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -from copy import deepcopy - -import numpy as np -from batchgenerators.utilities.file_and_folder_operations import * -import argparse -from nnunet.preprocessing.preprocessing import resample_data_or_seg -from batchgenerators.utilities.file_and_folder_operations import maybe_mkdir_p -import nnunet -from nnunet.run.default_configuration import get_default_configuration -from multiprocessing import Pool - -from nnunet.training.model_restore import recursive_find_python_class -from nnunet.training.network_training.nnUNetTrainer import nnUNetTrainer - - -def resample_and_save(predicted, target_shape, output_file, force_separate_z=False, - interpolation_order=1, interpolation_order_z=0): - if isinstance(predicted, str): - assert isfile(predicted), "If isinstance(segmentation_softmax, str) then " \ - "isfile(segmentation_softmax) must be True" - del_file = deepcopy(predicted) - predicted = np.load(predicted) - os.remove(del_file) - - predicted_new_shape = resample_data_or_seg(predicted, target_shape, False, order=interpolation_order, - do_separate_z=force_separate_z, order_z=interpolation_order_z) - seg_new_shape = predicted_new_shape.argmax(0) - np.savez_compressed(output_file, data=seg_new_shape.astype(np.uint8)) - - -def predict_next_stage(trainer, stage_to_be_predicted_folder): - output_folder = join(pardir(trainer.output_folder), "pred_next_stage") - maybe_mkdir_p(output_folder) - - if 'segmentation_export_params' in trainer.plans.keys(): - force_separate_z = trainer.plans['segmentation_export_params']['force_separate_z'] - interpolation_order = trainer.plans['segmentation_export_params']['interpolation_order'] - interpolation_order_z = trainer.plans['segmentation_export_params']['interpolation_order_z'] - else: - force_separate_z = None - interpolation_order = 1 - interpolation_order_z = 0 - - export_pool = Pool(2) - results = [] - - for pat in trainer.dataset_val.keys(): - print(pat) - data_file = trainer.dataset_val[pat]['data_file'] - data_preprocessed = np.load(data_file)['data'][:-1] - - predicted_probabilities = trainer.predict_preprocessed_data_return_seg_and_softmax( - data_preprocessed, do_mirroring=trainer.data_aug_params["do_mirror"], - mirror_axes=trainer.data_aug_params['mirror_axes'], mixed_precision=trainer.fp16)[1] - - data_file_nofolder = data_file.split("/")[-1] - data_file_nextstage = join(stage_to_be_predicted_folder, data_file_nofolder) - data_nextstage = np.load(data_file_nextstage)['data'] - target_shp = data_nextstage.shape[1:] - output_file = join(output_folder, 
data_file_nextstage.split("/")[-1][:-4] + "_segFromPrevStage.npz") - - if np.prod(predicted_probabilities.shape) > (2e9 / 4 * 0.85): # *0.85 just to be safe - np.save(output_file[:-4] + ".npy", predicted_probabilities) - predicted_probabilities = output_file[:-4] + ".npy" - - results.append(export_pool.starmap_async(resample_and_save, [(predicted_probabilities, target_shp, output_file, - force_separate_z, interpolation_order, - interpolation_order_z)])) - - _ = [i.get() for i in results] - export_pool.close() - export_pool.join() - - -if __name__ == "__main__": - """ - RUNNING THIS SCRIPT MANUALLY IS USUALLY NOT NECESSARY. USE THE run_training.py FILE! - - This script is intended for predicting all the low resolution predictions of 3d_lowres for the next stage of the - cascade. It needs to run once for each fold so that the segmentation is only generated for the validation set - and not on the data the network was trained on. Run it with - python predict_next_stage TRAINERCLASS TASK FOLD""" - - parser = argparse.ArgumentParser() - parser.add_argument("network_trainer") - parser.add_argument("task") - parser.add_argument("fold", type=int) - - args = parser.parse_args() - - trainerclass = args.network_trainer - task = args.task - fold = args.fold - - plans_file, folder_with_preprocessed_data, output_folder_name, dataset_directory, batch_dice, stage = \ - get_default_configuration("3d_lowres", task) - - trainer_class = recursive_find_python_class([join(nnunet.__path__[0], "training", "network_training")], - trainerclass, - "nnunet.training.network_training") - - if trainer_class is None: - raise RuntimeError("Could not find trainer class in nnunet.training.network_training") - else: - assert issubclass(trainer_class, - nnUNetTrainer), "network_trainer was found but is not derived from nnUNetTrainer" - - trainer = trainer_class(plans_file, fold, folder_with_preprocessed_data, output_folder=output_folder_name, - dataset_directory=dataset_directory, batch_dice=batch_dice, stage=stage) - - trainer.initialize(False) - trainer.load_dataset() - trainer.do_split() - trainer.load_best_checkpoint(train=False) - - stage_to_be_predicted_folder = join(dataset_directory, trainer.plans['data_identifier'] + "_stage%d" % 1) - output_folder = join(pardir(trainer.output_folder), "pred_next_stage") - maybe_mkdir_p(output_folder) - - predict_next_stage(trainer, stage_to_be_predicted_folder) diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNetTrainerV2_fp32.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNetTrainerV2_fp32.py deleted file mode index 58b7c2fbdfc55df3b2c46ee6acee7ab66694f455..0000000000000000000000000000000000000000 --- a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNetTrainerV2_fp32.py +++ /dev/null @@ -1,27 +0,0 @@ -# Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- - -from nnunet.training.network_training.nnUNetTrainerV2 import nnUNetTrainerV2 - - -class nnUNetTrainerV2_fp32(nnUNetTrainerV2): - """ - Info for Fabian: same as internal nnUNetTrainerV2_2 - """ - - def __init__(self, plans_file, fold, output_folder=None, dataset_directory=None, batch_dice=True, stage=None, - unpack_data=True, deterministic=True, fp16=False): - super().__init__(plans_file, fold, output_folder, dataset_directory, batch_dice, stage, unpack_data, - deterministic, False) diff --git a/spaces/huggingchat/chat-ui/scripts/updateLocalEnv.ts b/spaces/huggingchat/chat-ui/scripts/updateLocalEnv.ts deleted file mode 100644 index 151eb2e301aa38987426a8bc802bdfbc397ec025..0000000000000000000000000000000000000000 --- a/spaces/huggingchat/chat-ui/scripts/updateLocalEnv.ts +++ /dev/null @@ -1,20 +0,0 @@ -import fs from "fs"; - -const SECRET_CONFIG = fs.existsSync(".env.SECRET_CONFIG") - ? fs.readFileSync(".env.SECRET_CONFIG", "utf8") - : process.env.SECRET_CONFIG; - -if (!SECRET_CONFIG) { - throw new Error( - "SECRET_CONFIG is not defined. Please provide it either in a file or as an environment variable." - ); -} - -// Read the content of the file .env.template -const PUBLIC_CONFIG = fs.readFileSync(".env.template", "utf8"); - -// Prepend the content of the env variable SECRET_CONFIG -const full_config = `${PUBLIC_CONFIG}\n${SECRET_CONFIG}`; - -// Write full_config to .env.local -fs.writeFileSync(".env.local", full_config); diff --git a/spaces/huggingface-projects/deepfloydif-bot/app.py b/spaces/huggingface-projects/deepfloydif-bot/app.py deleted file mode 100644 index f0579b2dac3a422b233cd7a9e744c33f3cd2b226..0000000000000000000000000000000000000000 --- a/spaces/huggingface-projects/deepfloydif-bot/app.py +++ /dev/null @@ -1,303 +0,0 @@ -import asyncio -import glob -import os -import pathlib -import random -import threading - -import gradio as gr -import discord -from gradio_client import Client -from PIL import Image -from discord.ext import commands - -from discord.ui import Button, View - - -HF_TOKEN = os.getenv("HF_TOKEN") -deepfloydif_client = Client("huggingface-projects/IF", HF_TOKEN) -DISCORD_TOKEN = os.getenv("DISCORD_TOKEN") -intents = discord.Intents.all() -bot = commands.Bot(command_prefix="/", intents=intents) - - -@bot.event -async def on_ready(): - print(f"Logged in as {bot.user} (ID: {bot.user.id})") - synced = await bot.tree.sync() - print(f"Synced commands: {', '.join([s.name for s in synced])}.") - print("------") - - -@bot.hybrid_command( - name="deepfloydif", - description="Enter a prompt to generate an image! 
Can generate realistic text, too!", -) -async def deepfloydif(ctx, prompt: str): - """DeepfloydIF stage 1 generation""" - try: - await deepfloydif_generate64(ctx, prompt) - except Exception as e: - print(f"Error: {e}") - - -def deepfloydif_generate64_inference(prompt): - """Generates four images based on a prompt""" - negative_prompt = "" - seed = random.randint(0, 1000) - number_of_images = 4 - guidance_scale = 7 - custom_timesteps_1 = "smart50" - number_of_inference_steps = 50 - ( - stage_1_images, - stage_1_param_path, - path_for_upscale256_upscaling, - ) = deepfloydif_client.predict( - prompt, - negative_prompt, - seed, - number_of_images, - guidance_scale, - custom_timesteps_1, - number_of_inference_steps, - api_name="/generate64", - ) - return [stage_1_images, stage_1_param_path, path_for_upscale256_upscaling] - - -def deepfloydif_upscale256_inference(index, path_for_upscale256_upscaling): - """Upscales one of the images from deepfloydif_generate64_inference based on the chosen index""" - selected_index_for_upscale256 = index - seed_2 = 0 - guidance_scale_2 = 4 - custom_timesteps_2 = "smart50" - number_of_inference_steps_2 = 50 - result_path = deepfloydif_client.predict( - path_for_upscale256_upscaling, - selected_index_for_upscale256, - seed_2, - guidance_scale_2, - custom_timesteps_2, - number_of_inference_steps_2, - api_name="/upscale256", - ) - return result_path - - -def deepfloydif_upscale1024_inference(index, path_for_upscale256_upscaling, prompt): - """Upscales to stage 2, then stage 3""" - selected_index_for_upscale256 = index - seed_2 = 0 # default seed for stage 2 256 upscaling - guidance_scale_2 = 4 # default for stage 2 - custom_timesteps_2 = "smart50" # default for stage 2 - number_of_inference_steps_2 = 50 # default for stage 2 - negative_prompt = "" # empty (not used, could add in the future) - - seed_3 = 0 # default for stage 3 1024 upscaling - guidance_scale_3 = 9 # default for stage 3 - number_of_inference_steps_3 = 40 # default for stage 3 - - result_path = deepfloydif_client.predict( - path_for_upscale256_upscaling, - selected_index_for_upscale256, - seed_2, - guidance_scale_2, - custom_timesteps_2, - number_of_inference_steps_2, - prompt, - negative_prompt, - seed_3, - guidance_scale_3, - number_of_inference_steps_3, - api_name="/upscale1024", - ) - return result_path - - -def load_image(png_files, stage_1_images): - """Opens images as variables so we can combine them later""" - results = [] - for file in png_files: - png_path = os.path.join(stage_1_images, file) - results.append(Image.open(png_path)) - return results - - -def combine_images(png_files, stage_1_images, partial_path): - if os.environ.get("TEST_ENV") == "True": - print("Combining images for deepfloydif_generate64") - images = load_image(png_files, stage_1_images) - combined_image = Image.new("RGB", (images[0].width * 2, images[0].height * 2)) - combined_image.paste(images[0], (0, 0)) - combined_image.paste(images[1], (images[0].width, 0)) - combined_image.paste(images[2], (0, images[0].height)) - combined_image.paste(images[3], (images[0].width, images[0].height)) - combined_image_path = os.path.join(stage_1_images, f"{partial_path}.png") - combined_image.save(combined_image_path) - return combined_image_path - - -async def deepfloydif_generate64(ctx, prompt): - """DeepfloydIF command (generate images with realistic text using slash commands)""" - try: - if ctx.guild.id == 879548962464493619: - if ctx.channel.id != 1119313215675973714: - return - channel = ctx.channel - # interaction.response 
message can't be used to create a thread, so we create another message - message = await ctx.send(f"**{prompt}** - {ctx.author.mention} (generating...)") - - loop = asyncio.get_running_loop() - result = await loop.run_in_executor(None, deepfloydif_generate64_inference, prompt) - stage_1_images = result[0] - path_for_upscale256_upscaling = result[2] - - partial_path = pathlib.Path(path_for_upscale256_upscaling).name - png_files = list(glob.glob(f"{stage_1_images}/**/*.png")) - - if png_files: - await message.delete() - combined_image_path = combine_images(png_files, stage_1_images, partial_path) - if os.environ.get("TEST_ENV") == "True": - print("Images combined for deepfloydif_generate64") - - with Image.open(combined_image_path) as img: - width, height = img.size - new_width = width * 3 - new_height = height * 3 - resized_img = img.resize((new_width, new_height)) - x2_combined_image_path = combined_image_path - resized_img.save(x2_combined_image_path) - - # making image bigger, more readable - with open(x2_combined_image_path, "rb") as f: # was combined_image_path - button1 = Button(custom_id="0", emoji="↖") - button2 = Button(custom_id="1", emoji="↗") - button3 = Button(custom_id="2", emoji="↙") - button4 = Button(custom_id="3", emoji="↘") - - async def button_callback(interaction): - index = int(interaction.data["custom_id"]) # 0,1,2,3 - - await interaction.response.send_message( - f"{interaction.user.mention} (upscaling...)", ephemeral=True - ) - result_path = await deepfloydif_upscale256(index, path_for_upscale256_upscaling) - - # create and use upscale 1024 button - with open(result_path, "rb") as f: - upscale1024 = Button(label="High-quality upscale (x4)", custom_id=str(index)) - upscale1024.callback = upscale1024_callback - view = View(timeout=None) - view.add_item(upscale1024) - - await interaction.delete_original_response() - await channel.send( - content=( - f"{interaction.user.mention} Here is the upscaled image! Click the button" - " to upscale even more!" - ), - file=discord.File(f, f"{prompt}.png"), - view=view, - ) - - async def upscale1024_callback(interaction): - index = int(interaction.data["custom_id"]) - - await interaction.response.send_message( - f"{interaction.user.mention} (upscaling...)", ephemeral=True - ) - result_path = await deepfloydif_upscale1024(index, path_for_upscale256_upscaling, prompt) - - with open(result_path, "rb") as f: - await interaction.delete_original_response() - await channel.send( - content=f"{interaction.user.mention} Here's your high-quality x16 image!", - file=discord.File(f, f"{prompt}.png"), - ) - - button1.callback = button_callback - button2.callback = button_callback - button3.callback = button_callback - button4.callback = button_callback - - view = View(timeout=None) - view.add_item(button1) - view.add_item(button2) - view.add_item(button3) - view.add_item(button4) - - # could store this message as combined_image_dfif in case it's useful for future testing - await channel.send( - f"**{prompt}** - {ctx.author.mention} Click a button to upscale! 
(make larger + enhance quality)", - file=discord.File(f, f"{partial_path}.png"), - view=view, - ) - else: - await ctx.send(f"{ctx.author.mention} No PNG files were found, cannot post them!") - - except Exception as e: - print(f"Error: {e}") - - -async def deepfloydif_upscale256(index: int, path_for_upscale256_upscaling): - """upscaling function for images generated using /deepfloydif""" - try: - loop = asyncio.get_running_loop() - result_path = await loop.run_in_executor( - None, deepfloydif_upscale256_inference, index, path_for_upscale256_upscaling - ) - return result_path - - except Exception as e: - print(f"Error: {e}") - - -async def deepfloydif_upscale1024(index: int, path_for_upscale256_upscaling, prompt): - """upscaling function for images generated using /deepfloydif""" - try: - loop = asyncio.get_running_loop() - result_path = await loop.run_in_executor( - None, deepfloydif_upscale1024_inference, index, path_for_upscale256_upscaling, prompt - ) - return result_path - - except Exception as e: - print(f"Error: {e}") - - -def run_bot(): - bot.run(DISCORD_TOKEN) - - -threading.Thread(target=run_bot).start() - - -welcome_message = """ -## Add this bot to your server by clicking this link: - -https://discord.com/api/oauth2/authorize?client_id=1154395078735953930&permissions=51200&scope=bot - -## How to use it? - -The bot can be triggered via `/deepfloydif` followed by your text prompt. - -This will generate images based on the text prompt. You can upscale the images using the buttons up to 16x! - -⚠️ Note ⚠️: Please make sure this bot's command does not have the same name as another command in your server. - -⚠️ Note ⚠️: Bot commands do not work in DMs with the bot as of now. -""" - - -with gr.Blocks() as demo: - gr.Markdown(f""" - # Discord bot of https://huggingface.co/spaces/DeepFloyd/IF - {welcome_message} - """) - - -demo.queue(concurrency_count=100) -demo.queue(max_size=100) -demo.launch() diff --git a/spaces/huggingface-projects/stable-diffusion-multiplayer/frontend/src/lib/types.ts b/spaces/huggingface-projects/stable-diffusion-multiplayer/frontend/src/lib/types.ts deleted file mode index 96fc4b3f0ba0248da9c44a99e9a95794853f9e79..0000000000000000000000000000000000000000 --- a/spaces/huggingface-projects/stable-diffusion-multiplayer/frontend/src/lib/types.ts +++ /dev/null @@ -1,54 +0,0 @@ -export enum Status { - ready = 'ready', - loading = 'loading', - prompting = 'prompting', - processing = 'processing', - dragging = 'dragging', - masking = 'masking', -} - -export type Presence = { - cursor: { - x: number; - y: number; - } | null; - frame: { - x: number; - y: number; - }; - status: Status; - currentPrompt: string -} - -export type User = string; - -export type PromptImgObject = { - prompt: string; - imgURL: string; - position: { - x: number; - y: number; - } - date: number; - id: string; - room: string; -}; - -export type PromptImgKey = string; - -export interface RoomResponse { - id: number; - room_id: string; - users_count: number; -} - - -export enum ConnectionStatus { - "closed" = "closed", - "authenticating" = "authenticating", - "unavailable" = "unavailable", - "failed" = "failed", - "open" = "open", - "connecting" = "connecting", -} -export type TConnectionStatus = keyof typeof ConnectionStatus diff --git a/spaces/inamXcontru/PoeticTTS/Contoh Surat Berhenti Kerja 24 Jam PDF 62 Tips Dan Contoh Surat Perletakan Jawatan Dalam Tempoh 
Sehari.md deleted file mode 100644 index 2e6043ade4c6a4091c3a99d075f215369558792f..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/Contoh Surat Berhenti Kerja 24 Jam PDF 62 Tips Dan Contoh Surat Perletakan Jawatan Dalam Tempoh Sehari.md +++ /dev/null @@ -1,6 +0,0 @@ -

    contoh surat berhenti kerja 24 jam pdf 62


    Download File https://gohhs.com/2uz4Nm



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/innnky/visinger2-nomidi/modules/transforms.py b/spaces/innnky/visinger2-nomidi/modules/transforms.py deleted file mode 100644 index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000 --- a/spaces/innnky/visinger2-nomidi/modules/transforms.py +++ /dev/null @@ -1,193 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = { - 'tails': tails, - 'tail_bound': tail_bound - } - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum( - inputs[..., None] >= bin_locations, - dim=-1 - ) - 1 - - -def unconstrained_rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails='linear', - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == 'linear': - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError('{} tails are not implemented.'.format(tails)) - - outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative - ) - - return outputs, logabsdet - -def rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0., right=1., bottom=0., top=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError('Input to a transform is not within its domain') - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - 
raise ValueError('Minimal bin width too large for the number of bins') - if min_bin_height * num_bins > 1.0: - raise ValueError('Minimal bin height too large for the number of bins') - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (((inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta) - + input_heights * (input_delta - input_derivatives))) - b = (input_heights * input_derivatives - - (inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta)) - c = - input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * (input_delta * theta.pow(2) - + input_derivatives * theta_one_minus_theta) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Adobe Photoshop CS3 Crack - Infinite Pirate Download.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Adobe Photoshop CS3 Crack - Infinite Pirate Download.md 
deleted file mode 100644 index d80a9d092c62c967fc70751e31deecd410718384..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Adobe Photoshop CS3 Crack - Infinite Pirate Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Adobe Photoshop CS3 Crack - Infinite Pirate download


    Download ✺✺✺ https://urlin.us/2uEvgb



    - - d5da3c52bf
    -
    -
    -

    diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Aeccland.shx File Download.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Aeccland.shx File Download.md deleted file mode 100644 index 7f2369c82cb0364143112732fbae6b5e2622ed6d..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Aeccland.shx File Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Aeccland.shx File Download


    Download Zip ===> https://urlin.us/2uEvUp



    - - 3cee63e6c2
    -
    -
    -

    diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Imindmap 6 Serial Key.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Imindmap 6 Serial Key.md deleted file mode 100644 index 5ccc29b1103d2df02700fd72eb094d99a7f69a1a..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Imindmap 6 Serial Key.md +++ /dev/null @@ -1,28 +0,0 @@ -

    Imindmap 6 serial key


    Download ===> https://urlin.us/2uEyCI



    - -Monday, May 13, 2019 - -iCloud Full Backup can backup unlimited files to iCloud without wifi with 1-5Gbps - -How To Encrypt iPhone Text Messages & Mms Using OTP to Safeguard Your Privacy - -★#How To Encrypt iPhone Text Messages & Mms Using OTP to Safeguard Your Privacy★ - -Protecting your private data can be a huge challenge. Text messages, call logs and other data stored in your iPhone that could be used to identity or financial theft. - -It is important to encrypt your iPhone so that no one can access your private data. Apple does have it in a very convenient way with its new iOS 12. We all know that the new iOS 11 also offers the iCloud backup feature which is amazing. But it doesn't encrypt your data and data stored on your iPhone with iCloud is highly sensitive and your privacy is at stake. - -There is a way to encrypt your iPhone data which is very easy to use. All you need is a text message or OTP. This is very useful if you lose your iPhone or want to prevent someone from accessing your private data. - -Apple's new iOS 12 offers the ability to use a Personal Identification Number (PIN) as a second step to log into your iCloud account. To make it easy for people to log into their iCloud accounts, Apple includes a new "Sign In with Apple" feature that uses either Face ID, Touch ID or a PIN. - -It is a good practice to use a PIN rather than a simple 4-digit number for security reasons. A PIN is more difficult to guess and you get used to memorizing it, rather than the standard 4-digit code. - -To enable this new feature on your iPhone, it first requires to enable the “Sign In with Apple” and then go to iCloud’s Security and Privacy settings. The system will ask to confirm whether to store your PIN and then you can finally use this feature on your iPhone. - -PIN protection is great, but there are some concerns. There's a known problem with the security of a 4-digit PIN. A hacker can still easily guess the PIN based on the pattern of digits in your PIN, if you've set a simple PIN such as 12345. - -You can 4fefd39f24
    -
    -
    -

    diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/MeluhagujaratipdfBETTER Freedownload.md b/spaces/inplisQlawa/anything-midjourney-v4-1/MeluhagujaratipdfBETTER Freedownload.md deleted file mode 100644 index 74043e9d582cb71ee60082da9449b16977c251fd..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/MeluhagujaratipdfBETTER Freedownload.md +++ /dev/null @@ -1,50 +0,0 @@ -

    meluhagujaratipdffreedownload


    Download Zip >> https://urlin.us/2uEy0e



    - -.blogspot.com/2010/01/beautiful-beach-beach-reception-invitations.html - -About Me - -I am a Graphic Designer based in Ahmedabad, India. I started this blog to keep track of the things that I design and design things that I think are cool. Things that inspire me are Amish craftwork, graphic design, simple and cute things.Q: - -Problems installing AdonisJs with Laravel - -I'm trying to install AdonisJs with Laravel. I have tried all the answers found in the web, to no avail. - -Here's my attempt: - -yarn add -D @adonisjs/laravel-framework - -yarn add @adonisjs/laravel-framework - -Both yield the same error: - -PS C:\WINDOWS\system32> yarn add -D @adonisjs/laravel-framework - -[1] - - $ yarn add @adonisjs/laravel-framework --lock-file=yarn.lock - - yarn add @adonisjs/laravel-framework --lock-file=yarn.lock - - [1/4] Resolving packages... - - [2/4] Fetching packages... - - [3/4] Linking dependencies... - - [4/4] Building fresh packages... - - [yarn] package warning: @adonisjs/laravel-framework@4.3.0 is deprecated and will be removed in v5, please use "@adonisjs/laravel-framework@4.0.x" instead. - - [yarn] package warning: @adonisjs/laravel-framework@4.1.0 is deprecated and will be removed in v5, please use "@adonisjs/laravel-framework@4.0.x" instead. - - [yarn] Visit for documentation about this command. - - [!] - - [!] yarn add @adonisjs/laravel-framework@4.0.x - - [yarn] Error: registry returned error: unknown 4fefd39f24
    -
    -
    -

    diff --git a/spaces/inreVtussa/clothingai/Examples/Assassins.Creed.Syndicate.2.DLC-FTS Corepack.md b/spaces/inreVtussa/clothingai/Examples/Assassins.Creed.Syndicate.2.DLC-FTS Corepack.md deleted file mode 100644 index a199b730da7616501750ba290452f658f632e165..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Assassins.Creed.Syndicate.2.DLC-FTS Corepack.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Assassins.Creed.Syndicate.2.DLC-FTS corepack


    Download File ••• https://tiurll.com/2uCiTs



    - -Bulletstorm Full Clip Edition Update 1Dirt 4Divinity - Original Sin 2 Definitive EditionF1 ... Assassins Creed Unity Dead Kings DLC [7,17 Gb] (2015) Assassins Creed ... I y II (GOLD REPACK) [3 Gb] Diadra Empty [2013] [178.37 Mb] DieselStormers ... Incl.DLC-FTS Evertown (258mb) (2016) (2016) F1 2013 Update 3 & 4 Fairy ... 1fdad05405
    -
    -
    -

    diff --git a/spaces/jackrui/diff-amp-antimicrobial_peptide_generation/difformer.py b/spaces/jackrui/diff-amp-antimicrobial_peptide_generation/difformer.py deleted file mode 100644 index 4c8484c1b5d0896eb166f9e92e233a580ab79f14..0000000000000000000000000000000000000000 --- a/spaces/jackrui/diff-amp-antimicrobial_peptide_generation/difformer.py +++ /dev/null @@ -1,215 +0,0 @@ -import math,os -import torch -import numpy as np -import torch.nn as nn -import torch.nn.functional as F -# from torch_sparse import SparseTensor, matmul -from torch_geometric.utils import degree - - -def full_attention_conv(qs, ks, vs, kernel, output_attn=False): - ''' - qs: query tensor [N, H, M] - ks: key tensor [L, H, M] - vs: value tensor [L, H, D] - - return output [N, H, D] - ''' - if kernel == 'simple': - # normalize input - qs = qs / torch.norm(qs, p=2) # [N, H, M] - ks = ks / torch.norm(ks, p=2) # [L, H, M] - N = qs.shape[0] - - # numerator - kvs = torch.einsum("lhm,lhd->hmd", ks, vs) - attention_num = torch.einsum("nhm,hmd->nhd", qs, kvs) # [N, H, D] - all_ones = torch.ones([vs.shape[0]]).to(vs.device) - vs_sum = torch.einsum("l,lhd->hd", all_ones, vs) # [H, D] - attention_num += vs_sum.unsqueeze(0).repeat(vs.shape[0], 1, 1) # [N, H, D] - - # denominator - all_ones = torch.ones([ks.shape[0]]).to(ks.device) - ks_sum = torch.einsum("lhm,l->hm", ks, all_ones) - attention_normalizer = torch.einsum("nhm,hm->nh", qs, ks_sum) # [N, H] - - # attentive aggregated results - attention_normalizer = torch.unsqueeze(attention_normalizer, len(attention_normalizer.shape)) # [N, H, 1] - attention_normalizer += torch.ones_like(attention_normalizer) * N - attn_output = attention_num / attention_normalizer # [N, H, D] - - # compute attention for visualization if needed - if output_attn: - attention = torch.einsum("nhm,lhm->nlh", qs, ks) / attention_normalizer # [N, L, H] - - elif kernel == 'sigmoid': - # numerator - attention_num = torch.sigmoid(torch.einsum("nhm,lhm->nlh", qs, ks)) # [N, L, H] - - # denominator - all_ones = torch.ones([ks.shape[0]]).to(ks.device) - attention_normalizer = torch.einsum("nlh,l->nh", attention_num, all_ones) - attention_normalizer = attention_normalizer.unsqueeze(1).repeat(1, ks.shape[0], 1) # [N, L, H] - - # compute attention and attentive aggregated results - attention = attention_num / attention_normalizer - attn_output = torch.einsum("nlh,lhd->nhd", attention, vs) # [N, H, D] - - if output_attn: - return attn_output, attention - else: - return attn_output - -# def gcn_conv(x, edge_index, edge_weight): -# N, H = x.shape[0], x.shape[1] -# row, col = edge_index -# d = degree(col, N).float() -# d_norm_in = (1. / d[col]).sqrt() -# d_norm_out = (1. 
/ d[row]).sqrt() -# gcn_conv_output = [] -# if edge_weight is None: -# value = torch.ones_like(row) * d_norm_in * d_norm_out -# else: -# value = edge_weight * d_norm_in * d_norm_out -# value = torch.nan_to_num(value, nan=0.0, posinf=0.0, neginf=0.0) -# adj = SparseTensor(row=col, col=row, value=value, sparse_sizes=(N, N)) -# for i in range(x.shape[1]): -# gcn_conv_output.append( matmul(adj, x[:, i]) ) # [N, D] -# gcn_conv_output = torch.stack(gcn_conv_output, dim=1) # [N, H, D] -# return gcn_conv_output - -class DIFFormerConv(nn.Module): - ''' - one DIFFormer layer - ''' - def __init__(self, in_channels, - out_channels, - num_heads, - kernel='simple', - use_graph=True, - use_weight=True): - super(DIFFormerConv, self).__init__() - self.Wk = nn.Linear(in_channels, out_channels * num_heads) - self.Wq = nn.Linear(in_channels, out_channels * num_heads) - if use_weight: - self.Wv = nn.Linear(in_channels, out_channels * num_heads) - - self.out_channels = out_channels - self.num_heads = num_heads - self.kernel = kernel - self.use_graph = use_graph - self.use_weight = use_weight - - def reset_parameters(self): - self.Wk.reset_parameters() - self.Wq.reset_parameters() - if self.use_weight: - self.Wv.reset_parameters() - - def forward(self, query_input, source_input, edge_index=None, edge_weight=None, output_attn=False): - # feature transformation - query = self.Wq(query_input).reshape(-1, self.num_heads, self.out_channels) - key = self.Wk(source_input).reshape(-1, self.num_heads, self.out_channels) - if self.use_weight: - value = self.Wv(source_input).reshape(-1, self.num_heads, self.out_channels) - else: - value = source_input.reshape(-1, 1, self.out_channels) - - # compute full attentive aggregation - if output_attn: - attention_output, attn = full_attention_conv(query, key, value, self.kernel, output_attn) # [N, H, D] - else: - attention_output = full_attention_conv(query,key,value,self.kernel) # [N, H, D] - - # use input graph for gcn conv - if self.use_graph: - final_output = attention_output + 1 - else: - final_output = attention_output - final_output = final_output.mean(dim=1) - - if output_attn: - return final_output, attn - else: - return final_output - -class DIFFormer(nn.Module): - ''' - DIFFormer model class - x: input node features [N, D] - edge_index: 2-dim indices of edges [2, E] - return y_hat predicted logits [N, C] - ''' - def __init__(self, in_channels, hidden_channels, out_channels, num_layers=2, num_heads=1, kernel='simple', - alpha=0.5, dropout=0.5, use_bn=True, use_residual=True, use_weight=True, use_graph=True): - super(DIFFormer, self).__init__() - - self.convs = nn.ModuleList() - self.fcs = nn.ModuleList() - self.fcs.append(nn.Linear(in_channels, hidden_channels)) - self.bns = nn.ModuleList() - self.bns.append(nn.LayerNorm(hidden_channels)) - for i in range(num_layers): - self.convs.append( - DIFFormerConv(hidden_channels, hidden_channels, num_heads=num_heads, kernel=kernel, use_graph=use_graph, use_weight=use_weight)) - self.bns.append(nn.LayerNorm(hidden_channels)) - - self.fcs.append(nn.Linear(hidden_channels, out_channels)) - - self.dropout = dropout - self.activation = F.relu - self.use_bn = use_bn - self.residual = use_residual - self.alpha = alpha - - def reset_parameters(self): - for conv in self.convs: - conv.reset_parameters() - for bn in self.bns: - bn.reset_parameters() - for fc in self.fcs: - fc.reset_parameters() - - def forward(self, x, edge_index, edge_weight=None): - layer_ = [] - - # input MLP layer - x = self.fcs[0](x) - if self.use_bn: - x = 
self.bns[0](x) - x = self.activation(x) - x = F.dropout(x, p=self.dropout, training=self.training) - - # store as residual link - layer_.append(x) - - for i, conv in enumerate(self.convs): - # graph convolution with DIFFormer layer - x = conv(x, x, edge_index, edge_weight) - if self.residual: - x = self.alpha * x + (1-self.alpha) * layer_[i] - if self.use_bn: - x = self.bns[i+1](x) - x = F.dropout(x, p=self.dropout, training=self.training) - layer_.append(x) - - # output MLP layer - x_out = self.fcs[-1](x) - return x_out - - def get_attentions(self, x): - layer_, attentions = [], [] - x = self.fcs[0](x) - if self.use_bn: - x = self.bns[0](x) - x = self.activation(x) - layer_.append(x) - for i, conv in enumerate(self.convs): - x, attn = conv(x, x, output_attn=True) - attentions.append(attn) - if self.residual: - x = self.alpha * x + (1 - self.alpha) * layer_[i] - if self.use_bn: - x = self.bns[i + 1](x) - layer_.append(x) - return torch.stack(attentions, dim=0) # [layer num, N, N] diff --git a/spaces/jbilcke-hf/VideoChain-UI/src/components/ui/input.tsx b/spaces/jbilcke-hf/VideoChain-UI/src/components/ui/input.tsx deleted file mode 100644 index 09fc0791ad25f88857f12280fed9882193a092e1..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/VideoChain-UI/src/components/ui/input.tsx +++ /dev/null @@ -1,25 +0,0 @@ -import * as React from "react" - -import { cn } from "@/lib/utils" - -export interface InputProps - extends React.InputHTMLAttributes {} - -const Input = React.forwardRef( - ({ className, type, ...props }, ref) => { - return ( - - ) - } -) -Input.displayName = "Input" - -export { Input } diff --git a/spaces/jennysun/jwsun-multisubject-render-model/dataset/cd_dataset.py b/spaces/jennysun/jwsun-multisubject-render-model/dataset/cd_dataset.py deleted file mode 100644 index 0627329bda44a15c6821fc477bbde45acfe86a2f..0000000000000000000000000000000000000000 --- a/spaces/jennysun/jwsun-multisubject-render-model/dataset/cd_dataset.py +++ /dev/null @@ -1,250 +0,0 @@ -import json, os, random, math -from collections import defaultdict -from copy import deepcopy - -import torch -from torch.utils.data import Dataset -import torchvision.transforms as transforms - -import numpy as np -from PIL import Image -from .base_dataset import BaseDataset, check_filenames_in_zipdata, recalculate_box_and_verify_if_valid -from io import BytesIO - - - -def not_in_at_all(list1, list2): - for a in list1: - if a in list2: - return False - return True - - -def clean_annotations(annotations): - for anno in annotations: - anno.pop("segmentation", None) - anno.pop("area", None) - anno.pop("iscrowd", None) - # anno.pop("id", None) - - -def make_a_sentence(obj_names, clean=False): - - if clean: - obj_names = [ name[:-6] if ("-other" in name) else name for name in obj_names] - - caption = "" - tokens_positive = [] - for obj_name in obj_names: - start_len = len(caption) - caption += obj_name - end_len = len(caption) - caption += ", " - tokens_positive.append( - [[start_len, end_len]] # in real caption, positive tokens can be disjoint, thus using list of list - ) - caption = caption[:-2] # remove last ", " - - return caption #, tokens_positive - - -def check_all_have_same_images(instances_data, stuff_data, caption_data): - if stuff_data is not None: - assert instances_data["images"] == stuff_data["images"] - if caption_data is not None: - assert instances_data["images"] == caption_data["images"] - - -class CDDataset(BaseDataset): - "CD: Caption Detection" - def __init__(self, - image_root, - 
category_embedding_path, - instances_json_path = None, - stuff_json_path = None, - caption_json_path = None, - prob_real_caption = 0, - fake_caption_type = 'empty', - image_size=256, - max_images=None, - min_box_size=0.01, - max_boxes_per_image=8, - include_other=False, - random_crop = False, - random_flip = True, - ): - super().__init__(random_crop, random_flip, image_size) - - self.image_root = image_root - self.category_embedding_path = category_embedding_path - self.instances_json_path = instances_json_path - self.stuff_json_path = stuff_json_path - self.caption_json_path = caption_json_path - self.prob_real_caption = prob_real_caption - self.fake_caption_type = fake_caption_type - self.max_images = max_images - self.min_box_size = min_box_size - self.max_boxes_per_image = max_boxes_per_image - self.include_other = include_other - - - assert fake_caption_type in ["empty", "made"] - if prob_real_caption > 0: - assert caption_json_path is not None, "caption json must be given" - - - # Load all jsons - with open(instances_json_path, 'r') as f: - instances_data = json.load(f) # keys: 'info', 'images', 'licenses', 'categories', 'annotations' - clean_annotations(instances_data["annotations"]) - self.instances_data = instances_data - - self.stuff_data = None - if stuff_json_path is not None: - with open(stuff_json_path, 'r') as f: - stuff_data = json.load(f) # keys: 'info', 'images', 'licenses', 'categories', 'annotations' - clean_annotations(stuff_data["annotations"]) - self.stuff_data = stuff_data - - self.captions_data = None - if caption_json_path is not None: - with open(caption_json_path, 'r') as f: - captions_data = json.load(f) # keys: 'info', 'images', 'licenses', 'categories', 'annotations' - clean_annotations(captions_data["annotations"]) - self.captions_data = captions_data - - - # Load preprocessed name embedding - self.category_embeddings = torch.load(category_embedding_path) - self.embedding_len = list( self.category_embeddings.values() )[0].shape[0] - - - # Misc - self.image_ids = [] # main list for selecting images - self.image_id_to_filename = {} # file names used to read image - check_all_have_same_images(self.instances_data, self.stuff_data, self.captions_data) - for image_data in self.instances_data['images']: - image_id = image_data['id'] - filename = image_data['file_name'] - self.image_ids.append(image_id) - self.image_id_to_filename[image_id] = filename - - - # All category names (including things and stuff) - self.object_idx_to_name = {} - for category_data in self.instances_data['categories']: - self.object_idx_to_name[category_data['id']] = category_data['name'] - if self.stuff_data is not None: - for category_data in self.stuff_data['categories']: - self.object_idx_to_name[category_data['id']] = category_data['name'] - - - # Add object data from instances and stuff - self.image_id_to_objects = defaultdict(list) - self.select_objects( self.instances_data['annotations'] ) - if self.stuff_data is not None: - self.select_objects( self.stuff_data['annotations'] ) - - # Add caption data - if self.captions_data is not None: - self.image_id_to_captions = defaultdict(list) - self.select_captions( self.captions_data['annotations'] ) - - # Check if all filenames can be found in the zip file - # all_filenames = [self.image_id_to_filename[idx] for idx in self.image_ids] - # check_filenames_in_zipdata(all_filenames, image_root) - - - def select_objects(self, annotations): - for object_anno in annotations: - image_id = object_anno['image_id'] - object_name = 
self.object_idx_to_name[object_anno['category_id']] - other_ok = object_name != 'other' or self.include_other - if other_ok: - self.image_id_to_objects[image_id].append(object_anno) - - - def select_captions(self, annotations): - for caption_data in annotations: - image_id = caption_data['image_id'] - self.image_id_to_captions[image_id].append(caption_data) - - - def total_images(self): - return len(self) - - - def __getitem__(self, index): - if self.max_boxes_per_image > 99: - assert False, "Are you sure setting such large number of boxes?" - - out = {} - - image_id = self.image_ids[index] - out['id'] = image_id - - # Image - filename = self.image_id_to_filename[image_id] - image = self.fetch_image(filename) - #WW, HH = image.size - image_tensor, trans_info = self.transform_image(image) - out["image"] = image_tensor - - - # Select valid boxes after cropping (center or random) - this_image_obj_annos = deepcopy(self.image_id_to_objects[image_id]) - areas = [] - all_obj_names = [] - all_boxes = [] - all_masks = [] - all_positive_embeddings = [] - for object_anno in this_image_obj_annos: - - x, y, w, h = object_anno['bbox'] - valid, (x0, y0, x1, y1) = recalculate_box_and_verify_if_valid(x, y, w, h, trans_info, self.image_size, self.min_box_size) - - if valid: - areas.append( (x1-x0)*(y1-y0) ) - obj_name = self.object_idx_to_name[ object_anno['category_id'] ] - all_obj_names.append(obj_name) - all_boxes.append( torch.tensor([x0,y0,x1,y1]) / self.image_size ) # scale to 0-1 - all_masks.append(1) - all_positive_embeddings.append( self.category_embeddings[obj_name] ) - - wanted_idxs = torch.tensor(areas).sort(descending=True)[1] - wanted_idxs = wanted_idxs[0:self.max_boxes_per_image] - obj_names = [] # used for making a sentence - boxes = torch.zeros(self.max_boxes_per_image, 4) - masks = torch.zeros(self.max_boxes_per_image) - positive_embeddings = torch.zeros(self.max_boxes_per_image, self.embedding_len) - for i, idx in enumerate(wanted_idxs): - obj_names.append( all_obj_names[idx] ) - boxes[i] = all_boxes[idx] - masks[i] = all_masks[idx] - positive_embeddings[i] = all_positive_embeddings[idx] - - # Caption - if random.uniform(0, 1) < self.prob_real_caption: - caption_data = self.image_id_to_captions[image_id] - idx = random.randint(0, len(caption_data)-1 ) - caption = caption_data[idx]["caption"] - else: - if self.fake_caption_type == "empty": - caption = "" - else: - caption = make_a_sentence(obj_names, clean=True) - - - out["caption"] = caption - out["boxes"] = boxes - out["masks"] = masks - out["positive_embeddings"] = positive_embeddings - - return out - - - def __len__(self): - if self.max_images is None: - return len(self.image_ids) - return min(len(self.image_ids), self.max_images) - diff --git a/spaces/jhtonyKoo/music_mixing_style_transfer/mixing_style_transfer/mixing_manipulator/fx_utils.py b/spaces/jhtonyKoo/music_mixing_style_transfer/mixing_style_transfer/mixing_manipulator/fx_utils.py deleted file mode 100644 index 1dd3137c8cb5bc3ed0a86a65a1b79fb2ab8cf73e..0000000000000000000000000000000000000000 --- a/spaces/jhtonyKoo/music_mixing_style_transfer/mixing_style_transfer/mixing_manipulator/fx_utils.py +++ /dev/null @@ -1,313 +0,0 @@ -import warnings -warnings.filterwarnings("ignore", category=DeprecationWarning) - -import numpy as np -import scipy -import math -import librosa -import librosa.display -import fnmatch -import os -from functools import partial -import pyloudnorm -from scipy.signal import lfilter -from sklearn.metrics import mean_absolute_error, mean_squared_error -from 
sklearn.metrics.pairwise import paired_distances - - -import matplotlib.pyplot as plt - -def db(x): - """Computes the decible energy of a signal""" - return 20*np.log10(np.sqrt(np.mean(np.square(x)))) - -def melspectrogram(y, mirror_pad=False): - """Compute melspectrogram feature extraction - - Keyword arguments: - signal -- input audio as a signal in a numpy object - inputnorm -- normalization of output - mirror_pad -- pre and post-pend mirror signals - - Returns freq x time - - - Assumes the input sampling rate is 22050Hz - """ - - # Extract mel. - fftsize = 1024 - window = 1024 - hop = 512 - melBin = 128 - sr = 22050 - - # mirror pad signal - # first embedding centered on time 0 - # last embedding centered on end of signal - if mirror_pad: - y = np.insert(y, 0, y[0:int(half_frame_length_sec * sr)][::-1]) - y = np.insert(y, len(y), y[-int(half_frame_length_sec * sr):][::-1]) - - S = librosa.core.stft(y,n_fft=fftsize,hop_length=hop,win_length=window) - X = np.abs(S) - mel_basis = librosa.filters.mel(sr,n_fft=fftsize,n_mels=melBin) - mel_S = np.dot(mel_basis,X) - - # value log compression - mel_S = np.log10(1+10*mel_S) - mel_S = mel_S.astype(np.float32) - - - return mel_S - - -def getFilesPath(directory, extension): - - n_path=[] - for path, subdirs, files in os.walk(directory): - for name in files: - if fnmatch.fnmatch(name, extension): - n_path.append(os.path.join(path,name)) - n_path.sort() - - return n_path - - - -def getRandomTrim(x, length, pad=0, start=None): - - length = length+pad - if x.shape[0] <= length: - x_ = x - while(x.shape[0] <= length): - x_ = np.concatenate((x_,x_)) - else: - if start is None: - start = np.random.randint(0, x.shape[0]-length, size=None) - end = length+start - if end > x.shape[0]: - x_ = x[start:] - x_ = np.concatenate((x_, x[:length-x.shape[0]])) - else: - x_ = x[start:length+start] - - return x_[:length] - -def fadeIn(x, length=128): - - w = scipy.signal.hann(length*2, sym=True) - w1 = w[0:length] - ones = np.ones(int(x.shape[0]-length)) - w = np.append(w1, ones) - - return x*w - -def fadeOut(x, length=128): - - w = scipy.signal.hann(length*2, sym=True) - w2 = w[length:length*2] - ones = np.ones(int(x.shape[0]-length)) - w = np.append(ones, w2) - - return x*w - - -def plotTimeFreq(audio, sr, n_fft=512, hop_length=128, ylabels=None): - - n = len(audio) -# plt.figure(figsize=(14, 4*n)) - colors = list(plt.cm.viridis(np.linspace(0,1,n))) - - X = [] - X_db = [] - maxs = np.zeros((n,)) - mins = np.zeros((n,)) - maxs_t = np.zeros((n,)) - for i, x in enumerate(audio): - - if x.ndim == 2 and x.shape[-1] == 2: - x = librosa.core.to_mono(x.T) - X_ = librosa.stft(x, n_fft=n_fft, hop_length=hop_length) - X_db_ = librosa.amplitude_to_db(abs(X_)) - X.append(X_) - X_db.append(X_db_) - maxs[i] = np.max(X_db_) - mins[i] = np.min(X_db_) - maxs_t[i] = np.max(np.abs(x)) - vmax = np.max(maxs) - vmin = np.min(mins) - tmax = np.max(maxs_t) - for i, x in enumerate(audio): - - if x.ndim == 2 and x.shape[-1] == 2: - x = librosa.core.to_mono(x.T) - - plt.subplot(n, 2, 2*i+1) - librosa.display.waveplot(x, sr=sr, color=colors[i]) - if ylabels: - plt.ylabel(ylabels[i]) - - plt.ylim(-tmax,tmax) - plt.subplot(n, 2, 2*i+2) - librosa.display.specshow(X_db[i], sr=sr, x_axis='time', y_axis='log', - hop_length=hop_length, cmap='GnBu', vmax=vmax, vmin=vmin) -# plt.colorbar(format='%+2.0f dB') - - - - - - - - -def slicing(x, win_length, hop_length, center = True, windowing = False, pad = 0): - # Pad the time series so that frames are centered - if center: -# x = np.pad(x, 
int((win_length-hop_length+pad) // 2), mode='constant') - x = np.pad(x, ((int((win_length-hop_length+pad)//2), int((win_length+hop_length+pad)//2)),), mode='constant') - - # Window the time series. - y_frames = librosa.util.frame(x, frame_length=win_length, hop_length=hop_length) - if windowing: - window = scipy.signal.hann(win_length, sym=False) - else: - window = 1.0 - f = [] - for i in range(len(y_frames.T)): - f.append(y_frames.T[i]*window) - return np.float32(np.asarray(f)) - - -def overlap(x, x_len, win_length, hop_length, windowing = True, rate = 1): - x = x.reshape(x.shape[0],x.shape[1]).T - if windowing: - window = scipy.signal.hann(win_length, sym=False) - rate = rate*hop_length/win_length - else: - window = 1 - rate = 1 - n_frames = x_len / hop_length - expected_signal_len = int(win_length + hop_length * (n_frames)) - y = np.zeros(expected_signal_len) - for i in range(int(n_frames)): - sample = i * hop_length - w = x[:, i] - y[sample:(sample + win_length)] = y[sample:(sample + win_length)] + w*window - y = y[int(win_length // 2):-int(win_length // 2)] - return np.float32(y*rate) - - - - - - - -def highpassFiltering(x_list, f0, sr): - - b1, a1 = scipy.signal.butter(4, f0/(sr/2),'highpass') - x_f = [] - for x in x_list: - x_f_ = scipy.signal.filtfilt(b1, a1, x).copy(order='F') - x_f.append(x_f_) - return x_f - -def lineartodB(x): - return 20*np.log10(x) -def dBtoLinear(x): - return np.power(10,x/20) - -def lufs_normalize(x, sr, lufs, log=True): - - # measure the loudness first - meter = pyloudnorm.Meter(sr) # create BS.1770 meter - loudness = meter.integrated_loudness(x+1e-10) - if log: - print("original loudness: ", loudness," max value: ", np.max(np.abs(x))) - - loudness_normalized_audio = pyloudnorm.normalize.loudness(x, loudness, lufs) - - maxabs_amp = np.maximum(1.0, 1e-6 + np.max(np.abs(loudness_normalized_audio))) - loudness_normalized_audio /= maxabs_amp - - loudness = meter.integrated_loudness(loudness_normalized_audio) - if log: - print("new loudness: ", loudness," max value: ", np.max(np.abs(loudness_normalized_audio))) - - - return loudness_normalized_audio - -import soxbindings as sox - -def lufs_normalize_compand(x, sr, lufs): - - tfm = sox.Transformer() - tfm.compand(attack_time = 0.001, - decay_time = 0.01, - soft_knee_db = 1.0, - tf_points = [(-70, -70), (-0.1, -20), (0, 0)]) - - x = tfm.build_array(input_array=x, sample_rate_in=sr).astype(np.float32) - - # measure the loudness first - meter = pyloudnorm.Meter(sr) # create BS.1770 meter - loudness = meter.integrated_loudness(x) - print("original loudness: ", loudness," max value: ", np.max(np.abs(x))) - - loudness_normalized_audio = pyloudnorm.normalize.loudness(x, loudness, lufs) - - maxabs_amp = np.maximum(1.0, 1e-6 + np.max(np.abs(loudness_normalized_audio))) - loudness_normalized_audio /= maxabs_amp - - loudness = meter.integrated_loudness(loudness_normalized_audio) - print("new loudness: ", loudness," max value: ", np.max(np.abs(loudness_normalized_audio))) - - - - - - - return loudness_normalized_audio - - - - - -def getDistances(x,y): - - distances = {} - distances['mae'] = mean_absolute_error(x, y) - distances['mse'] = mean_squared_error(x, y) - distances['euclidean'] = np.mean(paired_distances(x, y, metric='euclidean')) - distances['manhattan'] = np.mean(paired_distances(x, y, metric='manhattan')) - distances['cosine'] = np.mean(paired_distances(x, y, metric='cosine')) - - distances['mae'] = round(distances['mae'], 5) - distances['mse'] = round(distances['mse'], 5) - distances['euclidean'] = 
round(distances['euclidean'], 5) - distances['manhattan'] = round(distances['manhattan'], 5) - distances['cosine'] = round(distances['cosine'], 5) - - return distances - -def getMFCC(x, sr, mels=128, mfcc=13, mean_norm=False): - - melspec = librosa.feature.melspectrogram(y=x, sr=sr, S=None, - n_fft=1024, hop_length=256, - n_mels=mels, power=2.0) - melspec_dB = librosa.power_to_db(melspec, ref=np.max) - mfcc = librosa.feature.mfcc(S=melspec_dB, sr=sr, n_mfcc=mfcc) - if mean_norm: - mfcc -= (np.mean(mfcc, axis=0)) - return mfcc - - -def getMSE_MFCC(y_true, y_pred, sr, mels=128, mfcc=13, mean_norm=False): - - ratio = np.mean(np.abs(y_true))/np.mean(np.abs(y_pred)) - y_pred = ratio*y_pred - - y_mfcc = getMFCC(y_true, sr, mels=mels, mfcc=mfcc, mean_norm=mean_norm) - z_mfcc = getMFCC(y_pred, sr, mels=mels, mfcc=mfcc, mean_norm=mean_norm) - - return getDistances(y_mfcc[:,:], z_mfcc[:,:]) \ No newline at end of file diff --git a/spaces/jie1/succ1/DLKcat/DeeplearningApproach/Code/analysis/Fig2a_DLRMSE.py b/spaces/jie1/succ1/DLKcat/DeeplearningApproach/Code/analysis/Fig2a_DLRMSE.py deleted file mode 100644 index 7443af6f6b08ead727bbcc7ac97dee9493d432a3..0000000000000000000000000000000000000000 --- a/spaces/jie1/succ1/DLKcat/DeeplearningApproach/Code/analysis/Fig2a_DLRMSE.py +++ /dev/null @@ -1,90 +0,0 @@ -#!/usr/bin/python -# coding: utf-8 - -# Author: LE YUAN -# https://blog.csdn.net/roguesir/article/details/77839721 - -import matplotlib.pyplot as plt -from matplotlib import rc - -with open('../../Results/output/MAEs--all--radius2--ngram3--dim20--layer_gnn3--window11--layer_cnn3--layer_output3--lr1e-3--lr_decay0.5--decay_interval10--weight_decay1e-6--iteration50.txt', 'r') as infile : - lines = infile.readlines()[1:] - -# print(len(lines)) - -epoch_dev = list() -RMSE_dev = list() - -for line in lines[:18] : - data = line.strip().split('\t') - epoch_line = int(data[0]) - RMSE_line = float(data[-4]) - if epoch_line%2 == 0 or epoch_line in [1,99] : - epoch_dev.append(epoch_line) - RMSE_dev.append(RMSE_line) - -epoch_test = list() -RMSE_test = list() - -for line in lines[:18] : - data = line.strip().split('\t') - epoch_line = int(data[0]) - RMSE_line = float(data[-3]) - if epoch_line%2 == 0 or epoch_line in [1,99] : - epoch_test.append(epoch_line) - RMSE_test.append(RMSE_line) - -epoch_train = list() -RMSE_train = list() - -for line in lines[:18] : - data = line.strip().split('\t') - epoch_line = int(data[0]) - RMSE_line = float(data[2]) - if epoch_line%2 == 0 or epoch_line in [1,99] : - epoch_train.append(epoch_line) - RMSE_train.append(RMSE_line) - -# fig=plt.figure(figsize=(1.5,1.5)) -# # fig.add_axes([0.2,0.2,0.6,0.6]) -# # fig.add_axes([6.8/39.6,6.8/39.6,31.7/39.6,31.7/39.6]) -# fig.add_axes([0.12,0.12,0.83,0.83]) - -plt.figure(figsize=(1.5,1.5)) - -# To solve the 'Helvetica' font cannot be used in PDF file -# https://stackoverflow.com/questions/59845568/the-pdf-backend-does-not-currently-support-the-selected-font -rc('font',**{'family':'serif','serif':['Helvetica']}) -plt.rcParams['pdf.fonttype'] = 42 - -plt.axes([0.12,0.12,0.83,0.83]) - -# plt.rcParams['xtick.direction'] = 'in' -# plt.rcParams['ytick.direction'] = 'in' - -plt.tick_params(direction='in') -plt.tick_params(which='major',length=1.5) -plt.tick_params(which='major',width=0.4) - -plt.plot(epoch_train,RMSE_train,color='#159090',linestyle='dashed',linewidth=0.75,marker='s',markerfacecolor='#159090', markersize=3,label='Training') 
-plt.plot(epoch_dev,RMSE_dev,color='#b2182b',linestyle='dashed',linewidth=0.75,marker='o',markerfacecolor='#b2182b', markersize=3,label='Validation') -plt.plot(epoch_test,RMSE_test,color='#2166ac',linestyle='dashed',linewidth=0.75,marker='^',markerfacecolor='#2166ac', markersize=3,label='Test') - -plt.rcParams['font.family'] = 'Helvetica' -# plt.rc('font', family='Helvetica') -plt.xticks([0,3,6,9,12,15,18]) -plt.yticks([0.5,0.7,0.9,1.1,1.3,1.5]) - -plt.xlabel('Epoch', fontsize=7) -plt.ylabel('RMSE', fontsize=7) -plt.xticks(fontsize=6) -plt.yticks(fontsize=6) -plt.legend(frameon=False, prop={"size":6}) - -ax = plt.gca() -ax.spines['bottom'].set_linewidth(0.5) -ax.spines['left'].set_linewidth(0.5) -ax.spines['top'].set_linewidth(0.5) -ax.spines['right'].set_linewidth(0.5) - -plt.savefig("../../Results/figures/Fig2a.pdf", dpi=400, bbox_inches='tight') diff --git a/spaces/jlondonobo/whisper-pt-demo/app.py b/spaces/jlondonobo/whisper-pt-demo/app.py deleted file mode 100644 index 97ce903eda51a1946a786934bf772f33b89d23d3..0000000000000000000000000000000000000000 --- a/spaces/jlondonobo/whisper-pt-demo/app.py +++ /dev/null @@ -1,95 +0,0 @@ -import gradio as gr -import pytube as pt -import torch -import whisper -from hf_to_whisper import write_whisper_model_to_memory -import os - -MODEL_NAME = "jlondonobo/whisper-medium-pt" #this always needs to stay in line 8 :D sorry for the hackiness -lang = "pt" - -device = 0 if torch.cuda.is_available() else "cpu" - -local_model_path = "whisper-pt.pt" -if not os.path.exists(local_model_path): - write_whisper_model_to_memory(MODEL_NAME, local_model_path) - -model = whisper.load_model(local_model_path) - -def transcribe(microphone, file_upload): - warn_output = "" - if (microphone is not None) and (file_upload is not None): - warn_output = ( - "WARNING: You've uploaded an audio file and used the microphone. " - "The recorded file from the microphone will be used and the uploaded audio will be discarded.\n" - ) - - elif (microphone is None) and (file_upload is None): - return "ERROR: You have to either use the microphone or upload an audio file" - - file = microphone if microphone is not None else file_upload - - text = model.transcribe(file, language=lang)["text"] - - return warn_output + text - - -def _return_yt_html_embed(yt_url): - video_id = yt_url.split("?v=")[-1] - HTML_str = ( - f'
<center> <iframe width="500" height="320" src="https://www.youtube.com/embed/{video_id}"> </iframe>' - " </center>
    " - ) - return HTML_str - - -def yt_transcribe(yt_url): - yt = pt.YouTube(yt_url) - html_embed_str = _return_yt_html_embed(yt_url) - stream = yt.streams.filter(only_audio=True)[0] - stream.download(filename="audio.mp3") - - text = model.transcribe("audio.mp3", language=lang)["text"] - - return html_embed_str, text - - -demo = gr.Blocks() - -mf_transcribe = gr.Interface( - fn=transcribe, - inputs=[ - gr.inputs.Audio(source="microphone", type="filepath", optional=True), - gr.inputs.Audio(source="upload", type="filepath", optional=True), - ], - outputs="text", - layout="horizontal", - theme="huggingface", - title="Whisper Portuguese Demo 🇧🇷🇵🇹
    Transcribe Audio", - description=( - "Transcribe long-form microphone or audio inputs with the click of a button! Demo uses the the fine-tuned" - f" checkpoint [{MODEL_NAME}](https://huggingface.co/{MODEL_NAME}) and 🤗 Transformers to transcribe audio files" - " of arbitrary length." - ), - allow_flagging="never", -) - -yt_transcribe = gr.Interface( - fn=yt_transcribe, - inputs=[gr.inputs.Textbox(lines=1, placeholder="Paste the URL to a YouTube video here", label="YouTube URL")], - outputs=["html", "text"], - layout="horizontal", - theme="huggingface", - title="Whisper Portuguese Demo 🇧🇷🇵🇹
    Transcribe YouTube", - description=( - "Transcribe long-form YouTube videos with the click of a button! Demo uses the the fine-tuned checkpoint:" - f" [{MODEL_NAME}](https://huggingface.co/{MODEL_NAME}) and 🤗 Transformers to transcribe audio files of" - " arbitrary length." - ), - allow_flagging="never", -) - -with demo: - gr.TabbedInterface([mf_transcribe, yt_transcribe], ["Transcribe Audio", "Transcribe YouTube"]) - -demo.launch(enable_queue=True) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/feaLib/lexer.c b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/feaLib/lexer.c deleted file mode 100644 index 279046f6cfa8ae917fe6f06e42b2a4ad520f3244..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/feaLib/lexer.c +++ /dev/null @@ -1,17856 +0,0 @@ -/* Generated by Cython 3.0.2 */ - -/* BEGIN: Cython Metadata -{ - "distutils": { - "name": "fontTools.feaLib.lexer", - "sources": [ - "Lib/fontTools/feaLib/lexer.py" - ] - }, - "module_name": "fontTools.feaLib.lexer" -} -END: Cython Metadata */ - -#ifndef PY_SSIZE_T_CLEAN -#define PY_SSIZE_T_CLEAN -#endif /* PY_SSIZE_T_CLEAN */ -#if defined(CYTHON_LIMITED_API) && 0 - #ifndef Py_LIMITED_API - #if CYTHON_LIMITED_API+0 > 0x03030000 - #define Py_LIMITED_API CYTHON_LIMITED_API - #else - #define Py_LIMITED_API 0x03030000 - #endif - #endif -#endif - -#include "Python.h" -#ifndef Py_PYTHON_H - #error Python headers needed to compile C extensions, please install development version of Python. -#elif PY_VERSION_HEX < 0x02070000 || (0x03000000 <= PY_VERSION_HEX && PY_VERSION_HEX < 0x03030000) - #error Cython requires Python 2.7+ or Python 3.3+. -#else -#if CYTHON_LIMITED_API -#define __PYX_EXTRA_ABI_MODULE_NAME "limited" -#else -#define __PYX_EXTRA_ABI_MODULE_NAME "" -#endif -#define CYTHON_ABI "3_0_2" __PYX_EXTRA_ABI_MODULE_NAME -#define __PYX_ABI_MODULE_NAME "_cython_" CYTHON_ABI -#define __PYX_TYPE_MODULE_PREFIX __PYX_ABI_MODULE_NAME "." -#define CYTHON_HEX_VERSION 0x030002F0 -#define CYTHON_FUTURE_DIVISION 1 -#include -#ifndef offsetof - #define offsetof(type, member) ( (size_t) & ((type*)0) -> member ) -#endif -#if !defined(_WIN32) && !defined(WIN32) && !defined(MS_WINDOWS) - #ifndef __stdcall - #define __stdcall - #endif - #ifndef __cdecl - #define __cdecl - #endif - #ifndef __fastcall - #define __fastcall - #endif -#endif -#ifndef DL_IMPORT - #define DL_IMPORT(t) t -#endif -#ifndef DL_EXPORT - #define DL_EXPORT(t) t -#endif -#define __PYX_COMMA , -#ifndef HAVE_LONG_LONG - #define HAVE_LONG_LONG -#endif -#ifndef PY_LONG_LONG - #define PY_LONG_LONG LONG_LONG -#endif -#ifndef Py_HUGE_VAL - #define Py_HUGE_VAL HUGE_VAL -#endif -#define __PYX_LIMITED_VERSION_HEX PY_VERSION_HEX -#if defined(GRAALVM_PYTHON) - /* For very preliminary testing purposes. Most variables are set the same as PyPy. 
- The existence of this section does not imply that anything works or is even tested */ - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #define CYTHON_COMPILING_IN_LIMITED_API 0 - #define CYTHON_COMPILING_IN_GRAAL 1 - #define CYTHON_COMPILING_IN_NOGIL 0 - #undef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 0 - #undef CYTHON_USE_TYPE_SPECS - #define CYTHON_USE_TYPE_SPECS 0 - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #if PY_VERSION_HEX < 0x03050000 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #elif !defined(CYTHON_USE_ASYNC_SLOTS) - #define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #undef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 0 - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #undef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 1 - #undef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 0 - #undef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 0 - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_GIL - #define CYTHON_FAST_GIL 0 - #undef CYTHON_METH_FASTCALL - #define CYTHON_METH_FASTCALL 0 - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #ifndef CYTHON_PEP487_INIT_SUBCLASS - #define CYTHON_PEP487_INIT_SUBCLASS (PY_MAJOR_VERSION >= 3) - #endif - #undef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 1 - #undef CYTHON_USE_MODULE_STATE - #define CYTHON_USE_MODULE_STATE 0 - #undef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 0 - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 - #ifndef CYTHON_UPDATE_DESCRIPTOR_DOC - #define CYTHON_UPDATE_DESCRIPTOR_DOC 0 - #endif -#elif defined(PYPY_VERSION) - #define CYTHON_COMPILING_IN_PYPY 1 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #define CYTHON_COMPILING_IN_LIMITED_API 0 - #define CYTHON_COMPILING_IN_GRAAL 0 - #define CYTHON_COMPILING_IN_NOGIL 0 - #undef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 0 - #ifndef CYTHON_USE_TYPE_SPECS - #define CYTHON_USE_TYPE_SPECS 0 - #endif - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #if PY_VERSION_HEX < 0x03050000 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #elif !defined(CYTHON_USE_ASYNC_SLOTS) - #define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #undef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 0 - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #undef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 1 - #undef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 0 - #undef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 0 - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_GIL - #define CYTHON_FAST_GIL 0 - #undef CYTHON_METH_FASTCALL - #define CYTHON_METH_FASTCALL 0 - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #ifndef CYTHON_PEP487_INIT_SUBCLASS - #define CYTHON_PEP487_INIT_SUBCLASS (PY_MAJOR_VERSION >= 3) - #endif - #if PY_VERSION_HEX < 0x03090000 - #undef 
CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 0 - #elif !defined(CYTHON_PEP489_MULTI_PHASE_INIT) - #define CYTHON_PEP489_MULTI_PHASE_INIT 1 - #endif - #undef CYTHON_USE_MODULE_STATE - #define CYTHON_USE_MODULE_STATE 0 - #undef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE (PY_VERSION_HEX >= 0x030400a1 && PYPY_VERSION_NUM >= 0x07030C00) - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 - #ifndef CYTHON_UPDATE_DESCRIPTOR_DOC - #define CYTHON_UPDATE_DESCRIPTOR_DOC 0 - #endif -#elif defined(CYTHON_LIMITED_API) - #ifdef Py_LIMITED_API - #undef __PYX_LIMITED_VERSION_HEX - #define __PYX_LIMITED_VERSION_HEX Py_LIMITED_API - #endif - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #define CYTHON_COMPILING_IN_LIMITED_API 1 - #define CYTHON_COMPILING_IN_GRAAL 0 - #define CYTHON_COMPILING_IN_NOGIL 0 - #undef CYTHON_CLINE_IN_TRACEBACK - #define CYTHON_CLINE_IN_TRACEBACK 0 - #undef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 0 - #undef CYTHON_USE_TYPE_SPECS - #define CYTHON_USE_TYPE_SPECS 1 - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #undef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 0 - #ifndef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #endif - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #ifndef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 0 - #endif - #undef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 0 - #undef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 0 - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_GIL - #define CYTHON_FAST_GIL 0 - #undef CYTHON_METH_FASTCALL - #define CYTHON_METH_FASTCALL 0 - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #ifndef CYTHON_PEP487_INIT_SUBCLASS - #define CYTHON_PEP487_INIT_SUBCLASS 1 - #endif - #undef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 0 - #undef CYTHON_USE_MODULE_STATE - #define CYTHON_USE_MODULE_STATE 1 - #ifndef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 0 - #endif - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 - #ifndef CYTHON_UPDATE_DESCRIPTOR_DOC - #define CYTHON_UPDATE_DESCRIPTOR_DOC 0 - #endif -#elif defined(PY_NOGIL) - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #define CYTHON_COMPILING_IN_LIMITED_API 0 - #define CYTHON_COMPILING_IN_GRAAL 0 - #define CYTHON_COMPILING_IN_NOGIL 1 - #ifndef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 1 - #endif - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #ifndef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #ifndef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 1 - #endif - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #ifndef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 0 - #endif - #ifndef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 1 - 
#endif - #ifndef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 1 - #endif - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #ifndef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 1 - #endif - #ifndef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 1 - #endif - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 -#else - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_CPYTHON 1 - #define CYTHON_COMPILING_IN_LIMITED_API 0 - #define CYTHON_COMPILING_IN_GRAAL 0 - #define CYTHON_COMPILING_IN_NOGIL 0 - #ifndef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 1 - #endif - #ifndef CYTHON_USE_TYPE_SPECS - #define CYTHON_USE_TYPE_SPECS 0 - #endif - #ifndef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 1 - #endif - #if PY_MAJOR_VERSION < 3 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #elif !defined(CYTHON_USE_ASYNC_SLOTS) - #define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #ifndef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 1 - #endif - #ifndef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 1 - #endif - #ifndef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 1 - #endif - #if PY_VERSION_HEX < 0x030300F0 || PY_VERSION_HEX >= 0x030B00A2 - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #elif !defined(CYTHON_USE_UNICODE_WRITER) - #define CYTHON_USE_UNICODE_WRITER 1 - #endif - #ifndef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 0 - #endif - #ifndef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 1 - #endif - #ifndef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 1 - #endif - #ifndef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 1 - #endif - #ifndef CYTHON_FAST_GIL - #define CYTHON_FAST_GIL (PY_MAJOR_VERSION < 3 || PY_VERSION_HEX >= 0x03060000 && PY_VERSION_HEX < 0x030C00A6) - #endif - #ifndef CYTHON_METH_FASTCALL - #define CYTHON_METH_FASTCALL (PY_VERSION_HEX >= 0x030700A1) - #endif - #ifndef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 1 - #endif - #ifndef CYTHON_PEP487_INIT_SUBCLASS - #define CYTHON_PEP487_INIT_SUBCLASS 1 - #endif - #if PY_VERSION_HEX < 0x03050000 - #undef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 0 - #elif !defined(CYTHON_PEP489_MULTI_PHASE_INIT) - #define CYTHON_PEP489_MULTI_PHASE_INIT 1 - #endif - #ifndef CYTHON_USE_MODULE_STATE - #define CYTHON_USE_MODULE_STATE 0 - #endif - #if PY_VERSION_HEX < 0x030400a1 - #undef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 0 - #elif !defined(CYTHON_USE_TP_FINALIZE) - #define CYTHON_USE_TP_FINALIZE 1 - #endif - #if PY_VERSION_HEX < 0x030600B1 - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #elif !defined(CYTHON_USE_DICT_VERSIONS) - #define CYTHON_USE_DICT_VERSIONS (PY_VERSION_HEX < 0x030C00A5) - #endif - #if PY_VERSION_HEX < 0x030700A3 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 - #elif !defined(CYTHON_USE_EXC_INFO_STACK) - #define CYTHON_USE_EXC_INFO_STACK 1 - #endif - #ifndef CYTHON_UPDATE_DESCRIPTOR_DOC - #define CYTHON_UPDATE_DESCRIPTOR_DOC 1 - #endif -#endif -#if !defined(CYTHON_FAST_PYCCALL) -#define CYTHON_FAST_PYCCALL (CYTHON_FAST_PYCALL && PY_VERSION_HEX >= 0x030600B1) -#endif -#if !defined(CYTHON_VECTORCALL) -#define CYTHON_VECTORCALL 
(CYTHON_FAST_PYCCALL && PY_VERSION_HEX >= 0x030800B1) -#endif -#define CYTHON_BACKPORT_VECTORCALL (CYTHON_METH_FASTCALL && PY_VERSION_HEX < 0x030800B1) -#if CYTHON_USE_PYLONG_INTERNALS - #if PY_MAJOR_VERSION < 3 - #include "longintrepr.h" - #endif - #undef SHIFT - #undef BASE - #undef MASK - #ifdef SIZEOF_VOID_P - enum { __pyx_check_sizeof_voidp = 1 / (int)(SIZEOF_VOID_P == sizeof(void*)) }; - #endif -#endif -#ifndef __has_attribute - #define __has_attribute(x) 0 -#endif -#ifndef __has_cpp_attribute - #define __has_cpp_attribute(x) 0 -#endif -#ifndef CYTHON_RESTRICT - #if defined(__GNUC__) - #define CYTHON_RESTRICT __restrict__ - #elif defined(_MSC_VER) && _MSC_VER >= 1400 - #define CYTHON_RESTRICT __restrict - #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define CYTHON_RESTRICT restrict - #else - #define CYTHON_RESTRICT - #endif -#endif -#ifndef CYTHON_UNUSED - #if defined(__cplusplus) - /* for clang __has_cpp_attribute(maybe_unused) is true even before C++17 - * but leads to warnings with -pedantic, since it is a C++17 feature */ - #if ((defined(_MSVC_LANG) && _MSVC_LANG >= 201703L) || __cplusplus >= 201703L) - #if __has_cpp_attribute(maybe_unused) - #define CYTHON_UNUSED [[maybe_unused]] - #endif - #endif - #endif -#endif -#ifndef CYTHON_UNUSED -# if defined(__GNUC__) -# if !(defined(__cplusplus)) || (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4)) -# define CYTHON_UNUSED __attribute__ ((__unused__)) -# else -# define CYTHON_UNUSED -# endif -# elif defined(__ICC) || (defined(__INTEL_COMPILER) && !defined(_MSC_VER)) -# define CYTHON_UNUSED __attribute__ ((__unused__)) -# else -# define CYTHON_UNUSED -# endif -#endif -#ifndef CYTHON_UNUSED_VAR -# if defined(__cplusplus) - template void CYTHON_UNUSED_VAR( const T& ) { } -# else -# define CYTHON_UNUSED_VAR(x) (void)(x) -# endif -#endif -#ifndef CYTHON_MAYBE_UNUSED_VAR - #define CYTHON_MAYBE_UNUSED_VAR(x) CYTHON_UNUSED_VAR(x) -#endif -#ifndef CYTHON_NCP_UNUSED -# if CYTHON_COMPILING_IN_CPYTHON -# define CYTHON_NCP_UNUSED -# else -# define CYTHON_NCP_UNUSED CYTHON_UNUSED -# endif -#endif -#ifndef CYTHON_USE_CPP_STD_MOVE - #if defined(__cplusplus) && (\ - __cplusplus >= 201103L || (defined(_MSC_VER) && _MSC_VER >= 1600)) - #define CYTHON_USE_CPP_STD_MOVE 1 - #else - #define CYTHON_USE_CPP_STD_MOVE 0 - #endif -#endif -#define __Pyx_void_to_None(void_result) ((void)(void_result), Py_INCREF(Py_None), Py_None) -#ifdef _MSC_VER - #ifndef _MSC_STDINT_H_ - #if _MSC_VER < 1300 - typedef unsigned char uint8_t; - typedef unsigned short uint16_t; - typedef unsigned int uint32_t; - #else - typedef unsigned __int8 uint8_t; - typedef unsigned __int16 uint16_t; - typedef unsigned __int32 uint32_t; - #endif - #endif - #if _MSC_VER < 1300 - #ifdef _WIN64 - typedef unsigned long long __pyx_uintptr_t; - #else - typedef unsigned int __pyx_uintptr_t; - #endif - #else - #ifdef _WIN64 - typedef unsigned __int64 __pyx_uintptr_t; - #else - typedef unsigned __int32 __pyx_uintptr_t; - #endif - #endif -#else - #include - typedef uintptr_t __pyx_uintptr_t; -#endif -#ifndef CYTHON_FALLTHROUGH - #if defined(__cplusplus) - /* for clang __has_cpp_attribute(fallthrough) is true even before C++17 - * but leads to warnings with -pedantic, since it is a C++17 feature */ - #if ((defined(_MSVC_LANG) && _MSVC_LANG >= 201703L) || __cplusplus >= 201703L) - #if __has_cpp_attribute(fallthrough) - #define CYTHON_FALLTHROUGH [[fallthrough]] - #endif - #endif - #ifndef CYTHON_FALLTHROUGH - #if __has_cpp_attribute(clang::fallthrough) - #define CYTHON_FALLTHROUGH 
[[clang::fallthrough]] - #elif __has_cpp_attribute(gnu::fallthrough) - #define CYTHON_FALLTHROUGH [[gnu::fallthrough]] - #endif - #endif - #endif - #ifndef CYTHON_FALLTHROUGH - #if __has_attribute(fallthrough) - #define CYTHON_FALLTHROUGH __attribute__((fallthrough)) - #else - #define CYTHON_FALLTHROUGH - #endif - #endif - #if defined(__clang__) && defined(__apple_build_version__) - #if __apple_build_version__ < 7000000 - #undef CYTHON_FALLTHROUGH - #define CYTHON_FALLTHROUGH - #endif - #endif -#endif -#ifdef __cplusplus - template - struct __PYX_IS_UNSIGNED_IMPL {static const bool value = T(0) < T(-1);}; - #define __PYX_IS_UNSIGNED(type) (__PYX_IS_UNSIGNED_IMPL::value) -#else - #define __PYX_IS_UNSIGNED(type) (((type)-1) > 0) -#endif -#if CYTHON_COMPILING_IN_PYPY == 1 - #define __PYX_NEED_TP_PRINT_SLOT (PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x030A0000) -#else - #define __PYX_NEED_TP_PRINT_SLOT (PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000) -#endif -#define __PYX_REINTERPRET_FUNCION(func_pointer, other_pointer) ((func_pointer)(void(*)(void))(other_pointer)) - -#ifndef CYTHON_INLINE - #if defined(__clang__) - #define CYTHON_INLINE __inline__ __attribute__ ((__unused__)) - #elif defined(__GNUC__) - #define CYTHON_INLINE __inline__ - #elif defined(_MSC_VER) - #define CYTHON_INLINE __inline - #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define CYTHON_INLINE inline - #else - #define CYTHON_INLINE - #endif -#endif - -#define __PYX_BUILD_PY_SSIZE_T "n" -#define CYTHON_FORMAT_SSIZE_T "z" -#if PY_MAJOR_VERSION < 3 - #define __Pyx_BUILTIN_MODULE_NAME "__builtin__" - #define __Pyx_DefaultClassType PyClass_Type - #define __Pyx_PyCode_New(a, p, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ - PyCode_New(a+k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) -#else - #define __Pyx_BUILTIN_MODULE_NAME "builtins" - #define __Pyx_DefaultClassType PyType_Type -#if CYTHON_COMPILING_IN_LIMITED_API - static CYTHON_INLINE PyObject* __Pyx_PyCode_New(int a, int p, int k, int l, int s, int f, - PyObject *code, PyObject *c, PyObject* n, PyObject *v, - PyObject *fv, PyObject *cell, PyObject* fn, - PyObject *name, int fline, PyObject *lnos) { - PyObject *exception_table = NULL; - PyObject *types_module=NULL, *code_type=NULL, *result=NULL; - PyObject *version_info; // borrowed - PyObject *py_minor_version = NULL; - long minor_version = 0; - PyObject *type, *value, *traceback; - PyErr_Fetch(&type, &value, &traceback); - #if __PYX_LIMITED_VERSION_HEX >= 0x030B0000 - minor_version = 11; // we don't yet need to distinguish between versions > 11 - #else - if (!(version_info = PySys_GetObject("version_info"))) goto end; - if (!(py_minor_version = PySequence_GetItem(version_info, 1))) goto end; - minor_version = PyLong_AsLong(py_minor_version); - if (minor_version == -1 && PyErr_Occurred()) goto end; - #endif - if (!(types_module = PyImport_ImportModule("types"))) goto end; - if (!(code_type = PyObject_GetAttrString(types_module, "CodeType"))) goto end; - if (minor_version <= 7) { - (void)p; - result = PyObject_CallFunction(code_type, "iiiiiOOOOOOiOO", a, k, l, s, f, code, - c, n, v, fn, name, fline, lnos, fv, cell); - } else if (minor_version <= 10) { - result = PyObject_CallFunction(code_type, "iiiiiiOOOOOOiOO", a,p, k, l, s, f, code, - c, n, v, fn, name, fline, lnos, fv, cell); - } else { - if (!(exception_table = PyBytes_FromStringAndSize(NULL, 0))) goto end; - result = PyObject_CallFunction(code_type, "iiiiiiOOOOOOOiOO", a,p, k, l, s, f, code, - c, n, 
v, fn, name, name, fline, lnos, exception_table, fv, cell); - } - end: - Py_XDECREF(code_type); - Py_XDECREF(exception_table); - Py_XDECREF(types_module); - Py_XDECREF(py_minor_version); - if (type) { - PyErr_Restore(type, value, traceback); - } - return result; - } - #ifndef CO_OPTIMIZED - #define CO_OPTIMIZED 0x0001 - #endif - #ifndef CO_NEWLOCALS - #define CO_NEWLOCALS 0x0002 - #endif - #ifndef CO_VARARGS - #define CO_VARARGS 0x0004 - #endif - #ifndef CO_VARKEYWORDS - #define CO_VARKEYWORDS 0x0008 - #endif - #ifndef CO_ASYNC_GENERATOR - #define CO_ASYNC_GENERATOR 0x0200 - #endif - #ifndef CO_GENERATOR - #define CO_GENERATOR 0x0020 - #endif - #ifndef CO_COROUTINE - #define CO_COROUTINE 0x0080 - #endif -#elif PY_VERSION_HEX >= 0x030B0000 - static CYTHON_INLINE PyCodeObject* __Pyx_PyCode_New(int a, int p, int k, int l, int s, int f, - PyObject *code, PyObject *c, PyObject* n, PyObject *v, - PyObject *fv, PyObject *cell, PyObject* fn, - PyObject *name, int fline, PyObject *lnos) { - PyCodeObject *result; - PyObject *empty_bytes = PyBytes_FromStringAndSize("", 0); // we don't have access to __pyx_empty_bytes here - if (!empty_bytes) return NULL; - result = - #if PY_VERSION_HEX >= 0x030C0000 - PyUnstable_Code_NewWithPosOnlyArgs - #else - PyCode_NewWithPosOnlyArgs - #endif - (a, p, k, l, s, f, code, c, n, v, fv, cell, fn, name, name, fline, lnos, empty_bytes); - Py_DECREF(empty_bytes); - return result; - } -#elif PY_VERSION_HEX >= 0x030800B2 && !CYTHON_COMPILING_IN_PYPY - #define __Pyx_PyCode_New(a, p, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ - PyCode_NewWithPosOnlyArgs(a, p, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) -#else - #define __Pyx_PyCode_New(a, p, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ - PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) -#endif -#endif -#if PY_VERSION_HEX >= 0x030900A4 || defined(Py_IS_TYPE) - #define __Pyx_IS_TYPE(ob, type) Py_IS_TYPE(ob, type) -#else - #define __Pyx_IS_TYPE(ob, type) (((const PyObject*)ob)->ob_type == (type)) -#endif -#if PY_VERSION_HEX >= 0x030A00B1 || defined(Py_Is) - #define __Pyx_Py_Is(x, y) Py_Is(x, y) -#else - #define __Pyx_Py_Is(x, y) ((x) == (y)) -#endif -#if PY_VERSION_HEX >= 0x030A00B1 || defined(Py_IsNone) - #define __Pyx_Py_IsNone(ob) Py_IsNone(ob) -#else - #define __Pyx_Py_IsNone(ob) __Pyx_Py_Is((ob), Py_None) -#endif -#if PY_VERSION_HEX >= 0x030A00B1 || defined(Py_IsTrue) - #define __Pyx_Py_IsTrue(ob) Py_IsTrue(ob) -#else - #define __Pyx_Py_IsTrue(ob) __Pyx_Py_Is((ob), Py_True) -#endif -#if PY_VERSION_HEX >= 0x030A00B1 || defined(Py_IsFalse) - #define __Pyx_Py_IsFalse(ob) Py_IsFalse(ob) -#else - #define __Pyx_Py_IsFalse(ob) __Pyx_Py_Is((ob), Py_False) -#endif -#define __Pyx_NoneAsNull(obj) (__Pyx_Py_IsNone(obj) ? 
NULL : (obj)) -#if PY_VERSION_HEX >= 0x030900F0 && !CYTHON_COMPILING_IN_PYPY - #define __Pyx_PyObject_GC_IsFinalized(o) PyObject_GC_IsFinalized(o) -#else - #define __Pyx_PyObject_GC_IsFinalized(o) _PyGC_FINALIZED(o) -#endif -#ifndef CO_COROUTINE - #define CO_COROUTINE 0x80 -#endif -#ifndef CO_ASYNC_GENERATOR - #define CO_ASYNC_GENERATOR 0x200 -#endif -#ifndef Py_TPFLAGS_CHECKTYPES - #define Py_TPFLAGS_CHECKTYPES 0 -#endif -#ifndef Py_TPFLAGS_HAVE_INDEX - #define Py_TPFLAGS_HAVE_INDEX 0 -#endif -#ifndef Py_TPFLAGS_HAVE_NEWBUFFER - #define Py_TPFLAGS_HAVE_NEWBUFFER 0 -#endif -#ifndef Py_TPFLAGS_HAVE_FINALIZE - #define Py_TPFLAGS_HAVE_FINALIZE 0 -#endif -#ifndef Py_TPFLAGS_SEQUENCE - #define Py_TPFLAGS_SEQUENCE 0 -#endif -#ifndef Py_TPFLAGS_MAPPING - #define Py_TPFLAGS_MAPPING 0 -#endif -#ifndef METH_STACKLESS - #define METH_STACKLESS 0 -#endif -#if PY_VERSION_HEX <= 0x030700A3 || !defined(METH_FASTCALL) - #ifndef METH_FASTCALL - #define METH_FASTCALL 0x80 - #endif - typedef PyObject *(*__Pyx_PyCFunctionFast) (PyObject *self, PyObject *const *args, Py_ssize_t nargs); - typedef PyObject *(*__Pyx_PyCFunctionFastWithKeywords) (PyObject *self, PyObject *const *args, - Py_ssize_t nargs, PyObject *kwnames); -#else - #define __Pyx_PyCFunctionFast _PyCFunctionFast - #define __Pyx_PyCFunctionFastWithKeywords _PyCFunctionFastWithKeywords -#endif -#if CYTHON_METH_FASTCALL - #define __Pyx_METH_FASTCALL METH_FASTCALL - #define __Pyx_PyCFunction_FastCall __Pyx_PyCFunctionFast - #define __Pyx_PyCFunction_FastCallWithKeywords __Pyx_PyCFunctionFastWithKeywords -#else - #define __Pyx_METH_FASTCALL METH_VARARGS - #define __Pyx_PyCFunction_FastCall PyCFunction - #define __Pyx_PyCFunction_FastCallWithKeywords PyCFunctionWithKeywords -#endif -#if CYTHON_VECTORCALL - #define __pyx_vectorcallfunc vectorcallfunc - #define __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET PY_VECTORCALL_ARGUMENTS_OFFSET - #define __Pyx_PyVectorcall_NARGS(n) PyVectorcall_NARGS((size_t)(n)) -#elif CYTHON_BACKPORT_VECTORCALL - typedef PyObject *(*__pyx_vectorcallfunc)(PyObject *callable, PyObject *const *args, - size_t nargsf, PyObject *kwnames); - #define __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET ((size_t)1 << (8 * sizeof(size_t) - 1)) - #define __Pyx_PyVectorcall_NARGS(n) ((Py_ssize_t)(((size_t)(n)) & ~__Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET)) -#else - #define __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET 0 - #define __Pyx_PyVectorcall_NARGS(n) ((Py_ssize_t)(n)) -#endif -#if __PYX_LIMITED_VERSION_HEX < 0x030900B1 - #define __Pyx_PyType_FromModuleAndSpec(m, s, b) ((void)m, PyType_FromSpecWithBases(s, b)) - typedef PyObject *(*__Pyx_PyCMethod)(PyObject *, PyTypeObject *, PyObject *const *, size_t, PyObject *); -#else - #define __Pyx_PyType_FromModuleAndSpec(m, s, b) PyType_FromModuleAndSpec(m, s, b) - #define __Pyx_PyCMethod PyCMethod -#endif -#ifndef METH_METHOD - #define METH_METHOD 0x200 -#endif -#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Malloc) - #define PyObject_Malloc(s) PyMem_Malloc(s) - #define PyObject_Free(p) PyMem_Free(p) - #define PyObject_Realloc(p) PyMem_Realloc(p) -#endif -#if CYTHON_COMPILING_IN_LIMITED_API - #define __Pyx_PyCode_HasFreeVars(co) (PyCode_GetNumFree(co) > 0) - #define __Pyx_PyFrame_SetLineNumber(frame, lineno) -#else - #define __Pyx_PyCode_HasFreeVars(co) (PyCode_GetNumFree(co) > 0) - #define __Pyx_PyFrame_SetLineNumber(frame, lineno) (frame)->f_lineno = (lineno) -#endif -#if CYTHON_COMPILING_IN_LIMITED_API - #define __Pyx_PyThreadState_Current PyThreadState_Get() -#elif !CYTHON_FAST_THREAD_STATE - #define 
__Pyx_PyThreadState_Current PyThreadState_GET() -#elif PY_VERSION_HEX >= 0x03060000 - #define __Pyx_PyThreadState_Current _PyThreadState_UncheckedGet() -#elif PY_VERSION_HEX >= 0x03000000 - #define __Pyx_PyThreadState_Current PyThreadState_GET() -#else - #define __Pyx_PyThreadState_Current _PyThreadState_Current -#endif -#if CYTHON_COMPILING_IN_LIMITED_API -static CYTHON_INLINE void *__Pyx_PyModule_GetState(PyObject *op) -{ - void *result; - result = PyModule_GetState(op); - if (!result) - Py_FatalError("Couldn't find the module state"); - return result; -} -#endif -#define __Pyx_PyObject_GetSlot(obj, name, func_ctype) __Pyx_PyType_GetSlot(Py_TYPE(obj), name, func_ctype) -#if CYTHON_COMPILING_IN_LIMITED_API - #define __Pyx_PyType_GetSlot(type, name, func_ctype) ((func_ctype) PyType_GetSlot((type), Py_##name)) -#else - #define __Pyx_PyType_GetSlot(type, name, func_ctype) ((type)->name) -#endif -#if PY_VERSION_HEX < 0x030700A2 && !defined(PyThread_tss_create) && !defined(Py_tss_NEEDS_INIT) -#include "pythread.h" -#define Py_tss_NEEDS_INIT 0 -typedef int Py_tss_t; -static CYTHON_INLINE int PyThread_tss_create(Py_tss_t *key) { - *key = PyThread_create_key(); - return 0; -} -static CYTHON_INLINE Py_tss_t * PyThread_tss_alloc(void) { - Py_tss_t *key = (Py_tss_t *)PyObject_Malloc(sizeof(Py_tss_t)); - *key = Py_tss_NEEDS_INIT; - return key; -} -static CYTHON_INLINE void PyThread_tss_free(Py_tss_t *key) { - PyObject_Free(key); -} -static CYTHON_INLINE int PyThread_tss_is_created(Py_tss_t *key) { - return *key != Py_tss_NEEDS_INIT; -} -static CYTHON_INLINE void PyThread_tss_delete(Py_tss_t *key) { - PyThread_delete_key(*key); - *key = Py_tss_NEEDS_INIT; -} -static CYTHON_INLINE int PyThread_tss_set(Py_tss_t *key, void *value) { - return PyThread_set_key_value(*key, value); -} -static CYTHON_INLINE void * PyThread_tss_get(Py_tss_t *key) { - return PyThread_get_key_value(*key); -} -#endif -#if PY_MAJOR_VERSION < 3 - #if CYTHON_COMPILING_IN_PYPY - #if PYPY_VERSION_NUM < 0x07030600 - #if defined(__cplusplus) && __cplusplus >= 201402L - [[deprecated("`with nogil:` inside a nogil function will not release the GIL in PyPy2 < 7.3.6")]] - #elif defined(__GNUC__) || defined(__clang__) - __attribute__ ((__deprecated__("`with nogil:` inside a nogil function will not release the GIL in PyPy2 < 7.3.6"))) - #elif defined(_MSC_VER) - __declspec(deprecated("`with nogil:` inside a nogil function will not release the GIL in PyPy2 < 7.3.6")) - #endif - static CYTHON_INLINE int PyGILState_Check(void) { - return 0; - } - #else // PYPY_VERSION_NUM < 0x07030600 - #endif // PYPY_VERSION_NUM < 0x07030600 - #else - static CYTHON_INLINE int PyGILState_Check(void) { - PyThreadState * tstate = _PyThreadState_Current; - return tstate && (tstate == PyGILState_GetThisThreadState()); - } - #endif -#endif -#if CYTHON_COMPILING_IN_CPYTHON || defined(_PyDict_NewPresized) -#define __Pyx_PyDict_NewPresized(n) ((n <= 8) ? 
PyDict_New() : _PyDict_NewPresized(n)) -#else -#define __Pyx_PyDict_NewPresized(n) PyDict_New() -#endif -#if PY_MAJOR_VERSION >= 3 || CYTHON_FUTURE_DIVISION - #define __Pyx_PyNumber_Divide(x,y) PyNumber_TrueDivide(x,y) - #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceTrueDivide(x,y) -#else - #define __Pyx_PyNumber_Divide(x,y) PyNumber_Divide(x,y) - #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceDivide(x,y) -#endif -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX > 0x030600B4 && CYTHON_USE_UNICODE_INTERNALS -#define __Pyx_PyDict_GetItemStrWithError(dict, name) _PyDict_GetItem_KnownHash(dict, name, ((PyASCIIObject *) name)->hash) -static CYTHON_INLINE PyObject * __Pyx_PyDict_GetItemStr(PyObject *dict, PyObject *name) { - PyObject *res = __Pyx_PyDict_GetItemStrWithError(dict, name); - if (res == NULL) PyErr_Clear(); - return res; -} -#elif PY_MAJOR_VERSION >= 3 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07020000) -#define __Pyx_PyDict_GetItemStrWithError PyDict_GetItemWithError -#define __Pyx_PyDict_GetItemStr PyDict_GetItem -#else -static CYTHON_INLINE PyObject * __Pyx_PyDict_GetItemStrWithError(PyObject *dict, PyObject *name) { -#if CYTHON_COMPILING_IN_PYPY - return PyDict_GetItem(dict, name); -#else - PyDictEntry *ep; - PyDictObject *mp = (PyDictObject*) dict; - long hash = ((PyStringObject *) name)->ob_shash; - assert(hash != -1); - ep = (mp->ma_lookup)(mp, name, hash); - if (ep == NULL) { - return NULL; - } - return ep->me_value; -#endif -} -#define __Pyx_PyDict_GetItemStr PyDict_GetItem -#endif -#if CYTHON_USE_TYPE_SLOTS - #define __Pyx_PyType_GetFlags(tp) (((PyTypeObject *)tp)->tp_flags) - #define __Pyx_PyType_HasFeature(type, feature) ((__Pyx_PyType_GetFlags(type) & (feature)) != 0) - #define __Pyx_PyObject_GetIterNextFunc(obj) (Py_TYPE(obj)->tp_iternext) -#else - #define __Pyx_PyType_GetFlags(tp) (PyType_GetFlags((PyTypeObject *)tp)) - #define __Pyx_PyType_HasFeature(type, feature) PyType_HasFeature(type, feature) - #define __Pyx_PyObject_GetIterNextFunc(obj) PyIter_Next -#endif -#if CYTHON_COMPILING_IN_LIMITED_API - #define __Pyx_SetItemOnTypeDict(tp, k, v) PyObject_GenericSetAttr((PyObject*)tp, k, v) -#else - #define __Pyx_SetItemOnTypeDict(tp, k, v) PyDict_SetItem(tp->tp_dict, k, v) -#endif -#if CYTHON_USE_TYPE_SPECS && PY_VERSION_HEX >= 0x03080000 -#define __Pyx_PyHeapTypeObject_GC_Del(obj) {\ - PyTypeObject *type = Py_TYPE(obj);\ - assert(__Pyx_PyType_HasFeature(type, Py_TPFLAGS_HEAPTYPE));\ - PyObject_GC_Del(obj);\ - Py_DECREF(type);\ -} -#else -#define __Pyx_PyHeapTypeObject_GC_Del(obj) PyObject_GC_Del(obj) -#endif -#if CYTHON_COMPILING_IN_LIMITED_API - #define CYTHON_PEP393_ENABLED 1 - #define __Pyx_PyUnicode_READY(op) (0) - #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GetLength(u) - #define __Pyx_PyUnicode_READ_CHAR(u, i) PyUnicode_ReadChar(u, i) - #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) ((void)u, 1114111U) - #define __Pyx_PyUnicode_KIND(u) ((void)u, (0)) - #define __Pyx_PyUnicode_DATA(u) ((void*)u) - #define __Pyx_PyUnicode_READ(k, d, i) ((void)k, PyUnicode_ReadChar((PyObject*)(d), i)) - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GetLength(u)) -#elif PY_VERSION_HEX > 0x03030000 && defined(PyUnicode_KIND) - #define CYTHON_PEP393_ENABLED 1 - #if PY_VERSION_HEX >= 0x030C0000 - #define __Pyx_PyUnicode_READY(op) (0) - #else - #define __Pyx_PyUnicode_READY(op) (likely(PyUnicode_IS_READY(op)) ?\ - 0 : _PyUnicode_Ready((PyObject *)(op))) - #endif - #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_LENGTH(u) - #define 
__Pyx_PyUnicode_READ_CHAR(u, i) PyUnicode_READ_CHAR(u, i) - #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) PyUnicode_MAX_CHAR_VALUE(u) - #define __Pyx_PyUnicode_KIND(u) ((int)PyUnicode_KIND(u)) - #define __Pyx_PyUnicode_DATA(u) PyUnicode_DATA(u) - #define __Pyx_PyUnicode_READ(k, d, i) PyUnicode_READ(k, d, i) - #define __Pyx_PyUnicode_WRITE(k, d, i, ch) PyUnicode_WRITE(k, d, i, (Py_UCS4) ch) - #if PY_VERSION_HEX >= 0x030C0000 - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_LENGTH(u)) - #else - #if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x03090000 - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : ((PyCompactUnicodeObject *)(u))->wstr_length)) - #else - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : PyUnicode_GET_SIZE(u))) - #endif - #endif -#else - #define CYTHON_PEP393_ENABLED 0 - #define PyUnicode_1BYTE_KIND 1 - #define PyUnicode_2BYTE_KIND 2 - #define PyUnicode_4BYTE_KIND 4 - #define __Pyx_PyUnicode_READY(op) (0) - #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_SIZE(u) - #define __Pyx_PyUnicode_READ_CHAR(u, i) ((Py_UCS4)(PyUnicode_AS_UNICODE(u)[i])) - #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) ((sizeof(Py_UNICODE) == 2) ? 65535U : 1114111U) - #define __Pyx_PyUnicode_KIND(u) ((int)sizeof(Py_UNICODE)) - #define __Pyx_PyUnicode_DATA(u) ((void*)PyUnicode_AS_UNICODE(u)) - #define __Pyx_PyUnicode_READ(k, d, i) ((void)(k), (Py_UCS4)(((Py_UNICODE*)d)[i])) - #define __Pyx_PyUnicode_WRITE(k, d, i, ch) (((void)(k)), ((Py_UNICODE*)d)[i] = (Py_UNICODE) ch) - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_SIZE(u)) -#endif -#if CYTHON_COMPILING_IN_PYPY - #define __Pyx_PyUnicode_Concat(a, b) PyNumber_Add(a, b) - #define __Pyx_PyUnicode_ConcatSafe(a, b) PyNumber_Add(a, b) -#else - #define __Pyx_PyUnicode_Concat(a, b) PyUnicode_Concat(a, b) - #define __Pyx_PyUnicode_ConcatSafe(a, b) ((unlikely((a) == Py_None) || unlikely((b) == Py_None)) ?\ - PyNumber_Add(a, b) : __Pyx_PyUnicode_Concat(a, b)) -#endif -#if CYTHON_COMPILING_IN_PYPY - #if !defined(PyUnicode_DecodeUnicodeEscape) - #define PyUnicode_DecodeUnicodeEscape(s, size, errors) PyUnicode_Decode(s, size, "unicode_escape", errors) - #endif - #if !defined(PyUnicode_Contains) || (PY_MAJOR_VERSION == 2 && PYPY_VERSION_NUM < 0x07030500) - #undef PyUnicode_Contains - #define PyUnicode_Contains(u, s) PySequence_Contains(u, s) - #endif - #if !defined(PyByteArray_Check) - #define PyByteArray_Check(obj) PyObject_TypeCheck(obj, &PyByteArray_Type) - #endif - #if !defined(PyObject_Format) - #define PyObject_Format(obj, fmt) PyObject_CallMethod(obj, "__format__", "O", fmt) - #endif -#endif -#define __Pyx_PyString_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyString_Check(b) && !PyString_CheckExact(b)))) ? PyNumber_Remainder(a, b) : __Pyx_PyString_Format(a, b)) -#define __Pyx_PyUnicode_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyUnicode_Check(b) && !PyUnicode_CheckExact(b)))) ? 
PyNumber_Remainder(a, b) : PyUnicode_Format(a, b)) -#if PY_MAJOR_VERSION >= 3 - #define __Pyx_PyString_Format(a, b) PyUnicode_Format(a, b) -#else - #define __Pyx_PyString_Format(a, b) PyString_Format(a, b) -#endif -#if PY_MAJOR_VERSION < 3 && !defined(PyObject_ASCII) - #define PyObject_ASCII(o) PyObject_Repr(o) -#endif -#if PY_MAJOR_VERSION >= 3 - #define PyBaseString_Type PyUnicode_Type - #define PyStringObject PyUnicodeObject - #define PyString_Type PyUnicode_Type - #define PyString_Check PyUnicode_Check - #define PyString_CheckExact PyUnicode_CheckExact -#ifndef PyObject_Unicode - #define PyObject_Unicode PyObject_Str -#endif -#endif -#if PY_MAJOR_VERSION >= 3 - #define __Pyx_PyBaseString_Check(obj) PyUnicode_Check(obj) - #define __Pyx_PyBaseString_CheckExact(obj) PyUnicode_CheckExact(obj) -#else - #define __Pyx_PyBaseString_Check(obj) (PyString_Check(obj) || PyUnicode_Check(obj)) - #define __Pyx_PyBaseString_CheckExact(obj) (PyString_CheckExact(obj) || PyUnicode_CheckExact(obj)) -#endif -#if CYTHON_COMPILING_IN_CPYTHON - #define __Pyx_PySequence_ListKeepNew(obj)\ - (likely(PyList_CheckExact(obj) && Py_REFCNT(obj) == 1) ? __Pyx_NewRef(obj) : PySequence_List(obj)) -#else - #define __Pyx_PySequence_ListKeepNew(obj) PySequence_List(obj) -#endif -#ifndef PySet_CheckExact - #define PySet_CheckExact(obj) __Pyx_IS_TYPE(obj, &PySet_Type) -#endif -#if PY_VERSION_HEX >= 0x030900A4 - #define __Pyx_SET_REFCNT(obj, refcnt) Py_SET_REFCNT(obj, refcnt) - #define __Pyx_SET_SIZE(obj, size) Py_SET_SIZE(obj, size) -#else - #define __Pyx_SET_REFCNT(obj, refcnt) Py_REFCNT(obj) = (refcnt) - #define __Pyx_SET_SIZE(obj, size) Py_SIZE(obj) = (size) -#endif -#if CYTHON_ASSUME_SAFE_MACROS - #define __Pyx_PySequence_ITEM(o, i) PySequence_ITEM(o, i) - #define __Pyx_PySequence_SIZE(seq) Py_SIZE(seq) - #define __Pyx_PyTuple_SET_ITEM(o, i, v) (PyTuple_SET_ITEM(o, i, v), (0)) - #define __Pyx_PyList_SET_ITEM(o, i, v) (PyList_SET_ITEM(o, i, v), (0)) - #define __Pyx_PyTuple_GET_SIZE(o) PyTuple_GET_SIZE(o) - #define __Pyx_PyList_GET_SIZE(o) PyList_GET_SIZE(o) - #define __Pyx_PySet_GET_SIZE(o) PySet_GET_SIZE(o) - #define __Pyx_PyBytes_GET_SIZE(o) PyBytes_GET_SIZE(o) - #define __Pyx_PyByteArray_GET_SIZE(o) PyByteArray_GET_SIZE(o) -#else - #define __Pyx_PySequence_ITEM(o, i) PySequence_GetItem(o, i) - #define __Pyx_PySequence_SIZE(seq) PySequence_Size(seq) - #define __Pyx_PyTuple_SET_ITEM(o, i, v) PyTuple_SetItem(o, i, v) - #define __Pyx_PyList_SET_ITEM(o, i, v) PyList_SetItem(o, i, v) - #define __Pyx_PyTuple_GET_SIZE(o) PyTuple_Size(o) - #define __Pyx_PyList_GET_SIZE(o) PyList_Size(o) - #define __Pyx_PySet_GET_SIZE(o) PySet_Size(o) - #define __Pyx_PyBytes_GET_SIZE(o) PyBytes_Size(o) - #define __Pyx_PyByteArray_GET_SIZE(o) PyByteArray_Size(o) -#endif -#if PY_MAJOR_VERSION >= 3 - #define PyIntObject PyLongObject - #define PyInt_Type PyLong_Type - #define PyInt_Check(op) PyLong_Check(op) - #define PyInt_CheckExact(op) PyLong_CheckExact(op) - #define __Pyx_Py3Int_Check(op) PyLong_Check(op) - #define __Pyx_Py3Int_CheckExact(op) PyLong_CheckExact(op) - #define PyInt_FromString PyLong_FromString - #define PyInt_FromUnicode PyLong_FromUnicode - #define PyInt_FromLong PyLong_FromLong - #define PyInt_FromSize_t PyLong_FromSize_t - #define PyInt_FromSsize_t PyLong_FromSsize_t - #define PyInt_AsLong PyLong_AsLong - #define PyInt_AS_LONG PyLong_AS_LONG - #define PyInt_AsSsize_t PyLong_AsSsize_t - #define PyInt_AsUnsignedLongMask PyLong_AsUnsignedLongMask - #define PyInt_AsUnsignedLongLongMask PyLong_AsUnsignedLongLongMask - #define 
PyNumber_Int PyNumber_Long -#else - #define __Pyx_Py3Int_Check(op) (PyLong_Check(op) || PyInt_Check(op)) - #define __Pyx_Py3Int_CheckExact(op) (PyLong_CheckExact(op) || PyInt_CheckExact(op)) -#endif -#if PY_MAJOR_VERSION >= 3 - #define PyBoolObject PyLongObject -#endif -#if PY_MAJOR_VERSION >= 3 && CYTHON_COMPILING_IN_PYPY - #ifndef PyUnicode_InternFromString - #define PyUnicode_InternFromString(s) PyUnicode_FromString(s) - #endif -#endif -#if PY_VERSION_HEX < 0x030200A4 - typedef long Py_hash_t; - #define __Pyx_PyInt_FromHash_t PyInt_FromLong - #define __Pyx_PyInt_AsHash_t __Pyx_PyIndex_AsHash_t -#else - #define __Pyx_PyInt_FromHash_t PyInt_FromSsize_t - #define __Pyx_PyInt_AsHash_t __Pyx_PyIndex_AsSsize_t -#endif -#if CYTHON_USE_ASYNC_SLOTS - #if PY_VERSION_HEX >= 0x030500B1 - #define __Pyx_PyAsyncMethodsStruct PyAsyncMethods - #define __Pyx_PyType_AsAsync(obj) (Py_TYPE(obj)->tp_as_async) - #else - #define __Pyx_PyType_AsAsync(obj) ((__Pyx_PyAsyncMethodsStruct*) (Py_TYPE(obj)->tp_reserved)) - #endif -#else - #define __Pyx_PyType_AsAsync(obj) NULL -#endif -#ifndef __Pyx_PyAsyncMethodsStruct - typedef struct { - unaryfunc am_await; - unaryfunc am_aiter; - unaryfunc am_anext; - } __Pyx_PyAsyncMethodsStruct; -#endif - -#if defined(_WIN32) || defined(WIN32) || defined(MS_WINDOWS) - #if !defined(_USE_MATH_DEFINES) - #define _USE_MATH_DEFINES - #endif -#endif -#include -#ifdef NAN -#define __PYX_NAN() ((float) NAN) -#else -static CYTHON_INLINE float __PYX_NAN() { - float value; - memset(&value, 0xFF, sizeof(value)); - return value; -} -#endif -#if defined(__CYGWIN__) && defined(_LDBL_EQ_DBL) -#define __Pyx_truncl trunc -#else -#define __Pyx_truncl truncl -#endif - -#define __PYX_MARK_ERR_POS(f_index, lineno) \ - { __pyx_filename = __pyx_f[f_index]; (void)__pyx_filename; __pyx_lineno = lineno; (void)__pyx_lineno; __pyx_clineno = __LINE__; (void)__pyx_clineno; } -#define __PYX_ERR(f_index, lineno, Ln_error) \ - { __PYX_MARK_ERR_POS(f_index, lineno) goto Ln_error; } - -#ifdef CYTHON_EXTERN_C - #undef __PYX_EXTERN_C - #define __PYX_EXTERN_C CYTHON_EXTERN_C -#elif defined(__PYX_EXTERN_C) - #ifdef _MSC_VER - #pragma message ("Please do not define the '__PYX_EXTERN_C' macro externally. Use 'CYTHON_EXTERN_C' instead.") - #else - #warning Please do not define the '__PYX_EXTERN_C' macro externally. Use 'CYTHON_EXTERN_C' instead. 
- #endif -#else - #ifdef __cplusplus - #define __PYX_EXTERN_C extern "C" - #else - #define __PYX_EXTERN_C extern - #endif -#endif - -#define __PYX_HAVE__fontTools__feaLib__lexer -#define __PYX_HAVE_API__fontTools__feaLib__lexer -/* Early includes */ -#ifdef _OPENMP -#include -#endif /* _OPENMP */ - -#if defined(PYREX_WITHOUT_ASSERTIONS) && !defined(CYTHON_WITHOUT_ASSERTIONS) -#define CYTHON_WITHOUT_ASSERTIONS -#endif - -typedef struct {PyObject **p; const char *s; const Py_ssize_t n; const char* encoding; - const char is_unicode; const char is_str; const char intern; } __Pyx_StringTabEntry; - -#define __PYX_DEFAULT_STRING_ENCODING_IS_ASCII 0 -#define __PYX_DEFAULT_STRING_ENCODING_IS_UTF8 0 -#define __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT (PY_MAJOR_VERSION >= 3 && __PYX_DEFAULT_STRING_ENCODING_IS_UTF8) -#define __PYX_DEFAULT_STRING_ENCODING "" -#define __Pyx_PyObject_FromString __Pyx_PyBytes_FromString -#define __Pyx_PyObject_FromStringAndSize __Pyx_PyBytes_FromStringAndSize -#define __Pyx_uchar_cast(c) ((unsigned char)c) -#define __Pyx_long_cast(x) ((long)x) -#define __Pyx_fits_Py_ssize_t(v, type, is_signed) (\ - (sizeof(type) < sizeof(Py_ssize_t)) ||\ - (sizeof(type) > sizeof(Py_ssize_t) &&\ - likely(v < (type)PY_SSIZE_T_MAX ||\ - v == (type)PY_SSIZE_T_MAX) &&\ - (!is_signed || likely(v > (type)PY_SSIZE_T_MIN ||\ - v == (type)PY_SSIZE_T_MIN))) ||\ - (sizeof(type) == sizeof(Py_ssize_t) &&\ - (is_signed || likely(v < (type)PY_SSIZE_T_MAX ||\ - v == (type)PY_SSIZE_T_MAX))) ) -static CYTHON_INLINE int __Pyx_is_valid_index(Py_ssize_t i, Py_ssize_t limit) { - return (size_t) i < (size_t) limit; -} -#if defined (__cplusplus) && __cplusplus >= 201103L - #include - #define __Pyx_sst_abs(value) std::abs(value) -#elif SIZEOF_INT >= SIZEOF_SIZE_T - #define __Pyx_sst_abs(value) abs(value) -#elif SIZEOF_LONG >= SIZEOF_SIZE_T - #define __Pyx_sst_abs(value) labs(value) -#elif defined (_MSC_VER) - #define __Pyx_sst_abs(value) ((Py_ssize_t)_abs64(value)) -#elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define __Pyx_sst_abs(value) llabs(value) -#elif defined (__GNUC__) - #define __Pyx_sst_abs(value) __builtin_llabs(value) -#else - #define __Pyx_sst_abs(value) ((value<0) ? 
-value : value) -#endif -static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject*); -static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject*, Py_ssize_t* length); -#define __Pyx_PyByteArray_FromString(s) PyByteArray_FromStringAndSize((const char*)s, strlen((const char*)s)) -#define __Pyx_PyByteArray_FromStringAndSize(s, l) PyByteArray_FromStringAndSize((const char*)s, l) -#define __Pyx_PyBytes_FromString PyBytes_FromString -#define __Pyx_PyBytes_FromStringAndSize PyBytes_FromStringAndSize -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char*); -#if PY_MAJOR_VERSION < 3 - #define __Pyx_PyStr_FromString __Pyx_PyBytes_FromString - #define __Pyx_PyStr_FromStringAndSize __Pyx_PyBytes_FromStringAndSize -#else - #define __Pyx_PyStr_FromString __Pyx_PyUnicode_FromString - #define __Pyx_PyStr_FromStringAndSize __Pyx_PyUnicode_FromStringAndSize -#endif -#define __Pyx_PyBytes_AsWritableString(s) ((char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsWritableSString(s) ((signed char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsWritableUString(s) ((unsigned char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsString(s) ((const char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsSString(s) ((const signed char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsUString(s) ((const unsigned char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyObject_AsWritableString(s) ((char*)(__pyx_uintptr_t) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsWritableSString(s) ((signed char*)(__pyx_uintptr_t) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsWritableUString(s) ((unsigned char*)(__pyx_uintptr_t) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsSString(s) ((const signed char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsUString(s) ((const unsigned char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_FromCString(s) __Pyx_PyObject_FromString((const char*)s) -#define __Pyx_PyBytes_FromCString(s) __Pyx_PyBytes_FromString((const char*)s) -#define __Pyx_PyByteArray_FromCString(s) __Pyx_PyByteArray_FromString((const char*)s) -#define __Pyx_PyStr_FromCString(s) __Pyx_PyStr_FromString((const char*)s) -#define __Pyx_PyUnicode_FromCString(s) __Pyx_PyUnicode_FromString((const char*)s) -#if CYTHON_COMPILING_IN_LIMITED_API -static CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const wchar_t *u) -{ - const wchar_t *u_end = u; - while (*u_end++) ; - return (size_t)(u_end - u - 1); -} -#else -static CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const Py_UNICODE *u) -{ - const Py_UNICODE *u_end = u; - while (*u_end++) ; - return (size_t)(u_end - u - 1); -} -#endif -#define __Pyx_PyUnicode_FromOrdinal(o) PyUnicode_FromOrdinal((int)o) -#define __Pyx_PyUnicode_FromUnicode(u) PyUnicode_FromUnicode(u, __Pyx_Py_UNICODE_strlen(u)) -#define __Pyx_PyUnicode_FromUnicodeAndLength PyUnicode_FromUnicode -#define __Pyx_PyUnicode_AsUnicode PyUnicode_AsUnicode -#define __Pyx_NewRef(obj) (Py_INCREF(obj), obj) -#define __Pyx_Owned_Py_None(b) __Pyx_NewRef(Py_None) -static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b); -static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject*); -static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject*); -static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x); -#define __Pyx_PySequence_Tuple(obj)\ - (likely(PyTuple_CheckExact(obj)) ? 
__Pyx_NewRef(obj) : PySequence_Tuple(obj)) -static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject*); -static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t); -static CYTHON_INLINE Py_hash_t __Pyx_PyIndex_AsHash_t(PyObject*); -#if CYTHON_ASSUME_SAFE_MACROS -#define __pyx_PyFloat_AsDouble(x) (PyFloat_CheckExact(x) ? PyFloat_AS_DOUBLE(x) : PyFloat_AsDouble(x)) -#else -#define __pyx_PyFloat_AsDouble(x) PyFloat_AsDouble(x) -#endif -#define __pyx_PyFloat_AsFloat(x) ((float) __pyx_PyFloat_AsDouble(x)) -#if PY_MAJOR_VERSION >= 3 -#define __Pyx_PyNumber_Int(x) (PyLong_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Long(x)) -#else -#define __Pyx_PyNumber_Int(x) (PyInt_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Int(x)) -#endif -#if CYTHON_USE_PYLONG_INTERNALS - #if PY_VERSION_HEX >= 0x030C00A7 - #ifndef _PyLong_SIGN_MASK - #define _PyLong_SIGN_MASK 3 - #endif - #ifndef _PyLong_NON_SIZE_BITS - #define _PyLong_NON_SIZE_BITS 3 - #endif - #define __Pyx_PyLong_Sign(x) (((PyLongObject*)x)->long_value.lv_tag & _PyLong_SIGN_MASK) - #define __Pyx_PyLong_IsNeg(x) ((__Pyx_PyLong_Sign(x) & 2) != 0) - #define __Pyx_PyLong_IsNonNeg(x) (!__Pyx_PyLong_IsNeg(x)) - #define __Pyx_PyLong_IsZero(x) (__Pyx_PyLong_Sign(x) & 1) - #define __Pyx_PyLong_IsPos(x) (__Pyx_PyLong_Sign(x) == 0) - #define __Pyx_PyLong_CompactValueUnsigned(x) (__Pyx_PyLong_Digits(x)[0]) - #define __Pyx_PyLong_DigitCount(x) ((Py_ssize_t) (((PyLongObject*)x)->long_value.lv_tag >> _PyLong_NON_SIZE_BITS)) - #define __Pyx_PyLong_SignedDigitCount(x)\ - ((1 - (Py_ssize_t) __Pyx_PyLong_Sign(x)) * __Pyx_PyLong_DigitCount(x)) - #if defined(PyUnstable_Long_IsCompact) && defined(PyUnstable_Long_CompactValue) - #define __Pyx_PyLong_IsCompact(x) PyUnstable_Long_IsCompact((PyLongObject*) x) - #define __Pyx_PyLong_CompactValue(x) PyUnstable_Long_CompactValue((PyLongObject*) x) - #else - #define __Pyx_PyLong_IsCompact(x) (((PyLongObject*)x)->long_value.lv_tag < (2 << _PyLong_NON_SIZE_BITS)) - #define __Pyx_PyLong_CompactValue(x) ((1 - (Py_ssize_t) __Pyx_PyLong_Sign(x)) * (Py_ssize_t) __Pyx_PyLong_Digits(x)[0]) - #endif - typedef Py_ssize_t __Pyx_compact_pylong; - typedef size_t __Pyx_compact_upylong; - #else // Py < 3.12 - #define __Pyx_PyLong_IsNeg(x) (Py_SIZE(x) < 0) - #define __Pyx_PyLong_IsNonNeg(x) (Py_SIZE(x) >= 0) - #define __Pyx_PyLong_IsZero(x) (Py_SIZE(x) == 0) - #define __Pyx_PyLong_IsPos(x) (Py_SIZE(x) > 0) - #define __Pyx_PyLong_CompactValueUnsigned(x) ((Py_SIZE(x) == 0) ? 0 : __Pyx_PyLong_Digits(x)[0]) - #define __Pyx_PyLong_DigitCount(x) __Pyx_sst_abs(Py_SIZE(x)) - #define __Pyx_PyLong_SignedDigitCount(x) Py_SIZE(x) - #define __Pyx_PyLong_IsCompact(x) (Py_SIZE(x) == 0 || Py_SIZE(x) == 1 || Py_SIZE(x) == -1) - #define __Pyx_PyLong_CompactValue(x)\ - ((Py_SIZE(x) == 0) ? (sdigit) 0 : ((Py_SIZE(x) < 0) ? 
-(sdigit)__Pyx_PyLong_Digits(x)[0] : (sdigit)__Pyx_PyLong_Digits(x)[0])) - typedef sdigit __Pyx_compact_pylong; - typedef digit __Pyx_compact_upylong; - #endif - #if PY_VERSION_HEX >= 0x030C00A5 - #define __Pyx_PyLong_Digits(x) (((PyLongObject*)x)->long_value.ob_digit) - #else - #define __Pyx_PyLong_Digits(x) (((PyLongObject*)x)->ob_digit) - #endif -#endif -#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII -static int __Pyx_sys_getdefaultencoding_not_ascii; -static int __Pyx_init_sys_getdefaultencoding_params(void) { - PyObject* sys; - PyObject* default_encoding = NULL; - PyObject* ascii_chars_u = NULL; - PyObject* ascii_chars_b = NULL; - const char* default_encoding_c; - sys = PyImport_ImportModule("sys"); - if (!sys) goto bad; - default_encoding = PyObject_CallMethod(sys, (char*) "getdefaultencoding", NULL); - Py_DECREF(sys); - if (!default_encoding) goto bad; - default_encoding_c = PyBytes_AsString(default_encoding); - if (!default_encoding_c) goto bad; - if (strcmp(default_encoding_c, "ascii") == 0) { - __Pyx_sys_getdefaultencoding_not_ascii = 0; - } else { - char ascii_chars[128]; - int c; - for (c = 0; c < 128; c++) { - ascii_chars[c] = (char) c; - } - __Pyx_sys_getdefaultencoding_not_ascii = 1; - ascii_chars_u = PyUnicode_DecodeASCII(ascii_chars, 128, NULL); - if (!ascii_chars_u) goto bad; - ascii_chars_b = PyUnicode_AsEncodedString(ascii_chars_u, default_encoding_c, NULL); - if (!ascii_chars_b || !PyBytes_Check(ascii_chars_b) || memcmp(ascii_chars, PyBytes_AS_STRING(ascii_chars_b), 128) != 0) { - PyErr_Format( - PyExc_ValueError, - "This module compiled with c_string_encoding=ascii, but default encoding '%.200s' is not a superset of ascii.", - default_encoding_c); - goto bad; - } - Py_DECREF(ascii_chars_u); - Py_DECREF(ascii_chars_b); - } - Py_DECREF(default_encoding); - return 0; -bad: - Py_XDECREF(default_encoding); - Py_XDECREF(ascii_chars_u); - Py_XDECREF(ascii_chars_b); - return -1; -} -#endif -#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT && PY_MAJOR_VERSION >= 3 -#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_DecodeUTF8(c_str, size, NULL) -#else -#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_Decode(c_str, size, __PYX_DEFAULT_STRING_ENCODING, NULL) -#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT -static char* __PYX_DEFAULT_STRING_ENCODING; -static int __Pyx_init_sys_getdefaultencoding_params(void) { - PyObject* sys; - PyObject* default_encoding = NULL; - char* default_encoding_c; - sys = PyImport_ImportModule("sys"); - if (!sys) goto bad; - default_encoding = PyObject_CallMethod(sys, (char*) (const char*) "getdefaultencoding", NULL); - Py_DECREF(sys); - if (!default_encoding) goto bad; - default_encoding_c = PyBytes_AsString(default_encoding); - if (!default_encoding_c) goto bad; - __PYX_DEFAULT_STRING_ENCODING = (char*) malloc(strlen(default_encoding_c) + 1); - if (!__PYX_DEFAULT_STRING_ENCODING) goto bad; - strcpy(__PYX_DEFAULT_STRING_ENCODING, default_encoding_c); - Py_DECREF(default_encoding); - return 0; -bad: - Py_XDECREF(default_encoding); - return -1; -} -#endif -#endif - - -/* Test for GCC > 2.95 */ -#if defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95))) - #define likely(x) __builtin_expect(!!(x), 1) - #define unlikely(x) __builtin_expect(!!(x), 0) -#else /* !__GNUC__ or GCC < 2.95 */ - #define likely(x) (x) - #define unlikely(x) (x) -#endif /* __GNUC__ */ -static CYTHON_INLINE void __Pyx_pretend_to_initialize(void* ptr) { (void)ptr; } - -#if !CYTHON_USE_MODULE_STATE -static PyObject 
*__pyx_m = NULL; -#endif -static int __pyx_lineno; -static int __pyx_clineno = 0; -static const char * __pyx_cfilenm = __FILE__; -static const char *__pyx_filename; - -/* #### Code section: filename_table ### */ - -static const char *__pyx_f[] = { - "Lib/fontTools/feaLib/lexer.py", -}; -/* #### Code section: utility_code_proto_before_types ### */ -/* #### Code section: numeric_typedefs ### */ -/* #### Code section: complex_type_declarations ### */ -/* #### Code section: type_declarations ### */ - -/*--- Type declarations ---*/ -/* #### Code section: utility_code_proto ### */ - -/* --- Runtime support code (head) --- */ -/* Refnanny.proto */ -#ifndef CYTHON_REFNANNY - #define CYTHON_REFNANNY 0 -#endif -#if CYTHON_REFNANNY - typedef struct { - void (*INCREF)(void*, PyObject*, Py_ssize_t); - void (*DECREF)(void*, PyObject*, Py_ssize_t); - void (*GOTREF)(void*, PyObject*, Py_ssize_t); - void (*GIVEREF)(void*, PyObject*, Py_ssize_t); - void* (*SetupContext)(const char*, Py_ssize_t, const char*); - void (*FinishContext)(void**); - } __Pyx_RefNannyAPIStruct; - static __Pyx_RefNannyAPIStruct *__Pyx_RefNanny = NULL; - static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname); - #define __Pyx_RefNannyDeclarations void *__pyx_refnanny = NULL; -#ifdef WITH_THREAD - #define __Pyx_RefNannySetupContext(name, acquire_gil)\ - if (acquire_gil) {\ - PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), (__LINE__), (__FILE__));\ - PyGILState_Release(__pyx_gilstate_save);\ - } else {\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), (__LINE__), (__FILE__));\ - } - #define __Pyx_RefNannyFinishContextNogil() {\ - PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\ - __Pyx_RefNannyFinishContext();\ - PyGILState_Release(__pyx_gilstate_save);\ - } -#else - #define __Pyx_RefNannySetupContext(name, acquire_gil)\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), (__LINE__), (__FILE__)) - #define __Pyx_RefNannyFinishContextNogil() __Pyx_RefNannyFinishContext() -#endif - #define __Pyx_RefNannyFinishContextNogil() {\ - PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\ - __Pyx_RefNannyFinishContext();\ - PyGILState_Release(__pyx_gilstate_save);\ - } - #define __Pyx_RefNannyFinishContext()\ - __Pyx_RefNanny->FinishContext(&__pyx_refnanny) - #define __Pyx_INCREF(r) __Pyx_RefNanny->INCREF(__pyx_refnanny, (PyObject *)(r), (__LINE__)) - #define __Pyx_DECREF(r) __Pyx_RefNanny->DECREF(__pyx_refnanny, (PyObject *)(r), (__LINE__)) - #define __Pyx_GOTREF(r) __Pyx_RefNanny->GOTREF(__pyx_refnanny, (PyObject *)(r), (__LINE__)) - #define __Pyx_GIVEREF(r) __Pyx_RefNanny->GIVEREF(__pyx_refnanny, (PyObject *)(r), (__LINE__)) - #define __Pyx_XINCREF(r) do { if((r) == NULL); else {__Pyx_INCREF(r); }} while(0) - #define __Pyx_XDECREF(r) do { if((r) == NULL); else {__Pyx_DECREF(r); }} while(0) - #define __Pyx_XGOTREF(r) do { if((r) == NULL); else {__Pyx_GOTREF(r); }} while(0) - #define __Pyx_XGIVEREF(r) do { if((r) == NULL); else {__Pyx_GIVEREF(r);}} while(0) -#else - #define __Pyx_RefNannyDeclarations - #define __Pyx_RefNannySetupContext(name, acquire_gil) - #define __Pyx_RefNannyFinishContextNogil() - #define __Pyx_RefNannyFinishContext() - #define __Pyx_INCREF(r) Py_INCREF(r) - #define __Pyx_DECREF(r) Py_DECREF(r) - #define __Pyx_GOTREF(r) - #define __Pyx_GIVEREF(r) - #define __Pyx_XINCREF(r) Py_XINCREF(r) - #define __Pyx_XDECREF(r) Py_XDECREF(r) - #define __Pyx_XGOTREF(r) - #define __Pyx_XGIVEREF(r) -#endif -#define 
__Pyx_Py_XDECREF_SET(r, v) do {\ - PyObject *tmp = (PyObject *) r;\ - r = v; Py_XDECREF(tmp);\ - } while (0) -#define __Pyx_XDECREF_SET(r, v) do {\ - PyObject *tmp = (PyObject *) r;\ - r = v; __Pyx_XDECREF(tmp);\ - } while (0) -#define __Pyx_DECREF_SET(r, v) do {\ - PyObject *tmp = (PyObject *) r;\ - r = v; __Pyx_DECREF(tmp);\ - } while (0) -#define __Pyx_CLEAR(r) do { PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);} while(0) -#define __Pyx_XCLEAR(r) do { if((r) != NULL) {PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);}} while(0) - -/* PyErrExceptionMatches.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyErr_ExceptionMatches(err) __Pyx_PyErr_ExceptionMatchesInState(__pyx_tstate, err) -static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err); -#else -#define __Pyx_PyErr_ExceptionMatches(err) PyErr_ExceptionMatches(err) -#endif - -/* PyThreadStateGet.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyThreadState_declare PyThreadState *__pyx_tstate; -#define __Pyx_PyThreadState_assign __pyx_tstate = __Pyx_PyThreadState_Current; -#if PY_VERSION_HEX >= 0x030C00A6 -#define __Pyx_PyErr_Occurred() (__pyx_tstate->current_exception != NULL) -#define __Pyx_PyErr_CurrentExceptionType() (__pyx_tstate->current_exception ? (PyObject*) Py_TYPE(__pyx_tstate->current_exception) : (PyObject*) NULL) -#else -#define __Pyx_PyErr_Occurred() (__pyx_tstate->curexc_type != NULL) -#define __Pyx_PyErr_CurrentExceptionType() (__pyx_tstate->curexc_type) -#endif -#else -#define __Pyx_PyThreadState_declare -#define __Pyx_PyThreadState_assign -#define __Pyx_PyErr_Occurred() (PyErr_Occurred() != NULL) -#define __Pyx_PyErr_CurrentExceptionType() PyErr_Occurred() -#endif - -/* PyErrFetchRestore.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyErr_Clear() __Pyx_ErrRestore(NULL, NULL, NULL) -#define __Pyx_ErrRestoreWithState(type, value, tb) __Pyx_ErrRestoreInState(PyThreadState_GET(), type, value, tb) -#define __Pyx_ErrFetchWithState(type, value, tb) __Pyx_ErrFetchInState(PyThreadState_GET(), type, value, tb) -#define __Pyx_ErrRestore(type, value, tb) __Pyx_ErrRestoreInState(__pyx_tstate, type, value, tb) -#define __Pyx_ErrFetch(type, value, tb) __Pyx_ErrFetchInState(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb); -static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030C00A6 -#define __Pyx_PyErr_SetNone(exc) (Py_INCREF(exc), __Pyx_ErrRestore((exc), NULL, NULL)) -#else -#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc) -#endif -#else -#define __Pyx_PyErr_Clear() PyErr_Clear() -#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc) -#define __Pyx_ErrRestoreWithState(type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetchWithState(type, value, tb) PyErr_Fetch(type, value, tb) -#define __Pyx_ErrRestoreInState(tstate, type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetchInState(tstate, type, value, tb) PyErr_Fetch(type, value, tb) -#define __Pyx_ErrRestore(type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetch(type, value, tb) PyErr_Fetch(type, value, tb) -#endif - -/* PyObjectGetAttrStr.proto */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name); -#else -#define __Pyx_PyObject_GetAttrStr(o,n) 
PyObject_GetAttr(o,n) -#endif - -/* PyObjectGetAttrStrNoError.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStrNoError(PyObject* obj, PyObject* attr_name); - -/* GetBuiltinName.proto */ -static PyObject *__Pyx_GetBuiltinName(PyObject *name); - -/* TupleAndListFromArray.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyList_FromArray(PyObject *const *src, Py_ssize_t n); -static CYTHON_INLINE PyObject* __Pyx_PyTuple_FromArray(PyObject *const *src, Py_ssize_t n); -#endif - -/* IncludeStringH.proto */ -#include - -/* BytesEquals.proto */ -static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals); - -/* UnicodeEquals.proto */ -static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals); - -/* fastcall.proto */ -#if CYTHON_AVOID_BORROWED_REFS - #define __Pyx_Arg_VARARGS(args, i) PySequence_GetItem(args, i) -#elif CYTHON_ASSUME_SAFE_MACROS - #define __Pyx_Arg_VARARGS(args, i) PyTuple_GET_ITEM(args, i) -#else - #define __Pyx_Arg_VARARGS(args, i) PyTuple_GetItem(args, i) -#endif -#if CYTHON_AVOID_BORROWED_REFS - #define __Pyx_Arg_NewRef_VARARGS(arg) __Pyx_NewRef(arg) - #define __Pyx_Arg_XDECREF_VARARGS(arg) Py_XDECREF(arg) -#else - #define __Pyx_Arg_NewRef_VARARGS(arg) arg // no-op - #define __Pyx_Arg_XDECREF_VARARGS(arg) // no-op - arg is borrowed -#endif -#define __Pyx_NumKwargs_VARARGS(kwds) PyDict_Size(kwds) -#define __Pyx_KwValues_VARARGS(args, nargs) NULL -#define __Pyx_GetKwValue_VARARGS(kw, kwvalues, s) __Pyx_PyDict_GetItemStrWithError(kw, s) -#define __Pyx_KwargsAsDict_VARARGS(kw, kwvalues) PyDict_Copy(kw) -#if CYTHON_METH_FASTCALL - #define __Pyx_Arg_FASTCALL(args, i) args[i] - #define __Pyx_NumKwargs_FASTCALL(kwds) PyTuple_GET_SIZE(kwds) - #define __Pyx_KwValues_FASTCALL(args, nargs) ((args) + (nargs)) - static CYTHON_INLINE PyObject * __Pyx_GetKwValue_FASTCALL(PyObject *kwnames, PyObject *const *kwvalues, PyObject *s); - #define __Pyx_KwargsAsDict_FASTCALL(kw, kwvalues) _PyStack_AsDict(kwvalues, kw) - #define __Pyx_Arg_NewRef_FASTCALL(arg) arg // no-op, __Pyx_Arg_FASTCALL is direct and this needs - #define __Pyx_Arg_XDECREF_FASTCALL(arg) // no-op - arg was returned from array -#else - #define __Pyx_Arg_FASTCALL __Pyx_Arg_VARARGS - #define __Pyx_NumKwargs_FASTCALL __Pyx_NumKwargs_VARARGS - #define __Pyx_KwValues_FASTCALL __Pyx_KwValues_VARARGS - #define __Pyx_GetKwValue_FASTCALL __Pyx_GetKwValue_VARARGS - #define __Pyx_KwargsAsDict_FASTCALL __Pyx_KwargsAsDict_VARARGS - #define __Pyx_Arg_NewRef_FASTCALL(arg) __Pyx_Arg_NewRef_VARARGS(arg) - #define __Pyx_Arg_XDECREF_FASTCALL(arg) __Pyx_Arg_XDECREF_VARARGS(arg) -#endif -#if CYTHON_COMPILING_IN_CPYTHON && CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS -#define __Pyx_ArgsSlice_VARARGS(args, start, stop) __Pyx_PyTuple_FromArray(&__Pyx_Arg_VARARGS(args, start), stop - start) -#define __Pyx_ArgsSlice_FASTCALL(args, start, stop) __Pyx_PyTuple_FromArray(&__Pyx_Arg_FASTCALL(args, start), stop - start) -#else -#define __Pyx_ArgsSlice_VARARGS(args, start, stop) PyTuple_GetSlice(args, start, stop) -#define __Pyx_ArgsSlice_FASTCALL(args, start, stop) PyTuple_GetSlice(args, start, stop) -#endif - -/* RaiseArgTupleInvalid.proto */ -static void __Pyx_RaiseArgtupleInvalid(const char* func_name, int exact, - Py_ssize_t num_min, Py_ssize_t num_max, Py_ssize_t num_found); - -/* RaiseDoubleKeywords.proto */ -static void __Pyx_RaiseDoubleKeywordsError(const char* func_name, PyObject* kw_name); - -/* ParseKeywords.proto */ -static int 
__Pyx_ParseOptionalKeywords(PyObject *kwds, PyObject *const *kwvalues, - PyObject **argnames[], - PyObject *kwds2, PyObject *values[], Py_ssize_t num_pos_args, - const char* function_name); - -/* PyObjectSetAttrStr.proto */ -#if CYTHON_USE_TYPE_SLOTS -#define __Pyx_PyObject_DelAttrStr(o,n) __Pyx_PyObject_SetAttrStr(o, n, NULL) -static CYTHON_INLINE int __Pyx_PyObject_SetAttrStr(PyObject* obj, PyObject* attr_name, PyObject* value); -#else -#define __Pyx_PyObject_DelAttrStr(o,n) PyObject_DelAttr(o,n) -#define __Pyx_PyObject_SetAttrStr(o,n,v) PyObject_SetAttr(o,n,v) -#endif - -/* PyDictVersioning.proto */ -#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS -#define __PYX_DICT_VERSION_INIT ((PY_UINT64_T) -1) -#define __PYX_GET_DICT_VERSION(dict) (((PyDictObject*)(dict))->ma_version_tag) -#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var)\ - (version_var) = __PYX_GET_DICT_VERSION(dict);\ - (cache_var) = (value); -#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) {\ - static PY_UINT64_T __pyx_dict_version = 0;\ - static PyObject *__pyx_dict_cached_value = NULL;\ - if (likely(__PYX_GET_DICT_VERSION(DICT) == __pyx_dict_version)) {\ - (VAR) = __pyx_dict_cached_value;\ - } else {\ - (VAR) = __pyx_dict_cached_value = (LOOKUP);\ - __pyx_dict_version = __PYX_GET_DICT_VERSION(DICT);\ - }\ -} -static CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj); -static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj); -static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version); -#else -#define __PYX_GET_DICT_VERSION(dict) (0) -#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var) -#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) (VAR) = (LOOKUP); -#endif - -/* GetModuleGlobalName.proto */ -#if CYTHON_USE_DICT_VERSIONS -#define __Pyx_GetModuleGlobalName(var, name) do {\ - static PY_UINT64_T __pyx_dict_version = 0;\ - static PyObject *__pyx_dict_cached_value = NULL;\ - (var) = (likely(__pyx_dict_version == __PYX_GET_DICT_VERSION(__pyx_d))) ?\ - (likely(__pyx_dict_cached_value) ? 
__Pyx_NewRef(__pyx_dict_cached_value) : __Pyx_GetBuiltinName(name)) :\ - __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\ -} while(0) -#define __Pyx_GetModuleGlobalNameUncached(var, name) do {\ - PY_UINT64_T __pyx_dict_version;\ - PyObject *__pyx_dict_cached_value;\ - (var) = __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\ -} while(0) -static PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value); -#else -#define __Pyx_GetModuleGlobalName(var, name) (var) = __Pyx__GetModuleGlobalName(name) -#define __Pyx_GetModuleGlobalNameUncached(var, name) (var) = __Pyx__GetModuleGlobalName(name) -static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name); -#endif - -/* PyFunctionFastCall.proto */ -#if CYTHON_FAST_PYCALL -#if !CYTHON_VECTORCALL -#define __Pyx_PyFunction_FastCall(func, args, nargs)\ - __Pyx_PyFunction_FastCallDict((func), (args), (nargs), NULL) -static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, Py_ssize_t nargs, PyObject *kwargs); -#endif -#define __Pyx_BUILD_ASSERT_EXPR(cond)\ - (sizeof(char [1 - 2*!(cond)]) - 1) -#ifndef Py_MEMBER_SIZE -#define Py_MEMBER_SIZE(type, member) sizeof(((type *)0)->member) -#endif -#if !CYTHON_VECTORCALL -#if PY_VERSION_HEX >= 0x03080000 - #include "frameobject.h" -#if PY_VERSION_HEX >= 0x030b00a6 && !CYTHON_COMPILING_IN_LIMITED_API - #ifndef Py_BUILD_CORE - #define Py_BUILD_CORE 1 - #endif - #include "internal/pycore_frame.h" -#endif - #define __Pxy_PyFrame_Initialize_Offsets() - #define __Pyx_PyFrame_GetLocalsplus(frame) ((frame)->f_localsplus) -#else - static size_t __pyx_pyframe_localsplus_offset = 0; - #include "frameobject.h" - #define __Pxy_PyFrame_Initialize_Offsets()\ - ((void)__Pyx_BUILD_ASSERT_EXPR(sizeof(PyFrameObject) == offsetof(PyFrameObject, f_localsplus) + Py_MEMBER_SIZE(PyFrameObject, f_localsplus)),\ - (void)(__pyx_pyframe_localsplus_offset = ((size_t)PyFrame_Type.tp_basicsize) - Py_MEMBER_SIZE(PyFrameObject, f_localsplus))) - #define __Pyx_PyFrame_GetLocalsplus(frame)\ - (assert(__pyx_pyframe_localsplus_offset), (PyObject **)(((char *)(frame)) + __pyx_pyframe_localsplus_offset)) -#endif -#endif -#endif - -/* PyObjectCall.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw); -#else -#define __Pyx_PyObject_Call(func, arg, kw) PyObject_Call(func, arg, kw) -#endif - -/* PyObjectCallMethO.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg); -#endif - -/* PyObjectFastCall.proto */ -#define __Pyx_PyObject_FastCall(func, args, nargs) __Pyx_PyObject_FastCallDict(func, args, (size_t)(nargs), NULL) -static CYTHON_INLINE PyObject* __Pyx_PyObject_FastCallDict(PyObject *func, PyObject **args, size_t nargs, PyObject *kwargs); - -/* RaiseTooManyValuesToUnpack.proto */ -static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected); - -/* RaiseNeedMoreValuesToUnpack.proto */ -static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index); - -/* IterFinish.proto */ -static CYTHON_INLINE int __Pyx_IterFinish(void); - -/* UnpackItemEndCheck.proto */ -static int __Pyx_IternextUnpackEndCheck(PyObject *retval, Py_ssize_t expected); - -/* PyIntBinop.proto */ -#if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_PyInt_AddObjC(PyObject *op1, PyObject *op2, long intval, int inplace, int zerodivision_check); -#else 
-#define __Pyx_PyInt_AddObjC(op1, op2, intval, inplace, zerodivision_check)\ - (inplace ? PyNumber_InPlaceAdd(op1, op2) : PyNumber_Add(op1, op2)) -#endif - -/* PyObjectCallNoArg.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallNoArg(PyObject *func); - -/* RaiseException.proto */ -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause); - -/* GetItemInt.proto */ -#define __Pyx_GetItemInt(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_GetItemInt_Fast(o, (Py_ssize_t)i, is_list, wraparound, boundscheck) :\ - (is_list ? (PyErr_SetString(PyExc_IndexError, "list index out of range"), (PyObject*)NULL) :\ - __Pyx_GetItemInt_Generic(o, to_py_func(i)))) -#define __Pyx_GetItemInt_List(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_GetItemInt_List_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\ - (PyErr_SetString(PyExc_IndexError, "list index out of range"), (PyObject*)NULL)) -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i, - int wraparound, int boundscheck); -#define __Pyx_GetItemInt_Tuple(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_GetItemInt_Tuple_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\ - (PyErr_SetString(PyExc_IndexError, "tuple index out of range"), (PyObject*)NULL)) -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i, - int wraparound, int boundscheck); -static PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j); -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i, - int is_list, int wraparound, int boundscheck); - -/* PyObjectCallOneArg.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg); - -/* ObjectGetItem.proto */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject *__Pyx_PyObject_GetItem(PyObject *obj, PyObject *key); -#else -#define __Pyx_PyObject_GetItem(obj, key) PyObject_GetItem(obj, key) -#endif - -/* SliceObject.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetSlice( - PyObject* obj, Py_ssize_t cstart, Py_ssize_t cstop, - PyObject** py_start, PyObject** py_stop, PyObject** py_slice, - int has_cstart, int has_cstop, int wraparound); - -/* PyIntBinop.proto */ -#if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_PyInt_SubtractObjC(PyObject *op1, PyObject *op2, long intval, int inplace, int zerodivision_check); -#else -#define __Pyx_PyInt_SubtractObjC(op1, op2, intval, inplace, zerodivision_check)\ - (inplace ? PyNumber_InPlaceSubtract(op1, op2) : PyNumber_Subtract(op1, op2)) -#endif - -/* PySequenceContains.proto */ -static CYTHON_INLINE int __Pyx_PySequence_ContainsTF(PyObject* item, PyObject* seq, int eq) { - int result = PySequence_Contains(seq, item); - return unlikely(result < 0) ? result : (result == (eq == Py_EQ)); -} - -/* PyUnicodeContains.proto */ -static CYTHON_INLINE int __Pyx_PyUnicode_ContainsTF(PyObject* substring, PyObject* text, int eq) { - int result = PyUnicode_Contains(text, substring); - return unlikely(result < 0) ? 
result : (result == (eq == Py_EQ)); -} - -/* pybytes_as_double.proto */ -static double __Pyx_SlowPyString_AsDouble(PyObject *obj); -static double __Pyx__PyBytes_AsDouble(PyObject *obj, const char* start, Py_ssize_t length); -static CYTHON_INLINE double __Pyx_PyBytes_AsDouble(PyObject *obj) { - return __Pyx__PyBytes_AsDouble(obj, PyBytes_AS_STRING(obj), PyBytes_GET_SIZE(obj)); -} -static CYTHON_INLINE double __Pyx_PyByteArray_AsDouble(PyObject *obj) { - return __Pyx__PyBytes_AsDouble(obj, PyByteArray_AS_STRING(obj), PyByteArray_GET_SIZE(obj)); -} - -/* pyunicode_as_double.proto */ -#if PY_MAJOR_VERSION >= 3 && !CYTHON_COMPILING_IN_PYPY -static const char* __Pyx__PyUnicode_AsDouble_Copy(const void* data, const int kind, char* buffer, Py_ssize_t start, Py_ssize_t end) { - int last_was_punctuation; - Py_ssize_t i; - last_was_punctuation = 1; - for (i=start; i <= end; i++) { - Py_UCS4 chr = PyUnicode_READ(kind, data, i); - int is_punctuation = (chr == '_') | (chr == '.'); - *buffer = (char)chr; - buffer += (chr != '_'); - if (unlikely(chr > 127)) goto parse_failure; - if (unlikely(last_was_punctuation & is_punctuation)) goto parse_failure; - last_was_punctuation = is_punctuation; - } - if (unlikely(last_was_punctuation)) goto parse_failure; - *buffer = '\0'; - return buffer; -parse_failure: - return NULL; -} -static double __Pyx__PyUnicode_AsDouble_inf_nan(const void* data, int kind, Py_ssize_t start, Py_ssize_t length) { - int matches = 1; - Py_UCS4 chr; - Py_UCS4 sign = PyUnicode_READ(kind, data, start); - int is_signed = (sign == '-') | (sign == '+'); - start += is_signed; - length -= is_signed; - switch (PyUnicode_READ(kind, data, start)) { - #ifdef Py_NAN - case 'n': - case 'N': - if (unlikely(length != 3)) goto parse_failure; - chr = PyUnicode_READ(kind, data, start+1); - matches &= (chr == 'a') | (chr == 'A'); - chr = PyUnicode_READ(kind, data, start+2); - matches &= (chr == 'n') | (chr == 'N'); - if (unlikely(!matches)) goto parse_failure; - return (sign == '-') ? -Py_NAN : Py_NAN; - #endif - case 'i': - case 'I': - if (unlikely(length < 3)) goto parse_failure; - chr = PyUnicode_READ(kind, data, start+1); - matches &= (chr == 'n') | (chr == 'N'); - chr = PyUnicode_READ(kind, data, start+2); - matches &= (chr == 'f') | (chr == 'F'); - if (likely(length == 3 && matches)) - return (sign == '-') ? -Py_HUGE_VAL : Py_HUGE_VAL; - if (unlikely(length != 8)) goto parse_failure; - chr = PyUnicode_READ(kind, data, start+3); - matches &= (chr == 'i') | (chr == 'I'); - chr = PyUnicode_READ(kind, data, start+4); - matches &= (chr == 'n') | (chr == 'N'); - chr = PyUnicode_READ(kind, data, start+5); - matches &= (chr == 'i') | (chr == 'I'); - chr = PyUnicode_READ(kind, data, start+6); - matches &= (chr == 't') | (chr == 'T'); - chr = PyUnicode_READ(kind, data, start+7); - matches &= (chr == 'y') | (chr == 'Y'); - if (unlikely(!matches)) goto parse_failure; - return (sign == '-') ? 
-Py_HUGE_VAL : Py_HUGE_VAL; - case '.': case '0': case '1': case '2': case '3': case '4': case '5': case '6': case '7': case '8': case '9': - break; - default: - goto parse_failure; - } - return 0.0; -parse_failure: - return -1.0; -} -static double __Pyx_PyUnicode_AsDouble_WithSpaces(PyObject *obj) { - double value; - const char *last; - char *end; - Py_ssize_t start, length = PyUnicode_GET_LENGTH(obj); - const int kind = PyUnicode_KIND(obj); - const void* data = PyUnicode_DATA(obj); - start = 0; - while (Py_UNICODE_ISSPACE(PyUnicode_READ(kind, data, start))) - start++; - while (start < length - 1 && Py_UNICODE_ISSPACE(PyUnicode_READ(kind, data, length - 1))) - length--; - length -= start; - if (unlikely(length <= 0)) goto fallback; - value = __Pyx__PyUnicode_AsDouble_inf_nan(data, kind, start, length); - if (unlikely(value == -1.0)) goto fallback; - if (value != 0.0) return value; - if (length < 40) { - char number[40]; - last = __Pyx__PyUnicode_AsDouble_Copy(data, kind, number, start, start + length); - if (unlikely(!last)) goto fallback; - value = PyOS_string_to_double(number, &end, NULL); - } else { - char *number = (char*) PyMem_Malloc((length + 1) * sizeof(char)); - if (unlikely(!number)) goto fallback; - last = __Pyx__PyUnicode_AsDouble_Copy(data, kind, number, start, start + length); - if (unlikely(!last)) { - PyMem_Free(number); - goto fallback; - } - value = PyOS_string_to_double(number, &end, NULL); - PyMem_Free(number); - } - if (likely(end == last) || (value == (double)-1 && PyErr_Occurred())) { - return value; - } -fallback: - return __Pyx_SlowPyString_AsDouble(obj); -} -#endif -static CYTHON_INLINE double __Pyx_PyUnicode_AsDouble(PyObject *obj) { -#if PY_MAJOR_VERSION >= 3 && !CYTHON_COMPILING_IN_PYPY - if (unlikely(__Pyx_PyUnicode_READY(obj) == -1)) - return (double)-1; - if (likely(PyUnicode_IS_ASCII(obj))) { - const char *s; - Py_ssize_t length; - s = PyUnicode_AsUTF8AndSize(obj, &length); - return __Pyx__PyBytes_AsDouble(obj, s, length); - } - return __Pyx_PyUnicode_AsDouble_WithSpaces(obj); -#else - return __Pyx_SlowPyString_AsDouble(obj); -#endif -} - -/* pynumber_float.proto */ -static CYTHON_INLINE PyObject* __Pyx__PyNumber_Float(PyObject* obj); -#define __Pyx_PyNumber_Float(x) (PyFloat_CheckExact(x) ? 
__Pyx_NewRef(x) : __Pyx__PyNumber_Float(x)) - -/* IterNext.proto */ -#define __Pyx_PyIter_Next(obj) __Pyx_PyIter_Next2(obj, NULL) -static CYTHON_INLINE PyObject *__Pyx_PyIter_Next2(PyObject *, PyObject *); - -/* GetTopmostException.proto */ -#if CYTHON_USE_EXC_INFO_STACK && CYTHON_FAST_THREAD_STATE -static _PyErr_StackItem * __Pyx_PyErr_GetTopmostException(PyThreadState *tstate); -#endif - -/* SaveResetException.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_ExceptionSave(type, value, tb) __Pyx__ExceptionSave(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#define __Pyx_ExceptionReset(type, value, tb) __Pyx__ExceptionReset(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb); -#else -#define __Pyx_ExceptionSave(type, value, tb) PyErr_GetExcInfo(type, value, tb) -#define __Pyx_ExceptionReset(type, value, tb) PyErr_SetExcInfo(type, value, tb) -#endif - -/* GetException.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_GetException(type, value, tb) __Pyx__GetException(__pyx_tstate, type, value, tb) -static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#else -static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb); -#endif - -/* PyObjectGetMethod.proto */ -static int __Pyx_PyObject_GetMethod(PyObject *obj, PyObject *name, PyObject **method); - -/* PyObjectCallMethod0.proto */ -static PyObject* __Pyx_PyObject_CallMethod0(PyObject* obj, PyObject* method_name); - -/* pop.proto */ -static CYTHON_INLINE PyObject* __Pyx__PyObject_Pop(PyObject* L); -#if CYTHON_USE_PYLIST_INTERNALS && CYTHON_ASSUME_SAFE_MACROS -static CYTHON_INLINE PyObject* __Pyx_PyList_Pop(PyObject* L); -#define __Pyx_PyObject_Pop(L) (likely(PyList_CheckExact(L)) ?\ - __Pyx_PyList_Pop(L) : __Pyx__PyObject_Pop(L)) -#else -#define __Pyx_PyList_Pop(L) __Pyx__PyObject_Pop(L) -#define __Pyx_PyObject_Pop(L) __Pyx__PyObject_Pop(L) -#endif - -/* UnpackUnboundCMethod.proto */ -typedef struct { - PyObject *type; - PyObject **method_name; - PyCFunction func; - PyObject *method; - int flag; -} __Pyx_CachedCFunction; - -/* CallUnboundCMethod0.proto */ -static PyObject* __Pyx__CallUnboundCMethod0(__Pyx_CachedCFunction* cfunc, PyObject* self); -#if CYTHON_COMPILING_IN_CPYTHON -#define __Pyx_CallUnboundCMethod0(cfunc, self)\ - (likely((cfunc)->func) ?\ - (likely((cfunc)->flag == METH_NOARGS) ? (*((cfunc)->func))(self, NULL) :\ - (PY_VERSION_HEX >= 0x030600B1 && likely((cfunc)->flag == METH_FASTCALL) ?\ - (PY_VERSION_HEX >= 0x030700A0 ?\ - (*(__Pyx_PyCFunctionFast)(void*)(PyCFunction)(cfunc)->func)(self, &__pyx_empty_tuple, 0) :\ - (*(__Pyx_PyCFunctionFastWithKeywords)(void*)(PyCFunction)(cfunc)->func)(self, &__pyx_empty_tuple, 0, NULL)) :\ - (PY_VERSION_HEX >= 0x030700A0 && (cfunc)->flag == (METH_FASTCALL | METH_KEYWORDS) ?\ - (*(__Pyx_PyCFunctionFastWithKeywords)(void*)(PyCFunction)(cfunc)->func)(self, &__pyx_empty_tuple, 0, NULL) :\ - (likely((cfunc)->flag == (METH_VARARGS | METH_KEYWORDS)) ? ((*(PyCFunctionWithKeywords)(void*)(PyCFunction)(cfunc)->func)(self, __pyx_empty_tuple, NULL)) :\ - ((cfunc)->flag == METH_VARARGS ? 
(*((cfunc)->func))(self, __pyx_empty_tuple) :\ - __Pyx__CallUnboundCMethod0(cfunc, self)))))) :\ - __Pyx__CallUnboundCMethod0(cfunc, self)) -#else -#define __Pyx_CallUnboundCMethod0(cfunc, self) __Pyx__CallUnboundCMethod0(cfunc, self) -#endif - -/* ListAppend.proto */ -#if CYTHON_USE_PYLIST_INTERNALS && CYTHON_ASSUME_SAFE_MACROS -static CYTHON_INLINE int __Pyx_PyList_Append(PyObject* list, PyObject* x) { - PyListObject* L = (PyListObject*) list; - Py_ssize_t len = Py_SIZE(list); - if (likely(L->allocated > len) & likely(len > (L->allocated >> 1))) { - Py_INCREF(x); - PyList_SET_ITEM(list, len, x); - __Pyx_SET_SIZE(list, len + 1); - return 0; - } - return PyList_Append(list, x); -} -#else -#define __Pyx_PyList_Append(L,x) PyList_Append(L,x) -#endif - -/* PyObjectCall2Args.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_Call2Args(PyObject* function, PyObject* arg1, PyObject* arg2); - -/* PyObjectCallMethod1.proto */ -static PyObject* __Pyx_PyObject_CallMethod1(PyObject* obj, PyObject* method_name, PyObject* arg); - -/* append.proto */ -static CYTHON_INLINE int __Pyx_PyObject_Append(PyObject* L, PyObject* x); - -/* FastTypeChecks.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -#define __Pyx_TypeCheck(obj, type) __Pyx_IsSubtype(Py_TYPE(obj), (PyTypeObject *)type) -#define __Pyx_TypeCheck2(obj, type1, type2) __Pyx_IsAnySubtype2(Py_TYPE(obj), (PyTypeObject *)type1, (PyTypeObject *)type2) -static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b); -static CYTHON_INLINE int __Pyx_IsAnySubtype2(PyTypeObject *cls, PyTypeObject *a, PyTypeObject *b); -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject *type); -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *type1, PyObject *type2); -#else -#define __Pyx_TypeCheck(obj, type) PyObject_TypeCheck(obj, (PyTypeObject *)type) -#define __Pyx_TypeCheck2(obj, type1, type2) (PyObject_TypeCheck(obj, (PyTypeObject *)type1) || PyObject_TypeCheck(obj, (PyTypeObject *)type2)) -#define __Pyx_PyErr_GivenExceptionMatches(err, type) PyErr_GivenExceptionMatches(err, type) -#define __Pyx_PyErr_GivenExceptionMatches2(err, type1, type2) (PyErr_GivenExceptionMatches(err, type1) || PyErr_GivenExceptionMatches(err, type2)) -#endif -#define __Pyx_PyErr_ExceptionMatches2(err1, err2) __Pyx_PyErr_GivenExceptionMatches2(__Pyx_PyErr_CurrentExceptionType(), err1, err2) -#define __Pyx_PyException_Check(obj) __Pyx_TypeCheck(obj, PyExc_Exception) - -/* SwapException.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_ExceptionSwap(type, value, tb) __Pyx__ExceptionSwap(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx__ExceptionSwap(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#else -static CYTHON_INLINE void __Pyx_ExceptionSwap(PyObject **type, PyObject **value, PyObject **tb); -#endif - -/* GetAttr.proto */ -static CYTHON_INLINE PyObject *__Pyx_GetAttr(PyObject *, PyObject *); - -/* HasAttr.proto */ -static CYTHON_INLINE int __Pyx_HasAttr(PyObject *, PyObject *); - -/* GetAttr3.proto */ -static CYTHON_INLINE PyObject *__Pyx_GetAttr3(PyObject *, PyObject *, PyObject *); - -/* Import.proto */ -static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level); - -/* ImportFrom.proto */ -static PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name); - -/* ImportDottedModule.proto */ -static PyObject *__Pyx_ImportDottedModule(PyObject *name, PyObject *parts_tuple); -#if PY_MAJOR_VERSION >= 3 -static PyObject 
*__Pyx_ImportDottedModule_WalkParts(PyObject *module, PyObject *name, PyObject *parts_tuple); -#endif - -/* Py3UpdateBases.proto */ -static PyObject* __Pyx_PEP560_update_bases(PyObject *bases); - -/* CalculateMetaclass.proto */ -static PyObject *__Pyx_CalculateMetaclass(PyTypeObject *metaclass, PyObject *bases); - -/* SetNameInClass.proto */ -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1 -#define __Pyx_SetNameInClass(ns, name, value)\ - (likely(PyDict_CheckExact(ns)) ? _PyDict_SetItem_KnownHash(ns, name, value, ((PyASCIIObject *) name)->hash) : PyObject_SetItem(ns, name, value)) -#elif CYTHON_COMPILING_IN_CPYTHON -#define __Pyx_SetNameInClass(ns, name, value)\ - (likely(PyDict_CheckExact(ns)) ? PyDict_SetItem(ns, name, value) : PyObject_SetItem(ns, name, value)) -#else -#define __Pyx_SetNameInClass(ns, name, value) PyObject_SetItem(ns, name, value) -#endif - -/* IncludeStructmemberH.proto */ -#include <structmember.h> - -/* FixUpExtensionType.proto */ -#if CYTHON_USE_TYPE_SPECS -static int __Pyx_fix_up_extension_type_from_spec(PyType_Spec *spec, PyTypeObject *type); -#endif - -/* FetchSharedCythonModule.proto */ -static PyObject *__Pyx_FetchSharedCythonABIModule(void); - -/* FetchCommonType.proto */ -#if !CYTHON_USE_TYPE_SPECS -static PyTypeObject* __Pyx_FetchCommonType(PyTypeObject* type); -#else -static PyTypeObject* __Pyx_FetchCommonTypeFromSpec(PyObject *module, PyType_Spec *spec, PyObject *bases); -#endif - -/* PyMethodNew.proto */ -#if CYTHON_COMPILING_IN_LIMITED_API -static PyObject *__Pyx_PyMethod_New(PyObject *func, PyObject *self, PyObject *typ) { - PyObject *typesModule=NULL, *methodType=NULL, *result=NULL; - CYTHON_UNUSED_VAR(typ); - if (!self) - return __Pyx_NewRef(func); - typesModule = PyImport_ImportModule("types"); - if (!typesModule) return NULL; - methodType = PyObject_GetAttrString(typesModule, "MethodType"); - Py_DECREF(typesModule); - if (!methodType) return NULL; - result = PyObject_CallFunctionObjArgs(methodType, func, self, NULL); - Py_DECREF(methodType); - return result; -} -#elif PY_MAJOR_VERSION >= 3 -static PyObject *__Pyx_PyMethod_New(PyObject *func, PyObject *self, PyObject *typ) { - CYTHON_UNUSED_VAR(typ); - if (!self) - return __Pyx_NewRef(func); - return PyMethod_New(func, self); -} -#else - #define __Pyx_PyMethod_New PyMethod_New -#endif - -/* PyVectorcallFastCallDict.proto */ -#if CYTHON_METH_FASTCALL -static CYTHON_INLINE PyObject *__Pyx_PyVectorcall_FastCallDict(PyObject *func, __pyx_vectorcallfunc vc, PyObject *const *args, size_t nargs, PyObject *kw); -#endif - -/* CythonFunctionShared.proto */ -#define __Pyx_CyFunction_USED -#define __Pyx_CYFUNCTION_STATICMETHOD 0x01 -#define __Pyx_CYFUNCTION_CLASSMETHOD 0x02 -#define __Pyx_CYFUNCTION_CCLASS 0x04 -#define __Pyx_CYFUNCTION_COROUTINE 0x08 -#define __Pyx_CyFunction_GetClosure(f)\ - (((__pyx_CyFunctionObject *) (f))->func_closure) -#if PY_VERSION_HEX < 0x030900B1 || CYTHON_COMPILING_IN_LIMITED_API - #define __Pyx_CyFunction_GetClassObj(f)\ - (((__pyx_CyFunctionObject *) (f))->func_classobj) -#else - #define __Pyx_CyFunction_GetClassObj(f)\ - ((PyObject*) ((PyCMethodObject *) (f))->mm_class) -#endif -#define __Pyx_CyFunction_SetClassObj(f, classobj)\ - __Pyx__CyFunction_SetClassObj((__pyx_CyFunctionObject *) (f), (classobj)) -#define __Pyx_CyFunction_Defaults(type, f)\ - ((type *)(((__pyx_CyFunctionObject *) (f))->defaults)) -#define __Pyx_CyFunction_SetDefaultsGetter(f, g)\ - ((__pyx_CyFunctionObject *) (f))->defaults_getter = (g) -typedef struct { -#if CYTHON_COMPILING_IN_LIMITED_API - PyObject_HEAD - 
PyObject *func; -#elif PY_VERSION_HEX < 0x030900B1 - PyCFunctionObject func; -#else - PyCMethodObject func; -#endif -#if CYTHON_BACKPORT_VECTORCALL - __pyx_vectorcallfunc func_vectorcall; -#endif -#if PY_VERSION_HEX < 0x030500A0 || CYTHON_COMPILING_IN_LIMITED_API - PyObject *func_weakreflist; -#endif - PyObject *func_dict; - PyObject *func_name; - PyObject *func_qualname; - PyObject *func_doc; - PyObject *func_globals; - PyObject *func_code; - PyObject *func_closure; -#if PY_VERSION_HEX < 0x030900B1 || CYTHON_COMPILING_IN_LIMITED_API - PyObject *func_classobj; -#endif - void *defaults; - int defaults_pyobjects; - size_t defaults_size; // used by FusedFunction for copying defaults - int flags; - PyObject *defaults_tuple; - PyObject *defaults_kwdict; - PyObject *(*defaults_getter)(PyObject *); - PyObject *func_annotations; - PyObject *func_is_coroutine; -} __pyx_CyFunctionObject; -#define __Pyx_CyFunction_Check(obj) __Pyx_TypeCheck(obj, __pyx_CyFunctionType) -#define __Pyx_IsCyOrPyCFunction(obj) __Pyx_TypeCheck2(obj, __pyx_CyFunctionType, &PyCFunction_Type) -#define __Pyx_CyFunction_CheckExact(obj) __Pyx_IS_TYPE(obj, __pyx_CyFunctionType) -static PyObject *__Pyx_CyFunction_Init(__pyx_CyFunctionObject* op, PyMethodDef *ml, - int flags, PyObject* qualname, - PyObject *closure, - PyObject *module, PyObject *globals, - PyObject* code); -static CYTHON_INLINE void __Pyx__CyFunction_SetClassObj(__pyx_CyFunctionObject* f, PyObject* classobj); -static CYTHON_INLINE void *__Pyx_CyFunction_InitDefaults(PyObject *m, - size_t size, - int pyobjects); -static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsTuple(PyObject *m, - PyObject *tuple); -static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsKwDict(PyObject *m, - PyObject *dict); -static CYTHON_INLINE void __Pyx_CyFunction_SetAnnotationsDict(PyObject *m, - PyObject *dict); -static int __pyx_CyFunction_init(PyObject *module); -#if CYTHON_METH_FASTCALL -static PyObject * __Pyx_CyFunction_Vectorcall_NOARGS(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames); -static PyObject * __Pyx_CyFunction_Vectorcall_O(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames); -static PyObject * __Pyx_CyFunction_Vectorcall_FASTCALL_KEYWORDS(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames); -static PyObject * __Pyx_CyFunction_Vectorcall_FASTCALL_KEYWORDS_METHOD(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames); -#if CYTHON_BACKPORT_VECTORCALL -#define __Pyx_CyFunction_func_vectorcall(f) (((__pyx_CyFunctionObject*)f)->func_vectorcall) -#else -#define __Pyx_CyFunction_func_vectorcall(f) (((PyCFunctionObject*)f)->vectorcall) -#endif -#endif - -/* CythonFunction.proto */ -static PyObject *__Pyx_CyFunction_New(PyMethodDef *ml, - int flags, PyObject* qualname, - PyObject *closure, - PyObject *module, PyObject *globals, - PyObject* code); - -/* PyObjectLookupSpecial.proto */ -#if CYTHON_USE_PYTYPE_LOOKUP && CYTHON_USE_TYPE_SLOTS -#define __Pyx_PyObject_LookupSpecialNoError(obj, attr_name) __Pyx__PyObject_LookupSpecial(obj, attr_name, 0) -#define __Pyx_PyObject_LookupSpecial(obj, attr_name) __Pyx__PyObject_LookupSpecial(obj, attr_name, 1) -static CYTHON_INLINE PyObject* __Pyx__PyObject_LookupSpecial(PyObject* obj, PyObject* attr_name, int with_error); -#else -#define __Pyx_PyObject_LookupSpecialNoError(o,n) __Pyx_PyObject_GetAttrStrNoError(o,n) -#define __Pyx_PyObject_LookupSpecial(o,n) __Pyx_PyObject_GetAttrStr(o,n) -#endif - -/* Py3ClassCreate.proto */ -static PyObject 
*__Pyx_Py3MetaclassPrepare(PyObject *metaclass, PyObject *bases, PyObject *name, PyObject *qualname, - PyObject *mkw, PyObject *modname, PyObject *doc); -static PyObject *__Pyx_Py3ClassCreate(PyObject *metaclass, PyObject *name, PyObject *bases, PyObject *dict, - PyObject *mkw, int calculate_metaclass, int allow_py2_metaclass); - -/* CLineInTraceback.proto */ -#ifdef CYTHON_CLINE_IN_TRACEBACK -#define __Pyx_CLineForTraceback(tstate, c_line) (((CYTHON_CLINE_IN_TRACEBACK)) ? c_line : 0) -#else -static int __Pyx_CLineForTraceback(PyThreadState *tstate, int c_line); -#endif - -/* CodeObjectCache.proto */ -#if !CYTHON_COMPILING_IN_LIMITED_API -typedef struct { - PyCodeObject* code_object; - int code_line; -} __Pyx_CodeObjectCacheEntry; -struct __Pyx_CodeObjectCache { - int count; - int max_count; - __Pyx_CodeObjectCacheEntry* entries; -}; -static struct __Pyx_CodeObjectCache __pyx_code_cache = {0,0,NULL}; -static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line); -static PyCodeObject *__pyx_find_code_object(int code_line); -static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object); -#endif - -/* AddTraceback.proto */ -static void __Pyx_AddTraceback(const char *funcname, int c_line, - int py_line, const char *filename); - -/* GCCDiagnostics.proto */ -#if !defined(__INTEL_COMPILER) && defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6)) -#define __Pyx_HAS_GCC_DIAGNOSTIC -#endif - -/* CIntToPy.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value); - -/* FormatTypeName.proto */ -#if CYTHON_COMPILING_IN_LIMITED_API -typedef PyObject *__Pyx_TypeName; -#define __Pyx_FMT_TYPENAME "%U" -static __Pyx_TypeName __Pyx_PyType_GetName(PyTypeObject* tp); -#define __Pyx_DECREF_TypeName(obj) Py_XDECREF(obj) -#else -typedef const char *__Pyx_TypeName; -#define __Pyx_FMT_TYPENAME "%.200s" -#define __Pyx_PyType_GetName(tp) ((tp)->tp_name) -#define __Pyx_DECREF_TypeName(obj) -#endif - -/* CIntFromPy.proto */ -static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *); - -/* CIntFromPy.proto */ -static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *); - -/* CheckBinaryVersion.proto */ -static int __Pyx_check_binary_version(void); - -/* InitStrings.proto */ -static int __Pyx_InitStrings(__Pyx_StringTabEntry *t); - -/* #### Code section: module_declarations ### */ - -/* Module declarations from "cython" */ - -/* Module declarations from "fontTools.feaLib.lexer" */ -/* #### Code section: typeinfo ### */ -/* #### Code section: before_global_var ### */ -#define __Pyx_MODULE_NAME "fontTools.feaLib.lexer" -extern int __pyx_module_is_main_fontTools__feaLib__lexer; -int __pyx_module_is_main_fontTools__feaLib__lexer = 0; - -/* Implementation of "fontTools.feaLib.lexer" */ -/* #### Code section: global_var ### */ -static PyObject *__pyx_builtin_ImportError; -static PyObject *__pyx_builtin_object; -static PyObject *__pyx_builtin_staticmethod; -static PyObject *__pyx_builtin_StopIteration; -static PyObject *__pyx_builtin_open; -/* #### Code section: string_decls ### */ -static const char __pyx_k_[] = "\n"; -static const char __pyx_k_0[] = "0"; -static const char __pyx_k_p[] = "p"; -static const char __pyx_k_r[] = "r"; -static const char __pyx_k_s[] = "}\\s*"; -static const char __pyx_k__2[] = "\r"; -static const char __pyx_k__3[] = "#"; -static const char __pyx_k__4[] = "("; -static const char __pyx_k__5[] = ")"; -static const char __pyx_k__6[] = "\\"; -static const char __pyx_k__7[] = "@"; -static const char __pyx_k__8[] 
= "."; -static const char __pyx_k__9[] = "-"; -static const char __pyx_k_os[] = "os"; -static const char __pyx_k_re[] = "re"; -static const char __pyx_k_xX[] = "xX"; -static const char __pyx_k_CID[] = "CID"; -static const char __pyx_k__10[] = "\""; -static const char __pyx_k__11[] = "[\r\n]"; -static const char __pyx_k__12[] = ""; -static const char __pyx_k__13[] = "*"; -static const char __pyx_k__16[] = " \t"; -static const char __pyx_k__17[] = "\r\n"; -static const char __pyx_k__18[] = ",;:-+'{}[]<>()="; -static const char __pyx_k__19[] = "_+*:.^~!\\"; -static const char __pyx_k__20[] = "_.+*:^~!/-"; -static const char __pyx_k__51[] = "?"; -static const char __pyx_k_doc[] = "__doc__"; -static const char __pyx_k_err[] = "err"; -static const char __pyx_k_pop[] = "pop"; -static const char __pyx_k_pos[] = "pos_"; -static const char __pyx_k_s_2[] = "\\s*;"; -static const char __pyx_k_sub[] = "sub"; -static const char __pyx_k_tag[] = "tag"; -static const char __pyx_k_NAME[] = "NAME"; -static const char __pyx_k_data[] = "data"; -static const char __pyx_k_dict[] = "__dict__"; -static const char __pyx_k_init[] = "__init__"; -static const char __pyx_k_iter[] = "__iter__"; -static const char __pyx_k_join[] = "join"; -static const char __pyx_k_line[] = "line_"; -static const char __pyx_k_main[] = "__main__"; -static const char __pyx_k_mode[] = "mode_"; -static const char __pyx_k_name[] = "name"; -static const char __pyx_k_next[] = "__next__"; -static const char __pyx_k_open[] = "open"; -static const char __pyx_k_path[] = "path"; -static const char __pyx_k_read[] = "read"; -static const char __pyx_k_self[] = "self"; -static const char __pyx_k_spec[] = "__spec__"; -static const char __pyx_k_test[] = "__test__"; -static const char __pyx_k_text[] = "text"; -static const char __pyx_k_FLOAT[] = "FLOAT"; -static const char __pyx_k_Lexer[] = "Lexer"; -static const char __pyx_k_OCTAL[] = "OCTAL"; -static const char __pyx_k_close[] = "close"; -static const char __pyx_k_isabs[] = "isabs"; -static const char __pyx_k_lexer[] = "lexer"; -static const char __pyx_k_limit[] = "limit"; -static const char __pyx_k_match[] = "match"; -static const char __pyx_k_split[] = "split"; -static const char __pyx_k_start[] = "start"; -static const char __pyx_k_strip[] = "strip"; -static const char __pyx_k_super[] = "super"; -static const char __pyx_k_token[] = "token"; -static const char __pyx_k_utf_8[] = "utf-8"; -static const char __pyx_k_valid[] = "valid"; -static const char __pyx_k_NORMAL[] = "NORMAL"; -static const char __pyx_k_NUMBER[] = "NUMBER"; -static const char __pyx_k_STRING[] = "STRING"; -static const char __pyx_k_SYMBOL[] = "SYMBOL"; -static const char __pyx_k_append[] = "append"; -static const char __pyx_k_column[] = "column"; -static const char __pyx_k_getcwd[] = "getcwd"; -static const char __pyx_k_import[] = "__import__"; -static const char __pyx_k_lexers[] = "lexers_"; -static const char __pyx_k_module[] = "__module__"; -static const char __pyx_k_name_2[] = "__name__"; -static const char __pyx_k_next_2[] = "next_"; -static const char __pyx_k_next_3[] = "next"; -static const char __pyx_k_object[] = "object"; -static const char __pyx_k_regexp[] = "regexp"; -static const char __pyx_k_string[] = "string"; -static const char __pyx_k_text_2[] = "text_"; -static const char __pyx_k_COMMENT[] = "COMMENT"; -static const char __pyx_k_NEWLINE[] = "NEWLINE"; -static const char __pyx_k_NUMBERS[] = "NUMBERS"; -static const char __pyx_k_closing[] = "closing"; -static const char __pyx_k_compile[] = "compile"; -static const char 
__pyx_k_curpath[] = "curpath"; -static const char __pyx_k_dirname[] = "dirname"; -static const char __pyx_k_fileobj[] = "fileobj"; -static const char __pyx_k_include[] = "include"; -static const char __pyx_k_prepare[] = "__prepare__"; -static const char __pyx_k_stop_at[] = "stop_at"; -static const char __pyx_k_FILENAME[] = "FILENAME"; -static const char __pyx_k_cur_char[] = "cur_char"; -static const char __pyx_k_encoding[] = "encoding"; -static const char __pyx_k_features[] = ""; -static const char __pyx_k_filename[] = "filename"; -static const char __pyx_k_location[] = "location_"; -static const char __pyx_k_maxsplit[] = "maxsplit"; -static const char __pyx_k_qualname[] = "__qualname__"; -static const char __pyx_k_set_name[] = "__set_name__"; -static const char __pyx_k_metaclass[] = "__metaclass__"; -static const char __pyx_k_next_char[] = "next_char"; -static const char __pyx_k_scan_over[] = "scan_over_"; -static const char __pyx_k_0123456789[] = "0123456789"; -static const char __pyx_k_A_Za_z_0_9[] = "^[A-Za-z_0-9.\\-]+$"; -static const char __pyx_k_CHAR_DIGIT[] = "CHAR_DIGIT_"; -static const char __pyx_k_GLYPHCLASS[] = "GLYPHCLASS"; -static const char __pyx_k_Lexer_next[] = "Lexer.next"; -static const char __pyx_k_filename_2[] = "filename_"; -static const char __pyx_k_fname_type[] = "fname_type"; -static const char __pyx_k_glyphclass[] = "glyphclass"; -static const char __pyx_k_includeDir[] = "includeDir"; -static const char __pyx_k_line_start[] = "line_start_"; -static const char __pyx_k_location_2[] = "location"; -static const char __pyx_k_make_lexer[] = "make_lexer_"; -static const char __pyx_k_scan_until[] = "scan_until_"; -static const char __pyx_k_token_type[] = "token_type"; -static const char __pyx_k_CHAR_LETTER[] = "CHAR_LETTER_"; -static const char __pyx_k_CHAR_SYMBOL[] = "CHAR_SYMBOL_"; -static const char __pyx_k_HEXADECIMAL[] = "HEXADECIMAL"; -static const char __pyx_k_ImportError[] = "ImportError"; -static const char __pyx_k_MODE_NORMAL[] = "MODE_NORMAL_"; -static const char __pyx_k_featurefile[] = "featurefile"; -static const char __pyx_k_fname_token[] = "fname_token"; -static const char __pyx_k_mro_entries[] = "__mro_entries__"; -static const char __pyx_k_text_length[] = "text_length_"; -static const char __pyx_k_CHAR_NEWLINE[] = "CHAR_NEWLINE_"; -static const char __pyx_k_Lexer___init[] = "Lexer.__init__"; -static const char __pyx_k_Lexer___iter[] = "Lexer.__iter__"; -static const char __pyx_k_Lexer___next[] = "Lexer.__next__"; -static const char __pyx_k_Lexer_next_2[] = "Lexer.next_"; -static const char __pyx_k_file_or_path[] = "file_or_path"; -static const char __pyx_k_initializing[] = "_initializing"; -static const char __pyx_k_is_coroutine[] = "_is_coroutine"; -static const char __pyx_k_staticmethod[] = "staticmethod"; -static const char __pyx_k_CHAR_HEXDIGIT[] = "CHAR_HEXDIGIT_"; -static const char __pyx_k_MODE_FILENAME[] = "MODE_FILENAME_"; -static const char __pyx_k_RE_GLYPHCLASS[] = "RE_GLYPHCLASS"; -static const char __pyx_k_StopIteration[] = "StopIteration"; -static const char __pyx_k_class_getitem[] = "__class_getitem__"; -static const char __pyx_k_init_subclass[] = "__init_subclass__"; -static const char __pyx_k_IncludingLexer[] = "IncludingLexer"; -static const char __pyx_k_Lexer_location[] = "Lexer.location_"; -static const char __pyx_k_fname_location[] = "fname_location"; -static const char __pyx_k_ANONYMOUS_BLOCK[] = "ANONYMOUS_BLOCK"; -static const char __pyx_k_CHAR_NAME_START[] = "CHAR_NAME_START_"; -static const char __pyx_k_CHAR_WHITESPACE[] = 
"CHAR_WHITESPACE_"; -static const char __pyx_k_FeatureLibError[] = "FeatureLibError"; -static const char __pyx_k_Lexer_scan_over[] = "Lexer.scan_over_"; -static const char __pyx_k_featurefilepath[] = "featurefilepath"; -static const char __pyx_k_Lexer_scan_until[] = "Lexer.scan_until_"; -static const char __pyx_k_FileNotFoundError[] = "FileNotFoundError"; -static const char __pyx_k_NonIncludingLexer[] = "NonIncludingLexer"; -static const char __pyx_k_Expected_file_name[] = "Expected file name"; -static const char __pyx_k_FeatureLibLocation[] = "FeatureLibLocation"; -static const char __pyx_k_asyncio_coroutines[] = "asyncio.coroutines"; -static const char __pyx_k_cline_in_traceback[] = "cline_in_traceback"; -static const char __pyx_k_IncludedFeaNotFound[] = "IncludedFeaNotFound"; -static const char __pyx_k_IncludingLexer_next[] = "IncludingLexer.next"; -static const char __pyx_k_scan_anonymous_block[] = "scan_anonymous_block"; -static const char __pyx_k_IncludingLexer___init[] = "IncludingLexer.__init__"; -static const char __pyx_k_IncludingLexer___iter[] = "IncludingLexer.__iter__"; -static const char __pyx_k_IncludingLexer___next[] = "IncludingLexer.__next__"; -static const char __pyx_k_0123456789ABCDEFabcdef[] = "0123456789ABCDEFabcdef"; -static const char __pyx_k_CHAR_NAME_CONTINUATION[] = "CHAR_NAME_CONTINUATION_"; -static const char __pyx_k_Unexpected_character_r[] = "Unexpected character: %r"; -static const char __pyx_k_fontTools_feaLib_error[] = "fontTools.feaLib.error"; -static const char __pyx_k_fontTools_feaLib_lexer[] = "fontTools.feaLib.lexer"; -static const char __pyx_k_Expected_after_file_name[] = "Expected ')' after file name"; -static const char __pyx_k_NonIncludingLexer___next[] = "NonIncludingLexer.__next__"; -static const char __pyx_k_Expected_before_file_name[] = "Expected '(' before file name"; -static const char __pyx_k_Expected_glyph_class_name[] = "Expected glyph class name"; -static const char __pyx_k_IncludingLexer_make_lexer[] = "IncludingLexer.make_lexer_"; -static const char __pyx_k_fontTools_feaLib_location[] = "fontTools.feaLib.location"; -static const char __pyx_k_Lexer_scan_anonymous_block[] = "Lexer.scan_anonymous_block"; -static const char __pyx_k_Too_many_recursive_includes[] = "Too many recursive includes"; -static const char __pyx_k_Expected_to_terminate_string[] = "Expected '\"' to terminate string"; -static const char __pyx_k_Lib_fontTools_feaLib_lexer_py[] = "Lib/fontTools/feaLib/lexer.py"; -static const char __pyx_k_ABCDEFGHIJKLMNOPQRSTUVWXYZabcdef[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"; -static const char __pyx_k_A_Lexer_that_follows_include_sta[] = "A Lexer that follows include statements.\n\n The OpenType feature file specification states that due to\n historical reasons, relative imports should be resolved in this\n order:\n\n 1. If the source font is UFO format, then relative to the UFO's\n font directory\n 2. relative to the top-level include file\n 3. 
relative to the parent include file\n\n We only support 1 (via includeDir) and 2.\n "; -static const char __pyx_k_Expected_s_to_terminate_anonymou[] = "Expected '} %s;' to terminate anonymous block"; -static const char __pyx_k_Glyph_class_names_must_consist_o[] = "Glyph class names must consist of letters, digits, underscore, period or hyphen"; -static const char __pyx_k_Glyph_class_names_must_not_be_lo[] = "Glyph class names must not be longer than 63 characters"; -static const char __pyx_k_IncludingLexer_scan_anonymous_bl[] = "IncludingLexer.scan_anonymous_block"; -static const char __pyx_k_Lexer_that_does_not_follow_inclu[] = "Lexer that does not follow `include` statements, emits them as-is."; -/* #### Code section: decls ### */ -static PyObject *__pyx_pf_9fontTools_6feaLib_5lexer_5Lexer___init__(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_text, PyObject *__pyx_v_filename); /* proto */ -static PyObject *__pyx_pf_9fontTools_6feaLib_5lexer_5Lexer_2__iter__(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_9fontTools_6feaLib_5lexer_5Lexer_4next(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_9fontTools_6feaLib_5lexer_5Lexer_6__next__(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_9fontTools_6feaLib_5lexer_5Lexer_8location_(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_9fontTools_6feaLib_5lexer_5Lexer_10next_(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_9fontTools_6feaLib_5lexer_5Lexer_12scan_over_(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_valid); /* proto */ -static PyObject *__pyx_pf_9fontTools_6feaLib_5lexer_5Lexer_14scan_until_(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_stop_at); /* proto */ -static PyObject *__pyx_pf_9fontTools_6feaLib_5lexer_5Lexer_16scan_anonymous_block(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_tag); /* proto */ -static PyObject *__pyx_pf_9fontTools_6feaLib_5lexer_14IncludingLexer___init__(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_featurefile, PyObject *__pyx_v_includeDir); /* proto */ -static PyObject *__pyx_pf_9fontTools_6feaLib_5lexer_14IncludingLexer_2__iter__(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_9fontTools_6feaLib_5lexer_14IncludingLexer_4next(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_9fontTools_6feaLib_5lexer_14IncludingLexer_6__next__(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_9fontTools_6feaLib_5lexer_14IncludingLexer_8make_lexer_(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_file_or_path); /* proto */ -static PyObject *__pyx_pf_9fontTools_6feaLib_5lexer_14IncludingLexer_10scan_anonymous_block(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_tag); /* proto */ -static PyObject *__pyx_pf_9fontTools_6feaLib_5lexer_17NonIncludingLexer___next__(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self); /* proto */ -static __Pyx_CachedCFunction __pyx_umethod_PyList_Type_pop = {0, 0, 0, 0, 0}; -/* #### Code section: late_includes ### */ -/* #### Code section: module_state ### */ -typedef struct { - PyObject *__pyx_d; - 
PyObject *__pyx_b; - PyObject *__pyx_cython_runtime; - PyObject *__pyx_empty_tuple; - PyObject *__pyx_empty_bytes; - PyObject *__pyx_empty_unicode; - #ifdef __Pyx_CyFunction_USED - PyTypeObject *__pyx_CyFunctionType; - #endif - #ifdef __Pyx_FusedFunction_USED - PyTypeObject *__pyx_FusedFunctionType; - #endif - #ifdef __Pyx_Generator_USED - PyTypeObject *__pyx_GeneratorType; - #endif - #ifdef __Pyx_IterableCoroutine_USED - PyTypeObject *__pyx_IterableCoroutineType; - #endif - #ifdef __Pyx_Coroutine_USED - PyTypeObject *__pyx_CoroutineAwaitType; - #endif - #ifdef __Pyx_Coroutine_USED - PyTypeObject *__pyx_CoroutineType; - #endif - #if CYTHON_USE_MODULE_STATE - #endif - #if CYTHON_USE_MODULE_STATE - #endif - PyObject *__pyx_kp_u_; - PyObject *__pyx_kp_u_0; - PyObject *__pyx_kp_u_0123456789; - PyObject *__pyx_kp_u_0123456789ABCDEFabcdef; - PyObject *__pyx_n_u_ABCDEFGHIJKLMNOPQRSTUVWXYZabcdef; - PyObject *__pyx_n_s_ANONYMOUS_BLOCK; - PyObject *__pyx_n_u_ANONYMOUS_BLOCK; - PyObject *__pyx_kp_s_A_Lexer_that_follows_include_sta; - PyObject *__pyx_kp_u_A_Za_z_0_9; - PyObject *__pyx_n_s_CHAR_DIGIT; - PyObject *__pyx_n_s_CHAR_HEXDIGIT; - PyObject *__pyx_n_s_CHAR_LETTER; - PyObject *__pyx_n_s_CHAR_NAME_CONTINUATION; - PyObject *__pyx_n_s_CHAR_NAME_START; - PyObject *__pyx_n_s_CHAR_NEWLINE; - PyObject *__pyx_n_s_CHAR_SYMBOL; - PyObject *__pyx_n_s_CHAR_WHITESPACE; - PyObject *__pyx_n_s_CID; - PyObject *__pyx_n_u_CID; - PyObject *__pyx_n_s_COMMENT; - PyObject *__pyx_n_u_COMMENT; - PyObject *__pyx_kp_u_Expected_after_file_name; - PyObject *__pyx_kp_u_Expected_before_file_name; - PyObject *__pyx_kp_u_Expected_file_name; - PyObject *__pyx_kp_u_Expected_glyph_class_name; - PyObject *__pyx_kp_u_Expected_s_to_terminate_anonymou; - PyObject *__pyx_kp_u_Expected_to_terminate_string; - PyObject *__pyx_n_s_FILENAME; - PyObject *__pyx_n_u_FILENAME; - PyObject *__pyx_n_s_FLOAT; - PyObject *__pyx_n_u_FLOAT; - PyObject *__pyx_n_s_FeatureLibError; - PyObject *__pyx_n_s_FeatureLibLocation; - PyObject *__pyx_n_s_FileNotFoundError; - PyObject *__pyx_n_s_GLYPHCLASS; - PyObject *__pyx_n_u_GLYPHCLASS; - PyObject *__pyx_kp_u_Glyph_class_names_must_consist_o; - PyObject *__pyx_kp_u_Glyph_class_names_must_not_be_lo; - PyObject *__pyx_n_s_HEXADECIMAL; - PyObject *__pyx_n_u_HEXADECIMAL; - PyObject *__pyx_n_s_ImportError; - PyObject *__pyx_n_s_IncludedFeaNotFound; - PyObject *__pyx_n_s_IncludingLexer; - PyObject *__pyx_n_s_IncludingLexer___init; - PyObject *__pyx_n_s_IncludingLexer___iter; - PyObject *__pyx_n_s_IncludingLexer___next; - PyObject *__pyx_n_s_IncludingLexer_make_lexer; - PyObject *__pyx_n_s_IncludingLexer_next; - PyObject *__pyx_n_s_IncludingLexer_scan_anonymous_bl; - PyObject *__pyx_n_s_Lexer; - PyObject *__pyx_n_s_Lexer___init; - PyObject *__pyx_n_s_Lexer___iter; - PyObject *__pyx_n_s_Lexer___next; - PyObject *__pyx_n_s_Lexer_location; - PyObject *__pyx_n_s_Lexer_next; - PyObject *__pyx_n_s_Lexer_next_2; - PyObject *__pyx_n_s_Lexer_scan_anonymous_block; - PyObject *__pyx_n_s_Lexer_scan_over; - PyObject *__pyx_n_s_Lexer_scan_until; - PyObject *__pyx_kp_s_Lexer_that_does_not_follow_inclu; - PyObject *__pyx_kp_s_Lib_fontTools_feaLib_lexer_py; - PyObject *__pyx_n_s_MODE_FILENAME; - PyObject *__pyx_n_s_MODE_NORMAL; - PyObject *__pyx_n_s_NAME; - PyObject *__pyx_n_u_NAME; - PyObject *__pyx_n_s_NEWLINE; - PyObject *__pyx_n_u_NEWLINE; - PyObject *__pyx_n_u_NORMAL; - PyObject *__pyx_n_s_NUMBER; - PyObject *__pyx_n_u_NUMBER; - PyObject *__pyx_n_s_NUMBERS; - PyObject *__pyx_n_s_NonIncludingLexer; - PyObject 
*__pyx_n_s_NonIncludingLexer___next; - PyObject *__pyx_n_s_OCTAL; - PyObject *__pyx_n_u_OCTAL; - PyObject *__pyx_n_s_RE_GLYPHCLASS; - PyObject *__pyx_n_s_STRING; - PyObject *__pyx_n_u_STRING; - PyObject *__pyx_n_s_SYMBOL; - PyObject *__pyx_n_u_SYMBOL; - PyObject *__pyx_n_s_StopIteration; - PyObject *__pyx_kp_u_Too_many_recursive_includes; - PyObject *__pyx_kp_u_Unexpected_character_r; - PyObject *__pyx_kp_u__10; - PyObject *__pyx_kp_u__11; - PyObject *__pyx_kp_u__12; - PyObject *__pyx_n_s__13; - PyObject *__pyx_kp_u__16; - PyObject *__pyx_kp_u__17; - PyObject *__pyx_kp_u__18; - PyObject *__pyx_kp_u__19; - PyObject *__pyx_kp_u__2; - PyObject *__pyx_kp_u__20; - PyObject *__pyx_kp_u__3; - PyObject *__pyx_kp_u__4; - PyObject *__pyx_kp_u__5; - PyObject *__pyx_n_s__51; - PyObject *__pyx_kp_u__6; - PyObject *__pyx_kp_u__7; - PyObject *__pyx_kp_u__8; - PyObject *__pyx_kp_u__9; - PyObject *__pyx_n_s_append; - PyObject *__pyx_n_s_asyncio_coroutines; - PyObject *__pyx_n_s_class_getitem; - PyObject *__pyx_n_s_cline_in_traceback; - PyObject *__pyx_n_s_close; - PyObject *__pyx_n_s_closing; - PyObject *__pyx_n_s_column; - PyObject *__pyx_n_s_compile; - PyObject *__pyx_n_s_cur_char; - PyObject *__pyx_n_s_curpath; - PyObject *__pyx_n_s_data; - PyObject *__pyx_n_s_dict; - PyObject *__pyx_n_s_dirname; - PyObject *__pyx_n_s_doc; - PyObject *__pyx_n_s_encoding; - PyObject *__pyx_n_s_err; - PyObject *__pyx_n_s_featurefile; - PyObject *__pyx_n_s_featurefilepath; - PyObject *__pyx_kp_u_features; - PyObject *__pyx_n_s_file_or_path; - PyObject *__pyx_n_s_filename; - PyObject *__pyx_n_s_filename_2; - PyObject *__pyx_n_s_fileobj; - PyObject *__pyx_n_s_fname_location; - PyObject *__pyx_n_s_fname_token; - PyObject *__pyx_n_s_fname_type; - PyObject *__pyx_n_s_fontTools_feaLib_error; - PyObject *__pyx_n_s_fontTools_feaLib_lexer; - PyObject *__pyx_n_s_fontTools_feaLib_location; - PyObject *__pyx_n_s_getcwd; - PyObject *__pyx_n_s_glyphclass; - PyObject *__pyx_n_s_import; - PyObject *__pyx_n_u_include; - PyObject *__pyx_n_s_includeDir; - PyObject *__pyx_n_s_init; - PyObject *__pyx_n_s_init_subclass; - PyObject *__pyx_n_s_initializing; - PyObject *__pyx_n_s_is_coroutine; - PyObject *__pyx_n_s_isabs; - PyObject *__pyx_n_s_iter; - PyObject *__pyx_n_s_join; - PyObject *__pyx_n_s_lexer; - PyObject *__pyx_n_s_lexers; - PyObject *__pyx_n_s_limit; - PyObject *__pyx_n_s_line; - PyObject *__pyx_n_s_line_start; - PyObject *__pyx_n_s_location; - PyObject *__pyx_n_s_location_2; - PyObject *__pyx_n_s_main; - PyObject *__pyx_n_s_make_lexer; - PyObject *__pyx_n_s_match; - PyObject *__pyx_n_s_maxsplit; - PyObject *__pyx_n_s_metaclass; - PyObject *__pyx_n_s_mode; - PyObject *__pyx_n_s_module; - PyObject *__pyx_n_s_mro_entries; - PyObject *__pyx_n_u_name; - PyObject *__pyx_n_s_name_2; - PyObject *__pyx_n_s_next; - PyObject *__pyx_n_s_next_2; - PyObject *__pyx_n_s_next_3; - PyObject *__pyx_n_s_next_char; - PyObject *__pyx_n_s_object; - PyObject *__pyx_n_s_open; - PyObject *__pyx_n_s_os; - PyObject *__pyx_n_s_p; - PyObject *__pyx_n_s_path; - PyObject *__pyx_n_s_pop; - PyObject *__pyx_n_s_pos; - PyObject *__pyx_n_s_prepare; - PyObject *__pyx_n_s_qualname; - PyObject *__pyx_n_u_r; - PyObject *__pyx_n_s_re; - PyObject *__pyx_n_s_read; - PyObject *__pyx_n_u_read; - PyObject *__pyx_n_s_regexp; - PyObject *__pyx_kp_u_s; - PyObject *__pyx_kp_u_s_2; - PyObject *__pyx_n_s_scan_anonymous_block; - PyObject *__pyx_n_s_scan_over; - PyObject *__pyx_n_s_scan_until; - PyObject *__pyx_n_s_self; - PyObject *__pyx_n_s_set_name; - PyObject *__pyx_n_s_spec; - 
PyObject *__pyx_n_s_split; - PyObject *__pyx_n_s_start; - PyObject *__pyx_n_s_staticmethod; - PyObject *__pyx_n_s_stop_at; - PyObject *__pyx_n_s_string; - PyObject *__pyx_n_s_strip; - PyObject *__pyx_n_s_sub; - PyObject *__pyx_n_s_super; - PyObject *__pyx_n_s_tag; - PyObject *__pyx_n_s_test; - PyObject *__pyx_n_s_text; - PyObject *__pyx_n_s_text_2; - PyObject *__pyx_n_s_text_length; - PyObject *__pyx_n_s_token; - PyObject *__pyx_n_s_token_type; - PyObject *__pyx_kp_u_utf_8; - PyObject *__pyx_n_s_valid; - PyObject *__pyx_n_u_xX; - PyObject *__pyx_int_0; - PyObject *__pyx_int_1; - PyObject *__pyx_int_2; - PyObject *__pyx_int_8; - PyObject *__pyx_int_10; - PyObject *__pyx_int_16; - PyObject *__pyx_tuple__14; - PyObject *__pyx_tuple__15; - PyObject *__pyx_tuple__21; - PyObject *__pyx_tuple__23; - PyObject *__pyx_tuple__26; - PyObject *__pyx_tuple__28; - PyObject *__pyx_tuple__30; - PyObject *__pyx_tuple__32; - PyObject *__pyx_tuple__34; - PyObject *__pyx_tuple__36; - PyObject *__pyx_tuple__38; - PyObject *__pyx_tuple__39; - PyObject *__pyx_tuple__40; - PyObject *__pyx_tuple__44; - PyObject *__pyx_tuple__46; - PyObject *__pyx_tuple__48; - PyObject *__pyx_codeobj__22; - PyObject *__pyx_codeobj__24; - PyObject *__pyx_codeobj__25; - PyObject *__pyx_codeobj__27; - PyObject *__pyx_codeobj__29; - PyObject *__pyx_codeobj__31; - PyObject *__pyx_codeobj__33; - PyObject *__pyx_codeobj__35; - PyObject *__pyx_codeobj__37; - PyObject *__pyx_codeobj__41; - PyObject *__pyx_codeobj__42; - PyObject *__pyx_codeobj__43; - PyObject *__pyx_codeobj__45; - PyObject *__pyx_codeobj__47; - PyObject *__pyx_codeobj__49; - PyObject *__pyx_codeobj__50; -} __pyx_mstate; - -#if CYTHON_USE_MODULE_STATE -#ifdef __cplusplus -namespace { - extern struct PyModuleDef __pyx_moduledef; -} /* anonymous namespace */ -#else -static struct PyModuleDef __pyx_moduledef; -#endif - -#define __pyx_mstate(o) ((__pyx_mstate *)__Pyx_PyModule_GetState(o)) - -#define __pyx_mstate_global (__pyx_mstate(PyState_FindModule(&__pyx_moduledef))) - -#define __pyx_m (PyState_FindModule(&__pyx_moduledef)) -#else -static __pyx_mstate __pyx_mstate_global_static = -#ifdef __cplusplus - {}; -#else - {0}; -#endif -static __pyx_mstate *__pyx_mstate_global = &__pyx_mstate_global_static; -#endif -/* #### Code section: module_state_clear ### */ -#if CYTHON_USE_MODULE_STATE -static int __pyx_m_clear(PyObject *m) { - __pyx_mstate *clear_module_state = __pyx_mstate(m); - if (!clear_module_state) return 0; - Py_CLEAR(clear_module_state->__pyx_d); - Py_CLEAR(clear_module_state->__pyx_b); - Py_CLEAR(clear_module_state->__pyx_cython_runtime); - Py_CLEAR(clear_module_state->__pyx_empty_tuple); - Py_CLEAR(clear_module_state->__pyx_empty_bytes); - Py_CLEAR(clear_module_state->__pyx_empty_unicode); - #ifdef __Pyx_CyFunction_USED - Py_CLEAR(clear_module_state->__pyx_CyFunctionType); - #endif - #ifdef __Pyx_FusedFunction_USED - Py_CLEAR(clear_module_state->__pyx_FusedFunctionType); - #endif - Py_CLEAR(clear_module_state->__pyx_kp_u_); - Py_CLEAR(clear_module_state->__pyx_kp_u_0); - Py_CLEAR(clear_module_state->__pyx_kp_u_0123456789); - Py_CLEAR(clear_module_state->__pyx_kp_u_0123456789ABCDEFabcdef); - Py_CLEAR(clear_module_state->__pyx_n_u_ABCDEFGHIJKLMNOPQRSTUVWXYZabcdef); - Py_CLEAR(clear_module_state->__pyx_n_s_ANONYMOUS_BLOCK); - Py_CLEAR(clear_module_state->__pyx_n_u_ANONYMOUS_BLOCK); - Py_CLEAR(clear_module_state->__pyx_kp_s_A_Lexer_that_follows_include_sta); - Py_CLEAR(clear_module_state->__pyx_kp_u_A_Za_z_0_9); - Py_CLEAR(clear_module_state->__pyx_n_s_CHAR_DIGIT); - 
Py_CLEAR(clear_module_state->__pyx_n_s_CHAR_HEXDIGIT); - Py_CLEAR(clear_module_state->__pyx_n_s_CHAR_LETTER); - Py_CLEAR(clear_module_state->__pyx_n_s_CHAR_NAME_CONTINUATION); - Py_CLEAR(clear_module_state->__pyx_n_s_CHAR_NAME_START); - Py_CLEAR(clear_module_state->__pyx_n_s_CHAR_NEWLINE); - Py_CLEAR(clear_module_state->__pyx_n_s_CHAR_SYMBOL); - Py_CLEAR(clear_module_state->__pyx_n_s_CHAR_WHITESPACE); - Py_CLEAR(clear_module_state->__pyx_n_s_CID); - Py_CLEAR(clear_module_state->__pyx_n_u_CID); - Py_CLEAR(clear_module_state->__pyx_n_s_COMMENT); - Py_CLEAR(clear_module_state->__pyx_n_u_COMMENT); - Py_CLEAR(clear_module_state->__pyx_kp_u_Expected_after_file_name); - Py_CLEAR(clear_module_state->__pyx_kp_u_Expected_before_file_name); - Py_CLEAR(clear_module_state->__pyx_kp_u_Expected_file_name); - Py_CLEAR(clear_module_state->__pyx_kp_u_Expected_glyph_class_name); - Py_CLEAR(clear_module_state->__pyx_kp_u_Expected_s_to_terminate_anonymou); - Py_CLEAR(clear_module_state->__pyx_kp_u_Expected_to_terminate_string); - Py_CLEAR(clear_module_state->__pyx_n_s_FILENAME); - Py_CLEAR(clear_module_state->__pyx_n_u_FILENAME); - Py_CLEAR(clear_module_state->__pyx_n_s_FLOAT); - Py_CLEAR(clear_module_state->__pyx_n_u_FLOAT); - Py_CLEAR(clear_module_state->__pyx_n_s_FeatureLibError); - Py_CLEAR(clear_module_state->__pyx_n_s_FeatureLibLocation); - Py_CLEAR(clear_module_state->__pyx_n_s_FileNotFoundError); - Py_CLEAR(clear_module_state->__pyx_n_s_GLYPHCLASS); - Py_CLEAR(clear_module_state->__pyx_n_u_GLYPHCLASS); - Py_CLEAR(clear_module_state->__pyx_kp_u_Glyph_class_names_must_consist_o); - Py_CLEAR(clear_module_state->__pyx_kp_u_Glyph_class_names_must_not_be_lo); - Py_CLEAR(clear_module_state->__pyx_n_s_HEXADECIMAL); - Py_CLEAR(clear_module_state->__pyx_n_u_HEXADECIMAL); - Py_CLEAR(clear_module_state->__pyx_n_s_ImportError); - Py_CLEAR(clear_module_state->__pyx_n_s_IncludedFeaNotFound); - Py_CLEAR(clear_module_state->__pyx_n_s_IncludingLexer); - Py_CLEAR(clear_module_state->__pyx_n_s_IncludingLexer___init); - Py_CLEAR(clear_module_state->__pyx_n_s_IncludingLexer___iter); - Py_CLEAR(clear_module_state->__pyx_n_s_IncludingLexer___next); - Py_CLEAR(clear_module_state->__pyx_n_s_IncludingLexer_make_lexer); - Py_CLEAR(clear_module_state->__pyx_n_s_IncludingLexer_next); - Py_CLEAR(clear_module_state->__pyx_n_s_IncludingLexer_scan_anonymous_bl); - Py_CLEAR(clear_module_state->__pyx_n_s_Lexer); - Py_CLEAR(clear_module_state->__pyx_n_s_Lexer___init); - Py_CLEAR(clear_module_state->__pyx_n_s_Lexer___iter); - Py_CLEAR(clear_module_state->__pyx_n_s_Lexer___next); - Py_CLEAR(clear_module_state->__pyx_n_s_Lexer_location); - Py_CLEAR(clear_module_state->__pyx_n_s_Lexer_next); - Py_CLEAR(clear_module_state->__pyx_n_s_Lexer_next_2); - Py_CLEAR(clear_module_state->__pyx_n_s_Lexer_scan_anonymous_block); - Py_CLEAR(clear_module_state->__pyx_n_s_Lexer_scan_over); - Py_CLEAR(clear_module_state->__pyx_n_s_Lexer_scan_until); - Py_CLEAR(clear_module_state->__pyx_kp_s_Lexer_that_does_not_follow_inclu); - Py_CLEAR(clear_module_state->__pyx_kp_s_Lib_fontTools_feaLib_lexer_py); - Py_CLEAR(clear_module_state->__pyx_n_s_MODE_FILENAME); - Py_CLEAR(clear_module_state->__pyx_n_s_MODE_NORMAL); - Py_CLEAR(clear_module_state->__pyx_n_s_NAME); - Py_CLEAR(clear_module_state->__pyx_n_u_NAME); - Py_CLEAR(clear_module_state->__pyx_n_s_NEWLINE); - Py_CLEAR(clear_module_state->__pyx_n_u_NEWLINE); - Py_CLEAR(clear_module_state->__pyx_n_u_NORMAL); - Py_CLEAR(clear_module_state->__pyx_n_s_NUMBER); - Py_CLEAR(clear_module_state->__pyx_n_u_NUMBER); - 
Py_CLEAR(clear_module_state->__pyx_n_s_NUMBERS); - Py_CLEAR(clear_module_state->__pyx_n_s_NonIncludingLexer); - Py_CLEAR(clear_module_state->__pyx_n_s_NonIncludingLexer___next); - Py_CLEAR(clear_module_state->__pyx_n_s_OCTAL); - Py_CLEAR(clear_module_state->__pyx_n_u_OCTAL); - Py_CLEAR(clear_module_state->__pyx_n_s_RE_GLYPHCLASS); - Py_CLEAR(clear_module_state->__pyx_n_s_STRING); - Py_CLEAR(clear_module_state->__pyx_n_u_STRING); - Py_CLEAR(clear_module_state->__pyx_n_s_SYMBOL); - Py_CLEAR(clear_module_state->__pyx_n_u_SYMBOL); - Py_CLEAR(clear_module_state->__pyx_n_s_StopIteration); - Py_CLEAR(clear_module_state->__pyx_kp_u_Too_many_recursive_includes); - Py_CLEAR(clear_module_state->__pyx_kp_u_Unexpected_character_r); - Py_CLEAR(clear_module_state->__pyx_kp_u__10); - Py_CLEAR(clear_module_state->__pyx_kp_u__11); - Py_CLEAR(clear_module_state->__pyx_kp_u__12); - Py_CLEAR(clear_module_state->__pyx_n_s__13); - Py_CLEAR(clear_module_state->__pyx_kp_u__16); - Py_CLEAR(clear_module_state->__pyx_kp_u__17); - Py_CLEAR(clear_module_state->__pyx_kp_u__18); - Py_CLEAR(clear_module_state->__pyx_kp_u__19); - Py_CLEAR(clear_module_state->__pyx_kp_u__2); - Py_CLEAR(clear_module_state->__pyx_kp_u__20); - Py_CLEAR(clear_module_state->__pyx_kp_u__3); - Py_CLEAR(clear_module_state->__pyx_kp_u__4); - Py_CLEAR(clear_module_state->__pyx_kp_u__5); - Py_CLEAR(clear_module_state->__pyx_n_s__51); - Py_CLEAR(clear_module_state->__pyx_kp_u__6); - Py_CLEAR(clear_module_state->__pyx_kp_u__7); - Py_CLEAR(clear_module_state->__pyx_kp_u__8); - Py_CLEAR(clear_module_state->__pyx_kp_u__9); - Py_CLEAR(clear_module_state->__pyx_n_s_append); - Py_CLEAR(clear_module_state->__pyx_n_s_asyncio_coroutines); - Py_CLEAR(clear_module_state->__pyx_n_s_class_getitem); - Py_CLEAR(clear_module_state->__pyx_n_s_cline_in_traceback); - Py_CLEAR(clear_module_state->__pyx_n_s_close); - Py_CLEAR(clear_module_state->__pyx_n_s_closing); - Py_CLEAR(clear_module_state->__pyx_n_s_column); - Py_CLEAR(clear_module_state->__pyx_n_s_compile); - Py_CLEAR(clear_module_state->__pyx_n_s_cur_char); - Py_CLEAR(clear_module_state->__pyx_n_s_curpath); - Py_CLEAR(clear_module_state->__pyx_n_s_data); - Py_CLEAR(clear_module_state->__pyx_n_s_dict); - Py_CLEAR(clear_module_state->__pyx_n_s_dirname); - Py_CLEAR(clear_module_state->__pyx_n_s_doc); - Py_CLEAR(clear_module_state->__pyx_n_s_encoding); - Py_CLEAR(clear_module_state->__pyx_n_s_err); - Py_CLEAR(clear_module_state->__pyx_n_s_featurefile); - Py_CLEAR(clear_module_state->__pyx_n_s_featurefilepath); - Py_CLEAR(clear_module_state->__pyx_kp_u_features); - Py_CLEAR(clear_module_state->__pyx_n_s_file_or_path); - Py_CLEAR(clear_module_state->__pyx_n_s_filename); - Py_CLEAR(clear_module_state->__pyx_n_s_filename_2); - Py_CLEAR(clear_module_state->__pyx_n_s_fileobj); - Py_CLEAR(clear_module_state->__pyx_n_s_fname_location); - Py_CLEAR(clear_module_state->__pyx_n_s_fname_token); - Py_CLEAR(clear_module_state->__pyx_n_s_fname_type); - Py_CLEAR(clear_module_state->__pyx_n_s_fontTools_feaLib_error); - Py_CLEAR(clear_module_state->__pyx_n_s_fontTools_feaLib_lexer); - Py_CLEAR(clear_module_state->__pyx_n_s_fontTools_feaLib_location); - Py_CLEAR(clear_module_state->__pyx_n_s_getcwd); - Py_CLEAR(clear_module_state->__pyx_n_s_glyphclass); - Py_CLEAR(clear_module_state->__pyx_n_s_import); - Py_CLEAR(clear_module_state->__pyx_n_u_include); - Py_CLEAR(clear_module_state->__pyx_n_s_includeDir); - Py_CLEAR(clear_module_state->__pyx_n_s_init); - Py_CLEAR(clear_module_state->__pyx_n_s_init_subclass); - 
Py_CLEAR(clear_module_state->__pyx_n_s_initializing); - Py_CLEAR(clear_module_state->__pyx_n_s_is_coroutine); - Py_CLEAR(clear_module_state->__pyx_n_s_isabs); - Py_CLEAR(clear_module_state->__pyx_n_s_iter); - Py_CLEAR(clear_module_state->__pyx_n_s_join); - Py_CLEAR(clear_module_state->__pyx_n_s_lexer); - Py_CLEAR(clear_module_state->__pyx_n_s_lexers); - Py_CLEAR(clear_module_state->__pyx_n_s_limit); - Py_CLEAR(clear_module_state->__pyx_n_s_line); - Py_CLEAR(clear_module_state->__pyx_n_s_line_start); - Py_CLEAR(clear_module_state->__pyx_n_s_location); - Py_CLEAR(clear_module_state->__pyx_n_s_location_2); - Py_CLEAR(clear_module_state->__pyx_n_s_main); - Py_CLEAR(clear_module_state->__pyx_n_s_make_lexer); - Py_CLEAR(clear_module_state->__pyx_n_s_match); - Py_CLEAR(clear_module_state->__pyx_n_s_maxsplit); - Py_CLEAR(clear_module_state->__pyx_n_s_metaclass); - Py_CLEAR(clear_module_state->__pyx_n_s_mode); - Py_CLEAR(clear_module_state->__pyx_n_s_module); - Py_CLEAR(clear_module_state->__pyx_n_s_mro_entries); - Py_CLEAR(clear_module_state->__pyx_n_u_name); - Py_CLEAR(clear_module_state->__pyx_n_s_name_2); - Py_CLEAR(clear_module_state->__pyx_n_s_next); - Py_CLEAR(clear_module_state->__pyx_n_s_next_2); - Py_CLEAR(clear_module_state->__pyx_n_s_next_3); - Py_CLEAR(clear_module_state->__pyx_n_s_next_char); - Py_CLEAR(clear_module_state->__pyx_n_s_object); - Py_CLEAR(clear_module_state->__pyx_n_s_open); - Py_CLEAR(clear_module_state->__pyx_n_s_os); - Py_CLEAR(clear_module_state->__pyx_n_s_p); - Py_CLEAR(clear_module_state->__pyx_n_s_path); - Py_CLEAR(clear_module_state->__pyx_n_s_pop); - Py_CLEAR(clear_module_state->__pyx_n_s_pos); - Py_CLEAR(clear_module_state->__pyx_n_s_prepare); - Py_CLEAR(clear_module_state->__pyx_n_s_qualname); - Py_CLEAR(clear_module_state->__pyx_n_u_r); - Py_CLEAR(clear_module_state->__pyx_n_s_re); - Py_CLEAR(clear_module_state->__pyx_n_s_read); - Py_CLEAR(clear_module_state->__pyx_n_u_read); - Py_CLEAR(clear_module_state->__pyx_n_s_regexp); - Py_CLEAR(clear_module_state->__pyx_kp_u_s); - Py_CLEAR(clear_module_state->__pyx_kp_u_s_2); - Py_CLEAR(clear_module_state->__pyx_n_s_scan_anonymous_block); - Py_CLEAR(clear_module_state->__pyx_n_s_scan_over); - Py_CLEAR(clear_module_state->__pyx_n_s_scan_until); - Py_CLEAR(clear_module_state->__pyx_n_s_self); - Py_CLEAR(clear_module_state->__pyx_n_s_set_name); - Py_CLEAR(clear_module_state->__pyx_n_s_spec); - Py_CLEAR(clear_module_state->__pyx_n_s_split); - Py_CLEAR(clear_module_state->__pyx_n_s_start); - Py_CLEAR(clear_module_state->__pyx_n_s_staticmethod); - Py_CLEAR(clear_module_state->__pyx_n_s_stop_at); - Py_CLEAR(clear_module_state->__pyx_n_s_string); - Py_CLEAR(clear_module_state->__pyx_n_s_strip); - Py_CLEAR(clear_module_state->__pyx_n_s_sub); - Py_CLEAR(clear_module_state->__pyx_n_s_super); - Py_CLEAR(clear_module_state->__pyx_n_s_tag); - Py_CLEAR(clear_module_state->__pyx_n_s_test); - Py_CLEAR(clear_module_state->__pyx_n_s_text); - Py_CLEAR(clear_module_state->__pyx_n_s_text_2); - Py_CLEAR(clear_module_state->__pyx_n_s_text_length); - Py_CLEAR(clear_module_state->__pyx_n_s_token); - Py_CLEAR(clear_module_state->__pyx_n_s_token_type); - Py_CLEAR(clear_module_state->__pyx_kp_u_utf_8); - Py_CLEAR(clear_module_state->__pyx_n_s_valid); - Py_CLEAR(clear_module_state->__pyx_n_u_xX); - Py_CLEAR(clear_module_state->__pyx_int_0); - Py_CLEAR(clear_module_state->__pyx_int_1); - Py_CLEAR(clear_module_state->__pyx_int_2); - Py_CLEAR(clear_module_state->__pyx_int_8); - Py_CLEAR(clear_module_state->__pyx_int_10); - 
Py_CLEAR(clear_module_state->__pyx_int_16); - Py_CLEAR(clear_module_state->__pyx_tuple__14); - Py_CLEAR(clear_module_state->__pyx_tuple__15); - Py_CLEAR(clear_module_state->__pyx_tuple__21); - Py_CLEAR(clear_module_state->__pyx_tuple__23); - Py_CLEAR(clear_module_state->__pyx_tuple__26); - Py_CLEAR(clear_module_state->__pyx_tuple__28); - Py_CLEAR(clear_module_state->__pyx_tuple__30); - Py_CLEAR(clear_module_state->__pyx_tuple__32); - Py_CLEAR(clear_module_state->__pyx_tuple__34); - Py_CLEAR(clear_module_state->__pyx_tuple__36); - Py_CLEAR(clear_module_state->__pyx_tuple__38); - Py_CLEAR(clear_module_state->__pyx_tuple__39); - Py_CLEAR(clear_module_state->__pyx_tuple__40); - Py_CLEAR(clear_module_state->__pyx_tuple__44); - Py_CLEAR(clear_module_state->__pyx_tuple__46); - Py_CLEAR(clear_module_state->__pyx_tuple__48); - Py_CLEAR(clear_module_state->__pyx_codeobj__22); - Py_CLEAR(clear_module_state->__pyx_codeobj__24); - Py_CLEAR(clear_module_state->__pyx_codeobj__25); - Py_CLEAR(clear_module_state->__pyx_codeobj__27); - Py_CLEAR(clear_module_state->__pyx_codeobj__29); - Py_CLEAR(clear_module_state->__pyx_codeobj__31); - Py_CLEAR(clear_module_state->__pyx_codeobj__33); - Py_CLEAR(clear_module_state->__pyx_codeobj__35); - Py_CLEAR(clear_module_state->__pyx_codeobj__37); - Py_CLEAR(clear_module_state->__pyx_codeobj__41); - Py_CLEAR(clear_module_state->__pyx_codeobj__42); - Py_CLEAR(clear_module_state->__pyx_codeobj__43); - Py_CLEAR(clear_module_state->__pyx_codeobj__45); - Py_CLEAR(clear_module_state->__pyx_codeobj__47); - Py_CLEAR(clear_module_state->__pyx_codeobj__49); - Py_CLEAR(clear_module_state->__pyx_codeobj__50); - return 0; -} -#endif -/* #### Code section: module_state_traverse ### */ -#if CYTHON_USE_MODULE_STATE -static int __pyx_m_traverse(PyObject *m, visitproc visit, void *arg) { - __pyx_mstate *traverse_module_state = __pyx_mstate(m); - if (!traverse_module_state) return 0; - Py_VISIT(traverse_module_state->__pyx_d); - Py_VISIT(traverse_module_state->__pyx_b); - Py_VISIT(traverse_module_state->__pyx_cython_runtime); - Py_VISIT(traverse_module_state->__pyx_empty_tuple); - Py_VISIT(traverse_module_state->__pyx_empty_bytes); - Py_VISIT(traverse_module_state->__pyx_empty_unicode); - #ifdef __Pyx_CyFunction_USED - Py_VISIT(traverse_module_state->__pyx_CyFunctionType); - #endif - #ifdef __Pyx_FusedFunction_USED - Py_VISIT(traverse_module_state->__pyx_FusedFunctionType); - #endif - Py_VISIT(traverse_module_state->__pyx_kp_u_); - Py_VISIT(traverse_module_state->__pyx_kp_u_0); - Py_VISIT(traverse_module_state->__pyx_kp_u_0123456789); - Py_VISIT(traverse_module_state->__pyx_kp_u_0123456789ABCDEFabcdef); - Py_VISIT(traverse_module_state->__pyx_n_u_ABCDEFGHIJKLMNOPQRSTUVWXYZabcdef); - Py_VISIT(traverse_module_state->__pyx_n_s_ANONYMOUS_BLOCK); - Py_VISIT(traverse_module_state->__pyx_n_u_ANONYMOUS_BLOCK); - Py_VISIT(traverse_module_state->__pyx_kp_s_A_Lexer_that_follows_include_sta); - Py_VISIT(traverse_module_state->__pyx_kp_u_A_Za_z_0_9); - Py_VISIT(traverse_module_state->__pyx_n_s_CHAR_DIGIT); - Py_VISIT(traverse_module_state->__pyx_n_s_CHAR_HEXDIGIT); - Py_VISIT(traverse_module_state->__pyx_n_s_CHAR_LETTER); - Py_VISIT(traverse_module_state->__pyx_n_s_CHAR_NAME_CONTINUATION); - Py_VISIT(traverse_module_state->__pyx_n_s_CHAR_NAME_START); - Py_VISIT(traverse_module_state->__pyx_n_s_CHAR_NEWLINE); - Py_VISIT(traverse_module_state->__pyx_n_s_CHAR_SYMBOL); - Py_VISIT(traverse_module_state->__pyx_n_s_CHAR_WHITESPACE); - Py_VISIT(traverse_module_state->__pyx_n_s_CID); - 
Py_VISIT(traverse_module_state->__pyx_n_u_CID); - Py_VISIT(traverse_module_state->__pyx_n_s_COMMENT); - Py_VISIT(traverse_module_state->__pyx_n_u_COMMENT); - Py_VISIT(traverse_module_state->__pyx_kp_u_Expected_after_file_name); - Py_VISIT(traverse_module_state->__pyx_kp_u_Expected_before_file_name); - Py_VISIT(traverse_module_state->__pyx_kp_u_Expected_file_name); - Py_VISIT(traverse_module_state->__pyx_kp_u_Expected_glyph_class_name); - Py_VISIT(traverse_module_state->__pyx_kp_u_Expected_s_to_terminate_anonymou); - Py_VISIT(traverse_module_state->__pyx_kp_u_Expected_to_terminate_string); - Py_VISIT(traverse_module_state->__pyx_n_s_FILENAME); - Py_VISIT(traverse_module_state->__pyx_n_u_FILENAME); - Py_VISIT(traverse_module_state->__pyx_n_s_FLOAT); - Py_VISIT(traverse_module_state->__pyx_n_u_FLOAT); - Py_VISIT(traverse_module_state->__pyx_n_s_FeatureLibError); - Py_VISIT(traverse_module_state->__pyx_n_s_FeatureLibLocation); - Py_VISIT(traverse_module_state->__pyx_n_s_FileNotFoundError); - Py_VISIT(traverse_module_state->__pyx_n_s_GLYPHCLASS); - Py_VISIT(traverse_module_state->__pyx_n_u_GLYPHCLASS); - Py_VISIT(traverse_module_state->__pyx_kp_u_Glyph_class_names_must_consist_o); - Py_VISIT(traverse_module_state->__pyx_kp_u_Glyph_class_names_must_not_be_lo); - Py_VISIT(traverse_module_state->__pyx_n_s_HEXADECIMAL); - Py_VISIT(traverse_module_state->__pyx_n_u_HEXADECIMAL); - Py_VISIT(traverse_module_state->__pyx_n_s_ImportError); - Py_VISIT(traverse_module_state->__pyx_n_s_IncludedFeaNotFound); - Py_VISIT(traverse_module_state->__pyx_n_s_IncludingLexer); - Py_VISIT(traverse_module_state->__pyx_n_s_IncludingLexer___init); - Py_VISIT(traverse_module_state->__pyx_n_s_IncludingLexer___iter); - Py_VISIT(traverse_module_state->__pyx_n_s_IncludingLexer___next); - Py_VISIT(traverse_module_state->__pyx_n_s_IncludingLexer_make_lexer); - Py_VISIT(traverse_module_state->__pyx_n_s_IncludingLexer_next); - Py_VISIT(traverse_module_state->__pyx_n_s_IncludingLexer_scan_anonymous_bl); - Py_VISIT(traverse_module_state->__pyx_n_s_Lexer); - Py_VISIT(traverse_module_state->__pyx_n_s_Lexer___init); - Py_VISIT(traverse_module_state->__pyx_n_s_Lexer___iter); - Py_VISIT(traverse_module_state->__pyx_n_s_Lexer___next); - Py_VISIT(traverse_module_state->__pyx_n_s_Lexer_location); - Py_VISIT(traverse_module_state->__pyx_n_s_Lexer_next); - Py_VISIT(traverse_module_state->__pyx_n_s_Lexer_next_2); - Py_VISIT(traverse_module_state->__pyx_n_s_Lexer_scan_anonymous_block); - Py_VISIT(traverse_module_state->__pyx_n_s_Lexer_scan_over); - Py_VISIT(traverse_module_state->__pyx_n_s_Lexer_scan_until); - Py_VISIT(traverse_module_state->__pyx_kp_s_Lexer_that_does_not_follow_inclu); - Py_VISIT(traverse_module_state->__pyx_kp_s_Lib_fontTools_feaLib_lexer_py); - Py_VISIT(traverse_module_state->__pyx_n_s_MODE_FILENAME); - Py_VISIT(traverse_module_state->__pyx_n_s_MODE_NORMAL); - Py_VISIT(traverse_module_state->__pyx_n_s_NAME); - Py_VISIT(traverse_module_state->__pyx_n_u_NAME); - Py_VISIT(traverse_module_state->__pyx_n_s_NEWLINE); - Py_VISIT(traverse_module_state->__pyx_n_u_NEWLINE); - Py_VISIT(traverse_module_state->__pyx_n_u_NORMAL); - Py_VISIT(traverse_module_state->__pyx_n_s_NUMBER); - Py_VISIT(traverse_module_state->__pyx_n_u_NUMBER); - Py_VISIT(traverse_module_state->__pyx_n_s_NUMBERS); - Py_VISIT(traverse_module_state->__pyx_n_s_NonIncludingLexer); - Py_VISIT(traverse_module_state->__pyx_n_s_NonIncludingLexer___next); - Py_VISIT(traverse_module_state->__pyx_n_s_OCTAL); - Py_VISIT(traverse_module_state->__pyx_n_u_OCTAL); - 
Py_VISIT(traverse_module_state->__pyx_n_s_RE_GLYPHCLASS); - Py_VISIT(traverse_module_state->__pyx_n_s_STRING); - Py_VISIT(traverse_module_state->__pyx_n_u_STRING); - Py_VISIT(traverse_module_state->__pyx_n_s_SYMBOL); - Py_VISIT(traverse_module_state->__pyx_n_u_SYMBOL); - Py_VISIT(traverse_module_state->__pyx_n_s_StopIteration); - Py_VISIT(traverse_module_state->__pyx_kp_u_Too_many_recursive_includes); - Py_VISIT(traverse_module_state->__pyx_kp_u_Unexpected_character_r); - Py_VISIT(traverse_module_state->__pyx_kp_u__10); - Py_VISIT(traverse_module_state->__pyx_kp_u__11); - Py_VISIT(traverse_module_state->__pyx_kp_u__12); - Py_VISIT(traverse_module_state->__pyx_n_s__13); - Py_VISIT(traverse_module_state->__pyx_kp_u__16); - Py_VISIT(traverse_module_state->__pyx_kp_u__17); - Py_VISIT(traverse_module_state->__pyx_kp_u__18); - Py_VISIT(traverse_module_state->__pyx_kp_u__19); - Py_VISIT(traverse_module_state->__pyx_kp_u__2); - Py_VISIT(traverse_module_state->__pyx_kp_u__20); - Py_VISIT(traverse_module_state->__pyx_kp_u__3); - Py_VISIT(traverse_module_state->__pyx_kp_u__4); - Py_VISIT(traverse_module_state->__pyx_kp_u__5); - Py_VISIT(traverse_module_state->__pyx_n_s__51); - Py_VISIT(traverse_module_state->__pyx_kp_u__6); - Py_VISIT(traverse_module_state->__pyx_kp_u__7); - Py_VISIT(traverse_module_state->__pyx_kp_u__8); - Py_VISIT(traverse_module_state->__pyx_kp_u__9); - Py_VISIT(traverse_module_state->__pyx_n_s_append); - Py_VISIT(traverse_module_state->__pyx_n_s_asyncio_coroutines); - Py_VISIT(traverse_module_state->__pyx_n_s_class_getitem); - Py_VISIT(traverse_module_state->__pyx_n_s_cline_in_traceback); - Py_VISIT(traverse_module_state->__pyx_n_s_close); - Py_VISIT(traverse_module_state->__pyx_n_s_closing); - Py_VISIT(traverse_module_state->__pyx_n_s_column); - Py_VISIT(traverse_module_state->__pyx_n_s_compile); - Py_VISIT(traverse_module_state->__pyx_n_s_cur_char); - Py_VISIT(traverse_module_state->__pyx_n_s_curpath); - Py_VISIT(traverse_module_state->__pyx_n_s_data); - Py_VISIT(traverse_module_state->__pyx_n_s_dict); - Py_VISIT(traverse_module_state->__pyx_n_s_dirname); - Py_VISIT(traverse_module_state->__pyx_n_s_doc); - Py_VISIT(traverse_module_state->__pyx_n_s_encoding); - Py_VISIT(traverse_module_state->__pyx_n_s_err); - Py_VISIT(traverse_module_state->__pyx_n_s_featurefile); - Py_VISIT(traverse_module_state->__pyx_n_s_featurefilepath); - Py_VISIT(traverse_module_state->__pyx_kp_u_features); - Py_VISIT(traverse_module_state->__pyx_n_s_file_or_path); - Py_VISIT(traverse_module_state->__pyx_n_s_filename); - Py_VISIT(traverse_module_state->__pyx_n_s_filename_2); - Py_VISIT(traverse_module_state->__pyx_n_s_fileobj); - Py_VISIT(traverse_module_state->__pyx_n_s_fname_location); - Py_VISIT(traverse_module_state->__pyx_n_s_fname_token); - Py_VISIT(traverse_module_state->__pyx_n_s_fname_type); - Py_VISIT(traverse_module_state->__pyx_n_s_fontTools_feaLib_error); - Py_VISIT(traverse_module_state->__pyx_n_s_fontTools_feaLib_lexer); - Py_VISIT(traverse_module_state->__pyx_n_s_fontTools_feaLib_location); - Py_VISIT(traverse_module_state->__pyx_n_s_getcwd); - Py_VISIT(traverse_module_state->__pyx_n_s_glyphclass); - Py_VISIT(traverse_module_state->__pyx_n_s_import); - Py_VISIT(traverse_module_state->__pyx_n_u_include); - Py_VISIT(traverse_module_state->__pyx_n_s_includeDir); - Py_VISIT(traverse_module_state->__pyx_n_s_init); - Py_VISIT(traverse_module_state->__pyx_n_s_init_subclass); - Py_VISIT(traverse_module_state->__pyx_n_s_initializing); - Py_VISIT(traverse_module_state->__pyx_n_s_is_coroutine); - 
Py_VISIT(traverse_module_state->__pyx_n_s_isabs); - Py_VISIT(traverse_module_state->__pyx_n_s_iter); - Py_VISIT(traverse_module_state->__pyx_n_s_join); - Py_VISIT(traverse_module_state->__pyx_n_s_lexer); - Py_VISIT(traverse_module_state->__pyx_n_s_lexers); - Py_VISIT(traverse_module_state->__pyx_n_s_limit); - Py_VISIT(traverse_module_state->__pyx_n_s_line); - Py_VISIT(traverse_module_state->__pyx_n_s_line_start); - Py_VISIT(traverse_module_state->__pyx_n_s_location); - Py_VISIT(traverse_module_state->__pyx_n_s_location_2); - Py_VISIT(traverse_module_state->__pyx_n_s_main); - Py_VISIT(traverse_module_state->__pyx_n_s_make_lexer); - Py_VISIT(traverse_module_state->__pyx_n_s_match); - Py_VISIT(traverse_module_state->__pyx_n_s_maxsplit); - Py_VISIT(traverse_module_state->__pyx_n_s_metaclass); - Py_VISIT(traverse_module_state->__pyx_n_s_mode); - Py_VISIT(traverse_module_state->__pyx_n_s_module); - Py_VISIT(traverse_module_state->__pyx_n_s_mro_entries); - Py_VISIT(traverse_module_state->__pyx_n_u_name); - Py_VISIT(traverse_module_state->__pyx_n_s_name_2); - Py_VISIT(traverse_module_state->__pyx_n_s_next); - Py_VISIT(traverse_module_state->__pyx_n_s_next_2); - Py_VISIT(traverse_module_state->__pyx_n_s_next_3); - Py_VISIT(traverse_module_state->__pyx_n_s_next_char); - Py_VISIT(traverse_module_state->__pyx_n_s_object); - Py_VISIT(traverse_module_state->__pyx_n_s_open); - Py_VISIT(traverse_module_state->__pyx_n_s_os); - Py_VISIT(traverse_module_state->__pyx_n_s_p); - Py_VISIT(traverse_module_state->__pyx_n_s_path); - Py_VISIT(traverse_module_state->__pyx_n_s_pop); - Py_VISIT(traverse_module_state->__pyx_n_s_pos); - Py_VISIT(traverse_module_state->__pyx_n_s_prepare); - Py_VISIT(traverse_module_state->__pyx_n_s_qualname); - Py_VISIT(traverse_module_state->__pyx_n_u_r); - Py_VISIT(traverse_module_state->__pyx_n_s_re); - Py_VISIT(traverse_module_state->__pyx_n_s_read); - Py_VISIT(traverse_module_state->__pyx_n_u_read); - Py_VISIT(traverse_module_state->__pyx_n_s_regexp); - Py_VISIT(traverse_module_state->__pyx_kp_u_s); - Py_VISIT(traverse_module_state->__pyx_kp_u_s_2); - Py_VISIT(traverse_module_state->__pyx_n_s_scan_anonymous_block); - Py_VISIT(traverse_module_state->__pyx_n_s_scan_over); - Py_VISIT(traverse_module_state->__pyx_n_s_scan_until); - Py_VISIT(traverse_module_state->__pyx_n_s_self); - Py_VISIT(traverse_module_state->__pyx_n_s_set_name); - Py_VISIT(traverse_module_state->__pyx_n_s_spec); - Py_VISIT(traverse_module_state->__pyx_n_s_split); - Py_VISIT(traverse_module_state->__pyx_n_s_start); - Py_VISIT(traverse_module_state->__pyx_n_s_staticmethod); - Py_VISIT(traverse_module_state->__pyx_n_s_stop_at); - Py_VISIT(traverse_module_state->__pyx_n_s_string); - Py_VISIT(traverse_module_state->__pyx_n_s_strip); - Py_VISIT(traverse_module_state->__pyx_n_s_sub); - Py_VISIT(traverse_module_state->__pyx_n_s_super); - Py_VISIT(traverse_module_state->__pyx_n_s_tag); - Py_VISIT(traverse_module_state->__pyx_n_s_test); - Py_VISIT(traverse_module_state->__pyx_n_s_text); - Py_VISIT(traverse_module_state->__pyx_n_s_text_2); - Py_VISIT(traverse_module_state->__pyx_n_s_text_length); - Py_VISIT(traverse_module_state->__pyx_n_s_token); - Py_VISIT(traverse_module_state->__pyx_n_s_token_type); - Py_VISIT(traverse_module_state->__pyx_kp_u_utf_8); - Py_VISIT(traverse_module_state->__pyx_n_s_valid); - Py_VISIT(traverse_module_state->__pyx_n_u_xX); - Py_VISIT(traverse_module_state->__pyx_int_0); - Py_VISIT(traverse_module_state->__pyx_int_1); - Py_VISIT(traverse_module_state->__pyx_int_2); - 
Py_VISIT(traverse_module_state->__pyx_int_8); - Py_VISIT(traverse_module_state->__pyx_int_10); - Py_VISIT(traverse_module_state->__pyx_int_16); - Py_VISIT(traverse_module_state->__pyx_tuple__14); - Py_VISIT(traverse_module_state->__pyx_tuple__15); - Py_VISIT(traverse_module_state->__pyx_tuple__21); - Py_VISIT(traverse_module_state->__pyx_tuple__23); - Py_VISIT(traverse_module_state->__pyx_tuple__26); - Py_VISIT(traverse_module_state->__pyx_tuple__28); - Py_VISIT(traverse_module_state->__pyx_tuple__30); - Py_VISIT(traverse_module_state->__pyx_tuple__32); - Py_VISIT(traverse_module_state->__pyx_tuple__34); - Py_VISIT(traverse_module_state->__pyx_tuple__36); - Py_VISIT(traverse_module_state->__pyx_tuple__38); - Py_VISIT(traverse_module_state->__pyx_tuple__39); - Py_VISIT(traverse_module_state->__pyx_tuple__40); - Py_VISIT(traverse_module_state->__pyx_tuple__44); - Py_VISIT(traverse_module_state->__pyx_tuple__46); - Py_VISIT(traverse_module_state->__pyx_tuple__48); - Py_VISIT(traverse_module_state->__pyx_codeobj__22); - Py_VISIT(traverse_module_state->__pyx_codeobj__24); - Py_VISIT(traverse_module_state->__pyx_codeobj__25); - Py_VISIT(traverse_module_state->__pyx_codeobj__27); - Py_VISIT(traverse_module_state->__pyx_codeobj__29); - Py_VISIT(traverse_module_state->__pyx_codeobj__31); - Py_VISIT(traverse_module_state->__pyx_codeobj__33); - Py_VISIT(traverse_module_state->__pyx_codeobj__35); - Py_VISIT(traverse_module_state->__pyx_codeobj__37); - Py_VISIT(traverse_module_state->__pyx_codeobj__41); - Py_VISIT(traverse_module_state->__pyx_codeobj__42); - Py_VISIT(traverse_module_state->__pyx_codeobj__43); - Py_VISIT(traverse_module_state->__pyx_codeobj__45); - Py_VISIT(traverse_module_state->__pyx_codeobj__47); - Py_VISIT(traverse_module_state->__pyx_codeobj__49); - Py_VISIT(traverse_module_state->__pyx_codeobj__50); - return 0; -} -#endif -/* #### Code section: module_state_defines ### */ -#define __pyx_d __pyx_mstate_global->__pyx_d -#define __pyx_b __pyx_mstate_global->__pyx_b -#define __pyx_cython_runtime __pyx_mstate_global->__pyx_cython_runtime -#define __pyx_empty_tuple __pyx_mstate_global->__pyx_empty_tuple -#define __pyx_empty_bytes __pyx_mstate_global->__pyx_empty_bytes -#define __pyx_empty_unicode __pyx_mstate_global->__pyx_empty_unicode -#ifdef __Pyx_CyFunction_USED -#define __pyx_CyFunctionType __pyx_mstate_global->__pyx_CyFunctionType -#endif -#ifdef __Pyx_FusedFunction_USED -#define __pyx_FusedFunctionType __pyx_mstate_global->__pyx_FusedFunctionType -#endif -#ifdef __Pyx_Generator_USED -#define __pyx_GeneratorType __pyx_mstate_global->__pyx_GeneratorType -#endif -#ifdef __Pyx_IterableCoroutine_USED -#define __pyx_IterableCoroutineType __pyx_mstate_global->__pyx_IterableCoroutineType -#endif -#ifdef __Pyx_Coroutine_USED -#define __pyx_CoroutineAwaitType __pyx_mstate_global->__pyx_CoroutineAwaitType -#endif -#ifdef __Pyx_Coroutine_USED -#define __pyx_CoroutineType __pyx_mstate_global->__pyx_CoroutineType -#endif -#if CYTHON_USE_MODULE_STATE -#endif -#if CYTHON_USE_MODULE_STATE -#endif -#define __pyx_kp_u_ __pyx_mstate_global->__pyx_kp_u_ -#define __pyx_kp_u_0 __pyx_mstate_global->__pyx_kp_u_0 -#define __pyx_kp_u_0123456789 __pyx_mstate_global->__pyx_kp_u_0123456789 -#define __pyx_kp_u_0123456789ABCDEFabcdef __pyx_mstate_global->__pyx_kp_u_0123456789ABCDEFabcdef -#define __pyx_n_u_ABCDEFGHIJKLMNOPQRSTUVWXYZabcdef __pyx_mstate_global->__pyx_n_u_ABCDEFGHIJKLMNOPQRSTUVWXYZabcdef -#define __pyx_n_s_ANONYMOUS_BLOCK __pyx_mstate_global->__pyx_n_s_ANONYMOUS_BLOCK -#define 
__pyx_n_u_ANONYMOUS_BLOCK __pyx_mstate_global->__pyx_n_u_ANONYMOUS_BLOCK -#define __pyx_kp_s_A_Lexer_that_follows_include_sta __pyx_mstate_global->__pyx_kp_s_A_Lexer_that_follows_include_sta -#define __pyx_kp_u_A_Za_z_0_9 __pyx_mstate_global->__pyx_kp_u_A_Za_z_0_9 -#define __pyx_n_s_CHAR_DIGIT __pyx_mstate_global->__pyx_n_s_CHAR_DIGIT -#define __pyx_n_s_CHAR_HEXDIGIT __pyx_mstate_global->__pyx_n_s_CHAR_HEXDIGIT -#define __pyx_n_s_CHAR_LETTER __pyx_mstate_global->__pyx_n_s_CHAR_LETTER -#define __pyx_n_s_CHAR_NAME_CONTINUATION __pyx_mstate_global->__pyx_n_s_CHAR_NAME_CONTINUATION -#define __pyx_n_s_CHAR_NAME_START __pyx_mstate_global->__pyx_n_s_CHAR_NAME_START -#define __pyx_n_s_CHAR_NEWLINE __pyx_mstate_global->__pyx_n_s_CHAR_NEWLINE -#define __pyx_n_s_CHAR_SYMBOL __pyx_mstate_global->__pyx_n_s_CHAR_SYMBOL -#define __pyx_n_s_CHAR_WHITESPACE __pyx_mstate_global->__pyx_n_s_CHAR_WHITESPACE -#define __pyx_n_s_CID __pyx_mstate_global->__pyx_n_s_CID -#define __pyx_n_u_CID __pyx_mstate_global->__pyx_n_u_CID -#define __pyx_n_s_COMMENT __pyx_mstate_global->__pyx_n_s_COMMENT -#define __pyx_n_u_COMMENT __pyx_mstate_global->__pyx_n_u_COMMENT -#define __pyx_kp_u_Expected_after_file_name __pyx_mstate_global->__pyx_kp_u_Expected_after_file_name -#define __pyx_kp_u_Expected_before_file_name __pyx_mstate_global->__pyx_kp_u_Expected_before_file_name -#define __pyx_kp_u_Expected_file_name __pyx_mstate_global->__pyx_kp_u_Expected_file_name -#define __pyx_kp_u_Expected_glyph_class_name __pyx_mstate_global->__pyx_kp_u_Expected_glyph_class_name -#define __pyx_kp_u_Expected_s_to_terminate_anonymou __pyx_mstate_global->__pyx_kp_u_Expected_s_to_terminate_anonymou -#define __pyx_kp_u_Expected_to_terminate_string __pyx_mstate_global->__pyx_kp_u_Expected_to_terminate_string -#define __pyx_n_s_FILENAME __pyx_mstate_global->__pyx_n_s_FILENAME -#define __pyx_n_u_FILENAME __pyx_mstate_global->__pyx_n_u_FILENAME -#define __pyx_n_s_FLOAT __pyx_mstate_global->__pyx_n_s_FLOAT -#define __pyx_n_u_FLOAT __pyx_mstate_global->__pyx_n_u_FLOAT -#define __pyx_n_s_FeatureLibError __pyx_mstate_global->__pyx_n_s_FeatureLibError -#define __pyx_n_s_FeatureLibLocation __pyx_mstate_global->__pyx_n_s_FeatureLibLocation -#define __pyx_n_s_FileNotFoundError __pyx_mstate_global->__pyx_n_s_FileNotFoundError -#define __pyx_n_s_GLYPHCLASS __pyx_mstate_global->__pyx_n_s_GLYPHCLASS -#define __pyx_n_u_GLYPHCLASS __pyx_mstate_global->__pyx_n_u_GLYPHCLASS -#define __pyx_kp_u_Glyph_class_names_must_consist_o __pyx_mstate_global->__pyx_kp_u_Glyph_class_names_must_consist_o -#define __pyx_kp_u_Glyph_class_names_must_not_be_lo __pyx_mstate_global->__pyx_kp_u_Glyph_class_names_must_not_be_lo -#define __pyx_n_s_HEXADECIMAL __pyx_mstate_global->__pyx_n_s_HEXADECIMAL -#define __pyx_n_u_HEXADECIMAL __pyx_mstate_global->__pyx_n_u_HEXADECIMAL -#define __pyx_n_s_ImportError __pyx_mstate_global->__pyx_n_s_ImportError -#define __pyx_n_s_IncludedFeaNotFound __pyx_mstate_global->__pyx_n_s_IncludedFeaNotFound -#define __pyx_n_s_IncludingLexer __pyx_mstate_global->__pyx_n_s_IncludingLexer -#define __pyx_n_s_IncludingLexer___init __pyx_mstate_global->__pyx_n_s_IncludingLexer___init -#define __pyx_n_s_IncludingLexer___iter __pyx_mstate_global->__pyx_n_s_IncludingLexer___iter -#define __pyx_n_s_IncludingLexer___next __pyx_mstate_global->__pyx_n_s_IncludingLexer___next -#define __pyx_n_s_IncludingLexer_make_lexer __pyx_mstate_global->__pyx_n_s_IncludingLexer_make_lexer -#define __pyx_n_s_IncludingLexer_next __pyx_mstate_global->__pyx_n_s_IncludingLexer_next -#define 
__pyx_n_s_IncludingLexer_scan_anonymous_bl __pyx_mstate_global->__pyx_n_s_IncludingLexer_scan_anonymous_bl -#define __pyx_n_s_Lexer __pyx_mstate_global->__pyx_n_s_Lexer -#define __pyx_n_s_Lexer___init __pyx_mstate_global->__pyx_n_s_Lexer___init -#define __pyx_n_s_Lexer___iter __pyx_mstate_global->__pyx_n_s_Lexer___iter -#define __pyx_n_s_Lexer___next __pyx_mstate_global->__pyx_n_s_Lexer___next -#define __pyx_n_s_Lexer_location __pyx_mstate_global->__pyx_n_s_Lexer_location -#define __pyx_n_s_Lexer_next __pyx_mstate_global->__pyx_n_s_Lexer_next -#define __pyx_n_s_Lexer_next_2 __pyx_mstate_global->__pyx_n_s_Lexer_next_2 -#define __pyx_n_s_Lexer_scan_anonymous_block __pyx_mstate_global->__pyx_n_s_Lexer_scan_anonymous_block -#define __pyx_n_s_Lexer_scan_over __pyx_mstate_global->__pyx_n_s_Lexer_scan_over -#define __pyx_n_s_Lexer_scan_until __pyx_mstate_global->__pyx_n_s_Lexer_scan_until -#define __pyx_kp_s_Lexer_that_does_not_follow_inclu __pyx_mstate_global->__pyx_kp_s_Lexer_that_does_not_follow_inclu -#define __pyx_kp_s_Lib_fontTools_feaLib_lexer_py __pyx_mstate_global->__pyx_kp_s_Lib_fontTools_feaLib_lexer_py -#define __pyx_n_s_MODE_FILENAME __pyx_mstate_global->__pyx_n_s_MODE_FILENAME -#define __pyx_n_s_MODE_NORMAL __pyx_mstate_global->__pyx_n_s_MODE_NORMAL -#define __pyx_n_s_NAME __pyx_mstate_global->__pyx_n_s_NAME -#define __pyx_n_u_NAME __pyx_mstate_global->__pyx_n_u_NAME -#define __pyx_n_s_NEWLINE __pyx_mstate_global->__pyx_n_s_NEWLINE -#define __pyx_n_u_NEWLINE __pyx_mstate_global->__pyx_n_u_NEWLINE -#define __pyx_n_u_NORMAL __pyx_mstate_global->__pyx_n_u_NORMAL -#define __pyx_n_s_NUMBER __pyx_mstate_global->__pyx_n_s_NUMBER -#define __pyx_n_u_NUMBER __pyx_mstate_global->__pyx_n_u_NUMBER -#define __pyx_n_s_NUMBERS __pyx_mstate_global->__pyx_n_s_NUMBERS -#define __pyx_n_s_NonIncludingLexer __pyx_mstate_global->__pyx_n_s_NonIncludingLexer -#define __pyx_n_s_NonIncludingLexer___next __pyx_mstate_global->__pyx_n_s_NonIncludingLexer___next -#define __pyx_n_s_OCTAL __pyx_mstate_global->__pyx_n_s_OCTAL -#define __pyx_n_u_OCTAL __pyx_mstate_global->__pyx_n_u_OCTAL -#define __pyx_n_s_RE_GLYPHCLASS __pyx_mstate_global->__pyx_n_s_RE_GLYPHCLASS -#define __pyx_n_s_STRING __pyx_mstate_global->__pyx_n_s_STRING -#define __pyx_n_u_STRING __pyx_mstate_global->__pyx_n_u_STRING -#define __pyx_n_s_SYMBOL __pyx_mstate_global->__pyx_n_s_SYMBOL -#define __pyx_n_u_SYMBOL __pyx_mstate_global->__pyx_n_u_SYMBOL -#define __pyx_n_s_StopIteration __pyx_mstate_global->__pyx_n_s_StopIteration -#define __pyx_kp_u_Too_many_recursive_includes __pyx_mstate_global->__pyx_kp_u_Too_many_recursive_includes -#define __pyx_kp_u_Unexpected_character_r __pyx_mstate_global->__pyx_kp_u_Unexpected_character_r -#define __pyx_kp_u__10 __pyx_mstate_global->__pyx_kp_u__10 -#define __pyx_kp_u__11 __pyx_mstate_global->__pyx_kp_u__11 -#define __pyx_kp_u__12 __pyx_mstate_global->__pyx_kp_u__12 -#define __pyx_n_s__13 __pyx_mstate_global->__pyx_n_s__13 -#define __pyx_kp_u__16 __pyx_mstate_global->__pyx_kp_u__16 -#define __pyx_kp_u__17 __pyx_mstate_global->__pyx_kp_u__17 -#define __pyx_kp_u__18 __pyx_mstate_global->__pyx_kp_u__18 -#define __pyx_kp_u__19 __pyx_mstate_global->__pyx_kp_u__19 -#define __pyx_kp_u__2 __pyx_mstate_global->__pyx_kp_u__2 -#define __pyx_kp_u__20 __pyx_mstate_global->__pyx_kp_u__20 -#define __pyx_kp_u__3 __pyx_mstate_global->__pyx_kp_u__3 -#define __pyx_kp_u__4 __pyx_mstate_global->__pyx_kp_u__4 -#define __pyx_kp_u__5 __pyx_mstate_global->__pyx_kp_u__5 -#define __pyx_n_s__51 __pyx_mstate_global->__pyx_n_s__51 -#define 
__pyx_kp_u__6 __pyx_mstate_global->__pyx_kp_u__6 -#define __pyx_kp_u__7 __pyx_mstate_global->__pyx_kp_u__7 -#define __pyx_kp_u__8 __pyx_mstate_global->__pyx_kp_u__8 -#define __pyx_kp_u__9 __pyx_mstate_global->__pyx_kp_u__9 -#define __pyx_n_s_append __pyx_mstate_global->__pyx_n_s_append -#define __pyx_n_s_asyncio_coroutines __pyx_mstate_global->__pyx_n_s_asyncio_coroutines -#define __pyx_n_s_class_getitem __pyx_mstate_global->__pyx_n_s_class_getitem -#define __pyx_n_s_cline_in_traceback __pyx_mstate_global->__pyx_n_s_cline_in_traceback -#define __pyx_n_s_close __pyx_mstate_global->__pyx_n_s_close -#define __pyx_n_s_closing __pyx_mstate_global->__pyx_n_s_closing -#define __pyx_n_s_column __pyx_mstate_global->__pyx_n_s_column -#define __pyx_n_s_compile __pyx_mstate_global->__pyx_n_s_compile -#define __pyx_n_s_cur_char __pyx_mstate_global->__pyx_n_s_cur_char -#define __pyx_n_s_curpath __pyx_mstate_global->__pyx_n_s_curpath -#define __pyx_n_s_data __pyx_mstate_global->__pyx_n_s_data -#define __pyx_n_s_dict __pyx_mstate_global->__pyx_n_s_dict -#define __pyx_n_s_dirname __pyx_mstate_global->__pyx_n_s_dirname -#define __pyx_n_s_doc __pyx_mstate_global->__pyx_n_s_doc -#define __pyx_n_s_encoding __pyx_mstate_global->__pyx_n_s_encoding -#define __pyx_n_s_err __pyx_mstate_global->__pyx_n_s_err -#define __pyx_n_s_featurefile __pyx_mstate_global->__pyx_n_s_featurefile -#define __pyx_n_s_featurefilepath __pyx_mstate_global->__pyx_n_s_featurefilepath -#define __pyx_kp_u_features __pyx_mstate_global->__pyx_kp_u_features -#define __pyx_n_s_file_or_path __pyx_mstate_global->__pyx_n_s_file_or_path -#define __pyx_n_s_filename __pyx_mstate_global->__pyx_n_s_filename -#define __pyx_n_s_filename_2 __pyx_mstate_global->__pyx_n_s_filename_2 -#define __pyx_n_s_fileobj __pyx_mstate_global->__pyx_n_s_fileobj -#define __pyx_n_s_fname_location __pyx_mstate_global->__pyx_n_s_fname_location -#define __pyx_n_s_fname_token __pyx_mstate_global->__pyx_n_s_fname_token -#define __pyx_n_s_fname_type __pyx_mstate_global->__pyx_n_s_fname_type -#define __pyx_n_s_fontTools_feaLib_error __pyx_mstate_global->__pyx_n_s_fontTools_feaLib_error -#define __pyx_n_s_fontTools_feaLib_lexer __pyx_mstate_global->__pyx_n_s_fontTools_feaLib_lexer -#define __pyx_n_s_fontTools_feaLib_location __pyx_mstate_global->__pyx_n_s_fontTools_feaLib_location -#define __pyx_n_s_getcwd __pyx_mstate_global->__pyx_n_s_getcwd -#define __pyx_n_s_glyphclass __pyx_mstate_global->__pyx_n_s_glyphclass -#define __pyx_n_s_import __pyx_mstate_global->__pyx_n_s_import -#define __pyx_n_u_include __pyx_mstate_global->__pyx_n_u_include -#define __pyx_n_s_includeDir __pyx_mstate_global->__pyx_n_s_includeDir -#define __pyx_n_s_init __pyx_mstate_global->__pyx_n_s_init -#define __pyx_n_s_init_subclass __pyx_mstate_global->__pyx_n_s_init_subclass -#define __pyx_n_s_initializing __pyx_mstate_global->__pyx_n_s_initializing -#define __pyx_n_s_is_coroutine __pyx_mstate_global->__pyx_n_s_is_coroutine -#define __pyx_n_s_isabs __pyx_mstate_global->__pyx_n_s_isabs -#define __pyx_n_s_iter __pyx_mstate_global->__pyx_n_s_iter -#define __pyx_n_s_join __pyx_mstate_global->__pyx_n_s_join -#define __pyx_n_s_lexer __pyx_mstate_global->__pyx_n_s_lexer -#define __pyx_n_s_lexers __pyx_mstate_global->__pyx_n_s_lexers -#define __pyx_n_s_limit __pyx_mstate_global->__pyx_n_s_limit -#define __pyx_n_s_line __pyx_mstate_global->__pyx_n_s_line -#define __pyx_n_s_line_start __pyx_mstate_global->__pyx_n_s_line_start -#define __pyx_n_s_location __pyx_mstate_global->__pyx_n_s_location -#define 
__pyx_n_s_location_2 __pyx_mstate_global->__pyx_n_s_location_2 -#define __pyx_n_s_main __pyx_mstate_global->__pyx_n_s_main -#define __pyx_n_s_make_lexer __pyx_mstate_global->__pyx_n_s_make_lexer -#define __pyx_n_s_match __pyx_mstate_global->__pyx_n_s_match -#define __pyx_n_s_maxsplit __pyx_mstate_global->__pyx_n_s_maxsplit -#define __pyx_n_s_metaclass __pyx_mstate_global->__pyx_n_s_metaclass -#define __pyx_n_s_mode __pyx_mstate_global->__pyx_n_s_mode -#define __pyx_n_s_module __pyx_mstate_global->__pyx_n_s_module -#define __pyx_n_s_mro_entries __pyx_mstate_global->__pyx_n_s_mro_entries -#define __pyx_n_u_name __pyx_mstate_global->__pyx_n_u_name -#define __pyx_n_s_name_2 __pyx_mstate_global->__pyx_n_s_name_2 -#define __pyx_n_s_next __pyx_mstate_global->__pyx_n_s_next -#define __pyx_n_s_next_2 __pyx_mstate_global->__pyx_n_s_next_2 -#define __pyx_n_s_next_3 __pyx_mstate_global->__pyx_n_s_next_3 -#define __pyx_n_s_next_char __pyx_mstate_global->__pyx_n_s_next_char -#define __pyx_n_s_object __pyx_mstate_global->__pyx_n_s_object -#define __pyx_n_s_open __pyx_mstate_global->__pyx_n_s_open -#define __pyx_n_s_os __pyx_mstate_global->__pyx_n_s_os -#define __pyx_n_s_p __pyx_mstate_global->__pyx_n_s_p -#define __pyx_n_s_path __pyx_mstate_global->__pyx_n_s_path -#define __pyx_n_s_pop __pyx_mstate_global->__pyx_n_s_pop -#define __pyx_n_s_pos __pyx_mstate_global->__pyx_n_s_pos -#define __pyx_n_s_prepare __pyx_mstate_global->__pyx_n_s_prepare -#define __pyx_n_s_qualname __pyx_mstate_global->__pyx_n_s_qualname -#define __pyx_n_u_r __pyx_mstate_global->__pyx_n_u_r -#define __pyx_n_s_re __pyx_mstate_global->__pyx_n_s_re -#define __pyx_n_s_read __pyx_mstate_global->__pyx_n_s_read -#define __pyx_n_u_read __pyx_mstate_global->__pyx_n_u_read -#define __pyx_n_s_regexp __pyx_mstate_global->__pyx_n_s_regexp -#define __pyx_kp_u_s __pyx_mstate_global->__pyx_kp_u_s -#define __pyx_kp_u_s_2 __pyx_mstate_global->__pyx_kp_u_s_2 -#define __pyx_n_s_scan_anonymous_block __pyx_mstate_global->__pyx_n_s_scan_anonymous_block -#define __pyx_n_s_scan_over __pyx_mstate_global->__pyx_n_s_scan_over -#define __pyx_n_s_scan_until __pyx_mstate_global->__pyx_n_s_scan_until -#define __pyx_n_s_self __pyx_mstate_global->__pyx_n_s_self -#define __pyx_n_s_set_name __pyx_mstate_global->__pyx_n_s_set_name -#define __pyx_n_s_spec __pyx_mstate_global->__pyx_n_s_spec -#define __pyx_n_s_split __pyx_mstate_global->__pyx_n_s_split -#define __pyx_n_s_start __pyx_mstate_global->__pyx_n_s_start -#define __pyx_n_s_staticmethod __pyx_mstate_global->__pyx_n_s_staticmethod -#define __pyx_n_s_stop_at __pyx_mstate_global->__pyx_n_s_stop_at -#define __pyx_n_s_string __pyx_mstate_global->__pyx_n_s_string -#define __pyx_n_s_strip __pyx_mstate_global->__pyx_n_s_strip -#define __pyx_n_s_sub __pyx_mstate_global->__pyx_n_s_sub -#define __pyx_n_s_super __pyx_mstate_global->__pyx_n_s_super -#define __pyx_n_s_tag __pyx_mstate_global->__pyx_n_s_tag -#define __pyx_n_s_test __pyx_mstate_global->__pyx_n_s_test -#define __pyx_n_s_text __pyx_mstate_global->__pyx_n_s_text -#define __pyx_n_s_text_2 __pyx_mstate_global->__pyx_n_s_text_2 -#define __pyx_n_s_text_length __pyx_mstate_global->__pyx_n_s_text_length -#define __pyx_n_s_token __pyx_mstate_global->__pyx_n_s_token -#define __pyx_n_s_token_type __pyx_mstate_global->__pyx_n_s_token_type -#define __pyx_kp_u_utf_8 __pyx_mstate_global->__pyx_kp_u_utf_8 -#define __pyx_n_s_valid __pyx_mstate_global->__pyx_n_s_valid -#define __pyx_n_u_xX __pyx_mstate_global->__pyx_n_u_xX -#define __pyx_int_0 __pyx_mstate_global->__pyx_int_0 
-#define __pyx_int_1 __pyx_mstate_global->__pyx_int_1 -#define __pyx_int_2 __pyx_mstate_global->__pyx_int_2 -#define __pyx_int_8 __pyx_mstate_global->__pyx_int_8 -#define __pyx_int_10 __pyx_mstate_global->__pyx_int_10 -#define __pyx_int_16 __pyx_mstate_global->__pyx_int_16 -#define __pyx_tuple__14 __pyx_mstate_global->__pyx_tuple__14 -#define __pyx_tuple__15 __pyx_mstate_global->__pyx_tuple__15 -#define __pyx_tuple__21 __pyx_mstate_global->__pyx_tuple__21 -#define __pyx_tuple__23 __pyx_mstate_global->__pyx_tuple__23 -#define __pyx_tuple__26 __pyx_mstate_global->__pyx_tuple__26 -#define __pyx_tuple__28 __pyx_mstate_global->__pyx_tuple__28 -#define __pyx_tuple__30 __pyx_mstate_global->__pyx_tuple__30 -#define __pyx_tuple__32 __pyx_mstate_global->__pyx_tuple__32 -#define __pyx_tuple__34 __pyx_mstate_global->__pyx_tuple__34 -#define __pyx_tuple__36 __pyx_mstate_global->__pyx_tuple__36 -#define __pyx_tuple__38 __pyx_mstate_global->__pyx_tuple__38 -#define __pyx_tuple__39 __pyx_mstate_global->__pyx_tuple__39 -#define __pyx_tuple__40 __pyx_mstate_global->__pyx_tuple__40 -#define __pyx_tuple__44 __pyx_mstate_global->__pyx_tuple__44 -#define __pyx_tuple__46 __pyx_mstate_global->__pyx_tuple__46 -#define __pyx_tuple__48 __pyx_mstate_global->__pyx_tuple__48 -#define __pyx_codeobj__22 __pyx_mstate_global->__pyx_codeobj__22 -#define __pyx_codeobj__24 __pyx_mstate_global->__pyx_codeobj__24 -#define __pyx_codeobj__25 __pyx_mstate_global->__pyx_codeobj__25 -#define __pyx_codeobj__27 __pyx_mstate_global->__pyx_codeobj__27 -#define __pyx_codeobj__29 __pyx_mstate_global->__pyx_codeobj__29 -#define __pyx_codeobj__31 __pyx_mstate_global->__pyx_codeobj__31 -#define __pyx_codeobj__33 __pyx_mstate_global->__pyx_codeobj__33 -#define __pyx_codeobj__35 __pyx_mstate_global->__pyx_codeobj__35 -#define __pyx_codeobj__37 __pyx_mstate_global->__pyx_codeobj__37 -#define __pyx_codeobj__41 __pyx_mstate_global->__pyx_codeobj__41 -#define __pyx_codeobj__42 __pyx_mstate_global->__pyx_codeobj__42 -#define __pyx_codeobj__43 __pyx_mstate_global->__pyx_codeobj__43 -#define __pyx_codeobj__45 __pyx_mstate_global->__pyx_codeobj__45 -#define __pyx_codeobj__47 __pyx_mstate_global->__pyx_codeobj__47 -#define __pyx_codeobj__49 __pyx_mstate_global->__pyx_codeobj__49 -#define __pyx_codeobj__50 __pyx_mstate_global->__pyx_codeobj__50 -/* #### Code section: module_code ### */ - -/* "fontTools/feaLib/lexer.py":43 - * MODE_FILENAME_ = "FILENAME" - * - * def __init__(self, text, filename): # <<<<<<<<<<<<<< - * self.filename_ = filename - * self.line_ = 1 - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_6feaLib_5lexer_5Lexer_1__init__(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_6feaLib_5lexer_5Lexer___init__, "Lexer.__init__(self, text, filename)"); -static PyMethodDef __pyx_mdef_9fontTools_6feaLib_5lexer_5Lexer_1__init__ = {"__init__", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_6feaLib_5lexer_5Lexer_1__init__, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_6feaLib_5lexer_5Lexer___init__}; -static PyObject *__pyx_pw_9fontTools_6feaLib_5lexer_5Lexer_1__init__(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_self = 0; - PyObject *__pyx_v_text = 0; 
- PyObject *__pyx_v_filename = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[3] = {0,0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__init__ (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); - if (unlikely((__pyx_nargs < 0))) __PYX_ERR(0, 43, __pyx_L3_error) - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_text,&__pyx_n_s_filename,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_self)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 43, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_text)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 43, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("__init__", 1, 3, 3, 1); __PYX_ERR(0, 43, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_filename)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[2]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 43, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("__init__", 1, 3, 3, 2); __PYX_ERR(0, 43, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "__init__") < 0)) __PYX_ERR(0, 43, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 3)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - } - __pyx_v_self = values[0]; - __pyx_v_text = values[1]; - __pyx_v_filename = values[2]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__init__", 1, 3, 3, __pyx_nargs); __PYX_ERR(0, 43, __pyx_L3_error) - goto __pyx_L3_error; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.feaLib.lexer.Lexer.__init__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_6feaLib_5lexer_5Lexer___init__(__pyx_self, __pyx_v_self, __pyx_v_text, 
__pyx_v_filename); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_6feaLib_5lexer_5Lexer___init__(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_text, PyObject *__pyx_v_filename) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - Py_ssize_t __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__init__", 0); - - /* "fontTools/feaLib/lexer.py":44 - * - * def __init__(self, text, filename): - * self.filename_ = filename # <<<<<<<<<<<<<< - * self.line_ = 1 - * self.pos_ = 0 - */ - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_filename_2, __pyx_v_filename) < 0) __PYX_ERR(0, 44, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":45 - * def __init__(self, text, filename): - * self.filename_ = filename - * self.line_ = 1 # <<<<<<<<<<<<<< - * self.pos_ = 0 - * self.line_start_ = 0 - */ - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_line, __pyx_int_1) < 0) __PYX_ERR(0, 45, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":46 - * self.filename_ = filename - * self.line_ = 1 - * self.pos_ = 0 # <<<<<<<<<<<<<< - * self.line_start_ = 0 - * self.text_ = text - */ - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_pos, __pyx_int_0) < 0) __PYX_ERR(0, 46, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":47 - * self.line_ = 1 - * self.pos_ = 0 - * self.line_start_ = 0 # <<<<<<<<<<<<<< - * self.text_ = text - * self.text_length_ = len(text) - */ - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_line_start, __pyx_int_0) < 0) __PYX_ERR(0, 47, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":48 - * self.pos_ = 0 - * self.line_start_ = 0 - * self.text_ = text # <<<<<<<<<<<<<< - * self.text_length_ = len(text) - * self.mode_ = Lexer.MODE_NORMAL_ - */ - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_text_2, __pyx_v_text) < 0) __PYX_ERR(0, 48, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":49 - * self.line_start_ = 0 - * self.text_ = text - * self.text_length_ = len(text) # <<<<<<<<<<<<<< - * self.mode_ = Lexer.MODE_NORMAL_ - * - */ - __pyx_t_1 = PyObject_Length(__pyx_v_text); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(0, 49, __pyx_L1_error) - __pyx_t_2 = PyInt_FromSsize_t(__pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 49, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_text_length, __pyx_t_2) < 0) __PYX_ERR(0, 49, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/feaLib/lexer.py":50 - * self.text_ = text - * self.text_length_ = len(text) - * self.mode_ = Lexer.MODE_NORMAL_ # <<<<<<<<<<<<<< - * - * def __iter__(self): - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_Lexer); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 50, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_MODE_NORMAL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 50, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_mode, __pyx_t_3) < 0) __PYX_ERR(0, 50, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/feaLib/lexer.py":43 - * 
MODE_FILENAME_ = "FILENAME" - * - * def __init__(self, text, filename): # <<<<<<<<<<<<<< - * self.filename_ = filename - * self.line_ = 1 - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("fontTools.feaLib.lexer.Lexer.__init__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/feaLib/lexer.py":52 - * self.mode_ = Lexer.MODE_NORMAL_ - * - * def __iter__(self): # <<<<<<<<<<<<<< - * return self - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_6feaLib_5lexer_5Lexer_3__iter__(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_6feaLib_5lexer_5Lexer_2__iter__, "Lexer.__iter__(self)"); -static PyMethodDef __pyx_mdef_9fontTools_6feaLib_5lexer_5Lexer_3__iter__ = {"__iter__", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_6feaLib_5lexer_5Lexer_3__iter__, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_6feaLib_5lexer_5Lexer_2__iter__}; -static PyObject *__pyx_pw_9fontTools_6feaLib_5lexer_5Lexer_3__iter__(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_self = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[1] = {0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__iter__ (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); - if (unlikely((__pyx_nargs < 0))) __PYX_ERR(0, 52, __pyx_L3_error) - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_self)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 52, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "__iter__") < 0)) __PYX_ERR(0, 52, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 1)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - } - __pyx_v_self = values[0]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__iter__", 1, 1, 1, __pyx_nargs); __PYX_ERR(0, 52, __pyx_L3_error) - goto __pyx_L3_error; - 
__pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.feaLib.lexer.Lexer.__iter__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_6feaLib_5lexer_5Lexer_2__iter__(__pyx_self, __pyx_v_self); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_6feaLib_5lexer_5Lexer_2__iter__(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__iter__", 0); - - /* "fontTools/feaLib/lexer.py":53 - * - * def __iter__(self): - * return self # <<<<<<<<<<<<<< - * - * def next(self): # Python 2 - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_self); - __pyx_r = __pyx_v_self; - goto __pyx_L0; - - /* "fontTools/feaLib/lexer.py":52 - * self.mode_ = Lexer.MODE_NORMAL_ - * - * def __iter__(self): # <<<<<<<<<<<<<< - * return self - * - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/feaLib/lexer.py":55 - * return self - * - * def next(self): # Python 2 # <<<<<<<<<<<<<< - * return self.__next__() - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_6feaLib_5lexer_5Lexer_5next(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_6feaLib_5lexer_5Lexer_4next, "Lexer.next(self)"); -static PyMethodDef __pyx_mdef_9fontTools_6feaLib_5lexer_5Lexer_5next = {"next", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_6feaLib_5lexer_5Lexer_5next, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_6feaLib_5lexer_5Lexer_4next}; -static PyObject *__pyx_pw_9fontTools_6feaLib_5lexer_5Lexer_5next(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_self = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[1] = {0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("next (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); - if (unlikely((__pyx_nargs < 0))) __PYX_ERR(0, 55, __pyx_L3_error) - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch 
(__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_self)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 55, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "next") < 0)) __PYX_ERR(0, 55, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 1)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - } - __pyx_v_self = values[0]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("next", 1, 1, 1, __pyx_nargs); __PYX_ERR(0, 55, __pyx_L3_error) - goto __pyx_L3_error; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.feaLib.lexer.Lexer.next", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_6feaLib_5lexer_5Lexer_4next(__pyx_self, __pyx_v_self); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_6feaLib_5lexer_5Lexer_4next(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("next", 0); - - /* "fontTools/feaLib/lexer.py":56 - * - * def next(self): # Python 2 - * return self.__next__() # <<<<<<<<<<<<<< - * - * def __next__(self): # Python 3 - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_next); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 56, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = NULL; - __pyx_t_4 = 0; - #if CYTHON_UNPACK_METHODS - if (likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_4 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[1] = {__pyx_t_3, }; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_4, 0+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 56, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "fontTools/feaLib/lexer.py":55 - * return self - * - * def next(self): # Python 2 # <<<<<<<<<<<<<< - * return self.__next__() - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("fontTools.feaLib.lexer.Lexer.next", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 
NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/feaLib/lexer.py":58 - * return self.__next__() - * - * def __next__(self): # Python 3 # <<<<<<<<<<<<<< - * while True: - * token_type, token, location = self.next_() - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_6feaLib_5lexer_5Lexer_7__next__(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_6feaLib_5lexer_5Lexer_6__next__, "Lexer.__next__(self)"); -static PyMethodDef __pyx_mdef_9fontTools_6feaLib_5lexer_5Lexer_7__next__ = {"__next__", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_6feaLib_5lexer_5Lexer_7__next__, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_6feaLib_5lexer_5Lexer_6__next__}; -static PyObject *__pyx_pw_9fontTools_6feaLib_5lexer_5Lexer_7__next__(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_self = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[1] = {0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__next__ (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); - if (unlikely((__pyx_nargs < 0))) __PYX_ERR(0, 58, __pyx_L3_error) - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_self)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 58, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "__next__") < 0)) __PYX_ERR(0, 58, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 1)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - } - __pyx_v_self = values[0]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__next__", 1, 1, 1, __pyx_nargs); __PYX_ERR(0, 58, __pyx_L3_error) - goto __pyx_L3_error; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.feaLib.lexer.Lexer.__next__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - 
__pyx_r = __pyx_pf_9fontTools_6feaLib_5lexer_5Lexer_6__next__(__pyx_self, __pyx_v_self); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_6feaLib_5lexer_5Lexer_6__next__(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self) { - PyObject *__pyx_v_token_type = NULL; - PyObject *__pyx_v_token = NULL; - PyObject *__pyx_v_location = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *(*__pyx_t_7)(PyObject *); - int __pyx_t_8; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__next__", 0); - - /* "fontTools/feaLib/lexer.py":59 - * - * def __next__(self): # Python 3 - * while True: # <<<<<<<<<<<<<< - * token_type, token, location = self.next_() - * if token_type != Lexer.NEWLINE: - */ - while (1) { - - /* "fontTools/feaLib/lexer.py":60 - * def __next__(self): # Python 3 - * while True: - * token_type, token, location = self.next_() # <<<<<<<<<<<<<< - * if token_type != Lexer.NEWLINE: - * return (token_type, token, location) - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_next_2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 60, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = NULL; - __pyx_t_4 = 0; - #if CYTHON_UNPACK_METHODS - if (likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_4 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[1] = {__pyx_t_3, }; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_4, 0+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 60, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - if ((likely(PyTuple_CheckExact(__pyx_t_1))) || (PyList_CheckExact(__pyx_t_1))) { - PyObject* sequence = __pyx_t_1; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 3)) { - if (size > 3) __Pyx_RaiseTooManyValuesError(3); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 60, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 1); - __pyx_t_5 = PyTuple_GET_ITEM(sequence, 2); - } else { - __pyx_t_2 = PyList_GET_ITEM(sequence, 0); - __pyx_t_3 = PyList_GET_ITEM(sequence, 1); - __pyx_t_5 = PyList_GET_ITEM(sequence, 2); - } - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(__pyx_t_5); - #else - __pyx_t_2 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 60, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 60, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = PySequence_ITEM(sequence, 2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 60, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - #endif - __Pyx_DECREF(__pyx_t_1); 
__pyx_t_1 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_6 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 60, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_7 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_6); - index = 0; __pyx_t_2 = __pyx_t_7(__pyx_t_6); if (unlikely(!__pyx_t_2)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - index = 1; __pyx_t_3 = __pyx_t_7(__pyx_t_6); if (unlikely(!__pyx_t_3)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_3); - index = 2; __pyx_t_5 = __pyx_t_7(__pyx_t_6); if (unlikely(!__pyx_t_5)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_5); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_7(__pyx_t_6), 3) < 0) __PYX_ERR(0, 60, __pyx_L1_error) - __pyx_t_7 = NULL; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - goto __pyx_L6_unpacking_done; - __pyx_L5_unpacking_failed:; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_7 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 60, __pyx_L1_error) - __pyx_L6_unpacking_done:; - } - __Pyx_XDECREF_SET(__pyx_v_token_type, __pyx_t_2); - __pyx_t_2 = 0; - __Pyx_XDECREF_SET(__pyx_v_token, __pyx_t_3); - __pyx_t_3 = 0; - __Pyx_XDECREF_SET(__pyx_v_location, __pyx_t_5); - __pyx_t_5 = 0; - - /* "fontTools/feaLib/lexer.py":61 - * while True: - * token_type, token, location = self.next_() - * if token_type != Lexer.NEWLINE: # <<<<<<<<<<<<<< - * return (token_type, token, location) - * - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_Lexer); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 61, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_NEWLINE); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 61, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyObject_RichCompare(__pyx_v_token_type, __pyx_t_5, Py_NE); __Pyx_XGOTREF(__pyx_t_1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 61, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_8 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely((__pyx_t_8 < 0))) __PYX_ERR(0, 61, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__pyx_t_8) { - - /* "fontTools/feaLib/lexer.py":62 - * token_type, token, location = self.next_() - * if token_type != Lexer.NEWLINE: - * return (token_type, token, location) # <<<<<<<<<<<<<< - * - * def location_(self): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = PyTuple_New(3); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 62, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v_token_type); - __Pyx_GIVEREF(__pyx_v_token_type); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_token_type)) __PYX_ERR(0, 62, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_token); - __Pyx_GIVEREF(__pyx_v_token); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_v_token)) __PYX_ERR(0, 62, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_location); - __Pyx_GIVEREF(__pyx_v_location); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 2, __pyx_v_location)) __PYX_ERR(0, 62, __pyx_L1_error); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "fontTools/feaLib/lexer.py":61 - * while True: - * token_type, token, location = self.next_() - * if token_type != Lexer.NEWLINE: # <<<<<<<<<<<<<< - * return (token_type, token, location) - * - */ - } - } - - /* "fontTools/feaLib/lexer.py":58 - * return self.__next__() - * - * def __next__(self): # Python 3 # <<<<<<<<<<<<<< - * while True: - * token_type, token, location = self.next_() - */ - - /* function exit code */ - 
__pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_AddTraceback("fontTools.feaLib.lexer.Lexer.__next__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_token_type); - __Pyx_XDECREF(__pyx_v_token); - __Pyx_XDECREF(__pyx_v_location); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/feaLib/lexer.py":64 - * return (token_type, token, location) - * - * def location_(self): # <<<<<<<<<<<<<< - * column = self.pos_ - self.line_start_ + 1 - * return FeatureLibLocation(self.filename_ or "", self.line_, column) - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_6feaLib_5lexer_5Lexer_9location_(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_6feaLib_5lexer_5Lexer_8location_, "Lexer.location_(self)"); -static PyMethodDef __pyx_mdef_9fontTools_6feaLib_5lexer_5Lexer_9location_ = {"location_", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_6feaLib_5lexer_5Lexer_9location_, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_6feaLib_5lexer_5Lexer_8location_}; -static PyObject *__pyx_pw_9fontTools_6feaLib_5lexer_5Lexer_9location_(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_self = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[1] = {0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("location_ (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); - if (unlikely((__pyx_nargs < 0))) __PYX_ERR(0, 64, __pyx_L3_error) - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_self)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 64, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "location_") < 0)) __PYX_ERR(0, 64, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 1)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - } - __pyx_v_self = values[0]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - 
__Pyx_RaiseArgtupleInvalid("location_", 1, 1, 1, __pyx_nargs); __PYX_ERR(0, 64, __pyx_L3_error) - goto __pyx_L3_error; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.feaLib.lexer.Lexer.location_", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_6feaLib_5lexer_5Lexer_8location_(__pyx_self, __pyx_v_self); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_6feaLib_5lexer_5Lexer_8location_(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self) { - PyObject *__pyx_v_column = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_t_5; - PyObject *__pyx_t_6 = NULL; - int __pyx_t_7; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("location_", 0); - - /* "fontTools/feaLib/lexer.py":65 - * - * def location_(self): - * column = self.pos_ - self.line_start_ + 1 # <<<<<<<<<<<<<< - * return FeatureLibLocation(self.filename_ or "", self.line_, column) - * - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_pos); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 65, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_line_start); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 65, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyNumber_Subtract(__pyx_t_1, __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 65, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyInt_AddObjC(__pyx_t_3, __pyx_int_1, 1, 0, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 65, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_column = __pyx_t_2; - __pyx_t_2 = 0; - - /* "fontTools/feaLib/lexer.py":66 - * def location_(self): - * column = self.pos_ - self.line_start_ + 1 - * return FeatureLibLocation(self.filename_ or "", self.line_, column) # <<<<<<<<<<<<<< - * - * def next_(self): - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_FeatureLibLocation); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 66, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_filename_2); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 66, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely((__pyx_t_5 < 0))) __PYX_ERR(0, 66, __pyx_L1_error) - if (!__pyx_t_5) { - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - } else { - __Pyx_INCREF(__pyx_t_4); - __pyx_t_1 = __pyx_t_4; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - goto __pyx_L3_bool_binop_done; - } - __Pyx_INCREF(__pyx_kp_u_features); - __pyx_t_1 = __pyx_kp_u_features; - __pyx_L3_bool_binop_done:; - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_line); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 66, __pyx_L1_error) - 
__Pyx_GOTREF(__pyx_t_4); - __pyx_t_6 = NULL; - __pyx_t_7 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_6)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_6); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_7 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[4] = {__pyx_t_6, __pyx_t_1, __pyx_t_4, __pyx_v_column}; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_7, 3+__pyx_t_7); - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 66, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "fontTools/feaLib/lexer.py":64 - * return (token_type, token, location) - * - * def location_(self): # <<<<<<<<<<<<<< - * column = self.pos_ - self.line_start_ + 1 - * return FeatureLibLocation(self.filename_ or "", self.line_, column) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_AddTraceback("fontTools.feaLib.lexer.Lexer.location_", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_column); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/feaLib/lexer.py":68 - * return FeatureLibLocation(self.filename_ or "", self.line_, column) - * - * def next_(self): # <<<<<<<<<<<<<< - * self.scan_over_(Lexer.CHAR_WHITESPACE_) - * location = self.location_() - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_6feaLib_5lexer_5Lexer_11next_(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_6feaLib_5lexer_5Lexer_10next_, "Lexer.next_(self)"); -static PyMethodDef __pyx_mdef_9fontTools_6feaLib_5lexer_5Lexer_11next_ = {"next_", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_6feaLib_5lexer_5Lexer_11next_, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_6feaLib_5lexer_5Lexer_10next_}; -static PyObject *__pyx_pw_9fontTools_6feaLib_5lexer_5Lexer_11next_(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_self = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[1] = {0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("next_ (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); - if (unlikely((__pyx_nargs < 0))) __PYX_ERR(0, 68, __pyx_L3_error) - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 1: 
values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_self)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 68, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "next_") < 0)) __PYX_ERR(0, 68, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 1)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - } - __pyx_v_self = values[0]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("next_", 1, 1, 1, __pyx_nargs); __PYX_ERR(0, 68, __pyx_L3_error) - goto __pyx_L3_error; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.feaLib.lexer.Lexer.next_", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_6feaLib_5lexer_5Lexer_10next_(__pyx_self, __pyx_v_self); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_6feaLib_5lexer_5Lexer_10next_(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self) { - PyObject *__pyx_v_location = NULL; - PyObject *__pyx_v_start = NULL; - PyObject *__pyx_v_text = NULL; - Py_ssize_t __pyx_v_limit; - PyObject *__pyx_v_cur_char = NULL; - PyObject *__pyx_v_next_char = NULL; - PyObject *__pyx_v_glyphclass = NULL; - PyObject *__pyx_v_token = NULL; - PyObject *__pyx_v_string = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_t_5; - Py_ssize_t __pyx_t_6; - int __pyx_t_7; - int __pyx_t_8; - PyObject *__pyx_t_9 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("next_", 0); - - /* "fontTools/feaLib/lexer.py":69 - * - * def next_(self): - * self.scan_over_(Lexer.CHAR_WHITESPACE_) # <<<<<<<<<<<<<< - * location = self.location_() - * start = self.pos_ - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_scan_over); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 69, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_Lexer); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 69, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_CHAR_WHITESPACE); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 69, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = NULL; - __pyx_t_5 = 0; - #if CYTHON_UNPACK_METHODS - if (likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = 
PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_5 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_3, __pyx_t_4}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 69, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "fontTools/feaLib/lexer.py":70 - * def next_(self): - * self.scan_over_(Lexer.CHAR_WHITESPACE_) - * location = self.location_() # <<<<<<<<<<<<<< - * start = self.pos_ - * text = self.text_ - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_location); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 70, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = NULL; - __pyx_t_5 = 0; - #if CYTHON_UNPACK_METHODS - if (likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_4)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_4); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_5 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[1] = {__pyx_t_4, }; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_5, 0+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 70, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_v_location = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/feaLib/lexer.py":71 - * self.scan_over_(Lexer.CHAR_WHITESPACE_) - * location = self.location_() - * start = self.pos_ # <<<<<<<<<<<<<< - * text = self.text_ - * limit = len(text) - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_pos); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 71, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_start = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/feaLib/lexer.py":72 - * location = self.location_() - * start = self.pos_ - * text = self.text_ # <<<<<<<<<<<<<< - * limit = len(text) - * if start >= limit: - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_text_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 72, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_text = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/feaLib/lexer.py":73 - * start = self.pos_ - * text = self.text_ - * limit = len(text) # <<<<<<<<<<<<<< - * if start >= limit: - * raise StopIteration() - */ - __pyx_t_6 = PyObject_Length(__pyx_v_text); if (unlikely(__pyx_t_6 == ((Py_ssize_t)-1))) __PYX_ERR(0, 73, __pyx_L1_error) - __pyx_v_limit = __pyx_t_6; - - /* "fontTools/feaLib/lexer.py":74 - * text = self.text_ - * limit = len(text) - * if start >= limit: # <<<<<<<<<<<<<< - * raise StopIteration() - * cur_char = text[start] - */ - __pyx_t_1 = PyInt_FromSsize_t(__pyx_v_limit); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 74, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyObject_RichCompare(__pyx_v_start, __pyx_t_1, Py_GE); __Pyx_XGOTREF(__pyx_t_2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 74, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_7 = __Pyx_PyObject_IsTrue(__pyx_t_2); if (unlikely((__pyx_t_7 < 0))) __PYX_ERR(0, 74, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(__pyx_t_7)) 
{ - - /* "fontTools/feaLib/lexer.py":75 - * limit = len(text) - * if start >= limit: - * raise StopIteration() # <<<<<<<<<<<<<< - * cur_char = text[start] - * next_char = text[start + 1] if start + 1 < limit else None - */ - __pyx_t_2 = __Pyx_PyObject_CallNoArg(__pyx_builtin_StopIteration); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 75, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_Raise(__pyx_t_2, 0, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __PYX_ERR(0, 75, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":74 - * text = self.text_ - * limit = len(text) - * if start >= limit: # <<<<<<<<<<<<<< - * raise StopIteration() - * cur_char = text[start] - */ - } - - /* "fontTools/feaLib/lexer.py":76 - * if start >= limit: - * raise StopIteration() - * cur_char = text[start] # <<<<<<<<<<<<<< - * next_char = text[start + 1] if start + 1 < limit else None - * - */ - __pyx_t_2 = __Pyx_PyObject_GetItem(__pyx_v_text, __pyx_v_start); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 76, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_v_cur_char = __pyx_t_2; - __pyx_t_2 = 0; - - /* "fontTools/feaLib/lexer.py":77 - * raise StopIteration() - * cur_char = text[start] - * next_char = text[start + 1] if start + 1 < limit else None # <<<<<<<<<<<<<< - * - * if cur_char == "\n": - */ - __pyx_t_1 = __Pyx_PyInt_AddObjC(__pyx_v_start, __pyx_int_1, 1, 0, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 77, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_4 = PyInt_FromSsize_t(__pyx_v_limit); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 77, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_3 = PyObject_RichCompare(__pyx_t_1, __pyx_t_4, Py_LT); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 77, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_7 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely((__pyx_t_7 < 0))) __PYX_ERR(0, 77, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__pyx_t_7) { - __pyx_t_3 = __Pyx_PyInt_AddObjC(__pyx_v_start, __pyx_int_1, 1, 0, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 77, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyObject_GetItem(__pyx_v_text, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 77, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_2 = __pyx_t_4; - __pyx_t_4 = 0; - } else { - __Pyx_INCREF(Py_None); - __pyx_t_2 = Py_None; - } - __pyx_v_next_char = __pyx_t_2; - __pyx_t_2 = 0; - - /* "fontTools/feaLib/lexer.py":79 - * next_char = text[start + 1] if start + 1 < limit else None - * - * if cur_char == "\n": # <<<<<<<<<<<<<< - * self.pos_ += 1 - * self.line_ += 1 - */ - __pyx_t_7 = (__Pyx_PyUnicode_Equals(__pyx_v_cur_char, __pyx_kp_u_, Py_EQ)); if (unlikely((__pyx_t_7 < 0))) __PYX_ERR(0, 79, __pyx_L1_error) - if (__pyx_t_7) { - - /* "fontTools/feaLib/lexer.py":80 - * - * if cur_char == "\n": - * self.pos_ += 1 # <<<<<<<<<<<<<< - * self.line_ += 1 - * self.line_start_ = self.pos_ - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_pos); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 80, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = __Pyx_PyInt_AddObjC(__pyx_t_2, __pyx_int_1, 1, 1, 0); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 80, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_pos, __pyx_t_4) < 0) __PYX_ERR(0, 80, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "fontTools/feaLib/lexer.py":81 - * if 
cur_char == "\n": - * self.pos_ += 1 - * self.line_ += 1 # <<<<<<<<<<<<<< - * self.line_start_ = self.pos_ - * return (Lexer.NEWLINE, None, location) - */ - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_line); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 81, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_2 = __Pyx_PyInt_AddObjC(__pyx_t_4, __pyx_int_1, 1, 1, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 81, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_line, __pyx_t_2) < 0) __PYX_ERR(0, 81, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/feaLib/lexer.py":82 - * self.pos_ += 1 - * self.line_ += 1 - * self.line_start_ = self.pos_ # <<<<<<<<<<<<<< - * return (Lexer.NEWLINE, None, location) - * if cur_char == "\r": - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_pos); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 82, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_line_start, __pyx_t_2) < 0) __PYX_ERR(0, 82, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/feaLib/lexer.py":83 - * self.line_ += 1 - * self.line_start_ = self.pos_ - * return (Lexer.NEWLINE, None, location) # <<<<<<<<<<<<<< - * if cur_char == "\r": - * self.pos_ += 2 if next_char == "\n" else 1 - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_Lexer); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 83, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_NEWLINE); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 83, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 83, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GIVEREF(__pyx_t_4); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_4)) __PYX_ERR(0, 83, __pyx_L1_error); - __Pyx_INCREF(Py_None); - __Pyx_GIVEREF(Py_None); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 1, Py_None)) __PYX_ERR(0, 83, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_location); - __Pyx_GIVEREF(__pyx_v_location); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 2, __pyx_v_location)) __PYX_ERR(0, 83, __pyx_L1_error); - __pyx_t_4 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "fontTools/feaLib/lexer.py":79 - * next_char = text[start + 1] if start + 1 < limit else None - * - * if cur_char == "\n": # <<<<<<<<<<<<<< - * self.pos_ += 1 - * self.line_ += 1 - */ - } - - /* "fontTools/feaLib/lexer.py":84 - * self.line_start_ = self.pos_ - * return (Lexer.NEWLINE, None, location) - * if cur_char == "\r": # <<<<<<<<<<<<<< - * self.pos_ += 2 if next_char == "\n" else 1 - * self.line_ += 1 - */ - __pyx_t_7 = (__Pyx_PyUnicode_Equals(__pyx_v_cur_char, __pyx_kp_u__2, Py_EQ)); if (unlikely((__pyx_t_7 < 0))) __PYX_ERR(0, 84, __pyx_L1_error) - if (__pyx_t_7) { - - /* "fontTools/feaLib/lexer.py":85 - * return (Lexer.NEWLINE, None, location) - * if cur_char == "\r": - * self.pos_ += 2 if next_char == "\n" else 1 # <<<<<<<<<<<<<< - * self.line_ += 1 - * self.line_start_ = self.pos_ - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_pos); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 85, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_7 = (__Pyx_PyUnicode_Equals(__pyx_v_next_char, __pyx_kp_u_, Py_EQ)); if (unlikely((__pyx_t_7 < 0))) __PYX_ERR(0, 85, __pyx_L1_error) - if (__pyx_t_7) { - __Pyx_INCREF(__pyx_int_2); - __pyx_t_4 
= __pyx_int_2; - } else { - __Pyx_INCREF(__pyx_int_1); - __pyx_t_4 = __pyx_int_1; - } - __pyx_t_3 = PyNumber_InPlaceAdd(__pyx_t_2, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 85, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_pos, __pyx_t_3) < 0) __PYX_ERR(0, 85, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/feaLib/lexer.py":86 - * if cur_char == "\r": - * self.pos_ += 2 if next_char == "\n" else 1 - * self.line_ += 1 # <<<<<<<<<<<<<< - * self.line_start_ = self.pos_ - * return (Lexer.NEWLINE, None, location) - */ - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_line); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 86, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyInt_AddObjC(__pyx_t_3, __pyx_int_1, 1, 1, 0); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 86, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_line, __pyx_t_4) < 0) __PYX_ERR(0, 86, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "fontTools/feaLib/lexer.py":87 - * self.pos_ += 2 if next_char == "\n" else 1 - * self.line_ += 1 - * self.line_start_ = self.pos_ # <<<<<<<<<<<<<< - * return (Lexer.NEWLINE, None, location) - * if cur_char == "#": - */ - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_pos); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 87, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_line_start, __pyx_t_4) < 0) __PYX_ERR(0, 87, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "fontTools/feaLib/lexer.py":88 - * self.line_ += 1 - * self.line_start_ = self.pos_ - * return (Lexer.NEWLINE, None, location) # <<<<<<<<<<<<<< - * if cur_char == "#": - * self.scan_until_(Lexer.CHAR_NEWLINE_) - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_Lexer); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 88, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_NEWLINE); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 88, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = PyTuple_New(3); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 88, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_GIVEREF(__pyx_t_3); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_3)) __PYX_ERR(0, 88, __pyx_L1_error); - __Pyx_INCREF(Py_None); - __Pyx_GIVEREF(Py_None); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_4, 1, Py_None)) __PYX_ERR(0, 88, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_location); - __Pyx_GIVEREF(__pyx_v_location); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_4, 2, __pyx_v_location)) __PYX_ERR(0, 88, __pyx_L1_error); - __pyx_t_3 = 0; - __pyx_r = __pyx_t_4; - __pyx_t_4 = 0; - goto __pyx_L0; - - /* "fontTools/feaLib/lexer.py":84 - * self.line_start_ = self.pos_ - * return (Lexer.NEWLINE, None, location) - * if cur_char == "\r": # <<<<<<<<<<<<<< - * self.pos_ += 2 if next_char == "\n" else 1 - * self.line_ += 1 - */ - } - - /* "fontTools/feaLib/lexer.py":89 - * self.line_start_ = self.pos_ - * return (Lexer.NEWLINE, None, location) - * if cur_char == "#": # <<<<<<<<<<<<<< - * self.scan_until_(Lexer.CHAR_NEWLINE_) - * return (Lexer.COMMENT, text[start : self.pos_], location) - */ - __pyx_t_7 = (__Pyx_PyUnicode_Equals(__pyx_v_cur_char, __pyx_kp_u__3, Py_EQ)); if (unlikely((__pyx_t_7 < 0))) 
__PYX_ERR(0, 89, __pyx_L1_error) - if (__pyx_t_7) { - - /* "fontTools/feaLib/lexer.py":90 - * return (Lexer.NEWLINE, None, location) - * if cur_char == "#": - * self.scan_until_(Lexer.CHAR_NEWLINE_) # <<<<<<<<<<<<<< - * return (Lexer.COMMENT, text[start : self.pos_], location) - * - */ - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_scan_until); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 90, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_Lexer); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 90, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_CHAR_NEWLINE); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 90, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = NULL; - __pyx_t_5 = 0; - #if CYTHON_UNPACK_METHODS - if (likely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_5 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_2, __pyx_t_1}; - __pyx_t_4 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 90, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "fontTools/feaLib/lexer.py":91 - * if cur_char == "#": - * self.scan_until_(Lexer.CHAR_NEWLINE_) - * return (Lexer.COMMENT, text[start : self.pos_], location) # <<<<<<<<<<<<<< - * - * if self.mode_ is Lexer.MODE_FILENAME_: - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_Lexer); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 91, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_COMMENT); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 91, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_pos); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 91, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_1 = __Pyx_PyObject_GetSlice(__pyx_v_text, 0, 0, &__pyx_v_start, &__pyx_t_4, NULL, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 91, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = PyTuple_New(3); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 91, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_GIVEREF(__pyx_t_3); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_3)) __PYX_ERR(0, 91, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_1); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_1)) __PYX_ERR(0, 91, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_location); - __Pyx_GIVEREF(__pyx_v_location); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_4, 2, __pyx_v_location)) __PYX_ERR(0, 91, __pyx_L1_error); - __pyx_t_3 = 0; - __pyx_t_1 = 0; - __pyx_r = __pyx_t_4; - __pyx_t_4 = 0; - goto __pyx_L0; - - /* "fontTools/feaLib/lexer.py":89 - * self.line_start_ = self.pos_ - * return (Lexer.NEWLINE, None, location) - * if cur_char == "#": # <<<<<<<<<<<<<< - * self.scan_until_(Lexer.CHAR_NEWLINE_) - * return (Lexer.COMMENT, text[start : self.pos_], location) - */ - } - - /* "fontTools/feaLib/lexer.py":93 - * return (Lexer.COMMENT, text[start : self.pos_], location) - * - * if 
self.mode_ is Lexer.MODE_FILENAME_: # <<<<<<<<<<<<<< - * if cur_char != "(": - * raise FeatureLibError("Expected '(' before file name", location) - */ - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_mode); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 93, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_Lexer); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 93, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_MODE_FILENAME); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 93, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_7 = (__pyx_t_4 == __pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__pyx_t_7) { - - /* "fontTools/feaLib/lexer.py":94 - * - * if self.mode_ is Lexer.MODE_FILENAME_: - * if cur_char != "(": # <<<<<<<<<<<<<< - * raise FeatureLibError("Expected '(' before file name", location) - * self.scan_until_(")") - */ - __pyx_t_7 = (__Pyx_PyUnicode_Equals(__pyx_v_cur_char, __pyx_kp_u__4, Py_NE)); if (unlikely((__pyx_t_7 < 0))) __PYX_ERR(0, 94, __pyx_L1_error) - if (unlikely(__pyx_t_7)) { - - /* "fontTools/feaLib/lexer.py":95 - * if self.mode_ is Lexer.MODE_FILENAME_: - * if cur_char != "(": - * raise FeatureLibError("Expected '(' before file name", location) # <<<<<<<<<<<<<< - * self.scan_until_(")") - * cur_char = text[self.pos_] if self.pos_ < limit else None - */ - __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_FeatureLibError); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 95, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_1 = NULL; - __pyx_t_5 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_4))) { - __pyx_t_1 = PyMethod_GET_SELF(__pyx_t_4); - if (likely(__pyx_t_1)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4); - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_4, function); - __pyx_t_5 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[3] = {__pyx_t_1, __pyx_kp_u_Expected_before_file_name, __pyx_v_location}; - __pyx_t_3 = __Pyx_PyObject_FastCall(__pyx_t_4, __pyx_callargs+1-__pyx_t_5, 2+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 95, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - } - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(0, 95, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":94 - * - * if self.mode_ is Lexer.MODE_FILENAME_: - * if cur_char != "(": # <<<<<<<<<<<<<< - * raise FeatureLibError("Expected '(' before file name", location) - * self.scan_until_(")") - */ - } - - /* "fontTools/feaLib/lexer.py":96 - * if cur_char != "(": - * raise FeatureLibError("Expected '(' before file name", location) - * self.scan_until_(")") # <<<<<<<<<<<<<< - * cur_char = text[self.pos_] if self.pos_ < limit else None - * if cur_char != ")": - */ - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_scan_until); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 96, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_1 = NULL; - __pyx_t_5 = 0; - #if CYTHON_UNPACK_METHODS - if (likely(PyMethod_Check(__pyx_t_4))) { - __pyx_t_1 = PyMethod_GET_SELF(__pyx_t_4); - if (likely(__pyx_t_1)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4); - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_4, function); - __pyx_t_5 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[2] = 
{__pyx_t_1, __pyx_kp_u__5}; - __pyx_t_3 = __Pyx_PyObject_FastCall(__pyx_t_4, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 96, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - } - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/feaLib/lexer.py":97 - * raise FeatureLibError("Expected '(' before file name", location) - * self.scan_until_(")") - * cur_char = text[self.pos_] if self.pos_ < limit else None # <<<<<<<<<<<<<< - * if cur_char != ")": - * raise FeatureLibError("Expected ')' after file name", location) - */ - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_pos); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 97, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_1 = PyInt_FromSsize_t(__pyx_v_limit); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 97, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyObject_RichCompare(__pyx_t_4, __pyx_t_1, Py_LT); __Pyx_XGOTREF(__pyx_t_2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 97, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_7 = __Pyx_PyObject_IsTrue(__pyx_t_2); if (unlikely((__pyx_t_7 < 0))) __PYX_ERR(0, 97, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (__pyx_t_7) { - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_pos); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 97, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = __Pyx_PyObject_GetItem(__pyx_v_text, __pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 97, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_3 = __pyx_t_1; - __pyx_t_1 = 0; - } else { - __Pyx_INCREF(Py_None); - __pyx_t_3 = Py_None; - } - __Pyx_DECREF_SET(__pyx_v_cur_char, __pyx_t_3); - __pyx_t_3 = 0; - - /* "fontTools/feaLib/lexer.py":98 - * self.scan_until_(")") - * cur_char = text[self.pos_] if self.pos_ < limit else None - * if cur_char != ")": # <<<<<<<<<<<<<< - * raise FeatureLibError("Expected ')' after file name", location) - * self.pos_ += 1 - */ - __pyx_t_7 = (__Pyx_PyUnicode_Equals(__pyx_v_cur_char, __pyx_kp_u__5, Py_NE)); if (unlikely((__pyx_t_7 < 0))) __PYX_ERR(0, 98, __pyx_L1_error) - if (unlikely(__pyx_t_7)) { - - /* "fontTools/feaLib/lexer.py":99 - * cur_char = text[self.pos_] if self.pos_ < limit else None - * if cur_char != ")": - * raise FeatureLibError("Expected ')' after file name", location) # <<<<<<<<<<<<<< - * self.pos_ += 1 - * self.mode_ = Lexer.MODE_NORMAL_ - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_FeatureLibError); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 99, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = NULL; - __pyx_t_5 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_1))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_1); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_1, function); - __pyx_t_5 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[3] = {__pyx_t_2, __pyx_kp_u_Expected_after_file_name, __pyx_v_location}; - __pyx_t_3 = __Pyx_PyObject_FastCall(__pyx_t_1, __pyx_callargs+1-__pyx_t_5, 2+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 99, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(0, 99, 
__pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":98 - * self.scan_until_(")") - * cur_char = text[self.pos_] if self.pos_ < limit else None - * if cur_char != ")": # <<<<<<<<<<<<<< - * raise FeatureLibError("Expected ')' after file name", location) - * self.pos_ += 1 - */ - } - - /* "fontTools/feaLib/lexer.py":100 - * if cur_char != ")": - * raise FeatureLibError("Expected ')' after file name", location) - * self.pos_ += 1 # <<<<<<<<<<<<<< - * self.mode_ = Lexer.MODE_NORMAL_ - * return (Lexer.FILENAME, text[start + 1 : self.pos_ - 1], location) - */ - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_pos); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 100, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = __Pyx_PyInt_AddObjC(__pyx_t_3, __pyx_int_1, 1, 1, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 100, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_pos, __pyx_t_1) < 0) __PYX_ERR(0, 100, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "fontTools/feaLib/lexer.py":101 - * raise FeatureLibError("Expected ')' after file name", location) - * self.pos_ += 1 - * self.mode_ = Lexer.MODE_NORMAL_ # <<<<<<<<<<<<<< - * return (Lexer.FILENAME, text[start + 1 : self.pos_ - 1], location) - * - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_Lexer); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 101, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_MODE_NORMAL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 101, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_mode, __pyx_t_3) < 0) __PYX_ERR(0, 101, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/feaLib/lexer.py":102 - * self.pos_ += 1 - * self.mode_ = Lexer.MODE_NORMAL_ - * return (Lexer.FILENAME, text[start + 1 : self.pos_ - 1], location) # <<<<<<<<<<<<<< - * - * if cur_char == "\\" and next_char in Lexer.CHAR_DIGIT_: - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_Lexer); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 102, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_FILENAME); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 102, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyInt_AddObjC(__pyx_v_start, __pyx_int_1, 1, 0, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 102, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_pos); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 102, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = __Pyx_PyInt_SubtractObjC(__pyx_t_2, __pyx_int_1, 1, 0, 0); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 102, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_GetSlice(__pyx_v_text, 0, 0, &__pyx_t_3, &__pyx_t_4, NULL, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 102, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = PyTuple_New(3); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 102, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_GIVEREF(__pyx_t_1); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_1)) __PYX_ERR(0, 102, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_2); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_2)) 
__PYX_ERR(0, 102, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_location); - __Pyx_GIVEREF(__pyx_v_location); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_4, 2, __pyx_v_location)) __PYX_ERR(0, 102, __pyx_L1_error); - __pyx_t_1 = 0; - __pyx_t_2 = 0; - __pyx_r = __pyx_t_4; - __pyx_t_4 = 0; - goto __pyx_L0; - - /* "fontTools/feaLib/lexer.py":93 - * return (Lexer.COMMENT, text[start : self.pos_], location) - * - * if self.mode_ is Lexer.MODE_FILENAME_: # <<<<<<<<<<<<<< - * if cur_char != "(": - * raise FeatureLibError("Expected '(' before file name", location) - */ - } - - /* "fontTools/feaLib/lexer.py":104 - * return (Lexer.FILENAME, text[start + 1 : self.pos_ - 1], location) - * - * if cur_char == "\\" and next_char in Lexer.CHAR_DIGIT_: # <<<<<<<<<<<<<< - * self.pos_ += 1 - * self.scan_over_(Lexer.CHAR_DIGIT_) - */ - __pyx_t_8 = (__Pyx_PyUnicode_Equals(__pyx_v_cur_char, __pyx_kp_u__6, Py_EQ)); if (unlikely((__pyx_t_8 < 0))) __PYX_ERR(0, 104, __pyx_L1_error) - if (__pyx_t_8) { - } else { - __pyx_t_7 = __pyx_t_8; - goto __pyx_L11_bool_binop_done; - } - __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_Lexer); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 104, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_CHAR_DIGIT); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 104, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_8 = (__Pyx_PySequence_ContainsTF(__pyx_v_next_char, __pyx_t_2, Py_EQ)); if (unlikely((__pyx_t_8 < 0))) __PYX_ERR(0, 104, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_7 = __pyx_t_8; - __pyx_L11_bool_binop_done:; - if (__pyx_t_7) { - - /* "fontTools/feaLib/lexer.py":105 - * - * if cur_char == "\\" and next_char in Lexer.CHAR_DIGIT_: - * self.pos_ += 1 # <<<<<<<<<<<<<< - * self.scan_over_(Lexer.CHAR_DIGIT_) - * return (Lexer.CID, int(text[start + 1 : self.pos_], 10), location) - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_pos); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 105, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = __Pyx_PyInt_AddObjC(__pyx_t_2, __pyx_int_1, 1, 1, 0); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 105, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_pos, __pyx_t_4) < 0) __PYX_ERR(0, 105, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "fontTools/feaLib/lexer.py":106 - * if cur_char == "\\" and next_char in Lexer.CHAR_DIGIT_: - * self.pos_ += 1 - * self.scan_over_(Lexer.CHAR_DIGIT_) # <<<<<<<<<<<<<< - * return (Lexer.CID, int(text[start + 1 : self.pos_], 10), location) - * if cur_char == "@": - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_scan_over); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 106, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_Lexer); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 106, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_CHAR_DIGIT); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 106, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = NULL; - __pyx_t_5 = 0; - #if CYTHON_UNPACK_METHODS - if (likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_1 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_1)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_5 = 1; - 
} - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_1, __pyx_t_3}; - __pyx_t_4 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 106, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "fontTools/feaLib/lexer.py":107 - * self.pos_ += 1 - * self.scan_over_(Lexer.CHAR_DIGIT_) - * return (Lexer.CID, int(text[start + 1 : self.pos_], 10), location) # <<<<<<<<<<<<<< - * if cur_char == "@": - * self.pos_ += 1 - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_Lexer); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 107, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_CID); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 107, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PyInt_AddObjC(__pyx_v_start, __pyx_int_1, 1, 0, 0); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 107, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_pos); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 107, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = __Pyx_PyObject_GetSlice(__pyx_v_text, 0, 0, &__pyx_t_4, &__pyx_t_3, NULL, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 107, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 107, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_1); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_1)) __PYX_ERR(0, 107, __pyx_L1_error); - __Pyx_INCREF(__pyx_int_10); - __Pyx_GIVEREF(__pyx_int_10); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_int_10)) __PYX_ERR(0, 107, __pyx_L1_error); - __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)(&PyInt_Type)), __pyx_t_3, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 107, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 107, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_2); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_2)) __PYX_ERR(0, 107, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_1); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_1)) __PYX_ERR(0, 107, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_location); - __Pyx_GIVEREF(__pyx_v_location); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_v_location)) __PYX_ERR(0, 107, __pyx_L1_error); - __pyx_t_2 = 0; - __pyx_t_1 = 0; - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - - /* "fontTools/feaLib/lexer.py":104 - * return (Lexer.FILENAME, text[start + 1 : self.pos_ - 1], location) - * - * if cur_char == "\\" and next_char in Lexer.CHAR_DIGIT_: # <<<<<<<<<<<<<< - * self.pos_ += 1 - * self.scan_over_(Lexer.CHAR_DIGIT_) - */ - } - - /* "fontTools/feaLib/lexer.py":108 - * self.scan_over_(Lexer.CHAR_DIGIT_) - * return (Lexer.CID, int(text[start + 1 : self.pos_], 10), location) - * if cur_char == "@": # <<<<<<<<<<<<<< - * self.pos_ += 1 - * self.scan_over_(Lexer.CHAR_NAME_CONTINUATION_) - */ - __pyx_t_7 = (__Pyx_PyUnicode_Equals(__pyx_v_cur_char, __pyx_kp_u__7, Py_EQ)); if (unlikely((__pyx_t_7 < 0))) __PYX_ERR(0, 108, __pyx_L1_error) - if (__pyx_t_7) { - - /* 
"fontTools/feaLib/lexer.py":109 - * return (Lexer.CID, int(text[start + 1 : self.pos_], 10), location) - * if cur_char == "@": - * self.pos_ += 1 # <<<<<<<<<<<<<< - * self.scan_over_(Lexer.CHAR_NAME_CONTINUATION_) - * glyphclass = text[start + 1 : self.pos_] - */ - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_pos); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 109, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = __Pyx_PyInt_AddObjC(__pyx_t_3, __pyx_int_1, 1, 1, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 109, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_pos, __pyx_t_1) < 0) __PYX_ERR(0, 109, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "fontTools/feaLib/lexer.py":110 - * if cur_char == "@": - * self.pos_ += 1 - * self.scan_over_(Lexer.CHAR_NAME_CONTINUATION_) # <<<<<<<<<<<<<< - * glyphclass = text[start + 1 : self.pos_] - * if len(glyphclass) < 1: - */ - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_scan_over); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 110, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_Lexer); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 110, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_CHAR_NAME_CONTINUATION); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 110, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = NULL; - __pyx_t_5 = 0; - #if CYTHON_UNPACK_METHODS - if (likely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_5 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_2, __pyx_t_4}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 110, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "fontTools/feaLib/lexer.py":111 - * self.pos_ += 1 - * self.scan_over_(Lexer.CHAR_NAME_CONTINUATION_) - * glyphclass = text[start + 1 : self.pos_] # <<<<<<<<<<<<<< - * if len(glyphclass) < 1: - * raise FeatureLibError("Expected glyph class name", location) - */ - __pyx_t_1 = __Pyx_PyInt_AddObjC(__pyx_v_start, __pyx_int_1, 1, 0, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 111, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_pos); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 111, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyObject_GetSlice(__pyx_v_text, 0, 0, &__pyx_t_1, &__pyx_t_3, NULL, 0, 0, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 111, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_glyphclass = __pyx_t_4; - __pyx_t_4 = 0; - - /* "fontTools/feaLib/lexer.py":112 - * self.scan_over_(Lexer.CHAR_NAME_CONTINUATION_) - * glyphclass = text[start + 1 : self.pos_] - * if len(glyphclass) < 1: # <<<<<<<<<<<<<< - * raise FeatureLibError("Expected glyph class name", location) - * if len(glyphclass) > 63: - */ - __pyx_t_6 = PyObject_Length(__pyx_v_glyphclass); if 
(unlikely(__pyx_t_6 == ((Py_ssize_t)-1))) __PYX_ERR(0, 112, __pyx_L1_error) - __pyx_t_7 = (__pyx_t_6 < 1); - if (unlikely(__pyx_t_7)) { - - /* "fontTools/feaLib/lexer.py":113 - * glyphclass = text[start + 1 : self.pos_] - * if len(glyphclass) < 1: - * raise FeatureLibError("Expected glyph class name", location) # <<<<<<<<<<<<<< - * if len(glyphclass) > 63: - * raise FeatureLibError( - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_FeatureLibError); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 113, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = NULL; - __pyx_t_5 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_1 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_1)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_5 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[3] = {__pyx_t_1, __pyx_kp_u_Expected_glyph_class_name, __pyx_v_location}; - __pyx_t_4 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_5, 2+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 113, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __Pyx_Raise(__pyx_t_4, 0, 0, 0); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __PYX_ERR(0, 113, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":112 - * self.scan_over_(Lexer.CHAR_NAME_CONTINUATION_) - * glyphclass = text[start + 1 : self.pos_] - * if len(glyphclass) < 1: # <<<<<<<<<<<<<< - * raise FeatureLibError("Expected glyph class name", location) - * if len(glyphclass) > 63: - */ - } - - /* "fontTools/feaLib/lexer.py":114 - * if len(glyphclass) < 1: - * raise FeatureLibError("Expected glyph class name", location) - * if len(glyphclass) > 63: # <<<<<<<<<<<<<< - * raise FeatureLibError( - * "Glyph class names must not be longer than 63 characters", location - */ - __pyx_t_6 = PyObject_Length(__pyx_v_glyphclass); if (unlikely(__pyx_t_6 == ((Py_ssize_t)-1))) __PYX_ERR(0, 114, __pyx_L1_error) - __pyx_t_7 = (__pyx_t_6 > 63); - if (unlikely(__pyx_t_7)) { - - /* "fontTools/feaLib/lexer.py":115 - * raise FeatureLibError("Expected glyph class name", location) - * if len(glyphclass) > 63: - * raise FeatureLibError( # <<<<<<<<<<<<<< - * "Glyph class names must not be longer than 63 characters", location - * ) - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_FeatureLibError); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 115, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - - /* "fontTools/feaLib/lexer.py":116 - * if len(glyphclass) > 63: - * raise FeatureLibError( - * "Glyph class names must not be longer than 63 characters", location # <<<<<<<<<<<<<< - * ) - * if not Lexer.RE_GLYPHCLASS.match(glyphclass): - */ - __pyx_t_1 = NULL; - __pyx_t_5 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_1 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_1)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_5 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[3] = {__pyx_t_1, __pyx_kp_u_Glyph_class_names_must_not_be_lo, __pyx_v_location}; - __pyx_t_4 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_5, 2+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 115, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 
0; - } - __Pyx_Raise(__pyx_t_4, 0, 0, 0); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __PYX_ERR(0, 115, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":114 - * if len(glyphclass) < 1: - * raise FeatureLibError("Expected glyph class name", location) - * if len(glyphclass) > 63: # <<<<<<<<<<<<<< - * raise FeatureLibError( - * "Glyph class names must not be longer than 63 characters", location - */ - } - - /* "fontTools/feaLib/lexer.py":118 - * "Glyph class names must not be longer than 63 characters", location - * ) - * if not Lexer.RE_GLYPHCLASS.match(glyphclass): # <<<<<<<<<<<<<< - * raise FeatureLibError( - * "Glyph class names must consist of letters, digits, " - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_Lexer); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 118, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_RE_GLYPHCLASS); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 118, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_match); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 118, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = NULL; - __pyx_t_5 = 0; - #if CYTHON_UNPACK_METHODS - if (likely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_1 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_1)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_5 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_1, __pyx_v_glyphclass}; - __pyx_t_4 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 118, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_t_7 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely((__pyx_t_7 < 0))) __PYX_ERR(0, 118, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_8 = (!__pyx_t_7); - if (unlikely(__pyx_t_8)) { - - /* "fontTools/feaLib/lexer.py":119 - * ) - * if not Lexer.RE_GLYPHCLASS.match(glyphclass): - * raise FeatureLibError( # <<<<<<<<<<<<<< - * "Glyph class names must consist of letters, digits, " - * "underscore, period or hyphen", - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_FeatureLibError); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 119, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - - /* "fontTools/feaLib/lexer.py":122 - * "Glyph class names must consist of letters, digits, " - * "underscore, period or hyphen", - * location, # <<<<<<<<<<<<<< - * ) - * return (Lexer.GLYPHCLASS, glyphclass, location) - */ - __pyx_t_1 = NULL; - __pyx_t_5 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_1 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_1)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_5 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[3] = {__pyx_t_1, __pyx_kp_u_Glyph_class_names_must_consist_o, __pyx_v_location}; - __pyx_t_4 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_5, 2+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 119, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __Pyx_Raise(__pyx_t_4, 0, 0, 0); - 
__Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __PYX_ERR(0, 119, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":118 - * "Glyph class names must not be longer than 63 characters", location - * ) - * if not Lexer.RE_GLYPHCLASS.match(glyphclass): # <<<<<<<<<<<<<< - * raise FeatureLibError( - * "Glyph class names must consist of letters, digits, " - */ - } - - /* "fontTools/feaLib/lexer.py":124 - * location, - * ) - * return (Lexer.GLYPHCLASS, glyphclass, location) # <<<<<<<<<<<<<< - * if cur_char in Lexer.CHAR_NAME_START_: - * self.pos_ += 1 - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_Lexer); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 124, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_GLYPHCLASS); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 124, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = PyTuple_New(3); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 124, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_GIVEREF(__pyx_t_3); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_3)) __PYX_ERR(0, 124, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_glyphclass); - __Pyx_GIVEREF(__pyx_v_glyphclass); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_v_glyphclass)) __PYX_ERR(0, 124, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_location); - __Pyx_GIVEREF(__pyx_v_location); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_4, 2, __pyx_v_location)) __PYX_ERR(0, 124, __pyx_L1_error); - __pyx_t_3 = 0; - __pyx_r = __pyx_t_4; - __pyx_t_4 = 0; - goto __pyx_L0; - - /* "fontTools/feaLib/lexer.py":108 - * self.scan_over_(Lexer.CHAR_DIGIT_) - * return (Lexer.CID, int(text[start + 1 : self.pos_], 10), location) - * if cur_char == "@": # <<<<<<<<<<<<<< - * self.pos_ += 1 - * self.scan_over_(Lexer.CHAR_NAME_CONTINUATION_) - */ - } - - /* "fontTools/feaLib/lexer.py":125 - * ) - * return (Lexer.GLYPHCLASS, glyphclass, location) - * if cur_char in Lexer.CHAR_NAME_START_: # <<<<<<<<<<<<<< - * self.pos_ += 1 - * self.scan_over_(Lexer.CHAR_NAME_CONTINUATION_) - */ - __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_Lexer); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 125, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_CHAR_NAME_START); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 125, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_8 = (__Pyx_PySequence_ContainsTF(__pyx_v_cur_char, __pyx_t_3, Py_EQ)); if (unlikely((__pyx_t_8 < 0))) __PYX_ERR(0, 125, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__pyx_t_8) { - - /* "fontTools/feaLib/lexer.py":126 - * return (Lexer.GLYPHCLASS, glyphclass, location) - * if cur_char in Lexer.CHAR_NAME_START_: - * self.pos_ += 1 # <<<<<<<<<<<<<< - * self.scan_over_(Lexer.CHAR_NAME_CONTINUATION_) - * token = text[start : self.pos_] - */ - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_pos); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 126, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyInt_AddObjC(__pyx_t_3, __pyx_int_1, 1, 1, 0); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 126, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_pos, __pyx_t_4) < 0) __PYX_ERR(0, 126, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "fontTools/feaLib/lexer.py":127 - * if cur_char in Lexer.CHAR_NAME_START_: - * self.pos_ += 1 - * self.scan_over_(Lexer.CHAR_NAME_CONTINUATION_) # 
<<<<<<<<<<<<<< - * token = text[start : self.pos_] - * if token == "include": - */ - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_scan_over); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 127, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_Lexer); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 127, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_CHAR_NAME_CONTINUATION); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 127, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = NULL; - __pyx_t_5 = 0; - #if CYTHON_UNPACK_METHODS - if (likely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_1 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_1)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_5 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_1, __pyx_t_2}; - __pyx_t_4 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 127, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "fontTools/feaLib/lexer.py":128 - * self.pos_ += 1 - * self.scan_over_(Lexer.CHAR_NAME_CONTINUATION_) - * token = text[start : self.pos_] # <<<<<<<<<<<<<< - * if token == "include": - * self.mode_ = Lexer.MODE_FILENAME_ - */ - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_pos); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 128, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_3 = __Pyx_PyObject_GetSlice(__pyx_v_text, 0, 0, &__pyx_v_start, &__pyx_t_4, NULL, 0, 0, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 128, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_v_token = __pyx_t_3; - __pyx_t_3 = 0; - - /* "fontTools/feaLib/lexer.py":129 - * self.scan_over_(Lexer.CHAR_NAME_CONTINUATION_) - * token = text[start : self.pos_] - * if token == "include": # <<<<<<<<<<<<<< - * self.mode_ = Lexer.MODE_FILENAME_ - * return (Lexer.NAME, token, location) - */ - __pyx_t_8 = (__Pyx_PyUnicode_Equals(__pyx_v_token, __pyx_n_u_include, Py_EQ)); if (unlikely((__pyx_t_8 < 0))) __PYX_ERR(0, 129, __pyx_L1_error) - if (__pyx_t_8) { - - /* "fontTools/feaLib/lexer.py":130 - * token = text[start : self.pos_] - * if token == "include": - * self.mode_ = Lexer.MODE_FILENAME_ # <<<<<<<<<<<<<< - * return (Lexer.NAME, token, location) - * if cur_char == "0" and next_char in "xX": - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_Lexer); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 130, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_MODE_FILENAME); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 130, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_mode, __pyx_t_4) < 0) __PYX_ERR(0, 130, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "fontTools/feaLib/lexer.py":129 - * self.scan_over_(Lexer.CHAR_NAME_CONTINUATION_) - * token = text[start : self.pos_] - * if token == "include": # <<<<<<<<<<<<<< - * self.mode_ = Lexer.MODE_FILENAME_ - * return (Lexer.NAME, token, location) - */ - } - - /* "fontTools/feaLib/lexer.py":131 - * if token == "include": - * 
self.mode_ = Lexer.MODE_FILENAME_ - * return (Lexer.NAME, token, location) # <<<<<<<<<<<<<< - * if cur_char == "0" and next_char in "xX": - * self.pos_ += 2 - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_Lexer); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 131, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_NAME); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 131, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = PyTuple_New(3); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 131, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_GIVEREF(__pyx_t_3); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_3)) __PYX_ERR(0, 131, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_token); - __Pyx_GIVEREF(__pyx_v_token); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_v_token)) __PYX_ERR(0, 131, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_location); - __Pyx_GIVEREF(__pyx_v_location); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_4, 2, __pyx_v_location)) __PYX_ERR(0, 131, __pyx_L1_error); - __pyx_t_3 = 0; - __pyx_r = __pyx_t_4; - __pyx_t_4 = 0; - goto __pyx_L0; - - /* "fontTools/feaLib/lexer.py":125 - * ) - * return (Lexer.GLYPHCLASS, glyphclass, location) - * if cur_char in Lexer.CHAR_NAME_START_: # <<<<<<<<<<<<<< - * self.pos_ += 1 - * self.scan_over_(Lexer.CHAR_NAME_CONTINUATION_) - */ - } - - /* "fontTools/feaLib/lexer.py":132 - * self.mode_ = Lexer.MODE_FILENAME_ - * return (Lexer.NAME, token, location) - * if cur_char == "0" and next_char in "xX": # <<<<<<<<<<<<<< - * self.pos_ += 2 - * self.scan_over_(Lexer.CHAR_HEXDIGIT_) - */ - __pyx_t_7 = (__Pyx_PyUnicode_Equals(__pyx_v_cur_char, __pyx_kp_u_0, Py_EQ)); if (unlikely((__pyx_t_7 < 0))) __PYX_ERR(0, 132, __pyx_L1_error) - if (__pyx_t_7) { - } else { - __pyx_t_8 = __pyx_t_7; - goto __pyx_L20_bool_binop_done; - } - __pyx_t_7 = (__Pyx_PyUnicode_ContainsTF(__pyx_v_next_char, __pyx_n_u_xX, Py_EQ)); if (unlikely((__pyx_t_7 < 0))) __PYX_ERR(0, 132, __pyx_L1_error) - __pyx_t_8 = __pyx_t_7; - __pyx_L20_bool_binop_done:; - if (__pyx_t_8) { - - /* "fontTools/feaLib/lexer.py":133 - * return (Lexer.NAME, token, location) - * if cur_char == "0" and next_char in "xX": - * self.pos_ += 2 # <<<<<<<<<<<<<< - * self.scan_over_(Lexer.CHAR_HEXDIGIT_) - * return (Lexer.HEXADECIMAL, int(text[start : self.pos_], 16), location) - */ - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_pos); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 133, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_3 = __Pyx_PyInt_AddObjC(__pyx_t_4, __pyx_int_2, 2, 1, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 133, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_pos, __pyx_t_3) < 0) __PYX_ERR(0, 133, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/feaLib/lexer.py":134 - * if cur_char == "0" and next_char in "xX": - * self.pos_ += 2 - * self.scan_over_(Lexer.CHAR_HEXDIGIT_) # <<<<<<<<<<<<<< - * return (Lexer.HEXADECIMAL, int(text[start : self.pos_], 16), location) - * if cur_char == "0" and next_char in Lexer.CHAR_DIGIT_: - */ - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_scan_over); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 134, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_Lexer); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 134, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, 
__pyx_n_s_CHAR_HEXDIGIT); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 134, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = NULL; - __pyx_t_5 = 0; - #if CYTHON_UNPACK_METHODS - if (likely(PyMethod_Check(__pyx_t_4))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_4); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_4, function); - __pyx_t_5 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_2, __pyx_t_1}; - __pyx_t_3 = __Pyx_PyObject_FastCall(__pyx_t_4, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 134, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - } - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/feaLib/lexer.py":135 - * self.pos_ += 2 - * self.scan_over_(Lexer.CHAR_HEXDIGIT_) - * return (Lexer.HEXADECIMAL, int(text[start : self.pos_], 16), location) # <<<<<<<<<<<<<< - * if cur_char == "0" and next_char in Lexer.CHAR_DIGIT_: - * self.scan_over_(Lexer.CHAR_DIGIT_) - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_Lexer); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 135, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_HEXADECIMAL); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 135, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_pos); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 135, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = __Pyx_PyObject_GetSlice(__pyx_v_text, 0, 0, &__pyx_v_start, &__pyx_t_3, NULL, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 135, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 135, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_1); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_1)) __PYX_ERR(0, 135, __pyx_L1_error); - __Pyx_INCREF(__pyx_int_16); - __Pyx_GIVEREF(__pyx_int_16); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_int_16)) __PYX_ERR(0, 135, __pyx_L1_error); - __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)(&PyInt_Type)), __pyx_t_3, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 135, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 135, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_4); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_4)) __PYX_ERR(0, 135, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_1); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_1)) __PYX_ERR(0, 135, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_location); - __Pyx_GIVEREF(__pyx_v_location); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_v_location)) __PYX_ERR(0, 135, __pyx_L1_error); - __pyx_t_4 = 0; - __pyx_t_1 = 0; - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - - /* "fontTools/feaLib/lexer.py":132 - * self.mode_ = Lexer.MODE_FILENAME_ - * return (Lexer.NAME, token, location) - * if cur_char == "0" and next_char in "xX": # <<<<<<<<<<<<<< - * self.pos_ += 2 - * self.scan_over_(Lexer.CHAR_HEXDIGIT_) - */ - } - - /* "fontTools/feaLib/lexer.py":136 - * self.scan_over_(Lexer.CHAR_HEXDIGIT_) - * 
return (Lexer.HEXADECIMAL, int(text[start : self.pos_], 16), location) - * if cur_char == "0" and next_char in Lexer.CHAR_DIGIT_: # <<<<<<<<<<<<<< - * self.scan_over_(Lexer.CHAR_DIGIT_) - * return (Lexer.OCTAL, int(text[start : self.pos_], 8), location) - */ - __pyx_t_7 = (__Pyx_PyUnicode_Equals(__pyx_v_cur_char, __pyx_kp_u_0, Py_EQ)); if (unlikely((__pyx_t_7 < 0))) __PYX_ERR(0, 136, __pyx_L1_error) - if (__pyx_t_7) { - } else { - __pyx_t_8 = __pyx_t_7; - goto __pyx_L23_bool_binop_done; - } - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_Lexer); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 136, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_CHAR_DIGIT); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 136, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_7 = (__Pyx_PySequence_ContainsTF(__pyx_v_next_char, __pyx_t_1, Py_EQ)); if (unlikely((__pyx_t_7 < 0))) __PYX_ERR(0, 136, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_8 = __pyx_t_7; - __pyx_L23_bool_binop_done:; - if (__pyx_t_8) { - - /* "fontTools/feaLib/lexer.py":137 - * return (Lexer.HEXADECIMAL, int(text[start : self.pos_], 16), location) - * if cur_char == "0" and next_char in Lexer.CHAR_DIGIT_: - * self.scan_over_(Lexer.CHAR_DIGIT_) # <<<<<<<<<<<<<< - * return (Lexer.OCTAL, int(text[start : self.pos_], 8), location) - * if cur_char in Lexer.CHAR_DIGIT_: - */ - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_scan_over); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 137, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_Lexer); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 137, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_CHAR_DIGIT); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 137, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = NULL; - __pyx_t_5 = 0; - #if CYTHON_UNPACK_METHODS - if (likely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_4)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_4); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_5 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_4, __pyx_t_2}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 137, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "fontTools/feaLib/lexer.py":138 - * if cur_char == "0" and next_char in Lexer.CHAR_DIGIT_: - * self.scan_over_(Lexer.CHAR_DIGIT_) - * return (Lexer.OCTAL, int(text[start : self.pos_], 8), location) # <<<<<<<<<<<<<< - * if cur_char in Lexer.CHAR_DIGIT_: - * self.scan_over_(Lexer.CHAR_DIGIT_) - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_Lexer); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 138, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_OCTAL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 138, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_pos); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 138, __pyx_L1_error) - 
__Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetSlice(__pyx_v_text, 0, 0, &__pyx_v_start, &__pyx_t_1, NULL, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 138, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyTuple_New(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 138, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GIVEREF(__pyx_t_2); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_2)) __PYX_ERR(0, 138, __pyx_L1_error); - __Pyx_INCREF(__pyx_int_8); - __Pyx_GIVEREF(__pyx_int_8); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_int_8)) __PYX_ERR(0, 138, __pyx_L1_error); - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_Call(((PyObject *)(&PyInt_Type)), __pyx_t_1, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 138, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyTuple_New(3); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 138, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GIVEREF(__pyx_t_3); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_3)) __PYX_ERR(0, 138, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_2); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_t_2)) __PYX_ERR(0, 138, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_location); - __Pyx_GIVEREF(__pyx_v_location); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 2, __pyx_v_location)) __PYX_ERR(0, 138, __pyx_L1_error); - __pyx_t_3 = 0; - __pyx_t_2 = 0; - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "fontTools/feaLib/lexer.py":136 - * self.scan_over_(Lexer.CHAR_HEXDIGIT_) - * return (Lexer.HEXADECIMAL, int(text[start : self.pos_], 16), location) - * if cur_char == "0" and next_char in Lexer.CHAR_DIGIT_: # <<<<<<<<<<<<<< - * self.scan_over_(Lexer.CHAR_DIGIT_) - * return (Lexer.OCTAL, int(text[start : self.pos_], 8), location) - */ - } - - /* "fontTools/feaLib/lexer.py":139 - * self.scan_over_(Lexer.CHAR_DIGIT_) - * return (Lexer.OCTAL, int(text[start : self.pos_], 8), location) - * if cur_char in Lexer.CHAR_DIGIT_: # <<<<<<<<<<<<<< - * self.scan_over_(Lexer.CHAR_DIGIT_) - * if self.pos_ >= limit or text[self.pos_] != ".": - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_Lexer); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 139, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_CHAR_DIGIT); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 139, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_8 = (__Pyx_PySequence_ContainsTF(__pyx_v_cur_char, __pyx_t_2, Py_EQ)); if (unlikely((__pyx_t_8 < 0))) __PYX_ERR(0, 139, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (__pyx_t_8) { - - /* "fontTools/feaLib/lexer.py":140 - * return (Lexer.OCTAL, int(text[start : self.pos_], 8), location) - * if cur_char in Lexer.CHAR_DIGIT_: - * self.scan_over_(Lexer.CHAR_DIGIT_) # <<<<<<<<<<<<<< - * if self.pos_ >= limit or text[self.pos_] != ".": - * return (Lexer.NUMBER, int(text[start : self.pos_], 10), location) - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_scan_over); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 140, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_Lexer); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 140, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_CHAR_DIGIT); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 140, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = NULL; - __pyx_t_5 = 0; 
- #if CYTHON_UNPACK_METHODS - if (likely(PyMethod_Check(__pyx_t_1))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_1); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_1, function); - __pyx_t_5 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_3, __pyx_t_4}; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_1, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 140, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/feaLib/lexer.py":141 - * if cur_char in Lexer.CHAR_DIGIT_: - * self.scan_over_(Lexer.CHAR_DIGIT_) - * if self.pos_ >= limit or text[self.pos_] != ".": # <<<<<<<<<<<<<< - * return (Lexer.NUMBER, int(text[start : self.pos_], 10), location) - * self.scan_over_(".") - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_pos); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 141, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyInt_FromSsize_t(__pyx_v_limit); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 141, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_4 = PyObject_RichCompare(__pyx_t_2, __pyx_t_1, Py_GE); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 141, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_7 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely((__pyx_t_7 < 0))) __PYX_ERR(0, 141, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (!__pyx_t_7) { - } else { - __pyx_t_8 = __pyx_t_7; - goto __pyx_L27_bool_binop_done; - } - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_pos); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 141, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_1 = __Pyx_PyObject_GetItem(__pyx_v_text, __pyx_t_4); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 141, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_7 = (__Pyx_PyUnicode_Equals(__pyx_t_1, __pyx_kp_u__8, Py_NE)); if (unlikely((__pyx_t_7 < 0))) __PYX_ERR(0, 141, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_8 = __pyx_t_7; - __pyx_L27_bool_binop_done:; - if (__pyx_t_8) { - - /* "fontTools/feaLib/lexer.py":142 - * self.scan_over_(Lexer.CHAR_DIGIT_) - * if self.pos_ >= limit or text[self.pos_] != ".": - * return (Lexer.NUMBER, int(text[start : self.pos_], 10), location) # <<<<<<<<<<<<<< - * self.scan_over_(".") - * self.scan_over_(Lexer.CHAR_DIGIT_) - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_Lexer); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 142, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_NUMBER); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 142, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_pos); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 142, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetSlice(__pyx_v_text, 0, 0, &__pyx_v_start, &__pyx_t_1, NULL, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 142, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyTuple_New(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 142, __pyx_L1_error) - 
__Pyx_GOTREF(__pyx_t_1); - __Pyx_GIVEREF(__pyx_t_2); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_2)) __PYX_ERR(0, 142, __pyx_L1_error); - __Pyx_INCREF(__pyx_int_10); - __Pyx_GIVEREF(__pyx_int_10); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_int_10)) __PYX_ERR(0, 142, __pyx_L1_error); - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_Call(((PyObject *)(&PyInt_Type)), __pyx_t_1, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 142, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyTuple_New(3); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 142, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GIVEREF(__pyx_t_4); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_4)) __PYX_ERR(0, 142, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_2); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_t_2)) __PYX_ERR(0, 142, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_location); - __Pyx_GIVEREF(__pyx_v_location); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 2, __pyx_v_location)) __PYX_ERR(0, 142, __pyx_L1_error); - __pyx_t_4 = 0; - __pyx_t_2 = 0; - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "fontTools/feaLib/lexer.py":141 - * if cur_char in Lexer.CHAR_DIGIT_: - * self.scan_over_(Lexer.CHAR_DIGIT_) - * if self.pos_ >= limit or text[self.pos_] != ".": # <<<<<<<<<<<<<< - * return (Lexer.NUMBER, int(text[start : self.pos_], 10), location) - * self.scan_over_(".") - */ - } - - /* "fontTools/feaLib/lexer.py":143 - * if self.pos_ >= limit or text[self.pos_] != ".": - * return (Lexer.NUMBER, int(text[start : self.pos_], 10), location) - * self.scan_over_(".") # <<<<<<<<<<<<<< - * self.scan_over_(Lexer.CHAR_DIGIT_) - * return (Lexer.FLOAT, float(text[start : self.pos_]), location) - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_scan_over); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 143, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = NULL; - __pyx_t_5 = 0; - #if CYTHON_UNPACK_METHODS - if (likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_4)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_4); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_5 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_4, __pyx_kp_u__8}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 143, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "fontTools/feaLib/lexer.py":144 - * return (Lexer.NUMBER, int(text[start : self.pos_], 10), location) - * self.scan_over_(".") - * self.scan_over_(Lexer.CHAR_DIGIT_) # <<<<<<<<<<<<<< - * return (Lexer.FLOAT, float(text[start : self.pos_]), location) - * if cur_char == "-" and next_char in Lexer.CHAR_DIGIT_: - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_scan_over); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 144, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_Lexer); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 144, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_CHAR_DIGIT); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 144, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = NULL; - __pyx_t_5 = 0; - #if CYTHON_UNPACK_METHODS - if 
(likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_4)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_4); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_5 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_4, __pyx_t_3}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 144, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "fontTools/feaLib/lexer.py":145 - * self.scan_over_(".") - * self.scan_over_(Lexer.CHAR_DIGIT_) - * return (Lexer.FLOAT, float(text[start : self.pos_]), location) # <<<<<<<<<<<<<< - * if cur_char == "-" and next_char in Lexer.CHAR_DIGIT_: - * self.pos_ += 1 - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_Lexer); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 145, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_FLOAT); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 145, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_pos); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 145, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PyObject_GetSlice(__pyx_v_text, 0, 0, &__pyx_v_start, &__pyx_t_1, NULL, 0, 0, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 145, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyNumber_Float(__pyx_t_3); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 145, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 145, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_2); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_2)) __PYX_ERR(0, 145, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_1); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_1)) __PYX_ERR(0, 145, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_location); - __Pyx_GIVEREF(__pyx_v_location); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_v_location)) __PYX_ERR(0, 145, __pyx_L1_error); - __pyx_t_2 = 0; - __pyx_t_1 = 0; - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - - /* "fontTools/feaLib/lexer.py":139 - * self.scan_over_(Lexer.CHAR_DIGIT_) - * return (Lexer.OCTAL, int(text[start : self.pos_], 8), location) - * if cur_char in Lexer.CHAR_DIGIT_: # <<<<<<<<<<<<<< - * self.scan_over_(Lexer.CHAR_DIGIT_) - * if self.pos_ >= limit or text[self.pos_] != ".": - */ - } - - /* "fontTools/feaLib/lexer.py":146 - * self.scan_over_(Lexer.CHAR_DIGIT_) - * return (Lexer.FLOAT, float(text[start : self.pos_]), location) - * if cur_char == "-" and next_char in Lexer.CHAR_DIGIT_: # <<<<<<<<<<<<<< - * self.pos_ += 1 - * self.scan_over_(Lexer.CHAR_DIGIT_) - */ - __pyx_t_7 = (__Pyx_PyUnicode_Equals(__pyx_v_cur_char, __pyx_kp_u__9, Py_EQ)); if (unlikely((__pyx_t_7 < 0))) __PYX_ERR(0, 146, __pyx_L1_error) - if (__pyx_t_7) { - } else { - __pyx_t_8 = __pyx_t_7; - goto __pyx_L30_bool_binop_done; - } - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_Lexer); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 146, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, 
__pyx_n_s_CHAR_DIGIT); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 146, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_7 = (__Pyx_PySequence_ContainsTF(__pyx_v_next_char, __pyx_t_1, Py_EQ)); if (unlikely((__pyx_t_7 < 0))) __PYX_ERR(0, 146, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_8 = __pyx_t_7; - __pyx_L30_bool_binop_done:; - if (__pyx_t_8) { - - /* "fontTools/feaLib/lexer.py":147 - * return (Lexer.FLOAT, float(text[start : self.pos_]), location) - * if cur_char == "-" and next_char in Lexer.CHAR_DIGIT_: - * self.pos_ += 1 # <<<<<<<<<<<<<< - * self.scan_over_(Lexer.CHAR_DIGIT_) - * if self.pos_ >= limit or text[self.pos_] != ".": - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_pos); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 147, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PyInt_AddObjC(__pyx_t_1, __pyx_int_1, 1, 1, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 147, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_pos, __pyx_t_3) < 0) __PYX_ERR(0, 147, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/feaLib/lexer.py":148 - * if cur_char == "-" and next_char in Lexer.CHAR_DIGIT_: - * self.pos_ += 1 - * self.scan_over_(Lexer.CHAR_DIGIT_) # <<<<<<<<<<<<<< - * if self.pos_ >= limit or text[self.pos_] != ".": - * return (Lexer.NUMBER, int(text[start : self.pos_], 10), location) - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_scan_over); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 148, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_Lexer); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 148, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_CHAR_DIGIT); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 148, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = NULL; - __pyx_t_5 = 0; - #if CYTHON_UNPACK_METHODS - if (likely(PyMethod_Check(__pyx_t_1))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_1); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_1, function); - __pyx_t_5 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_2, __pyx_t_4}; - __pyx_t_3 = __Pyx_PyObject_FastCall(__pyx_t_1, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 148, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/feaLib/lexer.py":149 - * self.pos_ += 1 - * self.scan_over_(Lexer.CHAR_DIGIT_) - * if self.pos_ >= limit or text[self.pos_] != ".": # <<<<<<<<<<<<<< - * return (Lexer.NUMBER, int(text[start : self.pos_], 10), location) - * self.scan_over_(".") - */ - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_pos); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 149, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = PyInt_FromSsize_t(__pyx_v_limit); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 149, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_4 = PyObject_RichCompare(__pyx_t_3, __pyx_t_1, Py_GE); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 149, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 
= 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_7 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely((__pyx_t_7 < 0))) __PYX_ERR(0, 149, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (!__pyx_t_7) { - } else { - __pyx_t_8 = __pyx_t_7; - goto __pyx_L33_bool_binop_done; - } - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_pos); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 149, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_1 = __Pyx_PyObject_GetItem(__pyx_v_text, __pyx_t_4); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 149, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_7 = (__Pyx_PyUnicode_Equals(__pyx_t_1, __pyx_kp_u__8, Py_NE)); if (unlikely((__pyx_t_7 < 0))) __PYX_ERR(0, 149, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_8 = __pyx_t_7; - __pyx_L33_bool_binop_done:; - if (__pyx_t_8) { - - /* "fontTools/feaLib/lexer.py":150 - * self.scan_over_(Lexer.CHAR_DIGIT_) - * if self.pos_ >= limit or text[self.pos_] != ".": - * return (Lexer.NUMBER, int(text[start : self.pos_], 10), location) # <<<<<<<<<<<<<< - * self.scan_over_(".") - * self.scan_over_(Lexer.CHAR_DIGIT_) - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_Lexer); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 150, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_NUMBER); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 150, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_pos); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 150, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PyObject_GetSlice(__pyx_v_text, 0, 0, &__pyx_v_start, &__pyx_t_1, NULL, 0, 0, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 150, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyTuple_New(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 150, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GIVEREF(__pyx_t_3); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_3)) __PYX_ERR(0, 150, __pyx_L1_error); - __Pyx_INCREF(__pyx_int_10); - __Pyx_GIVEREF(__pyx_int_10); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_int_10)) __PYX_ERR(0, 150, __pyx_L1_error); - __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_Call(((PyObject *)(&PyInt_Type)), __pyx_t_1, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 150, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyTuple_New(3); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 150, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GIVEREF(__pyx_t_4); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_4)) __PYX_ERR(0, 150, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_3); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_t_3)) __PYX_ERR(0, 150, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_location); - __Pyx_GIVEREF(__pyx_v_location); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 2, __pyx_v_location)) __PYX_ERR(0, 150, __pyx_L1_error); - __pyx_t_4 = 0; - __pyx_t_3 = 0; - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "fontTools/feaLib/lexer.py":149 - * self.pos_ += 1 - * self.scan_over_(Lexer.CHAR_DIGIT_) - * if self.pos_ >= limit or text[self.pos_] != ".": # <<<<<<<<<<<<<< - * return (Lexer.NUMBER, int(text[start : self.pos_], 10), location) - * self.scan_over_(".") - */ - } - - /* "fontTools/feaLib/lexer.py":151 - * if self.pos_ >= limit or text[self.pos_] != ".": - * return 
(Lexer.NUMBER, int(text[start : self.pos_], 10), location) - * self.scan_over_(".") # <<<<<<<<<<<<<< - * self.scan_over_(Lexer.CHAR_DIGIT_) - * return (Lexer.FLOAT, float(text[start : self.pos_]), location) - */ - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_scan_over); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 151, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = NULL; - __pyx_t_5 = 0; - #if CYTHON_UNPACK_METHODS - if (likely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_4)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_4); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_5 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_4, __pyx_kp_u__8}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 151, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "fontTools/feaLib/lexer.py":152 - * return (Lexer.NUMBER, int(text[start : self.pos_], 10), location) - * self.scan_over_(".") - * self.scan_over_(Lexer.CHAR_DIGIT_) # <<<<<<<<<<<<<< - * return (Lexer.FLOAT, float(text[start : self.pos_]), location) - * if cur_char in Lexer.CHAR_SYMBOL_: - */ - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_scan_over); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 152, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_Lexer); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 152, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_CHAR_DIGIT); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 152, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = NULL; - __pyx_t_5 = 0; - #if CYTHON_UNPACK_METHODS - if (likely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_4)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_4); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_5 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_4, __pyx_t_2}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 152, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "fontTools/feaLib/lexer.py":153 - * self.scan_over_(".") - * self.scan_over_(Lexer.CHAR_DIGIT_) - * return (Lexer.FLOAT, float(text[start : self.pos_]), location) # <<<<<<<<<<<<<< - * if cur_char in Lexer.CHAR_SYMBOL_: - * self.pos_ += 1 - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_Lexer); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 153, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_FLOAT); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 153, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_pos); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 153, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetSlice(__pyx_v_text, 0, 0, &__pyx_v_start, 
&__pyx_t_1, NULL, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 153, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyNumber_Float(__pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 153, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 153, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GIVEREF(__pyx_t_3); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_3)) __PYX_ERR(0, 153, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_1); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_1)) __PYX_ERR(0, 153, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_location); - __Pyx_GIVEREF(__pyx_v_location); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 2, __pyx_v_location)) __PYX_ERR(0, 153, __pyx_L1_error); - __pyx_t_3 = 0; - __pyx_t_1 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "fontTools/feaLib/lexer.py":146 - * self.scan_over_(Lexer.CHAR_DIGIT_) - * return (Lexer.FLOAT, float(text[start : self.pos_]), location) - * if cur_char == "-" and next_char in Lexer.CHAR_DIGIT_: # <<<<<<<<<<<<<< - * self.pos_ += 1 - * self.scan_over_(Lexer.CHAR_DIGIT_) - */ - } - - /* "fontTools/feaLib/lexer.py":154 - * self.scan_over_(Lexer.CHAR_DIGIT_) - * return (Lexer.FLOAT, float(text[start : self.pos_]), location) - * if cur_char in Lexer.CHAR_SYMBOL_: # <<<<<<<<<<<<<< - * self.pos_ += 1 - * return (Lexer.SYMBOL, cur_char, location) - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_Lexer); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 154, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_CHAR_SYMBOL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 154, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_8 = (__Pyx_PySequence_ContainsTF(__pyx_v_cur_char, __pyx_t_1, Py_EQ)); if (unlikely((__pyx_t_8 < 0))) __PYX_ERR(0, 154, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__pyx_t_8) { - - /* "fontTools/feaLib/lexer.py":155 - * return (Lexer.FLOAT, float(text[start : self.pos_]), location) - * if cur_char in Lexer.CHAR_SYMBOL_: - * self.pos_ += 1 # <<<<<<<<<<<<<< - * return (Lexer.SYMBOL, cur_char, location) - * if cur_char == '"': - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_pos); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 155, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyInt_AddObjC(__pyx_t_1, __pyx_int_1, 1, 1, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 155, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_pos, __pyx_t_2) < 0) __PYX_ERR(0, 155, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/feaLib/lexer.py":156 - * if cur_char in Lexer.CHAR_SYMBOL_: - * self.pos_ += 1 - * return (Lexer.SYMBOL, cur_char, location) # <<<<<<<<<<<<<< - * if cur_char == '"': - * self.pos_ += 1 - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_Lexer); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 156, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_SYMBOL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 156, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 156, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GIVEREF(__pyx_t_1); - if 
(__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_1)) __PYX_ERR(0, 156, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_cur_char); - __Pyx_GIVEREF(__pyx_v_cur_char); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_v_cur_char)) __PYX_ERR(0, 156, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_location); - __Pyx_GIVEREF(__pyx_v_location); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 2, __pyx_v_location)) __PYX_ERR(0, 156, __pyx_L1_error); - __pyx_t_1 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "fontTools/feaLib/lexer.py":154 - * self.scan_over_(Lexer.CHAR_DIGIT_) - * return (Lexer.FLOAT, float(text[start : self.pos_]), location) - * if cur_char in Lexer.CHAR_SYMBOL_: # <<<<<<<<<<<<<< - * self.pos_ += 1 - * return (Lexer.SYMBOL, cur_char, location) - */ - } - - /* "fontTools/feaLib/lexer.py":157 - * self.pos_ += 1 - * return (Lexer.SYMBOL, cur_char, location) - * if cur_char == '"': # <<<<<<<<<<<<<< - * self.pos_ += 1 - * self.scan_until_('"') - */ - __pyx_t_8 = (__Pyx_PyUnicode_Equals(__pyx_v_cur_char, __pyx_kp_u__10, Py_EQ)); if (unlikely((__pyx_t_8 < 0))) __PYX_ERR(0, 157, __pyx_L1_error) - if (__pyx_t_8) { - - /* "fontTools/feaLib/lexer.py":158 - * return (Lexer.SYMBOL, cur_char, location) - * if cur_char == '"': - * self.pos_ += 1 # <<<<<<<<<<<<<< - * self.scan_until_('"') - * if self.pos_ < self.text_length_ and self.text_[self.pos_] == '"': - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_pos); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 158, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = __Pyx_PyInt_AddObjC(__pyx_t_2, __pyx_int_1, 1, 1, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 158, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_pos, __pyx_t_1) < 0) __PYX_ERR(0, 158, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "fontTools/feaLib/lexer.py":159 - * if cur_char == '"': - * self.pos_ += 1 - * self.scan_until_('"') # <<<<<<<<<<<<<< - * if self.pos_ < self.text_length_ and self.text_[self.pos_] == '"': - * self.pos_ += 1 - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_scan_until); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 159, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = NULL; - __pyx_t_5 = 0; - #if CYTHON_UNPACK_METHODS - if (likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_5 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_3, __pyx_kp_u__10}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 159, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "fontTools/feaLib/lexer.py":160 - * self.pos_ += 1 - * self.scan_until_('"') - * if self.pos_ < self.text_length_ and self.text_[self.pos_] == '"': # <<<<<<<<<<<<<< - * self.pos_ += 1 - * # strip newlines embedded within a string - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_pos); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 160, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_text_length); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 160, __pyx_L1_error) - 
__Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_RichCompare(__pyx_t_1, __pyx_t_2, Py_LT); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 160, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_7 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely((__pyx_t_7 < 0))) __PYX_ERR(0, 160, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__pyx_t_7) { - } else { - __pyx_t_8 = __pyx_t_7; - goto __pyx_L38_bool_binop_done; - } - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_text_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 160, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_pos); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 160, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = __Pyx_PyObject_GetItem(__pyx_t_3, __pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 160, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_7 = (__Pyx_PyUnicode_Equals(__pyx_t_1, __pyx_kp_u__10, Py_EQ)); if (unlikely((__pyx_t_7 < 0))) __PYX_ERR(0, 160, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_8 = __pyx_t_7; - __pyx_L38_bool_binop_done:; - if (likely(__pyx_t_8)) { - - /* "fontTools/feaLib/lexer.py":161 - * self.scan_until_('"') - * if self.pos_ < self.text_length_ and self.text_[self.pos_] == '"': - * self.pos_ += 1 # <<<<<<<<<<<<<< - * # strip newlines embedded within a string - * string = re.sub("[\r\n]", "", text[start + 1 : self.pos_ - 1]) - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_pos); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 161, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyInt_AddObjC(__pyx_t_1, __pyx_int_1, 1, 1, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 161, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_pos, __pyx_t_2) < 0) __PYX_ERR(0, 161, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/feaLib/lexer.py":163 - * self.pos_ += 1 - * # strip newlines embedded within a string - * string = re.sub("[\r\n]", "", text[start + 1 : self.pos_ - 1]) # <<<<<<<<<<<<<< - * return (Lexer.STRING, string, location) - * else: - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_re); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 163, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_sub); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 163, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyInt_AddObjC(__pyx_v_start, __pyx_int_1, 1, 0, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 163, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_pos); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 163, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_9 = __Pyx_PyInt_SubtractObjC(__pyx_t_4, __pyx_int_1, 1, 0, 0); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 163, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PyObject_GetSlice(__pyx_v_text, 0, 0, &__pyx_t_1, &__pyx_t_9, NULL, 0, 0, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 163, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_t_9 = NULL; - __pyx_t_5 = 0; - #if CYTHON_UNPACK_METHODS - 
if (unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_9 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_9)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_9); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_5 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[4] = {__pyx_t_9, __pyx_kp_u__11, __pyx_kp_u__12, __pyx_t_4}; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_5, 3+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_9); __pyx_t_9 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 163, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_v_string = __pyx_t_2; - __pyx_t_2 = 0; - - /* "fontTools/feaLib/lexer.py":164 - * # strip newlines embedded within a string - * string = re.sub("[\r\n]", "", text[start + 1 : self.pos_ - 1]) - * return (Lexer.STRING, string, location) # <<<<<<<<<<<<<< - * else: - * raise FeatureLibError("Expected '\"' to terminate string", location) - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_Lexer); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 164, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_STRING); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 164, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 164, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GIVEREF(__pyx_t_3); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_3)) __PYX_ERR(0, 164, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_string); - __Pyx_GIVEREF(__pyx_v_string); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_v_string)) __PYX_ERR(0, 164, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_location); - __Pyx_GIVEREF(__pyx_v_location); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 2, __pyx_v_location)) __PYX_ERR(0, 164, __pyx_L1_error); - __pyx_t_3 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "fontTools/feaLib/lexer.py":160 - * self.pos_ += 1 - * self.scan_until_('"') - * if self.pos_ < self.text_length_ and self.text_[self.pos_] == '"': # <<<<<<<<<<<<<< - * self.pos_ += 1 - * # strip newlines embedded within a string - */ - } - - /* "fontTools/feaLib/lexer.py":166 - * return (Lexer.STRING, string, location) - * else: - * raise FeatureLibError("Expected '\"' to terminate string", location) # <<<<<<<<<<<<<< - * raise FeatureLibError("Unexpected character: %r" % cur_char, location) - * - */ - /*else*/ { - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_FeatureLibError); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 166, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = NULL; - __pyx_t_5 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_4)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_4); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_5 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[3] = {__pyx_t_4, __pyx_kp_u_Expected_to_terminate_string, __pyx_v_location}; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_5, 2+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 166, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __Pyx_Raise(__pyx_t_2, 0, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; 
- __PYX_ERR(0, 166, __pyx_L1_error) - } - - /* "fontTools/feaLib/lexer.py":157 - * self.pos_ += 1 - * return (Lexer.SYMBOL, cur_char, location) - * if cur_char == '"': # <<<<<<<<<<<<<< - * self.pos_ += 1 - * self.scan_until_('"') - */ - } - - /* "fontTools/feaLib/lexer.py":167 - * else: - * raise FeatureLibError("Expected '\"' to terminate string", location) - * raise FeatureLibError("Unexpected character: %r" % cur_char, location) # <<<<<<<<<<<<<< - * - * def scan_over_(self, valid): - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_FeatureLibError); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 167, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyUnicode_FormatSafe(__pyx_kp_u_Unexpected_character_r, __pyx_v_cur_char); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 167, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_9 = NULL; - __pyx_t_5 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_9 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_9)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_9); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_5 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[3] = {__pyx_t_9, __pyx_t_4, __pyx_v_location}; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_5, 2+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_9); __pyx_t_9 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 167, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __Pyx_Raise(__pyx_t_2, 0, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __PYX_ERR(0, 167, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":68 - * return FeatureLibLocation(self.filename_ or "", self.line_, column) - * - * def next_(self): # <<<<<<<<<<<<<< - * self.scan_over_(Lexer.CHAR_WHITESPACE_) - * location = self.location_() - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_9); - __Pyx_AddTraceback("fontTools.feaLib.lexer.Lexer.next_", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_location); - __Pyx_XDECREF(__pyx_v_start); - __Pyx_XDECREF(__pyx_v_text); - __Pyx_XDECREF(__pyx_v_cur_char); - __Pyx_XDECREF(__pyx_v_next_char); - __Pyx_XDECREF(__pyx_v_glyphclass); - __Pyx_XDECREF(__pyx_v_token); - __Pyx_XDECREF(__pyx_v_string); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/feaLib/lexer.py":169 - * raise FeatureLibError("Unexpected character: %r" % cur_char, location) - * - * def scan_over_(self, valid): # <<<<<<<<<<<<<< - * p = self.pos_ - * while p < self.text_length_ and self.text_[p] in valid: - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_6feaLib_5lexer_5Lexer_13scan_over_(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_6feaLib_5lexer_5Lexer_12scan_over_, "Lexer.scan_over_(self, valid)"); -static PyMethodDef __pyx_mdef_9fontTools_6feaLib_5lexer_5Lexer_13scan_over_ = {"scan_over_", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_6feaLib_5lexer_5Lexer_13scan_over_, __Pyx_METH_FASTCALL|METH_KEYWORDS, 
__pyx_doc_9fontTools_6feaLib_5lexer_5Lexer_12scan_over_}; -static PyObject *__pyx_pw_9fontTools_6feaLib_5lexer_5Lexer_13scan_over_(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_self = 0; - PyObject *__pyx_v_valid = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[2] = {0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("scan_over_ (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); - if (unlikely((__pyx_nargs < 0))) __PYX_ERR(0, 169, __pyx_L3_error) - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_valid,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_self)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 169, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_valid)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 169, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("scan_over_", 1, 2, 2, 1); __PYX_ERR(0, 169, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "scan_over_") < 0)) __PYX_ERR(0, 169, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 2)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - } - __pyx_v_self = values[0]; - __pyx_v_valid = values[1]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("scan_over_", 1, 2, 2, __pyx_nargs); __PYX_ERR(0, 169, __pyx_L3_error) - goto __pyx_L3_error; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.feaLib.lexer.Lexer.scan_over_", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_6feaLib_5lexer_5Lexer_12scan_over_(__pyx_self, __pyx_v_self, __pyx_v_valid); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - 
__Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_6feaLib_5lexer_5Lexer_12scan_over_(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_valid) { - PyObject *__pyx_v_p = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("scan_over_", 0); - - /* "fontTools/feaLib/lexer.py":170 - * - * def scan_over_(self, valid): - * p = self.pos_ # <<<<<<<<<<<<<< - * while p < self.text_length_ and self.text_[p] in valid: - * p += 1 - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_pos); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 170, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_p = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/feaLib/lexer.py":171 - * def scan_over_(self, valid): - * p = self.pos_ - * while p < self.text_length_ and self.text_[p] in valid: # <<<<<<<<<<<<<< - * p += 1 - * self.pos_ = p - */ - while (1) { - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_text_length); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 171, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = PyObject_RichCompare(__pyx_v_p, __pyx_t_1, Py_LT); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 171, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely((__pyx_t_4 < 0))) __PYX_ERR(0, 171, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__pyx_t_4) { - } else { - __pyx_t_2 = __pyx_t_4; - goto __pyx_L5_bool_binop_done; - } - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_text_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 171, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = __Pyx_PyObject_GetItem(__pyx_t_3, __pyx_v_p); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 171, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_4 = (__Pyx_PySequence_ContainsTF(__pyx_t_1, __pyx_v_valid, Py_EQ)); if (unlikely((__pyx_t_4 < 0))) __PYX_ERR(0, 171, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_2 = __pyx_t_4; - __pyx_L5_bool_binop_done:; - if (!__pyx_t_2) break; - - /* "fontTools/feaLib/lexer.py":172 - * p = self.pos_ - * while p < self.text_length_ and self.text_[p] in valid: - * p += 1 # <<<<<<<<<<<<<< - * self.pos_ = p - * - */ - __pyx_t_1 = __Pyx_PyInt_AddObjC(__pyx_v_p, __pyx_int_1, 1, 1, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 172, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF_SET(__pyx_v_p, __pyx_t_1); - __pyx_t_1 = 0; - } - - /* "fontTools/feaLib/lexer.py":173 - * while p < self.text_length_ and self.text_[p] in valid: - * p += 1 - * self.pos_ = p # <<<<<<<<<<<<<< - * - * def scan_until_(self, stop_at): - */ - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_pos, __pyx_v_p) < 0) __PYX_ERR(0, 173, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":169 - * raise FeatureLibError("Unexpected character: %r" % cur_char, location) - * - * def scan_over_(self, valid): # <<<<<<<<<<<<<< - * p = self.pos_ - * while p < self.text_length_ and self.text_[p] in valid: - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_3); - 
__Pyx_AddTraceback("fontTools.feaLib.lexer.Lexer.scan_over_", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_p); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/feaLib/lexer.py":175 - * self.pos_ = p - * - * def scan_until_(self, stop_at): # <<<<<<<<<<<<<< - * p = self.pos_ - * while p < self.text_length_ and self.text_[p] not in stop_at: - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_6feaLib_5lexer_5Lexer_15scan_until_(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_6feaLib_5lexer_5Lexer_14scan_until_, "Lexer.scan_until_(self, stop_at)"); -static PyMethodDef __pyx_mdef_9fontTools_6feaLib_5lexer_5Lexer_15scan_until_ = {"scan_until_", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_6feaLib_5lexer_5Lexer_15scan_until_, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_6feaLib_5lexer_5Lexer_14scan_until_}; -static PyObject *__pyx_pw_9fontTools_6feaLib_5lexer_5Lexer_15scan_until_(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_self = 0; - PyObject *__pyx_v_stop_at = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[2] = {0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("scan_until_ (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); - if (unlikely((__pyx_nargs < 0))) __PYX_ERR(0, 175, __pyx_L3_error) - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_stop_at,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_self)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 175, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_stop_at)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 175, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("scan_until_", 1, 2, 2, 1); __PYX_ERR(0, 175, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "scan_until_") < 0)) __PYX_ERR(0, 175, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs 
!= 2)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - } - __pyx_v_self = values[0]; - __pyx_v_stop_at = values[1]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("scan_until_", 1, 2, 2, __pyx_nargs); __PYX_ERR(0, 175, __pyx_L3_error) - goto __pyx_L3_error; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.feaLib.lexer.Lexer.scan_until_", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_6feaLib_5lexer_5Lexer_14scan_until_(__pyx_self, __pyx_v_self, __pyx_v_stop_at); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_6feaLib_5lexer_5Lexer_14scan_until_(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_stop_at) { - PyObject *__pyx_v_p = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("scan_until_", 0); - - /* "fontTools/feaLib/lexer.py":176 - * - * def scan_until_(self, stop_at): - * p = self.pos_ # <<<<<<<<<<<<<< - * while p < self.text_length_ and self.text_[p] not in stop_at: - * p += 1 - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_pos); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 176, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_p = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/feaLib/lexer.py":177 - * def scan_until_(self, stop_at): - * p = self.pos_ - * while p < self.text_length_ and self.text_[p] not in stop_at: # <<<<<<<<<<<<<< - * p += 1 - * self.pos_ = p - */ - while (1) { - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_text_length); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 177, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = PyObject_RichCompare(__pyx_v_p, __pyx_t_1, Py_LT); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 177, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely((__pyx_t_4 < 0))) __PYX_ERR(0, 177, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__pyx_t_4) { - } else { - __pyx_t_2 = __pyx_t_4; - goto __pyx_L5_bool_binop_done; - } - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_text_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 177, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = __Pyx_PyObject_GetItem(__pyx_t_3, __pyx_v_p); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 177, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_4 = (__Pyx_PySequence_ContainsTF(__pyx_t_1, __pyx_v_stop_at, Py_NE)); if (unlikely((__pyx_t_4 < 0))) __PYX_ERR(0, 177, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_2 = __pyx_t_4; - __pyx_L5_bool_binop_done:; - if (!__pyx_t_2) break; - - /* "fontTools/feaLib/lexer.py":178 - * p 
= self.pos_ - * while p < self.text_length_ and self.text_[p] not in stop_at: - * p += 1 # <<<<<<<<<<<<<< - * self.pos_ = p - * - */ - __pyx_t_1 = __Pyx_PyInt_AddObjC(__pyx_v_p, __pyx_int_1, 1, 1, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 178, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF_SET(__pyx_v_p, __pyx_t_1); - __pyx_t_1 = 0; - } - - /* "fontTools/feaLib/lexer.py":179 - * while p < self.text_length_ and self.text_[p] not in stop_at: - * p += 1 - * self.pos_ = p # <<<<<<<<<<<<<< - * - * def scan_anonymous_block(self, tag): - */ - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_pos, __pyx_v_p) < 0) __PYX_ERR(0, 179, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":175 - * self.pos_ = p - * - * def scan_until_(self, stop_at): # <<<<<<<<<<<<<< - * p = self.pos_ - * while p < self.text_length_ and self.text_[p] not in stop_at: - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("fontTools.feaLib.lexer.Lexer.scan_until_", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_p); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/feaLib/lexer.py":181 - * self.pos_ = p - * - * def scan_anonymous_block(self, tag): # <<<<<<<<<<<<<< - * location = self.location_() - * tag = tag.strip() - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_6feaLib_5lexer_5Lexer_17scan_anonymous_block(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_6feaLib_5lexer_5Lexer_16scan_anonymous_block, "Lexer.scan_anonymous_block(self, tag)"); -static PyMethodDef __pyx_mdef_9fontTools_6feaLib_5lexer_5Lexer_17scan_anonymous_block = {"scan_anonymous_block", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_6feaLib_5lexer_5Lexer_17scan_anonymous_block, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_6feaLib_5lexer_5Lexer_16scan_anonymous_block}; -static PyObject *__pyx_pw_9fontTools_6feaLib_5lexer_5Lexer_17scan_anonymous_block(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_self = 0; - PyObject *__pyx_v_tag = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[2] = {0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("scan_anonymous_block (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); - if (unlikely((__pyx_nargs < 0))) __PYX_ERR(0, 181, __pyx_L3_error) - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_tag,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - 
default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_self)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 181, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_tag)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 181, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("scan_anonymous_block", 1, 2, 2, 1); __PYX_ERR(0, 181, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "scan_anonymous_block") < 0)) __PYX_ERR(0, 181, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 2)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - } - __pyx_v_self = values[0]; - __pyx_v_tag = values[1]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("scan_anonymous_block", 1, 2, 2, __pyx_nargs); __PYX_ERR(0, 181, __pyx_L3_error) - goto __pyx_L3_error; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.feaLib.lexer.Lexer.scan_anonymous_block", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_6feaLib_5lexer_5Lexer_16scan_anonymous_block(__pyx_self, __pyx_v_self, __pyx_v_tag); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_6feaLib_5lexer_5Lexer_16scan_anonymous_block(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_tag) { - PyObject *__pyx_v_location = NULL; - PyObject *__pyx_v_regexp = NULL; - PyObject *__pyx_v_split = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - PyObject *__pyx_t_5 = NULL; - Py_ssize_t __pyx_t_6; - int __pyx_t_7; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("scan_anonymous_block", 0); - __Pyx_INCREF(__pyx_v_tag); - - /* "fontTools/feaLib/lexer.py":182 - * - * def scan_anonymous_block(self, tag): - * location = self.location_() # <<<<<<<<<<<<<< - * tag = tag.strip() - * self.scan_until_(Lexer.CHAR_NEWLINE_) - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_location); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 182, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = NULL; - __pyx_t_4 = 0; - #if CYTHON_UNPACK_METHODS - if (likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = 
PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_4 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[1] = {__pyx_t_3, }; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_4, 0+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 182, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_v_location = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/feaLib/lexer.py":183 - * def scan_anonymous_block(self, tag): - * location = self.location_() - * tag = tag.strip() # <<<<<<<<<<<<<< - * self.scan_until_(Lexer.CHAR_NEWLINE_) - * self.scan_over_(Lexer.CHAR_NEWLINE_) - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_tag, __pyx_n_s_strip); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 183, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = NULL; - __pyx_t_4 = 0; - #if CYTHON_UNPACK_METHODS - if (likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_4 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[1] = {__pyx_t_3, }; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_4, 0+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 183, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __Pyx_DECREF_SET(__pyx_v_tag, __pyx_t_1); - __pyx_t_1 = 0; - - /* "fontTools/feaLib/lexer.py":184 - * location = self.location_() - * tag = tag.strip() - * self.scan_until_(Lexer.CHAR_NEWLINE_) # <<<<<<<<<<<<<< - * self.scan_over_(Lexer.CHAR_NEWLINE_) - * regexp = r"}\s*" + tag + r"\s*;" - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_scan_until); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 184, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_Lexer); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 184, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_CHAR_NEWLINE); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 184, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = NULL; - __pyx_t_4 = 0; - #if CYTHON_UNPACK_METHODS - if (likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_4 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_3, __pyx_t_5}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 184, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "fontTools/feaLib/lexer.py":185 - * tag = tag.strip() - * self.scan_until_(Lexer.CHAR_NEWLINE_) - * self.scan_over_(Lexer.CHAR_NEWLINE_) # <<<<<<<<<<<<<< - * regexp = r"}\s*" + tag + r"\s*;" - * split = re.split(regexp, self.text_[self.pos_ :], maxsplit=1) - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_scan_over); if 
(unlikely(!__pyx_t_2)) __PYX_ERR(0, 185, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_Lexer); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 185, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_CHAR_NEWLINE); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 185, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = NULL; - __pyx_t_4 = 0; - #if CYTHON_UNPACK_METHODS - if (likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_5)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_4 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_5, __pyx_t_3}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 185, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "fontTools/feaLib/lexer.py":186 - * self.scan_until_(Lexer.CHAR_NEWLINE_) - * self.scan_over_(Lexer.CHAR_NEWLINE_) - * regexp = r"}\s*" + tag + r"\s*;" # <<<<<<<<<<<<<< - * split = re.split(regexp, self.text_[self.pos_ :], maxsplit=1) - * if len(split) != 2: - */ - __pyx_t_1 = PyNumber_Add(__pyx_kp_u_s, __pyx_v_tag); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 186, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyNumber_Add(__pyx_t_1, __pyx_kp_u_s_2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 186, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_v_regexp = __pyx_t_2; - __pyx_t_2 = 0; - - /* "fontTools/feaLib/lexer.py":187 - * self.scan_over_(Lexer.CHAR_NEWLINE_) - * regexp = r"}\s*" + tag + r"\s*;" - * split = re.split(regexp, self.text_[self.pos_ :], maxsplit=1) # <<<<<<<<<<<<<< - * if len(split) != 2: - * raise FeatureLibError( - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_re); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 187, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_split); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 187, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_text_2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 187, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_pos); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 187, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = __Pyx_PyObject_GetSlice(__pyx_t_2, 0, 0, &__pyx_t_3, NULL, NULL, 0, 0, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 187, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 187, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_regexp); - __Pyx_GIVEREF(__pyx_v_regexp); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_regexp)) __PYX_ERR(0, 187, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_5); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_5)) __PYX_ERR(0, 187, __pyx_L1_error); - __pyx_t_5 = 0; - __pyx_t_5 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 187, __pyx_L1_error) - 
__Pyx_GOTREF(__pyx_t_5); - if (PyDict_SetItem(__pyx_t_5, __pyx_n_s_maxsplit, __pyx_int_1) < 0) __PYX_ERR(0, 187, __pyx_L1_error) - __pyx_t_2 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_3, __pyx_t_5); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 187, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_v_split = __pyx_t_2; - __pyx_t_2 = 0; - - /* "fontTools/feaLib/lexer.py":188 - * regexp = r"}\s*" + tag + r"\s*;" - * split = re.split(regexp, self.text_[self.pos_ :], maxsplit=1) - * if len(split) != 2: # <<<<<<<<<<<<<< - * raise FeatureLibError( - * "Expected '} %s;' to terminate anonymous block" % tag, location - */ - __pyx_t_6 = PyObject_Length(__pyx_v_split); if (unlikely(__pyx_t_6 == ((Py_ssize_t)-1))) __PYX_ERR(0, 188, __pyx_L1_error) - __pyx_t_7 = (__pyx_t_6 != 2); - if (unlikely(__pyx_t_7)) { - - /* "fontTools/feaLib/lexer.py":189 - * split = re.split(regexp, self.text_[self.pos_ :], maxsplit=1) - * if len(split) != 2: - * raise FeatureLibError( # <<<<<<<<<<<<<< - * "Expected '} %s;' to terminate anonymous block" % tag, location - * ) - */ - __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_FeatureLibError); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 189, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - - /* "fontTools/feaLib/lexer.py":190 - * if len(split) != 2: - * raise FeatureLibError( - * "Expected '} %s;' to terminate anonymous block" % tag, location # <<<<<<<<<<<<<< - * ) - * self.pos_ += len(split[0]) - */ - __pyx_t_3 = __Pyx_PyUnicode_FormatSafe(__pyx_kp_u_Expected_s_to_terminate_anonymou, __pyx_v_tag); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 190, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = NULL; - __pyx_t_4 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_5))) { - __pyx_t_1 = PyMethod_GET_SELF(__pyx_t_5); - if (likely(__pyx_t_1)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5); - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_5, function); - __pyx_t_4 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[3] = {__pyx_t_1, __pyx_t_3, __pyx_v_location}; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_5, __pyx_callargs+1-__pyx_t_4, 2+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 189, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } - __Pyx_Raise(__pyx_t_2, 0, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __PYX_ERR(0, 189, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":188 - * regexp = r"}\s*" + tag + r"\s*;" - * split = re.split(regexp, self.text_[self.pos_ :], maxsplit=1) - * if len(split) != 2: # <<<<<<<<<<<<<< - * raise FeatureLibError( - * "Expected '} %s;' to terminate anonymous block" % tag, location - */ - } - - /* "fontTools/feaLib/lexer.py":192 - * "Expected '} %s;' to terminate anonymous block" % tag, location - * ) - * self.pos_ += len(split[0]) # <<<<<<<<<<<<<< - * return (Lexer.ANONYMOUS_BLOCK, split[0], location) - * - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_pos); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 192, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = __Pyx_GetItemInt(__pyx_v_split, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 192, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = PyObject_Length(__pyx_t_5); if (unlikely(__pyx_t_6 == ((Py_ssize_t)-1))) __PYX_ERR(0, 192, 
__pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = PyInt_FromSsize_t(__pyx_t_6); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 192, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_3 = PyNumber_InPlaceAdd(__pyx_t_2, __pyx_t_5); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 192, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_pos, __pyx_t_3) < 0) __PYX_ERR(0, 192, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/feaLib/lexer.py":193 - * ) - * self.pos_ += len(split[0]) - * return (Lexer.ANONYMOUS_BLOCK, split[0], location) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_Lexer); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 193, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_ANONYMOUS_BLOCK); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 193, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_GetItemInt(__pyx_v_split, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 193, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyTuple_New(3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 193, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GIVEREF(__pyx_t_5); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_5)) __PYX_ERR(0, 193, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_3); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_3)) __PYX_ERR(0, 193, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_location); - __Pyx_GIVEREF(__pyx_v_location); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 2, __pyx_v_location)) __PYX_ERR(0, 193, __pyx_L1_error); - __pyx_t_5 = 0; - __pyx_t_3 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "fontTools/feaLib/lexer.py":181 - * self.pos_ = p - * - * def scan_anonymous_block(self, tag): # <<<<<<<<<<<<<< - * location = self.location_() - * tag = tag.strip() - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("fontTools.feaLib.lexer.Lexer.scan_anonymous_block", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_location); - __Pyx_XDECREF(__pyx_v_regexp); - __Pyx_XDECREF(__pyx_v_split); - __Pyx_XDECREF(__pyx_v_tag); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/feaLib/lexer.py":211 - * """ - * - * def __init__(self, featurefile, *, includeDir=None): # <<<<<<<<<<<<<< - * """Initializes an IncludingLexer. - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_6feaLib_5lexer_14IncludingLexer_1__init__(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_6feaLib_5lexer_14IncludingLexer___init__, "IncludingLexer.__init__(self, featurefile, *, includeDir=None)\nInitializes an IncludingLexer.\n\n Behavior:\n If includeDir is passed, it will be used to determine the top-level\n include directory to use for all encountered include statements. 
If it is\n not passed, ``os.path.dirname(featurefile)`` will be considered the\n include directory.\n "); -static PyMethodDef __pyx_mdef_9fontTools_6feaLib_5lexer_14IncludingLexer_1__init__ = {"__init__", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_6feaLib_5lexer_14IncludingLexer_1__init__, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_6feaLib_5lexer_14IncludingLexer___init__}; -static PyObject *__pyx_pw_9fontTools_6feaLib_5lexer_14IncludingLexer_1__init__(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_self = 0; - PyObject *__pyx_v_featurefile = 0; - PyObject *__pyx_v_includeDir = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[3] = {0,0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__init__ (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); - if (unlikely((__pyx_nargs < 0))) __PYX_ERR(0, 211, __pyx_L3_error) - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_featurefile,&__pyx_n_s_includeDir,0}; - values[2] = __Pyx_Arg_NewRef_FASTCALL(((PyObject *)Py_None)); - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_self)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 211, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_featurefile)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 211, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("__init__", 1, 2, 2, 1); __PYX_ERR(0, 211, __pyx_L3_error) - } - } - if (kw_args == 1) { - const Py_ssize_t index = 2; - PyObject* value = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, *__pyx_pyargnames[index]); - if (value) { values[index] = __Pyx_Arg_NewRef_FASTCALL(value); kw_args--; } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 211, __pyx_L3_error) - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "__init__") < 0)) __PYX_ERR(0, 211, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 2)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - } - __pyx_v_self = values[0]; - __pyx_v_featurefile = values[1]; - __pyx_v_includeDir = values[2]; - } - goto 
__pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__init__", 1, 2, 2, __pyx_nargs); __PYX_ERR(0, 211, __pyx_L3_error) - goto __pyx_L3_error; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.feaLib.lexer.IncludingLexer.__init__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_6feaLib_5lexer_14IncludingLexer___init__(__pyx_self, __pyx_v_self, __pyx_v_featurefile, __pyx_v_includeDir); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_6feaLib_5lexer_14IncludingLexer___init__(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_featurefile, PyObject *__pyx_v_includeDir) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__init__", 0); - - /* "fontTools/feaLib/lexer.py":221 - * """ - * - * self.lexers_ = [self.make_lexer_(featurefile)] # <<<<<<<<<<<<<< - * self.featurefilepath = self.lexers_[0].filename_ - * self.includeDir = includeDir - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_make_lexer); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 221, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = NULL; - __pyx_t_4 = 0; - #if CYTHON_UNPACK_METHODS - if (likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_4 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_3, __pyx_v_featurefile}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 221, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_t_2 = PyList_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 221, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GIVEREF(__pyx_t_1); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 0, __pyx_t_1)) __PYX_ERR(0, 221, __pyx_L1_error); - __pyx_t_1 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_lexers, __pyx_t_2) < 0) __PYX_ERR(0, 221, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/feaLib/lexer.py":222 - * - * self.lexers_ = [self.make_lexer_(featurefile)] - * self.featurefilepath = self.lexers_[0].filename_ # <<<<<<<<<<<<<< - * self.includeDir = includeDir - * - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_lexers); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 222, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = __Pyx_GetItemInt(__pyx_t_2, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 222, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); 
__pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_filename_2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 222, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_featurefilepath, __pyx_t_2) < 0) __PYX_ERR(0, 222, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/feaLib/lexer.py":223 - * self.lexers_ = [self.make_lexer_(featurefile)] - * self.featurefilepath = self.lexers_[0].filename_ - * self.includeDir = includeDir # <<<<<<<<<<<<<< - * - * def __iter__(self): - */ - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_includeDir, __pyx_v_includeDir) < 0) __PYX_ERR(0, 223, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":211 - * """ - * - * def __init__(self, featurefile, *, includeDir=None): # <<<<<<<<<<<<<< - * """Initializes an IncludingLexer. - * - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("fontTools.feaLib.lexer.IncludingLexer.__init__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/feaLib/lexer.py":225 - * self.includeDir = includeDir - * - * def __iter__(self): # <<<<<<<<<<<<<< - * return self - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_6feaLib_5lexer_14IncludingLexer_3__iter__(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_6feaLib_5lexer_14IncludingLexer_2__iter__, "IncludingLexer.__iter__(self)"); -static PyMethodDef __pyx_mdef_9fontTools_6feaLib_5lexer_14IncludingLexer_3__iter__ = {"__iter__", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_6feaLib_5lexer_14IncludingLexer_3__iter__, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_6feaLib_5lexer_14IncludingLexer_2__iter__}; -static PyObject *__pyx_pw_9fontTools_6feaLib_5lexer_14IncludingLexer_3__iter__(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_self = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[1] = {0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__iter__ (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); - if (unlikely((__pyx_nargs < 0))) __PYX_ERR(0, 225, __pyx_L3_error) - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = 
__Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_self)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 225, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "__iter__") < 0)) __PYX_ERR(0, 225, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 1)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - } - __pyx_v_self = values[0]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__iter__", 1, 1, 1, __pyx_nargs); __PYX_ERR(0, 225, __pyx_L3_error) - goto __pyx_L3_error; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.feaLib.lexer.IncludingLexer.__iter__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_6feaLib_5lexer_14IncludingLexer_2__iter__(__pyx_self, __pyx_v_self); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_6feaLib_5lexer_14IncludingLexer_2__iter__(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__iter__", 0); - - /* "fontTools/feaLib/lexer.py":226 - * - * def __iter__(self): - * return self # <<<<<<<<<<<<<< - * - * def next(self): # Python 2 - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_self); - __pyx_r = __pyx_v_self; - goto __pyx_L0; - - /* "fontTools/feaLib/lexer.py":225 - * self.includeDir = includeDir - * - * def __iter__(self): # <<<<<<<<<<<<<< - * return self - * - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/feaLib/lexer.py":228 - * return self - * - * def next(self): # Python 2 # <<<<<<<<<<<<<< - * return self.__next__() - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_6feaLib_5lexer_14IncludingLexer_5next(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_6feaLib_5lexer_14IncludingLexer_4next, "IncludingLexer.next(self)"); -static PyMethodDef __pyx_mdef_9fontTools_6feaLib_5lexer_14IncludingLexer_5next = {"next", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_6feaLib_5lexer_14IncludingLexer_5next, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_6feaLib_5lexer_14IncludingLexer_4next}; -static PyObject *__pyx_pw_9fontTools_6feaLib_5lexer_14IncludingLexer_5next(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_self = 0; - #if 
!CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[1] = {0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("next (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); - if (unlikely((__pyx_nargs < 0))) __PYX_ERR(0, 228, __pyx_L3_error) - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_self)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 228, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "next") < 0)) __PYX_ERR(0, 228, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 1)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - } - __pyx_v_self = values[0]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("next", 1, 1, 1, __pyx_nargs); __PYX_ERR(0, 228, __pyx_L3_error) - goto __pyx_L3_error; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.feaLib.lexer.IncludingLexer.next", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_6feaLib_5lexer_14IncludingLexer_4next(__pyx_self, __pyx_v_self); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_6feaLib_5lexer_14IncludingLexer_4next(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("next", 0); - - /* "fontTools/feaLib/lexer.py":229 - * - * def next(self): # Python 2 - * return self.__next__() # <<<<<<<<<<<<<< - * - * def __next__(self): # Python 3 - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_next); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 229, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = NULL; - __pyx_t_4 = 0; - #if CYTHON_UNPACK_METHODS - if (likely(PyMethod_Check(__pyx_t_2))) { - 
__pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_4 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[1] = {__pyx_t_3, }; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_4, 0+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 229, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "fontTools/feaLib/lexer.py":228 - * return self - * - * def next(self): # Python 2 # <<<<<<<<<<<<<< - * return self.__next__() - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("fontTools.feaLib.lexer.IncludingLexer.next", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/feaLib/lexer.py":231 - * return self.__next__() - * - * def __next__(self): # Python 3 # <<<<<<<<<<<<<< - * while self.lexers_: - * lexer = self.lexers_[-1] - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_6feaLib_5lexer_14IncludingLexer_7__next__(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_6feaLib_5lexer_14IncludingLexer_6__next__, "IncludingLexer.__next__(self)"); -static PyMethodDef __pyx_mdef_9fontTools_6feaLib_5lexer_14IncludingLexer_7__next__ = {"__next__", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_6feaLib_5lexer_14IncludingLexer_7__next__, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_6feaLib_5lexer_14IncludingLexer_6__next__}; -static PyObject *__pyx_pw_9fontTools_6feaLib_5lexer_14IncludingLexer_7__next__(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_self = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[1] = {0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__next__ (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); - if (unlikely((__pyx_nargs < 0))) __PYX_ERR(0, 231, __pyx_L3_error) - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_self)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if 
(unlikely(PyErr_Occurred())) __PYX_ERR(0, 231, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "__next__") < 0)) __PYX_ERR(0, 231, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 1)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - } - __pyx_v_self = values[0]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__next__", 1, 1, 1, __pyx_nargs); __PYX_ERR(0, 231, __pyx_L3_error) - goto __pyx_L3_error; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.feaLib.lexer.IncludingLexer.__next__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_6feaLib_5lexer_14IncludingLexer_6__next__(__pyx_self, __pyx_v_self); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_6feaLib_5lexer_14IncludingLexer_6__next__(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self) { - PyObject *__pyx_v_lexer = NULL; - PyObject *__pyx_v_token_type = NULL; - PyObject *__pyx_v_token = NULL; - PyObject *__pyx_v_location = NULL; - PyObject *__pyx_v_fname_type = NULL; - PyObject *__pyx_v_fname_token = NULL; - PyObject *__pyx_v_fname_location = NULL; - PyObject *__pyx_v_path = NULL; - PyObject *__pyx_v_curpath = NULL; - PyObject *__pyx_v_err = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - PyObject *__pyx_t_9 = NULL; - PyObject *(*__pyx_t_10)(PyObject *); - int __pyx_t_11; - int __pyx_t_12; - Py_ssize_t __pyx_t_13; - int __pyx_t_14; - PyObject *__pyx_t_15 = NULL; - int __pyx_t_16; - char const *__pyx_t_17; - PyObject *__pyx_t_18 = NULL; - PyObject *__pyx_t_19 = NULL; - PyObject *__pyx_t_20 = NULL; - PyObject *__pyx_t_21 = NULL; - PyObject *__pyx_t_22 = NULL; - PyObject *__pyx_t_23 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__next__", 0); - - /* "fontTools/feaLib/lexer.py":232 - * - * def __next__(self): # Python 3 - * while self.lexers_: # <<<<<<<<<<<<<< - * lexer = self.lexers_[-1] - * try: - */ - while (1) { - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_lexers); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 232, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely((__pyx_t_2 < 0))) __PYX_ERR(0, 232, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (!__pyx_t_2) break; - - /* "fontTools/feaLib/lexer.py":233 - * def __next__(self): # Python 3 - * while self.lexers_: - * lexer = self.lexers_[-1] # <<<<<<<<<<<<<< - * try: - * token_type, token, location = next(lexer) - */ - 
__pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_lexers); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 233, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_GetItemInt(__pyx_t_1, -1L, long, 1, __Pyx_PyInt_From_long, 0, 1, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 233, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_XDECREF_SET(__pyx_v_lexer, __pyx_t_3); - __pyx_t_3 = 0; - - /* "fontTools/feaLib/lexer.py":234 - * while self.lexers_: - * lexer = self.lexers_[-1] - * try: # <<<<<<<<<<<<<< - * token_type, token, location = next(lexer) - * except StopIteration: - */ - { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ExceptionSave(&__pyx_t_4, &__pyx_t_5, &__pyx_t_6); - __Pyx_XGOTREF(__pyx_t_4); - __Pyx_XGOTREF(__pyx_t_5); - __Pyx_XGOTREF(__pyx_t_6); - /*try:*/ { - - /* "fontTools/feaLib/lexer.py":235 - * lexer = self.lexers_[-1] - * try: - * token_type, token, location = next(lexer) # <<<<<<<<<<<<<< - * except StopIteration: - * self.lexers_.pop() - */ - __pyx_t_3 = __Pyx_PyIter_Next(__pyx_v_lexer); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 235, __pyx_L5_error) - __Pyx_GOTREF(__pyx_t_3); - if ((likely(PyTuple_CheckExact(__pyx_t_3))) || (PyList_CheckExact(__pyx_t_3))) { - PyObject* sequence = __pyx_t_3; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 3)) { - if (size > 3) __Pyx_RaiseTooManyValuesError(3); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 235, __pyx_L5_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_7 = PyTuple_GET_ITEM(sequence, 1); - __pyx_t_8 = PyTuple_GET_ITEM(sequence, 2); - } else { - __pyx_t_1 = PyList_GET_ITEM(sequence, 0); - __pyx_t_7 = PyList_GET_ITEM(sequence, 1); - __pyx_t_8 = PyList_GET_ITEM(sequence, 2); - } - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(__pyx_t_7); - __Pyx_INCREF(__pyx_t_8); - #else - __pyx_t_1 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 235, __pyx_L5_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_7 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 235, __pyx_L5_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_8 = PySequence_ITEM(sequence, 2); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 235, __pyx_L5_error) - __Pyx_GOTREF(__pyx_t_8); - #endif - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_9 = PyObject_GetIter(__pyx_t_3); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 235, __pyx_L5_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_10 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_9); - index = 0; __pyx_t_1 = __pyx_t_10(__pyx_t_9); if (unlikely(!__pyx_t_1)) goto __pyx_L13_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - index = 1; __pyx_t_7 = __pyx_t_10(__pyx_t_9); if (unlikely(!__pyx_t_7)) goto __pyx_L13_unpacking_failed; - __Pyx_GOTREF(__pyx_t_7); - index = 2; __pyx_t_8 = __pyx_t_10(__pyx_t_9); if (unlikely(!__pyx_t_8)) goto __pyx_L13_unpacking_failed; - __Pyx_GOTREF(__pyx_t_8); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_10(__pyx_t_9), 3) < 0) __PYX_ERR(0, 235, __pyx_L5_error) - __pyx_t_10 = NULL; - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - goto __pyx_L14_unpacking_done; - __pyx_L13_unpacking_failed:; - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_t_10 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 235, __pyx_L5_error) - __pyx_L14_unpacking_done:; - } - 
__Pyx_XDECREF_SET(__pyx_v_token_type, __pyx_t_1); - __pyx_t_1 = 0; - __Pyx_XDECREF_SET(__pyx_v_token, __pyx_t_7); - __pyx_t_7 = 0; - __Pyx_XDECREF_SET(__pyx_v_location, __pyx_t_8); - __pyx_t_8 = 0; - - /* "fontTools/feaLib/lexer.py":234 - * while self.lexers_: - * lexer = self.lexers_[-1] - * try: # <<<<<<<<<<<<<< - * token_type, token, location = next(lexer) - * except StopIteration: - */ - } - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - goto __pyx_L12_try_end; - __pyx_L5_error:; - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_XDECREF(__pyx_t_9); __pyx_t_9 = 0; - - /* "fontTools/feaLib/lexer.py":236 - * try: - * token_type, token, location = next(lexer) - * except StopIteration: # <<<<<<<<<<<<<< - * self.lexers_.pop() - * continue - */ - __pyx_t_11 = __Pyx_PyErr_ExceptionMatches(__pyx_builtin_StopIteration); - if (__pyx_t_11) { - __Pyx_AddTraceback("fontTools.feaLib.lexer.IncludingLexer.__next__", __pyx_clineno, __pyx_lineno, __pyx_filename); - if (__Pyx_GetException(&__pyx_t_3, &__pyx_t_8, &__pyx_t_7) < 0) __PYX_ERR(0, 236, __pyx_L7_except_error) - __Pyx_XGOTREF(__pyx_t_3); - __Pyx_XGOTREF(__pyx_t_8); - __Pyx_XGOTREF(__pyx_t_7); - - /* "fontTools/feaLib/lexer.py":237 - * token_type, token, location = next(lexer) - * except StopIteration: - * self.lexers_.pop() # <<<<<<<<<<<<<< - * continue - * if token_type is Lexer.NAME and token == "include": - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_lexers); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 237, __pyx_L7_except_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_9 = __Pyx_PyObject_Pop(__pyx_t_1); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 237, __pyx_L7_except_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - - /* "fontTools/feaLib/lexer.py":238 - * except StopIteration: - * self.lexers_.pop() - * continue # <<<<<<<<<<<<<< - * if token_type is Lexer.NAME and token == "include": - * fname_type, fname_token, fname_location = lexer.next() - */ - goto __pyx_L15_except_continue; - __pyx_L15_except_continue:; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - goto __pyx_L11_try_continue; - } - goto __pyx_L7_except_error; - - /* "fontTools/feaLib/lexer.py":234 - * while self.lexers_: - * lexer = self.lexers_[-1] - * try: # <<<<<<<<<<<<<< - * token_type, token, location = next(lexer) - * except StopIteration: - */ - __pyx_L7_except_error:; - __Pyx_XGIVEREF(__pyx_t_4); - __Pyx_XGIVEREF(__pyx_t_5); - __Pyx_XGIVEREF(__pyx_t_6); - __Pyx_ExceptionReset(__pyx_t_4, __pyx_t_5, __pyx_t_6); - goto __pyx_L1_error; - __pyx_L11_try_continue:; - __Pyx_XGIVEREF(__pyx_t_4); - __Pyx_XGIVEREF(__pyx_t_5); - __Pyx_XGIVEREF(__pyx_t_6); - __Pyx_ExceptionReset(__pyx_t_4, __pyx_t_5, __pyx_t_6); - goto __pyx_L3_continue; - __pyx_L12_try_end:; - } - - /* "fontTools/feaLib/lexer.py":239 - * self.lexers_.pop() - * continue - * if token_type is Lexer.NAME and token == "include": # <<<<<<<<<<<<<< - * fname_type, fname_token, fname_location = lexer.next() - * if fname_type is not Lexer.FILENAME: - */ - __Pyx_GetModuleGlobalName(__pyx_t_7, __pyx_n_s_Lexer); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 239, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_t_7, __pyx_n_s_NAME); if 
(unlikely(!__pyx_t_8)) __PYX_ERR(0, 239, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_12 = (__pyx_v_token_type == __pyx_t_8); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - if (__pyx_t_12) { - } else { - __pyx_t_2 = __pyx_t_12; - goto __pyx_L18_bool_binop_done; - } - __pyx_t_12 = (__Pyx_PyUnicode_Equals(__pyx_v_token, __pyx_n_u_include, Py_EQ)); if (unlikely((__pyx_t_12 < 0))) __PYX_ERR(0, 239, __pyx_L1_error) - __pyx_t_2 = __pyx_t_12; - __pyx_L18_bool_binop_done:; - if (__pyx_t_2) { - - /* "fontTools/feaLib/lexer.py":240 - * continue - * if token_type is Lexer.NAME and token == "include": - * fname_type, fname_token, fname_location = lexer.next() # <<<<<<<<<<<<<< - * if fname_type is not Lexer.FILENAME: - * raise FeatureLibError("Expected file name", fname_location) - */ - __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_v_lexer, __pyx_n_s_next_3); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 240, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_3 = NULL; - __pyx_t_11 = 0; - #if CYTHON_UNPACK_METHODS - if (likely(PyMethod_Check(__pyx_t_7))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_7); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_7); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_7, function); - __pyx_t_11 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[1] = {__pyx_t_3, }; - __pyx_t_8 = __Pyx_PyObject_FastCall(__pyx_t_7, __pyx_callargs+1-__pyx_t_11, 0+__pyx_t_11); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 240, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - } - if ((likely(PyTuple_CheckExact(__pyx_t_8))) || (PyList_CheckExact(__pyx_t_8))) { - PyObject* sequence = __pyx_t_8; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 3)) { - if (size > 3) __Pyx_RaiseTooManyValuesError(3); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 240, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_7 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 1); - __pyx_t_9 = PyTuple_GET_ITEM(sequence, 2); - } else { - __pyx_t_7 = PyList_GET_ITEM(sequence, 0); - __pyx_t_3 = PyList_GET_ITEM(sequence, 1); - __pyx_t_9 = PyList_GET_ITEM(sequence, 2); - } - __Pyx_INCREF(__pyx_t_7); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(__pyx_t_9); - #else - __pyx_t_7 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 240, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_3 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 240, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_9 = PySequence_ITEM(sequence, 2); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 240, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - #endif - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_1 = PyObject_GetIter(__pyx_t_8); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 240, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_10 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_1); - index = 0; __pyx_t_7 = __pyx_t_10(__pyx_t_1); if (unlikely(!__pyx_t_7)) goto __pyx_L20_unpacking_failed; - __Pyx_GOTREF(__pyx_t_7); - index = 1; __pyx_t_3 = __pyx_t_10(__pyx_t_1); if (unlikely(!__pyx_t_3)) goto __pyx_L20_unpacking_failed; - __Pyx_GOTREF(__pyx_t_3); - index = 2; __pyx_t_9 = __pyx_t_10(__pyx_t_1); if 
(unlikely(!__pyx_t_9)) goto __pyx_L20_unpacking_failed; - __Pyx_GOTREF(__pyx_t_9); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_10(__pyx_t_1), 3) < 0) __PYX_ERR(0, 240, __pyx_L1_error) - __pyx_t_10 = NULL; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - goto __pyx_L21_unpacking_done; - __pyx_L20_unpacking_failed:; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_10 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 240, __pyx_L1_error) - __pyx_L21_unpacking_done:; - } - __Pyx_XDECREF_SET(__pyx_v_fname_type, __pyx_t_7); - __pyx_t_7 = 0; - __Pyx_XDECREF_SET(__pyx_v_fname_token, __pyx_t_3); - __pyx_t_3 = 0; - __Pyx_XDECREF_SET(__pyx_v_fname_location, __pyx_t_9); - __pyx_t_9 = 0; - - /* "fontTools/feaLib/lexer.py":241 - * if token_type is Lexer.NAME and token == "include": - * fname_type, fname_token, fname_location = lexer.next() - * if fname_type is not Lexer.FILENAME: # <<<<<<<<<<<<<< - * raise FeatureLibError("Expected file name", fname_location) - * # semi_type, semi_token, semi_location = lexer.next() - */ - __Pyx_GetModuleGlobalName(__pyx_t_8, __pyx_n_s_Lexer); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 241, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_t_8, __pyx_n_s_FILENAME); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 241, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_2 = (__pyx_v_fname_type != __pyx_t_9); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - if (unlikely(__pyx_t_2)) { - - /* "fontTools/feaLib/lexer.py":242 - * fname_type, fname_token, fname_location = lexer.next() - * if fname_type is not Lexer.FILENAME: - * raise FeatureLibError("Expected file name", fname_location) # <<<<<<<<<<<<<< - * # semi_type, semi_token, semi_location = lexer.next() - * # if semi_type is not Lexer.SYMBOL or semi_token != ";": - */ - __Pyx_GetModuleGlobalName(__pyx_t_8, __pyx_n_s_FeatureLibError); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 242, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_3 = NULL; - __pyx_t_11 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_8))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_8); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_8); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_8, function); - __pyx_t_11 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[3] = {__pyx_t_3, __pyx_kp_u_Expected_file_name, __pyx_v_fname_location}; - __pyx_t_9 = __Pyx_PyObject_FastCall(__pyx_t_8, __pyx_callargs+1-__pyx_t_11, 2+__pyx_t_11); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 242, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - } - __Pyx_Raise(__pyx_t_9, 0, 0, 0); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __PYX_ERR(0, 242, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":241 - * if token_type is Lexer.NAME and token == "include": - * fname_type, fname_token, fname_location = lexer.next() - * if fname_type is not Lexer.FILENAME: # <<<<<<<<<<<<<< - * raise FeatureLibError("Expected file name", fname_location) - * # semi_type, semi_token, semi_location = lexer.next() - */ - } - - /* "fontTools/feaLib/lexer.py":246 - * # if semi_type is not Lexer.SYMBOL or semi_token != ";": - * # raise FeatureLibError("Expected ';'", semi_location) - * if os.path.isabs(fname_token): # <<<<<<<<<<<<<< - * path = fname_token - * else: - */ - __Pyx_GetModuleGlobalName(__pyx_t_8, __pyx_n_s_os); if (unlikely(!__pyx_t_8)) 
__PYX_ERR(0, 246, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_8, __pyx_n_s_path); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 246, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_isabs); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 246, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = NULL; - __pyx_t_11 = 0; - #if CYTHON_UNPACK_METHODS - if (likely(PyMethod_Check(__pyx_t_8))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_8); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_8); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_8, function); - __pyx_t_11 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_3, __pyx_v_fname_token}; - __pyx_t_9 = __Pyx_PyObject_FastCall(__pyx_t_8, __pyx_callargs+1-__pyx_t_11, 1+__pyx_t_11); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 246, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - } - __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_t_9); if (unlikely((__pyx_t_2 < 0))) __PYX_ERR(0, 246, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - if (__pyx_t_2) { - - /* "fontTools/feaLib/lexer.py":247 - * # raise FeatureLibError("Expected ';'", semi_location) - * if os.path.isabs(fname_token): - * path = fname_token # <<<<<<<<<<<<<< - * else: - * if self.includeDir is not None: - */ - __Pyx_INCREF(__pyx_v_fname_token); - __Pyx_XDECREF_SET(__pyx_v_path, __pyx_v_fname_token); - - /* "fontTools/feaLib/lexer.py":246 - * # if semi_type is not Lexer.SYMBOL or semi_token != ";": - * # raise FeatureLibError("Expected ';'", semi_location) - * if os.path.isabs(fname_token): # <<<<<<<<<<<<<< - * path = fname_token - * else: - */ - goto __pyx_L23; - } - - /* "fontTools/feaLib/lexer.py":249 - * path = fname_token - * else: - * if self.includeDir is not None: # <<<<<<<<<<<<<< - * curpath = self.includeDir - * elif self.featurefilepath is not None: - */ - /*else*/ { - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_includeDir); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 249, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_2 = (__pyx_t_9 != Py_None); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - if (__pyx_t_2) { - - /* "fontTools/feaLib/lexer.py":250 - * else: - * if self.includeDir is not None: - * curpath = self.includeDir # <<<<<<<<<<<<<< - * elif self.featurefilepath is not None: - * curpath = os.path.dirname(self.featurefilepath) - */ - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_includeDir); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 250, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_XDECREF_SET(__pyx_v_curpath, __pyx_t_9); - __pyx_t_9 = 0; - - /* "fontTools/feaLib/lexer.py":249 - * path = fname_token - * else: - * if self.includeDir is not None: # <<<<<<<<<<<<<< - * curpath = self.includeDir - * elif self.featurefilepath is not None: - */ - goto __pyx_L24; - } - - /* "fontTools/feaLib/lexer.py":251 - * if self.includeDir is not None: - * curpath = self.includeDir - * elif self.featurefilepath is not None: # <<<<<<<<<<<<<< - * curpath = os.path.dirname(self.featurefilepath) - * else: - */ - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_featurefilepath); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 251, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_2 = (__pyx_t_9 != Py_None); - 
__Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - if (__pyx_t_2) { - - /* "fontTools/feaLib/lexer.py":252 - * curpath = self.includeDir - * elif self.featurefilepath is not None: - * curpath = os.path.dirname(self.featurefilepath) # <<<<<<<<<<<<<< - * else: - * # if the IncludingLexer was initialized from an in-memory - */ - __Pyx_GetModuleGlobalName(__pyx_t_8, __pyx_n_s_os); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 252, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_8, __pyx_n_s_path); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 252, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_dirname); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 252, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_featurefilepath); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 252, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_7 = NULL; - __pyx_t_11 = 0; - #if CYTHON_UNPACK_METHODS - if (likely(PyMethod_Check(__pyx_t_8))) { - __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_8); - if (likely(__pyx_t_7)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_8); - __Pyx_INCREF(__pyx_t_7); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_8, function); - __pyx_t_11 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_7, __pyx_t_3}; - __pyx_t_9 = __Pyx_PyObject_FastCall(__pyx_t_8, __pyx_callargs+1-__pyx_t_11, 1+__pyx_t_11); - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 252, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - } - __Pyx_XDECREF_SET(__pyx_v_curpath, __pyx_t_9); - __pyx_t_9 = 0; - - /* "fontTools/feaLib/lexer.py":251 - * if self.includeDir is not None: - * curpath = self.includeDir - * elif self.featurefilepath is not None: # <<<<<<<<<<<<<< - * curpath = os.path.dirname(self.featurefilepath) - * else: - */ - goto __pyx_L24; - } - - /* "fontTools/feaLib/lexer.py":258 - * # its filesystem path, therefore we fall back to using the - * # current working directory to resolve relative includes - * curpath = os.getcwd() # <<<<<<<<<<<<<< - * path = os.path.join(curpath, fname_token) - * if len(self.lexers_) >= 5: - */ - /*else*/ { - __Pyx_GetModuleGlobalName(__pyx_t_8, __pyx_n_s_os); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 258, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_8, __pyx_n_s_getcwd); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 258, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_8 = NULL; - __pyx_t_11 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_8 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_8)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_8); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_11 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[1] = {__pyx_t_8, }; - __pyx_t_9 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_11, 0+__pyx_t_11); - __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0; - if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 258, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __Pyx_XDECREF_SET(__pyx_v_curpath, __pyx_t_9); - __pyx_t_9 = 0; - } - __pyx_L24:; - - /* "fontTools/feaLib/lexer.py":259 - * # current 
working directory to resolve relative includes - * curpath = os.getcwd() - * path = os.path.join(curpath, fname_token) # <<<<<<<<<<<<<< - * if len(self.lexers_) >= 5: - * raise FeatureLibError("Too many recursive includes", fname_location) - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_os); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 259, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_path); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 259, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_8, __pyx_n_s_join); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 259, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_8 = NULL; - __pyx_t_11 = 0; - #if CYTHON_UNPACK_METHODS - if (likely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_8 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_8)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_8); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_11 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[3] = {__pyx_t_8, __pyx_v_curpath, __pyx_v_fname_token}; - __pyx_t_9 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_11, 2+__pyx_t_11); - __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0; - if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 259, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __Pyx_XDECREF_SET(__pyx_v_path, __pyx_t_9); - __pyx_t_9 = 0; - } - __pyx_L23:; - - /* "fontTools/feaLib/lexer.py":260 - * curpath = os.getcwd() - * path = os.path.join(curpath, fname_token) - * if len(self.lexers_) >= 5: # <<<<<<<<<<<<<< - * raise FeatureLibError("Too many recursive includes", fname_location) - * try: - */ - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_lexers); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 260, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_13 = PyObject_Length(__pyx_t_9); if (unlikely(__pyx_t_13 == ((Py_ssize_t)-1))) __PYX_ERR(0, 260, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_t_2 = (__pyx_t_13 >= 5); - if (unlikely(__pyx_t_2)) { - - /* "fontTools/feaLib/lexer.py":261 - * path = os.path.join(curpath, fname_token) - * if len(self.lexers_) >= 5: - * raise FeatureLibError("Too many recursive includes", fname_location) # <<<<<<<<<<<<<< - * try: - * self.lexers_.append(self.make_lexer_(path)) - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_FeatureLibError); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 261, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_8 = NULL; - __pyx_t_11 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_8 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_8)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_8); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_11 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[3] = {__pyx_t_8, __pyx_kp_u_Too_many_recursive_includes, __pyx_v_fname_location}; - __pyx_t_9 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_11, 2+__pyx_t_11); - __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0; - if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 261, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __Pyx_Raise(__pyx_t_9, 0, 0, 0); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __PYX_ERR(0, 261, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":260 - * 
curpath = os.getcwd() - * path = os.path.join(curpath, fname_token) - * if len(self.lexers_) >= 5: # <<<<<<<<<<<<<< - * raise FeatureLibError("Too many recursive includes", fname_location) - * try: - */ - } - - /* "fontTools/feaLib/lexer.py":262 - * if len(self.lexers_) >= 5: - * raise FeatureLibError("Too many recursive includes", fname_location) - * try: # <<<<<<<<<<<<<< - * self.lexers_.append(self.make_lexer_(path)) - * except FileNotFoundError as err: - */ - { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ExceptionSave(&__pyx_t_6, &__pyx_t_5, &__pyx_t_4); - __Pyx_XGOTREF(__pyx_t_6); - __Pyx_XGOTREF(__pyx_t_5); - __Pyx_XGOTREF(__pyx_t_4); - /*try:*/ { - - /* "fontTools/feaLib/lexer.py":263 - * raise FeatureLibError("Too many recursive includes", fname_location) - * try: - * self.lexers_.append(self.make_lexer_(path)) # <<<<<<<<<<<<<< - * except FileNotFoundError as err: - * raise IncludedFeaNotFound(fname_token, fname_location) from err - */ - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_lexers); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 263, __pyx_L26_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_make_lexer); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 263, __pyx_L26_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_7 = NULL; - __pyx_t_11 = 0; - #if CYTHON_UNPACK_METHODS - if (likely(PyMethod_Check(__pyx_t_8))) { - __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_8); - if (likely(__pyx_t_7)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_8); - __Pyx_INCREF(__pyx_t_7); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_8, function); - __pyx_t_11 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_7, __pyx_v_path}; - __pyx_t_3 = __Pyx_PyObject_FastCall(__pyx_t_8, __pyx_callargs+1-__pyx_t_11, 1+__pyx_t_11); - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 263, __pyx_L26_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - } - __pyx_t_14 = __Pyx_PyObject_Append(__pyx_t_9, __pyx_t_3); if (unlikely(__pyx_t_14 == ((int)-1))) __PYX_ERR(0, 263, __pyx_L26_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/feaLib/lexer.py":262 - * if len(self.lexers_) >= 5: - * raise FeatureLibError("Too many recursive includes", fname_location) - * try: # <<<<<<<<<<<<<< - * self.lexers_.append(self.make_lexer_(path)) - * except FileNotFoundError as err: - */ - } - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - goto __pyx_L33_try_end; - __pyx_L26_error:; - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_XDECREF(__pyx_t_9); __pyx_t_9 = 0; - - /* "fontTools/feaLib/lexer.py":264 - * try: - * self.lexers_.append(self.make_lexer_(path)) - * except FileNotFoundError as err: # <<<<<<<<<<<<<< - * raise IncludedFeaNotFound(fname_token, fname_location) from err - * else: - */ - __Pyx_ErrFetch(&__pyx_t_3, &__pyx_t_9, &__pyx_t_8); - __Pyx_GetModuleGlobalName(__pyx_t_7, __pyx_n_s_FileNotFoundError); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 264, __pyx_L28_except_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_11 = __Pyx_PyErr_GivenExceptionMatches(__pyx_t_3, __pyx_t_7); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_ErrRestore(__pyx_t_3, __pyx_t_9, __pyx_t_8); - __pyx_t_3 = 0; __pyx_t_9 = 0; __pyx_t_8 = 0; - if 
(__pyx_t_11) { - __Pyx_AddTraceback("fontTools.feaLib.lexer.IncludingLexer.__next__", __pyx_clineno, __pyx_lineno, __pyx_filename); - if (__Pyx_GetException(&__pyx_t_8, &__pyx_t_9, &__pyx_t_3) < 0) __PYX_ERR(0, 264, __pyx_L28_except_error) - __Pyx_XGOTREF(__pyx_t_8); - __Pyx_XGOTREF(__pyx_t_9); - __Pyx_XGOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_t_9); - __pyx_v_err = __pyx_t_9; - /*try:*/ { - - /* "fontTools/feaLib/lexer.py":265 - * self.lexers_.append(self.make_lexer_(path)) - * except FileNotFoundError as err: - * raise IncludedFeaNotFound(fname_token, fname_location) from err # <<<<<<<<<<<<<< - * else: - * return (token_type, token, location) - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_IncludedFeaNotFound); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 265, __pyx_L39_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_15 = NULL; - __pyx_t_11 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_1))) { - __pyx_t_15 = PyMethod_GET_SELF(__pyx_t_1); - if (likely(__pyx_t_15)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1); - __Pyx_INCREF(__pyx_t_15); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_1, function); - __pyx_t_11 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[3] = {__pyx_t_15, __pyx_v_fname_token, __pyx_v_fname_location}; - __pyx_t_7 = __Pyx_PyObject_FastCall(__pyx_t_1, __pyx_callargs+1-__pyx_t_11, 2+__pyx_t_11); - __Pyx_XDECREF(__pyx_t_15); __pyx_t_15 = 0; - if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 265, __pyx_L39_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } - __Pyx_Raise(__pyx_t_7, 0, 0, __pyx_v_err); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __PYX_ERR(0, 265, __pyx_L39_error) - } - - /* "fontTools/feaLib/lexer.py":264 - * try: - * self.lexers_.append(self.make_lexer_(path)) - * except FileNotFoundError as err: # <<<<<<<<<<<<<< - * raise IncludedFeaNotFound(fname_token, fname_location) from err - * else: - */ - /*finally:*/ { - __pyx_L39_error:; - /*exception exit:*/{ - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __pyx_t_18 = 0; __pyx_t_19 = 0; __pyx_t_20 = 0; __pyx_t_21 = 0; __pyx_t_22 = 0; __pyx_t_23 = 0; - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_XDECREF(__pyx_t_15); __pyx_t_15 = 0; - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - if (PY_MAJOR_VERSION >= 3) __Pyx_ExceptionSwap(&__pyx_t_21, &__pyx_t_22, &__pyx_t_23); - if ((PY_MAJOR_VERSION < 3) || unlikely(__Pyx_GetException(&__pyx_t_18, &__pyx_t_19, &__pyx_t_20) < 0)) __Pyx_ErrFetch(&__pyx_t_18, &__pyx_t_19, &__pyx_t_20); - __Pyx_XGOTREF(__pyx_t_18); - __Pyx_XGOTREF(__pyx_t_19); - __Pyx_XGOTREF(__pyx_t_20); - __Pyx_XGOTREF(__pyx_t_21); - __Pyx_XGOTREF(__pyx_t_22); - __Pyx_XGOTREF(__pyx_t_23); - __pyx_t_11 = __pyx_lineno; __pyx_t_16 = __pyx_clineno; __pyx_t_17 = __pyx_filename; - { - __Pyx_DECREF(__pyx_v_err); __pyx_v_err = 0; - } - if (PY_MAJOR_VERSION >= 3) { - __Pyx_XGIVEREF(__pyx_t_21); - __Pyx_XGIVEREF(__pyx_t_22); - __Pyx_XGIVEREF(__pyx_t_23); - __Pyx_ExceptionReset(__pyx_t_21, __pyx_t_22, __pyx_t_23); - } - __Pyx_XGIVEREF(__pyx_t_18); - __Pyx_XGIVEREF(__pyx_t_19); - __Pyx_XGIVEREF(__pyx_t_20); - __Pyx_ErrRestore(__pyx_t_18, __pyx_t_19, __pyx_t_20); - __pyx_t_18 = 0; __pyx_t_19 = 0; __pyx_t_20 = 0; __pyx_t_21 = 0; __pyx_t_22 = 0; __pyx_t_23 = 0; - __pyx_lineno = __pyx_t_11; __pyx_clineno = __pyx_t_16; __pyx_filename = __pyx_t_17; - goto __pyx_L28_except_error; - } - } - } - goto __pyx_L28_except_error; - - /* "fontTools/feaLib/lexer.py":262 - * if len(self.lexers_) >= 5: - * raise FeatureLibError("Too many recursive includes", 
fname_location) - * try: # <<<<<<<<<<<<<< - * self.lexers_.append(self.make_lexer_(path)) - * except FileNotFoundError as err: - */ - __pyx_L28_except_error:; - __Pyx_XGIVEREF(__pyx_t_6); - __Pyx_XGIVEREF(__pyx_t_5); - __Pyx_XGIVEREF(__pyx_t_4); - __Pyx_ExceptionReset(__pyx_t_6, __pyx_t_5, __pyx_t_4); - goto __pyx_L1_error; - __pyx_L33_try_end:; - } - - /* "fontTools/feaLib/lexer.py":239 - * self.lexers_.pop() - * continue - * if token_type is Lexer.NAME and token == "include": # <<<<<<<<<<<<<< - * fname_type, fname_token, fname_location = lexer.next() - * if fname_type is not Lexer.FILENAME: - */ - goto __pyx_L17; - } - - /* "fontTools/feaLib/lexer.py":267 - * raise IncludedFeaNotFound(fname_token, fname_location) from err - * else: - * return (token_type, token, location) # <<<<<<<<<<<<<< - * raise StopIteration() - * - */ - /*else*/ { - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 267, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_token_type); - __Pyx_GIVEREF(__pyx_v_token_type); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_token_type)) __PYX_ERR(0, 267, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_token); - __Pyx_GIVEREF(__pyx_v_token); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_v_token)) __PYX_ERR(0, 267, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_location); - __Pyx_GIVEREF(__pyx_v_location); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_v_location)) __PYX_ERR(0, 267, __pyx_L1_error); - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - } - __pyx_L17:; - __pyx_L3_continue:; - } - - /* "fontTools/feaLib/lexer.py":268 - * else: - * return (token_type, token, location) - * raise StopIteration() # <<<<<<<<<<<<<< - * - * @staticmethod - */ - __pyx_t_3 = __Pyx_PyObject_CallNoArg(__pyx_builtin_StopIteration); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 268, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(0, 268, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":231 - * return self.__next__() - * - * def __next__(self): # Python 3 # <<<<<<<<<<<<<< - * while self.lexers_: - * lexer = self.lexers_[-1] - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_XDECREF(__pyx_t_9); - __Pyx_XDECREF(__pyx_t_15); - __Pyx_AddTraceback("fontTools.feaLib.lexer.IncludingLexer.__next__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_lexer); - __Pyx_XDECREF(__pyx_v_token_type); - __Pyx_XDECREF(__pyx_v_token); - __Pyx_XDECREF(__pyx_v_location); - __Pyx_XDECREF(__pyx_v_fname_type); - __Pyx_XDECREF(__pyx_v_fname_token); - __Pyx_XDECREF(__pyx_v_fname_location); - __Pyx_XDECREF(__pyx_v_path); - __Pyx_XDECREF(__pyx_v_curpath); - __Pyx_XDECREF(__pyx_v_err); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/feaLib/lexer.py":270 - * raise StopIteration() - * - * @staticmethod # <<<<<<<<<<<<<< - * def make_lexer_(file_or_path): - * if hasattr(file_or_path, "read"): - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_6feaLib_5lexer_14IncludingLexer_9make_lexer_(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ 
-PyDoc_STRVAR(__pyx_doc_9fontTools_6feaLib_5lexer_14IncludingLexer_8make_lexer_, "IncludingLexer.make_lexer_(file_or_path)"); -static PyMethodDef __pyx_mdef_9fontTools_6feaLib_5lexer_14IncludingLexer_9make_lexer_ = {"make_lexer_", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_6feaLib_5lexer_14IncludingLexer_9make_lexer_, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_6feaLib_5lexer_14IncludingLexer_8make_lexer_}; -static PyObject *__pyx_pw_9fontTools_6feaLib_5lexer_14IncludingLexer_9make_lexer_(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_file_or_path = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[1] = {0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("make_lexer_ (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); - if (unlikely((__pyx_nargs < 0))) __PYX_ERR(0, 270, __pyx_L3_error) - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_file_or_path,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_file_or_path)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 270, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "make_lexer_") < 0)) __PYX_ERR(0, 270, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 1)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - } - __pyx_v_file_or_path = values[0]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("make_lexer_", 1, 1, 1, __pyx_nargs); __PYX_ERR(0, 270, __pyx_L3_error) - goto __pyx_L3_error; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.feaLib.lexer.IncludingLexer.make_lexer_", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_6feaLib_5lexer_14IncludingLexer_8make_lexer_(__pyx_self, __pyx_v_file_or_path); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject 
*__pyx_pf_9fontTools_6feaLib_5lexer_14IncludingLexer_8make_lexer_(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_file_or_path) { - PyObject *__pyx_v_fileobj = NULL; - int __pyx_v_closing; - PyObject *__pyx_v_filename = NULL; - PyObject *__pyx_v_data = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_t_5; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("make_lexer_", 0); - - /* "fontTools/feaLib/lexer.py":272 - * @staticmethod - * def make_lexer_(file_or_path): - * if hasattr(file_or_path, "read"): # <<<<<<<<<<<<<< - * fileobj, closing = file_or_path, False - * else: - */ - __pyx_t_1 = __Pyx_HasAttr(__pyx_v_file_or_path, __pyx_n_u_read); if (unlikely(__pyx_t_1 == ((int)-1))) __PYX_ERR(0, 272, __pyx_L1_error) - if (__pyx_t_1) { - - /* "fontTools/feaLib/lexer.py":273 - * def make_lexer_(file_or_path): - * if hasattr(file_or_path, "read"): - * fileobj, closing = file_or_path, False # <<<<<<<<<<<<<< - * else: - * filename, closing = file_or_path, True - */ - __pyx_t_2 = __pyx_v_file_or_path; - __Pyx_INCREF(__pyx_t_2); - __pyx_t_1 = 0; - __pyx_v_fileobj = __pyx_t_2; - __pyx_t_2 = 0; - __pyx_v_closing = __pyx_t_1; - - /* "fontTools/feaLib/lexer.py":272 - * @staticmethod - * def make_lexer_(file_or_path): - * if hasattr(file_or_path, "read"): # <<<<<<<<<<<<<< - * fileobj, closing = file_or_path, False - * else: - */ - goto __pyx_L3; - } - - /* "fontTools/feaLib/lexer.py":275 - * fileobj, closing = file_or_path, False - * else: - * filename, closing = file_or_path, True # <<<<<<<<<<<<<< - * fileobj = open(filename, "r", encoding="utf-8") - * data = fileobj.read() - */ - /*else*/ { - __pyx_t_2 = __pyx_v_file_or_path; - __Pyx_INCREF(__pyx_t_2); - __pyx_t_1 = 1; - __pyx_v_filename = __pyx_t_2; - __pyx_t_2 = 0; - __pyx_v_closing = __pyx_t_1; - - /* "fontTools/feaLib/lexer.py":276 - * else: - * filename, closing = file_or_path, True - * fileobj = open(filename, "r", encoding="utf-8") # <<<<<<<<<<<<<< - * data = fileobj.read() - * filename = getattr(fileobj, "name", None) - */ - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 276, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_filename); - __Pyx_GIVEREF(__pyx_v_filename); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_filename)) __PYX_ERR(0, 276, __pyx_L1_error); - __Pyx_INCREF(__pyx_n_u_r); - __Pyx_GIVEREF(__pyx_n_u_r); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_n_u_r)) __PYX_ERR(0, 276, __pyx_L1_error); - __pyx_t_3 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 276, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_t_3, __pyx_n_s_encoding, __pyx_kp_u_utf_8) < 0) __PYX_ERR(0, 276, __pyx_L1_error) - __pyx_t_4 = __Pyx_PyObject_Call(__pyx_builtin_open, __pyx_t_2, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 276, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_fileobj = __pyx_t_4; - __pyx_t_4 = 0; - } - __pyx_L3:; - - /* "fontTools/feaLib/lexer.py":277 - * filename, closing = file_or_path, True - * fileobj = open(filename, "r", encoding="utf-8") - * data = fileobj.read() # <<<<<<<<<<<<<< - * filename = getattr(fileobj, "name", None) - * if closing: - */ - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_fileobj, __pyx_n_s_read); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 277, 
__pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = NULL; - __pyx_t_5 = 0; - #if CYTHON_UNPACK_METHODS - if (likely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_5 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[1] = {__pyx_t_2, }; - __pyx_t_4 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_5, 0+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 277, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_v_data = __pyx_t_4; - __pyx_t_4 = 0; - - /* "fontTools/feaLib/lexer.py":278 - * fileobj = open(filename, "r", encoding="utf-8") - * data = fileobj.read() - * filename = getattr(fileobj, "name", None) # <<<<<<<<<<<<<< - * if closing: - * fileobj.close() - */ - __pyx_t_4 = __Pyx_GetAttr3(__pyx_v_fileobj, __pyx_n_u_name, Py_None); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 278, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_XDECREF_SET(__pyx_v_filename, __pyx_t_4); - __pyx_t_4 = 0; - - /* "fontTools/feaLib/lexer.py":279 - * data = fileobj.read() - * filename = getattr(fileobj, "name", None) - * if closing: # <<<<<<<<<<<<<< - * fileobj.close() - * return Lexer(data, filename) - */ - if (__pyx_v_closing) { - - /* "fontTools/feaLib/lexer.py":280 - * filename = getattr(fileobj, "name", None) - * if closing: - * fileobj.close() # <<<<<<<<<<<<<< - * return Lexer(data, filename) - * - */ - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_fileobj, __pyx_n_s_close); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 280, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = NULL; - __pyx_t_5 = 0; - #if CYTHON_UNPACK_METHODS - if (likely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_5 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[1] = {__pyx_t_2, }; - __pyx_t_4 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_5, 0+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 280, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "fontTools/feaLib/lexer.py":279 - * data = fileobj.read() - * filename = getattr(fileobj, "name", None) - * if closing: # <<<<<<<<<<<<<< - * fileobj.close() - * return Lexer(data, filename) - */ - } - - /* "fontTools/feaLib/lexer.py":281 - * if closing: - * fileobj.close() - * return Lexer(data, filename) # <<<<<<<<<<<<<< - * - * def scan_anonymous_block(self, tag): - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_Lexer); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 281, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = NULL; - __pyx_t_5 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_5 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[3] = {__pyx_t_2, __pyx_v_data, __pyx_v_filename}; - __pyx_t_4 = 
__Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_5, 2+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 281, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_r = __pyx_t_4; - __pyx_t_4 = 0; - goto __pyx_L0; - - /* "fontTools/feaLib/lexer.py":270 - * raise StopIteration() - * - * @staticmethod # <<<<<<<<<<<<<< - * def make_lexer_(file_or_path): - * if hasattr(file_or_path, "read"): - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("fontTools.feaLib.lexer.IncludingLexer.make_lexer_", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_fileobj); - __Pyx_XDECREF(__pyx_v_filename); - __Pyx_XDECREF(__pyx_v_data); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/feaLib/lexer.py":283 - * return Lexer(data, filename) - * - * def scan_anonymous_block(self, tag): # <<<<<<<<<<<<<< - * return self.lexers_[-1].scan_anonymous_block(tag) - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_6feaLib_5lexer_14IncludingLexer_11scan_anonymous_block(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_6feaLib_5lexer_14IncludingLexer_10scan_anonymous_block, "IncludingLexer.scan_anonymous_block(self, tag)"); -static PyMethodDef __pyx_mdef_9fontTools_6feaLib_5lexer_14IncludingLexer_11scan_anonymous_block = {"scan_anonymous_block", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_6feaLib_5lexer_14IncludingLexer_11scan_anonymous_block, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_6feaLib_5lexer_14IncludingLexer_10scan_anonymous_block}; -static PyObject *__pyx_pw_9fontTools_6feaLib_5lexer_14IncludingLexer_11scan_anonymous_block(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_self = 0; - PyObject *__pyx_v_tag = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[2] = {0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("scan_anonymous_block (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); - if (unlikely((__pyx_nargs < 0))) __PYX_ERR(0, 283, __pyx_L3_error) - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_tag,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_self)) != 0)) { - 
(void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 283, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_tag)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 283, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("scan_anonymous_block", 1, 2, 2, 1); __PYX_ERR(0, 283, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "scan_anonymous_block") < 0)) __PYX_ERR(0, 283, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 2)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - } - __pyx_v_self = values[0]; - __pyx_v_tag = values[1]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("scan_anonymous_block", 1, 2, 2, __pyx_nargs); __PYX_ERR(0, 283, __pyx_L3_error) - goto __pyx_L3_error; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.feaLib.lexer.IncludingLexer.scan_anonymous_block", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_6feaLib_5lexer_14IncludingLexer_10scan_anonymous_block(__pyx_self, __pyx_v_self, __pyx_v_tag); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_6feaLib_5lexer_14IncludingLexer_10scan_anonymous_block(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_tag) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("scan_anonymous_block", 0); - - /* "fontTools/feaLib/lexer.py":284 - * - * def scan_anonymous_block(self, tag): - * return self.lexers_[-1].scan_anonymous_block(tag) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_lexers); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 284, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_GetItemInt(__pyx_t_2, -1L, long, 1, __Pyx_PyInt_From_long, 0, 1, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 284, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_scan_anonymous_block); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 284, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = NULL; - __pyx_t_4 = 0; - #if CYTHON_UNPACK_METHODS - if (likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* 
function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_4 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_3, __pyx_v_tag}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 284, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "fontTools/feaLib/lexer.py":283 - * return Lexer(data, filename) - * - * def scan_anonymous_block(self, tag): # <<<<<<<<<<<<<< - * return self.lexers_[-1].scan_anonymous_block(tag) - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("fontTools.feaLib.lexer.IncludingLexer.scan_anonymous_block", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/feaLib/lexer.py":290 - * """Lexer that does not follow `include` statements, emits them as-is.""" - * - * def __next__(self): # Python 3 # <<<<<<<<<<<<<< - * return next(self.lexers_[0]) - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_6feaLib_5lexer_17NonIncludingLexer_1__next__(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_6feaLib_5lexer_17NonIncludingLexer___next__, "NonIncludingLexer.__next__(self)"); -static PyMethodDef __pyx_mdef_9fontTools_6feaLib_5lexer_17NonIncludingLexer_1__next__ = {"__next__", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_6feaLib_5lexer_17NonIncludingLexer_1__next__, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_6feaLib_5lexer_17NonIncludingLexer___next__}; -static PyObject *__pyx_pw_9fontTools_6feaLib_5lexer_17NonIncludingLexer_1__next__(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_self = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[1] = {0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__next__ (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); - if (unlikely((__pyx_nargs < 0))) __PYX_ERR(0, 290, __pyx_L3_error) - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_self)) != 0)) { - 
(void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 290, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "__next__") < 0)) __PYX_ERR(0, 290, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 1)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - } - __pyx_v_self = values[0]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__next__", 1, 1, 1, __pyx_nargs); __PYX_ERR(0, 290, __pyx_L3_error) - goto __pyx_L3_error; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.feaLib.lexer.NonIncludingLexer.__next__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_6feaLib_5lexer_17NonIncludingLexer___next__(__pyx_self, __pyx_v_self); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_6feaLib_5lexer_17NonIncludingLexer___next__(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__next__", 0); - - /* "fontTools/feaLib/lexer.py":291 - * - * def __next__(self): # Python 3 - * return next(self.lexers_[0]) # <<<<<<<<<<<<<< - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_lexers); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 291, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_GetItemInt(__pyx_t_1, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 291, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyIter_Next(__pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 291, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "fontTools/feaLib/lexer.py":290 - * """Lexer that does not follow `include` statements, emits them as-is.""" - * - * def __next__(self): # Python 3 # <<<<<<<<<<<<<< - * return next(self.lexers_[0]) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("fontTools.feaLib.lexer.NonIncludingLexer.__next__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyMethodDef __pyx_methods[] = { - {0, 0, 0, 0} -}; -#ifndef CYTHON_SMALL_CODE -#if defined(__clang__) - #define CYTHON_SMALL_CODE -#elif defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 3)) - #define CYTHON_SMALL_CODE __attribute__((cold)) -#else 
- #define CYTHON_SMALL_CODE -#endif -#endif -/* #### Code section: pystring_table ### */ - -static int __Pyx_CreateStringTabAndInitStrings(void) { - __Pyx_StringTabEntry __pyx_string_tab[] = { - {&__pyx_kp_u_, __pyx_k_, sizeof(__pyx_k_), 0, 1, 0, 0}, - {&__pyx_kp_u_0, __pyx_k_0, sizeof(__pyx_k_0), 0, 1, 0, 0}, - {&__pyx_kp_u_0123456789, __pyx_k_0123456789, sizeof(__pyx_k_0123456789), 0, 1, 0, 0}, - {&__pyx_kp_u_0123456789ABCDEFabcdef, __pyx_k_0123456789ABCDEFabcdef, sizeof(__pyx_k_0123456789ABCDEFabcdef), 0, 1, 0, 0}, - {&__pyx_n_u_ABCDEFGHIJKLMNOPQRSTUVWXYZabcdef, __pyx_k_ABCDEFGHIJKLMNOPQRSTUVWXYZabcdef, sizeof(__pyx_k_ABCDEFGHIJKLMNOPQRSTUVWXYZabcdef), 0, 1, 0, 1}, - {&__pyx_n_s_ANONYMOUS_BLOCK, __pyx_k_ANONYMOUS_BLOCK, sizeof(__pyx_k_ANONYMOUS_BLOCK), 0, 0, 1, 1}, - {&__pyx_n_u_ANONYMOUS_BLOCK, __pyx_k_ANONYMOUS_BLOCK, sizeof(__pyx_k_ANONYMOUS_BLOCK), 0, 1, 0, 1}, - {&__pyx_kp_s_A_Lexer_that_follows_include_sta, __pyx_k_A_Lexer_that_follows_include_sta, sizeof(__pyx_k_A_Lexer_that_follows_include_sta), 0, 0, 1, 0}, - {&__pyx_kp_u_A_Za_z_0_9, __pyx_k_A_Za_z_0_9, sizeof(__pyx_k_A_Za_z_0_9), 0, 1, 0, 0}, - {&__pyx_n_s_CHAR_DIGIT, __pyx_k_CHAR_DIGIT, sizeof(__pyx_k_CHAR_DIGIT), 0, 0, 1, 1}, - {&__pyx_n_s_CHAR_HEXDIGIT, __pyx_k_CHAR_HEXDIGIT, sizeof(__pyx_k_CHAR_HEXDIGIT), 0, 0, 1, 1}, - {&__pyx_n_s_CHAR_LETTER, __pyx_k_CHAR_LETTER, sizeof(__pyx_k_CHAR_LETTER), 0, 0, 1, 1}, - {&__pyx_n_s_CHAR_NAME_CONTINUATION, __pyx_k_CHAR_NAME_CONTINUATION, sizeof(__pyx_k_CHAR_NAME_CONTINUATION), 0, 0, 1, 1}, - {&__pyx_n_s_CHAR_NAME_START, __pyx_k_CHAR_NAME_START, sizeof(__pyx_k_CHAR_NAME_START), 0, 0, 1, 1}, - {&__pyx_n_s_CHAR_NEWLINE, __pyx_k_CHAR_NEWLINE, sizeof(__pyx_k_CHAR_NEWLINE), 0, 0, 1, 1}, - {&__pyx_n_s_CHAR_SYMBOL, __pyx_k_CHAR_SYMBOL, sizeof(__pyx_k_CHAR_SYMBOL), 0, 0, 1, 1}, - {&__pyx_n_s_CHAR_WHITESPACE, __pyx_k_CHAR_WHITESPACE, sizeof(__pyx_k_CHAR_WHITESPACE), 0, 0, 1, 1}, - {&__pyx_n_s_CID, __pyx_k_CID, sizeof(__pyx_k_CID), 0, 0, 1, 1}, - {&__pyx_n_u_CID, __pyx_k_CID, sizeof(__pyx_k_CID), 0, 1, 0, 1}, - {&__pyx_n_s_COMMENT, __pyx_k_COMMENT, sizeof(__pyx_k_COMMENT), 0, 0, 1, 1}, - {&__pyx_n_u_COMMENT, __pyx_k_COMMENT, sizeof(__pyx_k_COMMENT), 0, 1, 0, 1}, - {&__pyx_kp_u_Expected_after_file_name, __pyx_k_Expected_after_file_name, sizeof(__pyx_k_Expected_after_file_name), 0, 1, 0, 0}, - {&__pyx_kp_u_Expected_before_file_name, __pyx_k_Expected_before_file_name, sizeof(__pyx_k_Expected_before_file_name), 0, 1, 0, 0}, - {&__pyx_kp_u_Expected_file_name, __pyx_k_Expected_file_name, sizeof(__pyx_k_Expected_file_name), 0, 1, 0, 0}, - {&__pyx_kp_u_Expected_glyph_class_name, __pyx_k_Expected_glyph_class_name, sizeof(__pyx_k_Expected_glyph_class_name), 0, 1, 0, 0}, - {&__pyx_kp_u_Expected_s_to_terminate_anonymou, __pyx_k_Expected_s_to_terminate_anonymou, sizeof(__pyx_k_Expected_s_to_terminate_anonymou), 0, 1, 0, 0}, - {&__pyx_kp_u_Expected_to_terminate_string, __pyx_k_Expected_to_terminate_string, sizeof(__pyx_k_Expected_to_terminate_string), 0, 1, 0, 0}, - {&__pyx_n_s_FILENAME, __pyx_k_FILENAME, sizeof(__pyx_k_FILENAME), 0, 0, 1, 1}, - {&__pyx_n_u_FILENAME, __pyx_k_FILENAME, sizeof(__pyx_k_FILENAME), 0, 1, 0, 1}, - {&__pyx_n_s_FLOAT, __pyx_k_FLOAT, sizeof(__pyx_k_FLOAT), 0, 0, 1, 1}, - {&__pyx_n_u_FLOAT, __pyx_k_FLOAT, sizeof(__pyx_k_FLOAT), 0, 1, 0, 1}, - {&__pyx_n_s_FeatureLibError, __pyx_k_FeatureLibError, sizeof(__pyx_k_FeatureLibError), 0, 0, 1, 1}, - {&__pyx_n_s_FeatureLibLocation, __pyx_k_FeatureLibLocation, sizeof(__pyx_k_FeatureLibLocation), 0, 0, 1, 1}, - {&__pyx_n_s_FileNotFoundError, 
__pyx_k_FileNotFoundError, sizeof(__pyx_k_FileNotFoundError), 0, 0, 1, 1}, - {&__pyx_n_s_GLYPHCLASS, __pyx_k_GLYPHCLASS, sizeof(__pyx_k_GLYPHCLASS), 0, 0, 1, 1}, - {&__pyx_n_u_GLYPHCLASS, __pyx_k_GLYPHCLASS, sizeof(__pyx_k_GLYPHCLASS), 0, 1, 0, 1}, - {&__pyx_kp_u_Glyph_class_names_must_consist_o, __pyx_k_Glyph_class_names_must_consist_o, sizeof(__pyx_k_Glyph_class_names_must_consist_o), 0, 1, 0, 0}, - {&__pyx_kp_u_Glyph_class_names_must_not_be_lo, __pyx_k_Glyph_class_names_must_not_be_lo, sizeof(__pyx_k_Glyph_class_names_must_not_be_lo), 0, 1, 0, 0}, - {&__pyx_n_s_HEXADECIMAL, __pyx_k_HEXADECIMAL, sizeof(__pyx_k_HEXADECIMAL), 0, 0, 1, 1}, - {&__pyx_n_u_HEXADECIMAL, __pyx_k_HEXADECIMAL, sizeof(__pyx_k_HEXADECIMAL), 0, 1, 0, 1}, - {&__pyx_n_s_ImportError, __pyx_k_ImportError, sizeof(__pyx_k_ImportError), 0, 0, 1, 1}, - {&__pyx_n_s_IncludedFeaNotFound, __pyx_k_IncludedFeaNotFound, sizeof(__pyx_k_IncludedFeaNotFound), 0, 0, 1, 1}, - {&__pyx_n_s_IncludingLexer, __pyx_k_IncludingLexer, sizeof(__pyx_k_IncludingLexer), 0, 0, 1, 1}, - {&__pyx_n_s_IncludingLexer___init, __pyx_k_IncludingLexer___init, sizeof(__pyx_k_IncludingLexer___init), 0, 0, 1, 1}, - {&__pyx_n_s_IncludingLexer___iter, __pyx_k_IncludingLexer___iter, sizeof(__pyx_k_IncludingLexer___iter), 0, 0, 1, 1}, - {&__pyx_n_s_IncludingLexer___next, __pyx_k_IncludingLexer___next, sizeof(__pyx_k_IncludingLexer___next), 0, 0, 1, 1}, - {&__pyx_n_s_IncludingLexer_make_lexer, __pyx_k_IncludingLexer_make_lexer, sizeof(__pyx_k_IncludingLexer_make_lexer), 0, 0, 1, 1}, - {&__pyx_n_s_IncludingLexer_next, __pyx_k_IncludingLexer_next, sizeof(__pyx_k_IncludingLexer_next), 0, 0, 1, 1}, - {&__pyx_n_s_IncludingLexer_scan_anonymous_bl, __pyx_k_IncludingLexer_scan_anonymous_bl, sizeof(__pyx_k_IncludingLexer_scan_anonymous_bl), 0, 0, 1, 1}, - {&__pyx_n_s_Lexer, __pyx_k_Lexer, sizeof(__pyx_k_Lexer), 0, 0, 1, 1}, - {&__pyx_n_s_Lexer___init, __pyx_k_Lexer___init, sizeof(__pyx_k_Lexer___init), 0, 0, 1, 1}, - {&__pyx_n_s_Lexer___iter, __pyx_k_Lexer___iter, sizeof(__pyx_k_Lexer___iter), 0, 0, 1, 1}, - {&__pyx_n_s_Lexer___next, __pyx_k_Lexer___next, sizeof(__pyx_k_Lexer___next), 0, 0, 1, 1}, - {&__pyx_n_s_Lexer_location, __pyx_k_Lexer_location, sizeof(__pyx_k_Lexer_location), 0, 0, 1, 1}, - {&__pyx_n_s_Lexer_next, __pyx_k_Lexer_next, sizeof(__pyx_k_Lexer_next), 0, 0, 1, 1}, - {&__pyx_n_s_Lexer_next_2, __pyx_k_Lexer_next_2, sizeof(__pyx_k_Lexer_next_2), 0, 0, 1, 1}, - {&__pyx_n_s_Lexer_scan_anonymous_block, __pyx_k_Lexer_scan_anonymous_block, sizeof(__pyx_k_Lexer_scan_anonymous_block), 0, 0, 1, 1}, - {&__pyx_n_s_Lexer_scan_over, __pyx_k_Lexer_scan_over, sizeof(__pyx_k_Lexer_scan_over), 0, 0, 1, 1}, - {&__pyx_n_s_Lexer_scan_until, __pyx_k_Lexer_scan_until, sizeof(__pyx_k_Lexer_scan_until), 0, 0, 1, 1}, - {&__pyx_kp_s_Lexer_that_does_not_follow_inclu, __pyx_k_Lexer_that_does_not_follow_inclu, sizeof(__pyx_k_Lexer_that_does_not_follow_inclu), 0, 0, 1, 0}, - {&__pyx_kp_s_Lib_fontTools_feaLib_lexer_py, __pyx_k_Lib_fontTools_feaLib_lexer_py, sizeof(__pyx_k_Lib_fontTools_feaLib_lexer_py), 0, 0, 1, 0}, - {&__pyx_n_s_MODE_FILENAME, __pyx_k_MODE_FILENAME, sizeof(__pyx_k_MODE_FILENAME), 0, 0, 1, 1}, - {&__pyx_n_s_MODE_NORMAL, __pyx_k_MODE_NORMAL, sizeof(__pyx_k_MODE_NORMAL), 0, 0, 1, 1}, - {&__pyx_n_s_NAME, __pyx_k_NAME, sizeof(__pyx_k_NAME), 0, 0, 1, 1}, - {&__pyx_n_u_NAME, __pyx_k_NAME, sizeof(__pyx_k_NAME), 0, 1, 0, 1}, - {&__pyx_n_s_NEWLINE, __pyx_k_NEWLINE, sizeof(__pyx_k_NEWLINE), 0, 0, 1, 1}, - {&__pyx_n_u_NEWLINE, __pyx_k_NEWLINE, sizeof(__pyx_k_NEWLINE), 0, 1, 0, 1}, - 
{&__pyx_n_u_NORMAL, __pyx_k_NORMAL, sizeof(__pyx_k_NORMAL), 0, 1, 0, 1}, - {&__pyx_n_s_NUMBER, __pyx_k_NUMBER, sizeof(__pyx_k_NUMBER), 0, 0, 1, 1}, - {&__pyx_n_u_NUMBER, __pyx_k_NUMBER, sizeof(__pyx_k_NUMBER), 0, 1, 0, 1}, - {&__pyx_n_s_NUMBERS, __pyx_k_NUMBERS, sizeof(__pyx_k_NUMBERS), 0, 0, 1, 1}, - {&__pyx_n_s_NonIncludingLexer, __pyx_k_NonIncludingLexer, sizeof(__pyx_k_NonIncludingLexer), 0, 0, 1, 1}, - {&__pyx_n_s_NonIncludingLexer___next, __pyx_k_NonIncludingLexer___next, sizeof(__pyx_k_NonIncludingLexer___next), 0, 0, 1, 1}, - {&__pyx_n_s_OCTAL, __pyx_k_OCTAL, sizeof(__pyx_k_OCTAL), 0, 0, 1, 1}, - {&__pyx_n_u_OCTAL, __pyx_k_OCTAL, sizeof(__pyx_k_OCTAL), 0, 1, 0, 1}, - {&__pyx_n_s_RE_GLYPHCLASS, __pyx_k_RE_GLYPHCLASS, sizeof(__pyx_k_RE_GLYPHCLASS), 0, 0, 1, 1}, - {&__pyx_n_s_STRING, __pyx_k_STRING, sizeof(__pyx_k_STRING), 0, 0, 1, 1}, - {&__pyx_n_u_STRING, __pyx_k_STRING, sizeof(__pyx_k_STRING), 0, 1, 0, 1}, - {&__pyx_n_s_SYMBOL, __pyx_k_SYMBOL, sizeof(__pyx_k_SYMBOL), 0, 0, 1, 1}, - {&__pyx_n_u_SYMBOL, __pyx_k_SYMBOL, sizeof(__pyx_k_SYMBOL), 0, 1, 0, 1}, - {&__pyx_n_s_StopIteration, __pyx_k_StopIteration, sizeof(__pyx_k_StopIteration), 0, 0, 1, 1}, - {&__pyx_kp_u_Too_many_recursive_includes, __pyx_k_Too_many_recursive_includes, sizeof(__pyx_k_Too_many_recursive_includes), 0, 1, 0, 0}, - {&__pyx_kp_u_Unexpected_character_r, __pyx_k_Unexpected_character_r, sizeof(__pyx_k_Unexpected_character_r), 0, 1, 0, 0}, - {&__pyx_kp_u__10, __pyx_k__10, sizeof(__pyx_k__10), 0, 1, 0, 0}, - {&__pyx_kp_u__11, __pyx_k__11, sizeof(__pyx_k__11), 0, 1, 0, 0}, - {&__pyx_kp_u__12, __pyx_k__12, sizeof(__pyx_k__12), 0, 1, 0, 0}, - {&__pyx_n_s__13, __pyx_k__13, sizeof(__pyx_k__13), 0, 0, 1, 1}, - {&__pyx_kp_u__16, __pyx_k__16, sizeof(__pyx_k__16), 0, 1, 0, 0}, - {&__pyx_kp_u__17, __pyx_k__17, sizeof(__pyx_k__17), 0, 1, 0, 0}, - {&__pyx_kp_u__18, __pyx_k__18, sizeof(__pyx_k__18), 0, 1, 0, 0}, - {&__pyx_kp_u__19, __pyx_k__19, sizeof(__pyx_k__19), 0, 1, 0, 0}, - {&__pyx_kp_u__2, __pyx_k__2, sizeof(__pyx_k__2), 0, 1, 0, 0}, - {&__pyx_kp_u__20, __pyx_k__20, sizeof(__pyx_k__20), 0, 1, 0, 0}, - {&__pyx_kp_u__3, __pyx_k__3, sizeof(__pyx_k__3), 0, 1, 0, 0}, - {&__pyx_kp_u__4, __pyx_k__4, sizeof(__pyx_k__4), 0, 1, 0, 0}, - {&__pyx_kp_u__5, __pyx_k__5, sizeof(__pyx_k__5), 0, 1, 0, 0}, - {&__pyx_n_s__51, __pyx_k__51, sizeof(__pyx_k__51), 0, 0, 1, 1}, - {&__pyx_kp_u__6, __pyx_k__6, sizeof(__pyx_k__6), 0, 1, 0, 0}, - {&__pyx_kp_u__7, __pyx_k__7, sizeof(__pyx_k__7), 0, 1, 0, 0}, - {&__pyx_kp_u__8, __pyx_k__8, sizeof(__pyx_k__8), 0, 1, 0, 0}, - {&__pyx_kp_u__9, __pyx_k__9, sizeof(__pyx_k__9), 0, 1, 0, 0}, - {&__pyx_n_s_append, __pyx_k_append, sizeof(__pyx_k_append), 0, 0, 1, 1}, - {&__pyx_n_s_asyncio_coroutines, __pyx_k_asyncio_coroutines, sizeof(__pyx_k_asyncio_coroutines), 0, 0, 1, 1}, - {&__pyx_n_s_class_getitem, __pyx_k_class_getitem, sizeof(__pyx_k_class_getitem), 0, 0, 1, 1}, - {&__pyx_n_s_cline_in_traceback, __pyx_k_cline_in_traceback, sizeof(__pyx_k_cline_in_traceback), 0, 0, 1, 1}, - {&__pyx_n_s_close, __pyx_k_close, sizeof(__pyx_k_close), 0, 0, 1, 1}, - {&__pyx_n_s_closing, __pyx_k_closing, sizeof(__pyx_k_closing), 0, 0, 1, 1}, - {&__pyx_n_s_column, __pyx_k_column, sizeof(__pyx_k_column), 0, 0, 1, 1}, - {&__pyx_n_s_compile, __pyx_k_compile, sizeof(__pyx_k_compile), 0, 0, 1, 1}, - {&__pyx_n_s_cur_char, __pyx_k_cur_char, sizeof(__pyx_k_cur_char), 0, 0, 1, 1}, - {&__pyx_n_s_curpath, __pyx_k_curpath, sizeof(__pyx_k_curpath), 0, 0, 1, 1}, - {&__pyx_n_s_data, __pyx_k_data, sizeof(__pyx_k_data), 0, 0, 1, 1}, - 
{&__pyx_n_s_dict, __pyx_k_dict, sizeof(__pyx_k_dict), 0, 0, 1, 1}, - {&__pyx_n_s_dirname, __pyx_k_dirname, sizeof(__pyx_k_dirname), 0, 0, 1, 1}, - {&__pyx_n_s_doc, __pyx_k_doc, sizeof(__pyx_k_doc), 0, 0, 1, 1}, - {&__pyx_n_s_encoding, __pyx_k_encoding, sizeof(__pyx_k_encoding), 0, 0, 1, 1}, - {&__pyx_n_s_err, __pyx_k_err, sizeof(__pyx_k_err), 0, 0, 1, 1}, - {&__pyx_n_s_featurefile, __pyx_k_featurefile, sizeof(__pyx_k_featurefile), 0, 0, 1, 1}, - {&__pyx_n_s_featurefilepath, __pyx_k_featurefilepath, sizeof(__pyx_k_featurefilepath), 0, 0, 1, 1}, - {&__pyx_kp_u_features, __pyx_k_features, sizeof(__pyx_k_features), 0, 1, 0, 0}, - {&__pyx_n_s_file_or_path, __pyx_k_file_or_path, sizeof(__pyx_k_file_or_path), 0, 0, 1, 1}, - {&__pyx_n_s_filename, __pyx_k_filename, sizeof(__pyx_k_filename), 0, 0, 1, 1}, - {&__pyx_n_s_filename_2, __pyx_k_filename_2, sizeof(__pyx_k_filename_2), 0, 0, 1, 1}, - {&__pyx_n_s_fileobj, __pyx_k_fileobj, sizeof(__pyx_k_fileobj), 0, 0, 1, 1}, - {&__pyx_n_s_fname_location, __pyx_k_fname_location, sizeof(__pyx_k_fname_location), 0, 0, 1, 1}, - {&__pyx_n_s_fname_token, __pyx_k_fname_token, sizeof(__pyx_k_fname_token), 0, 0, 1, 1}, - {&__pyx_n_s_fname_type, __pyx_k_fname_type, sizeof(__pyx_k_fname_type), 0, 0, 1, 1}, - {&__pyx_n_s_fontTools_feaLib_error, __pyx_k_fontTools_feaLib_error, sizeof(__pyx_k_fontTools_feaLib_error), 0, 0, 1, 1}, - {&__pyx_n_s_fontTools_feaLib_lexer, __pyx_k_fontTools_feaLib_lexer, sizeof(__pyx_k_fontTools_feaLib_lexer), 0, 0, 1, 1}, - {&__pyx_n_s_fontTools_feaLib_location, __pyx_k_fontTools_feaLib_location, sizeof(__pyx_k_fontTools_feaLib_location), 0, 0, 1, 1}, - {&__pyx_n_s_getcwd, __pyx_k_getcwd, sizeof(__pyx_k_getcwd), 0, 0, 1, 1}, - {&__pyx_n_s_glyphclass, __pyx_k_glyphclass, sizeof(__pyx_k_glyphclass), 0, 0, 1, 1}, - {&__pyx_n_s_import, __pyx_k_import, sizeof(__pyx_k_import), 0, 0, 1, 1}, - {&__pyx_n_u_include, __pyx_k_include, sizeof(__pyx_k_include), 0, 1, 0, 1}, - {&__pyx_n_s_includeDir, __pyx_k_includeDir, sizeof(__pyx_k_includeDir), 0, 0, 1, 1}, - {&__pyx_n_s_init, __pyx_k_init, sizeof(__pyx_k_init), 0, 0, 1, 1}, - {&__pyx_n_s_init_subclass, __pyx_k_init_subclass, sizeof(__pyx_k_init_subclass), 0, 0, 1, 1}, - {&__pyx_n_s_initializing, __pyx_k_initializing, sizeof(__pyx_k_initializing), 0, 0, 1, 1}, - {&__pyx_n_s_is_coroutine, __pyx_k_is_coroutine, sizeof(__pyx_k_is_coroutine), 0, 0, 1, 1}, - {&__pyx_n_s_isabs, __pyx_k_isabs, sizeof(__pyx_k_isabs), 0, 0, 1, 1}, - {&__pyx_n_s_iter, __pyx_k_iter, sizeof(__pyx_k_iter), 0, 0, 1, 1}, - {&__pyx_n_s_join, __pyx_k_join, sizeof(__pyx_k_join), 0, 0, 1, 1}, - {&__pyx_n_s_lexer, __pyx_k_lexer, sizeof(__pyx_k_lexer), 0, 0, 1, 1}, - {&__pyx_n_s_lexers, __pyx_k_lexers, sizeof(__pyx_k_lexers), 0, 0, 1, 1}, - {&__pyx_n_s_limit, __pyx_k_limit, sizeof(__pyx_k_limit), 0, 0, 1, 1}, - {&__pyx_n_s_line, __pyx_k_line, sizeof(__pyx_k_line), 0, 0, 1, 1}, - {&__pyx_n_s_line_start, __pyx_k_line_start, sizeof(__pyx_k_line_start), 0, 0, 1, 1}, - {&__pyx_n_s_location, __pyx_k_location, sizeof(__pyx_k_location), 0, 0, 1, 1}, - {&__pyx_n_s_location_2, __pyx_k_location_2, sizeof(__pyx_k_location_2), 0, 0, 1, 1}, - {&__pyx_n_s_main, __pyx_k_main, sizeof(__pyx_k_main), 0, 0, 1, 1}, - {&__pyx_n_s_make_lexer, __pyx_k_make_lexer, sizeof(__pyx_k_make_lexer), 0, 0, 1, 1}, - {&__pyx_n_s_match, __pyx_k_match, sizeof(__pyx_k_match), 0, 0, 1, 1}, - {&__pyx_n_s_maxsplit, __pyx_k_maxsplit, sizeof(__pyx_k_maxsplit), 0, 0, 1, 1}, - {&__pyx_n_s_metaclass, __pyx_k_metaclass, sizeof(__pyx_k_metaclass), 0, 0, 1, 1}, - {&__pyx_n_s_mode, 
__pyx_k_mode, sizeof(__pyx_k_mode), 0, 0, 1, 1}, - {&__pyx_n_s_module, __pyx_k_module, sizeof(__pyx_k_module), 0, 0, 1, 1}, - {&__pyx_n_s_mro_entries, __pyx_k_mro_entries, sizeof(__pyx_k_mro_entries), 0, 0, 1, 1}, - {&__pyx_n_u_name, __pyx_k_name, sizeof(__pyx_k_name), 0, 1, 0, 1}, - {&__pyx_n_s_name_2, __pyx_k_name_2, sizeof(__pyx_k_name_2), 0, 0, 1, 1}, - {&__pyx_n_s_next, __pyx_k_next, sizeof(__pyx_k_next), 0, 0, 1, 1}, - {&__pyx_n_s_next_2, __pyx_k_next_2, sizeof(__pyx_k_next_2), 0, 0, 1, 1}, - {&__pyx_n_s_next_3, __pyx_k_next_3, sizeof(__pyx_k_next_3), 0, 0, 1, 1}, - {&__pyx_n_s_next_char, __pyx_k_next_char, sizeof(__pyx_k_next_char), 0, 0, 1, 1}, - {&__pyx_n_s_object, __pyx_k_object, sizeof(__pyx_k_object), 0, 0, 1, 1}, - {&__pyx_n_s_open, __pyx_k_open, sizeof(__pyx_k_open), 0, 0, 1, 1}, - {&__pyx_n_s_os, __pyx_k_os, sizeof(__pyx_k_os), 0, 0, 1, 1}, - {&__pyx_n_s_p, __pyx_k_p, sizeof(__pyx_k_p), 0, 0, 1, 1}, - {&__pyx_n_s_path, __pyx_k_path, sizeof(__pyx_k_path), 0, 0, 1, 1}, - {&__pyx_n_s_pop, __pyx_k_pop, sizeof(__pyx_k_pop), 0, 0, 1, 1}, - {&__pyx_n_s_pos, __pyx_k_pos, sizeof(__pyx_k_pos), 0, 0, 1, 1}, - {&__pyx_n_s_prepare, __pyx_k_prepare, sizeof(__pyx_k_prepare), 0, 0, 1, 1}, - {&__pyx_n_s_qualname, __pyx_k_qualname, sizeof(__pyx_k_qualname), 0, 0, 1, 1}, - {&__pyx_n_u_r, __pyx_k_r, sizeof(__pyx_k_r), 0, 1, 0, 1}, - {&__pyx_n_s_re, __pyx_k_re, sizeof(__pyx_k_re), 0, 0, 1, 1}, - {&__pyx_n_s_read, __pyx_k_read, sizeof(__pyx_k_read), 0, 0, 1, 1}, - {&__pyx_n_u_read, __pyx_k_read, sizeof(__pyx_k_read), 0, 1, 0, 1}, - {&__pyx_n_s_regexp, __pyx_k_regexp, sizeof(__pyx_k_regexp), 0, 0, 1, 1}, - {&__pyx_kp_u_s, __pyx_k_s, sizeof(__pyx_k_s), 0, 1, 0, 0}, - {&__pyx_kp_u_s_2, __pyx_k_s_2, sizeof(__pyx_k_s_2), 0, 1, 0, 0}, - {&__pyx_n_s_scan_anonymous_block, __pyx_k_scan_anonymous_block, sizeof(__pyx_k_scan_anonymous_block), 0, 0, 1, 1}, - {&__pyx_n_s_scan_over, __pyx_k_scan_over, sizeof(__pyx_k_scan_over), 0, 0, 1, 1}, - {&__pyx_n_s_scan_until, __pyx_k_scan_until, sizeof(__pyx_k_scan_until), 0, 0, 1, 1}, - {&__pyx_n_s_self, __pyx_k_self, sizeof(__pyx_k_self), 0, 0, 1, 1}, - {&__pyx_n_s_set_name, __pyx_k_set_name, sizeof(__pyx_k_set_name), 0, 0, 1, 1}, - {&__pyx_n_s_spec, __pyx_k_spec, sizeof(__pyx_k_spec), 0, 0, 1, 1}, - {&__pyx_n_s_split, __pyx_k_split, sizeof(__pyx_k_split), 0, 0, 1, 1}, - {&__pyx_n_s_start, __pyx_k_start, sizeof(__pyx_k_start), 0, 0, 1, 1}, - {&__pyx_n_s_staticmethod, __pyx_k_staticmethod, sizeof(__pyx_k_staticmethod), 0, 0, 1, 1}, - {&__pyx_n_s_stop_at, __pyx_k_stop_at, sizeof(__pyx_k_stop_at), 0, 0, 1, 1}, - {&__pyx_n_s_string, __pyx_k_string, sizeof(__pyx_k_string), 0, 0, 1, 1}, - {&__pyx_n_s_strip, __pyx_k_strip, sizeof(__pyx_k_strip), 0, 0, 1, 1}, - {&__pyx_n_s_sub, __pyx_k_sub, sizeof(__pyx_k_sub), 0, 0, 1, 1}, - {&__pyx_n_s_super, __pyx_k_super, sizeof(__pyx_k_super), 0, 0, 1, 1}, - {&__pyx_n_s_tag, __pyx_k_tag, sizeof(__pyx_k_tag), 0, 0, 1, 1}, - {&__pyx_n_s_test, __pyx_k_test, sizeof(__pyx_k_test), 0, 0, 1, 1}, - {&__pyx_n_s_text, __pyx_k_text, sizeof(__pyx_k_text), 0, 0, 1, 1}, - {&__pyx_n_s_text_2, __pyx_k_text_2, sizeof(__pyx_k_text_2), 0, 0, 1, 1}, - {&__pyx_n_s_text_length, __pyx_k_text_length, sizeof(__pyx_k_text_length), 0, 0, 1, 1}, - {&__pyx_n_s_token, __pyx_k_token, sizeof(__pyx_k_token), 0, 0, 1, 1}, - {&__pyx_n_s_token_type, __pyx_k_token_type, sizeof(__pyx_k_token_type), 0, 0, 1, 1}, - {&__pyx_kp_u_utf_8, __pyx_k_utf_8, sizeof(__pyx_k_utf_8), 0, 1, 0, 0}, - {&__pyx_n_s_valid, __pyx_k_valid, sizeof(__pyx_k_valid), 0, 0, 1, 1}, - {&__pyx_n_u_xX, 
__pyx_k_xX, sizeof(__pyx_k_xX), 0, 1, 0, 1}, - {0, 0, 0, 0, 0, 0, 0} - }; - return __Pyx_InitStrings(__pyx_string_tab); -} -/* #### Code section: cached_builtins ### */ -static CYTHON_SMALL_CODE int __Pyx_InitCachedBuiltins(void) { - __pyx_builtin_ImportError = __Pyx_GetBuiltinName(__pyx_n_s_ImportError); if (!__pyx_builtin_ImportError) __PYX_ERR(0, 8, __pyx_L1_error) - __pyx_builtin_object = __Pyx_GetBuiltinName(__pyx_n_s_object); if (!__pyx_builtin_object) __PYX_ERR(0, 13, __pyx_L1_error) - __pyx_builtin_staticmethod = __Pyx_GetBuiltinName(__pyx_n_s_staticmethod); if (!__pyx_builtin_staticmethod) __PYX_ERR(0, 270, __pyx_L1_error) - __pyx_builtin_StopIteration = __Pyx_GetBuiltinName(__pyx_n_s_StopIteration); if (!__pyx_builtin_StopIteration) __PYX_ERR(0, 75, __pyx_L1_error) - __pyx_builtin_open = __Pyx_GetBuiltinName(__pyx_n_s_open); if (!__pyx_builtin_open) __PYX_ERR(0, 276, __pyx_L1_error) - return 0; - __pyx_L1_error:; - return -1; -} -/* #### Code section: cached_constants ### */ - -static CYTHON_SMALL_CODE int __Pyx_InitCachedConstants(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_InitCachedConstants", 0); - - /* "fontTools/feaLib/lexer.py":13 - * - * - * class Lexer(object): # <<<<<<<<<<<<<< - * NUMBER = "NUMBER" - * HEXADECIMAL = "HEXADECIMAL" - */ - __pyx_tuple__14 = PyTuple_Pack(1, __pyx_builtin_object); if (unlikely(!__pyx_tuple__14)) __PYX_ERR(0, 13, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__14); - __Pyx_GIVEREF(__pyx_tuple__14); - __pyx_tuple__15 = PyTuple_Pack(1, __pyx_builtin_object); if (unlikely(!__pyx_tuple__15)) __PYX_ERR(0, 13, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__15); - __Pyx_GIVEREF(__pyx_tuple__15); - - /* "fontTools/feaLib/lexer.py":43 - * MODE_FILENAME_ = "FILENAME" - * - * def __init__(self, text, filename): # <<<<<<<<<<<<<< - * self.filename_ = filename - * self.line_ = 1 - */ - __pyx_tuple__21 = PyTuple_Pack(3, __pyx_n_s_self, __pyx_n_s_text, __pyx_n_s_filename); if (unlikely(!__pyx_tuple__21)) __PYX_ERR(0, 43, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__21); - __Pyx_GIVEREF(__pyx_tuple__21); - __pyx_codeobj__22 = (PyObject*)__Pyx_PyCode_New(3, 0, 0, 3, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__21, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_feaLib_lexer_py, __pyx_n_s_init, 43, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__22)) __PYX_ERR(0, 43, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":52 - * self.mode_ = Lexer.MODE_NORMAL_ - * - * def __iter__(self): # <<<<<<<<<<<<<< - * return self - * - */ - __pyx_tuple__23 = PyTuple_Pack(1, __pyx_n_s_self); if (unlikely(!__pyx_tuple__23)) __PYX_ERR(0, 52, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__23); - __Pyx_GIVEREF(__pyx_tuple__23); - __pyx_codeobj__24 = (PyObject*)__Pyx_PyCode_New(1, 0, 0, 1, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__23, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_feaLib_lexer_py, __pyx_n_s_iter, 52, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__24)) __PYX_ERR(0, 52, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":55 - * return self - * - * def next(self): # Python 2 # <<<<<<<<<<<<<< - * return self.__next__() - * - */ - __pyx_codeobj__25 = (PyObject*)__Pyx_PyCode_New(1, 0, 0, 1, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__23, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_feaLib_lexer_py, __pyx_n_s_next_3, 55, 
__pyx_empty_bytes); if (unlikely(!__pyx_codeobj__25)) __PYX_ERR(0, 55, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":58 - * return self.__next__() - * - * def __next__(self): # Python 3 # <<<<<<<<<<<<<< - * while True: - * token_type, token, location = self.next_() - */ - __pyx_tuple__26 = PyTuple_Pack(4, __pyx_n_s_self, __pyx_n_s_token_type, __pyx_n_s_token, __pyx_n_s_location_2); if (unlikely(!__pyx_tuple__26)) __PYX_ERR(0, 58, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__26); - __Pyx_GIVEREF(__pyx_tuple__26); - __pyx_codeobj__27 = (PyObject*)__Pyx_PyCode_New(1, 0, 0, 4, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__26, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_feaLib_lexer_py, __pyx_n_s_next, 58, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__27)) __PYX_ERR(0, 58, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":64 - * return (token_type, token, location) - * - * def location_(self): # <<<<<<<<<<<<<< - * column = self.pos_ - self.line_start_ + 1 - * return FeatureLibLocation(self.filename_ or "", self.line_, column) - */ - __pyx_tuple__28 = PyTuple_Pack(2, __pyx_n_s_self, __pyx_n_s_column); if (unlikely(!__pyx_tuple__28)) __PYX_ERR(0, 64, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__28); - __Pyx_GIVEREF(__pyx_tuple__28); - __pyx_codeobj__29 = (PyObject*)__Pyx_PyCode_New(1, 0, 0, 2, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__28, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_feaLib_lexer_py, __pyx_n_s_location, 64, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__29)) __PYX_ERR(0, 64, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":68 - * return FeatureLibLocation(self.filename_ or "", self.line_, column) - * - * def next_(self): # <<<<<<<<<<<<<< - * self.scan_over_(Lexer.CHAR_WHITESPACE_) - * location = self.location_() - */ - __pyx_tuple__30 = PyTuple_Pack(10, __pyx_n_s_self, __pyx_n_s_location_2, __pyx_n_s_start, __pyx_n_s_text, __pyx_n_s_limit, __pyx_n_s_cur_char, __pyx_n_s_next_char, __pyx_n_s_glyphclass, __pyx_n_s_token, __pyx_n_s_string); if (unlikely(!__pyx_tuple__30)) __PYX_ERR(0, 68, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__30); - __Pyx_GIVEREF(__pyx_tuple__30); - __pyx_codeobj__31 = (PyObject*)__Pyx_PyCode_New(1, 0, 0, 10, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__30, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_feaLib_lexer_py, __pyx_n_s_next_2, 68, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__31)) __PYX_ERR(0, 68, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":169 - * raise FeatureLibError("Unexpected character: %r" % cur_char, location) - * - * def scan_over_(self, valid): # <<<<<<<<<<<<<< - * p = self.pos_ - * while p < self.text_length_ and self.text_[p] in valid: - */ - __pyx_tuple__32 = PyTuple_Pack(3, __pyx_n_s_self, __pyx_n_s_valid, __pyx_n_s_p); if (unlikely(!__pyx_tuple__32)) __PYX_ERR(0, 169, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__32); - __Pyx_GIVEREF(__pyx_tuple__32); - __pyx_codeobj__33 = (PyObject*)__Pyx_PyCode_New(2, 0, 0, 3, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__32, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_feaLib_lexer_py, __pyx_n_s_scan_over, 169, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__33)) __PYX_ERR(0, 169, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":175 - * self.pos_ = p - * - * def scan_until_(self, stop_at): # 
<<<<<<<<<<<<<< - * p = self.pos_ - * while p < self.text_length_ and self.text_[p] not in stop_at: - */ - __pyx_tuple__34 = PyTuple_Pack(3, __pyx_n_s_self, __pyx_n_s_stop_at, __pyx_n_s_p); if (unlikely(!__pyx_tuple__34)) __PYX_ERR(0, 175, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__34); - __Pyx_GIVEREF(__pyx_tuple__34); - __pyx_codeobj__35 = (PyObject*)__Pyx_PyCode_New(2, 0, 0, 3, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__34, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_feaLib_lexer_py, __pyx_n_s_scan_until, 175, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__35)) __PYX_ERR(0, 175, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":181 - * self.pos_ = p - * - * def scan_anonymous_block(self, tag): # <<<<<<<<<<<<<< - * location = self.location_() - * tag = tag.strip() - */ - __pyx_tuple__36 = PyTuple_Pack(5, __pyx_n_s_self, __pyx_n_s_tag, __pyx_n_s_location_2, __pyx_n_s_regexp, __pyx_n_s_split); if (unlikely(!__pyx_tuple__36)) __PYX_ERR(0, 181, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__36); - __Pyx_GIVEREF(__pyx_tuple__36); - __pyx_codeobj__37 = (PyObject*)__Pyx_PyCode_New(2, 0, 0, 5, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__36, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_feaLib_lexer_py, __pyx_n_s_scan_anonymous_block, 181, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__37)) __PYX_ERR(0, 181, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":196 - * - * - * class IncludingLexer(object): # <<<<<<<<<<<<<< - * """A Lexer that follows include statements. - * - */ - __pyx_tuple__38 = PyTuple_Pack(1, __pyx_builtin_object); if (unlikely(!__pyx_tuple__38)) __PYX_ERR(0, 196, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__38); - __Pyx_GIVEREF(__pyx_tuple__38); - __pyx_tuple__39 = PyTuple_Pack(1, __pyx_builtin_object); if (unlikely(!__pyx_tuple__39)) __PYX_ERR(0, 196, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__39); - __Pyx_GIVEREF(__pyx_tuple__39); - - /* "fontTools/feaLib/lexer.py":211 - * """ - * - * def __init__(self, featurefile, *, includeDir=None): # <<<<<<<<<<<<<< - * """Initializes an IncludingLexer. 
- * - */ - __pyx_tuple__40 = PyTuple_Pack(3, __pyx_n_s_self, __pyx_n_s_featurefile, __pyx_n_s_includeDir); if (unlikely(!__pyx_tuple__40)) __PYX_ERR(0, 211, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__40); - __Pyx_GIVEREF(__pyx_tuple__40); - __pyx_codeobj__41 = (PyObject*)__Pyx_PyCode_New(2, 0, 1, 3, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__40, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_feaLib_lexer_py, __pyx_n_s_init, 211, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__41)) __PYX_ERR(0, 211, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":225 - * self.includeDir = includeDir - * - * def __iter__(self): # <<<<<<<<<<<<<< - * return self - * - */ - __pyx_codeobj__42 = (PyObject*)__Pyx_PyCode_New(1, 0, 0, 1, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__23, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_feaLib_lexer_py, __pyx_n_s_iter, 225, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__42)) __PYX_ERR(0, 225, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":228 - * return self - * - * def next(self): # Python 2 # <<<<<<<<<<<<<< - * return self.__next__() - * - */ - __pyx_codeobj__43 = (PyObject*)__Pyx_PyCode_New(1, 0, 0, 1, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__23, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_feaLib_lexer_py, __pyx_n_s_next_3, 228, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__43)) __PYX_ERR(0, 228, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":231 - * return self.__next__() - * - * def __next__(self): # Python 3 # <<<<<<<<<<<<<< - * while self.lexers_: - * lexer = self.lexers_[-1] - */ - __pyx_tuple__44 = PyTuple_Pack(11, __pyx_n_s_self, __pyx_n_s_lexer, __pyx_n_s_token_type, __pyx_n_s_token, __pyx_n_s_location_2, __pyx_n_s_fname_type, __pyx_n_s_fname_token, __pyx_n_s_fname_location, __pyx_n_s_path, __pyx_n_s_curpath, __pyx_n_s_err); if (unlikely(!__pyx_tuple__44)) __PYX_ERR(0, 231, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__44); - __Pyx_GIVEREF(__pyx_tuple__44); - __pyx_codeobj__45 = (PyObject*)__Pyx_PyCode_New(1, 0, 0, 11, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__44, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_feaLib_lexer_py, __pyx_n_s_next, 231, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__45)) __PYX_ERR(0, 231, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":270 - * raise StopIteration() - * - * @staticmethod # <<<<<<<<<<<<<< - * def make_lexer_(file_or_path): - * if hasattr(file_or_path, "read"): - */ - __pyx_tuple__46 = PyTuple_Pack(5, __pyx_n_s_file_or_path, __pyx_n_s_fileobj, __pyx_n_s_closing, __pyx_n_s_filename, __pyx_n_s_data); if (unlikely(!__pyx_tuple__46)) __PYX_ERR(0, 270, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__46); - __Pyx_GIVEREF(__pyx_tuple__46); - __pyx_codeobj__47 = (PyObject*)__Pyx_PyCode_New(1, 0, 0, 5, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__46, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_feaLib_lexer_py, __pyx_n_s_make_lexer, 270, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__47)) __PYX_ERR(0, 270, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":283 - * return Lexer(data, filename) - * - * def scan_anonymous_block(self, tag): # <<<<<<<<<<<<<< - * return self.lexers_[-1].scan_anonymous_block(tag) - * - */ - __pyx_tuple__48 = 
PyTuple_Pack(2, __pyx_n_s_self, __pyx_n_s_tag); if (unlikely(!__pyx_tuple__48)) __PYX_ERR(0, 283, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__48); - __Pyx_GIVEREF(__pyx_tuple__48); - __pyx_codeobj__49 = (PyObject*)__Pyx_PyCode_New(2, 0, 0, 2, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__48, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_feaLib_lexer_py, __pyx_n_s_scan_anonymous_block, 283, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__49)) __PYX_ERR(0, 283, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":290 - * """Lexer that does not follow `include` statements, emits them as-is.""" - * - * def __next__(self): # Python 3 # <<<<<<<<<<<<<< - * return next(self.lexers_[0]) - */ - __pyx_codeobj__50 = (PyObject*)__Pyx_PyCode_New(1, 0, 0, 1, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__23, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_feaLib_lexer_py, __pyx_n_s_next, 290, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__50)) __PYX_ERR(0, 290, __pyx_L1_error) - __Pyx_RefNannyFinishContext(); - return 0; - __pyx_L1_error:; - __Pyx_RefNannyFinishContext(); - return -1; -} -/* #### Code section: init_constants ### */ - -static CYTHON_SMALL_CODE int __Pyx_InitConstants(void) { - __pyx_umethod_PyList_Type_pop.type = (PyObject*)&PyList_Type; - __pyx_umethod_PyList_Type_pop.method_name = &__pyx_n_s_pop; - if (__Pyx_CreateStringTabAndInitStrings() < 0) __PYX_ERR(0, 1, __pyx_L1_error); - __pyx_int_0 = PyInt_FromLong(0); if (unlikely(!__pyx_int_0)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_1 = PyInt_FromLong(1); if (unlikely(!__pyx_int_1)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_2 = PyInt_FromLong(2); if (unlikely(!__pyx_int_2)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_8 = PyInt_FromLong(8); if (unlikely(!__pyx_int_8)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_10 = PyInt_FromLong(10); if (unlikely(!__pyx_int_10)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_16 = PyInt_FromLong(16); if (unlikely(!__pyx_int_16)) __PYX_ERR(0, 1, __pyx_L1_error) - return 0; - __pyx_L1_error:; - return -1; -} -/* #### Code section: init_globals ### */ - -static CYTHON_SMALL_CODE int __Pyx_InitGlobals(void) { - return 0; -} -/* #### Code section: init_module ### */ - -static CYTHON_SMALL_CODE int __Pyx_modinit_global_init_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_variable_export_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_function_export_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_type_init_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_type_import_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_variable_import_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_function_import_code(void); /*proto*/ - -static int __Pyx_modinit_global_init_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_global_init_code", 0); - /*--- Global init code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_variable_export_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_variable_export_code", 0); - /*--- Variable export code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_function_export_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_function_export_code", 0); - /*--- Function export code ---*/ - 
__Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_type_init_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_type_init_code", 0); - /*--- Type init code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_type_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_type_import_code", 0); - /*--- Type import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_variable_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_variable_import_code", 0); - /*--- Variable import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_function_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_function_import_code", 0); - /*--- Function import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - - -#if PY_MAJOR_VERSION >= 3 -#if CYTHON_PEP489_MULTI_PHASE_INIT -static PyObject* __pyx_pymod_create(PyObject *spec, PyModuleDef *def); /*proto*/ -static int __pyx_pymod_exec_lexer(PyObject* module); /*proto*/ -static PyModuleDef_Slot __pyx_moduledef_slots[] = { - {Py_mod_create, (void*)__pyx_pymod_create}, - {Py_mod_exec, (void*)__pyx_pymod_exec_lexer}, - {0, NULL} -}; -#endif - -#ifdef __cplusplus -namespace { - struct PyModuleDef __pyx_moduledef = - #else - static struct PyModuleDef __pyx_moduledef = - #endif - { - PyModuleDef_HEAD_INIT, - "lexer", - 0, /* m_doc */ - #if CYTHON_PEP489_MULTI_PHASE_INIT - 0, /* m_size */ - #elif CYTHON_USE_MODULE_STATE - sizeof(__pyx_mstate), /* m_size */ - #else - -1, /* m_size */ - #endif - __pyx_methods /* m_methods */, - #if CYTHON_PEP489_MULTI_PHASE_INIT - __pyx_moduledef_slots, /* m_slots */ - #else - NULL, /* m_reload */ - #endif - #if CYTHON_USE_MODULE_STATE - __pyx_m_traverse, /* m_traverse */ - __pyx_m_clear, /* m_clear */ - NULL /* m_free */ - #else - NULL, /* m_traverse */ - NULL, /* m_clear */ - NULL /* m_free */ - #endif - }; - #ifdef __cplusplus -} /* anonymous namespace */ -#endif -#endif - -#ifndef CYTHON_NO_PYINIT_EXPORT -#define __Pyx_PyMODINIT_FUNC PyMODINIT_FUNC -#elif PY_MAJOR_VERSION < 3 -#ifdef __cplusplus -#define __Pyx_PyMODINIT_FUNC extern "C" void -#else -#define __Pyx_PyMODINIT_FUNC void -#endif -#else -#ifdef __cplusplus -#define __Pyx_PyMODINIT_FUNC extern "C" PyObject * -#else -#define __Pyx_PyMODINIT_FUNC PyObject * -#endif -#endif - - -#if PY_MAJOR_VERSION < 3 -__Pyx_PyMODINIT_FUNC initlexer(void) CYTHON_SMALL_CODE; /*proto*/ -__Pyx_PyMODINIT_FUNC initlexer(void) -#else -__Pyx_PyMODINIT_FUNC PyInit_lexer(void) CYTHON_SMALL_CODE; /*proto*/ -__Pyx_PyMODINIT_FUNC PyInit_lexer(void) -#if CYTHON_PEP489_MULTI_PHASE_INIT -{ - return PyModuleDef_Init(&__pyx_moduledef); -} -static CYTHON_SMALL_CODE int __Pyx_check_single_interpreter(void) { - #if PY_VERSION_HEX >= 0x030700A1 - static PY_INT64_T main_interpreter_id = -1; - PY_INT64_T current_id = PyInterpreterState_GetID(PyThreadState_Get()->interp); - if (main_interpreter_id == -1) { - main_interpreter_id = current_id; - return (unlikely(current_id == -1)) ? 
-1 : 0; - } else if (unlikely(main_interpreter_id != current_id)) - #else - static PyInterpreterState *main_interpreter = NULL; - PyInterpreterState *current_interpreter = PyThreadState_Get()->interp; - if (!main_interpreter) { - main_interpreter = current_interpreter; - } else if (unlikely(main_interpreter != current_interpreter)) - #endif - { - PyErr_SetString( - PyExc_ImportError, - "Interpreter change detected - this module can only be loaded into one interpreter per process."); - return -1; - } - return 0; -} -#if CYTHON_COMPILING_IN_LIMITED_API -static CYTHON_SMALL_CODE int __Pyx_copy_spec_to_module(PyObject *spec, PyObject *module, const char* from_name, const char* to_name, int allow_none) -#else -static CYTHON_SMALL_CODE int __Pyx_copy_spec_to_module(PyObject *spec, PyObject *moddict, const char* from_name, const char* to_name, int allow_none) -#endif -{ - PyObject *value = PyObject_GetAttrString(spec, from_name); - int result = 0; - if (likely(value)) { - if (allow_none || value != Py_None) { -#if CYTHON_COMPILING_IN_LIMITED_API - result = PyModule_AddObject(module, to_name, value); -#else - result = PyDict_SetItemString(moddict, to_name, value); -#endif - } - Py_DECREF(value); - } else if (PyErr_ExceptionMatches(PyExc_AttributeError)) { - PyErr_Clear(); - } else { - result = -1; - } - return result; -} -static CYTHON_SMALL_CODE PyObject* __pyx_pymod_create(PyObject *spec, PyModuleDef *def) { - PyObject *module = NULL, *moddict, *modname; - CYTHON_UNUSED_VAR(def); - if (__Pyx_check_single_interpreter()) - return NULL; - if (__pyx_m) - return __Pyx_NewRef(__pyx_m); - modname = PyObject_GetAttrString(spec, "name"); - if (unlikely(!modname)) goto bad; - module = PyModule_NewObject(modname); - Py_DECREF(modname); - if (unlikely(!module)) goto bad; -#if CYTHON_COMPILING_IN_LIMITED_API - moddict = module; -#else - moddict = PyModule_GetDict(module); - if (unlikely(!moddict)) goto bad; -#endif - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "loader", "__loader__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "origin", "__file__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "parent", "__package__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "submodule_search_locations", "__path__", 0) < 0)) goto bad; - return module; -bad: - Py_XDECREF(module); - return NULL; -} - - -static CYTHON_SMALL_CODE int __pyx_pymod_exec_lexer(PyObject *__pyx_pyinit_module) -#endif -#endif -{ - int stringtab_initialized = 0; - #if CYTHON_USE_MODULE_STATE - int pystate_addmodule_run = 0; - #endif - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - PyObject *__pyx_t_9 = NULL; - PyObject *__pyx_t_10 = NULL; - int __pyx_t_11; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannyDeclarations - #if CYTHON_PEP489_MULTI_PHASE_INIT - if (__pyx_m) { - if (__pyx_m == __pyx_pyinit_module) return 0; - PyErr_SetString(PyExc_RuntimeError, "Module 'lexer' has already been imported. 
Re-initialisation is not supported."); - return -1; - } - #elif PY_MAJOR_VERSION >= 3 - if (__pyx_m) return __Pyx_NewRef(__pyx_m); - #endif - /*--- Module creation code ---*/ - #if CYTHON_PEP489_MULTI_PHASE_INIT - __pyx_m = __pyx_pyinit_module; - Py_INCREF(__pyx_m); - #else - #if PY_MAJOR_VERSION < 3 - __pyx_m = Py_InitModule4("lexer", __pyx_methods, 0, 0, PYTHON_API_VERSION); Py_XINCREF(__pyx_m); - if (unlikely(!__pyx_m)) __PYX_ERR(0, 1, __pyx_L1_error) - #elif CYTHON_USE_MODULE_STATE - __pyx_t_1 = PyModule_Create(&__pyx_moduledef); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1, __pyx_L1_error) - { - int add_module_result = PyState_AddModule(__pyx_t_1, &__pyx_moduledef); - __pyx_t_1 = 0; /* transfer ownership from __pyx_t_1 to lexer pseudovariable */ - if (unlikely((add_module_result < 0))) __PYX_ERR(0, 1, __pyx_L1_error) - pystate_addmodule_run = 1; - } - #else - __pyx_m = PyModule_Create(&__pyx_moduledef); - if (unlikely(!__pyx_m)) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #endif - CYTHON_UNUSED_VAR(__pyx_t_1); - __pyx_d = PyModule_GetDict(__pyx_m); if (unlikely(!__pyx_d)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_d); - __pyx_b = PyImport_AddModule(__Pyx_BUILTIN_MODULE_NAME); if (unlikely(!__pyx_b)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_b); - __pyx_cython_runtime = PyImport_AddModule((char *) "cython_runtime"); if (unlikely(!__pyx_cython_runtime)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_cython_runtime); - if (PyObject_SetAttrString(__pyx_m, "__builtins__", __pyx_b) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #if CYTHON_REFNANNY -__Pyx_RefNanny = __Pyx_RefNannyImportAPI("refnanny"); -if (!__Pyx_RefNanny) { - PyErr_Clear(); - __Pyx_RefNanny = __Pyx_RefNannyImportAPI("Cython.Runtime.refnanny"); - if (!__Pyx_RefNanny) - Py_FatalError("failed to import 'refnanny' module"); -} -#endif - __Pyx_RefNannySetupContext("__Pyx_PyMODINIT_FUNC PyInit_lexer(void)", 0); - if (__Pyx_check_binary_version() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #ifdef __Pxy_PyFrame_Initialize_Offsets - __Pxy_PyFrame_Initialize_Offsets(); - #endif - __pyx_empty_tuple = PyTuple_New(0); if (unlikely(!__pyx_empty_tuple)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_empty_bytes = PyBytes_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_bytes)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_empty_unicode = PyUnicode_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_unicode)) __PYX_ERR(0, 1, __pyx_L1_error) - #ifdef __Pyx_CyFunction_USED - if (__pyx_CyFunction_init(__pyx_m) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_FusedFunction_USED - if (__pyx_FusedFunction_init(__pyx_m) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_Coroutine_USED - if (__pyx_Coroutine_init(__pyx_m) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_Generator_USED - if (__pyx_Generator_init(__pyx_m) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_AsyncGen_USED - if (__pyx_AsyncGen_init(__pyx_m) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_StopAsyncIteration_USED - if (__pyx_StopAsyncIteration_init(__pyx_m) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - /*--- Library function declarations ---*/ - /*--- Threads initialization code ---*/ - #if defined(WITH_THREAD) && PY_VERSION_HEX < 0x030700F0 && defined(__PYX_FORCE_INIT_THREADS) && __PYX_FORCE_INIT_THREADS - PyEval_InitThreads(); - #endif - /*--- Initialize various global constants etc. 
---*/ - if (__Pyx_InitConstants() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - stringtab_initialized = 1; - if (__Pyx_InitGlobals() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #if PY_MAJOR_VERSION < 3 && (__PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT) - if (__Pyx_init_sys_getdefaultencoding_params() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - if (__pyx_module_is_main_fontTools__feaLib__lexer) { - if (PyObject_SetAttr(__pyx_m, __pyx_n_s_name_2, __pyx_n_s_main) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - } - #if PY_MAJOR_VERSION >= 3 - { - PyObject *modules = PyImport_GetModuleDict(); if (unlikely(!modules)) __PYX_ERR(0, 1, __pyx_L1_error) - if (!PyDict_GetItemString(modules, "fontTools.feaLib.lexer")) { - if (unlikely((PyDict_SetItemString(modules, "fontTools.feaLib.lexer", __pyx_m) < 0))) __PYX_ERR(0, 1, __pyx_L1_error) - } - } - #endif - /*--- Builtin init code ---*/ - if (__Pyx_InitCachedBuiltins() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - /*--- Constants init code ---*/ - if (__Pyx_InitCachedConstants() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - /*--- Global type/function init code ---*/ - (void)__Pyx_modinit_global_init_code(); - (void)__Pyx_modinit_variable_export_code(); - (void)__Pyx_modinit_function_export_code(); - (void)__Pyx_modinit_type_init_code(); - (void)__Pyx_modinit_type_import_code(); - (void)__Pyx_modinit_variable_import_code(); - (void)__Pyx_modinit_function_import_code(); - /*--- Execution code ---*/ - #if defined(__Pyx_Generator_USED) || defined(__Pyx_Coroutine_USED) - if (__Pyx_patch_abc() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - - /* "fontTools/feaLib/lexer.py":1 - * from fontTools.feaLib.error import FeatureLibError, IncludedFeaNotFound # <<<<<<<<<<<<<< - * from fontTools.feaLib.location import FeatureLibLocation - * import re - */ - __pyx_t_2 = PyList_New(2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_n_s_FeatureLibError); - __Pyx_GIVEREF(__pyx_n_s_FeatureLibError); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 0, __pyx_n_s_FeatureLibError)) __PYX_ERR(0, 1, __pyx_L1_error); - __Pyx_INCREF(__pyx_n_s_IncludedFeaNotFound); - __Pyx_GIVEREF(__pyx_n_s_IncludedFeaNotFound); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 1, __pyx_n_s_IncludedFeaNotFound)) __PYX_ERR(0, 1, __pyx_L1_error); - __pyx_t_3 = __Pyx_Import(__pyx_n_s_fontTools_feaLib_error, __pyx_t_2, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_ImportFrom(__pyx_t_3, __pyx_n_s_FeatureLibError); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_FeatureLibError, __pyx_t_2) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_ImportFrom(__pyx_t_3, __pyx_n_s_IncludedFeaNotFound); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_IncludedFeaNotFound, __pyx_t_2) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/feaLib/lexer.py":2 - * from fontTools.feaLib.error import FeatureLibError, IncludedFeaNotFound - * from fontTools.feaLib.location import FeatureLibLocation # <<<<<<<<<<<<<< - * import re - * import os - */ - __pyx_t_3 = PyList_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 2, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - 
__Pyx_INCREF(__pyx_n_s_FeatureLibLocation); - __Pyx_GIVEREF(__pyx_n_s_FeatureLibLocation); - if (__Pyx_PyList_SET_ITEM(__pyx_t_3, 0, __pyx_n_s_FeatureLibLocation)) __PYX_ERR(0, 2, __pyx_L1_error); - __pyx_t_2 = __Pyx_Import(__pyx_n_s_fontTools_feaLib_location, __pyx_t_3, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 2, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_ImportFrom(__pyx_t_2, __pyx_n_s_FeatureLibLocation); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 2, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_FeatureLibLocation, __pyx_t_3) < 0) __PYX_ERR(0, 2, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/feaLib/lexer.py":3 - * from fontTools.feaLib.error import FeatureLibError, IncludedFeaNotFound - * from fontTools.feaLib.location import FeatureLibLocation - * import re # <<<<<<<<<<<<<< - * import os - * - */ - __pyx_t_2 = __Pyx_ImportDottedModule(__pyx_n_s_re, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 3, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_re, __pyx_t_2) < 0) __PYX_ERR(0, 3, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/feaLib/lexer.py":4 - * from fontTools.feaLib.location import FeatureLibLocation - * import re - * import os # <<<<<<<<<<<<<< - * - * try: - */ - __pyx_t_2 = __Pyx_ImportDottedModule(__pyx_n_s_os, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 4, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_os, __pyx_t_2) < 0) __PYX_ERR(0, 4, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/feaLib/lexer.py":6 - * import os - * - * try: # <<<<<<<<<<<<<< - * import cython - * except ImportError: - */ - { - (void)__pyx_t_1; (void)__pyx_t_4; (void)__pyx_t_5; /* mark used */ - /*try:*/ { - - /* "fontTools/feaLib/lexer.py":7 - * - * try: - * import cython # <<<<<<<<<<<<<< - * except ImportError: - * # if cython not installed, use mock module with no-op decorators and types - */ - } - } - - /* "fontTools/feaLib/lexer.py":13 - * - * - * class Lexer(object): # <<<<<<<<<<<<<< - * NUMBER = "NUMBER" - * HEXADECIMAL = "HEXADECIMAL" - */ - __pyx_t_2 = __Pyx_PEP560_update_bases(__pyx_tuple__15); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 13, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_CalculateMetaclass(NULL, __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 13, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_6 = __Pyx_Py3MetaclassPrepare(__pyx_t_3, __pyx_t_2, __pyx_n_s_Lexer, __pyx_n_s_Lexer, (PyObject *) NULL, __pyx_n_s_fontTools_feaLib_lexer, (PyObject *) NULL); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 13, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - if (__pyx_t_2 != __pyx_tuple__15) { - if (unlikely((PyDict_SetItemString(__pyx_t_6, "__orig_bases__", __pyx_tuple__15) < 0))) __PYX_ERR(0, 13, __pyx_L1_error) - } - - /* "fontTools/feaLib/lexer.py":14 - * - * class Lexer(object): - * NUMBER = "NUMBER" # <<<<<<<<<<<<<< - * HEXADECIMAL = "HEXADECIMAL" - * OCTAL = "OCTAL" - */ - if (__Pyx_SetNameInClass(__pyx_t_6, __pyx_n_s_NUMBER, __pyx_n_u_NUMBER) < 0) __PYX_ERR(0, 14, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":15 - * class Lexer(object): - * NUMBER = "NUMBER" - * HEXADECIMAL = "HEXADECIMAL" # <<<<<<<<<<<<<< - * OCTAL = "OCTAL" - * NUMBERS = (NUMBER, HEXADECIMAL, OCTAL) - */ - if (__Pyx_SetNameInClass(__pyx_t_6, __pyx_n_s_HEXADECIMAL, __pyx_n_u_HEXADECIMAL) < 0) __PYX_ERR(0, 15, 
__pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":16 - * NUMBER = "NUMBER" - * HEXADECIMAL = "HEXADECIMAL" - * OCTAL = "OCTAL" # <<<<<<<<<<<<<< - * NUMBERS = (NUMBER, HEXADECIMAL, OCTAL) - * FLOAT = "FLOAT" - */ - if (__Pyx_SetNameInClass(__pyx_t_6, __pyx_n_s_OCTAL, __pyx_n_u_OCTAL) < 0) __PYX_ERR(0, 16, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":17 - * HEXADECIMAL = "HEXADECIMAL" - * OCTAL = "OCTAL" - * NUMBERS = (NUMBER, HEXADECIMAL, OCTAL) # <<<<<<<<<<<<<< - * FLOAT = "FLOAT" - * STRING = "STRING" - */ - __pyx_t_7 = PyObject_GetItem(__pyx_t_6, __pyx_n_s_NUMBER); - if (unlikely(!__pyx_t_7)) { - PyErr_Clear(); - __Pyx_GetModuleGlobalName(__pyx_t_7, __pyx_n_s_NUMBER); - } - if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 17, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_8 = PyObject_GetItem(__pyx_t_6, __pyx_n_s_HEXADECIMAL); - if (unlikely(!__pyx_t_8)) { - PyErr_Clear(); - __Pyx_GetModuleGlobalName(__pyx_t_8, __pyx_n_s_HEXADECIMAL); - } - if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 17, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_9 = PyObject_GetItem(__pyx_t_6, __pyx_n_s_OCTAL); - if (unlikely(!__pyx_t_9)) { - PyErr_Clear(); - __Pyx_GetModuleGlobalName(__pyx_t_9, __pyx_n_s_OCTAL); - } - if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 17, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_10 = PyTuple_New(3); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 17, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_GIVEREF(__pyx_t_7); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_10, 0, __pyx_t_7)) __PYX_ERR(0, 17, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_8); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_10, 1, __pyx_t_8)) __PYX_ERR(0, 17, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_9); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_10, 2, __pyx_t_9)) __PYX_ERR(0, 17, __pyx_L1_error); - __pyx_t_7 = 0; - __pyx_t_8 = 0; - __pyx_t_9 = 0; - if (__Pyx_SetNameInClass(__pyx_t_6, __pyx_n_s_NUMBERS, __pyx_t_10) < 0) __PYX_ERR(0, 17, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - - /* "fontTools/feaLib/lexer.py":18 - * OCTAL = "OCTAL" - * NUMBERS = (NUMBER, HEXADECIMAL, OCTAL) - * FLOAT = "FLOAT" # <<<<<<<<<<<<<< - * STRING = "STRING" - * NAME = "NAME" - */ - if (__Pyx_SetNameInClass(__pyx_t_6, __pyx_n_s_FLOAT, __pyx_n_u_FLOAT) < 0) __PYX_ERR(0, 18, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":19 - * NUMBERS = (NUMBER, HEXADECIMAL, OCTAL) - * FLOAT = "FLOAT" - * STRING = "STRING" # <<<<<<<<<<<<<< - * NAME = "NAME" - * FILENAME = "FILENAME" - */ - if (__Pyx_SetNameInClass(__pyx_t_6, __pyx_n_s_STRING, __pyx_n_u_STRING) < 0) __PYX_ERR(0, 19, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":20 - * FLOAT = "FLOAT" - * STRING = "STRING" - * NAME = "NAME" # <<<<<<<<<<<<<< - * FILENAME = "FILENAME" - * GLYPHCLASS = "GLYPHCLASS" - */ - if (__Pyx_SetNameInClass(__pyx_t_6, __pyx_n_s_NAME, __pyx_n_u_NAME) < 0) __PYX_ERR(0, 20, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":21 - * STRING = "STRING" - * NAME = "NAME" - * FILENAME = "FILENAME" # <<<<<<<<<<<<<< - * GLYPHCLASS = "GLYPHCLASS" - * CID = "CID" - */ - if (__Pyx_SetNameInClass(__pyx_t_6, __pyx_n_s_FILENAME, __pyx_n_u_FILENAME) < 0) __PYX_ERR(0, 21, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":22 - * NAME = "NAME" - * FILENAME = "FILENAME" - * GLYPHCLASS = "GLYPHCLASS" # <<<<<<<<<<<<<< - * CID = "CID" - * SYMBOL = "SYMBOL" - */ - if (__Pyx_SetNameInClass(__pyx_t_6, __pyx_n_s_GLYPHCLASS, __pyx_n_u_GLYPHCLASS) < 0) __PYX_ERR(0, 22, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":23 - * FILENAME = "FILENAME" - * GLYPHCLASS = "GLYPHCLASS" - * CID = "CID" # 
<<<<<<<<<<<<<< - * SYMBOL = "SYMBOL" - * COMMENT = "COMMENT" - */ - if (__Pyx_SetNameInClass(__pyx_t_6, __pyx_n_s_CID, __pyx_n_u_CID) < 0) __PYX_ERR(0, 23, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":24 - * GLYPHCLASS = "GLYPHCLASS" - * CID = "CID" - * SYMBOL = "SYMBOL" # <<<<<<<<<<<<<< - * COMMENT = "COMMENT" - * NEWLINE = "NEWLINE" - */ - if (__Pyx_SetNameInClass(__pyx_t_6, __pyx_n_s_SYMBOL, __pyx_n_u_SYMBOL) < 0) __PYX_ERR(0, 24, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":25 - * CID = "CID" - * SYMBOL = "SYMBOL" - * COMMENT = "COMMENT" # <<<<<<<<<<<<<< - * NEWLINE = "NEWLINE" - * ANONYMOUS_BLOCK = "ANONYMOUS_BLOCK" - */ - if (__Pyx_SetNameInClass(__pyx_t_6, __pyx_n_s_COMMENT, __pyx_n_u_COMMENT) < 0) __PYX_ERR(0, 25, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":26 - * SYMBOL = "SYMBOL" - * COMMENT = "COMMENT" - * NEWLINE = "NEWLINE" # <<<<<<<<<<<<<< - * ANONYMOUS_BLOCK = "ANONYMOUS_BLOCK" - * - */ - if (__Pyx_SetNameInClass(__pyx_t_6, __pyx_n_s_NEWLINE, __pyx_n_u_NEWLINE) < 0) __PYX_ERR(0, 26, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":27 - * COMMENT = "COMMENT" - * NEWLINE = "NEWLINE" - * ANONYMOUS_BLOCK = "ANONYMOUS_BLOCK" # <<<<<<<<<<<<<< - * - * CHAR_WHITESPACE_ = " \t" - */ - if (__Pyx_SetNameInClass(__pyx_t_6, __pyx_n_s_ANONYMOUS_BLOCK, __pyx_n_u_ANONYMOUS_BLOCK) < 0) __PYX_ERR(0, 27, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":29 - * ANONYMOUS_BLOCK = "ANONYMOUS_BLOCK" - * - * CHAR_WHITESPACE_ = " \t" # <<<<<<<<<<<<<< - * CHAR_NEWLINE_ = "\r\n" - * CHAR_SYMBOL_ = ",;:-+'{}[]<>()=" - */ - if (__Pyx_SetNameInClass(__pyx_t_6, __pyx_n_s_CHAR_WHITESPACE, __pyx_kp_u__16) < 0) __PYX_ERR(0, 29, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":30 - * - * CHAR_WHITESPACE_ = " \t" - * CHAR_NEWLINE_ = "\r\n" # <<<<<<<<<<<<<< - * CHAR_SYMBOL_ = ",;:-+'{}[]<>()=" - * CHAR_DIGIT_ = "0123456789" - */ - if (__Pyx_SetNameInClass(__pyx_t_6, __pyx_n_s_CHAR_NEWLINE, __pyx_kp_u__17) < 0) __PYX_ERR(0, 30, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":31 - * CHAR_WHITESPACE_ = " \t" - * CHAR_NEWLINE_ = "\r\n" - * CHAR_SYMBOL_ = ",;:-+'{}[]<>()=" # <<<<<<<<<<<<<< - * CHAR_DIGIT_ = "0123456789" - * CHAR_HEXDIGIT_ = "0123456789ABCDEFabcdef" - */ - if (__Pyx_SetNameInClass(__pyx_t_6, __pyx_n_s_CHAR_SYMBOL, __pyx_kp_u__18) < 0) __PYX_ERR(0, 31, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":32 - * CHAR_NEWLINE_ = "\r\n" - * CHAR_SYMBOL_ = ",;:-+'{}[]<>()=" - * CHAR_DIGIT_ = "0123456789" # <<<<<<<<<<<<<< - * CHAR_HEXDIGIT_ = "0123456789ABCDEFabcdef" - * CHAR_LETTER_ = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz" - */ - if (__Pyx_SetNameInClass(__pyx_t_6, __pyx_n_s_CHAR_DIGIT, __pyx_kp_u_0123456789) < 0) __PYX_ERR(0, 32, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":33 - * CHAR_SYMBOL_ = ",;:-+'{}[]<>()=" - * CHAR_DIGIT_ = "0123456789" - * CHAR_HEXDIGIT_ = "0123456789ABCDEFabcdef" # <<<<<<<<<<<<<< - * CHAR_LETTER_ = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz" - * CHAR_NAME_START_ = CHAR_LETTER_ + "_+*:.^~!\\" - */ - if (__Pyx_SetNameInClass(__pyx_t_6, __pyx_n_s_CHAR_HEXDIGIT, __pyx_kp_u_0123456789ABCDEFabcdef) < 0) __PYX_ERR(0, 33, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":34 - * CHAR_DIGIT_ = "0123456789" - * CHAR_HEXDIGIT_ = "0123456789ABCDEFabcdef" - * CHAR_LETTER_ = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz" # <<<<<<<<<<<<<< - * CHAR_NAME_START_ = CHAR_LETTER_ + "_+*:.^~!\\" - * CHAR_NAME_CONTINUATION_ = CHAR_LETTER_ + CHAR_DIGIT_ + "_.+*:^~!/-" - */ - if (__Pyx_SetNameInClass(__pyx_t_6, __pyx_n_s_CHAR_LETTER, 
__pyx_n_u_ABCDEFGHIJKLMNOPQRSTUVWXYZabcdef) < 0) __PYX_ERR(0, 34, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":35 - * CHAR_HEXDIGIT_ = "0123456789ABCDEFabcdef" - * CHAR_LETTER_ = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz" - * CHAR_NAME_START_ = CHAR_LETTER_ + "_+*:.^~!\\" # <<<<<<<<<<<<<< - * CHAR_NAME_CONTINUATION_ = CHAR_LETTER_ + CHAR_DIGIT_ + "_.+*:^~!/-" - * - */ - __pyx_t_10 = PyObject_GetItem(__pyx_t_6, __pyx_n_s_CHAR_LETTER); - if (unlikely(!__pyx_t_10)) { - PyErr_Clear(); - __Pyx_GetModuleGlobalName(__pyx_t_10, __pyx_n_s_CHAR_LETTER); - } - if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 35, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __pyx_t_9 = PyNumber_Add(__pyx_t_10, __pyx_kp_u__19); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 35, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - if (__Pyx_SetNameInClass(__pyx_t_6, __pyx_n_s_CHAR_NAME_START, __pyx_t_9) < 0) __PYX_ERR(0, 35, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - - /* "fontTools/feaLib/lexer.py":36 - * CHAR_LETTER_ = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz" - * CHAR_NAME_START_ = CHAR_LETTER_ + "_+*:.^~!\\" - * CHAR_NAME_CONTINUATION_ = CHAR_LETTER_ + CHAR_DIGIT_ + "_.+*:^~!/-" # <<<<<<<<<<<<<< - * - * RE_GLYPHCLASS = re.compile(r"^[A-Za-z_0-9.\-]+$") - */ - __pyx_t_9 = PyObject_GetItem(__pyx_t_6, __pyx_n_s_CHAR_LETTER); - if (unlikely(!__pyx_t_9)) { - PyErr_Clear(); - __Pyx_GetModuleGlobalName(__pyx_t_9, __pyx_n_s_CHAR_LETTER); - } - if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 36, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_10 = PyObject_GetItem(__pyx_t_6, __pyx_n_s_CHAR_DIGIT); - if (unlikely(!__pyx_t_10)) { - PyErr_Clear(); - __Pyx_GetModuleGlobalName(__pyx_t_10, __pyx_n_s_CHAR_DIGIT); - } - if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 36, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __pyx_t_8 = PyNumber_Add(__pyx_t_9, __pyx_t_10); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 36, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_t_10 = PyNumber_Add(__pyx_t_8, __pyx_kp_u__20); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 36, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - if (__Pyx_SetNameInClass(__pyx_t_6, __pyx_n_s_CHAR_NAME_CONTINUATION, __pyx_t_10) < 0) __PYX_ERR(0, 36, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - - /* "fontTools/feaLib/lexer.py":38 - * CHAR_NAME_CONTINUATION_ = CHAR_LETTER_ + CHAR_DIGIT_ + "_.+*:^~!/-" - * - * RE_GLYPHCLASS = re.compile(r"^[A-Za-z_0-9.\-]+$") # <<<<<<<<<<<<<< - * - * MODE_NORMAL_ = "NORMAL" - */ - __Pyx_GetModuleGlobalName(__pyx_t_8, __pyx_n_s_re); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 38, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_t_8, __pyx_n_s_compile); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 38, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_8 = NULL; - __pyx_t_11 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_9))) { - __pyx_t_8 = PyMethod_GET_SELF(__pyx_t_9); - if (likely(__pyx_t_8)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_9); - __Pyx_INCREF(__pyx_t_8); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_9, function); - __pyx_t_11 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_8, __pyx_kp_u_A_Za_z_0_9}; - __pyx_t_10 = __Pyx_PyObject_FastCall(__pyx_t_9, __pyx_callargs+1-__pyx_t_11, 1+__pyx_t_11); - 
__Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0; - if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 38, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - } - if (__Pyx_SetNameInClass(__pyx_t_6, __pyx_n_s_RE_GLYPHCLASS, __pyx_t_10) < 0) __PYX_ERR(0, 38, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - - /* "fontTools/feaLib/lexer.py":40 - * RE_GLYPHCLASS = re.compile(r"^[A-Za-z_0-9.\-]+$") - * - * MODE_NORMAL_ = "NORMAL" # <<<<<<<<<<<<<< - * MODE_FILENAME_ = "FILENAME" - * - */ - if (__Pyx_SetNameInClass(__pyx_t_6, __pyx_n_s_MODE_NORMAL, __pyx_n_u_NORMAL) < 0) __PYX_ERR(0, 40, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":41 - * - * MODE_NORMAL_ = "NORMAL" - * MODE_FILENAME_ = "FILENAME" # <<<<<<<<<<<<<< - * - * def __init__(self, text, filename): - */ - if (__Pyx_SetNameInClass(__pyx_t_6, __pyx_n_s_MODE_FILENAME, __pyx_n_u_FILENAME) < 0) __PYX_ERR(0, 41, __pyx_L1_error) - - /* "fontTools/feaLib/lexer.py":43 - * MODE_FILENAME_ = "FILENAME" - * - * def __init__(self, text, filename): # <<<<<<<<<<<<<< - * self.filename_ = filename - * self.line_ = 1 - */ - __pyx_t_10 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_6feaLib_5lexer_5Lexer_1__init__, 0, __pyx_n_s_Lexer___init, NULL, __pyx_n_s_fontTools_feaLib_lexer, __pyx_d, ((PyObject *)__pyx_codeobj__22)); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 43, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - if (__Pyx_SetNameInClass(__pyx_t_6, __pyx_n_s_init, __pyx_t_10) < 0) __PYX_ERR(0, 43, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - - /* "fontTools/feaLib/lexer.py":52 - * self.mode_ = Lexer.MODE_NORMAL_ - * - * def __iter__(self): # <<<<<<<<<<<<<< - * return self - * - */ - __pyx_t_10 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_6feaLib_5lexer_5Lexer_3__iter__, 0, __pyx_n_s_Lexer___iter, NULL, __pyx_n_s_fontTools_feaLib_lexer, __pyx_d, ((PyObject *)__pyx_codeobj__24)); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 52, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - if (__Pyx_SetNameInClass(__pyx_t_6, __pyx_n_s_iter, __pyx_t_10) < 0) __PYX_ERR(0, 52, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - - /* "fontTools/feaLib/lexer.py":55 - * return self - * - * def next(self): # Python 2 # <<<<<<<<<<<<<< - * return self.__next__() - * - */ - __pyx_t_10 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_6feaLib_5lexer_5Lexer_5next, 0, __pyx_n_s_Lexer_next, NULL, __pyx_n_s_fontTools_feaLib_lexer, __pyx_d, ((PyObject *)__pyx_codeobj__25)); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 55, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - if (__Pyx_SetNameInClass(__pyx_t_6, __pyx_n_s_next_3, __pyx_t_10) < 0) __PYX_ERR(0, 55, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - - /* "fontTools/feaLib/lexer.py":58 - * return self.__next__() - * - * def __next__(self): # Python 3 # <<<<<<<<<<<<<< - * while True: - * token_type, token, location = self.next_() - */ - __pyx_t_10 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_6feaLib_5lexer_5Lexer_7__next__, 0, __pyx_n_s_Lexer___next, NULL, __pyx_n_s_fontTools_feaLib_lexer, __pyx_d, ((PyObject *)__pyx_codeobj__27)); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 58, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - if (__Pyx_SetNameInClass(__pyx_t_6, __pyx_n_s_next, __pyx_t_10) < 0) __PYX_ERR(0, 58, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - - /* "fontTools/feaLib/lexer.py":64 - * return (token_type, token, location) - * - * def location_(self): # <<<<<<<<<<<<<< - * column = self.pos_ - self.line_start_ + 1 - * return FeatureLibLocation(self.filename_ or "", 
self.line_, column) - */ - __pyx_t_10 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_6feaLib_5lexer_5Lexer_9location_, 0, __pyx_n_s_Lexer_location, NULL, __pyx_n_s_fontTools_feaLib_lexer, __pyx_d, ((PyObject *)__pyx_codeobj__29)); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 64, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - if (__Pyx_SetNameInClass(__pyx_t_6, __pyx_n_s_location, __pyx_t_10) < 0) __PYX_ERR(0, 64, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - - /* "fontTools/feaLib/lexer.py":68 - * return FeatureLibLocation(self.filename_ or "", self.line_, column) - * - * def next_(self): # <<<<<<<<<<<<<< - * self.scan_over_(Lexer.CHAR_WHITESPACE_) - * location = self.location_() - */ - __pyx_t_10 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_6feaLib_5lexer_5Lexer_11next_, 0, __pyx_n_s_Lexer_next_2, NULL, __pyx_n_s_fontTools_feaLib_lexer, __pyx_d, ((PyObject *)__pyx_codeobj__31)); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 68, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - if (__Pyx_SetNameInClass(__pyx_t_6, __pyx_n_s_next_2, __pyx_t_10) < 0) __PYX_ERR(0, 68, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - - /* "fontTools/feaLib/lexer.py":169 - * raise FeatureLibError("Unexpected character: %r" % cur_char, location) - * - * def scan_over_(self, valid): # <<<<<<<<<<<<<< - * p = self.pos_ - * while p < self.text_length_ and self.text_[p] in valid: - */ - __pyx_t_10 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_6feaLib_5lexer_5Lexer_13scan_over_, 0, __pyx_n_s_Lexer_scan_over, NULL, __pyx_n_s_fontTools_feaLib_lexer, __pyx_d, ((PyObject *)__pyx_codeobj__33)); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 169, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - if (__Pyx_SetNameInClass(__pyx_t_6, __pyx_n_s_scan_over, __pyx_t_10) < 0) __PYX_ERR(0, 169, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - - /* "fontTools/feaLib/lexer.py":175 - * self.pos_ = p - * - * def scan_until_(self, stop_at): # <<<<<<<<<<<<<< - * p = self.pos_ - * while p < self.text_length_ and self.text_[p] not in stop_at: - */ - __pyx_t_10 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_6feaLib_5lexer_5Lexer_15scan_until_, 0, __pyx_n_s_Lexer_scan_until, NULL, __pyx_n_s_fontTools_feaLib_lexer, __pyx_d, ((PyObject *)__pyx_codeobj__35)); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 175, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - if (__Pyx_SetNameInClass(__pyx_t_6, __pyx_n_s_scan_until, __pyx_t_10) < 0) __PYX_ERR(0, 175, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - - /* "fontTools/feaLib/lexer.py":181 - * self.pos_ = p - * - * def scan_anonymous_block(self, tag): # <<<<<<<<<<<<<< - * location = self.location_() - * tag = tag.strip() - */ - __pyx_t_10 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_6feaLib_5lexer_5Lexer_17scan_anonymous_block, 0, __pyx_n_s_Lexer_scan_anonymous_block, NULL, __pyx_n_s_fontTools_feaLib_lexer, __pyx_d, ((PyObject *)__pyx_codeobj__37)); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 181, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - if (__Pyx_SetNameInClass(__pyx_t_6, __pyx_n_s_scan_anonymous_block, __pyx_t_10) < 0) __PYX_ERR(0, 181, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - - /* "fontTools/feaLib/lexer.py":13 - * - * - * class Lexer(object): # <<<<<<<<<<<<<< - * NUMBER = "NUMBER" - * HEXADECIMAL = "HEXADECIMAL" - */ - __pyx_t_10 = __Pyx_Py3ClassCreate(__pyx_t_3, __pyx_n_s_Lexer, __pyx_t_2, __pyx_t_6, NULL, 0, 0); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 13, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_Lexer, __pyx_t_10) 
< 0) __PYX_ERR(0, 13, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/feaLib/lexer.py":196 - * - * - * class IncludingLexer(object): # <<<<<<<<<<<<<< - * """A Lexer that follows include statements. - * - */ - __pyx_t_2 = __Pyx_PEP560_update_bases(__pyx_tuple__39); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 196, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_CalculateMetaclass(NULL, __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 196, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_6 = __Pyx_Py3MetaclassPrepare(__pyx_t_3, __pyx_t_2, __pyx_n_s_IncludingLexer, __pyx_n_s_IncludingLexer, (PyObject *) NULL, __pyx_n_s_fontTools_feaLib_lexer, __pyx_kp_s_A_Lexer_that_follows_include_sta); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 196, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - if (__pyx_t_2 != __pyx_tuple__39) { - if (unlikely((PyDict_SetItemString(__pyx_t_6, "__orig_bases__", __pyx_tuple__39) < 0))) __PYX_ERR(0, 196, __pyx_L1_error) - } - - /* "fontTools/feaLib/lexer.py":211 - * """ - * - * def __init__(self, featurefile, *, includeDir=None): # <<<<<<<<<<<<<< - * """Initializes an IncludingLexer. - * - */ - __pyx_t_10 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 211, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - if (PyDict_SetItem(__pyx_t_10, __pyx_n_s_includeDir, Py_None) < 0) __PYX_ERR(0, 211, __pyx_L1_error) - __pyx_t_9 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_6feaLib_5lexer_14IncludingLexer_1__init__, 0, __pyx_n_s_IncludingLexer___init, NULL, __pyx_n_s_fontTools_feaLib_lexer, __pyx_d, ((PyObject *)__pyx_codeobj__41)); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 211, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_CyFunction_SetDefaultsKwDict(__pyx_t_9, __pyx_t_10); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - if (__Pyx_SetNameInClass(__pyx_t_6, __pyx_n_s_init, __pyx_t_9) < 0) __PYX_ERR(0, 211, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - - /* "fontTools/feaLib/lexer.py":225 - * self.includeDir = includeDir - * - * def __iter__(self): # <<<<<<<<<<<<<< - * return self - * - */ - __pyx_t_9 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_6feaLib_5lexer_14IncludingLexer_3__iter__, 0, __pyx_n_s_IncludingLexer___iter, NULL, __pyx_n_s_fontTools_feaLib_lexer, __pyx_d, ((PyObject *)__pyx_codeobj__42)); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 225, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - if (__Pyx_SetNameInClass(__pyx_t_6, __pyx_n_s_iter, __pyx_t_9) < 0) __PYX_ERR(0, 225, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - - /* "fontTools/feaLib/lexer.py":228 - * return self - * - * def next(self): # Python 2 # <<<<<<<<<<<<<< - * return self.__next__() - * - */ - __pyx_t_9 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_6feaLib_5lexer_14IncludingLexer_5next, 0, __pyx_n_s_IncludingLexer_next, NULL, __pyx_n_s_fontTools_feaLib_lexer, __pyx_d, ((PyObject *)__pyx_codeobj__43)); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 228, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - if (__Pyx_SetNameInClass(__pyx_t_6, __pyx_n_s_next_3, __pyx_t_9) < 0) __PYX_ERR(0, 228, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - - /* "fontTools/feaLib/lexer.py":231 - * return self.__next__() - * - * def __next__(self): # Python 3 # <<<<<<<<<<<<<< - * while self.lexers_: - * lexer = self.lexers_[-1] - */ - __pyx_t_9 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_6feaLib_5lexer_14IncludingLexer_7__next__, 0, 
__pyx_n_s_IncludingLexer___next, NULL, __pyx_n_s_fontTools_feaLib_lexer, __pyx_d, ((PyObject *)__pyx_codeobj__45)); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 231, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - if (__Pyx_SetNameInClass(__pyx_t_6, __pyx_n_s_next, __pyx_t_9) < 0) __PYX_ERR(0, 231, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - - /* "fontTools/feaLib/lexer.py":270 - * raise StopIteration() - * - * @staticmethod # <<<<<<<<<<<<<< - * def make_lexer_(file_or_path): - * if hasattr(file_or_path, "read"): - */ - __pyx_t_9 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_6feaLib_5lexer_14IncludingLexer_9make_lexer_, __Pyx_CYFUNCTION_STATICMETHOD, __pyx_n_s_IncludingLexer_make_lexer, NULL, __pyx_n_s_fontTools_feaLib_lexer, __pyx_d, ((PyObject *)__pyx_codeobj__47)); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 270, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_10 = __Pyx_PyObject_CallOneArg(__pyx_builtin_staticmethod, __pyx_t_9); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 270, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - if (__Pyx_SetNameInClass(__pyx_t_6, __pyx_n_s_make_lexer, __pyx_t_10) < 0) __PYX_ERR(0, 270, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - - /* "fontTools/feaLib/lexer.py":283 - * return Lexer(data, filename) - * - * def scan_anonymous_block(self, tag): # <<<<<<<<<<<<<< - * return self.lexers_[-1].scan_anonymous_block(tag) - * - */ - __pyx_t_10 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_6feaLib_5lexer_14IncludingLexer_11scan_anonymous_block, 0, __pyx_n_s_IncludingLexer_scan_anonymous_bl, NULL, __pyx_n_s_fontTools_feaLib_lexer, __pyx_d, ((PyObject *)__pyx_codeobj__49)); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 283, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - if (__Pyx_SetNameInClass(__pyx_t_6, __pyx_n_s_scan_anonymous_block, __pyx_t_10) < 0) __PYX_ERR(0, 283, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - - /* "fontTools/feaLib/lexer.py":196 - * - * - * class IncludingLexer(object): # <<<<<<<<<<<<<< - * """A Lexer that follows include statements. 
- * - */ - __pyx_t_10 = __Pyx_Py3ClassCreate(__pyx_t_3, __pyx_n_s_IncludingLexer, __pyx_t_2, __pyx_t_6, NULL, 0, 0); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 196, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_IncludingLexer, __pyx_t_10) < 0) __PYX_ERR(0, 196, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/feaLib/lexer.py":287 - * - * - * class NonIncludingLexer(IncludingLexer): # <<<<<<<<<<<<<< - * """Lexer that does not follow `include` statements, emits them as-is.""" - * - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_IncludingLexer); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 287, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 287, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_2); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_2)) __PYX_ERR(0, 287, __pyx_L1_error); - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PEP560_update_bases(__pyx_t_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 287, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_6 = __Pyx_CalculateMetaclass(NULL, __pyx_t_2); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 287, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_10 = __Pyx_Py3MetaclassPrepare(__pyx_t_6, __pyx_t_2, __pyx_n_s_NonIncludingLexer, __pyx_n_s_NonIncludingLexer, (PyObject *) NULL, __pyx_n_s_fontTools_feaLib_lexer, __pyx_kp_s_Lexer_that_does_not_follow_inclu); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 287, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - if (__pyx_t_2 != __pyx_t_3) { - if (unlikely((PyDict_SetItemString(__pyx_t_10, "__orig_bases__", __pyx_t_3) < 0))) __PYX_ERR(0, 287, __pyx_L1_error) - } - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/feaLib/lexer.py":290 - * """Lexer that does not follow `include` statements, emits them as-is.""" - * - * def __next__(self): # Python 3 # <<<<<<<<<<<<<< - * return next(self.lexers_[0]) - */ - __pyx_t_3 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_6feaLib_5lexer_17NonIncludingLexer_1__next__, 0, __pyx_n_s_NonIncludingLexer___next, NULL, __pyx_n_s_fontTools_feaLib_lexer, __pyx_d, ((PyObject *)__pyx_codeobj__50)); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 290, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (__Pyx_SetNameInClass(__pyx_t_10, __pyx_n_s_next, __pyx_t_3) < 0) __PYX_ERR(0, 290, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/feaLib/lexer.py":287 - * - * - * class NonIncludingLexer(IncludingLexer): # <<<<<<<<<<<<<< - * """Lexer that does not follow `include` statements, emits them as-is.""" - * - */ - __pyx_t_3 = __Pyx_Py3ClassCreate(__pyx_t_6, __pyx_n_s_NonIncludingLexer, __pyx_t_2, __pyx_t_10, NULL, 0, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 287, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_NonIncludingLexer, __pyx_t_3) < 0) __PYX_ERR(0, 287, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/feaLib/lexer.py":1 - * from fontTools.feaLib.error import FeatureLibError, IncludedFeaNotFound # <<<<<<<<<<<<<< - * from fontTools.feaLib.location import FeatureLibLocation - * import re - */ - __pyx_t_2 = __Pyx_PyDict_NewPresized(0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - 
if (PyDict_SetItem(__pyx_d, __pyx_n_s_test, __pyx_t_2) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /*--- Wrapped vars code ---*/ - - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_XDECREF(__pyx_t_9); - __Pyx_XDECREF(__pyx_t_10); - if (__pyx_m) { - if (__pyx_d && stringtab_initialized) { - __Pyx_AddTraceback("init fontTools.feaLib.lexer", __pyx_clineno, __pyx_lineno, __pyx_filename); - } - #if !CYTHON_USE_MODULE_STATE - Py_CLEAR(__pyx_m); - #else - Py_DECREF(__pyx_m); - if (pystate_addmodule_run) { - PyObject *tp, *value, *tb; - PyErr_Fetch(&tp, &value, &tb); - PyState_RemoveModule(&__pyx_moduledef); - PyErr_Restore(tp, value, tb); - } - #endif - } else if (!PyErr_Occurred()) { - PyErr_SetString(PyExc_ImportError, "init fontTools.feaLib.lexer"); - } - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - #if CYTHON_PEP489_MULTI_PHASE_INIT - return (__pyx_m != NULL) ? 0 : -1; - #elif PY_MAJOR_VERSION >= 3 - return __pyx_m; - #else - return; - #endif -} -/* #### Code section: cleanup_globals ### */ -/* #### Code section: cleanup_module ### */ -/* #### Code section: main_method ### */ -/* #### Code section: utility_code_pragmas ### */ -#ifdef _MSC_VER -#pragma warning( push ) -/* Warning 4127: conditional expression is constant - * Cython uses constant conditional expressions to allow in inline functions to be optimized at - * compile-time, so this warning is not useful - */ -#pragma warning( disable : 4127 ) -#endif - - - -/* #### Code section: utility_code_def ### */ - -/* --- Runtime support code --- */ -/* Refnanny */ -#if CYTHON_REFNANNY -static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname) { - PyObject *m = NULL, *p = NULL; - void *r = NULL; - m = PyImport_ImportModule(modname); - if (!m) goto end; - p = PyObject_GetAttrString(m, "RefNannyAPI"); - if (!p) goto end; - r = PyLong_AsVoidPtr(p); -end: - Py_XDECREF(p); - Py_XDECREF(m); - return (__Pyx_RefNannyAPIStruct *)r; -} -#endif - -/* PyErrExceptionMatches */ -#if CYTHON_FAST_THREAD_STATE -static int __Pyx_PyErr_ExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) { - Py_ssize_t i, n; - n = PyTuple_GET_SIZE(tuple); -#if PY_MAJOR_VERSION >= 3 - for (i=0; i= 0x030C00A6 - PyObject *current_exception = tstate->current_exception; - if (unlikely(!current_exception)) return 0; - exc_type = (PyObject*) Py_TYPE(current_exception); - if (exc_type == err) return 1; -#else - exc_type = tstate->curexc_type; - if (exc_type == err) return 1; - if (unlikely(!exc_type)) return 0; -#endif - #if CYTHON_AVOID_BORROWED_REFS - Py_INCREF(exc_type); - #endif - if (unlikely(PyTuple_Check(err))) { - result = __Pyx_PyErr_ExceptionMatchesTuple(exc_type, err); - } else { - result = __Pyx_PyErr_GivenExceptionMatches(exc_type, err); - } - #if CYTHON_AVOID_BORROWED_REFS - Py_DECREF(exc_type); - #endif - return result; -} -#endif - -/* PyErrFetchRestore */ -#if CYTHON_FAST_THREAD_STATE -static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) { -#if PY_VERSION_HEX >= 0x030C00A6 - PyObject *tmp_value; - assert(type == NULL || (value != NULL && type == (PyObject*) Py_TYPE(value))); - if (value) { - #if CYTHON_COMPILING_IN_CPYTHON - if (unlikely(((PyBaseExceptionObject*) value)->traceback != tb)) - #endif - PyException_SetTraceback(value, tb); - } - tmp_value = tstate->current_exception; - tstate->current_exception = 
value; - Py_XDECREF(tmp_value); -#else - PyObject *tmp_type, *tmp_value, *tmp_tb; - tmp_type = tstate->curexc_type; - tmp_value = tstate->curexc_value; - tmp_tb = tstate->curexc_traceback; - tstate->curexc_type = type; - tstate->curexc_value = value; - tstate->curexc_traceback = tb; - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); -#endif -} -static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { -#if PY_VERSION_HEX >= 0x030C00A6 - PyObject* exc_value; - exc_value = tstate->current_exception; - tstate->current_exception = 0; - *value = exc_value; - *type = NULL; - *tb = NULL; - if (exc_value) { - *type = (PyObject*) Py_TYPE(exc_value); - Py_INCREF(*type); - #if CYTHON_COMPILING_IN_CPYTHON - *tb = ((PyBaseExceptionObject*) exc_value)->traceback; - Py_XINCREF(*tb); - #else - *tb = PyException_GetTraceback(exc_value); - #endif - } -#else - *type = tstate->curexc_type; - *value = tstate->curexc_value; - *tb = tstate->curexc_traceback; - tstate->curexc_type = 0; - tstate->curexc_value = 0; - tstate->curexc_traceback = 0; -#endif -} -#endif - -/* PyObjectGetAttrStr */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name) { - PyTypeObject* tp = Py_TYPE(obj); - if (likely(tp->tp_getattro)) - return tp->tp_getattro(obj, attr_name); -#if PY_MAJOR_VERSION < 3 - if (likely(tp->tp_getattr)) - return tp->tp_getattr(obj, PyString_AS_STRING(attr_name)); -#endif - return PyObject_GetAttr(obj, attr_name); -} -#endif - -/* PyObjectGetAttrStrNoError */ -static void __Pyx_PyObject_GetAttrStr_ClearAttributeError(void) { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - if (likely(__Pyx_PyErr_ExceptionMatches(PyExc_AttributeError))) - __Pyx_PyErr_Clear(); -} -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStrNoError(PyObject* obj, PyObject* attr_name) { - PyObject *result; -#if CYTHON_COMPILING_IN_CPYTHON && CYTHON_USE_TYPE_SLOTS && PY_VERSION_HEX >= 0x030700B1 - PyTypeObject* tp = Py_TYPE(obj); - if (likely(tp->tp_getattro == PyObject_GenericGetAttr)) { - return _PyObject_GenericGetAttrWithDict(obj, attr_name, NULL, 1); - } -#endif - result = __Pyx_PyObject_GetAttrStr(obj, attr_name); - if (unlikely(!result)) { - __Pyx_PyObject_GetAttrStr_ClearAttributeError(); - } - return result; -} - -/* GetBuiltinName */ -static PyObject *__Pyx_GetBuiltinName(PyObject *name) { - PyObject* result = __Pyx_PyObject_GetAttrStrNoError(__pyx_b, name); - if (unlikely(!result) && !PyErr_Occurred()) { - PyErr_Format(PyExc_NameError, -#if PY_MAJOR_VERSION >= 3 - "name '%U' is not defined", name); -#else - "name '%.200s' is not defined", PyString_AS_STRING(name)); -#endif - } - return result; -} - -/* TupleAndListFromArray */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE void __Pyx_copy_object_array(PyObject *const *CYTHON_RESTRICT src, PyObject** CYTHON_RESTRICT dest, Py_ssize_t length) { - PyObject *v; - Py_ssize_t i; - for (i = 0; i < length; i++) { - v = dest[i] = src[i]; - Py_INCREF(v); - } -} -static CYTHON_INLINE PyObject * -__Pyx_PyTuple_FromArray(PyObject *const *src, Py_ssize_t n) -{ - PyObject *res; - if (n <= 0) { - Py_INCREF(__pyx_empty_tuple); - return __pyx_empty_tuple; - } - res = PyTuple_New(n); - if (unlikely(res == NULL)) return NULL; - __Pyx_copy_object_array(src, ((PyTupleObject*)res)->ob_item, n); - return res; -} -static CYTHON_INLINE PyObject * -__Pyx_PyList_FromArray(PyObject *const *src, Py_ssize_t n) -{ - PyObject *res; - if (n 
<= 0) { - return PyList_New(0); - } - res = PyList_New(n); - if (unlikely(res == NULL)) return NULL; - __Pyx_copy_object_array(src, ((PyListObject*)res)->ob_item, n); - return res; -} -#endif - -/* BytesEquals */ -static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals) { -#if CYTHON_COMPILING_IN_PYPY || CYTHON_COMPILING_IN_LIMITED_API - return PyObject_RichCompareBool(s1, s2, equals); -#else - if (s1 == s2) { - return (equals == Py_EQ); - } else if (PyBytes_CheckExact(s1) & PyBytes_CheckExact(s2)) { - const char *ps1, *ps2; - Py_ssize_t length = PyBytes_GET_SIZE(s1); - if (length != PyBytes_GET_SIZE(s2)) - return (equals == Py_NE); - ps1 = PyBytes_AS_STRING(s1); - ps2 = PyBytes_AS_STRING(s2); - if (ps1[0] != ps2[0]) { - return (equals == Py_NE); - } else if (length == 1) { - return (equals == Py_EQ); - } else { - int result; -#if CYTHON_USE_UNICODE_INTERNALS && (PY_VERSION_HEX < 0x030B0000) - Py_hash_t hash1, hash2; - hash1 = ((PyBytesObject*)s1)->ob_shash; - hash2 = ((PyBytesObject*)s2)->ob_shash; - if (hash1 != hash2 && hash1 != -1 && hash2 != -1) { - return (equals == Py_NE); - } -#endif - result = memcmp(ps1, ps2, (size_t)length); - return (equals == Py_EQ) ? (result == 0) : (result != 0); - } - } else if ((s1 == Py_None) & PyBytes_CheckExact(s2)) { - return (equals == Py_NE); - } else if ((s2 == Py_None) & PyBytes_CheckExact(s1)) { - return (equals == Py_NE); - } else { - int result; - PyObject* py_result = PyObject_RichCompare(s1, s2, equals); - if (!py_result) - return -1; - result = __Pyx_PyObject_IsTrue(py_result); - Py_DECREF(py_result); - return result; - } -#endif -} - -/* UnicodeEquals */ -static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals) { -#if CYTHON_COMPILING_IN_PYPY || CYTHON_COMPILING_IN_LIMITED_API - return PyObject_RichCompareBool(s1, s2, equals); -#else -#if PY_MAJOR_VERSION < 3 - PyObject* owned_ref = NULL; -#endif - int s1_is_unicode, s2_is_unicode; - if (s1 == s2) { - goto return_eq; - } - s1_is_unicode = PyUnicode_CheckExact(s1); - s2_is_unicode = PyUnicode_CheckExact(s2); -#if PY_MAJOR_VERSION < 3 - if ((s1_is_unicode & (!s2_is_unicode)) && PyString_CheckExact(s2)) { - owned_ref = PyUnicode_FromObject(s2); - if (unlikely(!owned_ref)) - return -1; - s2 = owned_ref; - s2_is_unicode = 1; - } else if ((s2_is_unicode & (!s1_is_unicode)) && PyString_CheckExact(s1)) { - owned_ref = PyUnicode_FromObject(s1); - if (unlikely(!owned_ref)) - return -1; - s1 = owned_ref; - s1_is_unicode = 1; - } else if (((!s2_is_unicode) & (!s1_is_unicode))) { - return __Pyx_PyBytes_Equals(s1, s2, equals); - } -#endif - if (s1_is_unicode & s2_is_unicode) { - Py_ssize_t length; - int kind; - void *data1, *data2; - if (unlikely(__Pyx_PyUnicode_READY(s1) < 0) || unlikely(__Pyx_PyUnicode_READY(s2) < 0)) - return -1; - length = __Pyx_PyUnicode_GET_LENGTH(s1); - if (length != __Pyx_PyUnicode_GET_LENGTH(s2)) { - goto return_ne; - } -#if CYTHON_USE_UNICODE_INTERNALS - { - Py_hash_t hash1, hash2; - #if CYTHON_PEP393_ENABLED - hash1 = ((PyASCIIObject*)s1)->hash; - hash2 = ((PyASCIIObject*)s2)->hash; - #else - hash1 = ((PyUnicodeObject*)s1)->hash; - hash2 = ((PyUnicodeObject*)s2)->hash; - #endif - if (hash1 != hash2 && hash1 != -1 && hash2 != -1) { - goto return_ne; - } - } -#endif - kind = __Pyx_PyUnicode_KIND(s1); - if (kind != __Pyx_PyUnicode_KIND(s2)) { - goto return_ne; - } - data1 = __Pyx_PyUnicode_DATA(s1); - data2 = __Pyx_PyUnicode_DATA(s2); - if (__Pyx_PyUnicode_READ(kind, data1, 0) != __Pyx_PyUnicode_READ(kind, data2, 0)) { - 
goto return_ne; - } else if (length == 1) { - goto return_eq; - } else { - int result = memcmp(data1, data2, (size_t)(length * kind)); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_EQ) ? (result == 0) : (result != 0); - } - } else if ((s1 == Py_None) & s2_is_unicode) { - goto return_ne; - } else if ((s2 == Py_None) & s1_is_unicode) { - goto return_ne; - } else { - int result; - PyObject* py_result = PyObject_RichCompare(s1, s2, equals); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - if (!py_result) - return -1; - result = __Pyx_PyObject_IsTrue(py_result); - Py_DECREF(py_result); - return result; - } -return_eq: - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_EQ); -return_ne: - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_NE); -#endif -} - -/* fastcall */ -#if CYTHON_METH_FASTCALL -static CYTHON_INLINE PyObject * __Pyx_GetKwValue_FASTCALL(PyObject *kwnames, PyObject *const *kwvalues, PyObject *s) -{ - Py_ssize_t i, n = PyTuple_GET_SIZE(kwnames); - for (i = 0; i < n; i++) - { - if (s == PyTuple_GET_ITEM(kwnames, i)) return kwvalues[i]; - } - for (i = 0; i < n; i++) - { - int eq = __Pyx_PyUnicode_Equals(s, PyTuple_GET_ITEM(kwnames, i), Py_EQ); - if (unlikely(eq != 0)) { - if (unlikely(eq < 0)) return NULL; // error - return kwvalues[i]; - } - } - return NULL; // not found (no exception set) -} -#endif - -/* RaiseArgTupleInvalid */ -static void __Pyx_RaiseArgtupleInvalid( - const char* func_name, - int exact, - Py_ssize_t num_min, - Py_ssize_t num_max, - Py_ssize_t num_found) -{ - Py_ssize_t num_expected; - const char *more_or_less; - if (num_found < num_min) { - num_expected = num_min; - more_or_less = "at least"; - } else { - num_expected = num_max; - more_or_less = "at most"; - } - if (exact) { - more_or_less = "exactly"; - } - PyErr_Format(PyExc_TypeError, - "%.200s() takes %.8s %" CYTHON_FORMAT_SSIZE_T "d positional argument%.1s (%" CYTHON_FORMAT_SSIZE_T "d given)", - func_name, more_or_less, num_expected, - (num_expected == 1) ? 
"" : "s", num_found); -} - -/* RaiseDoubleKeywords */ -static void __Pyx_RaiseDoubleKeywordsError( - const char* func_name, - PyObject* kw_name) -{ - PyErr_Format(PyExc_TypeError, - #if PY_MAJOR_VERSION >= 3 - "%s() got multiple values for keyword argument '%U'", func_name, kw_name); - #else - "%s() got multiple values for keyword argument '%s'", func_name, - PyString_AsString(kw_name)); - #endif -} - -/* ParseKeywords */ -static int __Pyx_ParseOptionalKeywords( - PyObject *kwds, - PyObject *const *kwvalues, - PyObject **argnames[], - PyObject *kwds2, - PyObject *values[], - Py_ssize_t num_pos_args, - const char* function_name) -{ - PyObject *key = 0, *value = 0; - Py_ssize_t pos = 0; - PyObject*** name; - PyObject*** first_kw_arg = argnames + num_pos_args; - int kwds_is_tuple = CYTHON_METH_FASTCALL && likely(PyTuple_Check(kwds)); - while (1) { - Py_XDECREF(key); key = NULL; - Py_XDECREF(value); value = NULL; - if (kwds_is_tuple) { - Py_ssize_t size; -#if CYTHON_ASSUME_SAFE_MACROS - size = PyTuple_GET_SIZE(kwds); -#else - size = PyTuple_Size(kwds); - if (size < 0) goto bad; -#endif - if (pos >= size) break; -#if CYTHON_AVOID_BORROWED_REFS - key = __Pyx_PySequence_ITEM(kwds, pos); - if (!key) goto bad; -#elif CYTHON_ASSUME_SAFE_MACROS - key = PyTuple_GET_ITEM(kwds, pos); -#else - key = PyTuple_GetItem(kwds, pos); - if (!key) goto bad; -#endif - value = kwvalues[pos]; - pos++; - } - else - { - if (!PyDict_Next(kwds, &pos, &key, &value)) break; -#if CYTHON_AVOID_BORROWED_REFS - Py_INCREF(key); -#endif - } - name = first_kw_arg; - while (*name && (**name != key)) name++; - if (*name) { - values[name-argnames] = value; -#if CYTHON_AVOID_BORROWED_REFS - Py_INCREF(value); // transfer ownership of value to values - Py_DECREF(key); -#endif - key = NULL; - value = NULL; - continue; - } -#if !CYTHON_AVOID_BORROWED_REFS - Py_INCREF(key); -#endif - Py_INCREF(value); - name = first_kw_arg; - #if PY_MAJOR_VERSION < 3 - if (likely(PyString_Check(key))) { - while (*name) { - if ((CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**name) == PyString_GET_SIZE(key)) - && _PyString_Eq(**name, key)) { - values[name-argnames] = value; -#if CYTHON_AVOID_BORROWED_REFS - value = NULL; // ownership transferred to values -#endif - break; - } - name++; - } - if (*name) continue; - else { - PyObject*** argname = argnames; - while (argname != first_kw_arg) { - if ((**argname == key) || ( - (CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**argname) == PyString_GET_SIZE(key)) - && _PyString_Eq(**argname, key))) { - goto arg_passed_twice; - } - argname++; - } - } - } else - #endif - if (likely(PyUnicode_Check(key))) { - while (*name) { - int cmp = ( - #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3 - (__Pyx_PyUnicode_GET_LENGTH(**name) != __Pyx_PyUnicode_GET_LENGTH(key)) ? 1 : - #endif - PyUnicode_Compare(**name, key) - ); - if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad; - if (cmp == 0) { - values[name-argnames] = value; -#if CYTHON_AVOID_BORROWED_REFS - value = NULL; // ownership transferred to values -#endif - break; - } - name++; - } - if (*name) continue; - else { - PyObject*** argname = argnames; - while (argname != first_kw_arg) { - int cmp = (**argname == key) ? 0 : - #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3 - (__Pyx_PyUnicode_GET_LENGTH(**argname) != __Pyx_PyUnicode_GET_LENGTH(key)) ? 
1 : - #endif - PyUnicode_Compare(**argname, key); - if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad; - if (cmp == 0) goto arg_passed_twice; - argname++; - } - } - } else - goto invalid_keyword_type; - if (kwds2) { - if (unlikely(PyDict_SetItem(kwds2, key, value))) goto bad; - } else { - goto invalid_keyword; - } - } - Py_XDECREF(key); - Py_XDECREF(value); - return 0; -arg_passed_twice: - __Pyx_RaiseDoubleKeywordsError(function_name, key); - goto bad; -invalid_keyword_type: - PyErr_Format(PyExc_TypeError, - "%.200s() keywords must be strings", function_name); - goto bad; -invalid_keyword: - #if PY_MAJOR_VERSION < 3 - PyErr_Format(PyExc_TypeError, - "%.200s() got an unexpected keyword argument '%.200s'", - function_name, PyString_AsString(key)); - #else - PyErr_Format(PyExc_TypeError, - "%s() got an unexpected keyword argument '%U'", - function_name, key); - #endif -bad: - Py_XDECREF(key); - Py_XDECREF(value); - return -1; -} - -/* PyObjectSetAttrStr */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE int __Pyx_PyObject_SetAttrStr(PyObject* obj, PyObject* attr_name, PyObject* value) { - PyTypeObject* tp = Py_TYPE(obj); - if (likely(tp->tp_setattro)) - return tp->tp_setattro(obj, attr_name, value); -#if PY_MAJOR_VERSION < 3 - if (likely(tp->tp_setattr)) - return tp->tp_setattr(obj, PyString_AS_STRING(attr_name), value); -#endif - return PyObject_SetAttr(obj, attr_name, value); -} -#endif - -/* PyDictVersioning */ -#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj) { - PyObject *dict = Py_TYPE(obj)->tp_dict; - return likely(dict) ? __PYX_GET_DICT_VERSION(dict) : 0; -} -static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj) { - PyObject **dictptr = NULL; - Py_ssize_t offset = Py_TYPE(obj)->tp_dictoffset; - if (offset) { -#if CYTHON_COMPILING_IN_CPYTHON - dictptr = (likely(offset > 0)) ? (PyObject **) ((char *)obj + offset) : _PyObject_GetDictPtr(obj); -#else - dictptr = _PyObject_GetDictPtr(obj); -#endif - } - return (dictptr && *dictptr) ? 
__PYX_GET_DICT_VERSION(*dictptr) : 0; -} -static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version) { - PyObject *dict = Py_TYPE(obj)->tp_dict; - if (unlikely(!dict) || unlikely(tp_dict_version != __PYX_GET_DICT_VERSION(dict))) - return 0; - return obj_dict_version == __Pyx_get_object_dict_version(obj); -} -#endif - -/* GetModuleGlobalName */ -#if CYTHON_USE_DICT_VERSIONS -static PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value) -#else -static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name) -#endif -{ - PyObject *result; -#if !CYTHON_AVOID_BORROWED_REFS -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1 - result = _PyDict_GetItem_KnownHash(__pyx_d, name, ((PyASCIIObject *) name)->hash); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } else if (unlikely(PyErr_Occurred())) { - return NULL; - } -#elif CYTHON_COMPILING_IN_LIMITED_API - if (unlikely(!__pyx_m)) { - return NULL; - } - result = PyObject_GetAttr(__pyx_m, name); - if (likely(result)) { - return result; - } -#else - result = PyDict_GetItem(__pyx_d, name); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } -#endif -#else - result = PyObject_GetItem(__pyx_d, name); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } - PyErr_Clear(); -#endif - return __Pyx_GetBuiltinName(name); -} - -/* PyFunctionFastCall */ -#if CYTHON_FAST_PYCALL && !CYTHON_VECTORCALL -static PyObject* __Pyx_PyFunction_FastCallNoKw(PyCodeObject *co, PyObject **args, Py_ssize_t na, - PyObject *globals) { - PyFrameObject *f; - PyThreadState *tstate = __Pyx_PyThreadState_Current; - PyObject **fastlocals; - Py_ssize_t i; - PyObject *result; - assert(globals != NULL); - /* XXX Perhaps we should create a specialized - PyFrame_New() that doesn't take locals, but does - take builtins without sanity checking them. - */ - assert(tstate != NULL); - f = PyFrame_New(tstate, co, globals, NULL); - if (f == NULL) { - return NULL; - } - fastlocals = __Pyx_PyFrame_GetLocalsplus(f); - for (i = 0; i < na; i++) { - Py_INCREF(*args); - fastlocals[i] = *args++; - } - result = PyEval_EvalFrameEx(f,0); - ++tstate->recursion_depth; - Py_DECREF(f); - --tstate->recursion_depth; - return result; -} -static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, Py_ssize_t nargs, PyObject *kwargs) { - PyCodeObject *co = (PyCodeObject *)PyFunction_GET_CODE(func); - PyObject *globals = PyFunction_GET_GLOBALS(func); - PyObject *argdefs = PyFunction_GET_DEFAULTS(func); - PyObject *closure; -#if PY_MAJOR_VERSION >= 3 - PyObject *kwdefs; -#endif - PyObject *kwtuple, **k; - PyObject **d; - Py_ssize_t nd; - Py_ssize_t nk; - PyObject *result; - assert(kwargs == NULL || PyDict_Check(kwargs)); - nk = kwargs ? 
PyDict_Size(kwargs) : 0; - if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) { - return NULL; - } - if ( -#if PY_MAJOR_VERSION >= 3 - co->co_kwonlyargcount == 0 && -#endif - likely(kwargs == NULL || nk == 0) && - co->co_flags == (CO_OPTIMIZED | CO_NEWLOCALS | CO_NOFREE)) { - if (argdefs == NULL && co->co_argcount == nargs) { - result = __Pyx_PyFunction_FastCallNoKw(co, args, nargs, globals); - goto done; - } - else if (nargs == 0 && argdefs != NULL - && co->co_argcount == Py_SIZE(argdefs)) { - /* function called with no arguments, but all parameters have - a default value: use default values as arguments .*/ - args = &PyTuple_GET_ITEM(argdefs, 0); - result =__Pyx_PyFunction_FastCallNoKw(co, args, Py_SIZE(argdefs), globals); - goto done; - } - } - if (kwargs != NULL) { - Py_ssize_t pos, i; - kwtuple = PyTuple_New(2 * nk); - if (kwtuple == NULL) { - result = NULL; - goto done; - } - k = &PyTuple_GET_ITEM(kwtuple, 0); - pos = i = 0; - while (PyDict_Next(kwargs, &pos, &k[i], &k[i+1])) { - Py_INCREF(k[i]); - Py_INCREF(k[i+1]); - i += 2; - } - nk = i / 2; - } - else { - kwtuple = NULL; - k = NULL; - } - closure = PyFunction_GET_CLOSURE(func); -#if PY_MAJOR_VERSION >= 3 - kwdefs = PyFunction_GET_KW_DEFAULTS(func); -#endif - if (argdefs != NULL) { - d = &PyTuple_GET_ITEM(argdefs, 0); - nd = Py_SIZE(argdefs); - } - else { - d = NULL; - nd = 0; - } -#if PY_MAJOR_VERSION >= 3 - result = PyEval_EvalCodeEx((PyObject*)co, globals, (PyObject *)NULL, - args, (int)nargs, - k, (int)nk, - d, (int)nd, kwdefs, closure); -#else - result = PyEval_EvalCodeEx(co, globals, (PyObject *)NULL, - args, (int)nargs, - k, (int)nk, - d, (int)nd, closure); -#endif - Py_XDECREF(kwtuple); -done: - Py_LeaveRecursiveCall(); - return result; -} -#endif - -/* PyObjectCall */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw) { - PyObject *result; - ternaryfunc call = Py_TYPE(func)->tp_call; - if (unlikely(!call)) - return PyObject_Call(func, arg, kw); - if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) - return NULL; - result = (*call)(func, arg, kw); - Py_LeaveRecursiveCall(); - if (unlikely(!result) && unlikely(!PyErr_Occurred())) { - PyErr_SetString( - PyExc_SystemError, - "NULL result without error in PyObject_Call"); - } - return result; -} -#endif - -/* PyObjectCallMethO */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg) { - PyObject *self, *result; - PyCFunction cfunc; - cfunc = PyCFunction_GET_FUNCTION(func); - self = PyCFunction_GET_SELF(func); - if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) - return NULL; - result = cfunc(self, arg); - Py_LeaveRecursiveCall(); - if (unlikely(!result) && unlikely(!PyErr_Occurred())) { - PyErr_SetString( - PyExc_SystemError, - "NULL result without error in PyObject_Call"); - } - return result; -} -#endif - -/* PyObjectFastCall */ -static PyObject* __Pyx_PyObject_FastCall_fallback(PyObject *func, PyObject **args, size_t nargs, PyObject *kwargs) { - PyObject *argstuple; - PyObject *result = 0; - size_t i; - argstuple = PyTuple_New((Py_ssize_t)nargs); - if (unlikely(!argstuple)) return NULL; - for (i = 0; i < nargs; i++) { - Py_INCREF(args[i]); - if (__Pyx_PyTuple_SET_ITEM(argstuple, (Py_ssize_t)i, args[i]) < 0) goto bad; - } - result = __Pyx_PyObject_Call(func, argstuple, kwargs); - bad: - Py_DECREF(argstuple); - return result; -} -static CYTHON_INLINE 
PyObject* __Pyx_PyObject_FastCallDict(PyObject *func, PyObject **args, size_t _nargs, PyObject *kwargs) { - Py_ssize_t nargs = __Pyx_PyVectorcall_NARGS(_nargs); -#if CYTHON_COMPILING_IN_CPYTHON - if (nargs == 0 && kwargs == NULL) { -#if defined(__Pyx_CyFunction_USED) && defined(NDEBUG) - if (__Pyx_IsCyOrPyCFunction(func)) -#else - if (PyCFunction_Check(func)) -#endif - { - if (likely(PyCFunction_GET_FLAGS(func) & METH_NOARGS)) { - return __Pyx_PyObject_CallMethO(func, NULL); - } - } - } - else if (nargs == 1 && kwargs == NULL) { - if (PyCFunction_Check(func)) - { - if (likely(PyCFunction_GET_FLAGS(func) & METH_O)) { - return __Pyx_PyObject_CallMethO(func, args[0]); - } - } - } -#endif - #if PY_VERSION_HEX < 0x030800B1 - #if CYTHON_FAST_PYCCALL - if (PyCFunction_Check(func)) { - if (kwargs) { - return _PyCFunction_FastCallDict(func, args, nargs, kwargs); - } else { - return _PyCFunction_FastCallKeywords(func, args, nargs, NULL); - } - } - #if PY_VERSION_HEX >= 0x030700A1 - if (!kwargs && __Pyx_IS_TYPE(func, &PyMethodDescr_Type)) { - return _PyMethodDescr_FastCallKeywords(func, args, nargs, NULL); - } - #endif - #endif - #if CYTHON_FAST_PYCALL - if (PyFunction_Check(func)) { - return __Pyx_PyFunction_FastCallDict(func, args, nargs, kwargs); - } - #endif - #endif - #if CYTHON_VECTORCALL - #if Py_VERSION_HEX < 0x03090000 - vectorcallfunc f = _PyVectorcall_Function(func); - #else - vectorcallfunc f = PyVectorcall_Function(func); - #endif - if (f) { - return f(func, args, (size_t)nargs, kwargs); - } - #elif defined(__Pyx_CyFunction_USED) && CYTHON_BACKPORT_VECTORCALL - if (__Pyx_CyFunction_CheckExact(func)) { - __pyx_vectorcallfunc f = __Pyx_CyFunction_func_vectorcall(func); - if (f) return f(func, args, (size_t)nargs, kwargs); - } - #endif - if (nargs == 0) { - return __Pyx_PyObject_Call(func, __pyx_empty_tuple, kwargs); - } - return __Pyx_PyObject_FastCall_fallback(func, args, (size_t)nargs, kwargs); -} - -/* RaiseTooManyValuesToUnpack */ -static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected) { - PyErr_Format(PyExc_ValueError, - "too many values to unpack (expected %" CYTHON_FORMAT_SSIZE_T "d)", expected); -} - -/* RaiseNeedMoreValuesToUnpack */ -static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index) { - PyErr_Format(PyExc_ValueError, - "need more than %" CYTHON_FORMAT_SSIZE_T "d value%.1s to unpack", - index, (index == 1) ? 
"" : "s"); -} - -/* IterFinish */ -static CYTHON_INLINE int __Pyx_IterFinish(void) { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - PyObject* exc_type = __Pyx_PyErr_CurrentExceptionType(); - if (unlikely(exc_type)) { - if (unlikely(!__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) - return -1; - __Pyx_PyErr_Clear(); - return 0; - } - return 0; -} - -/* UnpackItemEndCheck */ -static int __Pyx_IternextUnpackEndCheck(PyObject *retval, Py_ssize_t expected) { - if (unlikely(retval)) { - Py_DECREF(retval); - __Pyx_RaiseTooManyValuesError(expected); - return -1; - } - return __Pyx_IterFinish(); -} - -/* PyIntBinop */ -#if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_PyInt_AddObjC(PyObject *op1, PyObject *op2, long intval, int inplace, int zerodivision_check) { - CYTHON_MAYBE_UNUSED_VAR(intval); - CYTHON_MAYBE_UNUSED_VAR(inplace); - CYTHON_UNUSED_VAR(zerodivision_check); - #if PY_MAJOR_VERSION < 3 - if (likely(PyInt_CheckExact(op1))) { - const long b = intval; - long x; - long a = PyInt_AS_LONG(op1); - - x = (long)((unsigned long)a + (unsigned long)b); - if (likely((x^a) >= 0 || (x^b) >= 0)) - return PyInt_FromLong(x); - return PyLong_Type.tp_as_number->nb_add(op1, op2); - } - #endif - #if CYTHON_USE_PYLONG_INTERNALS - if (likely(PyLong_CheckExact(op1))) { - const long b = intval; - long a, x; -#ifdef HAVE_LONG_LONG - const PY_LONG_LONG llb = intval; - PY_LONG_LONG lla, llx; -#endif - if (unlikely(__Pyx_PyLong_IsZero(op1))) { - return __Pyx_NewRef(op2); - } - if (likely(__Pyx_PyLong_IsCompact(op1))) { - a = __Pyx_PyLong_CompactValue(op1); - } else { - const digit* digits = __Pyx_PyLong_Digits(op1); - const Py_ssize_t size = __Pyx_PyLong_SignedDigitCount(op1); - switch (size) { - case -2: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - a = -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT) { - lla = -(PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case 2: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - a = (long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT) { - lla = (PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case -3: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - a = -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT) { - lla = -(PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case 3: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - a = (long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT) { - lla = (PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - 
goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case -4: - if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - a = -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT) { - lla = -(PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case 4: - if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - a = (long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT) { - lla = (PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - default: return PyLong_Type.tp_as_number->nb_add(op1, op2); - } - } - x = a + b; - return PyLong_FromLong(x); -#ifdef HAVE_LONG_LONG - long_long: - llx = lla + llb; - return PyLong_FromLongLong(llx); -#endif - - - } - #endif - if (PyFloat_CheckExact(op1)) { - const long b = intval; -#if CYTHON_COMPILING_IN_LIMITED_API - double a = __pyx_PyFloat_AsDouble(op1); -#else - double a = PyFloat_AS_DOUBLE(op1); -#endif - double result; - - PyFPE_START_PROTECT("add", return NULL) - result = ((double)a) + (double)b; - PyFPE_END_PROTECT(result) - return PyFloat_FromDouble(result); - } - return (inplace ? 
PyNumber_InPlaceAdd : PyNumber_Add)(op1, op2); -} -#endif - -/* PyObjectCallNoArg */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallNoArg(PyObject *func) { - PyObject *arg = NULL; - return __Pyx_PyObject_FastCall(func, (&arg)+1, 0 | __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET); -} - -/* RaiseException */ -#if PY_MAJOR_VERSION < 3 -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause) { - __Pyx_PyThreadState_declare - CYTHON_UNUSED_VAR(cause); - Py_XINCREF(type); - if (!value || value == Py_None) - value = NULL; - else - Py_INCREF(value); - if (!tb || tb == Py_None) - tb = NULL; - else { - Py_INCREF(tb); - if (!PyTraceBack_Check(tb)) { - PyErr_SetString(PyExc_TypeError, - "raise: arg 3 must be a traceback or None"); - goto raise_error; - } - } - if (PyType_Check(type)) { -#if CYTHON_COMPILING_IN_PYPY - if (!value) { - Py_INCREF(Py_None); - value = Py_None; - } -#endif - PyErr_NormalizeException(&type, &value, &tb); - } else { - if (value) { - PyErr_SetString(PyExc_TypeError, - "instance exception may not have a separate value"); - goto raise_error; - } - value = type; - type = (PyObject*) Py_TYPE(type); - Py_INCREF(type); - if (!PyType_IsSubtype((PyTypeObject *)type, (PyTypeObject *)PyExc_BaseException)) { - PyErr_SetString(PyExc_TypeError, - "raise: exception class must be a subclass of BaseException"); - goto raise_error; - } - } - __Pyx_PyThreadState_assign - __Pyx_ErrRestore(type, value, tb); - return; -raise_error: - Py_XDECREF(value); - Py_XDECREF(type); - Py_XDECREF(tb); - return; -} -#else -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause) { - PyObject* owned_instance = NULL; - if (tb == Py_None) { - tb = 0; - } else if (tb && !PyTraceBack_Check(tb)) { - PyErr_SetString(PyExc_TypeError, - "raise: arg 3 must be a traceback or None"); - goto bad; - } - if (value == Py_None) - value = 0; - if (PyExceptionInstance_Check(type)) { - if (value) { - PyErr_SetString(PyExc_TypeError, - "instance exception may not have a separate value"); - goto bad; - } - value = type; - type = (PyObject*) Py_TYPE(value); - } else if (PyExceptionClass_Check(type)) { - PyObject *instance_class = NULL; - if (value && PyExceptionInstance_Check(value)) { - instance_class = (PyObject*) Py_TYPE(value); - if (instance_class != type) { - int is_subclass = PyObject_IsSubclass(instance_class, type); - if (!is_subclass) { - instance_class = NULL; - } else if (unlikely(is_subclass == -1)) { - goto bad; - } else { - type = instance_class; - } - } - } - if (!instance_class) { - PyObject *args; - if (!value) - args = PyTuple_New(0); - else if (PyTuple_Check(value)) { - Py_INCREF(value); - args = value; - } else - args = PyTuple_Pack(1, value); - if (!args) - goto bad; - owned_instance = PyObject_Call(type, args, NULL); - Py_DECREF(args); - if (!owned_instance) - goto bad; - value = owned_instance; - if (!PyExceptionInstance_Check(value)) { - PyErr_Format(PyExc_TypeError, - "calling %R should have returned an instance of " - "BaseException, not %R", - type, Py_TYPE(value)); - goto bad; - } - } - } else { - PyErr_SetString(PyExc_TypeError, - "raise: exception class must be a subclass of BaseException"); - goto bad; - } - if (cause) { - PyObject *fixed_cause; - if (cause == Py_None) { - fixed_cause = NULL; - } else if (PyExceptionClass_Check(cause)) { - fixed_cause = PyObject_CallObject(cause, NULL); - if (fixed_cause == NULL) - goto bad; - } else if (PyExceptionInstance_Check(cause)) { - fixed_cause = cause; - Py_INCREF(fixed_cause); - } else { - 
PyErr_SetString(PyExc_TypeError, - "exception causes must derive from " - "BaseException"); - goto bad; - } - PyException_SetCause(value, fixed_cause); - } - PyErr_SetObject(type, value); - if (tb) { - #if PY_VERSION_HEX >= 0x030C00A6 - PyException_SetTraceback(value, tb); - #elif CYTHON_FAST_THREAD_STATE - PyThreadState *tstate = __Pyx_PyThreadState_Current; - PyObject* tmp_tb = tstate->curexc_traceback; - if (tb != tmp_tb) { - Py_INCREF(tb); - tstate->curexc_traceback = tb; - Py_XDECREF(tmp_tb); - } -#else - PyObject *tmp_type, *tmp_value, *tmp_tb; - PyErr_Fetch(&tmp_type, &tmp_value, &tmp_tb); - Py_INCREF(tb); - PyErr_Restore(tmp_type, tmp_value, tb); - Py_XDECREF(tmp_tb); -#endif - } -bad: - Py_XDECREF(owned_instance); - return; -} -#endif - -/* GetItemInt */ -static PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j) { - PyObject *r; - if (unlikely(!j)) return NULL; - r = PyObject_GetItem(o, j); - Py_DECREF(j); - return r; -} -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i, - CYTHON_NCP_UNUSED int wraparound, - CYTHON_NCP_UNUSED int boundscheck) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - Py_ssize_t wrapped_i = i; - if (wraparound & unlikely(i < 0)) { - wrapped_i += PyList_GET_SIZE(o); - } - if ((!boundscheck) || likely(__Pyx_is_valid_index(wrapped_i, PyList_GET_SIZE(o)))) { - PyObject *r = PyList_GET_ITEM(o, wrapped_i); - Py_INCREF(r); - return r; - } - return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); -#else - return PySequence_GetItem(o, i); -#endif -} -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i, - CYTHON_NCP_UNUSED int wraparound, - CYTHON_NCP_UNUSED int boundscheck) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - Py_ssize_t wrapped_i = i; - if (wraparound & unlikely(i < 0)) { - wrapped_i += PyTuple_GET_SIZE(o); - } - if ((!boundscheck) || likely(__Pyx_is_valid_index(wrapped_i, PyTuple_GET_SIZE(o)))) { - PyObject *r = PyTuple_GET_ITEM(o, wrapped_i); - Py_INCREF(r); - return r; - } - return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); -#else - return PySequence_GetItem(o, i); -#endif -} -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i, int is_list, - CYTHON_NCP_UNUSED int wraparound, - CYTHON_NCP_UNUSED int boundscheck) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS && CYTHON_USE_TYPE_SLOTS - if (is_list || PyList_CheckExact(o)) { - Py_ssize_t n = ((!wraparound) | likely(i >= 0)) ? i : i + PyList_GET_SIZE(o); - if ((!boundscheck) || (likely(__Pyx_is_valid_index(n, PyList_GET_SIZE(o))))) { - PyObject *r = PyList_GET_ITEM(o, n); - Py_INCREF(r); - return r; - } - } - else if (PyTuple_CheckExact(o)) { - Py_ssize_t n = ((!wraparound) | likely(i >= 0)) ? 
i : i + PyTuple_GET_SIZE(o); - if ((!boundscheck) || likely(__Pyx_is_valid_index(n, PyTuple_GET_SIZE(o)))) { - PyObject *r = PyTuple_GET_ITEM(o, n); - Py_INCREF(r); - return r; - } - } else { - PyMappingMethods *mm = Py_TYPE(o)->tp_as_mapping; - PySequenceMethods *sm = Py_TYPE(o)->tp_as_sequence; - if (mm && mm->mp_subscript) { - PyObject *r, *key = PyInt_FromSsize_t(i); - if (unlikely(!key)) return NULL; - r = mm->mp_subscript(o, key); - Py_DECREF(key); - return r; - } - if (likely(sm && sm->sq_item)) { - if (wraparound && unlikely(i < 0) && likely(sm->sq_length)) { - Py_ssize_t l = sm->sq_length(o); - if (likely(l >= 0)) { - i += l; - } else { - if (!PyErr_ExceptionMatches(PyExc_OverflowError)) - return NULL; - PyErr_Clear(); - } - } - return sm->sq_item(o, i); - } - } -#else - if (is_list || PySequence_Check(o)) { - return PySequence_GetItem(o, i); - } -#endif - return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); -} - -/* PyObjectCallOneArg */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg) { - PyObject *args[2] = {NULL, arg}; - return __Pyx_PyObject_FastCall(func, args+1, 1 | __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET); -} - -/* ObjectGetItem */ -#if CYTHON_USE_TYPE_SLOTS -static PyObject *__Pyx_PyObject_GetIndex(PyObject *obj, PyObject *index) { - PyObject *runerr = NULL; - Py_ssize_t key_value; - key_value = __Pyx_PyIndex_AsSsize_t(index); - if (likely(key_value != -1 || !(runerr = PyErr_Occurred()))) { - return __Pyx_GetItemInt_Fast(obj, key_value, 0, 1, 1); - } - if (PyErr_GivenExceptionMatches(runerr, PyExc_OverflowError)) { - __Pyx_TypeName index_type_name = __Pyx_PyType_GetName(Py_TYPE(index)); - PyErr_Clear(); - PyErr_Format(PyExc_IndexError, - "cannot fit '" __Pyx_FMT_TYPENAME "' into an index-sized integer", index_type_name); - __Pyx_DECREF_TypeName(index_type_name); - } - return NULL; -} -static PyObject *__Pyx_PyObject_GetItem_Slow(PyObject *obj, PyObject *key) { - __Pyx_TypeName obj_type_name; - if (likely(PyType_Check(obj))) { - PyObject *meth = __Pyx_PyObject_GetAttrStrNoError(obj, __pyx_n_s_class_getitem); - if (meth) { - PyObject *result = __Pyx_PyObject_CallOneArg(meth, key); - Py_DECREF(meth); - return result; - } - } - obj_type_name = __Pyx_PyType_GetName(Py_TYPE(obj)); - PyErr_Format(PyExc_TypeError, - "'" __Pyx_FMT_TYPENAME "' object is not subscriptable", obj_type_name); - __Pyx_DECREF_TypeName(obj_type_name); - return NULL; -} -static PyObject *__Pyx_PyObject_GetItem(PyObject *obj, PyObject *key) { - PyTypeObject *tp = Py_TYPE(obj); - PyMappingMethods *mm = tp->tp_as_mapping; - PySequenceMethods *sm = tp->tp_as_sequence; - if (likely(mm && mm->mp_subscript)) { - return mm->mp_subscript(obj, key); - } - if (likely(sm && sm->sq_item)) { - return __Pyx_PyObject_GetIndex(obj, key); - } - return __Pyx_PyObject_GetItem_Slow(obj, key); -} -#endif - -/* SliceObject */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetSlice(PyObject* obj, - Py_ssize_t cstart, Py_ssize_t cstop, - PyObject** _py_start, PyObject** _py_stop, PyObject** _py_slice, - int has_cstart, int has_cstop, int wraparound) { - __Pyx_TypeName obj_type_name; -#if CYTHON_USE_TYPE_SLOTS - PyMappingMethods* mp; -#if PY_MAJOR_VERSION < 3 - PySequenceMethods* ms = Py_TYPE(obj)->tp_as_sequence; - if (likely(ms && ms->sq_slice)) { - if (!has_cstart) { - if (_py_start && (*_py_start != Py_None)) { - cstart = __Pyx_PyIndex_AsSsize_t(*_py_start); - if ((cstart == (Py_ssize_t)-1) && PyErr_Occurred()) goto bad; - } else - cstart = 0; - } - if (!has_cstop) { - if (_py_stop && 
(*_py_stop != Py_None)) { - cstop = __Pyx_PyIndex_AsSsize_t(*_py_stop); - if ((cstop == (Py_ssize_t)-1) && PyErr_Occurred()) goto bad; - } else - cstop = PY_SSIZE_T_MAX; - } - if (wraparound && unlikely((cstart < 0) | (cstop < 0)) && likely(ms->sq_length)) { - Py_ssize_t l = ms->sq_length(obj); - if (likely(l >= 0)) { - if (cstop < 0) { - cstop += l; - if (cstop < 0) cstop = 0; - } - if (cstart < 0) { - cstart += l; - if (cstart < 0) cstart = 0; - } - } else { - if (!PyErr_ExceptionMatches(PyExc_OverflowError)) - goto bad; - PyErr_Clear(); - } - } - return ms->sq_slice(obj, cstart, cstop); - } -#else - CYTHON_UNUSED_VAR(wraparound); -#endif - mp = Py_TYPE(obj)->tp_as_mapping; - if (likely(mp && mp->mp_subscript)) -#else - CYTHON_UNUSED_VAR(wraparound); -#endif - { - PyObject* result; - PyObject *py_slice, *py_start, *py_stop; - if (_py_slice) { - py_slice = *_py_slice; - } else { - PyObject* owned_start = NULL; - PyObject* owned_stop = NULL; - if (_py_start) { - py_start = *_py_start; - } else { - if (has_cstart) { - owned_start = py_start = PyInt_FromSsize_t(cstart); - if (unlikely(!py_start)) goto bad; - } else - py_start = Py_None; - } - if (_py_stop) { - py_stop = *_py_stop; - } else { - if (has_cstop) { - owned_stop = py_stop = PyInt_FromSsize_t(cstop); - if (unlikely(!py_stop)) { - Py_XDECREF(owned_start); - goto bad; - } - } else - py_stop = Py_None; - } - py_slice = PySlice_New(py_start, py_stop, Py_None); - Py_XDECREF(owned_start); - Py_XDECREF(owned_stop); - if (unlikely(!py_slice)) goto bad; - } -#if CYTHON_USE_TYPE_SLOTS - result = mp->mp_subscript(obj, py_slice); -#else - result = PyObject_GetItem(obj, py_slice); -#endif - if (!_py_slice) { - Py_DECREF(py_slice); - } - return result; - } - obj_type_name = __Pyx_PyType_GetName(Py_TYPE(obj)); - PyErr_Format(PyExc_TypeError, - "'" __Pyx_FMT_TYPENAME "' object is unsliceable", obj_type_name); - __Pyx_DECREF_TypeName(obj_type_name); -bad: - return NULL; -} - -/* PyIntBinop */ -#if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_PyInt_SubtractObjC(PyObject *op1, PyObject *op2, long intval, int inplace, int zerodivision_check) { - CYTHON_MAYBE_UNUSED_VAR(intval); - CYTHON_MAYBE_UNUSED_VAR(inplace); - CYTHON_UNUSED_VAR(zerodivision_check); - #if PY_MAJOR_VERSION < 3 - if (likely(PyInt_CheckExact(op1))) { - const long b = intval; - long x; - long a = PyInt_AS_LONG(op1); - - x = (long)((unsigned long)a - (unsigned long)b); - if (likely((x^a) >= 0 || (x^~b) >= 0)) - return PyInt_FromLong(x); - return PyLong_Type.tp_as_number->nb_subtract(op1, op2); - } - #endif - #if CYTHON_USE_PYLONG_INTERNALS - if (likely(PyLong_CheckExact(op1))) { - const long b = intval; - long a, x; -#ifdef HAVE_LONG_LONG - const PY_LONG_LONG llb = intval; - PY_LONG_LONG lla, llx; -#endif - if (unlikely(__Pyx_PyLong_IsZero(op1))) { - return PyLong_FromLong(-intval); - } - if (likely(__Pyx_PyLong_IsCompact(op1))) { - a = __Pyx_PyLong_CompactValue(op1); - } else { - const digit* digits = __Pyx_PyLong_Digits(op1); - const Py_ssize_t size = __Pyx_PyLong_SignedDigitCount(op1); - switch (size) { - case -2: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - a = -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT) { - lla = -(PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case 2: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - a = 
(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT) { - lla = (PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case -3: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - a = -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT) { - lla = -(PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case 3: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - a = (long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT) { - lla = (PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case -4: - if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - a = -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT) { - lla = -(PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case 4: - if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - a = (long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT) { - lla = (PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - default: return PyLong_Type.tp_as_number->nb_subtract(op1, op2); - } - } - x = a - b; - return PyLong_FromLong(x); -#ifdef HAVE_LONG_LONG - long_long: - llx = lla - llb; - return PyLong_FromLongLong(llx); -#endif - - - } - #endif - if (PyFloat_CheckExact(op1)) { - const long b = intval; -#if CYTHON_COMPILING_IN_LIMITED_API - double a = __pyx_PyFloat_AsDouble(op1); -#else - double a = PyFloat_AS_DOUBLE(op1); -#endif - double result; - - PyFPE_START_PROTECT("subtract", return NULL) - result = ((double)a) - (double)b; - PyFPE_END_PROTECT(result) - return PyFloat_FromDouble(result); - } - return (inplace ? 
PyNumber_InPlaceSubtract : PyNumber_Subtract)(op1, op2); -} -#endif - -/* pybytes_as_double */ -static double __Pyx_SlowPyString_AsDouble(PyObject *obj) { - PyObject *float_value; -#if PY_MAJOR_VERSION >= 3 - float_value = PyFloat_FromString(obj); -#else - float_value = PyFloat_FromString(obj, 0); -#endif - if (likely(float_value)) { - double value = PyFloat_AS_DOUBLE(float_value); - Py_DECREF(float_value); - return value; - } - return (double)-1; -} -static const char* __Pyx__PyBytes_AsDouble_Copy(const char* start, char* buffer, Py_ssize_t length) { - int last_was_punctuation = 1; - Py_ssize_t i; - for (i=0; i < length; i++) { - char chr = start[i]; - int is_punctuation = (chr == '_') | (chr == '.') | (chr == 'e') | (chr == 'E'); - *buffer = chr; - buffer += (chr != '_'); - if (unlikely(last_was_punctuation & is_punctuation)) goto parse_failure; - last_was_punctuation = is_punctuation; - } - if (unlikely(last_was_punctuation)) goto parse_failure; - *buffer = '\0'; - return buffer; -parse_failure: - return NULL; -} -static double __Pyx__PyBytes_AsDouble_inf_nan(const char* start, Py_ssize_t length) { - int matches = 1; - char sign = start[0]; - int is_signed = (sign == '+') | (sign == '-'); - start += is_signed; - length -= is_signed; - switch (start[0]) { - #ifdef Py_NAN - case 'n': - case 'N': - if (unlikely(length != 3)) goto parse_failure; - matches &= (start[1] == 'a' || start[1] == 'A'); - matches &= (start[2] == 'n' || start[2] == 'N'); - if (unlikely(!matches)) goto parse_failure; - return (sign == '-') ? -Py_NAN : Py_NAN; - #endif - case 'i': - case 'I': - if (unlikely(length < 3)) goto parse_failure; - matches &= (start[1] == 'n' || start[1] == 'N'); - matches &= (start[2] == 'f' || start[2] == 'F'); - if (likely(length == 3 && matches)) - return (sign == '-') ? -Py_HUGE_VAL : Py_HUGE_VAL; - if (unlikely(length != 8)) goto parse_failure; - matches &= (start[3] == 'i' || start[3] == 'I'); - matches &= (start[4] == 'n' || start[4] == 'N'); - matches &= (start[5] == 'i' || start[5] == 'I'); - matches &= (start[6] == 't' || start[6] == 'T'); - matches &= (start[7] == 'y' || start[7] == 'Y'); - if (unlikely(!matches)) goto parse_failure; - return (sign == '-') ? 
-Py_HUGE_VAL : Py_HUGE_VAL; - case '.': case '0': case '1': case '2': case '3': case '4': case '5': case '6': case '7': case '8': case '9': - break; - default: - goto parse_failure; - } - return 0.0; -parse_failure: - return -1.0; -} -static CYTHON_INLINE int __Pyx__PyBytes_AsDouble_IsSpace(char ch) { - return (ch == 0x20) | !((ch < 0x9) | (ch > 0xd)); -} -CYTHON_UNUSED static double __Pyx__PyBytes_AsDouble(PyObject *obj, const char* start, Py_ssize_t length) { - double value; - Py_ssize_t i, digits; - const char *last = start + length; - char *end; - while (__Pyx__PyBytes_AsDouble_IsSpace(*start)) - start++; - while (start < last - 1 && __Pyx__PyBytes_AsDouble_IsSpace(last[-1])) - last--; - length = last - start; - if (unlikely(length <= 0)) goto fallback; - value = __Pyx__PyBytes_AsDouble_inf_nan(start, length); - if (unlikely(value == -1.0)) goto fallback; - if (value != 0.0) return value; - digits = 0; - for (i=0; i < length; digits += start[i++] != '_'); - if (likely(digits == length)) { - value = PyOS_string_to_double(start, &end, NULL); - } else if (digits < 40) { - char number[40]; - last = __Pyx__PyBytes_AsDouble_Copy(start, number, length); - if (unlikely(!last)) goto fallback; - value = PyOS_string_to_double(number, &end, NULL); - } else { - char *number = (char*) PyMem_Malloc((digits + 1) * sizeof(char)); - if (unlikely(!number)) goto fallback; - last = __Pyx__PyBytes_AsDouble_Copy(start, number, length); - if (unlikely(!last)) { - PyMem_Free(number); - goto fallback; - } - value = PyOS_string_to_double(number, &end, NULL); - PyMem_Free(number); - } - if (likely(end == last) || (value == (double)-1 && PyErr_Occurred())) { - return value; - } -fallback: - return __Pyx_SlowPyString_AsDouble(obj); -} - -/* pynumber_float */ -static CYTHON_INLINE PyObject* __Pyx__PyNumber_Float(PyObject* obj) { - double val; - if (PyLong_CheckExact(obj)) { -#if CYTHON_USE_PYLONG_INTERNALS - if (likely(__Pyx_PyLong_IsCompact(obj))) { - val = (double) __Pyx_PyLong_CompactValue(obj); - goto no_error; - } -#endif - val = PyLong_AsDouble(obj); - } else if (PyUnicode_CheckExact(obj)) { - val = __Pyx_PyUnicode_AsDouble(obj); - } else if (PyBytes_CheckExact(obj)) { - val = __Pyx_PyBytes_AsDouble(obj); - } else if (PyByteArray_CheckExact(obj)) { - val = __Pyx_PyByteArray_AsDouble(obj); - } else { - return PyNumber_Float(obj); - } - if (unlikely(val == -1 && PyErr_Occurred())) { - return NULL; - } -#if CYTHON_USE_PYLONG_INTERNALS -no_error: -#endif - return PyFloat_FromDouble(val); -} - -/* IterNext */ -static PyObject *__Pyx_PyIter_Next2Default(PyObject* defval) { - PyObject* exc_type; - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - exc_type = __Pyx_PyErr_CurrentExceptionType(); - if (unlikely(exc_type)) { - if (!defval || unlikely(!__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) - return NULL; - __Pyx_PyErr_Clear(); - Py_INCREF(defval); - return defval; - } - if (defval) { - Py_INCREF(defval); - return defval; - } - __Pyx_PyErr_SetNone(PyExc_StopIteration); - return NULL; -} -static void __Pyx_PyIter_Next_ErrorNoIterator(PyObject *iterator) { - __Pyx_TypeName iterator_type_name = __Pyx_PyType_GetName(Py_TYPE(iterator)); - PyErr_Format(PyExc_TypeError, - __Pyx_FMT_TYPENAME " object is not an iterator", iterator_type_name); - __Pyx_DECREF_TypeName(iterator_type_name); -} -static CYTHON_INLINE PyObject *__Pyx_PyIter_Next2(PyObject* iterator, PyObject* defval) { - PyObject* next; - iternextfunc iternext = Py_TYPE(iterator)->tp_iternext; - if (likely(iternext)) { -#if 
CYTHON_USE_TYPE_SLOTS || CYTHON_COMPILING_IN_PYPY - next = iternext(iterator); - if (likely(next)) - return next; -#if CYTHON_COMPILING_IN_CPYTHON - if (unlikely(iternext == &_PyObject_NextNotImplemented)) - return NULL; -#endif -#else - next = PyIter_Next(iterator); - if (likely(next)) - return next; -#endif - } else if (CYTHON_USE_TYPE_SLOTS || unlikely(!PyIter_Check(iterator))) { - __Pyx_PyIter_Next_ErrorNoIterator(iterator); - return NULL; - } -#if !CYTHON_USE_TYPE_SLOTS - else { - next = PyIter_Next(iterator); - if (likely(next)) - return next; - } -#endif - return __Pyx_PyIter_Next2Default(defval); -} - -/* GetTopmostException */ -#if CYTHON_USE_EXC_INFO_STACK && CYTHON_FAST_THREAD_STATE -static _PyErr_StackItem * -__Pyx_PyErr_GetTopmostException(PyThreadState *tstate) -{ - _PyErr_StackItem *exc_info = tstate->exc_info; - while ((exc_info->exc_value == NULL || exc_info->exc_value == Py_None) && - exc_info->previous_item != NULL) - { - exc_info = exc_info->previous_item; - } - return exc_info; -} -#endif - -/* SaveResetException */ -#if CYTHON_FAST_THREAD_STATE -static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { - #if CYTHON_USE_EXC_INFO_STACK && PY_VERSION_HEX >= 0x030B00a4 - _PyErr_StackItem *exc_info = __Pyx_PyErr_GetTopmostException(tstate); - PyObject *exc_value = exc_info->exc_value; - if (exc_value == NULL || exc_value == Py_None) { - *value = NULL; - *type = NULL; - *tb = NULL; - } else { - *value = exc_value; - Py_INCREF(*value); - *type = (PyObject*) Py_TYPE(exc_value); - Py_INCREF(*type); - *tb = PyException_GetTraceback(exc_value); - } - #elif CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info = __Pyx_PyErr_GetTopmostException(tstate); - *type = exc_info->exc_type; - *value = exc_info->exc_value; - *tb = exc_info->exc_traceback; - Py_XINCREF(*type); - Py_XINCREF(*value); - Py_XINCREF(*tb); - #else - *type = tstate->exc_type; - *value = tstate->exc_value; - *tb = tstate->exc_traceback; - Py_XINCREF(*type); - Py_XINCREF(*value); - Py_XINCREF(*tb); - #endif -} -static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) { - #if CYTHON_USE_EXC_INFO_STACK && PY_VERSION_HEX >= 0x030B00a4 - _PyErr_StackItem *exc_info = tstate->exc_info; - PyObject *tmp_value = exc_info->exc_value; - exc_info->exc_value = value; - Py_XDECREF(tmp_value); - Py_XDECREF(type); - Py_XDECREF(tb); - #else - PyObject *tmp_type, *tmp_value, *tmp_tb; - #if CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info = tstate->exc_info; - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = exc_info->exc_traceback; - exc_info->exc_type = type; - exc_info->exc_value = value; - exc_info->exc_traceback = tb; - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = type; - tstate->exc_value = value; - tstate->exc_traceback = tb; - #endif - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); - #endif -} -#endif - -/* GetException */ -#if CYTHON_FAST_THREAD_STATE -static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) -#else -static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb) -#endif -{ - PyObject *local_type = NULL, *local_value, *local_tb = NULL; -#if CYTHON_FAST_THREAD_STATE - PyObject *tmp_type, *tmp_value, *tmp_tb; - #if PY_VERSION_HEX >= 0x030C00A6 - local_value = tstate->current_exception; - 
tstate->current_exception = 0; - if (likely(local_value)) { - local_type = (PyObject*) Py_TYPE(local_value); - Py_INCREF(local_type); - local_tb = PyException_GetTraceback(local_value); - } - #else - local_type = tstate->curexc_type; - local_value = tstate->curexc_value; - local_tb = tstate->curexc_traceback; - tstate->curexc_type = 0; - tstate->curexc_value = 0; - tstate->curexc_traceback = 0; - #endif -#else - PyErr_Fetch(&local_type, &local_value, &local_tb); -#endif - PyErr_NormalizeException(&local_type, &local_value, &local_tb); -#if CYTHON_FAST_THREAD_STATE && PY_VERSION_HEX >= 0x030C00A6 - if (unlikely(tstate->current_exception)) -#elif CYTHON_FAST_THREAD_STATE - if (unlikely(tstate->curexc_type)) -#else - if (unlikely(PyErr_Occurred())) -#endif - goto bad; - #if PY_MAJOR_VERSION >= 3 - if (local_tb) { - if (unlikely(PyException_SetTraceback(local_value, local_tb) < 0)) - goto bad; - } - #endif - Py_XINCREF(local_tb); - Py_XINCREF(local_type); - Py_XINCREF(local_value); - *type = local_type; - *value = local_value; - *tb = local_tb; -#if CYTHON_FAST_THREAD_STATE - #if CYTHON_USE_EXC_INFO_STACK - { - _PyErr_StackItem *exc_info = tstate->exc_info; - #if PY_VERSION_HEX >= 0x030B00a4 - tmp_value = exc_info->exc_value; - exc_info->exc_value = local_value; - tmp_type = NULL; - tmp_tb = NULL; - Py_XDECREF(local_type); - Py_XDECREF(local_tb); - #else - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = exc_info->exc_traceback; - exc_info->exc_type = local_type; - exc_info->exc_value = local_value; - exc_info->exc_traceback = local_tb; - #endif - } - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = local_type; - tstate->exc_value = local_value; - tstate->exc_traceback = local_tb; - #endif - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); -#else - PyErr_SetExcInfo(local_type, local_value, local_tb); -#endif - return 0; -bad: - *type = 0; - *value = 0; - *tb = 0; - Py_XDECREF(local_type); - Py_XDECREF(local_value); - Py_XDECREF(local_tb); - return -1; -} - -/* PyObjectGetMethod */ -static int __Pyx_PyObject_GetMethod(PyObject *obj, PyObject *name, PyObject **method) { - PyObject *attr; -#if CYTHON_UNPACK_METHODS && CYTHON_COMPILING_IN_CPYTHON && CYTHON_USE_PYTYPE_LOOKUP - __Pyx_TypeName type_name; - PyTypeObject *tp = Py_TYPE(obj); - PyObject *descr; - descrgetfunc f = NULL; - PyObject **dictptr, *dict; - int meth_found = 0; - assert (*method == NULL); - if (unlikely(tp->tp_getattro != PyObject_GenericGetAttr)) { - attr = __Pyx_PyObject_GetAttrStr(obj, name); - goto try_unpack; - } - if (unlikely(tp->tp_dict == NULL) && unlikely(PyType_Ready(tp) < 0)) { - return 0; - } - descr = _PyType_Lookup(tp, name); - if (likely(descr != NULL)) { - Py_INCREF(descr); -#if defined(Py_TPFLAGS_METHOD_DESCRIPTOR) && Py_TPFLAGS_METHOD_DESCRIPTOR - if (__Pyx_PyType_HasFeature(Py_TYPE(descr), Py_TPFLAGS_METHOD_DESCRIPTOR)) -#elif PY_MAJOR_VERSION >= 3 - #ifdef __Pyx_CyFunction_USED - if (likely(PyFunction_Check(descr) || __Pyx_IS_TYPE(descr, &PyMethodDescr_Type) || __Pyx_CyFunction_Check(descr))) - #else - if (likely(PyFunction_Check(descr) || __Pyx_IS_TYPE(descr, &PyMethodDescr_Type))) - #endif -#else - #ifdef __Pyx_CyFunction_USED - if (likely(PyFunction_Check(descr) || __Pyx_CyFunction_Check(descr))) - #else - if (likely(PyFunction_Check(descr))) - #endif -#endif - { - meth_found = 1; - } else { - f = Py_TYPE(descr)->tp_descr_get; - if (f != NULL && PyDescr_IsData(descr)) { - attr = 
f(descr, obj, (PyObject *)Py_TYPE(obj)); - Py_DECREF(descr); - goto try_unpack; - } - } - } - dictptr = _PyObject_GetDictPtr(obj); - if (dictptr != NULL && (dict = *dictptr) != NULL) { - Py_INCREF(dict); - attr = __Pyx_PyDict_GetItemStr(dict, name); - if (attr != NULL) { - Py_INCREF(attr); - Py_DECREF(dict); - Py_XDECREF(descr); - goto try_unpack; - } - Py_DECREF(dict); - } - if (meth_found) { - *method = descr; - return 1; - } - if (f != NULL) { - attr = f(descr, obj, (PyObject *)Py_TYPE(obj)); - Py_DECREF(descr); - goto try_unpack; - } - if (likely(descr != NULL)) { - *method = descr; - return 0; - } - type_name = __Pyx_PyType_GetName(tp); - PyErr_Format(PyExc_AttributeError, -#if PY_MAJOR_VERSION >= 3 - "'" __Pyx_FMT_TYPENAME "' object has no attribute '%U'", - type_name, name); -#else - "'" __Pyx_FMT_TYPENAME "' object has no attribute '%.400s'", - type_name, PyString_AS_STRING(name)); -#endif - __Pyx_DECREF_TypeName(type_name); - return 0; -#else - attr = __Pyx_PyObject_GetAttrStr(obj, name); - goto try_unpack; -#endif -try_unpack: -#if CYTHON_UNPACK_METHODS - if (likely(attr) && PyMethod_Check(attr) && likely(PyMethod_GET_SELF(attr) == obj)) { - PyObject *function = PyMethod_GET_FUNCTION(attr); - Py_INCREF(function); - Py_DECREF(attr); - *method = function; - return 1; - } -#endif - *method = attr; - return 0; -} - -/* PyObjectCallMethod0 */ -static PyObject* __Pyx_PyObject_CallMethod0(PyObject* obj, PyObject* method_name) { - PyObject *method = NULL, *result = NULL; - int is_method = __Pyx_PyObject_GetMethod(obj, method_name, &method); - if (likely(is_method)) { - result = __Pyx_PyObject_CallOneArg(method, obj); - Py_DECREF(method); - return result; - } - if (unlikely(!method)) goto bad; - result = __Pyx_PyObject_CallNoArg(method); - Py_DECREF(method); -bad: - return result; -} - -/* UnpackUnboundCMethod */ -static PyObject *__Pyx_SelflessCall(PyObject *method, PyObject *args, PyObject *kwargs) { - PyObject *selfless_args = PyTuple_GetSlice(args, 1, PyTuple_Size(args)); - if (unlikely(!selfless_args)) return NULL; - PyObject *result = PyObject_Call(method, selfless_args, kwargs); - Py_DECREF(selfless_args); - return result; -} -static PyMethodDef __Pyx_UnboundCMethod_Def = { - "CythonUnboundCMethod", - __PYX_REINTERPRET_FUNCION(PyCFunction, __Pyx_SelflessCall), - METH_VARARGS | METH_KEYWORDS, - NULL -}; -static int __Pyx_TryUnpackUnboundCMethod(__Pyx_CachedCFunction* target) { - PyObject *method; - method = __Pyx_PyObject_GetAttrStr(target->type, *target->method_name); - if (unlikely(!method)) - return -1; - target->method = method; -#if CYTHON_COMPILING_IN_CPYTHON - #if PY_MAJOR_VERSION >= 3 - if (likely(__Pyx_TypeCheck(method, &PyMethodDescr_Type))) - #else - if (likely(!PyCFunction_Check(method))) - #endif - { - PyMethodDescrObject *descr = (PyMethodDescrObject*) method; - target->func = descr->d_method->ml_meth; - target->flag = descr->d_method->ml_flags & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_STACKLESS); - } else -#endif -#if defined(CYTHON_COMPILING_IN_PYPY) -#elif PY_VERSION_HEX >= 0x03090000 - if (PyCFunction_CheckExact(method)) -#else - if (PyCFunction_Check(method)) -#endif - { - PyObject *self; - int self_found; -#if CYTHON_COMPILING_IN_LIMITED_API || CYTHON_COMPILING_IN_PYPY - self = PyObject_GetAttrString(method, "__self__"); - if (!self) { - PyErr_Clear(); - } -#else - self = PyCFunction_GET_SELF(method); -#endif - self_found = (self && self != Py_None); -#if CYTHON_COMPILING_IN_LIMITED_API || CYTHON_COMPILING_IN_PYPY - Py_XDECREF(self); -#endif - if 
(self_found) { - PyObject *unbound_method = PyCFunction_New(&__Pyx_UnboundCMethod_Def, method); - if (unlikely(!unbound_method)) return -1; - Py_DECREF(method); - target->method = unbound_method; - } - } - return 0; -} - -/* CallUnboundCMethod0 */ -static PyObject* __Pyx__CallUnboundCMethod0(__Pyx_CachedCFunction* cfunc, PyObject* self) { - PyObject *args, *result = NULL; - if (unlikely(!cfunc->method) && unlikely(__Pyx_TryUnpackUnboundCMethod(cfunc) < 0)) return NULL; -#if CYTHON_ASSUME_SAFE_MACROS - args = PyTuple_New(1); - if (unlikely(!args)) goto bad; - Py_INCREF(self); - PyTuple_SET_ITEM(args, 0, self); -#else - args = PyTuple_Pack(1, self); - if (unlikely(!args)) goto bad; -#endif - result = __Pyx_PyObject_Call(cfunc->method, args, NULL); - Py_DECREF(args); -bad: - return result; -} - -/* pop */ -static CYTHON_INLINE PyObject* __Pyx__PyObject_Pop(PyObject* L) { - if (__Pyx_IS_TYPE(L, &PySet_Type)) { - return PySet_Pop(L); - } - return __Pyx_PyObject_CallMethod0(L, __pyx_n_s_pop); -} -#if CYTHON_USE_PYLIST_INTERNALS && CYTHON_ASSUME_SAFE_MACROS -static CYTHON_INLINE PyObject* __Pyx_PyList_Pop(PyObject* L) { - if (likely(PyList_GET_SIZE(L) > (((PyListObject*)L)->allocated >> 1))) { - __Pyx_SET_SIZE(L, Py_SIZE(L) - 1); - return PyList_GET_ITEM(L, PyList_GET_SIZE(L)); - } - return __Pyx_CallUnboundCMethod0(&__pyx_umethod_PyList_Type_pop, L); -} -#endif - -/* PyObjectCall2Args */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_Call2Args(PyObject* function, PyObject* arg1, PyObject* arg2) { - PyObject *args[3] = {NULL, arg1, arg2}; - return __Pyx_PyObject_FastCall(function, args+1, 2 | __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET); -} - -/* PyObjectCallMethod1 */ -static PyObject* __Pyx__PyObject_CallMethod1(PyObject* method, PyObject* arg) { - PyObject *result = __Pyx_PyObject_CallOneArg(method, arg); - Py_DECREF(method); - return result; -} -static PyObject* __Pyx_PyObject_CallMethod1(PyObject* obj, PyObject* method_name, PyObject* arg) { - PyObject *method = NULL, *result; - int is_method = __Pyx_PyObject_GetMethod(obj, method_name, &method); - if (likely(is_method)) { - result = __Pyx_PyObject_Call2Args(method, obj, arg); - Py_DECREF(method); - return result; - } - if (unlikely(!method)) return NULL; - return __Pyx__PyObject_CallMethod1(method, arg); -} - -/* append */ -static CYTHON_INLINE int __Pyx_PyObject_Append(PyObject* L, PyObject* x) { - if (likely(PyList_CheckExact(L))) { - if (unlikely(__Pyx_PyList_Append(L, x) < 0)) return -1; - } else { - PyObject* retval = __Pyx_PyObject_CallMethod1(L, __pyx_n_s_append, x); - if (unlikely(!retval)) - return -1; - Py_DECREF(retval); - } - return 0; -} - -/* FastTypeChecks */ -#if CYTHON_COMPILING_IN_CPYTHON -static int __Pyx_InBases(PyTypeObject *a, PyTypeObject *b) { - while (a) { - a = __Pyx_PyType_GetSlot(a, tp_base, PyTypeObject*); - if (a == b) - return 1; - } - return b == &PyBaseObject_Type; -} -static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b) { - PyObject *mro; - if (a == b) return 1; - mro = a->tp_mro; - if (likely(mro)) { - Py_ssize_t i, n; - n = PyTuple_GET_SIZE(mro); - for (i = 0; i < n; i++) { - if (PyTuple_GET_ITEM(mro, i) == (PyObject *)b) - return 1; - } - return 0; - } - return __Pyx_InBases(a, b); -} -static CYTHON_INLINE int __Pyx_IsAnySubtype2(PyTypeObject *cls, PyTypeObject *a, PyTypeObject *b) { - PyObject *mro; - if (cls == a || cls == b) return 1; - mro = cls->tp_mro; - if (likely(mro)) { - Py_ssize_t i, n; - n = PyTuple_GET_SIZE(mro); - for (i = 0; i < n; i++) { - PyObject *base = 
PyTuple_GET_ITEM(mro, i); - if (base == (PyObject *)a || base == (PyObject *)b) - return 1; - } - return 0; - } - return __Pyx_InBases(cls, a) || __Pyx_InBases(cls, b); -} -#if PY_MAJOR_VERSION == 2 -static int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject* exc_type2) { - PyObject *exception, *value, *tb; - int res; - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ErrFetch(&exception, &value, &tb); - res = exc_type1 ? PyObject_IsSubclass(err, exc_type1) : 0; - if (unlikely(res == -1)) { - PyErr_WriteUnraisable(err); - res = 0; - } - if (!res) { - res = PyObject_IsSubclass(err, exc_type2); - if (unlikely(res == -1)) { - PyErr_WriteUnraisable(err); - res = 0; - } - } - __Pyx_ErrRestore(exception, value, tb); - return res; -} -#else -static CYTHON_INLINE int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject *exc_type2) { - if (exc_type1) { - return __Pyx_IsAnySubtype2((PyTypeObject*)err, (PyTypeObject*)exc_type1, (PyTypeObject*)exc_type2); - } else { - return __Pyx_IsSubtype((PyTypeObject*)err, (PyTypeObject*)exc_type2); - } -} -#endif -static int __Pyx_PyErr_GivenExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) { - Py_ssize_t i, n; - assert(PyExceptionClass_Check(exc_type)); - n = PyTuple_GET_SIZE(tuple); -#if PY_MAJOR_VERSION >= 3 - for (i=0; i<n; i++) { - if (exc_type == PyTuple_GET_ITEM(tuple, i)) return 1; - } -#endif - for (i=0; i<n; i++) { - PyObject *t = PyTuple_GET_ITEM(tuple, i); - #if PY_MAJOR_VERSION < 3 - if (likely(exc_type == t)) return 1; - #endif - if (likely(PyExceptionClass_Check(t))) { - if (__Pyx_inner_PyErr_GivenExceptionMatches2(exc_type, NULL, t)) return 1; - } else { - } - } - return 0; -} -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject* exc_type) { - if (likely(err == exc_type)) return 1; - if (likely(PyExceptionClass_Check(err))) { - if (likely(PyExceptionClass_Check(exc_type))) { - return __Pyx_inner_PyErr_GivenExceptionMatches2(err, NULL, exc_type); - } else if (likely(PyTuple_Check(exc_type))) { - return __Pyx_PyErr_GivenExceptionMatchesTuple(err, exc_type); - } else { - } - } - return PyErr_GivenExceptionMatches(err, exc_type); -} -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *exc_type1, PyObject *exc_type2) { - assert(PyExceptionClass_Check(exc_type1)); - assert(PyExceptionClass_Check(exc_type2)); - if (likely(err == exc_type1 || err == exc_type2)) return 1; - if (likely(PyExceptionClass_Check(err))) { - return __Pyx_inner_PyErr_GivenExceptionMatches2(err, exc_type1, exc_type2); - } - return (PyErr_GivenExceptionMatches(err, exc_type1) || PyErr_GivenExceptionMatches(err, exc_type2)); -} -#endif - -/* SwapException */ -#if CYTHON_FAST_THREAD_STATE -static CYTHON_INLINE void __Pyx__ExceptionSwap(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - #if CYTHON_USE_EXC_INFO_STACK && PY_VERSION_HEX >= 0x030B00a4 - _PyErr_StackItem *exc_info = tstate->exc_info; - tmp_value = exc_info->exc_value; - exc_info->exc_value = *value; - if (tmp_value == NULL || tmp_value == Py_None) { - Py_XDECREF(tmp_value); - tmp_value = NULL; - tmp_type = NULL; - tmp_tb = NULL; - } else { - tmp_type = (PyObject*) Py_TYPE(tmp_value); - Py_INCREF(tmp_type); - #if CYTHON_COMPILING_IN_CPYTHON - tmp_tb = ((PyBaseExceptionObject*) tmp_value)->traceback; - Py_XINCREF(tmp_tb); - #else - tmp_tb = PyException_GetTraceback(tmp_value); - #endif - } - #elif CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info = tstate->exc_info; - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = exc_info->exc_traceback; - exc_info->exc_type = *type; - exc_info->exc_value = *value; - exc_info->exc_traceback = *tb; - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = *type; - tstate->exc_value = *value; - tstate->exc_traceback = *tb; - #endif - *type = tmp_type; - *value = tmp_value; - *tb = tmp_tb; -} -#else -static CYTHON_INLINE void __Pyx_ExceptionSwap(PyObject **type, PyObject **value, PyObject **tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - PyErr_GetExcInfo(&tmp_type, &tmp_value, &tmp_tb); - PyErr_SetExcInfo(*type, *value, *tb); - *type = tmp_type; - *value = tmp_value; - *tb = tmp_tb; -} -#endif - -/* GetAttr */ -static CYTHON_INLINE PyObject *__Pyx_GetAttr(PyObject *o, PyObject *n) { -#if CYTHON_USE_TYPE_SLOTS -#if PY_MAJOR_VERSION >= 3 - if (likely(PyUnicode_Check(n))) -#else - if (likely(PyString_Check(n))) -#endif - return __Pyx_PyObject_GetAttrStr(o, n); -#endif - return PyObject_GetAttr(o, n); -} - -/* HasAttr */ -static CYTHON_INLINE int __Pyx_HasAttr(PyObject *o, PyObject *n) { - PyObject *r; - if (unlikely(!__Pyx_PyBaseString_Check(n))) { - PyErr_SetString(PyExc_TypeError, - "hasattr(): attribute name must be string"); - return -1; - } - r = __Pyx_GetAttr(o, n); - if (!r) { - PyErr_Clear(); - return 0; - } else { - Py_DECREF(r); - return 1; - } -} - -/* GetAttr3 */ -static PyObject *__Pyx_GetAttr3Default(PyObject *d) { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign
- if (unlikely(!__Pyx_PyErr_ExceptionMatches(PyExc_AttributeError))) - return NULL; - __Pyx_PyErr_Clear(); - Py_INCREF(d); - return d; -} -static CYTHON_INLINE PyObject *__Pyx_GetAttr3(PyObject *o, PyObject *n, PyObject *d) { - PyObject *r; -#if CYTHON_USE_TYPE_SLOTS - if (likely(PyString_Check(n))) { - r = __Pyx_PyObject_GetAttrStrNoError(o, n); - if (unlikely(!r) && likely(!PyErr_Occurred())) { - r = __Pyx_NewRef(d); - } - return r; - } -#endif - r = PyObject_GetAttr(o, n); - return (likely(r)) ? r : __Pyx_GetAttr3Default(d); -} - -/* Import */ -static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level) { - PyObject *module = 0; - PyObject *empty_dict = 0; - PyObject *empty_list = 0; - #if PY_MAJOR_VERSION < 3 - PyObject *py_import; - py_import = __Pyx_PyObject_GetAttrStr(__pyx_b, __pyx_n_s_import); - if (unlikely(!py_import)) - goto bad; - if (!from_list) { - empty_list = PyList_New(0); - if (unlikely(!empty_list)) - goto bad; - from_list = empty_list; - } - #endif - empty_dict = PyDict_New(); - if (unlikely(!empty_dict)) - goto bad; - { - #if PY_MAJOR_VERSION >= 3 - if (level == -1) { - if ((1) && (strchr(__Pyx_MODULE_NAME, '.'))) { - module = PyImport_ImportModuleLevelObject( - name, __pyx_d, empty_dict, from_list, 1); - if (unlikely(!module)) { - if (unlikely(!PyErr_ExceptionMatches(PyExc_ImportError))) - goto bad; - PyErr_Clear(); - } - } - level = 0; - } - #endif - if (!module) { - #if PY_MAJOR_VERSION < 3 - PyObject *py_level = PyInt_FromLong(level); - if (unlikely(!py_level)) - goto bad; - module = PyObject_CallFunctionObjArgs(py_import, - name, __pyx_d, empty_dict, from_list, py_level, (PyObject *)NULL); - Py_DECREF(py_level); - #else - module = PyImport_ImportModuleLevelObject( - name, __pyx_d, empty_dict, from_list, level); - #endif - } - } -bad: - Py_XDECREF(empty_dict); - Py_XDECREF(empty_list); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(py_import); - #endif - return module; -} - -/* ImportFrom */ -static PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name) { - PyObject* value = __Pyx_PyObject_GetAttrStr(module, name); - if (unlikely(!value) && PyErr_ExceptionMatches(PyExc_AttributeError)) { - const char* module_name_str = 0; - PyObject* module_name = 0; - PyObject* module_dot = 0; - PyObject* full_name = 0; - PyErr_Clear(); - module_name_str = PyModule_GetName(module); - if (unlikely(!module_name_str)) { goto modbad; } - module_name = PyUnicode_FromString(module_name_str); - if (unlikely(!module_name)) { goto modbad; } - module_dot = PyUnicode_Concat(module_name, __pyx_kp_u__8); - if (unlikely(!module_dot)) { goto modbad; } - full_name = PyUnicode_Concat(module_dot, name); - if (unlikely(!full_name)) { goto modbad; } - #if PY_VERSION_HEX < 0x030700A1 || (CYTHON_COMPILING_IN_PYPY && PYPY_VERSION_NUM < 0x07030400) - { - PyObject *modules = PyImport_GetModuleDict(); - if (unlikely(!modules)) - goto modbad; - value = PyObject_GetItem(modules, full_name); - } - #else - value = PyImport_GetModule(full_name); - #endif - modbad: - Py_XDECREF(full_name); - Py_XDECREF(module_dot); - Py_XDECREF(module_name); - } - if (unlikely(!value)) { - PyErr_Format(PyExc_ImportError, - #if PY_MAJOR_VERSION < 3 - "cannot import name %.230s", PyString_AS_STRING(name)); - #else - "cannot import name %S", name); - #endif - } - return value; -} - -/* ImportDottedModule */ -#if PY_MAJOR_VERSION >= 3 -static PyObject *__Pyx__ImportDottedModule_Error(PyObject *name, PyObject *parts_tuple, Py_ssize_t count) { - PyObject *partial_name = NULL, *slice = NULL, *sep = NULL; - if 
(unlikely(PyErr_Occurred())) { - PyErr_Clear(); - } - if (likely(PyTuple_GET_SIZE(parts_tuple) == count)) { - partial_name = name; - } else { - slice = PySequence_GetSlice(parts_tuple, 0, count); - if (unlikely(!slice)) - goto bad; - sep = PyUnicode_FromStringAndSize(".", 1); - if (unlikely(!sep)) - goto bad; - partial_name = PyUnicode_Join(sep, slice); - } - PyErr_Format( -#if PY_MAJOR_VERSION < 3 - PyExc_ImportError, - "No module named '%s'", PyString_AS_STRING(partial_name)); -#else -#if PY_VERSION_HEX >= 0x030600B1 - PyExc_ModuleNotFoundError, -#else - PyExc_ImportError, -#endif - "No module named '%U'", partial_name); -#endif -bad: - Py_XDECREF(sep); - Py_XDECREF(slice); - Py_XDECREF(partial_name); - return NULL; -} -#endif -#if PY_MAJOR_VERSION >= 3 -static PyObject *__Pyx__ImportDottedModule_Lookup(PyObject *name) { - PyObject *imported_module; -#if PY_VERSION_HEX < 0x030700A1 || (CYTHON_COMPILING_IN_PYPY && PYPY_VERSION_NUM < 0x07030400) - PyObject *modules = PyImport_GetModuleDict(); - if (unlikely(!modules)) - return NULL; - imported_module = __Pyx_PyDict_GetItemStr(modules, name); - Py_XINCREF(imported_module); -#else - imported_module = PyImport_GetModule(name); -#endif - return imported_module; -} -#endif -#if PY_MAJOR_VERSION >= 3 -static PyObject *__Pyx_ImportDottedModule_WalkParts(PyObject *module, PyObject *name, PyObject *parts_tuple) { - Py_ssize_t i, nparts; - nparts = PyTuple_GET_SIZE(parts_tuple); - for (i=1; i < nparts && module; i++) { - PyObject *part, *submodule; -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - part = PyTuple_GET_ITEM(parts_tuple, i); -#else - part = PySequence_ITEM(parts_tuple, i); -#endif - submodule = __Pyx_PyObject_GetAttrStrNoError(module, part); -#if !(CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS) - Py_DECREF(part); -#endif - Py_DECREF(module); - module = submodule; - } - if (unlikely(!module)) { - return __Pyx__ImportDottedModule_Error(name, parts_tuple, i); - } - return module; -} -#endif -static PyObject *__Pyx__ImportDottedModule(PyObject *name, PyObject *parts_tuple) { -#if PY_MAJOR_VERSION < 3 - PyObject *module, *from_list, *star = __pyx_n_s__13; - CYTHON_UNUSED_VAR(parts_tuple); - from_list = PyList_New(1); - if (unlikely(!from_list)) - return NULL; - Py_INCREF(star); - PyList_SET_ITEM(from_list, 0, star); - module = __Pyx_Import(name, from_list, 0); - Py_DECREF(from_list); - return module; -#else - PyObject *imported_module; - PyObject *module = __Pyx_Import(name, NULL, 0); - if (!parts_tuple || unlikely(!module)) - return module; - imported_module = __Pyx__ImportDottedModule_Lookup(name); - if (likely(imported_module)) { - Py_DECREF(module); - return imported_module; - } - PyErr_Clear(); - return __Pyx_ImportDottedModule_WalkParts(module, name, parts_tuple); -#endif -} -static PyObject *__Pyx_ImportDottedModule(PyObject *name, PyObject *parts_tuple) { -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030400B1 - PyObject *module = __Pyx__ImportDottedModule_Lookup(name); - if (likely(module)) { - PyObject *spec = __Pyx_PyObject_GetAttrStrNoError(module, __pyx_n_s_spec); - if (likely(spec)) { - PyObject *unsafe = __Pyx_PyObject_GetAttrStrNoError(spec, __pyx_n_s_initializing); - if (likely(!unsafe || !__Pyx_PyObject_IsTrue(unsafe))) { - Py_DECREF(spec); - spec = NULL; - } - Py_XDECREF(unsafe); - } - if (likely(!spec)) { - PyErr_Clear(); - return module; - } - Py_DECREF(spec); - Py_DECREF(module); - } else if (PyErr_Occurred()) { - PyErr_Clear(); - } -#endif - return __Pyx__ImportDottedModule(name, 
parts_tuple); -} - -/* Py3UpdateBases */ -static PyObject* -__Pyx_PEP560_update_bases(PyObject *bases) -{ - Py_ssize_t i, j, size_bases; - PyObject *base, *meth, *new_base, *result, *new_bases = NULL; - size_bases = PyTuple_GET_SIZE(bases); - for (i = 0; i < size_bases; i++) { - base = PyTuple_GET_ITEM(bases, i); - if (PyType_Check(base)) { - if (new_bases) { - if (PyList_Append(new_bases, base) < 0) { - goto error; - } - } - continue; - } - meth = __Pyx_PyObject_GetAttrStrNoError(base, __pyx_n_s_mro_entries); - if (!meth && PyErr_Occurred()) { - goto error; - } - if (!meth) { - if (new_bases) { - if (PyList_Append(new_bases, base) < 0) { - goto error; - } - } - continue; - } - new_base = __Pyx_PyObject_CallOneArg(meth, bases); - Py_DECREF(meth); - if (!new_base) { - goto error; - } - if (!PyTuple_Check(new_base)) { - PyErr_SetString(PyExc_TypeError, - "__mro_entries__ must return a tuple"); - Py_DECREF(new_base); - goto error; - } - if (!new_bases) { - if (!(new_bases = PyList_New(i))) { - goto error; - } - for (j = 0; j < i; j++) { - base = PyTuple_GET_ITEM(bases, j); - PyList_SET_ITEM(new_bases, j, base); - Py_INCREF(base); - } - } - j = PyList_GET_SIZE(new_bases); - if (PyList_SetSlice(new_bases, j, j, new_base) < 0) { - goto error; - } - Py_DECREF(new_base); - } - if (!new_bases) { - Py_INCREF(bases); - return bases; - } - result = PyList_AsTuple(new_bases); - Py_DECREF(new_bases); - return result; -error: - Py_XDECREF(new_bases); - return NULL; -} - -/* CalculateMetaclass */ -static PyObject *__Pyx_CalculateMetaclass(PyTypeObject *metaclass, PyObject *bases) { - Py_ssize_t i, nbases; -#if CYTHON_ASSUME_SAFE_MACROS - nbases = PyTuple_GET_SIZE(bases); -#else - nbases = PyTuple_Size(bases); - if (nbases < 0) return NULL; -#endif - for (i=0; i < nbases; i++) { - PyTypeObject *tmptype; -#if CYTHON_ASSUME_SAFE_MACROS - PyObject *tmp = PyTuple_GET_ITEM(bases, i); -#else - PyObject *tmp = PyTuple_GetItem(bases, i); - if (!tmp) return NULL; -#endif - tmptype = Py_TYPE(tmp); -#if PY_MAJOR_VERSION < 3 - if (tmptype == &PyClass_Type) - continue; -#endif - if (!metaclass) { - metaclass = tmptype; - continue; - } - if (PyType_IsSubtype(metaclass, tmptype)) - continue; - if (PyType_IsSubtype(tmptype, metaclass)) { - metaclass = tmptype; - continue; - } - PyErr_SetString(PyExc_TypeError, - "metaclass conflict: " - "the metaclass of a derived class " - "must be a (non-strict) subclass " - "of the metaclasses of all its bases"); - return NULL; - } - if (!metaclass) { -#if PY_MAJOR_VERSION < 3 - metaclass = &PyClass_Type; -#else - metaclass = &PyType_Type; -#endif - } - Py_INCREF((PyObject*) metaclass); - return (PyObject*) metaclass; -} - -/* FixUpExtensionType */ -#if CYTHON_USE_TYPE_SPECS -static int __Pyx_fix_up_extension_type_from_spec(PyType_Spec *spec, PyTypeObject *type) { -#if PY_VERSION_HEX > 0x030900B1 || CYTHON_COMPILING_IN_LIMITED_API - CYTHON_UNUSED_VAR(spec); - CYTHON_UNUSED_VAR(type); -#else - const PyType_Slot *slot = spec->slots; - while (slot && slot->slot && slot->slot != Py_tp_members) - slot++; - if (slot && slot->slot == Py_tp_members) { - int changed = 0; -#if !(PY_VERSION_HEX <= 0x030900b1 && CYTHON_COMPILING_IN_CPYTHON) - const -#endif - PyMemberDef *memb = (PyMemberDef*) slot->pfunc; - while (memb && memb->name) { - if (memb->name[0] == '_' && memb->name[1] == '_') { -#if PY_VERSION_HEX < 0x030900b1 - if (strcmp(memb->name, "__weaklistoffset__") == 0) { - assert(memb->type == T_PYSSIZET); - assert(memb->flags == READONLY); - type->tp_weaklistoffset = memb->offset; - changed = 
1; - } - else if (strcmp(memb->name, "__dictoffset__") == 0) { - assert(memb->type == T_PYSSIZET); - assert(memb->flags == READONLY); - type->tp_dictoffset = memb->offset; - changed = 1; - } -#if CYTHON_METH_FASTCALL - else if (strcmp(memb->name, "__vectorcalloffset__") == 0) { - assert(memb->type == T_PYSSIZET); - assert(memb->flags == READONLY); -#if PY_VERSION_HEX >= 0x030800b4 - type->tp_vectorcall_offset = memb->offset; -#else - type->tp_print = (printfunc) memb->offset; -#endif - changed = 1; - } -#endif -#else - if ((0)); -#endif -#if PY_VERSION_HEX <= 0x030900b1 && CYTHON_COMPILING_IN_CPYTHON - else if (strcmp(memb->name, "__module__") == 0) { - PyObject *descr; - assert(memb->type == T_OBJECT); - assert(memb->flags == 0 || memb->flags == READONLY); - descr = PyDescr_NewMember(type, memb); - if (unlikely(!descr)) - return -1; - if (unlikely(PyDict_SetItem(type->tp_dict, PyDescr_NAME(descr), descr) < 0)) { - Py_DECREF(descr); - return -1; - } - Py_DECREF(descr); - changed = 1; - } -#endif - } - memb++; - } - if (changed) - PyType_Modified(type); - } -#endif - return 0; -} -#endif - -/* FetchSharedCythonModule */ -static PyObject *__Pyx_FetchSharedCythonABIModule(void) { - PyObject *abi_module = PyImport_AddModule((char*) __PYX_ABI_MODULE_NAME); - if (unlikely(!abi_module)) return NULL; - Py_INCREF(abi_module); - return abi_module; -} - -/* FetchCommonType */ -static int __Pyx_VerifyCachedType(PyObject *cached_type, - const char *name, - Py_ssize_t basicsize, - Py_ssize_t expected_basicsize) { - if (!PyType_Check(cached_type)) { - PyErr_Format(PyExc_TypeError, - "Shared Cython type %.200s is not a type object", name); - return -1; - } - if (basicsize != expected_basicsize) { - PyErr_Format(PyExc_TypeError, - "Shared Cython type %.200s has the wrong size, try recompiling", - name); - return -1; - } - return 0; -} -#if !CYTHON_USE_TYPE_SPECS -static PyTypeObject* __Pyx_FetchCommonType(PyTypeObject* type) { - PyObject* abi_module; - const char* object_name; - PyTypeObject *cached_type = NULL; - abi_module = __Pyx_FetchSharedCythonABIModule(); - if (!abi_module) return NULL; - object_name = strrchr(type->tp_name, '.'); - object_name = object_name ? object_name+1 : type->tp_name; - cached_type = (PyTypeObject*) PyObject_GetAttrString(abi_module, object_name); - if (cached_type) { - if (__Pyx_VerifyCachedType( - (PyObject *)cached_type, - object_name, - cached_type->tp_basicsize, - type->tp_basicsize) < 0) { - goto bad; - } - goto done; - } - if (!PyErr_ExceptionMatches(PyExc_AttributeError)) goto bad; - PyErr_Clear(); - if (PyType_Ready(type) < 0) goto bad; - if (PyObject_SetAttrString(abi_module, object_name, (PyObject *)type) < 0) - goto bad; - Py_INCREF(type); - cached_type = type; -done: - Py_DECREF(abi_module); - return cached_type; -bad: - Py_XDECREF(cached_type); - cached_type = NULL; - goto done; -} -#else -static PyTypeObject *__Pyx_FetchCommonTypeFromSpec(PyObject *module, PyType_Spec *spec, PyObject *bases) { - PyObject *abi_module, *cached_type = NULL; - const char* object_name = strrchr(spec->name, '.'); - object_name = object_name ? 
object_name+1 : spec->name; - abi_module = __Pyx_FetchSharedCythonABIModule(); - if (!abi_module) return NULL; - cached_type = PyObject_GetAttrString(abi_module, object_name); - if (cached_type) { - Py_ssize_t basicsize; -#if CYTHON_COMPILING_IN_LIMITED_API - PyObject *py_basicsize; - py_basicsize = PyObject_GetAttrString(cached_type, "__basicsize__"); - if (unlikely(!py_basicsize)) goto bad; - basicsize = PyLong_AsSsize_t(py_basicsize); - Py_DECREF(py_basicsize); - py_basicsize = 0; - if (unlikely(basicsize == (Py_ssize_t)-1) && PyErr_Occurred()) goto bad; -#else - basicsize = likely(PyType_Check(cached_type)) ? ((PyTypeObject*) cached_type)->tp_basicsize : -1; -#endif - if (__Pyx_VerifyCachedType( - cached_type, - object_name, - basicsize, - spec->basicsize) < 0) { - goto bad; - } - goto done; - } - if (!PyErr_ExceptionMatches(PyExc_AttributeError)) goto bad; - PyErr_Clear(); - CYTHON_UNUSED_VAR(module); - cached_type = __Pyx_PyType_FromModuleAndSpec(abi_module, spec, bases); - if (unlikely(!cached_type)) goto bad; - if (unlikely(__Pyx_fix_up_extension_type_from_spec(spec, (PyTypeObject *) cached_type) < 0)) goto bad; - if (PyObject_SetAttrString(abi_module, object_name, cached_type) < 0) goto bad; -done: - Py_DECREF(abi_module); - assert(cached_type == NULL || PyType_Check(cached_type)); - return (PyTypeObject *) cached_type; -bad: - Py_XDECREF(cached_type); - cached_type = NULL; - goto done; -} -#endif - -/* PyVectorcallFastCallDict */ -#if CYTHON_METH_FASTCALL -static PyObject *__Pyx_PyVectorcall_FastCallDict_kw(PyObject *func, __pyx_vectorcallfunc vc, PyObject *const *args, size_t nargs, PyObject *kw) -{ - PyObject *res = NULL; - PyObject *kwnames; - PyObject **newargs; - PyObject **kwvalues; - Py_ssize_t i, pos; - size_t j; - PyObject *key, *value; - unsigned long keys_are_strings; - Py_ssize_t nkw = PyDict_GET_SIZE(kw); - newargs = (PyObject **)PyMem_Malloc((nargs + (size_t)nkw) * sizeof(args[0])); - if (unlikely(newargs == NULL)) { - PyErr_NoMemory(); - return NULL; - } - for (j = 0; j < nargs; j++) newargs[j] = args[j]; - kwnames = PyTuple_New(nkw); - if (unlikely(kwnames == NULL)) { - PyMem_Free(newargs); - return NULL; - } - kwvalues = newargs + nargs; - pos = i = 0; - keys_are_strings = Py_TPFLAGS_UNICODE_SUBCLASS; - while (PyDict_Next(kw, &pos, &key, &value)) { - keys_are_strings &= Py_TYPE(key)->tp_flags; - Py_INCREF(key); - Py_INCREF(value); - PyTuple_SET_ITEM(kwnames, i, key); - kwvalues[i] = value; - i++; - } - if (unlikely(!keys_are_strings)) { - PyErr_SetString(PyExc_TypeError, "keywords must be strings"); - goto cleanup; - } - res = vc(func, newargs, nargs, kwnames); -cleanup: - Py_DECREF(kwnames); - for (i = 0; i < nkw; i++) - Py_DECREF(kwvalues[i]); - PyMem_Free(newargs); - return res; -} -static CYTHON_INLINE PyObject *__Pyx_PyVectorcall_FastCallDict(PyObject *func, __pyx_vectorcallfunc vc, PyObject *const *args, size_t nargs, PyObject *kw) -{ - if (likely(kw == NULL) || PyDict_GET_SIZE(kw) == 0) { - return vc(func, args, nargs, NULL); - } - return __Pyx_PyVectorcall_FastCallDict_kw(func, vc, args, nargs, kw); -} -#endif - -/* CythonFunctionShared */ -static CYTHON_INLINE void __Pyx__CyFunction_SetClassObj(__pyx_CyFunctionObject* f, PyObject* classobj) { -#if PY_VERSION_HEX < 0x030900B1 || CYTHON_COMPILING_IN_LIMITED_API - __Pyx_Py_XDECREF_SET( - __Pyx_CyFunction_GetClassObj(f), - ((classobj) ? __Pyx_NewRef(classobj) : NULL)); -#else - __Pyx_Py_XDECREF_SET( - ((PyCMethodObject *) (f))->mm_class, - (PyTypeObject*)((classobj) ? 
__Pyx_NewRef(classobj) : NULL)); -#endif -} -static PyObject * -__Pyx_CyFunction_get_doc(__pyx_CyFunctionObject *op, void *closure) -{ - CYTHON_UNUSED_VAR(closure); - if (unlikely(op->func_doc == NULL)) { -#if CYTHON_COMPILING_IN_LIMITED_API - op->func_doc = PyObject_GetAttrString(op->func, "__doc__"); - if (unlikely(!op->func_doc)) return NULL; -#else - if (((PyCFunctionObject*)op)->m_ml->ml_doc) { -#if PY_MAJOR_VERSION >= 3 - op->func_doc = PyUnicode_FromString(((PyCFunctionObject*)op)->m_ml->ml_doc); -#else - op->func_doc = PyString_FromString(((PyCFunctionObject*)op)->m_ml->ml_doc); -#endif - if (unlikely(op->func_doc == NULL)) - return NULL; - } else { - Py_INCREF(Py_None); - return Py_None; - } -#endif - } - Py_INCREF(op->func_doc); - return op->func_doc; -} -static int -__Pyx_CyFunction_set_doc(__pyx_CyFunctionObject *op, PyObject *value, void *context) -{ - CYTHON_UNUSED_VAR(context); - if (value == NULL) { - value = Py_None; - } - Py_INCREF(value); - __Pyx_Py_XDECREF_SET(op->func_doc, value); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_name(__pyx_CyFunctionObject *op, void *context) -{ - CYTHON_UNUSED_VAR(context); - if (unlikely(op->func_name == NULL)) { -#if CYTHON_COMPILING_IN_LIMITED_API - op->func_name = PyObject_GetAttrString(op->func, "__name__"); -#elif PY_MAJOR_VERSION >= 3 - op->func_name = PyUnicode_InternFromString(((PyCFunctionObject*)op)->m_ml->ml_name); -#else - op->func_name = PyString_InternFromString(((PyCFunctionObject*)op)->m_ml->ml_name); -#endif - if (unlikely(op->func_name == NULL)) - return NULL; - } - Py_INCREF(op->func_name); - return op->func_name; -} -static int -__Pyx_CyFunction_set_name(__pyx_CyFunctionObject *op, PyObject *value, void *context) -{ - CYTHON_UNUSED_VAR(context); -#if PY_MAJOR_VERSION >= 3 - if (unlikely(value == NULL || !PyUnicode_Check(value))) -#else - if (unlikely(value == NULL || !PyString_Check(value))) -#endif - { - PyErr_SetString(PyExc_TypeError, - "__name__ must be set to a string object"); - return -1; - } - Py_INCREF(value); - __Pyx_Py_XDECREF_SET(op->func_name, value); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_qualname(__pyx_CyFunctionObject *op, void *context) -{ - CYTHON_UNUSED_VAR(context); - Py_INCREF(op->func_qualname); - return op->func_qualname; -} -static int -__Pyx_CyFunction_set_qualname(__pyx_CyFunctionObject *op, PyObject *value, void *context) -{ - CYTHON_UNUSED_VAR(context); -#if PY_MAJOR_VERSION >= 3 - if (unlikely(value == NULL || !PyUnicode_Check(value))) -#else - if (unlikely(value == NULL || !PyString_Check(value))) -#endif - { - PyErr_SetString(PyExc_TypeError, - "__qualname__ must be set to a string object"); - return -1; - } - Py_INCREF(value); - __Pyx_Py_XDECREF_SET(op->func_qualname, value); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_dict(__pyx_CyFunctionObject *op, void *context) -{ - CYTHON_UNUSED_VAR(context); - if (unlikely(op->func_dict == NULL)) { - op->func_dict = PyDict_New(); - if (unlikely(op->func_dict == NULL)) - return NULL; - } - Py_INCREF(op->func_dict); - return op->func_dict; -} -static int -__Pyx_CyFunction_set_dict(__pyx_CyFunctionObject *op, PyObject *value, void *context) -{ - CYTHON_UNUSED_VAR(context); - if (unlikely(value == NULL)) { - PyErr_SetString(PyExc_TypeError, - "function's dictionary may not be deleted"); - return -1; - } - if (unlikely(!PyDict_Check(value))) { - PyErr_SetString(PyExc_TypeError, - "setting function's dictionary to a non-dict"); - return -1; - } - Py_INCREF(value); - __Pyx_Py_XDECREF_SET(op->func_dict, value); - 
return 0; -} -static PyObject * -__Pyx_CyFunction_get_globals(__pyx_CyFunctionObject *op, void *context) -{ - CYTHON_UNUSED_VAR(context); - Py_INCREF(op->func_globals); - return op->func_globals; -} -static PyObject * -__Pyx_CyFunction_get_closure(__pyx_CyFunctionObject *op, void *context) -{ - CYTHON_UNUSED_VAR(op); - CYTHON_UNUSED_VAR(context); - Py_INCREF(Py_None); - return Py_None; -} -static PyObject * -__Pyx_CyFunction_get_code(__pyx_CyFunctionObject *op, void *context) -{ - PyObject* result = (op->func_code) ? op->func_code : Py_None; - CYTHON_UNUSED_VAR(context); - Py_INCREF(result); - return result; -} -static int -__Pyx_CyFunction_init_defaults(__pyx_CyFunctionObject *op) { - int result = 0; - PyObject *res = op->defaults_getter((PyObject *) op); - if (unlikely(!res)) - return -1; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - op->defaults_tuple = PyTuple_GET_ITEM(res, 0); - Py_INCREF(op->defaults_tuple); - op->defaults_kwdict = PyTuple_GET_ITEM(res, 1); - Py_INCREF(op->defaults_kwdict); - #else - op->defaults_tuple = __Pyx_PySequence_ITEM(res, 0); - if (unlikely(!op->defaults_tuple)) result = -1; - else { - op->defaults_kwdict = __Pyx_PySequence_ITEM(res, 1); - if (unlikely(!op->defaults_kwdict)) result = -1; - } - #endif - Py_DECREF(res); - return result; -} -static int -__Pyx_CyFunction_set_defaults(__pyx_CyFunctionObject *op, PyObject* value, void *context) { - CYTHON_UNUSED_VAR(context); - if (!value) { - value = Py_None; - } else if (unlikely(value != Py_None && !PyTuple_Check(value))) { - PyErr_SetString(PyExc_TypeError, - "__defaults__ must be set to a tuple object"); - return -1; - } - PyErr_WarnEx(PyExc_RuntimeWarning, "changes to cyfunction.__defaults__ will not " - "currently affect the values used in function calls", 1); - Py_INCREF(value); - __Pyx_Py_XDECREF_SET(op->defaults_tuple, value); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_defaults(__pyx_CyFunctionObject *op, void *context) { - PyObject* result = op->defaults_tuple; - CYTHON_UNUSED_VAR(context); - if (unlikely(!result)) { - if (op->defaults_getter) { - if (unlikely(__Pyx_CyFunction_init_defaults(op) < 0)) return NULL; - result = op->defaults_tuple; - } else { - result = Py_None; - } - } - Py_INCREF(result); - return result; -} -static int -__Pyx_CyFunction_set_kwdefaults(__pyx_CyFunctionObject *op, PyObject* value, void *context) { - CYTHON_UNUSED_VAR(context); - if (!value) { - value = Py_None; - } else if (unlikely(value != Py_None && !PyDict_Check(value))) { - PyErr_SetString(PyExc_TypeError, - "__kwdefaults__ must be set to a dict object"); - return -1; - } - PyErr_WarnEx(PyExc_RuntimeWarning, "changes to cyfunction.__kwdefaults__ will not " - "currently affect the values used in function calls", 1); - Py_INCREF(value); - __Pyx_Py_XDECREF_SET(op->defaults_kwdict, value); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_kwdefaults(__pyx_CyFunctionObject *op, void *context) { - PyObject* result = op->defaults_kwdict; - CYTHON_UNUSED_VAR(context); - if (unlikely(!result)) { - if (op->defaults_getter) { - if (unlikely(__Pyx_CyFunction_init_defaults(op) < 0)) return NULL; - result = op->defaults_kwdict; - } else { - result = Py_None; - } - } - Py_INCREF(result); - return result; -} -static int -__Pyx_CyFunction_set_annotations(__pyx_CyFunctionObject *op, PyObject* value, void *context) { - CYTHON_UNUSED_VAR(context); - if (!value || value == Py_None) { - value = NULL; - } else if (unlikely(!PyDict_Check(value))) { - PyErr_SetString(PyExc_TypeError, - "__annotations__ 
must be set to a dict object"); - return -1; - } - Py_XINCREF(value); - __Pyx_Py_XDECREF_SET(op->func_annotations, value); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_annotations(__pyx_CyFunctionObject *op, void *context) { - PyObject* result = op->func_annotations; - CYTHON_UNUSED_VAR(context); - if (unlikely(!result)) { - result = PyDict_New(); - if (unlikely(!result)) return NULL; - op->func_annotations = result; - } - Py_INCREF(result); - return result; -} -static PyObject * -__Pyx_CyFunction_get_is_coroutine(__pyx_CyFunctionObject *op, void *context) { - int is_coroutine; - CYTHON_UNUSED_VAR(context); - if (op->func_is_coroutine) { - return __Pyx_NewRef(op->func_is_coroutine); - } - is_coroutine = op->flags & __Pyx_CYFUNCTION_COROUTINE; -#if PY_VERSION_HEX >= 0x03050000 - if (is_coroutine) { - PyObject *module, *fromlist, *marker = __pyx_n_s_is_coroutine; - fromlist = PyList_New(1); - if (unlikely(!fromlist)) return NULL; - Py_INCREF(marker); -#if CYTHON_ASSUME_SAFE_MACROS - PyList_SET_ITEM(fromlist, 0, marker); -#else - if (unlikely(PyList_SetItem(fromlist, 0, marker) < 0)) { - Py_DECREF(marker); - Py_DECREF(fromlist); - return NULL; - } -#endif - module = PyImport_ImportModuleLevelObject(__pyx_n_s_asyncio_coroutines, NULL, NULL, fromlist, 0); - Py_DECREF(fromlist); - if (unlikely(!module)) goto ignore; - op->func_is_coroutine = __Pyx_PyObject_GetAttrStr(module, marker); - Py_DECREF(module); - if (likely(op->func_is_coroutine)) { - return __Pyx_NewRef(op->func_is_coroutine); - } -ignore: - PyErr_Clear(); - } -#endif - op->func_is_coroutine = __Pyx_PyBool_FromLong(is_coroutine); - return __Pyx_NewRef(op->func_is_coroutine); -} -#if CYTHON_COMPILING_IN_LIMITED_API -static PyObject * -__Pyx_CyFunction_get_module(__pyx_CyFunctionObject *op, void *context) { - CYTHON_UNUSED_VAR(context); - return PyObject_GetAttrString(op->func, "__module__"); -} -static int -__Pyx_CyFunction_set_module(__pyx_CyFunctionObject *op, PyObject* value, void *context) { - CYTHON_UNUSED_VAR(context); - return PyObject_SetAttrString(op->func, "__module__", value); -} -#endif -static PyGetSetDef __pyx_CyFunction_getsets[] = { - {(char *) "func_doc", (getter)__Pyx_CyFunction_get_doc, (setter)__Pyx_CyFunction_set_doc, 0, 0}, - {(char *) "__doc__", (getter)__Pyx_CyFunction_get_doc, (setter)__Pyx_CyFunction_set_doc, 0, 0}, - {(char *) "func_name", (getter)__Pyx_CyFunction_get_name, (setter)__Pyx_CyFunction_set_name, 0, 0}, - {(char *) "__name__", (getter)__Pyx_CyFunction_get_name, (setter)__Pyx_CyFunction_set_name, 0, 0}, - {(char *) "__qualname__", (getter)__Pyx_CyFunction_get_qualname, (setter)__Pyx_CyFunction_set_qualname, 0, 0}, - {(char *) "func_dict", (getter)__Pyx_CyFunction_get_dict, (setter)__Pyx_CyFunction_set_dict, 0, 0}, - {(char *) "__dict__", (getter)__Pyx_CyFunction_get_dict, (setter)__Pyx_CyFunction_set_dict, 0, 0}, - {(char *) "func_globals", (getter)__Pyx_CyFunction_get_globals, 0, 0, 0}, - {(char *) "__globals__", (getter)__Pyx_CyFunction_get_globals, 0, 0, 0}, - {(char *) "func_closure", (getter)__Pyx_CyFunction_get_closure, 0, 0, 0}, - {(char *) "__closure__", (getter)__Pyx_CyFunction_get_closure, 0, 0, 0}, - {(char *) "func_code", (getter)__Pyx_CyFunction_get_code, 0, 0, 0}, - {(char *) "__code__", (getter)__Pyx_CyFunction_get_code, 0, 0, 0}, - {(char *) "func_defaults", (getter)__Pyx_CyFunction_get_defaults, (setter)__Pyx_CyFunction_set_defaults, 0, 0}, - {(char *) "__defaults__", (getter)__Pyx_CyFunction_get_defaults, (setter)__Pyx_CyFunction_set_defaults, 0, 0}, - {(char *) 
"__kwdefaults__", (getter)__Pyx_CyFunction_get_kwdefaults, (setter)__Pyx_CyFunction_set_kwdefaults, 0, 0}, - {(char *) "__annotations__", (getter)__Pyx_CyFunction_get_annotations, (setter)__Pyx_CyFunction_set_annotations, 0, 0}, - {(char *) "_is_coroutine", (getter)__Pyx_CyFunction_get_is_coroutine, 0, 0, 0}, -#if CYTHON_COMPILING_IN_LIMITED_API - {"__module__", (getter)__Pyx_CyFunction_get_module, (setter)__Pyx_CyFunction_set_module, 0, 0}, -#endif - {0, 0, 0, 0, 0} -}; -static PyMemberDef __pyx_CyFunction_members[] = { -#if !CYTHON_COMPILING_IN_LIMITED_API - {(char *) "__module__", T_OBJECT, offsetof(PyCFunctionObject, m_module), 0, 0}, -#endif -#if CYTHON_USE_TYPE_SPECS - {(char *) "__dictoffset__", T_PYSSIZET, offsetof(__pyx_CyFunctionObject, func_dict), READONLY, 0}, -#if CYTHON_METH_FASTCALL -#if CYTHON_BACKPORT_VECTORCALL - {(char *) "__vectorcalloffset__", T_PYSSIZET, offsetof(__pyx_CyFunctionObject, func_vectorcall), READONLY, 0}, -#else -#if !CYTHON_COMPILING_IN_LIMITED_API - {(char *) "__vectorcalloffset__", T_PYSSIZET, offsetof(PyCFunctionObject, vectorcall), READONLY, 0}, -#endif -#endif -#endif -#if PY_VERSION_HEX < 0x030500A0 || CYTHON_COMPILING_IN_LIMITED_API - {(char *) "__weaklistoffset__", T_PYSSIZET, offsetof(__pyx_CyFunctionObject, func_weakreflist), READONLY, 0}, -#else - {(char *) "__weaklistoffset__", T_PYSSIZET, offsetof(PyCFunctionObject, m_weakreflist), READONLY, 0}, -#endif -#endif - {0, 0, 0, 0, 0} -}; -static PyObject * -__Pyx_CyFunction_reduce(__pyx_CyFunctionObject *m, PyObject *args) -{ - CYTHON_UNUSED_VAR(args); -#if PY_MAJOR_VERSION >= 3 - Py_INCREF(m->func_qualname); - return m->func_qualname; -#else - return PyString_FromString(((PyCFunctionObject*)m)->m_ml->ml_name); -#endif -} -static PyMethodDef __pyx_CyFunction_methods[] = { - {"__reduce__", (PyCFunction)__Pyx_CyFunction_reduce, METH_VARARGS, 0}, - {0, 0, 0, 0} -}; -#if PY_VERSION_HEX < 0x030500A0 || CYTHON_COMPILING_IN_LIMITED_API -#define __Pyx_CyFunction_weakreflist(cyfunc) ((cyfunc)->func_weakreflist) -#else -#define __Pyx_CyFunction_weakreflist(cyfunc) (((PyCFunctionObject*)cyfunc)->m_weakreflist) -#endif -static PyObject *__Pyx_CyFunction_Init(__pyx_CyFunctionObject *op, PyMethodDef *ml, int flags, PyObject* qualname, - PyObject *closure, PyObject *module, PyObject* globals, PyObject* code) { -#if !CYTHON_COMPILING_IN_LIMITED_API - PyCFunctionObject *cf = (PyCFunctionObject*) op; -#endif - if (unlikely(op == NULL)) - return NULL; -#if CYTHON_COMPILING_IN_LIMITED_API - op->func = PyCFunction_NewEx(ml, (PyObject*)op, module); - if (unlikely(!op->func)) return NULL; -#endif - op->flags = flags; - __Pyx_CyFunction_weakreflist(op) = NULL; -#if !CYTHON_COMPILING_IN_LIMITED_API - cf->m_ml = ml; - cf->m_self = (PyObject *) op; -#endif - Py_XINCREF(closure); - op->func_closure = closure; -#if !CYTHON_COMPILING_IN_LIMITED_API - Py_XINCREF(module); - cf->m_module = module; -#endif - op->func_dict = NULL; - op->func_name = NULL; - Py_INCREF(qualname); - op->func_qualname = qualname; - op->func_doc = NULL; -#if PY_VERSION_HEX < 0x030900B1 || CYTHON_COMPILING_IN_LIMITED_API - op->func_classobj = NULL; -#else - ((PyCMethodObject*)op)->mm_class = NULL; -#endif - op->func_globals = globals; - Py_INCREF(op->func_globals); - Py_XINCREF(code); - op->func_code = code; - op->defaults_pyobjects = 0; - op->defaults_size = 0; - op->defaults = NULL; - op->defaults_tuple = NULL; - op->defaults_kwdict = NULL; - op->defaults_getter = NULL; - op->func_annotations = NULL; - op->func_is_coroutine = NULL; -#if 
CYTHON_METH_FASTCALL - switch (ml->ml_flags & (METH_VARARGS | METH_FASTCALL | METH_NOARGS | METH_O | METH_KEYWORDS | METH_METHOD)) { - case METH_NOARGS: - __Pyx_CyFunction_func_vectorcall(op) = __Pyx_CyFunction_Vectorcall_NOARGS; - break; - case METH_O: - __Pyx_CyFunction_func_vectorcall(op) = __Pyx_CyFunction_Vectorcall_O; - break; - case METH_METHOD | METH_FASTCALL | METH_KEYWORDS: - __Pyx_CyFunction_func_vectorcall(op) = __Pyx_CyFunction_Vectorcall_FASTCALL_KEYWORDS_METHOD; - break; - case METH_FASTCALL | METH_KEYWORDS: - __Pyx_CyFunction_func_vectorcall(op) = __Pyx_CyFunction_Vectorcall_FASTCALL_KEYWORDS; - break; - case METH_VARARGS | METH_KEYWORDS: - __Pyx_CyFunction_func_vectorcall(op) = NULL; - break; - default: - PyErr_SetString(PyExc_SystemError, "Bad call flags for CyFunction"); - Py_DECREF(op); - return NULL; - } -#endif - return (PyObject *) op; -} -static int -__Pyx_CyFunction_clear(__pyx_CyFunctionObject *m) -{ - Py_CLEAR(m->func_closure); -#if CYTHON_COMPILING_IN_LIMITED_API - Py_CLEAR(m->func); -#else - Py_CLEAR(((PyCFunctionObject*)m)->m_module); -#endif - Py_CLEAR(m->func_dict); - Py_CLEAR(m->func_name); - Py_CLEAR(m->func_qualname); - Py_CLEAR(m->func_doc); - Py_CLEAR(m->func_globals); - Py_CLEAR(m->func_code); -#if !CYTHON_COMPILING_IN_LIMITED_API -#if PY_VERSION_HEX < 0x030900B1 - Py_CLEAR(__Pyx_CyFunction_GetClassObj(m)); -#else - { - PyObject *cls = (PyObject*) ((PyCMethodObject *) (m))->mm_class; - ((PyCMethodObject *) (m))->mm_class = NULL; - Py_XDECREF(cls); - } -#endif -#endif - Py_CLEAR(m->defaults_tuple); - Py_CLEAR(m->defaults_kwdict); - Py_CLEAR(m->func_annotations); - Py_CLEAR(m->func_is_coroutine); - if (m->defaults) { - PyObject **pydefaults = __Pyx_CyFunction_Defaults(PyObject *, m); - int i; - for (i = 0; i < m->defaults_pyobjects; i++) - Py_XDECREF(pydefaults[i]); - PyObject_Free(m->defaults); - m->defaults = NULL; - } - return 0; -} -static void __Pyx__CyFunction_dealloc(__pyx_CyFunctionObject *m) -{ - if (__Pyx_CyFunction_weakreflist(m) != NULL) - PyObject_ClearWeakRefs((PyObject *) m); - __Pyx_CyFunction_clear(m); - __Pyx_PyHeapTypeObject_GC_Del(m); -} -static void __Pyx_CyFunction_dealloc(__pyx_CyFunctionObject *m) -{ - PyObject_GC_UnTrack(m); - __Pyx__CyFunction_dealloc(m); -} -static int __Pyx_CyFunction_traverse(__pyx_CyFunctionObject *m, visitproc visit, void *arg) -{ - Py_VISIT(m->func_closure); -#if CYTHON_COMPILING_IN_LIMITED_API - Py_VISIT(m->func); -#else - Py_VISIT(((PyCFunctionObject*)m)->m_module); -#endif - Py_VISIT(m->func_dict); - Py_VISIT(m->func_name); - Py_VISIT(m->func_qualname); - Py_VISIT(m->func_doc); - Py_VISIT(m->func_globals); - Py_VISIT(m->func_code); -#if !CYTHON_COMPILING_IN_LIMITED_API - Py_VISIT(__Pyx_CyFunction_GetClassObj(m)); -#endif - Py_VISIT(m->defaults_tuple); - Py_VISIT(m->defaults_kwdict); - Py_VISIT(m->func_is_coroutine); - if (m->defaults) { - PyObject **pydefaults = __Pyx_CyFunction_Defaults(PyObject *, m); - int i; - for (i = 0; i < m->defaults_pyobjects; i++) - Py_VISIT(pydefaults[i]); - } - return 0; -} -static PyObject* -__Pyx_CyFunction_repr(__pyx_CyFunctionObject *op) -{ -#if PY_MAJOR_VERSION >= 3 - return PyUnicode_FromFormat("<cyfunction %U at %p>", - op->func_qualname, (void *)op); -#else - return PyString_FromFormat("<cyfunction %s at %p>", - PyString_AsString(op->func_qualname), (void *)op); -#endif -} -static PyObject * __Pyx_CyFunction_CallMethod(PyObject *func, PyObject *self, PyObject *arg, PyObject *kw) { -#if CYTHON_COMPILING_IN_LIMITED_API - PyObject *f = ((__pyx_CyFunctionObject*)func)->func; - PyObject *py_name = NULL; - 
PyCFunction meth; - int flags; - meth = PyCFunction_GetFunction(f); - if (unlikely(!meth)) return NULL; - flags = PyCFunction_GetFlags(f); - if (unlikely(flags < 0)) return NULL; -#else - PyCFunctionObject* f = (PyCFunctionObject*)func; - PyCFunction meth = f->m_ml->ml_meth; - int flags = f->m_ml->ml_flags; -#endif - Py_ssize_t size; - switch (flags & (METH_VARARGS | METH_KEYWORDS | METH_NOARGS | METH_O)) { - case METH_VARARGS: - if (likely(kw == NULL || PyDict_Size(kw) == 0)) - return (*meth)(self, arg); - break; - case METH_VARARGS | METH_KEYWORDS: - return (*(PyCFunctionWithKeywords)(void*)meth)(self, arg, kw); - case METH_NOARGS: - if (likely(kw == NULL || PyDict_Size(kw) == 0)) { -#if CYTHON_ASSUME_SAFE_MACROS - size = PyTuple_GET_SIZE(arg); -#else - size = PyTuple_Size(arg); - if (unlikely(size < 0)) return NULL; -#endif - if (likely(size == 0)) - return (*meth)(self, NULL); -#if CYTHON_COMPILING_IN_LIMITED_API - py_name = __Pyx_CyFunction_get_name((__pyx_CyFunctionObject*)func, NULL); - if (!py_name) return NULL; - PyErr_Format(PyExc_TypeError, - "%.200S() takes no arguments (%" CYTHON_FORMAT_SSIZE_T "d given)", - py_name, size); - Py_DECREF(py_name); -#else - PyErr_Format(PyExc_TypeError, - "%.200s() takes no arguments (%" CYTHON_FORMAT_SSIZE_T "d given)", - f->m_ml->ml_name, size); -#endif - return NULL; - } - break; - case METH_O: - if (likely(kw == NULL || PyDict_Size(kw) == 0)) { -#if CYTHON_ASSUME_SAFE_MACROS - size = PyTuple_GET_SIZE(arg); -#else - size = PyTuple_Size(arg); - if (unlikely(size < 0)) return NULL; -#endif - if (likely(size == 1)) { - PyObject *result, *arg0; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - arg0 = PyTuple_GET_ITEM(arg, 0); - #else - arg0 = __Pyx_PySequence_ITEM(arg, 0); if (unlikely(!arg0)) return NULL; - #endif - result = (*meth)(self, arg0); - #if !(CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS) - Py_DECREF(arg0); - #endif - return result; - } -#if CYTHON_COMPILING_IN_LIMITED_API - py_name = __Pyx_CyFunction_get_name((__pyx_CyFunctionObject*)func, NULL); - if (!py_name) return NULL; - PyErr_Format(PyExc_TypeError, - "%.200S() takes exactly one argument (%" CYTHON_FORMAT_SSIZE_T "d given)", - py_name, size); - Py_DECREF(py_name); -#else - PyErr_Format(PyExc_TypeError, - "%.200s() takes exactly one argument (%" CYTHON_FORMAT_SSIZE_T "d given)", - f->m_ml->ml_name, size); -#endif - return NULL; - } - break; - default: - PyErr_SetString(PyExc_SystemError, "Bad call flags for CyFunction"); - return NULL; - } -#if CYTHON_COMPILING_IN_LIMITED_API - py_name = __Pyx_CyFunction_get_name((__pyx_CyFunctionObject*)func, NULL); - if (!py_name) return NULL; - PyErr_Format(PyExc_TypeError, "%.200S() takes no keyword arguments", - py_name); - Py_DECREF(py_name); -#else - PyErr_Format(PyExc_TypeError, "%.200s() takes no keyword arguments", - f->m_ml->ml_name); -#endif - return NULL; -} -static CYTHON_INLINE PyObject *__Pyx_CyFunction_Call(PyObject *func, PyObject *arg, PyObject *kw) { - PyObject *self, *result; -#if CYTHON_COMPILING_IN_LIMITED_API - self = PyCFunction_GetSelf(((__pyx_CyFunctionObject*)func)->func); - if (unlikely(!self) && PyErr_Occurred()) return NULL; -#else - self = ((PyCFunctionObject*)func)->m_self; -#endif - result = __Pyx_CyFunction_CallMethod(func, self, arg, kw); - return result; -} -static PyObject *__Pyx_CyFunction_CallAsMethod(PyObject *func, PyObject *args, PyObject *kw) { - PyObject *result; - __pyx_CyFunctionObject *cyfunc = (__pyx_CyFunctionObject *) func; -#if CYTHON_METH_FASTCALL - 
__pyx_vectorcallfunc vc = __Pyx_CyFunction_func_vectorcall(cyfunc); - if (vc) { -#if CYTHON_ASSUME_SAFE_MACROS - return __Pyx_PyVectorcall_FastCallDict(func, vc, &PyTuple_GET_ITEM(args, 0), (size_t)PyTuple_GET_SIZE(args), kw); -#else - (void) &__Pyx_PyVectorcall_FastCallDict; - return PyVectorcall_Call(func, args, kw); -#endif - } -#endif - if ((cyfunc->flags & __Pyx_CYFUNCTION_CCLASS) && !(cyfunc->flags & __Pyx_CYFUNCTION_STATICMETHOD)) { - Py_ssize_t argc; - PyObject *new_args; - PyObject *self; -#if CYTHON_ASSUME_SAFE_MACROS - argc = PyTuple_GET_SIZE(args); -#else - argc = PyTuple_Size(args); - if (unlikely(!argc) < 0) return NULL; -#endif - new_args = PyTuple_GetSlice(args, 1, argc); - if (unlikely(!new_args)) - return NULL; - self = PyTuple_GetItem(args, 0); - if (unlikely(!self)) { - Py_DECREF(new_args); -#if PY_MAJOR_VERSION > 2 - PyErr_Format(PyExc_TypeError, - "unbound method %.200S() needs an argument", - cyfunc->func_qualname); -#else - PyErr_SetString(PyExc_TypeError, - "unbound method needs an argument"); -#endif - return NULL; - } - result = __Pyx_CyFunction_CallMethod(func, self, new_args, kw); - Py_DECREF(new_args); - } else { - result = __Pyx_CyFunction_Call(func, args, kw); - } - return result; -} -#if CYTHON_METH_FASTCALL -static CYTHON_INLINE int __Pyx_CyFunction_Vectorcall_CheckArgs(__pyx_CyFunctionObject *cyfunc, Py_ssize_t nargs, PyObject *kwnames) -{ - int ret = 0; - if ((cyfunc->flags & __Pyx_CYFUNCTION_CCLASS) && !(cyfunc->flags & __Pyx_CYFUNCTION_STATICMETHOD)) { - if (unlikely(nargs < 1)) { - PyErr_Format(PyExc_TypeError, "%.200s() needs an argument", - ((PyCFunctionObject*)cyfunc)->m_ml->ml_name); - return -1; - } - ret = 1; - } - if (unlikely(kwnames) && unlikely(PyTuple_GET_SIZE(kwnames))) { - PyErr_Format(PyExc_TypeError, - "%.200s() takes no keyword arguments", ((PyCFunctionObject*)cyfunc)->m_ml->ml_name); - return -1; - } - return ret; -} -static PyObject * __Pyx_CyFunction_Vectorcall_NOARGS(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames) -{ - __pyx_CyFunctionObject *cyfunc = (__pyx_CyFunctionObject *)func; - PyMethodDef* def = ((PyCFunctionObject*)cyfunc)->m_ml; -#if CYTHON_BACKPORT_VECTORCALL - Py_ssize_t nargs = (Py_ssize_t)nargsf; -#else - Py_ssize_t nargs = PyVectorcall_NARGS(nargsf); -#endif - PyObject *self; - switch (__Pyx_CyFunction_Vectorcall_CheckArgs(cyfunc, nargs, kwnames)) { - case 1: - self = args[0]; - args += 1; - nargs -= 1; - break; - case 0: - self = ((PyCFunctionObject*)cyfunc)->m_self; - break; - default: - return NULL; - } - if (unlikely(nargs != 0)) { - PyErr_Format(PyExc_TypeError, - "%.200s() takes no arguments (%" CYTHON_FORMAT_SSIZE_T "d given)", - def->ml_name, nargs); - return NULL; - } - return def->ml_meth(self, NULL); -} -static PyObject * __Pyx_CyFunction_Vectorcall_O(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames) -{ - __pyx_CyFunctionObject *cyfunc = (__pyx_CyFunctionObject *)func; - PyMethodDef* def = ((PyCFunctionObject*)cyfunc)->m_ml; -#if CYTHON_BACKPORT_VECTORCALL - Py_ssize_t nargs = (Py_ssize_t)nargsf; -#else - Py_ssize_t nargs = PyVectorcall_NARGS(nargsf); -#endif - PyObject *self; - switch (__Pyx_CyFunction_Vectorcall_CheckArgs(cyfunc, nargs, kwnames)) { - case 1: - self = args[0]; - args += 1; - nargs -= 1; - break; - case 0: - self = ((PyCFunctionObject*)cyfunc)->m_self; - break; - default: - return NULL; - } - if (unlikely(nargs != 1)) { - PyErr_Format(PyExc_TypeError, - "%.200s() takes exactly one argument (%" CYTHON_FORMAT_SSIZE_T "d given)", - def->ml_name, 
nargs); - return NULL; - } - return def->ml_meth(self, args[0]); -} -static PyObject * __Pyx_CyFunction_Vectorcall_FASTCALL_KEYWORDS(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames) -{ - __pyx_CyFunctionObject *cyfunc = (__pyx_CyFunctionObject *)func; - PyMethodDef* def = ((PyCFunctionObject*)cyfunc)->m_ml; -#if CYTHON_BACKPORT_VECTORCALL - Py_ssize_t nargs = (Py_ssize_t)nargsf; -#else - Py_ssize_t nargs = PyVectorcall_NARGS(nargsf); -#endif - PyObject *self; - switch (__Pyx_CyFunction_Vectorcall_CheckArgs(cyfunc, nargs, NULL)) { - case 1: - self = args[0]; - args += 1; - nargs -= 1; - break; - case 0: - self = ((PyCFunctionObject*)cyfunc)->m_self; - break; - default: - return NULL; - } - return ((_PyCFunctionFastWithKeywords)(void(*)(void))def->ml_meth)(self, args, nargs, kwnames); -} -static PyObject * __Pyx_CyFunction_Vectorcall_FASTCALL_KEYWORDS_METHOD(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames) -{ - __pyx_CyFunctionObject *cyfunc = (__pyx_CyFunctionObject *)func; - PyMethodDef* def = ((PyCFunctionObject*)cyfunc)->m_ml; - PyTypeObject *cls = (PyTypeObject *) __Pyx_CyFunction_GetClassObj(cyfunc); -#if CYTHON_BACKPORT_VECTORCALL - Py_ssize_t nargs = (Py_ssize_t)nargsf; -#else - Py_ssize_t nargs = PyVectorcall_NARGS(nargsf); -#endif - PyObject *self; - switch (__Pyx_CyFunction_Vectorcall_CheckArgs(cyfunc, nargs, NULL)) { - case 1: - self = args[0]; - args += 1; - nargs -= 1; - break; - case 0: - self = ((PyCFunctionObject*)cyfunc)->m_self; - break; - default: - return NULL; - } - return ((__Pyx_PyCMethod)(void(*)(void))def->ml_meth)(self, cls, args, (size_t)nargs, kwnames); -} -#endif -#if CYTHON_USE_TYPE_SPECS -static PyType_Slot __pyx_CyFunctionType_slots[] = { - {Py_tp_dealloc, (void *)__Pyx_CyFunction_dealloc}, - {Py_tp_repr, (void *)__Pyx_CyFunction_repr}, - {Py_tp_call, (void *)__Pyx_CyFunction_CallAsMethod}, - {Py_tp_traverse, (void *)__Pyx_CyFunction_traverse}, - {Py_tp_clear, (void *)__Pyx_CyFunction_clear}, - {Py_tp_methods, (void *)__pyx_CyFunction_methods}, - {Py_tp_members, (void *)__pyx_CyFunction_members}, - {Py_tp_getset, (void *)__pyx_CyFunction_getsets}, - {Py_tp_descr_get, (void *)__Pyx_PyMethod_New}, - {0, 0}, -}; -static PyType_Spec __pyx_CyFunctionType_spec = { - __PYX_TYPE_MODULE_PREFIX "cython_function_or_method", - sizeof(__pyx_CyFunctionObject), - 0, -#ifdef Py_TPFLAGS_METHOD_DESCRIPTOR - Py_TPFLAGS_METHOD_DESCRIPTOR | -#endif -#if (defined(_Py_TPFLAGS_HAVE_VECTORCALL) && CYTHON_METH_FASTCALL) - _Py_TPFLAGS_HAVE_VECTORCALL | -#endif - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_BASETYPE, - __pyx_CyFunctionType_slots -}; -#else -static PyTypeObject __pyx_CyFunctionType_type = { - PyVarObject_HEAD_INIT(0, 0) - __PYX_TYPE_MODULE_PREFIX "cython_function_or_method", - sizeof(__pyx_CyFunctionObject), - 0, - (destructor) __Pyx_CyFunction_dealloc, -#if !CYTHON_METH_FASTCALL - 0, -#elif CYTHON_BACKPORT_VECTORCALL - (printfunc)offsetof(__pyx_CyFunctionObject, func_vectorcall), -#else - offsetof(PyCFunctionObject, vectorcall), -#endif - 0, - 0, -#if PY_MAJOR_VERSION < 3 - 0, -#else - 0, -#endif - (reprfunc) __Pyx_CyFunction_repr, - 0, - 0, - 0, - 0, - __Pyx_CyFunction_CallAsMethod, - 0, - 0, - 0, - 0, -#ifdef Py_TPFLAGS_METHOD_DESCRIPTOR - Py_TPFLAGS_METHOD_DESCRIPTOR | -#endif -#if defined(_Py_TPFLAGS_HAVE_VECTORCALL) && CYTHON_METH_FASTCALL - _Py_TPFLAGS_HAVE_VECTORCALL | -#endif - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_BASETYPE, - 0, - (traverseproc) __Pyx_CyFunction_traverse, - (inquiry) 
__Pyx_CyFunction_clear, - 0, -#if PY_VERSION_HEX < 0x030500A0 - offsetof(__pyx_CyFunctionObject, func_weakreflist), -#else - offsetof(PyCFunctionObject, m_weakreflist), -#endif - 0, - 0, - __pyx_CyFunction_methods, - __pyx_CyFunction_members, - __pyx_CyFunction_getsets, - 0, - 0, - __Pyx_PyMethod_New, - 0, - offsetof(__pyx_CyFunctionObject, func_dict), - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, -#if PY_VERSION_HEX >= 0x030400a1 - 0, -#endif -#if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, -#endif -#if __PYX_NEED_TP_PRINT_SLOT - 0, -#endif -#if PY_VERSION_HEX >= 0x030C0000 - 0, -#endif -#if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 && PY_VERSION_HEX < 0x030a0000 - 0, -#endif -}; -#endif -static int __pyx_CyFunction_init(PyObject *module) { -#if CYTHON_USE_TYPE_SPECS - __pyx_CyFunctionType = __Pyx_FetchCommonTypeFromSpec(module, &__pyx_CyFunctionType_spec, NULL); -#else - CYTHON_UNUSED_VAR(module); - __pyx_CyFunctionType = __Pyx_FetchCommonType(&__pyx_CyFunctionType_type); -#endif - if (unlikely(__pyx_CyFunctionType == NULL)) { - return -1; - } - return 0; -} -static CYTHON_INLINE void *__Pyx_CyFunction_InitDefaults(PyObject *func, size_t size, int pyobjects) { - __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func; - m->defaults = PyObject_Malloc(size); - if (unlikely(!m->defaults)) - return PyErr_NoMemory(); - memset(m->defaults, 0, size); - m->defaults_pyobjects = pyobjects; - m->defaults_size = size; - return m->defaults; -} -static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsTuple(PyObject *func, PyObject *tuple) { - __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func; - m->defaults_tuple = tuple; - Py_INCREF(tuple); -} -static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsKwDict(PyObject *func, PyObject *dict) { - __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func; - m->defaults_kwdict = dict; - Py_INCREF(dict); -} -static CYTHON_INLINE void __Pyx_CyFunction_SetAnnotationsDict(PyObject *func, PyObject *dict) { - __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func; - m->func_annotations = dict; - Py_INCREF(dict); -} - -/* CythonFunction */ -static PyObject *__Pyx_CyFunction_New(PyMethodDef *ml, int flags, PyObject* qualname, - PyObject *closure, PyObject *module, PyObject* globals, PyObject* code) { - PyObject *op = __Pyx_CyFunction_Init( - PyObject_GC_New(__pyx_CyFunctionObject, __pyx_CyFunctionType), - ml, flags, qualname, closure, module, globals, code - ); - if (likely(op)) { - PyObject_GC_Track(op); - } - return op; -} - -/* PyObjectLookupSpecial */ -#if CYTHON_USE_PYTYPE_LOOKUP && CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject* __Pyx__PyObject_LookupSpecial(PyObject* obj, PyObject* attr_name, int with_error) { - PyObject *res; - PyTypeObject *tp = Py_TYPE(obj); -#if PY_MAJOR_VERSION < 3 - if (unlikely(PyInstance_Check(obj))) - return with_error ? 
__Pyx_PyObject_GetAttrStr(obj, attr_name) : __Pyx_PyObject_GetAttrStrNoError(obj, attr_name); -#endif - res = _PyType_Lookup(tp, attr_name); - if (likely(res)) { - descrgetfunc f = Py_TYPE(res)->tp_descr_get; - if (!f) { - Py_INCREF(res); - } else { - res = f(res, obj, (PyObject *)tp); - } - } else if (with_error) { - PyErr_SetObject(PyExc_AttributeError, attr_name); - } - return res; -} -#endif - -/* Py3ClassCreate */ -static PyObject *__Pyx_Py3MetaclassPrepare(PyObject *metaclass, PyObject *bases, PyObject *name, - PyObject *qualname, PyObject *mkw, PyObject *modname, PyObject *doc) { - PyObject *ns; - if (metaclass) { - PyObject *prep = __Pyx_PyObject_GetAttrStrNoError(metaclass, __pyx_n_s_prepare); - if (prep) { - PyObject *pargs[3] = {NULL, name, bases}; - ns = __Pyx_PyObject_FastCallDict(prep, pargs+1, 2 | __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET, mkw); - Py_DECREF(prep); - } else { - if (unlikely(PyErr_Occurred())) - return NULL; - ns = PyDict_New(); - } - } else { - ns = PyDict_New(); - } - if (unlikely(!ns)) - return NULL; - if (unlikely(PyObject_SetItem(ns, __pyx_n_s_module, modname) < 0)) goto bad; -#if PY_VERSION_HEX >= 0x03030000 - if (unlikely(PyObject_SetItem(ns, __pyx_n_s_qualname, qualname) < 0)) goto bad; -#else - CYTHON_MAYBE_UNUSED_VAR(qualname); -#endif - if (unlikely(doc && PyObject_SetItem(ns, __pyx_n_s_doc, doc) < 0)) goto bad; - return ns; -bad: - Py_DECREF(ns); - return NULL; -} -#if PY_VERSION_HEX < 0x030600A4 && CYTHON_PEP487_INIT_SUBCLASS -static int __Pyx_SetNamesPEP487(PyObject *type_obj) { - PyTypeObject *type = (PyTypeObject*) type_obj; - PyObject *names_to_set, *key, *value, *set_name, *tmp; - Py_ssize_t i = 0; -#if CYTHON_USE_TYPE_SLOTS - names_to_set = PyDict_Copy(type->tp_dict); -#else - { - PyObject *d = PyObject_GetAttr(type_obj, __pyx_n_s_dict); - names_to_set = NULL; - if (likely(d)) { - PyObject *names_to_set = PyDict_New(); - int ret = likely(names_to_set) ? PyDict_Update(names_to_set, d) : -1; - Py_DECREF(d); - if (unlikely(ret < 0)) - Py_CLEAR(names_to_set); - } - } -#endif - if (unlikely(names_to_set == NULL)) - goto bad; - while (PyDict_Next(names_to_set, &i, &key, &value)) { - set_name = __Pyx_PyObject_LookupSpecialNoError(value, __pyx_n_s_set_name); - if (unlikely(set_name != NULL)) { - tmp = __Pyx_PyObject_Call2Args(set_name, type_obj, key); - Py_DECREF(set_name); - if (unlikely(tmp == NULL)) { - __Pyx_TypeName value_type_name = - __Pyx_PyType_GetName(Py_TYPE(value)); - __Pyx_TypeName type_name = __Pyx_PyType_GetName(type); - PyErr_Format(PyExc_RuntimeError, -#if PY_MAJOR_VERSION >= 3 - "Error calling __set_name__ on '" __Pyx_FMT_TYPENAME "' instance %R " "in '" __Pyx_FMT_TYPENAME "'", - value_type_name, key, type_name); -#else - "Error calling __set_name__ on '" __Pyx_FMT_TYPENAME "' instance %.100s in '" __Pyx_FMT_TYPENAME "'", - value_type_name, - PyString_Check(key) ? 
PyString_AS_STRING(key) : "?", - type_name); -#endif - goto bad; - } else { - Py_DECREF(tmp); - } - } - else if (unlikely(PyErr_Occurred())) { - goto bad; - } - } - Py_DECREF(names_to_set); - return 0; -bad: - Py_XDECREF(names_to_set); - return -1; -} -static PyObject *__Pyx_InitSubclassPEP487(PyObject *type_obj, PyObject *mkw) { -#if CYTHON_USE_TYPE_SLOTS && CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - PyTypeObject *type = (PyTypeObject*) type_obj; - PyObject *mro = type->tp_mro; - Py_ssize_t i, nbases; - if (unlikely(!mro)) goto done; - (void) &__Pyx_GetBuiltinName; - Py_INCREF(mro); - nbases = PyTuple_GET_SIZE(mro); - assert(PyTuple_GET_ITEM(mro, 0) == type_obj); - for (i = 1; i < nbases-1; i++) { - PyObject *base, *dict, *meth; - base = PyTuple_GET_ITEM(mro, i); - dict = ((PyTypeObject *)base)->tp_dict; - meth = __Pyx_PyDict_GetItemStrWithError(dict, __pyx_n_s_init_subclass); - if (unlikely(meth)) { - descrgetfunc f = Py_TYPE(meth)->tp_descr_get; - PyObject *res; - Py_INCREF(meth); - if (likely(f)) { - res = f(meth, NULL, type_obj); - Py_DECREF(meth); - if (unlikely(!res)) goto bad; - meth = res; - } - res = __Pyx_PyObject_FastCallDict(meth, NULL, 0, mkw); - Py_DECREF(meth); - if (unlikely(!res)) goto bad; - Py_DECREF(res); - goto done; - } else if (unlikely(PyErr_Occurred())) { - goto bad; - } - } -done: - Py_XDECREF(mro); - return type_obj; -bad: - Py_XDECREF(mro); - Py_DECREF(type_obj); - return NULL; -#else - PyObject *super_type, *super, *func, *res; -#if CYTHON_COMPILING_IN_PYPY && !defined(PySuper_Type) - super_type = __Pyx_GetBuiltinName(__pyx_n_s_super); -#else - super_type = (PyObject*) &PySuper_Type; - (void) &__Pyx_GetBuiltinName; -#endif - super = likely(super_type) ? __Pyx_PyObject_Call2Args(super_type, type_obj, type_obj) : NULL; -#if CYTHON_COMPILING_IN_PYPY && !defined(PySuper_Type) - Py_XDECREF(super_type); -#endif - if (unlikely(!super)) { - Py_CLEAR(type_obj); - goto done; - } - func = __Pyx_PyObject_GetAttrStrNoError(super, __pyx_n_s_init_subclass); - Py_DECREF(super); - if (likely(!func)) { - if (unlikely(PyErr_Occurred())) - Py_CLEAR(type_obj); - goto done; - } - res = __Pyx_PyObject_FastCallDict(func, NULL, 0, mkw); - Py_DECREF(func); - if (unlikely(!res)) - Py_CLEAR(type_obj); - Py_XDECREF(res); -done: - return type_obj; -#endif -} -#endif -static PyObject *__Pyx_Py3ClassCreate(PyObject *metaclass, PyObject *name, PyObject *bases, - PyObject *dict, PyObject *mkw, - int calculate_metaclass, int allow_py2_metaclass) { - PyObject *result; - PyObject *owned_metaclass = NULL; - PyObject *margs[4] = {NULL, name, bases, dict}; - if (allow_py2_metaclass) { - owned_metaclass = PyObject_GetItem(dict, __pyx_n_s_metaclass); - if (owned_metaclass) { - metaclass = owned_metaclass; - } else if (likely(PyErr_ExceptionMatches(PyExc_KeyError))) { - PyErr_Clear(); - } else { - return NULL; - } - } - if (calculate_metaclass && (!metaclass || PyType_Check(metaclass))) { - metaclass = __Pyx_CalculateMetaclass((PyTypeObject*) metaclass, bases); - Py_XDECREF(owned_metaclass); - if (unlikely(!metaclass)) - return NULL; - owned_metaclass = metaclass; - } - result = __Pyx_PyObject_FastCallDict(metaclass, margs+1, 3 | __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET, -#if PY_VERSION_HEX < 0x030600A4 - (metaclass == (PyObject*)&PyType_Type) ? 
NULL : mkw -#else - mkw -#endif - ); - Py_XDECREF(owned_metaclass); -#if PY_VERSION_HEX < 0x030600A4 && CYTHON_PEP487_INIT_SUBCLASS - if (likely(result) && likely(PyType_Check(result))) { - if (unlikely(__Pyx_SetNamesPEP487(result) < 0)) { - Py_CLEAR(result); - } else { - result = __Pyx_InitSubclassPEP487(result, mkw); - } - } -#else - (void) &__Pyx_GetBuiltinName; -#endif - return result; -} - -/* CLineInTraceback */ -#ifndef CYTHON_CLINE_IN_TRACEBACK -static int __Pyx_CLineForTraceback(PyThreadState *tstate, int c_line) { - PyObject *use_cline; - PyObject *ptype, *pvalue, *ptraceback; -#if CYTHON_COMPILING_IN_CPYTHON - PyObject **cython_runtime_dict; -#endif - CYTHON_MAYBE_UNUSED_VAR(tstate); - if (unlikely(!__pyx_cython_runtime)) { - return c_line; - } - __Pyx_ErrFetchInState(tstate, &ptype, &pvalue, &ptraceback); -#if CYTHON_COMPILING_IN_CPYTHON - cython_runtime_dict = _PyObject_GetDictPtr(__pyx_cython_runtime); - if (likely(cython_runtime_dict)) { - __PYX_PY_DICT_LOOKUP_IF_MODIFIED( - use_cline, *cython_runtime_dict, - __Pyx_PyDict_GetItemStr(*cython_runtime_dict, __pyx_n_s_cline_in_traceback)) - } else -#endif - { - PyObject *use_cline_obj = __Pyx_PyObject_GetAttrStrNoError(__pyx_cython_runtime, __pyx_n_s_cline_in_traceback); - if (use_cline_obj) { - use_cline = PyObject_Not(use_cline_obj) ? Py_False : Py_True; - Py_DECREF(use_cline_obj); - } else { - PyErr_Clear(); - use_cline = NULL; - } - } - if (!use_cline) { - c_line = 0; - (void) PyObject_SetAttr(__pyx_cython_runtime, __pyx_n_s_cline_in_traceback, Py_False); - } - else if (use_cline == Py_False || (use_cline != Py_True && PyObject_Not(use_cline) != 0)) { - c_line = 0; - } - __Pyx_ErrRestoreInState(tstate, ptype, pvalue, ptraceback); - return c_line; -} -#endif - -/* CodeObjectCache */ -#if !CYTHON_COMPILING_IN_LIMITED_API -static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line) { - int start = 0, mid = 0, end = count - 1; - if (end >= 0 && code_line > entries[end].code_line) { - return count; - } - while (start < end) { - mid = start + (end - start) / 2; - if (code_line < entries[mid].code_line) { - end = mid; - } else if (code_line > entries[mid].code_line) { - start = mid + 1; - } else { - return mid; - } - } - if (code_line <= entries[mid].code_line) { - return mid; - } else { - return mid + 1; - } -} -static PyCodeObject *__pyx_find_code_object(int code_line) { - PyCodeObject* code_object; - int pos; - if (unlikely(!code_line) || unlikely(!__pyx_code_cache.entries)) { - return NULL; - } - pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line); - if (unlikely(pos >= __pyx_code_cache.count) || unlikely(__pyx_code_cache.entries[pos].code_line != code_line)) { - return NULL; - } - code_object = __pyx_code_cache.entries[pos].code_object; - Py_INCREF(code_object); - return code_object; -} -static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object) { - int pos, i; - __Pyx_CodeObjectCacheEntry* entries = __pyx_code_cache.entries; - if (unlikely(!code_line)) { - return; - } - if (unlikely(!entries)) { - entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Malloc(64*sizeof(__Pyx_CodeObjectCacheEntry)); - if (likely(entries)) { - __pyx_code_cache.entries = entries; - __pyx_code_cache.max_count = 64; - __pyx_code_cache.count = 1; - entries[0].code_line = code_line; - entries[0].code_object = code_object; - Py_INCREF(code_object); - } - return; - } - pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line); - 
if ((pos < __pyx_code_cache.count) && unlikely(__pyx_code_cache.entries[pos].code_line == code_line)) { - PyCodeObject* tmp = entries[pos].code_object; - entries[pos].code_object = code_object; - Py_DECREF(tmp); - return; - } - if (__pyx_code_cache.count == __pyx_code_cache.max_count) { - int new_max = __pyx_code_cache.max_count + 64; - entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Realloc( - __pyx_code_cache.entries, ((size_t)new_max) * sizeof(__Pyx_CodeObjectCacheEntry)); - if (unlikely(!entries)) { - return; - } - __pyx_code_cache.entries = entries; - __pyx_code_cache.max_count = new_max; - } - for (i=__pyx_code_cache.count; i>pos; i--) { - entries[i] = entries[i-1]; - } - entries[pos].code_line = code_line; - entries[pos].code_object = code_object; - __pyx_code_cache.count++; - Py_INCREF(code_object); -} -#endif - -/* AddTraceback */ -#include "compile.h" -#include "frameobject.h" -#include "traceback.h" -#if PY_VERSION_HEX >= 0x030b00a6 && !CYTHON_COMPILING_IN_LIMITED_API - #ifndef Py_BUILD_CORE - #define Py_BUILD_CORE 1 - #endif - #include "internal/pycore_frame.h" -#endif -#if CYTHON_COMPILING_IN_LIMITED_API -static PyObject *__Pyx_PyCode_Replace_For_AddTraceback(PyObject *code, PyObject *scratch_dict, - PyObject *firstlineno, PyObject *name) { - PyObject *replace = NULL; - if (unlikely(PyDict_SetItemString(scratch_dict, "co_firstlineno", firstlineno))) return NULL; - if (unlikely(PyDict_SetItemString(scratch_dict, "co_name", name))) return NULL; - replace = PyObject_GetAttrString(code, "replace"); - if (likely(replace)) { - PyObject *result; - result = PyObject_Call(replace, __pyx_empty_tuple, scratch_dict); - Py_DECREF(replace); - return result; - } - #if __PYX_LIMITED_VERSION_HEX < 0x030780000 - PyErr_Clear(); - { - PyObject *compiled = NULL, *result = NULL; - if (unlikely(PyDict_SetItemString(scratch_dict, "code", code))) return NULL; - if (unlikely(PyDict_SetItemString(scratch_dict, "type", (PyObject*)(&PyType_Type)))) return NULL; - compiled = Py_CompileString( - "out = type(code)(\n" - " code.co_argcount, code.co_kwonlyargcount, code.co_nlocals, code.co_stacksize,\n" - " code.co_flags, code.co_code, code.co_consts, code.co_names,\n" - " code.co_varnames, code.co_filename, co_name, co_firstlineno,\n" - " code.co_lnotab)\n", "<dummy>", Py_file_input); - if (!compiled) return NULL; - result = PyEval_EvalCode(compiled, scratch_dict, scratch_dict); - Py_DECREF(compiled); - if (!result) PyErr_Print(); - Py_DECREF(result); - result = PyDict_GetItemString(scratch_dict, "out"); - if (result) Py_INCREF(result); - return result; - } - #endif -} -static void __Pyx_AddTraceback(const char *funcname, int c_line, - int py_line, const char *filename) { - PyObject *code_object = NULL, *py_py_line = NULL, *py_funcname = NULL, *dict = NULL; - PyObject *replace = NULL, *getframe = NULL, *frame = NULL; - PyObject *exc_type, *exc_value, *exc_traceback; - int success = 0; - if (c_line) { - (void) __pyx_cfilenm; - (void) __Pyx_CLineForTraceback(__Pyx_PyThreadState_Current, c_line); - } - PyErr_Fetch(&exc_type, &exc_value, &exc_traceback); - code_object = Py_CompileString("_getframe()", filename, Py_eval_input); - if (unlikely(!code_object)) goto bad; - py_py_line = PyLong_FromLong(py_line); - if (unlikely(!py_py_line)) goto bad; - py_funcname = PyUnicode_FromString(funcname); - if (unlikely(!py_funcname)) goto bad; - dict = PyDict_New(); - if (unlikely(!dict)) goto bad; - { - PyObject *old_code_object = code_object; - code_object = __Pyx_PyCode_Replace_For_AddTraceback(code_object, dict, py_py_line, 
py_funcname); - Py_DECREF(old_code_object); - } - if (unlikely(!code_object)) goto bad; - getframe = PySys_GetObject("_getframe"); - if (unlikely(!getframe)) goto bad; - if (unlikely(PyDict_SetItemString(dict, "_getframe", getframe))) goto bad; - frame = PyEval_EvalCode(code_object, dict, dict); - if (unlikely(!frame) || frame == Py_None) goto bad; - success = 1; - bad: - PyErr_Restore(exc_type, exc_value, exc_traceback); - Py_XDECREF(code_object); - Py_XDECREF(py_py_line); - Py_XDECREF(py_funcname); - Py_XDECREF(dict); - Py_XDECREF(replace); - if (success) { - PyTraceBack_Here( - (struct _frame*)frame); - } - Py_XDECREF(frame); -} -#else -static PyCodeObject* __Pyx_CreateCodeObjectForTraceback( - const char *funcname, int c_line, - int py_line, const char *filename) { - PyCodeObject *py_code = NULL; - PyObject *py_funcname = NULL; - #if PY_MAJOR_VERSION < 3 - PyObject *py_srcfile = NULL; - py_srcfile = PyString_FromString(filename); - if (!py_srcfile) goto bad; - #endif - if (c_line) { - #if PY_MAJOR_VERSION < 3 - py_funcname = PyString_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line); - if (!py_funcname) goto bad; - #else - py_funcname = PyUnicode_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line); - if (!py_funcname) goto bad; - funcname = PyUnicode_AsUTF8(py_funcname); - if (!funcname) goto bad; - #endif - } - else { - #if PY_MAJOR_VERSION < 3 - py_funcname = PyString_FromString(funcname); - if (!py_funcname) goto bad; - #endif - } - #if PY_MAJOR_VERSION < 3 - py_code = __Pyx_PyCode_New( - 0, - 0, - 0, - 0, - 0, - 0, - __pyx_empty_bytes, /*PyObject *code,*/ - __pyx_empty_tuple, /*PyObject *consts,*/ - __pyx_empty_tuple, /*PyObject *names,*/ - __pyx_empty_tuple, /*PyObject *varnames,*/ - __pyx_empty_tuple, /*PyObject *freevars,*/ - __pyx_empty_tuple, /*PyObject *cellvars,*/ - py_srcfile, /*PyObject *filename,*/ - py_funcname, /*PyObject *name,*/ - py_line, - __pyx_empty_bytes /*PyObject *lnotab*/ - ); - Py_DECREF(py_srcfile); - #else - py_code = PyCode_NewEmpty(filename, funcname, py_line); - #endif - Py_XDECREF(py_funcname); // XDECREF since it's only set on Py3 if cline - return py_code; -bad: - Py_XDECREF(py_funcname); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(py_srcfile); - #endif - return NULL; -} -static void __Pyx_AddTraceback(const char *funcname, int c_line, - int py_line, const char *filename) { - PyCodeObject *py_code = 0; - PyFrameObject *py_frame = 0; - PyThreadState *tstate = __Pyx_PyThreadState_Current; - PyObject *ptype, *pvalue, *ptraceback; - if (c_line) { - c_line = __Pyx_CLineForTraceback(tstate, c_line); - } - py_code = __pyx_find_code_object(c_line ? -c_line : py_line); - if (!py_code) { - __Pyx_ErrFetchInState(tstate, &ptype, &pvalue, &ptraceback); - py_code = __Pyx_CreateCodeObjectForTraceback( - funcname, c_line, py_line, filename); - if (!py_code) { - /* If the code object creation fails, then we should clear the - fetched exception references and propagate the new exception */ - Py_XDECREF(ptype); - Py_XDECREF(pvalue); - Py_XDECREF(ptraceback); - goto bad; - } - __Pyx_ErrRestoreInState(tstate, ptype, pvalue, ptraceback); - __pyx_insert_code_object(c_line ? 
-c_line : py_line, py_code); - } - py_frame = PyFrame_New( - tstate, /*PyThreadState *tstate,*/ - py_code, /*PyCodeObject *code,*/ - __pyx_d, /*PyObject *globals,*/ - 0 /*PyObject *locals*/ - ); - if (!py_frame) goto bad; - __Pyx_PyFrame_SetLineNumber(py_frame, py_line); - PyTraceBack_Here(py_frame); -bad: - Py_XDECREF(py_code); - Py_XDECREF(py_frame); -} -#endif - -/* CIntToPy */ -static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const long neg_one = (long) -1, const_zero = (long) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; - if (is_unsigned) { - if (sizeof(long) < sizeof(long)) { - return PyInt_FromLong((long) value); - } else if (sizeof(long) <= sizeof(unsigned long)) { - return PyLong_FromUnsignedLong((unsigned long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) { - return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); -#endif - } - } else { - if (sizeof(long) <= sizeof(long)) { - return PyInt_FromLong((long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) { - return PyLong_FromLongLong((PY_LONG_LONG) value); -#endif - } - } - { - int one = 1; int little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&value; -#if !CYTHON_COMPILING_IN_LIMITED_API - return _PyLong_FromByteArray(bytes, sizeof(long), - little, !is_unsigned); -#else - PyObject *from_bytes, *result = NULL; - PyObject *py_bytes = NULL, *arg_tuple = NULL, *kwds = NULL, *order_str = NULL; - from_bytes = PyObject_GetAttrString((PyObject*)&PyInt_Type, "from_bytes"); - if (!from_bytes) return NULL; - py_bytes = PyBytes_FromStringAndSize((char*)bytes, sizeof(long)); - if (!py_bytes) goto limited_bad; - order_str = PyUnicode_FromString(little ? "little" : "big"); - if (!order_str) goto limited_bad; - arg_tuple = PyTuple_Pack(2, py_bytes, order_str); - if (!arg_tuple) goto limited_bad; - kwds = PyDict_New(); - if (!kwds) goto limited_bad; - if (PyDict_SetItemString(kwds, "signed", __Pyx_NewRef(!is_unsigned ? 
Py_True : Py_False))) goto limited_bad; - result = PyObject_Call(from_bytes, arg_tuple, kwds); - limited_bad: - Py_XDECREF(from_bytes); - Py_XDECREF(py_bytes); - Py_XDECREF(order_str); - Py_XDECREF(arg_tuple); - Py_XDECREF(kwds); - return result; -#endif - } -} - -/* FormatTypeName */ -#if CYTHON_COMPILING_IN_LIMITED_API -static __Pyx_TypeName -__Pyx_PyType_GetName(PyTypeObject* tp) -{ - PyObject *name = __Pyx_PyObject_GetAttrStr((PyObject *)tp, - __pyx_n_s_name_2); - if (unlikely(name == NULL) || unlikely(!PyUnicode_Check(name))) { - PyErr_Clear(); - Py_XDECREF(name); - name = __Pyx_NewRef(__pyx_n_s__51); - } - return name; -} -#endif - -/* CIntFromPyVerify */ -#define __PYX_VERIFY_RETURN_INT(target_type, func_type, func_value)\ - __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 0) -#define __PYX_VERIFY_RETURN_INT_EXC(target_type, func_type, func_value)\ - __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 1) -#define __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, exc)\ - {\ - func_type value = func_value;\ - if (sizeof(target_type) < sizeof(func_type)) {\ - if (unlikely(value != (func_type) (target_type) value)) {\ - func_type zero = 0;\ - if (exc && unlikely(value == (func_type)-1 && PyErr_Occurred()))\ - return (target_type) -1;\ - if (is_unsigned && unlikely(value < zero))\ - goto raise_neg_overflow;\ - else\ - goto raise_overflow;\ - }\ - }\ - return (target_type) value;\ - } - -/* CIntFromPy */ -static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *x) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const long neg_one = (long) -1, const_zero = (long) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x))) { - if ((sizeof(long) < sizeof(long))) { - __PYX_VERIFY_RETURN_INT(long, long, PyInt_AS_LONG(x)) - } else { - long val = PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - goto raise_neg_overflow; - } - return (long) val; - } - } else -#endif - if (likely(PyLong_Check(x))) { - if (is_unsigned) { -#if CYTHON_USE_PYLONG_INTERNALS - if (unlikely(__Pyx_PyLong_IsNeg(x))) { - goto raise_neg_overflow; - } else if (__Pyx_PyLong_IsCompact(x)) { - __PYX_VERIFY_RETURN_INT(long, __Pyx_compact_upylong, __Pyx_PyLong_CompactValueUnsigned(x)) - } else { - const digit* digits = __Pyx_PyLong_Digits(x); - assert(__Pyx_PyLong_DigitCount(x) > 1); - switch (__Pyx_PyLong_DigitCount(x)) { - case 2: - if ((8 * sizeof(long) > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) >= 2 * PyLong_SHIFT)) { - return (long) (((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - case 3: - if ((8 * sizeof(long) > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) >= 3 * PyLong_SHIFT)) { - return (long) (((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - case 4: - if ((8 * sizeof(long) > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, 
unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) >= 4 * PyLong_SHIFT)) { - return (long) (((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - } - } -#endif -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030C00A7 - if (unlikely(Py_SIZE(x) < 0)) { - goto raise_neg_overflow; - } -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (long) -1; - if (unlikely(result == 1)) - goto raise_neg_overflow; - } -#endif - if ((sizeof(long) <= sizeof(unsigned long))) { - __PYX_VERIFY_RETURN_INT_EXC(long, unsigned long, PyLong_AsUnsignedLong(x)) -#ifdef HAVE_LONG_LONG - } else if ((sizeof(long) <= sizeof(unsigned PY_LONG_LONG))) { - __PYX_VERIFY_RETURN_INT_EXC(long, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) -#endif - } - } else { -#if CYTHON_USE_PYLONG_INTERNALS - if (__Pyx_PyLong_IsCompact(x)) { - __PYX_VERIFY_RETURN_INT(long, __Pyx_compact_pylong, __Pyx_PyLong_CompactValue(x)) - } else { - const digit* digits = __Pyx_PyLong_Digits(x); - assert(__Pyx_PyLong_DigitCount(x) > 1); - switch (__Pyx_PyLong_SignedDigitCount(x)) { - case -2: - if ((8 * sizeof(long) - 1 > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) - 1 > 2 * PyLong_SHIFT)) { - return (long) (((long)-1)*(((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 2: - if ((8 * sizeof(long) > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) - 1 > 2 * PyLong_SHIFT)) { - return (long) ((((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case -3: - if ((8 * sizeof(long) - 1 > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) - 1 > 3 * PyLong_SHIFT)) { - return (long) (((long)-1)*(((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 3: - if ((8 * sizeof(long) > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) - 1 > 3 * PyLong_SHIFT)) { - return (long) ((((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case -4: - if ((8 * sizeof(long) - 1 > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) - 1 > 4 * PyLong_SHIFT)) { - return (long) 
(((long)-1)*(((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 4: - if ((8 * sizeof(long) > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) - 1 > 4 * PyLong_SHIFT)) { - return (long) ((((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - } - } -#endif - if ((sizeof(long) <= sizeof(long))) { - __PYX_VERIFY_RETURN_INT_EXC(long, long, PyLong_AsLong(x)) -#ifdef HAVE_LONG_LONG - } else if ((sizeof(long) <= sizeof(PY_LONG_LONG))) { - __PYX_VERIFY_RETURN_INT_EXC(long, PY_LONG_LONG, PyLong_AsLongLong(x)) -#endif - } - } - { - long val; - PyObject *v = __Pyx_PyNumber_IntOrLong(x); -#if PY_MAJOR_VERSION < 3 - if (likely(v) && !PyLong_Check(v)) { - PyObject *tmp = v; - v = PyNumber_Long(tmp); - Py_DECREF(tmp); - } -#endif - if (likely(v)) { - int ret = -1; -#if !(CYTHON_COMPILING_IN_PYPY || CYTHON_COMPILING_IN_LIMITED_API) || defined(_PyLong_AsByteArray) - int one = 1; int is_little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&val; - ret = _PyLong_AsByteArray((PyLongObject *)v, - bytes, sizeof(val), - is_little, !is_unsigned); -#else - PyObject *stepval = NULL, *mask = NULL, *shift = NULL; - int bits, remaining_bits, is_negative = 0; - long idigit; - int chunk_size = (sizeof(long) < 8) ? 30 : 62; - if (unlikely(!PyLong_CheckExact(v))) { - PyObject *tmp = v; - v = PyNumber_Long(v); - assert(PyLong_CheckExact(v)); - Py_DECREF(tmp); - if (unlikely(!v)) return (long) -1; - } -#if CYTHON_COMPILING_IN_LIMITED_API && PY_VERSION_HEX < 0x030B0000 - if (Py_SIZE(x) == 0) - return (long) 0; - is_negative = Py_SIZE(x) < 0; -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (long) -1; - is_negative = result == 1; - } -#endif - if (is_unsigned && unlikely(is_negative)) { - goto raise_neg_overflow; - } else if (is_negative) { - stepval = PyNumber_Invert(v); - if (unlikely(!stepval)) - return (long) -1; - } else { - stepval = __Pyx_NewRef(v); - } - val = (long) 0; - mask = PyLong_FromLong((1L << chunk_size) - 1); if (unlikely(!mask)) goto done; - shift = PyLong_FromLong(chunk_size); if (unlikely(!shift)) goto done; - for (bits = 0; bits < (int) sizeof(long) * 8 - chunk_size; bits += chunk_size) { - PyObject *tmp, *digit; - digit = PyNumber_And(stepval, mask); - if (unlikely(!digit)) goto done; - idigit = PyLong_AsLong(digit); - Py_DECREF(digit); - if (unlikely(idigit < 0)) goto done; - tmp = PyNumber_Rshift(stepval, shift); - if (unlikely(!tmp)) goto done; - Py_DECREF(stepval); stepval = tmp; - val |= ((long) idigit) << bits; - #if CYTHON_COMPILING_IN_LIMITED_API && PY_VERSION_HEX < 0x030B0000 - if (Py_SIZE(stepval) == 0) - goto unpacking_done; - #endif - } - idigit = PyLong_AsLong(stepval); - if (unlikely(idigit < 0)) goto done; - remaining_bits = ((int) sizeof(long) * 8) - bits - (is_unsigned ? 
0 : 1); - if (unlikely(idigit >= (1L << remaining_bits))) - goto raise_overflow; - val |= ((long) idigit) << bits; - #if CYTHON_COMPILING_IN_LIMITED_API && PY_VERSION_HEX < 0x030B0000 - unpacking_done: - #endif - if (!is_unsigned) { - if (unlikely(val & (((long) 1) << (sizeof(long) * 8 - 1)))) - goto raise_overflow; - if (is_negative) - val = ~val; - } - ret = 0; - done: - Py_XDECREF(shift); - Py_XDECREF(mask); - Py_XDECREF(stepval); -#endif - Py_DECREF(v); - if (likely(!ret)) - return val; - } - return (long) -1; - } - } else { - long val; - PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); - if (!tmp) return (long) -1; - val = __Pyx_PyInt_As_long(tmp); - Py_DECREF(tmp); - return val; - } -raise_overflow: - PyErr_SetString(PyExc_OverflowError, - "value too large to convert to long"); - return (long) -1; -raise_neg_overflow: - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to long"); - return (long) -1; -} - -/* CIntFromPy */ -static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *x) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const int neg_one = (int) -1, const_zero = (int) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x))) { - if ((sizeof(int) < sizeof(long))) { - __PYX_VERIFY_RETURN_INT(int, long, PyInt_AS_LONG(x)) - } else { - long val = PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - goto raise_neg_overflow; - } - return (int) val; - } - } else -#endif - if (likely(PyLong_Check(x))) { - if (is_unsigned) { -#if CYTHON_USE_PYLONG_INTERNALS - if (unlikely(__Pyx_PyLong_IsNeg(x))) { - goto raise_neg_overflow; - } else if (__Pyx_PyLong_IsCompact(x)) { - __PYX_VERIFY_RETURN_INT(int, __Pyx_compact_upylong, __Pyx_PyLong_CompactValueUnsigned(x)) - } else { - const digit* digits = __Pyx_PyLong_Digits(x); - assert(__Pyx_PyLong_DigitCount(x) > 1); - switch (__Pyx_PyLong_DigitCount(x)) { - case 2: - if ((8 * sizeof(int) > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) >= 2 * PyLong_SHIFT)) { - return (int) (((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - case 3: - if ((8 * sizeof(int) > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) >= 3 * PyLong_SHIFT)) { - return (int) (((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - case 4: - if ((8 * sizeof(int) > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) >= 4 * PyLong_SHIFT)) { - return (int) (((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - } - } -#endif -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030C00A7 - if (unlikely(Py_SIZE(x) < 0)) { - goto 
raise_neg_overflow; - } -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (int) -1; - if (unlikely(result == 1)) - goto raise_neg_overflow; - } -#endif - if ((sizeof(int) <= sizeof(unsigned long))) { - __PYX_VERIFY_RETURN_INT_EXC(int, unsigned long, PyLong_AsUnsignedLong(x)) -#ifdef HAVE_LONG_LONG - } else if ((sizeof(int) <= sizeof(unsigned PY_LONG_LONG))) { - __PYX_VERIFY_RETURN_INT_EXC(int, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) -#endif - } - } else { -#if CYTHON_USE_PYLONG_INTERNALS - if (__Pyx_PyLong_IsCompact(x)) { - __PYX_VERIFY_RETURN_INT(int, __Pyx_compact_pylong, __Pyx_PyLong_CompactValue(x)) - } else { - const digit* digits = __Pyx_PyLong_Digits(x); - assert(__Pyx_PyLong_DigitCount(x) > 1); - switch (__Pyx_PyLong_SignedDigitCount(x)) { - case -2: - if ((8 * sizeof(int) - 1 > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) - 1 > 2 * PyLong_SHIFT)) { - return (int) (((int)-1)*(((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 2: - if ((8 * sizeof(int) > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) - 1 > 2 * PyLong_SHIFT)) { - return (int) ((((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case -3: - if ((8 * sizeof(int) - 1 > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) - 1 > 3 * PyLong_SHIFT)) { - return (int) (((int)-1)*(((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 3: - if ((8 * sizeof(int) > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) - 1 > 3 * PyLong_SHIFT)) { - return (int) ((((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case -4: - if ((8 * sizeof(int) - 1 > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) - 1 > 4 * PyLong_SHIFT)) { - return (int) (((int)-1)*(((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 4: - if ((8 * sizeof(int) > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) - 1 > 4 * PyLong_SHIFT)) { - return (int) ((((((((((int)digits[3]) << 
PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - } - } -#endif - if ((sizeof(int) <= sizeof(long))) { - __PYX_VERIFY_RETURN_INT_EXC(int, long, PyLong_AsLong(x)) -#ifdef HAVE_LONG_LONG - } else if ((sizeof(int) <= sizeof(PY_LONG_LONG))) { - __PYX_VERIFY_RETURN_INT_EXC(int, PY_LONG_LONG, PyLong_AsLongLong(x)) -#endif - } - } - { - int val; - PyObject *v = __Pyx_PyNumber_IntOrLong(x); -#if PY_MAJOR_VERSION < 3 - if (likely(v) && !PyLong_Check(v)) { - PyObject *tmp = v; - v = PyNumber_Long(tmp); - Py_DECREF(tmp); - } -#endif - if (likely(v)) { - int ret = -1; -#if !(CYTHON_COMPILING_IN_PYPY || CYTHON_COMPILING_IN_LIMITED_API) || defined(_PyLong_AsByteArray) - int one = 1; int is_little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&val; - ret = _PyLong_AsByteArray((PyLongObject *)v, - bytes, sizeof(val), - is_little, !is_unsigned); -#else - PyObject *stepval = NULL, *mask = NULL, *shift = NULL; - int bits, remaining_bits, is_negative = 0; - long idigit; - int chunk_size = (sizeof(long) < 8) ? 30 : 62; - if (unlikely(!PyLong_CheckExact(v))) { - PyObject *tmp = v; - v = PyNumber_Long(v); - assert(PyLong_CheckExact(v)); - Py_DECREF(tmp); - if (unlikely(!v)) return (int) -1; - } -#if CYTHON_COMPILING_IN_LIMITED_API && PY_VERSION_HEX < 0x030B0000 - if (Py_SIZE(x) == 0) - return (int) 0; - is_negative = Py_SIZE(x) < 0; -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (int) -1; - is_negative = result == 1; - } -#endif - if (is_unsigned && unlikely(is_negative)) { - goto raise_neg_overflow; - } else if (is_negative) { - stepval = PyNumber_Invert(v); - if (unlikely(!stepval)) - return (int) -1; - } else { - stepval = __Pyx_NewRef(v); - } - val = (int) 0; - mask = PyLong_FromLong((1L << chunk_size) - 1); if (unlikely(!mask)) goto done; - shift = PyLong_FromLong(chunk_size); if (unlikely(!shift)) goto done; - for (bits = 0; bits < (int) sizeof(int) * 8 - chunk_size; bits += chunk_size) { - PyObject *tmp, *digit; - digit = PyNumber_And(stepval, mask); - if (unlikely(!digit)) goto done; - idigit = PyLong_AsLong(digit); - Py_DECREF(digit); - if (unlikely(idigit < 0)) goto done; - tmp = PyNumber_Rshift(stepval, shift); - if (unlikely(!tmp)) goto done; - Py_DECREF(stepval); stepval = tmp; - val |= ((int) idigit) << bits; - #if CYTHON_COMPILING_IN_LIMITED_API && PY_VERSION_HEX < 0x030B0000 - if (Py_SIZE(stepval) == 0) - goto unpacking_done; - #endif - } - idigit = PyLong_AsLong(stepval); - if (unlikely(idigit < 0)) goto done; - remaining_bits = ((int) sizeof(int) * 8) - bits - (is_unsigned ? 
0 : 1); - if (unlikely(idigit >= (1L << remaining_bits))) - goto raise_overflow; - val |= ((int) idigit) << bits; - #if CYTHON_COMPILING_IN_LIMITED_API && PY_VERSION_HEX < 0x030B0000 - unpacking_done: - #endif - if (!is_unsigned) { - if (unlikely(val & (((int) 1) << (sizeof(int) * 8 - 1)))) - goto raise_overflow; - if (is_negative) - val = ~val; - } - ret = 0; - done: - Py_XDECREF(shift); - Py_XDECREF(mask); - Py_XDECREF(stepval); -#endif - Py_DECREF(v); - if (likely(!ret)) - return val; - } - return (int) -1; - } - } else { - int val; - PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); - if (!tmp) return (int) -1; - val = __Pyx_PyInt_As_int(tmp); - Py_DECREF(tmp); - return val; - } -raise_overflow: - PyErr_SetString(PyExc_OverflowError, - "value too large to convert to int"); - return (int) -1; -raise_neg_overflow: - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to int"); - return (int) -1; -} - -/* CheckBinaryVersion */ -static int __Pyx_check_binary_version(void) { - char ctversion[5]; - int same=1, i, found_dot; - const char* rt_from_call = Py_GetVersion(); - PyOS_snprintf(ctversion, 5, "%d.%d", PY_MAJOR_VERSION, PY_MINOR_VERSION); - found_dot = 0; - for (i = 0; i < 4; i++) { - if (!ctversion[i]) { - same = (rt_from_call[i] < '0' || rt_from_call[i] > '9'); - break; - } - if (rt_from_call[i] != ctversion[i]) { - same = 0; - break; - } - } - if (!same) { - char rtversion[5] = {'\0'}; - char message[200]; - for (i=0; i<4; ++i) { - if (rt_from_call[i] == '.') { - if (found_dot) break; - found_dot = 1; - } else if (rt_from_call[i] < '0' || rt_from_call[i] > '9') { - break; - } - rtversion[i] = rt_from_call[i]; - } - PyOS_snprintf(message, sizeof(message), - "compile time version %s of module '%.100s' " - "does not match runtime version %s", - ctversion, __Pyx_MODULE_NAME, rtversion); - return PyErr_WarnEx(NULL, message, 1); - } - return 0; -} - -/* InitStrings */ -#if PY_MAJOR_VERSION >= 3 -static int __Pyx_InitString(__Pyx_StringTabEntry t, PyObject **str) { - if (t.is_unicode | t.is_str) { - if (t.intern) { - *str = PyUnicode_InternFromString(t.s); - } else if (t.encoding) { - *str = PyUnicode_Decode(t.s, t.n - 1, t.encoding, NULL); - } else { - *str = PyUnicode_FromStringAndSize(t.s, t.n - 1); - } - } else { - *str = PyBytes_FromStringAndSize(t.s, t.n - 1); - } - if (!*str) - return -1; - if (PyObject_Hash(*str) == -1) - return -1; - return 0; -} -#endif -static int __Pyx_InitStrings(__Pyx_StringTabEntry *t) { - while (t->p) { - #if PY_MAJOR_VERSION >= 3 - __Pyx_InitString(*t, t->p); - #else - if (t->is_unicode) { - *t->p = PyUnicode_DecodeUTF8(t->s, t->n - 1, NULL); - } else if (t->intern) { - *t->p = PyString_InternFromString(t->s); - } else { - *t->p = PyString_FromStringAndSize(t->s, t->n - 1); - } - if (!*t->p) - return -1; - if (PyObject_Hash(*t->p) == -1) - return -1; - #endif - ++t; - } - return 0; -} - -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char* c_str) { - return __Pyx_PyUnicode_FromStringAndSize(c_str, (Py_ssize_t)strlen(c_str)); -} -static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject* o) { - Py_ssize_t ignore; - return __Pyx_PyObject_AsStringAndSize(o, &ignore); -} -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT -#if !CYTHON_PEP393_ENABLED -static const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) { - char* defenc_c; - PyObject* defenc = _PyUnicode_AsDefaultEncodedString(o, NULL); - if (!defenc) return NULL; - defenc_c = PyBytes_AS_STRING(defenc); -#if 
__PYX_DEFAULT_STRING_ENCODING_IS_ASCII - { - char* end = defenc_c + PyBytes_GET_SIZE(defenc); - char* c; - for (c = defenc_c; c < end; c++) { - if ((unsigned char) (*c) >= 128) { - PyUnicode_AsASCIIString(o); - return NULL; - } - } - } -#endif - *length = PyBytes_GET_SIZE(defenc); - return defenc_c; -} -#else -static CYTHON_INLINE const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) { - if (unlikely(__Pyx_PyUnicode_READY(o) == -1)) return NULL; -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - if (likely(PyUnicode_IS_ASCII(o))) { - *length = PyUnicode_GET_LENGTH(o); - return PyUnicode_AsUTF8(o); - } else { - PyUnicode_AsASCIIString(o); - return NULL; - } -#else - return PyUnicode_AsUTF8AndSize(o, length); -#endif -} -#endif -#endif -static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject* o, Py_ssize_t *length) { -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT - if ( -#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - __Pyx_sys_getdefaultencoding_not_ascii && -#endif - PyUnicode_Check(o)) { - return __Pyx_PyUnicode_AsStringAndSize(o, length); - } else -#endif -#if (!CYTHON_COMPILING_IN_PYPY && !CYTHON_COMPILING_IN_LIMITED_API) || (defined(PyByteArray_AS_STRING) && defined(PyByteArray_GET_SIZE)) - if (PyByteArray_Check(o)) { - *length = PyByteArray_GET_SIZE(o); - return PyByteArray_AS_STRING(o); - } else -#endif - { - char* result; - int r = PyBytes_AsStringAndSize(o, &result, length); - if (unlikely(r < 0)) { - return NULL; - } else { - return result; - } - } -} -static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject* x) { - int is_true = x == Py_True; - if (is_true | (x == Py_False) | (x == Py_None)) return is_true; - else return PyObject_IsTrue(x); -} -static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject* x) { - int retval; - if (unlikely(!x)) return -1; - retval = __Pyx_PyObject_IsTrue(x); - Py_DECREF(x); - return retval; -} -static PyObject* __Pyx_PyNumber_IntOrLongWrongResultType(PyObject* result, const char* type_name) { - __Pyx_TypeName result_type_name = __Pyx_PyType_GetName(Py_TYPE(result)); -#if PY_MAJOR_VERSION >= 3 - if (PyLong_Check(result)) { - if (PyErr_WarnFormat(PyExc_DeprecationWarning, 1, - "__int__ returned non-int (type " __Pyx_FMT_TYPENAME "). 
" - "The ability to return an instance of a strict subclass of int is deprecated, " - "and may be removed in a future version of Python.", - result_type_name)) { - __Pyx_DECREF_TypeName(result_type_name); - Py_DECREF(result); - return NULL; - } - __Pyx_DECREF_TypeName(result_type_name); - return result; - } -#endif - PyErr_Format(PyExc_TypeError, - "__%.4s__ returned non-%.4s (type " __Pyx_FMT_TYPENAME ")", - type_name, type_name, result_type_name); - __Pyx_DECREF_TypeName(result_type_name); - Py_DECREF(result); - return NULL; -} -static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x) { -#if CYTHON_USE_TYPE_SLOTS - PyNumberMethods *m; -#endif - const char *name = NULL; - PyObject *res = NULL; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x) || PyLong_Check(x))) -#else - if (likely(PyLong_Check(x))) -#endif - return __Pyx_NewRef(x); -#if CYTHON_USE_TYPE_SLOTS - m = Py_TYPE(x)->tp_as_number; - #if PY_MAJOR_VERSION < 3 - if (m && m->nb_int) { - name = "int"; - res = m->nb_int(x); - } - else if (m && m->nb_long) { - name = "long"; - res = m->nb_long(x); - } - #else - if (likely(m && m->nb_int)) { - name = "int"; - res = m->nb_int(x); - } - #endif -#else - if (!PyBytes_CheckExact(x) && !PyUnicode_CheckExact(x)) { - res = PyNumber_Int(x); - } -#endif - if (likely(res)) { -#if PY_MAJOR_VERSION < 3 - if (unlikely(!PyInt_Check(res) && !PyLong_Check(res))) { -#else - if (unlikely(!PyLong_CheckExact(res))) { -#endif - return __Pyx_PyNumber_IntOrLongWrongResultType(res, name); - } - } - else if (!PyErr_Occurred()) { - PyErr_SetString(PyExc_TypeError, - "an integer is required"); - } - return res; -} -static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject* b) { - Py_ssize_t ival; - PyObject *x; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_CheckExact(b))) { - if (sizeof(Py_ssize_t) >= sizeof(long)) - return PyInt_AS_LONG(b); - else - return PyInt_AsSsize_t(b); - } -#endif - if (likely(PyLong_CheckExact(b))) { - #if CYTHON_USE_PYLONG_INTERNALS - if (likely(__Pyx_PyLong_IsCompact(b))) { - return __Pyx_PyLong_CompactValue(b); - } else { - const digit* digits = __Pyx_PyLong_Digits(b); - const Py_ssize_t size = __Pyx_PyLong_SignedDigitCount(b); - switch (size) { - case 2: - if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) { - return (Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -2: - if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case 3: - if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) { - return (Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -3: - if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case 4: - if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) { - return (Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -4: - if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - } - } - #endif - return PyLong_AsSsize_t(b); - } - x = PyNumber_Index(b); - if (!x) return -1; - ival = PyInt_AsSsize_t(x); - Py_DECREF(x); - 
return ival; -} -static CYTHON_INLINE Py_hash_t __Pyx_PyIndex_AsHash_t(PyObject* o) { - if (sizeof(Py_hash_t) == sizeof(Py_ssize_t)) { - return (Py_hash_t) __Pyx_PyIndex_AsSsize_t(o); -#if PY_MAJOR_VERSION < 3 - } else if (likely(PyInt_CheckExact(o))) { - return PyInt_AS_LONG(o); -#endif - } else { - Py_ssize_t ival; - PyObject *x; - x = PyNumber_Index(o); - if (!x) return -1; - ival = PyInt_AsLong(x); - Py_DECREF(x); - return ival; - } -} -static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b) { - return b ? __Pyx_NewRef(Py_True) : __Pyx_NewRef(Py_False); -} -static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t ival) { - return PyInt_FromSize_t(ival); -} - - -/* #### Code section: utility_code_pragmas_end ### */ -#ifdef _MSC_VER -#pragma warning( pop ) -#endif - - - -/* #### Code section: end ### */ -#endif /* Py_PYTHON_H */ diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/readers/steamship/__init__.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/readers/steamship/__init__.py deleted file mode 100644 index 032c95838ea9df68694e4e3c58100572499b10d5..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/readers/steamship/__init__.py +++ /dev/null @@ -1 +0,0 @@ -"""Init File.""" diff --git a/spaces/johnslegers/stable-diffusion-gui-test/ldmlib/modules/losses/contperceptual.py b/spaces/johnslegers/stable-diffusion-gui-test/ldmlib/modules/losses/contperceptual.py deleted file mode 100644 index 8150b9585b2cf892a088860e9dfc5cd6c9060ff4..0000000000000000000000000000000000000000 --- a/spaces/johnslegers/stable-diffusion-gui-test/ldmlib/modules/losses/contperceptual.py +++ /dev/null @@ -1,110 +0,0 @@ -import torch -import torch.nn as nn - -from taming.modules.losses.vqperceptual import * # TODO: taming dependency yes/no? 
- - -class LPIPSWithDiscriminator(nn.Module): - def __init__(self, disc_start, logvar_init=0.0, kl_weight=1.0, pixelloss_weight=1.0, - disc_num_layers=3, disc_in_channels=3, disc_factor=1.0, disc_weight=1.0, - perceptual_weight=1.0, use_actnorm=False, disc_conditional=False, - disc_loss="hinge"): - - super().__init__() - assert disc_loss in ["hinge", "vanilla"] - self.kl_weight = kl_weight - self.pixel_weight = pixelloss_weight - self.perceptual_loss = LPIPS().eval() - self.perceptual_weight = perceptual_weight - # output log variance - self.logvar = nn.Parameter(torch.ones(size=()) * logvar_init) - - self.discriminator = NLayerDiscriminator(input_nc=disc_in_channels, - n_layers=disc_num_layers, - use_actnorm=use_actnorm - ).apply(weights_init) - self.discriminator_iter_start = disc_start - self.disc_loss = hinge_d_loss if disc_loss == "hinge" else vanilla_d_loss - self.disc_factor = disc_factor - self.discriminator_weight = disc_weight - self.disc_conditional = disc_conditional - - def calculate_adaptive_weight(self, nll_loss, g_loss, last_layer=None): - if last_layer is not None: - nll_grads = torch.autograd.grad(nll_loss, last_layer, retain_graph=True)[0] - g_grads = torch.autograd.grad(g_loss, last_layer, retain_graph=True)[0] - else: - nll_grads = torch.autograd.grad(nll_loss, self.last_layer[0], retain_graph=True)[0] - g_grads = torch.autograd.grad(g_loss, self.last_layer[0], retain_graph=True)[0] - - d_weight = torch.norm(nll_grads) / (torch.norm(g_grads) + 1e-4) - d_weight = torch.clamp(d_weight, 0.0, 1e4).detach() - d_weight = d_weight * self.discriminator_weight - return d_weight - - def forward(self, inputs, reconstructions, posteriors, optimizer_idx, - global_step, last_layer=None, cond=None, split="train", - weights=None): - rec_loss = torch.abs(inputs.contiguous() - reconstructions.contiguous()) - if self.perceptual_weight > 0: - p_loss = self.perceptual_loss(inputs.contiguous(), reconstructions.contiguous()) - rec_loss = rec_loss + self.perceptual_weight * p_loss - - nll_loss = rec_loss / torch.exp(self.logvar) + self.logvar - weighted_nll_loss = nll_loss - if weights is not None: - weighted_nll_loss = weights*nll_loss - weighted_nll_loss = torch.sum(weighted_nll_loss) / weighted_nll_loss.shape[0] - nll_loss = torch.sum(nll_loss) / nll_loss.shape[0] - kl_loss = posteriors.kl() - kl_loss = torch.sum(kl_loss) / kl_loss.shape[0] - - # now the GAN part - if optimizer_idx == 0: - # generator update - if cond is None: - assert not self.disc_conditional - logits_fake = self.discriminator(reconstructions.contiguous()) - else: - assert self.disc_conditional - logits_fake = self.discriminator(torch.cat((reconstructions.contiguous(), cond), dim=1)) - g_loss = -torch.mean(logits_fake) - - if self.disc_factor > 0.0: - try: - d_weight = self.calculate_adaptive_weight(nll_loss, g_loss, last_layer=last_layer) - except RuntimeError: - assert not self.training - d_weight = torch.tensor(0.0) - else: - d_weight = torch.tensor(0.0) - - disc_factor = adopt_weight(self.disc_factor, global_step, threshold=self.discriminator_iter_start) - loss = weighted_nll_loss + self.kl_weight * kl_loss + d_weight * disc_factor * g_loss - - log = {"{}/total_loss".format(split): loss.clone().detach().mean(), "{}/logvar".format(split): self.logvar.detach(), - "{}/kl_loss".format(split): kl_loss.detach().mean(), "{}/nll_loss".format(split): nll_loss.detach().mean(), - "{}/rec_loss".format(split): rec_loss.detach().mean(), - "{}/d_weight".format(split): d_weight.detach(), - "{}/disc_factor".format(split): 
torch.tensor(disc_factor), - "{}/g_loss".format(split): g_loss.detach().mean(), - } - return loss, log - - if optimizer_idx == 1: - # second pass for discriminator update - if cond is None: - logits_real = self.discriminator(inputs.contiguous().detach()) - logits_fake = self.discriminator(reconstructions.contiguous().detach()) - else: - logits_real = self.discriminator(torch.cat((inputs.contiguous().detach(), cond), dim=1)) - logits_fake = self.discriminator(torch.cat((reconstructions.contiguous().detach(), cond), dim=1)) - - disc_factor = adopt_weight(self.disc_factor, global_step, threshold=self.discriminator_iter_start) - d_loss = disc_factor * self.disc_loss(logits_real, logits_fake) - - log = {"{}/disc_loss".format(split): d_loss.clone().detach().mean(), - "{}/logits_real".format(split): logits_real.detach().mean(), - "{}/logits_fake".format(split): logits_fake.detach().mean() - } - return d_loss, log diff --git a/spaces/jorge-henao/ask2democracy/README.md b/spaces/jorge-henao/ask2democracy/README.md deleted file mode 100644 index 982d2ba5c02ddf4f18c940cf18a25f37842ed5a3..0000000000000000000000000000000000000000 --- a/spaces/jorge-henao/ask2democracy/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Ask2democracy 🇨🇴 - Elecciones presidenciales 2022 -emoji: 🤔 🇨🇴 📄 -colorFrom: yellow -colorTo: blue -sdk: gradio -sdk_version: 3.0.18 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - diff --git a/spaces/juanhuggingface/ChuanhuChatGPT_Beta/assets/custom.js b/spaces/juanhuggingface/ChuanhuChatGPT_Beta/assets/custom.js deleted file mode 100644 index ae5a76b5e791be8b107126889519e37d89fc80f0..0000000000000000000000000000000000000000 --- a/spaces/juanhuggingface/ChuanhuChatGPT_Beta/assets/custom.js +++ /dev/null @@ -1,607 +0,0 @@ - -// custom javascript here - -const MAX_HISTORY_LENGTH = 32; - -var key_down_history = []; -var currentIndex = -1; -var user_input_ta; - -var gradioContainer = null; -var user_input_ta = null; -var user_input_tb = null; -var userInfoDiv = null; -var appTitleDiv = null; -var chatbot = null; -var chatbotWrap = null; -var apSwitch = null; -var empty_botton = null; -var messageBotDivs = null; -// var renderLatex = null; -var loginUserForm = null; -var logginUser = null; - -var userLogged = false; -var usernameGotten = false; -var shouldRenderLatex = false; -var historyLoaded = false; - -var ga = document.getElementsByTagName("gradio-app"); -var targetNode = ga[0]; -var isInIframe = (window.self !== window.top); -var language = navigator.language.slice(0,2); - -var forView_i18n = { - 'zh': "仅供查看", - 'en': "For viewing only", - 'ja': "閲覧専用", - 'fr': "Pour consultation seulement", - 'es': "Solo para visualización", -}; - -// gradio 页面加载好了么??? 我能动你的元素了么?? 
-function gradioLoaded(mutations) { - for (var i = 0; i < mutations.length; i++) { - if (mutations[i].addedNodes.length) { - loginUserForm = document.querySelector(".gradio-container > .main > .wrap > .panel > .form") - gradioContainer = document.querySelector(".gradio-container"); - user_input_tb = document.getElementById('user_input_tb'); - userInfoDiv = document.getElementById("user_info"); - appTitleDiv = document.getElementById("app_title"); - chatbot = document.querySelector('#chuanhu_chatbot'); - chatbotWrap = document.querySelector('#chuanhu_chatbot > .wrap'); - apSwitch = document.querySelector('.apSwitch input[type="checkbox"]'); - // renderLatex = document.querySelector("#render_latex_checkbox > label > input"); - empty_botton = document.getElementById("empty_btn") - - if (loginUserForm) { - localStorage.setItem("userLogged", true); - userLogged = true; - } - - if (gradioContainer && apSwitch) { // gradioCainter 加载出来了没? - adjustDarkMode(); - } - if (user_input_tb) { // user_input_tb 加载出来了没? - selectHistory(); - } - if (userInfoDiv && appTitleDiv) { // userInfoDiv 和 appTitleDiv 加载出来了没? - if (!usernameGotten) { - getUserInfo(); - } - setTimeout(showOrHideUserInfo(), 2000); - } - if (chatbot) { // chatbot 加载出来了没? - setChatbotHeight(); - } - if (chatbotWrap) { - if (!historyLoaded) { - loadHistoryHtml(); - } - setChatbotScroll(); - } - // if (renderLatex) { // renderLatex 加载出来了没? - // shouldRenderLatex = renderLatex.checked; - // updateMathJax(); - // } - if (empty_botton) { - emptyHistory(); - } - } - } -} - -function webLocale() { - console.log("webLocale", language); - if (forView_i18n.hasOwnProperty(language)) { - var forView = forView_i18n[language]; - var forViewStyle = document.createElement('style'); - forViewStyle.innerHTML = '.wrap>.history-message>:last-child::after { content: "' + forView + '"!important; }'; - document.head.appendChild(forViewStyle); - // console.log("added forViewStyle", forView); - } -} - -function selectHistory() { - user_input_ta = user_input_tb.querySelector("textarea"); - if (user_input_ta) { - observer.disconnect(); // 停止监听 - // 在 textarea 上监听 keydown 事件 - user_input_ta.addEventListener("keydown", function (event) { - var value = user_input_ta.value.trim(); - // 判断按下的是否为方向键 - if (event.code === 'ArrowUp' || event.code === 'ArrowDown') { - // 如果按下的是方向键,且输入框中有内容,且历史记录中没有该内容,则不执行操作 - if (value && key_down_history.indexOf(value) === -1) - return; - // 对于需要响应的动作,阻止默认行为。 - event.preventDefault(); - var length = key_down_history.length; - if (length === 0) { - currentIndex = -1; // 如果历史记录为空,直接将当前选中的记录重置 - return; - } - if (currentIndex === -1) { - currentIndex = length; - } - if (event.code === 'ArrowUp' && currentIndex > 0) { - currentIndex--; - user_input_ta.value = key_down_history[currentIndex]; - } else if (event.code === 'ArrowDown' && currentIndex < length - 1) { - currentIndex++; - user_input_ta.value = key_down_history[currentIndex]; - } - user_input_ta.selectionStart = user_input_ta.value.length; - user_input_ta.selectionEnd = user_input_ta.value.length; - const input_event = new InputEvent("input", { bubbles: true, cancelable: true }); - user_input_ta.dispatchEvent(input_event); - } else if (event.code === "Enter") { - if (value) { - currentIndex = -1; - if (key_down_history.indexOf(value) === -1) { - key_down_history.push(value); - if (key_down_history.length > MAX_HISTORY_LENGTH) { - key_down_history.shift(); - } - } - } - } - }); - } -} - -var username = null; -function getUserInfo() { - if (usernameGotten) { - return; - } - userLogged = 
localStorage.getItem('userLogged'); - if (userLogged) { - username = userInfoDiv.innerText; - if (username) { - if (username.includes("getting user info…")) { - setTimeout(getUserInfo, 500); - return; - } else if (username === " ") { - localStorage.removeItem("username"); - localStorage.removeItem("userLogged") - userLogged = false; - usernameGotten = true; - return; - } else { - username = username.match(/User:\s*(.*)/)[1] || username; - localStorage.setItem("username", username); - usernameGotten = true; - clearHistoryHtml(); - } - } - } -} - -function toggleUserInfoVisibility(shouldHide) { - if (userInfoDiv) { - if (shouldHide) { - userInfoDiv.classList.add("hideK"); - } else { - userInfoDiv.classList.remove("hideK"); - } - } -} -function showOrHideUserInfo() { - var sendBtn = document.getElementById("submit_btn"); - - // Bind mouse/touch events to show/hide user info - appTitleDiv.addEventListener("mouseenter", function () { - toggleUserInfoVisibility(false); - }); - userInfoDiv.addEventListener("mouseenter", function () { - toggleUserInfoVisibility(false); - }); - sendBtn.addEventListener("mouseenter", function () { - toggleUserInfoVisibility(false); - }); - - appTitleDiv.addEventListener("mouseleave", function () { - toggleUserInfoVisibility(true); - }); - userInfoDiv.addEventListener("mouseleave", function () { - toggleUserInfoVisibility(true); - }); - sendBtn.addEventListener("mouseleave", function () { - toggleUserInfoVisibility(true); - }); - - appTitleDiv.ontouchstart = function () { - toggleUserInfoVisibility(false); - }; - userInfoDiv.ontouchstart = function () { - toggleUserInfoVisibility(false); - }; - sendBtn.ontouchstart = function () { - toggleUserInfoVisibility(false); - }; - - appTitleDiv.ontouchend = function () { - setTimeout(function () { - toggleUserInfoVisibility(true); - }, 3000); - }; - userInfoDiv.ontouchend = function () { - setTimeout(function () { - toggleUserInfoVisibility(true); - }, 3000); - }; - sendBtn.ontouchend = function () { - setTimeout(function () { - toggleUserInfoVisibility(true); - }, 3000); // Delay 1 second to hide user info - }; - - // Hide user info after 2 second - setTimeout(function () { - toggleUserInfoVisibility(true); - }, 2000); -} - -function toggleDarkMode(isEnabled) { - if (isEnabled) { - gradioContainer.classList.add("dark"); - document.body.style.setProperty("background-color", "var(--neutral-950)", "important"); - } else { - gradioContainer.classList.remove("dark"); - document.body.style.backgroundColor = ""; - } -} -function adjustDarkMode() { - const darkModeQuery = window.matchMedia("(prefers-color-scheme: dark)"); - - // 根据当前颜色模式设置初始状态 - apSwitch.checked = darkModeQuery.matches; - toggleDarkMode(darkModeQuery.matches); - // 监听颜色模式变化 - darkModeQuery.addEventListener("change", (e) => { - apSwitch.checked = e.matches; - toggleDarkMode(e.matches); - }); - // apSwitch = document.querySelector('.apSwitch input[type="checkbox"]'); - apSwitch.addEventListener("change", (e) => { - toggleDarkMode(e.target.checked); - }); -} - -function setChatbotHeight() { - const screenWidth = window.innerWidth; - const statusDisplay = document.querySelector('#status_display'); - const statusDisplayHeight = statusDisplay ? 
statusDisplay.offsetHeight : 0; - const wrap = chatbot.querySelector('.wrap'); - const vh = window.innerHeight * 0.01; - document.documentElement.style.setProperty('--vh', `${vh}px`); - if (isInIframe) { - chatbot.style.height = `700px`; - wrap.style.maxHeight = `calc(700px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))` - } else { - if (screenWidth <= 320) { - chatbot.style.height = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 150}px)`; - wrap.style.maxHeight = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 150}px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))`; - } else if (screenWidth <= 499) { - chatbot.style.height = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 100}px)`; - wrap.style.maxHeight = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 100}px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))`; - } else { - chatbot.style.height = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 160}px)`; - wrap.style.maxHeight = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 160}px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))`; - } - } -} -function setChatbotScroll() { - var scrollHeight = chatbotWrap.scrollHeight; - chatbotWrap.scrollTo(0,scrollHeight) -} -var rangeInputs = null; -var numberInputs = null; -function setSlider() { - rangeInputs = document.querySelectorAll('input[type="range"]'); - numberInputs = document.querySelectorAll('input[type="number"]') - setSliderRange(); - rangeInputs.forEach(rangeInput => { - rangeInput.addEventListener('input', setSliderRange); - }); - numberInputs.forEach(numberInput => { - numberInput.addEventListener('input', setSliderRange); - }) -} -function setSliderRange() { - var range = document.querySelectorAll('input[type="range"]'); - range.forEach(range => { - range.style.backgroundSize = (range.value - range.min) / (range.max - range.min) * 100 + '% 100%'; - }); -} - -function addChuanhuButton(botElement) { - var rawMessage = null; - var mdMessage = null; - rawMessage = botElement.querySelector('.raw-message'); - mdMessage = botElement.querySelector('.md-message'); - if (!rawMessage) { - var buttons = botElement.querySelectorAll('button.chuanhu-btn'); - for (var i = 0; i < buttons.length; i++) { - buttons[i].parentNode.removeChild(buttons[i]); - } - return; - } - var copyButton = null; - var toggleButton = null; - copyButton = botElement.querySelector('button.copy-bot-btn'); - toggleButton = botElement.querySelector('button.toggle-md-btn'); - if (copyButton) copyButton.remove(); - if (toggleButton) toggleButton.remove(); - - // Copy bot button - var copyButton = document.createElement('button'); - copyButton.classList.add('chuanhu-btn'); - copyButton.classList.add('copy-bot-btn'); - copyButton.setAttribute('aria-label', 'Copy'); - copyButton.innerHTML = copyIcon; - copyButton.addEventListener('click', () => { - const textToCopy = rawMessage.innerText; - navigator.clipboard - .writeText(textToCopy) - .then(() => { - copyButton.innerHTML = copiedIcon; - setTimeout(() => { - copyButton.innerHTML = copyIcon; - }, 1500); - }) - .catch(() => { - console.error("copy failed"); - }); - }); - botElement.appendChild(copyButton); - - // Toggle button - var toggleButton = document.createElement('button'); - toggleButton.classList.add('chuanhu-btn'); - toggleButton.classList.add('toggle-md-btn'); - toggleButton.setAttribute('aria-label', 'Toggle'); - var renderMarkdown = mdMessage.classList.contains('hideM'); - toggleButton.innerHTML = renderMarkdown ? 
mdIcon : rawIcon; - toggleButton.addEventListener('click', () => { - renderMarkdown = mdMessage.classList.contains('hideM'); - if (renderMarkdown){ - renderMarkdownText(botElement); - toggleButton.innerHTML=rawIcon; - } else { - removeMarkdownText(botElement); - toggleButton.innerHTML=mdIcon; - } - }); - botElement.insertBefore(toggleButton, copyButton); -} - -function addCopyCodeButton(pre) { - var code = null; - var firstChild = null; - code = pre.querySelector('code'); - if (!code) return; - firstChild = code.querySelector('div'); - if (!firstChild) return; - var oldCopyButton = null; - oldCopyButton = code.querySelector('button.copy-code-btn'); - // if (oldCopyButton) oldCopyButton.remove(); - if (oldCopyButton) return; // 没太有用,新生成的对话中始终会被pre覆盖,导致按钮消失,这段代码不启用…… - var codeButton = document.createElement('button'); - codeButton.classList.add('copy-code-btn'); - codeButton.textContent = '\uD83D\uDCCE'; - - code.insertBefore(codeButton, firstChild); - codeButton.addEventListener('click', function () { - var range = document.createRange(); - range.selectNodeContents(code); - range.setStartBefore(firstChild); - navigator.clipboard - .writeText(range.toString()) - .then(() => { - codeButton.textContent = '\u2714'; - setTimeout(function () { - codeButton.textContent = '\uD83D\uDCCE'; - }, 2000); - }) - .catch(e => { - console.error(e); - codeButton.textContent = '\u2716'; - }); - }); -} - -function renderMarkdownText(message) { - var mdDiv = message.querySelector('.md-message'); - if (mdDiv) mdDiv.classList.remove('hideM'); - var rawDiv = message.querySelector('.raw-message'); - if (rawDiv) rawDiv.classList.add('hideM'); -} -function removeMarkdownText(message) { - var rawDiv = message.querySelector('.raw-message'); - if (rawDiv) rawDiv.classList.remove('hideM'); - var mdDiv = message.querySelector('.md-message'); - if (mdDiv) mdDiv.classList.add('hideM'); -} - -var rendertime = 0; // for debugging -var mathjaxUpdated = false; - -function renderMathJax() { - messageBotDivs = document.querySelectorAll('.message.bot .md-message'); - for (var i = 0; i < messageBotDivs.length; i++) { - var mathJaxSpan = messageBotDivs[i].querySelector('.MathJax_Preview'); - if (!mathJaxSpan && shouldRenderLatex && !mathjaxUpdated) { - MathJax.Hub.Queue(["Typeset", MathJax.Hub, messageBotDivs[i]]); - rendertime +=1; // for debugging - // console.log("renderingMathJax", i) - } - } - mathjaxUpdated = true; - // console.log("MathJax Rendered") -} - -function removeMathjax() { - // var jax = MathJax.Hub.getAllJax(); - // for (var i = 0; i < jax.length; i++) { - // // MathJax.typesetClear(jax[i]); - // jax[i].Text(newmath) - // jax[i].Reprocess() - // } - // 我真的不会了啊啊啊,mathjax并没有提供转换为原先文本的办法。 - mathjaxUpdated = true; - // console.log("MathJax removed!"); -} - -function updateMathJax() { - // renderLatex.addEventListener("change", function() { - // shouldRenderLatex = renderLatex.checked; - // if (!mathjaxUpdated) { - // if (shouldRenderLatex) { - // renderMathJax(); - // } else { - // console.log("MathJax Disabled") - // removeMathjax(); - // } - // } else { - // if (!shouldRenderLatex) { - // mathjaxUpdated = false; // reset - // } - // } - // }); - if (shouldRenderLatex && !mathjaxUpdated) { - renderMathJax(); - } - mathjaxUpdated = false; -} - -let timeoutId; -let isThrottled = false; -var mmutation -// 监听所有元素中 bot message 的变化,用来查找需要渲染的mathjax, 并为 bot 消息添加复制按钮。 -var mObserver = new MutationObserver(function (mutationsList) { - for (mmutation of mutationsList) { - if (mmutation.type === 'childList') { - for (var node 
of mmutation.addedNodes) { - if (node.nodeType === 1 && node.classList.contains('message') && node.getAttribute('data-testid') === 'bot') { - if (shouldRenderLatex) { - renderMathJax(); - mathjaxUpdated = false; - } - saveHistoryHtml(); - document.querySelectorAll('#chuanhu_chatbot>.wrap>.message-wrap .message.bot').forEach(addChuanhuButton); - document.querySelectorAll('#chuanhu_chatbot>.wrap>.message-wrap .message.bot pre').forEach(addCopyCodeButton); - } - if (node.tagName === 'INPUT' && node.getAttribute('type') === 'range') { - setSlider(); - } - } - for (var node of mmutation.removedNodes) { - if (node.nodeType === 1 && node.classList.contains('message') && node.getAttribute('data-testid') === 'bot') { - if (shouldRenderLatex) { - renderMathJax(); - mathjaxUpdated = false; - } - saveHistoryHtml(); - document.querySelectorAll('#chuanhu_chatbot>.wrap>.message-wrap .message.bot').forEach(addChuanhuButton); - document.querySelectorAll('#chuanhu_chatbot>.wrap>.message-wrap .message.bot pre').forEach(addCopyCodeButton); - } - } - } else if (mmutation.type === 'attributes') { - if (mmutation.target.nodeType === 1 && mmutation.target.classList.contains('message') && mmutation.target.getAttribute('data-testid') === 'bot') { - document.querySelectorAll('#chuanhu_chatbot>.wrap>.message-wrap .message.bot pre').forEach(addCopyCodeButton); // 目前写的是有点问题的,会导致加button次数过多,但是bot对话内容生成时又是不断覆盖pre的…… - if (isThrottled) break; // 为了防止重复不断疯狂渲染,加上等待_(:з」∠)_ - isThrottled = true; - clearTimeout(timeoutId); - timeoutId = setTimeout(() => { - isThrottled = false; - if (shouldRenderLatex) { - renderMathJax(); - mathjaxUpdated = false; - } - document.querySelectorAll('#chuanhu_chatbot>.wrap>.message-wrap .message.bot').forEach(addChuanhuButton); - saveHistoryHtml(); - }, 500); - } - } - } -}); -mObserver.observe(document.documentElement, { attributes: true, childList: true, subtree: true }); - -var loadhistorytime = 0; // for debugging -function saveHistoryHtml() { - var historyHtml = document.querySelector('#chuanhu_chatbot > .wrap'); - localStorage.setItem('chatHistory', historyHtml.innerHTML); - // console.log("History Saved") - historyLoaded = false; -} -function loadHistoryHtml() { - var historyHtml = localStorage.getItem('chatHistory'); - if (!historyHtml) { - historyLoaded = true; - return; // no history, do nothing - } - userLogged = localStorage.getItem('userLogged'); - if (userLogged){ - historyLoaded = true; - return; // logged in, do nothing - } - if (!historyLoaded) { - var tempDiv = document.createElement('div'); - tempDiv.innerHTML = historyHtml; - var buttons = tempDiv.querySelectorAll('button.chuanhu-btn'); - for (var i = 0; i < buttons.length; i++) { - buttons[i].parentNode.removeChild(buttons[i]); - } - var fakeHistory = document.createElement('div'); - fakeHistory.classList.add('history-message'); - fakeHistory.innerHTML = tempDiv.innerHTML; - webLocale(); - chatbotWrap.insertBefore(fakeHistory, chatbotWrap.firstChild); - // var fakeHistory = document.createElement('div'); - // fakeHistory.classList.add('history-message'); - // fakeHistory.innerHTML = historyHtml; - // chatbotWrap.insertBefore(fakeHistory, chatbotWrap.firstChild); - historyLoaded = true; - console.log("History Loaded"); - loadhistorytime += 1; // for debugging - } else { - historyLoaded = false; - } -} -function clearHistoryHtml() { - localStorage.removeItem("chatHistory"); - historyMessages = chatbotWrap.querySelector('.history-message'); - if (historyMessages) { - chatbotWrap.removeChild(historyMessages); - 
console.log("History Cleared"); - } -} -function emptyHistory() { - empty_botton.addEventListener("click", function () { - clearHistoryHtml(); - }); -} - -// 监视页面内部 DOM 变动 -var observer = new MutationObserver(function (mutations) { - gradioLoaded(mutations); -}); -observer.observe(targetNode, { childList: true, subtree: true }); - -// 监视页面变化 -window.addEventListener("DOMContentLoaded", function () { - isInIframe = (window.self !== window.top); - historyLoaded = false; - shouldRenderLatex = !!document.querySelector('script[src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.5/MathJax.js?config=TeX-MML-AM_CHTML"]'); -}); -window.addEventListener('resize', setChatbotHeight); -window.addEventListener('scroll', setChatbotHeight); -window.matchMedia("(prefers-color-scheme: dark)").addEventListener("change", adjustDarkMode); - -// button svg code -const copyIcon = ''; -const copiedIcon = ''; -const mdIcon = ''; -const rawIcon = ''; diff --git a/spaces/keithhon/Real-Time-Voice-Cloning/vocoder/audio.py b/spaces/keithhon/Real-Time-Voice-Cloning/vocoder/audio.py deleted file mode 100644 index 116396261e184b9968971bd06fabc6f525e0c2fe..0000000000000000000000000000000000000000 --- a/spaces/keithhon/Real-Time-Voice-Cloning/vocoder/audio.py +++ /dev/null @@ -1,108 +0,0 @@ -import math -import numpy as np -import librosa -import vocoder.hparams as hp -from scipy.signal import lfilter -import soundfile as sf - - -def label_2_float(x, bits) : - return 2 * x / (2**bits - 1.) - 1. - - -def float_2_label(x, bits) : - assert abs(x).max() <= 1.0 - x = (x + 1.) * (2**bits - 1) / 2 - return x.clip(0, 2**bits - 1) - - -def load_wav(path) : - return librosa.load(str(path), sr=hp.sample_rate)[0] - - -def save_wav(x, path) : - sf.write(path, x.astype(np.float32), hp.sample_rate) - - -def split_signal(x) : - unsigned = x + 2**15 - coarse = unsigned // 256 - fine = unsigned % 256 - return coarse, fine - - -def combine_signal(coarse, fine) : - return coarse * 256 + fine - 2**15 - - -def encode_16bits(x) : - return np.clip(x * 2**15, -2**15, 2**15 - 1).astype(np.int16) - - -mel_basis = None - - -def linear_to_mel(spectrogram): - global mel_basis - if mel_basis is None: - mel_basis = build_mel_basis() - return np.dot(mel_basis, spectrogram) - - -def build_mel_basis(): - return librosa.filters.mel(hp.sample_rate, hp.n_fft, n_mels=hp.num_mels, fmin=hp.fmin) - - -def normalize(S): - return np.clip((S - hp.min_level_db) / -hp.min_level_db, 0, 1) - - -def denormalize(S): - return (np.clip(S, 0, 1) * -hp.min_level_db) + hp.min_level_db - - -def amp_to_db(x): - return 20 * np.log10(np.maximum(1e-5, x)) - - -def db_to_amp(x): - return np.power(10.0, x * 0.05) - - -def spectrogram(y): - D = stft(y) - S = amp_to_db(np.abs(D)) - hp.ref_level_db - return normalize(S) - - -def melspectrogram(y): - D = stft(y) - S = amp_to_db(linear_to_mel(np.abs(D))) - return normalize(S) - - -def stft(y): - return librosa.stft(y=y, n_fft=hp.n_fft, hop_length=hp.hop_length, win_length=hp.win_length) - - -def pre_emphasis(x): - return lfilter([1, -hp.preemphasis], [1], x) - - -def de_emphasis(x): - return lfilter([1], [1, -hp.preemphasis], x) - - -def encode_mu_law(x, mu) : - mu = mu - 1 - fx = np.sign(x) * np.log(1 + mu * np.abs(x)) / np.log(1 + mu) - return np.floor((fx + 1) / 2 * mu + 0.5) - - -def decode_mu_law(y, mu, from_labels=True) : - if from_labels: - y = label_2_float(y, math.log2(mu)) - mu = mu - 1 - x = np.sign(y) / mu * ((1 + mu) ** np.abs(y) - 1) - return x - diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker/launcher.py 
b/spaces/kevinwang676/ChatGLM2-SadTalker/launcher.py deleted file mode 100644 index 17ce9f1a18c3d563333bbb0eacc2922fb8524e3f..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-SadTalker/launcher.py +++ /dev/null @@ -1,204 +0,0 @@ -# this scripts installs necessary requirements and launches main program in webui.py -# borrow from : https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/master/launch.py -import subprocess -import os -import sys -import importlib.util -import shlex -import platform -import json - -python = sys.executable -git = os.environ.get('GIT', "git") -index_url = os.environ.get('INDEX_URL', "") -stored_commit_hash = None -skip_install = False -dir_repos = "repositories" -script_path = os.path.dirname(os.path.dirname(os.path.realpath(__file__))) - -if 'GRADIO_ANALYTICS_ENABLED' not in os.environ: - os.environ['GRADIO_ANALYTICS_ENABLED'] = 'False' - - -def check_python_version(): - is_windows = platform.system() == "Windows" - major = sys.version_info.major - minor = sys.version_info.minor - micro = sys.version_info.micro - - if is_windows: - supported_minors = [10] - else: - supported_minors = [7, 8, 9, 10, 11] - - if not (major == 3 and minor in supported_minors): - - raise (f""" -INCOMPATIBLE PYTHON VERSION -This program is tested with 3.10.6 Python, but you have {major}.{minor}.{micro}. -If you encounter an error with "RuntimeError: Couldn't install torch." message, -or any other error regarding unsuccessful package (library) installation, -please downgrade (or upgrade) to the latest version of 3.10 Python -and delete current Python and "venv" folder in WebUI's directory. -You can download 3.10 Python from here: https://www.python.org/downloads/release/python-3109/ -{"Alternatively, use a binary release of WebUI: https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases" if is_windows else ""} -Use --skip-python-version-check to suppress this warning. -""") - - -def commit_hash(): - global stored_commit_hash - - if stored_commit_hash is not None: - return stored_commit_hash - - try: - stored_commit_hash = run(f"{git} rev-parse HEAD").strip() - except Exception: - stored_commit_hash = "" - - return stored_commit_hash - - -def run(command, desc=None, errdesc=None, custom_env=None, live=False): - if desc is not None: - print(desc) - - if live: - result = subprocess.run(command, shell=True, env=os.environ if custom_env is None else custom_env) - if result.returncode != 0: - raise RuntimeError(f"""{errdesc or 'Error running command'}. -Command: {command} -Error code: {result.returncode}""") - - return "" - - result = subprocess.run(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True, env=os.environ if custom_env is None else custom_env) - - if result.returncode != 0: - - message = f"""{errdesc or 'Error running command'}. 
-Command: {command} -Error code: {result.returncode} -stdout: {result.stdout.decode(encoding="utf8", errors="ignore") if len(result.stdout)>0 else ''} -stderr: {result.stderr.decode(encoding="utf8", errors="ignore") if len(result.stderr)>0 else ''} -""" - raise RuntimeError(message) - - return result.stdout.decode(encoding="utf8", errors="ignore") - - -def check_run(command): - result = subprocess.run(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True) - return result.returncode == 0 - - -def is_installed(package): - try: - spec = importlib.util.find_spec(package) - except ModuleNotFoundError: - return False - - return spec is not None - - -def repo_dir(name): - return os.path.join(script_path, dir_repos, name) - - -def run_python(code, desc=None, errdesc=None): - return run(f'"{python}" -c "{code}"', desc, errdesc) - - -def run_pip(args, desc=None): - if skip_install: - return - - index_url_line = f' --index-url {index_url}' if index_url != '' else '' - return run(f'"{python}" -m pip {args} --prefer-binary{index_url_line}', desc=f"Installing {desc}", errdesc=f"Couldn't install {desc}") - - -def check_run_python(code): - return check_run(f'"{python}" -c "{code}"') - - -def git_clone(url, dir, name, commithash=None): - # TODO clone into temporary dir and move if successful - - if os.path.exists(dir): - if commithash is None: - return - - current_hash = run(f'"{git}" -C "{dir}" rev-parse HEAD', None, f"Couldn't determine {name}'s hash: {commithash}").strip() - if current_hash == commithash: - return - - run(f'"{git}" -C "{dir}" fetch', f"Fetching updates for {name}...", f"Couldn't fetch {name}") - run(f'"{git}" -C "{dir}" checkout {commithash}', f"Checking out commit for {name} with hash: {commithash}...", f"Couldn't checkout commit {commithash} for {name}") - return - - run(f'"{git}" clone "{url}" "{dir}"', f"Cloning {name} into {dir}...", f"Couldn't clone {name}") - - if commithash is not None: - run(f'"{git}" -C "{dir}" checkout {commithash}', None, "Couldn't checkout {name}'s hash: {commithash}") - - -def git_pull_recursive(dir): - for subdir, _, _ in os.walk(dir): - if os.path.exists(os.path.join(subdir, '.git')): - try: - output = subprocess.check_output([git, '-C', subdir, 'pull', '--autostash']) - print(f"Pulled changes for repository in '{subdir}':\n{output.decode('utf-8').strip()}\n") - except subprocess.CalledProcessError as e: - print(f"Couldn't perform 'git pull' on repository in '{subdir}':\n{e.output.decode('utf-8').strip()}\n") - - -def run_extension_installer(extension_dir): - path_installer = os.path.join(extension_dir, "install.py") - if not os.path.isfile(path_installer): - return - - try: - env = os.environ.copy() - env['PYTHONPATH'] = os.path.abspath(".") - - print(run(f'"{python}" "{path_installer}"', errdesc=f"Error running install.py for extension {extension_dir}", custom_env=env)) - except Exception as e: - print(e, file=sys.stderr) - - -def prepare_environment(): - global skip_install - - torch_command = os.environ.get('TORCH_COMMAND', "pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu113") - - ## check windows - if sys.platform != 'win32': - requirements_file = os.environ.get('REQS_FILE', "req.txt") - else: - requirements_file = os.environ.get('REQS_FILE', "requirements.txt") - - commit = commit_hash() - - print(f"Python {sys.version}") - print(f"Commit hash: {commit}") - - if not is_installed("torch") or not is_installed("torchvision"): - run(f'"{python}" -m 
{torch_command}', "Installing torch and torchvision", "Couldn't install torch", live=True) - - run_pip(f"install -r \"{requirements_file}\"", "requirements for SadTalker WebUI (may take longer time in first time)") - - if sys.platform != 'win32' and not is_installed('tts'): - run_pip(f"install TTS", "install TTS individually in SadTalker, which might not work on windows.") - - -def start(): - print(f"Launching SadTalker Web UI") - from app_sadtalker import sadtalker_demo - demo = sadtalker_demo() - demo.queue() - demo.launch() - -if __name__ == "__main__": - prepare_environment() - start() \ No newline at end of file diff --git a/spaces/kkawamu1/huggingface_multi_inference_rank_eval/app/pages/03_reference.py b/spaces/kkawamu1/huggingface_multi_inference_rank_eval/app/pages/03_reference.py deleted file mode 100644 index b6f4eab4f701a0c2894b9b638043956fb15842ed..0000000000000000000000000000000000000000 --- a/spaces/kkawamu1/huggingface_multi_inference_rank_eval/app/pages/03_reference.py +++ /dev/null @@ -1,57 +0,0 @@ -# Copyright 2022 Ken Kawamura -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import streamlit as st - -st.set_page_config(layout="wide") -st.markdown(f'

    Reference

    ', unsafe_allow_html=True) - -st.markdown('### A chunk of codes used for this projects is taken and/or insipred from the following works and their related repository:') - -st.markdown("""1. @inproceedings{sanh2022multitask, - title={Multitask Prompted Training Enables Zero-Shot Task Generalization}, - author={Victor Sanh and Albert Webson and Colin Raffel and Stephen Bach and Lintang Sutawika and Zaid Alyafeai and Antoine Chaffin and Arnaud Stiegler and Arun Raja and Manan Dey and M Saiful Bari and Canwen Xu and Urmish Thakker and Shanya Sharma Sharma and Eliza Szczechla and Taewoon Kim and Gunjan Chhablani and Nihal Nayak and Debajyoti Datta and Jonathan Chang and Mike Tian-Jian Jiang and Han Wang and Matteo Manica and Sheng Shen and Zheng Xin Yong and Harshit Pandey and Rachel Bawden and Thomas Wang and Trishala Neeraj and Jos Rozen and Abheesht Sharma and Andrea Santilli and Thibault Fevry and Jason Alan Fries and Ryan Teehan and Teven Le Scao and Stella Biderman and Leo Gao and Thomas Wolf and Alexander M Rush}, - booktitle={International Conference on Learning Representations}, - year={2022} - url={https://openreview.net/forum?id=9Vrb9D0WI4}""") - -st.markdown("""2. @software{eval-harness, author = {Gao, Leo and - Tow, Jonathan and - Biderman, Stella and - Black, Sid and - DiPofi, Anthony and - Foster, Charles and - Golding, Laurence and - Hsu, Jeffrey and - McDonell, Kyle and - Muennighoff, Niklas and - Phang, Jason and - Reynolds, Laria and - Tang, Eric and - Thite, Anish and - Wang, Ben and - Wang, Kevin and - Zou, Andy}, - title = {A framework for few-shot language model evaluation}, - month = sep, - year = 2021, - publisher = {Zenodo}, - version = {v0.0.1}, - doi = {10.5281/zenodo.5371628}, - url = {https://doi.org/10.5281/zenodo.5371628} - } -}""") - -st.markdown("""3. For style https://fossheim.io/writing/posts/css-text-gradient/""") -st.markdown("""4. For style https://css-tricks.com/css-hover-effects-background-masks-3d/""") diff --git a/spaces/koajoel/PolyFormer/fairseq/examples/bart/README.summarization.md b/spaces/koajoel/PolyFormer/fairseq/examples/bart/README.summarization.md deleted file mode 100644 index 8727584f2b2bdd880c6cd3abbf39b75dfbf4a67c..0000000000000000000000000000000000000000 --- a/spaces/koajoel/PolyFormer/fairseq/examples/bart/README.summarization.md +++ /dev/null @@ -1,102 +0,0 @@ -# Fine-tuning BART on CNN-Dailymail summarization task - -### 1) Download the CNN and Daily Mail data and preprocess it into data files with non-tokenized cased samples. - -Follow the instructions [here](https://github.com/abisee/cnn-dailymail) to download the original CNN and Daily Mail datasets. To preprocess the data, refer to the pointers in [this issue](https://github.com/pytorch/fairseq/issues/1391) or check out the code [here](https://github.com/artmatsak/cnn-dailymail). - -Follow the instructions [here](https://github.com/EdinburghNLP/XSum) to download the original Extreme Summarization datasets, or check out the code [here](https://github.com/EdinburghNLP/XSum/tree/master/XSum-Dataset), Please keep the raw dataset and make sure no tokenization nor BPE on the dataset. 
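The download/preprocess step above should leave one raw, cased example per line in plain-text `.source`/`.target` files. As a quick sanity check, the snippet below verifies the layout that the BPE and binarization steps later in this README consume; the `cnn_dm` directory name and the split/file names are inferred from those later commands rather than spelled out here (for XSum, substitute your own task directory), so treat this as a sketch under those assumptions, not part of the original instructions:

```bash
# Sanity-check the preprocessed files before BPE: one untokenized, cased
# example per line, with matching line counts for each source/target pair.
# (train/val feed the BPE + binarization steps below; test.source is read
# by the inference example, and test.target is assumed only for scoring.)
TASK=cnn_dm   # assumed directory name; use your own task directory for XSum
for SPLIT in train val test; do
  for LANG in source target; do
    wc -l "$TASK/$SPLIT.$LANG"
  done
done
```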
- -### 2) BPE preprocess: - -```bash -wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json' -wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe' -wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/dict.txt' - -TASK=cnn_dm -for SPLIT in train val -do - for LANG in source target - do - python -m examples.roberta.multiprocessing_bpe_encoder \ - --encoder-json encoder.json \ - --vocab-bpe vocab.bpe \ - --inputs "$TASK/$SPLIT.$LANG" \ - --outputs "$TASK/$SPLIT.bpe.$LANG" \ - --workers 60 \ - --keep-empty; - done -done -``` - -### 3) Binarize dataset: -```bash -fairseq-preprocess \ - --source-lang "source" \ - --target-lang "target" \ - --trainpref "${TASK}/train.bpe" \ - --validpref "${TASK}/val.bpe" \ - --destdir "${TASK}-bin/" \ - --workers 60 \ - --srcdict dict.txt \ - --tgtdict dict.txt; -``` - -### 4) Fine-tuning on CNN-DM summarization task: -Example fine-tuning CNN-DM -```bash -TOTAL_NUM_UPDATES=20000 -WARMUP_UPDATES=500 -LR=3e-05 -MAX_TOKENS=2048 -UPDATE_FREQ=4 -BART_PATH=/path/to/bart/model.pt - -CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 fairseq-train cnn_dm-bin \ - --restore-file $BART_PATH \ - --max-tokens $MAX_TOKENS \ - --task translation \ - --source-lang source --target-lang target \ - --truncate-source \ - --layernorm-embedding \ - --share-all-embeddings \ - --share-decoder-input-output-embed \ - --reset-optimizer --reset-dataloader --reset-meters \ - --required-batch-size-multiple 1 \ - --arch bart_large \ - --criterion label_smoothed_cross_entropy \ - --label-smoothing 0.1 \ - --dropout 0.1 --attention-dropout 0.1 \ - --weight-decay 0.01 --optimizer adam --adam-betas "(0.9, 0.999)" --adam-eps 1e-08 \ - --clip-norm 0.1 \ - --lr-scheduler polynomial_decay --lr $LR --total-num-update $TOTAL_NUM_UPDATES --warmup-updates $WARMUP_UPDATES \ - --fp16 --update-freq $UPDATE_FREQ \ - --skip-invalid-size-inputs-valid-test \ - --find-unused-parameters; -``` -Above is expected to run on `1` node with `8 32gb-V100`. -Expected training time is about `5 hours`. Training time can be reduced with distributed training on `4` nodes and `--update-freq 1`. - -Use TOTAL_NUM_UPDATES=15000 UPDATE_FREQ=2 for Xsum task - -### Inference for CNN-DM test data using above trained checkpoint. -After training the model as mentioned in previous step, you can perform inference with checkpoints in `checkpoints/` directory using `eval_cnn.py`, for example - -```bash -cp data-bin/cnn_dm/dict.source.txt checkpoints/ -python examples/bart/summarize.py \ - --model-dir checkpoints \ - --model-file checkpoint_best.pt \ - --src cnn_dm/test.source \ - --out cnn_dm/test.hypo -``` -For XSUM, which uses beam=6, lenpen=1.0, max_len_b=60, min_len=10: -```bash -cp data-bin/cnn_dm/dict.source.txt checkpoints/ -python examples/bart/summarize.py \ - --model-dir checkpoints \ - --model-file checkpoint_best.pt \ - --src cnn_dm/test.source \ - --out cnn_dm/test.hypo \ - --xsum-kwargs -``` diff --git a/spaces/kriss-ml/Boston-House-Price/app.py b/spaces/kriss-ml/Boston-House-Price/app.py deleted file mode 100644 index 669a6290fd4493585c1ae2cd59490e5938cfb8bb..0000000000000000000000000000000000000000 --- a/spaces/kriss-ml/Boston-House-Price/app.py +++ /dev/null @@ -1,136 +0,0 @@ -import pickle -import numpy as np -import gradio as gr -# Note- When your are uploading your project files from Jupyter notebook to repos such as GitHub/HuggingFace, -#Then the libraries installed are done throught the "requirements.txt" file, only the imports are done in the main files. 
-# Also remove any EDA steps taken, such as plotting charts, printing feature etc, -# they cannot be implemented within the app, so is'nt required. - -# Assignment -# Build the linear regression model using scikit learn in boston data to predict 'Price' based on other dependent variable. Here is the code to load the data: import numpy as np import pandas as pd import scipy.stats as stats import matplotlib.pyplot as plt import sklearn from sklearn.datasets import load_boston boston = load_boston() bos = pd.DataFrame(boston.data) Task: Deploy this assignment in any cloud platform.(Try to look for free cloud platform) Assignment: Submit assignment’s deployable link only. - -# Solution -# Importing all dependencies - - -# Here is the code to load the data: -import numpy as np -import pandas as pd -import scipy.stats as stats -import matplotlib.pyplot as plt -import sklearn -# from sklearn.datasets import load_boston -# boston = load_boston() -# bos = pd.DataFrame(boston.data) -# Task: Deploy this assignment in any cloud platform.(Try to look for -# free cloud platform) -# Assignment: Submit assignment’s deployable link only. - -# Solution - -# Importing all dependencies - -import numpy as np # numpy for Arithmetic Manipulations -import pandas as pd # Used for Structured Dataframes -import matplotlib.pyplot as plt # matplotlib.pyplot used for Data Viz -import scipy.stats as stats # scipy.stats for Statistical Analysis -import gradio as gr - -# %matplotlib inline -# Load scikit learn module for linear regression and in-built dataset - -import sklearn.linear_model as Linear_Regression -# from sklearn.datasets import load_boston -from sklearn.metrics import mean_squared_error, r2_score -from sklearn.model_selection import train_test_split # to split the data into train & test data - -# Assigning our Load boston data to a variable - -# boston_data = load_boston() - -# print("The attributes of boston data are:",boston_data.keys()) - -# import os -# os.getcwd() -df_Boston_data = pd.read_csv("boston_house_prices.csv", header = 1) - - -df_Boston_data.head() -# Description of boston data -df_Boston_data.describe() -# Linear regression with Boston housing data using least square method -# apply sklearn linear regression model -from sklearn.linear_model import LinearRegression as Linear_Regression -#print(help(Linear_Regression)) -# Note : Hence Linear regression model requires two input paramter " X " (training data), " y " (Target variable) , thus split boston dataframe into X & y variable - -#"Features Data Frame (df_Boston_X)" -df_Boston_X=df_Boston_data.iloc[:,:-1] # Select all columns from Boston dataframe except "PRICE" column -#"Target Data Frame (df_Boston_y)" -df_Boston_y=df_Boston_data.iloc[:,-1:] # Select only "PRICE" column from Boston dataframe - -print("The Training data (X) are \n", df_Boston_X.head(2)) -print("\nThe Target variable (Y) are \n", df_Boston_y.head(2)) -# Fit the Training Data (X) and Target variable into Linear regression model - -linear_reg_model = Linear_Regression() -# Fit the data -linear_reg_model.fit(df_Boston_X,df_Boston_y) - -# Calculate the Returns the coefficient of determination (R^2) -linear_reg_model_R_square =linear_reg_model.score(df_Boston_X,df_Boston_y) - -# Prediction of Y (Target variable) based upon Training data(X) -linear_reg_model_y_price_predicted= linear_reg_model.predict(df_Boston_X) - -print("The coefficient of determination (R^2) R-square: ",linear_reg_model_R_square,"\n") -print("The Estimated Intercept: ",linear_reg_model.intercept_[0] , "\n") 
-print("No. of Estimated coefficients :",len(linear_reg_model.coef_[0]), "\n" ) -print("The Predicted Prices(Y) (first 5 values) based upon upon features: \n",linear_reg_model_y_price_predicted[0:5]) - -# Features ( independent variables) and their estimates coefficients -df_linear_reg_model_coef= pd.DataFrame(list(zip(df_Boston_X.columns, linear_reg_model.coef_[0])), columns=['Features','Estimated_Coefficients']) - -print("Features ( independent variables) and their estimates coefficients") -df_linear_reg_model_coef - - -# Deployment of our ML Model -# We start by importing our pickle library -# import pickle -# we will now dump our Ml model linearn regression object "linear_reg_model" in the pickle function -# We use the "Open ()"function to write this model into a new pickled file called "model.pkl", -# The 'wb' parameter indicates to Write the file in Binary mode, so that no changes are made to our data -#Simiraly 'wt' would be used to write a file in text mode -# To read a file in binary mode we use 'rb'parameter later -# pickle.dump(linear_reg_model, open('model.pkl', 'wb')) - - -# Testing our Pickled model -#To test our pickled file we use the 'pickle.load()' method to Open the pickled file in Binary Read mode using the parameter "rb" -# We read the file into a new python object called "Model" -# Model = pickle.load(open('model.pkl', 'rb')) - - -# Now we will create our function like this: - -def house_price(CRIM,ZN,INDUS,CHAS,NOX,RM,AGE,DIS,RAD,TAX,PTRATIO,B,LSTAT): - #turning the arguments into a numpy array - x = np.array([CRIM,ZN,INDUS,CHAS,NOX,RM,AGE,DIS,RAD,TAX,PTRATIO,B,LSTAT]) - prediction = linear_reg_model.predict(x.reshape(1, -1)) - return prediction - -# In the code above, we passed the feature columns from our data as arguments into a function which we named diabetes. Then we turned the arguments into a NumPy array which we then passed onto our model for prediction. Finally we returned the predicted result of our model. - - -# interface = gr.Interface(fn= house_price, inputs=['number','number','number','number','number','number','number','number','number','number','number','number','number'], outputs=outputs,description="This is a diabetes model") -# Or - -# interface = gr.Interface(fn= house_price, inputs=['number','number','number','number','number','number','number','number','number','number','number','number','number'], outputs='number',description="This is a diabetes model") -# The gradio app interface/ library is constantly being updated, the "placeholder" parameter is no longer supported. -# Please keep up to date with the current library features, or check the functions docstring for details -# The 'examples' parameter-> examples: sample inputs for the function; if provided, appear below the UI components and can be clicked to populate the interface. Should be nested list, in which the outer list consists of samples and each inner list consists of an input corresponding to each input component. A string path to a directory of examples can also be provided, but it should be within the directory with the python file running the gradio app. If there are multiple input components and a directory is provided, a log.csv file must be present in the directory to link corresponding inputs. 
- -interface = gr.Interface(fn= house_price, inputs=['number','number','number','number','number','number','number','number','number','number','number','number','number'], outputs='number',description="This is a diabetes model", examples= [[0.00632,18.0,2.31,0,0.538,6.575,65.2,4.09,1,296,15.3,396.9,4.98]]) - -interface.launch() \ No newline at end of file diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/filelock/version.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/filelock/version.py deleted file mode 100644 index 3192641d0e0f4c13b70ca99cefd0a5af22354125..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/filelock/version.py +++ /dev/null @@ -1,4 +0,0 @@ -# file generated by setuptools_scm -# don't change, don't track in version control -__version__ = version = '3.12.0' -__version_tuple__ = version_tuple = (3, 12, 0) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/misc/classifyTools.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/misc/classifyTools.py deleted file mode 100644 index e46386230e5c826486963cf47640ae0a920377cb..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/misc/classifyTools.py +++ /dev/null @@ -1,172 +0,0 @@ -""" fontTools.misc.classifyTools.py -- tools for classifying things. -""" - - -class Classifier(object): - - """ - Main Classifier object, used to classify things into similar sets. - """ - - def __init__(self, sort=True): - - self._things = set() # set of all things known so far - self._sets = [] # list of class sets produced so far - self._mapping = {} # map from things to their class set - self._dirty = False - self._sort = sort - - def add(self, set_of_things): - """ - Add a set to the classifier. Any iterable is accepted. - """ - if not set_of_things: - return - - self._dirty = True - - things, sets, mapping = self._things, self._sets, self._mapping - - s = set(set_of_things) - intersection = s.intersection(things) # existing things - s.difference_update(intersection) # new things - difference = s - del s - - # Add new class for new things - if difference: - things.update(difference) - sets.append(difference) - for thing in difference: - mapping[thing] = difference - del difference - - while intersection: - # Take one item and process the old class it belongs to - old_class = mapping[next(iter(intersection))] - old_class_intersection = old_class.intersection(intersection) - - # Update old class to remove items from new set - old_class.difference_update(old_class_intersection) - - # Remove processed items from todo list - intersection.difference_update(old_class_intersection) - - # Add new class for the intersection with old class - sets.append(old_class_intersection) - for thing in old_class_intersection: - mapping[thing] = old_class_intersection - del old_class_intersection - - def update(self, list_of_sets): - """ - Add a a list of sets to the classifier. Any iterable of iterables is accepted. 
- """ - for s in list_of_sets: - self.add(s) - - def _process(self): - if not self._dirty: - return - - # Do any deferred processing - sets = self._sets - self._sets = [s for s in sets if s] - - if self._sort: - self._sets = sorted(self._sets, key=lambda s: (-len(s), sorted(s))) - - self._dirty = False - - # Output methods - - def getThings(self): - """Returns the set of all things known so far. - - The return value belongs to the Classifier object and should NOT - be modified while the classifier is still in use. - """ - self._process() - return self._things - - def getMapping(self): - """Returns the mapping from things to their class set. - - The return value belongs to the Classifier object and should NOT - be modified while the classifier is still in use. - """ - self._process() - return self._mapping - - def getClasses(self): - """Returns the list of class sets. - - The return value belongs to the Classifier object and should NOT - be modified while the classifier is still in use. - """ - self._process() - return self._sets - - -def classify(list_of_sets, sort=True): - """ - Takes a iterable of iterables (list of sets from here on; but any - iterable works.), and returns the smallest list of sets such that - each set, is either a subset, or is disjoint from, each of the input - sets. - - In other words, this function classifies all the things present in - any of the input sets, into similar classes, based on which sets - things are a member of. - - If sort=True, return class sets are sorted by decreasing size and - their natural sort order within each class size. Otherwise, class - sets are returned in the order that they were identified, which is - generally not significant. - - >>> classify([]) == ([], {}) - True - >>> classify([[]]) == ([], {}) - True - >>> classify([[], []]) == ([], {}) - True - >>> classify([[1]]) == ([{1}], {1: {1}}) - True - >>> classify([[1,2]]) == ([{1, 2}], {1: {1, 2}, 2: {1, 2}}) - True - >>> classify([[1],[2]]) == ([{1}, {2}], {1: {1}, 2: {2}}) - True - >>> classify([[1,2],[2]]) == ([{1}, {2}], {1: {1}, 2: {2}}) - True - >>> classify([[1,2],[2,4]]) == ([{1}, {2}, {4}], {1: {1}, 2: {2}, 4: {4}}) - True - >>> classify([[1,2],[2,4,5]]) == ( - ... [{4, 5}, {1}, {2}], {1: {1}, 2: {2}, 4: {4, 5}, 5: {4, 5}}) - True - >>> classify([[1,2],[2,4,5]], sort=False) == ( - ... [{1}, {4, 5}, {2}], {1: {1}, 2: {2}, 4: {4, 5}, 5: {4, 5}}) - True - >>> classify([[1,2,9],[2,4,5]], sort=False) == ( - ... [{1, 9}, {4, 5}, {2}], {1: {1, 9}, 2: {2}, 4: {4, 5}, 5: {4, 5}, - ... 9: {1, 9}}) - True - >>> classify([[1,2,9,15],[2,4,5]], sort=False) == ( - ... [{1, 9, 15}, {4, 5}, {2}], {1: {1, 9, 15}, 2: {2}, 4: {4, 5}, - ... 5: {4, 5}, 9: {1, 9, 15}, 15: {1, 9, 15}}) - True - >>> classes, mapping = classify([[1,2,9,15],[2,4,5],[15,5]], sort=False) - >>> set([frozenset(c) for c in classes]) == set( - ... 
[frozenset(s) for s in ({1, 9}, {4}, {2}, {5}, {15})]) - True - >>> mapping == {1: {1, 9}, 2: {2}, 4: {4}, 5: {5}, 9: {1, 9}, 15: {15}} - True - """ - classifier = Classifier(sort=sort) - classifier.update(list_of_sets) - return classifier.getClasses(), classifier.getMapping() - - -if __name__ == "__main__": - import sys, doctest - - sys.exit(doctest.testmod(optionflags=doctest.ELLIPSIS).failed) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/V_D_M_X_.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/V_D_M_X_.py deleted file mode 100644 index 0632173cd9037e604db9fddfd7a87a0e28892857..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/V_D_M_X_.py +++ /dev/null @@ -1,241 +0,0 @@ -from . import DefaultTable -from fontTools.misc import sstruct -from fontTools.misc.textTools import safeEval -import struct - -VDMX_HeaderFmt = """ - > # big endian - version: H # Version number (0 or 1) - numRecs: H # Number of VDMX groups present - numRatios: H # Number of aspect ratio groupings -""" -# the VMDX header is followed by an array of RatRange[numRatios] (i.e. aspect -# ratio ranges); -VDMX_RatRangeFmt = """ - > # big endian - bCharSet: B # Character set - xRatio: B # Value to use for x-Ratio - yStartRatio: B # Starting y-Ratio value - yEndRatio: B # Ending y-Ratio value -""" -# followed by an array of offset[numRatios] from start of VDMX table to the -# VDMX Group for this ratio range (offsets will be re-calculated on compile); -# followed by an array of Group[numRecs] records; -VDMX_GroupFmt = """ - > # big endian - recs: H # Number of height records in this group - startsz: B # Starting yPelHeight - endsz: B # Ending yPelHeight -""" -# followed by an array of vTable[recs] records. 
-VDMX_vTableFmt = """ - > # big endian - yPelHeight: H # yPelHeight to which values apply - yMax: h # Maximum value (in pels) for this yPelHeight - yMin: h # Minimum value (in pels) for this yPelHeight -""" - - -class table_V_D_M_X_(DefaultTable.DefaultTable): - def decompile(self, data, ttFont): - pos = 0 # track current position from to start of VDMX table - dummy, data = sstruct.unpack2(VDMX_HeaderFmt, data, self) - pos += sstruct.calcsize(VDMX_HeaderFmt) - self.ratRanges = [] - for i in range(self.numRatios): - ratio, data = sstruct.unpack2(VDMX_RatRangeFmt, data) - pos += sstruct.calcsize(VDMX_RatRangeFmt) - # the mapping between a ratio and a group is defined further below - ratio["groupIndex"] = None - self.ratRanges.append(ratio) - lenOffset = struct.calcsize(">H") - _offsets = [] # temporarily store offsets to groups - for i in range(self.numRatios): - offset = struct.unpack(">H", data[0:lenOffset])[0] - data = data[lenOffset:] - pos += lenOffset - _offsets.append(offset) - self.groups = [] - for groupIndex in range(self.numRecs): - # the offset to this group from beginning of the VDMX table - currOffset = pos - group, data = sstruct.unpack2(VDMX_GroupFmt, data) - # the group lenght and bounding sizes are re-calculated on compile - recs = group.pop("recs") - startsz = group.pop("startsz") - endsz = group.pop("endsz") - pos += sstruct.calcsize(VDMX_GroupFmt) - for j in range(recs): - vTable, data = sstruct.unpack2(VDMX_vTableFmt, data) - vTableLength = sstruct.calcsize(VDMX_vTableFmt) - pos += vTableLength - # group is a dict of (yMax, yMin) tuples keyed by yPelHeight - group[vTable["yPelHeight"]] = (vTable["yMax"], vTable["yMin"]) - # make sure startsz and endsz match the calculated values - minSize = min(group.keys()) - maxSize = max(group.keys()) - assert ( - startsz == minSize - ), "startsz (%s) must equal min yPelHeight (%s): group %d" % ( - group.startsz, - minSize, - groupIndex, - ) - assert ( - endsz == maxSize - ), "endsz (%s) must equal max yPelHeight (%s): group %d" % ( - group.endsz, - maxSize, - groupIndex, - ) - self.groups.append(group) - # match the defined offsets with the current group's offset - for offsetIndex, offsetValue in enumerate(_offsets): - # when numRecs < numRatios there can more than one ratio range - # sharing the same VDMX group - if currOffset == offsetValue: - # map the group with the ratio range thas has the same - # index as the offset to that group (it took me a while..) - self.ratRanges[offsetIndex]["groupIndex"] = groupIndex - # check that all ratio ranges have a group - for i in range(self.numRatios): - ratio = self.ratRanges[i] - if ratio["groupIndex"] is None: - from fontTools import ttLib - - raise ttLib.TTLibError("no group defined for ratRange %d" % i) - - def _getOffsets(self): - """ - Calculate offsets to VDMX_Group records. - For each ratRange return a list of offset values from the beginning of - the VDMX table to a VDMX_Group. 
- """ - lenHeader = sstruct.calcsize(VDMX_HeaderFmt) - lenRatRange = sstruct.calcsize(VDMX_RatRangeFmt) - lenOffset = struct.calcsize(">H") - lenGroupHeader = sstruct.calcsize(VDMX_GroupFmt) - lenVTable = sstruct.calcsize(VDMX_vTableFmt) - # offset to the first group - pos = lenHeader + self.numRatios * lenRatRange + self.numRatios * lenOffset - groupOffsets = [] - for group in self.groups: - groupOffsets.append(pos) - lenGroup = lenGroupHeader + len(group) * lenVTable - pos += lenGroup # offset to next group - offsets = [] - for ratio in self.ratRanges: - groupIndex = ratio["groupIndex"] - offsets.append(groupOffsets[groupIndex]) - return offsets - - def compile(self, ttFont): - if not (self.version == 0 or self.version == 1): - from fontTools import ttLib - - raise ttLib.TTLibError( - "unknown format for VDMX table: version %s" % self.version - ) - data = sstruct.pack(VDMX_HeaderFmt, self) - for ratio in self.ratRanges: - data += sstruct.pack(VDMX_RatRangeFmt, ratio) - # recalculate offsets to VDMX groups - for offset in self._getOffsets(): - data += struct.pack(">H", offset) - for group in self.groups: - recs = len(group) - startsz = min(group.keys()) - endsz = max(group.keys()) - gHeader = {"recs": recs, "startsz": startsz, "endsz": endsz} - data += sstruct.pack(VDMX_GroupFmt, gHeader) - for yPelHeight, (yMax, yMin) in sorted(group.items()): - vTable = {"yPelHeight": yPelHeight, "yMax": yMax, "yMin": yMin} - data += sstruct.pack(VDMX_vTableFmt, vTable) - return data - - def toXML(self, writer, ttFont): - writer.simpletag("version", value=self.version) - writer.newline() - writer.begintag("ratRanges") - writer.newline() - for ratio in self.ratRanges: - groupIndex = ratio["groupIndex"] - writer.simpletag( - "ratRange", - bCharSet=ratio["bCharSet"], - xRatio=ratio["xRatio"], - yStartRatio=ratio["yStartRatio"], - yEndRatio=ratio["yEndRatio"], - groupIndex=groupIndex, - ) - writer.newline() - writer.endtag("ratRanges") - writer.newline() - writer.begintag("groups") - writer.newline() - for groupIndex in range(self.numRecs): - group = self.groups[groupIndex] - recs = len(group) - startsz = min(group.keys()) - endsz = max(group.keys()) - writer.begintag("group", index=groupIndex) - writer.newline() - writer.comment("recs=%d, startsz=%d, endsz=%d" % (recs, startsz, endsz)) - writer.newline() - for yPelHeight, (yMax, yMin) in sorted(group.items()): - writer.simpletag( - "record", - [("yPelHeight", yPelHeight), ("yMax", yMax), ("yMin", yMin)], - ) - writer.newline() - writer.endtag("group") - writer.newline() - writer.endtag("groups") - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - if name == "version": - self.version = safeEval(attrs["value"]) - elif name == "ratRanges": - if not hasattr(self, "ratRanges"): - self.ratRanges = [] - for element in content: - if not isinstance(element, tuple): - continue - name, attrs, content = element - if name == "ratRange": - if not hasattr(self, "numRatios"): - self.numRatios = 1 - else: - self.numRatios += 1 - ratio = { - "bCharSet": safeEval(attrs["bCharSet"]), - "xRatio": safeEval(attrs["xRatio"]), - "yStartRatio": safeEval(attrs["yStartRatio"]), - "yEndRatio": safeEval(attrs["yEndRatio"]), - "groupIndex": safeEval(attrs["groupIndex"]), - } - self.ratRanges.append(ratio) - elif name == "groups": - if not hasattr(self, "groups"): - self.groups = [] - for element in content: - if not isinstance(element, tuple): - continue - name, attrs, content = element - if name == "group": - if not hasattr(self, "numRecs"): - self.numRecs = 1 - 
else: - self.numRecs += 1 - group = {} - for element in content: - if not isinstance(element, tuple): - continue - name, attrs, content = element - if name == "record": - yPelHeight = safeEval(attrs["yPelHeight"]) - yMax = safeEval(attrs["yMax"]) - yMin = safeEval(attrs["yMin"]) - group[yPelHeight] = (yMax, yMin) - self.groups.append(group) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/markdown_it/rules_core/inline.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/markdown_it/rules_core/inline.py deleted file mode 100644 index c3fd0b5e25dda5d8a5a644cc9e460d0f92ae2d1d..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/markdown_it/rules_core/inline.py +++ /dev/null @@ -1,10 +0,0 @@ -from .state_core import StateCore - - -def inline(state: StateCore) -> None: - """Parse inlines""" - for token in state.tokens: - if token.type == "inline": - if token.children is None: - token.children = [] - state.md.inline.parse(token.content, state.md, state.env, token.children) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/style/core.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/style/core.py deleted file mode 100644 index 4ff4618ca6a3d6c10cf29c197af3a640db9a3856..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/style/core.py +++ /dev/null @@ -1,283 +0,0 @@ -""" -Core functions and attributes for the matplotlib style library: - -``use`` - Select style sheet to override the current matplotlib settings. -``context`` - Context manager to use a style sheet temporarily. -``available`` - List available style sheets. -``library`` - A dictionary of style names and matplotlib settings. -""" - -import contextlib -import logging -import os -from pathlib import Path -import sys -import warnings - -if sys.version_info >= (3, 10): - import importlib.resources as importlib_resources -else: - # Even though Py3.9 has importlib.resources, it doesn't properly handle - # modules added in sys.path. - import importlib_resources - -import matplotlib as mpl -from matplotlib import _api, _docstring, _rc_params_in_file, rcParamsDefault - -_log = logging.getLogger(__name__) - -__all__ = ['use', 'context', 'available', 'library', 'reload_library'] - - -BASE_LIBRARY_PATH = os.path.join(mpl.get_data_path(), 'stylelib') -# Users may want multiple library paths, so store a list of paths. 
-USER_LIBRARY_PATHS = [os.path.join(mpl.get_configdir(), 'stylelib')] -STYLE_EXTENSION = 'mplstyle' -# A list of rcParams that should not be applied from styles -STYLE_BLACKLIST = { - 'interactive', 'backend', 'webagg.port', 'webagg.address', - 'webagg.port_retries', 'webagg.open_in_browser', 'backend_fallback', - 'toolbar', 'timezone', 'figure.max_open_warning', - 'figure.raise_window', 'savefig.directory', 'tk.window_focus', - 'docstring.hardcopy', 'date.epoch'} -_DEPRECATED_SEABORN_STYLES = { - s: s.replace("seaborn", "seaborn-v0_8") - for s in [ - "seaborn", - "seaborn-bright", - "seaborn-colorblind", - "seaborn-dark", - "seaborn-darkgrid", - "seaborn-dark-palette", - "seaborn-deep", - "seaborn-muted", - "seaborn-notebook", - "seaborn-paper", - "seaborn-pastel", - "seaborn-poster", - "seaborn-talk", - "seaborn-ticks", - "seaborn-white", - "seaborn-whitegrid", - ] -} -_DEPRECATED_SEABORN_MSG = ( - "The seaborn styles shipped by Matplotlib are deprecated since %(since)s, " - "as they no longer correspond to the styles shipped by seaborn. However, " - "they will remain available as 'seaborn-v0_8- - - - -
    - - - \ No newline at end of file diff --git a/spaces/nsarrazin/agents-js-llama/src/routes/generate/+server.ts b/spaces/nsarrazin/agents-js-llama/src/routes/generate/+server.ts deleted file mode 100644 index 40ba3e1e3922aaaac1bb4fb630748efe93811c86..0000000000000000000000000000000000000000 --- a/spaces/nsarrazin/agents-js-llama/src/routes/generate/+server.ts +++ /dev/null @@ -1,40 +0,0 @@ -import { error, json } from "@sveltejs/kit"; -import { - defaultTools, - HfAgent, - LLMFromEndpoint, - LLMFromHub, -} from "@huggingface/agents"; -import { HF_ACCESS_TOKEN, HF_ENDPOINT } from "$env/static/private"; - -export async function POST({ request }) { - const r = await request.json(); - const { prompt, tools: selectedTools, filetypes } = r; - const tools = defaultTools.filter((el) => selectedTools.includes(el.name)); - - let agent; - if (HF_ENDPOINT !== "") { - agent = new HfAgent( - HF_ACCESS_TOKEN, - LLMFromEndpoint(HF_ACCESS_TOKEN, HF_ENDPOINT), - tools - ); - } else { - agent = new HfAgent(HF_ACCESS_TOKEN, LLMFromHub(HF_ACCESS_TOKEN), tools); - } - - const files = filetypes - ? filetypes.map((el: string) => ({ - type: el, - })) - : undefined; - - let code = ""; - try { - code = await agent.generateCode(prompt, files); - } catch (e) { - throw error(500, e as Error); - } - - return json(code); -} diff --git a/spaces/ntt123/Vietnam-female-voice-TTS/models.py b/spaces/ntt123/Vietnam-female-voice-TTS/models.py deleted file mode 100644 index 54702f7161052c45f947a075b167422463e52b38..0000000000000000000000000000000000000000 --- a/spaces/ntt123/Vietnam-female-voice-TTS/models.py +++ /dev/null @@ -1,489 +0,0 @@ -import math - -import torch -from torch import nn -from torch.nn import Conv1d, Conv2d, ConvTranspose1d -from torch.nn import functional as F -from torch.nn.utils import remove_weight_norm, spectral_norm, weight_norm -from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence - -import attentions -import commons -import modules -from commons import get_padding, init_weights -from flow import ResidualCouplingBlock - - -class PriorEncoder(nn.Module): - def __init__( - self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - - self.emb = nn.Embedding(n_vocab, hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5) - self.pre_attn_encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers // 2, - kernel_size, - p_dropout, - ) - self.post_attn_encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers - n_layers // 2, - kernel_size, - p_dropout, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, y_lengths, attn): - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre_attn_encoder(x * x_mask, x_mask) - y = torch.einsum("bht,blt->bhl", x, attn) - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, y.size(2)), 1).to( - y.dtype - ) - y = self.post_attn_encoder(y * y_mask, y_mask) - stats = self.proj(y) * y_mask - - m, logs = torch.split(stats, 
self.out_channels, dim=1) - return y, m, logs, y_mask - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print("Removing weight norm...") - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - 
padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__( - self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=0, - gin_channels=0, - **kwargs - ): - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - 
self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - - self.enc_p = PriorEncoder( - n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 2, 4, gin_channels=gin_channels - ) - - if n_speakers > 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, attn, y, y_lengths, sid=None): - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, y_lengths, attn=attn) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - l_length = None - return ( - o, - l_length, - attn, - ids_slice, - x_mask, - y_mask, - (z, z_p, m_p, logs_p, m_q, logs_q), - ) - - def infer( - self, - x, - x_lengths, - y_lengths, - attn, - sid=None, - noise_scale=1, - max_len=None, - ): - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, y_lengths, attn=attn) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, attn.shape[1]), 1).to( - x_mask.dtype - ) - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:, :, :max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) - - -class DurationNet(torch.nn.Module): - def __init__(self, vocab_size: int, dim: int, num_layers=2): - super().__init__() - self.embed = torch.nn.Embedding(vocab_size, embedding_dim=dim) - self.rnn = torch.nn.GRU( - dim, - dim, - num_layers=num_layers, - batch_first=True, - bidirectional=True, - dropout=0.2, - ) - self.proj = torch.nn.Linear(2 * dim, 1) - - def forward(self, token, lengths): - x = self.embed(token) - lengths = lengths.long().cpu() - x = pack_padded_sequence( - x, lengths=lengths, batch_first=True, enforce_sorted=False - ) - x, _ = self.rnn(x) - x, _ = pad_packed_sequence(x, batch_first=True, total_length=token.shape[1]) - x = self.proj(x) - x = torch.nn.functional.softplus(x) - return x diff --git a/spaces/ntt123/WaveGRU-Text-To-Speech/sparse_matmul/zlib_wrapper/gzipheader.cc b/spaces/ntt123/WaveGRU-Text-To-Speech/sparse_matmul/zlib_wrapper/gzipheader.cc deleted file mode 100644 index a8d5c3ca26883106f791652f338caa4ae85b6386..0000000000000000000000000000000000000000 --- a/spaces/ntt123/WaveGRU-Text-To-Speech/sparse_matmul/zlib_wrapper/gzipheader.cc +++ /dev/null @@ -1,190 +0,0 @@ -// Copyright 2002 Google LLC -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. 
-// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. -// -// Author: Neal Cardwell -// - -#include "sparse_matmul/zlib_wrapper/gzipheader.h" - -#include - -#include "absl/base/macros.h" -#include "glog/logging.h" -#include "zlib.h" // for Z_DEFAULT_COMPRESSION - -namespace csrblocksparse { - -const uint8_t GZipHeader::magic[] = {0x1f, 0x8b}; - -// ---------------------------------------------------------------------- -// GZipHeader::ReadMore() -// Attempt to parse the beginning of the given buffer as a gzip -// header. If these bytes do not constitute a complete gzip header, -// return INCOMPLETE_HEADER. If these bytes do not constitute a -// *valid* gzip header, return INVALID_HEADER. If we find a -// complete header, return COMPLETE_HEADER and set the pointer -// pointed to by header_end to the first byte beyond the gzip header. -// ---------------------------------------------------------------------- - -GZipHeader::Status GZipHeader::ReadMore(const char* inbuf, int inbuf_len, - const char** header_end) { - CHECK_GE(inbuf_len, 0); - const uint8_t* pos = reinterpret_cast(inbuf); - const uint8_t* const end = pos + inbuf_len; - - while (pos < end) { - switch (state_) { - case IN_HEADER_ID1: - if (*pos != magic[0]) return INVALID_HEADER; - pos++; - state_++; - break; - case IN_HEADER_ID2: - if (*pos != magic[1]) return INVALID_HEADER; - pos++; - state_++; - break; - case IN_HEADER_CM: - if (*pos != Z_DEFLATED) return INVALID_HEADER; - pos++; - state_++; - break; - case IN_HEADER_FLG: - flags_ = - (*pos) & (FLAG_FHCRC | FLAG_FEXTRA | FLAG_FNAME | FLAG_FCOMMENT); - pos++; - state_++; - break; - - case IN_HEADER_MTIME_BYTE_0: - pos++; - state_++; - break; - case IN_HEADER_MTIME_BYTE_1: - pos++; - state_++; - break; - case IN_HEADER_MTIME_BYTE_2: - pos++; - state_++; - break; - case IN_HEADER_MTIME_BYTE_3: - pos++; - state_++; - break; - - case IN_HEADER_XFL: - pos++; - state_++; - break; - - case IN_HEADER_OS: - pos++; - state_++; - break; - - case IN_XLEN_BYTE_0: - if (!(flags_ & FLAG_FEXTRA)) { - state_ = IN_FNAME; - break; - } - // We have a two-byte little-endian length, followed by a - // field of that length. - extra_length_ = *pos; - pos++; - state_++; - break; - case IN_XLEN_BYTE_1: - extra_length_ += *pos << 8; - pos++; - state_++; - // If we have a zero-length FEXTRA, we want to check to notice that - // we're done reading the FEXTRA before we exit this loop... - ABSL_FALLTHROUGH_INTENDED; - - case IN_FEXTRA: { - // Grab the rest of the bytes in the extra field, or as many - // of them as are actually present so far. - const int num_extra_bytes = std::min(extra_length_, (end - pos)); - pos += num_extra_bytes; - extra_length_ -= num_extra_bytes; - if (extra_length_ == 0) { - state_ = IN_FNAME; // advance when we've seen extra_length_ bytes - flags_ &= ~FLAG_FEXTRA; // we're done with the FEXTRA stuff - } - break; - } - - case IN_FNAME: - if (!(flags_ & FLAG_FNAME)) { - state_ = IN_FCOMMENT; - break; - } - // See if we can find the end of the \0-terminated FNAME field. 
- pos = reinterpret_cast(memchr(pos, '\0', (end - pos))); - if (pos != nullptr) { - pos++; // advance past the '\0' - flags_ &= ~FLAG_FNAME; // we're done with the FNAME stuff - state_ = IN_FCOMMENT; - } else { - pos = end; // everything we have so far is part of the FNAME - } - break; - - case IN_FCOMMENT: - if (!(flags_ & FLAG_FCOMMENT)) { - state_ = IN_FHCRC_BYTE_0; - break; - } - // See if we can find the end of the \0-terminated FCOMMENT field. - pos = reinterpret_cast(memchr(pos, '\0', (end - pos))); - if (pos != nullptr) { - pos++; // advance past the '\0' - flags_ &= ~FLAG_FCOMMENT; // we're done with the FCOMMENT stuff - state_ = IN_FHCRC_BYTE_0; - } else { - pos = end; // everything we have so far is part of the FNAME - } - break; - - case IN_FHCRC_BYTE_0: - if (!(flags_ & FLAG_FHCRC)) { - state_ = IN_DONE; - break; - } - pos++; - state_++; - break; - - case IN_FHCRC_BYTE_1: - pos++; - flags_ &= ~FLAG_FHCRC; // we're done with the FHCRC stuff - state_++; - break; - - case IN_DONE: - *header_end = reinterpret_cast(pos); - return COMPLETE_HEADER; - } - } - - if ((state_ > IN_HEADER_OS) && (flags_ == 0)) { - *header_end = reinterpret_cast(pos); - return COMPLETE_HEADER; - } else { - return INCOMPLETE_HEADER; - } -} - -} // namespace csrblocksparse diff --git a/spaces/ntt123/vietnamese-handwriting/index.html b/spaces/ntt123/vietnamese-handwriting/index.html deleted file mode 100644 index 15356daf18b8a5729ebae8e980f4984650564fdc..0000000000000000000000000000000000000000 --- a/spaces/ntt123/vietnamese-handwriting/index.html +++ /dev/null @@ -1,244 +0,0 @@ - - - - - - - Online Handwriting - - - - - - - - - - - - - - - - \ No newline at end of file diff --git a/spaces/openflamingo/OpenFlamingo/open_flamingo/open_flamingo/src/flamingo.py b/spaces/openflamingo/OpenFlamingo/open_flamingo/open_flamingo/src/flamingo.py deleted file mode 100644 index 4e6adcc6e85555d8e015beecf9cef91c8474abef..0000000000000000000000000000000000000000 --- a/spaces/openflamingo/OpenFlamingo/open_flamingo/open_flamingo/src/flamingo.py +++ /dev/null @@ -1,356 +0,0 @@ -import torch -from einops import rearrange -from torch import nn -from .helpers import PerceiverResampler -from torch.distributed.fsdp.wrap import ( - enable_wrap, - wrap, -) -from transformers.modeling_outputs import CausalLMOutputWithPast -from torch.distributed.fsdp import ( - FullyShardedDataParallel as FSDP, -) - -from .utils import apply_with_stopping_condition - - -class Flamingo(nn.Module): - def __init__( - self, - vision_encoder: nn.Module, - lang_encoder: nn.Module, - eoc_token_id: int, - media_token_id: int, - vis_dim: int, - cross_attn_every_n_layers: int = 1, - gradient_checkpointing: bool = False, - ): - """ - Args: - vision_encoder (nn.Module): HF CLIPModel - lang_encoder (nn.Module): HF causal language model - eoc_token_id (int): Token id for <|endofchunk|> - media_token_id (int): Token id for - vis_dim (int): Dimension of the visual features. - Visual features are projected to match this shape along the last dimension. - cross_attn_every_n_layers (int, optional): How often to apply cross attention after transformer layer. Defaults to 1. 
- """ - super().__init__() - self.eoc_token_id = eoc_token_id - self.media_token_id = media_token_id - self.vis_dim = vis_dim - if hasattr(lang_encoder.config, "d_model"): - self.lang_dim = lang_encoder.config.d_model # mpt uses d_model - else: - self.lang_dim = lang_encoder.config.hidden_size - - self.vision_encoder = vision_encoder.visual - self.perceiver = PerceiverResampler(dim=self.vis_dim) - self.lang_encoder = lang_encoder - self.lang_encoder.init_flamingo( - media_token_id=media_token_id, - lang_hidden_size=self.lang_dim, - vis_hidden_size=self.vis_dim, - cross_attn_every_n_layers=cross_attn_every_n_layers, - gradient_checkpointing=gradient_checkpointing, - ) - self._use_gradient_checkpointing = gradient_checkpointing - self.perceiver._use_gradient_checkpointing = gradient_checkpointing - - def forward( - self, - vision_x: torch.Tensor, - lang_x: torch.Tensor, - attention_mask: torch.Tensor = None, - labels: torch.Tensor = None, - clear_conditioned_layers: bool = True, - past_key_values=None, - use_cache: bool = False, - ): - """ - Forward pass of Flamingo. - - Args: - vision_x (torch.Tensor): Vision input - shape (B, T_img, F, C, H, W) with F=1 - lang_x (torch.Tensor): Language input ids - shape (B, T_txt) - attention_mask (torch.Tensor, optional): Attention mask. Defaults to None. - labels (torch.Tensor, optional): Labels. Defaults to None. - clear_conditioned_layers: if True, clear the conditioned layers - once the foward pass is completed. Set this to false if the - same set of images will be reused in another subsequent - forward pass. - past_key_values: pre-computed values to pass to language model. - See past_key_values documentation in Hugging Face - CausalLM models. - use_cache: whether to use cached key values. See use_cache - documentation in Hugging Face CausalLM models. - """ - assert ( - self.lang_encoder.initialized_flamingo - ), "Flamingo layers are not initialized. Please call `init_flamingo` first." - - assert ( - self.lang_encoder._use_cached_vision_x or vision_x is not None - ), "Must provide either vision_x or have precached media using cache_media()." - - if self.lang_encoder._use_cached_vision_x: - # Case: use cached; vision_x should be cached and other - # vision-related inputs should not be provided. - assert ( - vision_x is None - ), "Expect vision_x to be None when media has been cached using cache_media(). Try uncache_media() first." - assert self.lang_encoder.is_conditioned() - - else: - # Case: do not use caching (i.e. this is a standard forward pass); - self._encode_vision_x(vision_x=vision_x) - self._condition_media_locations(input_ids=lang_x) - - output = self.lang_encoder( - input_ids=lang_x, - attention_mask=attention_mask, - labels=labels, - past_key_values=past_key_values, - use_cache=use_cache, - ) - - if clear_conditioned_layers: - self.lang_encoder.clear_conditioned_layers() - - return output - - def generate( - self, - vision_x: torch.Tensor, - lang_x: torch.Tensor, - attention_mask: torch.Tensor = None, - num_beams=1, - min_new_tokens=None, - max_new_tokens=None, - temperature=1.0, - top_k=0, - top_p=1.0, - no_repeat_ngram_size=0, - prefix_allowed_tokens_fn=None, - length_penalty=1.0, - num_return_sequences=1, - do_sample=False, - early_stopping=False, - ): - """ - Generate text conditioned on vision and language inputs. 
- - Args: - vision_x (torch.Tensor): Vision input - shape (B, T_img, F, C, H, W) - images in the same chunk are collated along T_img, and frames are collated along F - currently only F=1 is supported (single-frame videos) - lang_x (torch.Tensor): Language input - shape (B, T_txt) - max_length (int, optional): Maximum length of the output. Defaults to None. - attention_mask (torch.Tensor, optional): Attention mask. Defaults to None. - num_beams (int, optional): Number of beams. Defaults to 1. - max_new_tokens (int, optional): Maximum new tokens. Defaults to None. - temperature (float, optional): Temperature. Defaults to 1.0. - top_k (int, optional): Top k. Defaults to 0. - top_p (float, optional): Top p. Defaults to 1.0. - no_repeat_ngram_size (int, optional): No repeat ngram size. Defaults to 0. - length_penalty (float, optional): Length penalty. Defaults to 1.0. - num_return_sequences (int, optional): Number of return sequences. Defaults to 1. - do_sample (bool, optional): Do sample. Defaults to False. - early_stopping (bool, optional): Early stopping. Defaults to False. - Returns: - torch.Tensor: lang_x with generated tokens appended to it - """ - if num_beams > 1: - vision_x = vision_x.repeat_interleave(num_beams, dim=0) - - self.lang_encoder._use_cached_vision_x = True - self._encode_vision_x(vision_x=vision_x) - - output = self.lang_encoder.generate( - input_ids=lang_x, - attention_mask=attention_mask, - eos_token_id=self.eoc_token_id, - num_beams=num_beams, - min_new_tokens=min_new_tokens, - max_new_tokens=max_new_tokens, - temperature=temperature, - top_k=top_k, - top_p=top_p, - prefix_allowed_tokens_fn=prefix_allowed_tokens_fn, - no_repeat_ngram_size=no_repeat_ngram_size, - length_penalty=length_penalty, - num_return_sequences=num_return_sequences, - do_sample=do_sample, - early_stopping=early_stopping, - ) - - self.lang_encoder.clear_conditioned_layers() - self.lang_encoder._use_cached_vision_x = False - return output - - def _encode_vision_x(self, vision_x: torch.Tensor): - """ - Compute media tokens from vision input by passing it through vision encoder and conditioning language model. - Args: - vision_x (torch.Tensor): Vision input - shape (B, T_img, F, C, H, W) - Images in the same chunk are collated along T_img, and frames are collated along F - Currently only F=1 is supported (single-frame videos) - - rearrange code based on https://github.com/dhansmair/flamingo-mini - """ - - assert vision_x.ndim == 6, "vision_x should be of shape (b, T_img, F, C, H, W)" - b, T, F = vision_x.shape[:3] - assert F == 1, "Only single frame supported" - - vision_x = rearrange(vision_x, "b T F c h w -> (b T F) c h w") - with torch.no_grad(): - vision_x = self.vision_encoder(vision_x)[1] - vision_x = rearrange(vision_x, "(b T F) v d -> b T F v d", b=b, T=T, F=F) - vision_x = self.perceiver(vision_x) - - for layer in self.lang_encoder._get_decoder_layers(): - layer.condition_vis_x(vision_x) - - def wrap_fsdp(self, wrapper_kwargs, device_id): - """ - Manually wraps submodules for FSDP and move other parameters to device_id. - - Why manually wrap? - - all parameters within the FSDP wrapper must have the same requires_grad. - We have a mix of frozen and unfrozen parameters. 
- - model.vision_encoder.visual needs to be individually wrapped or encode_vision_x errors - See: https://github.com/pytorch/pytorch/issues/82461#issuecomment-1269136344 - - The rough wrapping structure is: - - FlamingoModel - - FSDP(FSDP(vision_encoder)) - - FSDP(FSDP(perceiver)) - - lang_encoder - - FSDP(FSDP(input_embeddings)) - - FlamingoLayers - - FSDP(FSDP(gated_cross_attn_layer)) - - FSDP(FSDP(decoder_layer)) - - FSDP(FSDP(output_embeddings)) - - other parameters - - Known issues: - - Our FSDP strategy is not compatible with tied embeddings. If the LM embeddings are tied, - train with DDP or set the --freeze_lm_embeddings flag to true. - - With FSDP + gradient ckpting, one can increase the batch size with seemingly no upper bound. - Although the training curves look okay, we found that downstream performance dramatically - degrades if the batch size is unreasonably large (e.g., 100 MMC4 batch size for OPT-125M). - - FAQs about our FSDP wrapping strategy: - Why double wrap? - As of torch==2.0.1, FSDP's _post_forward_hook and _post_backward_hook - only free gathered parameters if the module is NOT FSDP root. - - Why unfreeze the decoder_layers? - See https://github.com/pytorch/pytorch/issues/95805 - As of torch==2.0.1, FSDP's _post_backward_hook is only registed if the flat param - requires_grad=True. We need the postback to fire to avoid OOM. - To effectively freeze the decoder layers, we exclude them from the optimizer. - - What is assumed to be frozen v. unfrozen? - We assume that the model is being trained under normal Flamingo settings - with these lines being called in factory.py: - ``` - # Freeze all parameters - model.requires_grad_(False) - assert sum(p.numel() for p in model.parameters() if p.requires_grad) == 0 - - # Unfreeze perceiver, gated_cross_attn_layers, and LM input embeddings - model.perceiver.requires_grad_(True) - model.lang_encoder.gated_cross_attn_layers.requires_grad_(True) - [optional] model.lang_encoder.get_input_embeddings().requires_grad_(True) - ``` - """ - # unfreeze the decoder layers - for block in self.lang_encoder.old_decoder_blocks: - block.requires_grad_(True) - - # wrap in FSDP - with enable_wrap(wrapper_cls=FSDP, **wrapper_kwargs): - self.perceiver = wrap(wrap(self.perceiver)) - self.lang_encoder.old_decoder_blocks = nn.ModuleList( - wrap(wrap(block)) for block in self.lang_encoder.old_decoder_blocks - ) - self.lang_encoder.gated_cross_attn_layers = nn.ModuleList( - wrap(wrap(layer)) if layer is not None else None - for layer in self.lang_encoder.gated_cross_attn_layers - ) - self.lang_encoder.init_flamingo_layers(self._use_gradient_checkpointing) - self.lang_encoder.set_input_embeddings( - wrap(wrap(self.lang_encoder.get_input_embeddings())) - ) - self.lang_encoder.set_output_embeddings( - wrap(wrap(self.lang_encoder.get_output_embeddings())) - ) - self.vision_encoder = wrap(wrap(self.vision_encoder)) # frozen - - # manually move non-FSDP managed parameters to device_id - # these are all in lang_encoder - apply_with_stopping_condition( - module=self.lang_encoder, - apply_fn=lambda m: m.to(device_id), - apply_condition=lambda m: len(list(m.children())) == 0, - stopping_condition=lambda m: isinstance(m, FSDP), - ) - - # exclude the original decoder layers from the optimizer - for block in self.lang_encoder.old_decoder_blocks: - for p in block.parameters(): - p.exclude_from_optimizer = True - - # set up clip_grad_norm_ function - def clip_grad_norm_(max_norm): - self.perceiver.clip_grad_norm_(max_norm) - for layer in 
self.lang_encoder.gated_cross_attn_layers: - if layer is not None: - layer.clip_grad_norm_(max_norm) - self.lang_encoder.get_input_embeddings().clip_grad_norm_(max_norm) - - self.clip_grad_norm_ = clip_grad_norm_ - - def _condition_media_locations(self, input_ids: torch.Tensor): - """ - Compute the media token locations from lang_x and condition the language model on these. - Args: - input_ids (torch.Tensor): Language input - shape (B, T_txt) - """ - media_locations = input_ids == self.media_token_id - - for layer in self.lang_encoder._get_decoder_layers(): - layer.condition_media_locations(media_locations) - - def cache_media(self, input_ids: torch.Tensor, vision_x: torch.Tensor): - """ - Pre-cache a prompt/sequence of images / text for log-likelihood evaluations. - All subsequent calls to forward() will generate attending to the LAST - image in vision_x. - This is not meant to be used to cache things for generate(). - Args: - input_ids (torch.Tensor): Language input - shape (B, T_txt) - vision_x (torch.Tensor): Vision input - shape (B, T_img, F, C, H, W) - Images in the same chunk are collated along T_img, and frames are collated along F - Currently only F=1 is supported (single-frame videos) - """ - self._encode_vision_x(vision_x=vision_x) - self._condition_media_locations(input_ids=input_ids) - self.lang_encoder._use_cached_vision_x = True - - def uncache_media(self): - """ - Clear all conditioning. - """ - self.lang_encoder.clear_conditioned_layers() - self.lang_encoder._use_cached_vision_x = False diff --git a/spaces/oyjp1234/andite-anything-v4.0/app.py b/spaces/oyjp1234/andite-anything-v4.0/app.py deleted file mode 100644 index 47a2051db6dadeea03edf70d62694fd3e5e88ba7..0000000000000000000000000000000000000000 --- a/spaces/oyjp1234/andite-anything-v4.0/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/andite/anything-v4.0").launch() \ No newline at end of file diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/attnprocessor.md b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/attnprocessor.md deleted file mode 100644 index 0b11c1f5bc5d8f1217e8ebb902a5e615a77755d3..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/attnprocessor.md +++ /dev/null @@ -1,45 +0,0 @@ -# Attention Processor - -An attention processor is a class for applying different types of attention mechanisms. 
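A minimal sketch of how a processor from this list can be swapped onto a model's attention layers, assuming a Stable Diffusion checkpoint (the `runwayml/stable-diffusion-v1-5` id is used here only for illustration) and the `set_attn_processor` / `attn_processors` helpers on `UNet2DConditionModel`:

```py
from diffusers import UNet2DConditionModel
from diffusers.models.attention_processor import AttnProcessor2_0

# Load only the UNet of a Stable Diffusion checkpoint (the checkpoint id is an assumption).
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)

# Swap every attention processor for AttnProcessor2_0, which dispatches to
# PyTorch 2.0 scaled-dot-product attention.
unet.set_attn_processor(AttnProcessor2_0())

# attn_processors maps each attention module name to the processor instance it uses.
print({type(p).__name__ for p in unet.attn_processors.values()})
```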
- -## AttnProcessor -[[autodoc]] models.attention_processor.AttnProcessor - -## AttnProcessor2_0 -[[autodoc]] models.attention_processor.AttnProcessor2_0 - -## LoRAAttnProcessor -[[autodoc]] models.attention_processor.LoRAAttnProcessor - -## LoRAAttnProcessor2_0 -[[autodoc]] models.attention_processor.LoRAAttnProcessor2_0 - -## CustomDiffusionAttnProcessor -[[autodoc]] models.attention_processor.CustomDiffusionAttnProcessor - -## CustomDiffusionAttnProcessor2_0 -[[autodoc]] models.attention_processor.CustomDiffusionAttnProcessor2_0 - -## AttnAddedKVProcessor -[[autodoc]] models.attention_processor.AttnAddedKVProcessor - -## AttnAddedKVProcessor2_0 -[[autodoc]] models.attention_processor.AttnAddedKVProcessor2_0 - -## LoRAAttnAddedKVProcessor -[[autodoc]] models.attention_processor.LoRAAttnAddedKVProcessor - -## XFormersAttnProcessor -[[autodoc]] models.attention_processor.XFormersAttnProcessor - -## LoRAXFormersAttnProcessor -[[autodoc]] models.attention_processor.LoRAXFormersAttnProcessor - -## CustomDiffusionXFormersAttnProcessor -[[autodoc]] models.attention_processor.CustomDiffusionXFormersAttnProcessor - -## SlicedAttnProcessor -[[autodoc]] models.attention_processor.SlicedAttnProcessor - -## SlicedAttnAddedKVProcessor -[[autodoc]] models.attention_processor.SlicedAttnAddedKVProcessor diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/ko/using-diffusers/weighted_prompts.md b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/ko/using-diffusers/weighted_prompts.md deleted file mode 100644 index ce08f4949555618dbfe14b94f3964118d0fc6df3..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/ko/using-diffusers/weighted_prompts.md +++ /dev/null @@ -1,115 +0,0 @@ - - -# 프롬프트에 가중치 부여하기 - -[[open-in-colab]] - -텍스트 가이드 기반의 diffusion 모델은 주어진 텍스트 프롬프트를 기반으로 이미지를 생성합니다. -텍스트 프롬프트에는 모델이 생성해야 하는 여러 개념이 포함될 수 있으며 프롬프트의 특정 부분에 가중치를 부여하는 것이 바람직한 경우가 많습니다. - -Diffusion 모델은 문맥화된 텍스트 임베딩으로 diffusion 모델의 cross attention 레이어를 조절함으로써 작동합니다. -([더 많은 정보를 위한 Stable Diffusion Guide](https://huggingface.co/docs/optimum-neuron/main/en/package_reference/modeling#stable-diffusion)를 참고하세요). -따라서 프롬프트의 특정 부분을 강조하는(또는 강조하지 않는) 간단한 방법은 프롬프트의 관련 부분에 해당하는 텍스트 임베딩 벡터의 크기를 늘리거나 줄이는 것입니다. -이것은 "프롬프트 가중치 부여" 라고 하며, 커뮤니티에서 가장 요구하는 기능입니다.([이곳](https://github.com/huggingface/diffusers/issues/2431)의 issue를 보세요 ). - -## Diffusers에서 프롬프트 가중치 부여하는 방법 - -우리는 `diffusers`의 역할이 다른 프로젝트를 가능하게 하는 필수적인 기능을 제공하는 toolbex라고 생각합니다. -[InvokeAI](https://github.com/invoke-ai/InvokeAI) 나 [diffuzers](https://github.com/abhishekkrthakur/diffuzers) 같은 강력한 UI를 구축할 수 있습니다. -프롬프트를 조작하는 방법을 지원하기 위해, `diffusers` 는 -[StableDiffusionPipeline](https://huggingface.co/docs/diffusers/v0.18.2/en/api/pipelines/stable_diffusion/text2img#diffusers.StableDiffusionPipeline)와 같은 -많은 파이프라인에 [prompt_embeds](https://huggingface.co/docs/diffusers/v0.14.0/en/api/pipelines/stable_diffusion/text2img#diffusers.StableDiffusionPipeline.__call__.prompt_embeds) -인수를 노출시켜, "prompt-weighted"/축척된 텍스트 임베딩을 파이프라인에 바로 전달할 수 있게 합니다. - -[Compel 라이브러리](https://github.com/damian0815/compel)는 프롬프트의 일부를 강조하거나 강조하지 않을 수 있는 쉬운 방법을 제공합니다. -임베딩을 직접 준비하는 것 대신 이 방법을 사용하는 것을 강력히 추천합니다. - -간단한 예제를 살펴보겠습니다. 
-다음과 같이 `"공을 갖고 노는 붉은색 고양이"` 이미지를 생성하고 싶습니다: - -```py -from diffusers import StableDiffusionPipeline, UniPCMultistepScheduler - -pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4") -pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) - -prompt = "a red cat playing with a ball" - -generator = torch.Generator(device="cpu").manual_seed(33) - -image = pipe(prompt, generator=generator, num_inference_steps=20).images[0] -image -``` - -생성된 이미지: - -![img](https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/compel/forest_0.png) - -사진에서 알 수 있듯이, "공"은 이미지에 없습니다. 이 부분을 강조해 볼까요! - -먼저 `compel` 라이브러리를 설치해야합니다: - -``` -pip install compel -``` - -그런 다음에는 `Compel` 오브젝트를 생성합니다: - -```py -from compel import Compel - -compel_proc = Compel(tokenizer=pipe.tokenizer, text_encoder=pipe.text_encoder) -``` - -이제 `"++"` 를 사용해서 "공" 을 강조해 봅시다: - -```py -prompt = "a red cat playing with a ball++" -``` - -그리고 이 프롬프트를 파이프라인에 바로 전달하지 않고, `compel_proc` 를 사용하여 처리해야합니다: - -```py -prompt_embeds = compel_proc(prompt) -``` - -파이프라인에 `prompt_embeds` 를 바로 전달할 수 있습니다: - -```py -generator = torch.Generator(device="cpu").manual_seed(33) - -images = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] -image -``` - -이제 "공"이 있는 그림을 출력할 수 있습니다! - -![img](https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/compel/forest_1.png) - -마찬가지로 `--` 접미사를 단어에 사용하여 문장의 일부를 강조하지 않을 수 있습니다. 한번 시도해 보세요! - -즐겨찾는 파이프라인에 `prompt_embeds` 입력이 없는 경우 issue를 새로 만들어주세요. -Diffusers 팀은 최대한 대응하려고 노력합니다. - -Compel 1.1.6 는 textual inversions을 사용하여 단순화하는 유티릴티 클래스를 추가합니다. -`DiffusersTextualInversionManager`를 인스턴스화 한 후 이를 Compel init에 전달합니다: - -``` -textual_inversion_manager = DiffusersTextualInversionManager(pipe) -compel = Compel( - tokenizer=pipe.tokenizer, - text_encoder=pipe.text_encoder, - textual_inversion_manager=textual_inversion_manager) -``` - -더 많은 정보를 얻고 싶다면 [compel](https://github.com/damian0815/compel) 라이브러리 문서를 참고하세요. diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/scripts/convert_dit_to_diffusers.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/scripts/convert_dit_to_diffusers.py deleted file mode 100644 index dc127f69555c260f594e70444b1540faa196e3fb..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/scripts/convert_dit_to_diffusers.py +++ /dev/null @@ -1,162 +0,0 @@ -import argparse -import os - -import torch -from torchvision.datasets.utils import download_url - -from diffusers import AutoencoderKL, DDIMScheduler, DiTPipeline, Transformer2DModel - - -pretrained_models = {512: "DiT-XL-2-512x512.pt", 256: "DiT-XL-2-256x256.pt"} - - -def download_model(model_name): - """ - Downloads a pre-trained DiT model from the web. 
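    The checkpoint is cached under pretrained_models/ and its state dict is loaded onto CPU.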
- """ - local_path = f"pretrained_models/{model_name}" - if not os.path.isfile(local_path): - os.makedirs("pretrained_models", exist_ok=True) - web_path = f"https://dl.fbaipublicfiles.com/DiT/models/{model_name}" - download_url(web_path, "pretrained_models") - model = torch.load(local_path, map_location=lambda storage, loc: storage) - return model - - -def main(args): - state_dict = download_model(pretrained_models[args.image_size]) - - state_dict["pos_embed.proj.weight"] = state_dict["x_embedder.proj.weight"] - state_dict["pos_embed.proj.bias"] = state_dict["x_embedder.proj.bias"] - state_dict.pop("x_embedder.proj.weight") - state_dict.pop("x_embedder.proj.bias") - - for depth in range(28): - state_dict[f"transformer_blocks.{depth}.norm1.emb.timestep_embedder.linear_1.weight"] = state_dict[ - "t_embedder.mlp.0.weight" - ] - state_dict[f"transformer_blocks.{depth}.norm1.emb.timestep_embedder.linear_1.bias"] = state_dict[ - "t_embedder.mlp.0.bias" - ] - state_dict[f"transformer_blocks.{depth}.norm1.emb.timestep_embedder.linear_2.weight"] = state_dict[ - "t_embedder.mlp.2.weight" - ] - state_dict[f"transformer_blocks.{depth}.norm1.emb.timestep_embedder.linear_2.bias"] = state_dict[ - "t_embedder.mlp.2.bias" - ] - state_dict[f"transformer_blocks.{depth}.norm1.emb.class_embedder.embedding_table.weight"] = state_dict[ - "y_embedder.embedding_table.weight" - ] - - state_dict[f"transformer_blocks.{depth}.norm1.linear.weight"] = state_dict[ - f"blocks.{depth}.adaLN_modulation.1.weight" - ] - state_dict[f"transformer_blocks.{depth}.norm1.linear.bias"] = state_dict[ - f"blocks.{depth}.adaLN_modulation.1.bias" - ] - - q, k, v = torch.chunk(state_dict[f"blocks.{depth}.attn.qkv.weight"], 3, dim=0) - q_bias, k_bias, v_bias = torch.chunk(state_dict[f"blocks.{depth}.attn.qkv.bias"], 3, dim=0) - - state_dict[f"transformer_blocks.{depth}.attn1.to_q.weight"] = q - state_dict[f"transformer_blocks.{depth}.attn1.to_q.bias"] = q_bias - state_dict[f"transformer_blocks.{depth}.attn1.to_k.weight"] = k - state_dict[f"transformer_blocks.{depth}.attn1.to_k.bias"] = k_bias - state_dict[f"transformer_blocks.{depth}.attn1.to_v.weight"] = v - state_dict[f"transformer_blocks.{depth}.attn1.to_v.bias"] = v_bias - - state_dict[f"transformer_blocks.{depth}.attn1.to_out.0.weight"] = state_dict[ - f"blocks.{depth}.attn.proj.weight" - ] - state_dict[f"transformer_blocks.{depth}.attn1.to_out.0.bias"] = state_dict[f"blocks.{depth}.attn.proj.bias"] - - state_dict[f"transformer_blocks.{depth}.ff.net.0.proj.weight"] = state_dict[f"blocks.{depth}.mlp.fc1.weight"] - state_dict[f"transformer_blocks.{depth}.ff.net.0.proj.bias"] = state_dict[f"blocks.{depth}.mlp.fc1.bias"] - state_dict[f"transformer_blocks.{depth}.ff.net.2.weight"] = state_dict[f"blocks.{depth}.mlp.fc2.weight"] - state_dict[f"transformer_blocks.{depth}.ff.net.2.bias"] = state_dict[f"blocks.{depth}.mlp.fc2.bias"] - - state_dict.pop(f"blocks.{depth}.attn.qkv.weight") - state_dict.pop(f"blocks.{depth}.attn.qkv.bias") - state_dict.pop(f"blocks.{depth}.attn.proj.weight") - state_dict.pop(f"blocks.{depth}.attn.proj.bias") - state_dict.pop(f"blocks.{depth}.mlp.fc1.weight") - state_dict.pop(f"blocks.{depth}.mlp.fc1.bias") - state_dict.pop(f"blocks.{depth}.mlp.fc2.weight") - state_dict.pop(f"blocks.{depth}.mlp.fc2.bias") - state_dict.pop(f"blocks.{depth}.adaLN_modulation.1.weight") - state_dict.pop(f"blocks.{depth}.adaLN_modulation.1.bias") - - state_dict.pop("t_embedder.mlp.0.weight") - state_dict.pop("t_embedder.mlp.0.bias") - state_dict.pop("t_embedder.mlp.2.weight") - 
state_dict.pop("t_embedder.mlp.2.bias") - state_dict.pop("y_embedder.embedding_table.weight") - - state_dict["proj_out_1.weight"] = state_dict["final_layer.adaLN_modulation.1.weight"] - state_dict["proj_out_1.bias"] = state_dict["final_layer.adaLN_modulation.1.bias"] - state_dict["proj_out_2.weight"] = state_dict["final_layer.linear.weight"] - state_dict["proj_out_2.bias"] = state_dict["final_layer.linear.bias"] - - state_dict.pop("final_layer.linear.weight") - state_dict.pop("final_layer.linear.bias") - state_dict.pop("final_layer.adaLN_modulation.1.weight") - state_dict.pop("final_layer.adaLN_modulation.1.bias") - - # DiT XL/2 - transformer = Transformer2DModel( - sample_size=args.image_size // 8, - num_layers=28, - attention_head_dim=72, - in_channels=4, - out_channels=8, - patch_size=2, - attention_bias=True, - num_attention_heads=16, - activation_fn="gelu-approximate", - num_embeds_ada_norm=1000, - norm_type="ada_norm_zero", - norm_elementwise_affine=False, - ) - transformer.load_state_dict(state_dict, strict=True) - - scheduler = DDIMScheduler( - num_train_timesteps=1000, - beta_schedule="linear", - prediction_type="epsilon", - clip_sample=False, - ) - - vae = AutoencoderKL.from_pretrained(args.vae_model) - - pipeline = DiTPipeline(transformer=transformer, vae=vae, scheduler=scheduler) - - if args.save: - pipeline.save_pretrained(args.checkpoint_path) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - - parser.add_argument( - "--image_size", - default=256, - type=int, - required=False, - help="Image size of pretrained model, either 256 or 512.", - ) - parser.add_argument( - "--vae_model", - default="stabilityai/sd-vae-ft-ema", - type=str, - required=False, - help="Path to pretrained VAE model, either stabilityai/sd-vae-ft-mse or stabilityai/sd-vae-ft-ema.", - ) - parser.add_argument( - "--save", default=True, type=bool, required=False, help="Whether to save the converted pipeline or not." - ) - parser.add_argument( - "--checkpoint_path", default=None, type=str, required=True, help="Path to the output pipeline." - ) - - args = parser.parse_args() - main(args) diff --git a/spaces/patgpt4/MusicGen/audiocraft/modules/conditioners.py b/spaces/patgpt4/MusicGen/audiocraft/modules/conditioners.py deleted file mode 100644 index 82792316024b88d4c5c38b0a28f443627771d509..0000000000000000000000000000000000000000 --- a/spaces/patgpt4/MusicGen/audiocraft/modules/conditioners.py +++ /dev/null @@ -1,990 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
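
# This module implements the conditioning stack: text tokenizers, LUT and T5 text
# conditioners, waveform (chroma) conditioners, condition dropout, and the fuser
# that merges conditions into the transformer input.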
- -from collections import defaultdict -from copy import deepcopy -from dataclasses import dataclass, field -from itertools import chain -import logging -import math -import random -import re -import typing as tp -import warnings - -from einops import rearrange -from num2words import num2words -import spacy -from transformers import T5EncoderModel, T5Tokenizer # type: ignore -import torchaudio -import torch -from torch import nn -from torch import Tensor -import torch.nn.functional as F -from torch.nn.utils.rnn import pad_sequence - -from .streaming import StreamingModule -from .transformer import create_sin_embedding -from ..data.audio_dataset import SegmentInfo -from ..utils.autocast import TorchAutocast -from ..utils.utils import hash_trick, length_to_mask, collate - - -logger = logging.getLogger(__name__) -TextCondition = tp.Optional[str] # a text condition can be a string or None (if doesn't exist) -ConditionType = tp.Tuple[Tensor, Tensor] # condition, mask - - -class WavCondition(tp.NamedTuple): - wav: Tensor - length: Tensor - path: tp.List[tp.Optional[str]] = [] - - -def nullify_condition(condition: ConditionType, dim: int = 1): - """This function transforms an input condition to a null condition. - The way it is done by converting it to a single zero vector similarly - to how it is done inside WhiteSpaceTokenizer and NoopTokenizer. - - Args: - condition (ConditionType): a tuple of condition and mask (tp.Tuple[Tensor, Tensor]) - dim (int): the dimension that will be truncated (should be the time dimension) - WARNING!: dim should not be the batch dimension! - Returns: - ConditionType: a tuple of null condition and mask - """ - assert dim != 0, "dim cannot be the batch dimension!" - assert type(condition) == tuple and \ - type(condition[0]) == Tensor and \ - type(condition[1]) == Tensor, "'nullify_condition' got an unexpected input type!" - cond, mask = condition - B = cond.shape[0] - last_dim = cond.dim() - 1 - out = cond.transpose(dim, last_dim) - out = 0. * out[..., :1] - out = out.transpose(dim, last_dim) - mask = torch.zeros((B, 1), device=out.device).int() - assert cond.dim() == out.dim() - return out, mask - - -def nullify_wav(wav: Tensor) -> WavCondition: - """Create a nullified WavCondition from a wav tensor with appropriate shape. - - Args: - wav (Tensor): tensor of shape [B, T] - Returns: - WavCondition: wav condition with nullified wav. - """ - null_wav, _ = nullify_condition((wav, torch.zeros_like(wav)), dim=wav.dim() - 1) - return WavCondition( - wav=null_wav, - length=torch.tensor([0] * wav.shape[0], device=wav.device), - path=['null_wav'] * wav.shape[0] - ) - - -@dataclass -class ConditioningAttributes: - text: tp.Dict[str, tp.Optional[str]] = field(default_factory=dict) - wav: tp.Dict[str, WavCondition] = field(default_factory=dict) - - def __getitem__(self, item): - return getattr(self, item) - - @property - def text_attributes(self): - return self.text.keys() - - @property - def wav_attributes(self): - return self.wav.keys() - - @property - def attributes(self): - return {"text": self.text_attributes, "wav": self.wav_attributes} - - def to_flat_dict(self): - return { - **{f"text.{k}": v for k, v in self.text.items()}, - **{f"wav.{k}": v for k, v in self.wav.items()}, - } - - @classmethod - def from_flat_dict(cls, x): - out = cls() - for k, v in x.items(): - kind, att = k.split(".") - out[kind][att] = v - return out - - -class SegmentWithAttributes(SegmentInfo): - """Base class for all dataclasses that are used for conditioning. 
- All child classes should implement `to_condition_attributes` that converts - the existing attributes to a dataclass of type ConditioningAttributes. - """ - def to_condition_attributes(self) -> ConditioningAttributes: - raise NotImplementedError() - - -class Tokenizer: - """Base class for all tokenizers - (in case we want to introduce more advances tokenizers in the future). - """ - def __call__(self, texts: tp.List[tp.Optional[str]]) -> tp.Tuple[Tensor, Tensor]: - raise NotImplementedError() - - -class WhiteSpaceTokenizer(Tokenizer): - """This tokenizer should be used for natural language descriptions. - For example: - ["he didn't, know he's going home.", 'shorter sentence'] => - [[78, 62, 31, 4, 78, 25, 19, 34], - [59, 77, 0, 0, 0, 0, 0, 0]] - """ - PUNCTUATIONS = "?:!.,;" - - def __init__(self, n_bins: int, pad_idx: int = 0, language: str = "en_core_web_sm", - lemma: bool = True, stopwords: bool = True) -> None: - self.n_bins = n_bins - self.pad_idx = pad_idx - self.lemma = lemma - self.stopwords = stopwords - try: - self.nlp = spacy.load(language) - except IOError: - spacy.cli.download(language) # type: ignore - self.nlp = spacy.load(language) - - @tp.no_type_check - def __call__( - self, - texts: tp.List[tp.Optional[str]], - return_text: bool = False - ) -> tp.Tuple[Tensor, Tensor]: - """Take a list of strings and convert them to a tensor of indices. - - Args: - texts (tp.List[str]): List of strings. - return_text (bool, optional): Whether to return text as additional tuple item. Defaults to False. - Returns: - tp.Tuple[Tensor, Tensor]: - - Indices of words in the LUT. - - And a mask indicating where the padding tokens are - """ - output, lengths = [], [] - texts = deepcopy(texts) - for i, text in enumerate(texts): - # if current sample doesn't have a certain attribute, replace with pad token - if text is None: - output.append(Tensor([self.pad_idx])) - lengths.append(0) - continue - - # convert numbers to words - text = re.sub(r"(\d+)", lambda x: num2words(int(x.group(0))), text) # type: ignore - # normalize text - text = self.nlp(text) # type: ignore - # remove stopwords - if self.stopwords: - text = [w for w in text if not w.is_stop] # type: ignore - # remove punctuations - text = [w for w in text if w.text not in self.PUNCTUATIONS] # type: ignore - # lemmatize if needed - text = [getattr(t, "lemma_" if self.lemma else "text") for t in text] # type: ignore - - texts[i] = " ".join(text) - lengths.append(len(text)) - # convert to tensor - tokens = Tensor([hash_trick(w, self.n_bins) for w in text]) - output.append(tokens) - - mask = length_to_mask(torch.IntTensor(lengths)).int() - padded_output = pad_sequence(output, padding_value=self.pad_idx).int().t() - if return_text: - return padded_output, mask, texts # type: ignore - return padded_output, mask - - -class NoopTokenizer(Tokenizer): - """This tokenizer should be used for global conditioners such as: artist, genre, key, etc. - The difference between this and WhiteSpaceTokenizer is that NoopTokenizer does not split - strings, so "Jeff Buckley" will get it's own index. Whereas WhiteSpaceTokenizer will - split it to ["Jeff", "Buckley"] and return an index per word. 
- - For example: - ["Queen", "ABBA", "Jeff Buckley"] => [43, 55, 101] - ["Metal", "Rock", "Classical"] => [0, 223, 51] - """ - def __init__(self, n_bins: int, pad_idx: int = 0): - self.n_bins = n_bins - self.pad_idx = pad_idx - - def __call__(self, texts: tp.List[tp.Optional[str]]) -> tp.Tuple[Tensor, Tensor]: - output, lengths = [], [] - for text in texts: - # if current sample doesn't have a certain attribute, replace with pad token - if text is None: - output.append(self.pad_idx) - lengths.append(0) - else: - output.append(hash_trick(text, self.n_bins)) - lengths.append(1) - - tokens = torch.LongTensor(output).unsqueeze(1) - mask = length_to_mask(torch.IntTensor(lengths)).int() - return tokens, mask - - -class BaseConditioner(nn.Module): - """Base model for all conditioner modules. We allow the output dim to be different - than the hidden dim for two reasons: 1) keep our LUTs small when the vocab is large; - 2) make all condition dims consistent. - - Args: - dim (int): Hidden dim of the model (text-encoder/LUT). - output_dim (int): Output dim of the conditioner. - """ - def __init__(self, dim, output_dim): - super().__init__() - self.dim = dim - self.output_dim = output_dim - self.output_proj = nn.Linear(dim, output_dim) - - def tokenize(self, *args, **kwargs) -> tp.Any: - """Should be any part of the processing that will lead to a synchronization - point, e.g. BPE tokenization with transfer to the GPU. - - The returned value will be saved and return later when calling forward(). - """ - raise NotImplementedError() - - def forward(self, inputs: tp.Any) -> ConditionType: - """Gets input that should be used as conditioning (e.g, genre, description or a waveform). - Outputs a ConditionType, after the input data was embedded as a dense vector. - - Returns: - ConditionType: - - A tensor of size [B, T, D] where B is the batch size, T is the length of the - output embedding and D is the dimension of the embedding. - - And a mask indicating where the padding tokens. - """ - raise NotImplementedError() - - -class TextConditioner(BaseConditioner): - ... - - -class LUTConditioner(TextConditioner): - """Lookup table TextConditioner. - - Args: - n_bins (int): Number of bins. - dim (int): Hidden dim of the model (text-encoder/LUT). - output_dim (int): Output dim of the conditioner. - tokenizer (str): Name of the tokenizer. - pad_idx (int, optional): Index for padding token. Defaults to 0. - """ - def __init__(self, n_bins: int, dim: int, output_dim: int, tokenizer: str, pad_idx: int = 0): - super().__init__(dim, output_dim) - self.embed = nn.Embedding(n_bins, dim) - self.tokenizer: Tokenizer - if tokenizer == "whitespace": - self.tokenizer = WhiteSpaceTokenizer(n_bins, pad_idx=pad_idx) - elif tokenizer == "noop": - self.tokenizer = NoopTokenizer(n_bins, pad_idx=pad_idx) - else: - raise ValueError(f"unrecognized tokenizer `{tokenizer}`.") - - def tokenize(self, x: tp.List[tp.Optional[str]]) -> tp.Tuple[torch.Tensor, torch.Tensor]: - device = self.embed.weight.device - tokens, mask = self.tokenizer(x) - tokens, mask = tokens.to(device), mask.to(device) - return tokens, mask - - def forward(self, inputs: tp.Tuple[torch.Tensor, torch.Tensor]) -> ConditionType: - tokens, mask = inputs - embeds = self.embed(tokens) - embeds = self.output_proj(embeds) - embeds = (embeds * mask.unsqueeze(-1)) - return embeds, mask - - -class T5Conditioner(TextConditioner): - """T5-based TextConditioner. - - Args: - name (str): Name of the T5 model. - output_dim (int): Output dim of the conditioner. 
- finetune (bool): Whether to fine-tune T5 at train time. - device (str): Device for T5 Conditioner. - autocast_dtype (tp.Optional[str], optional): Autocast dtype. - word_dropout (float, optional): Word dropout probability. - normalize_text (bool, optional): Whether to apply text normalization. - """ - MODELS = ["t5-small", "t5-base", "t5-large", "t5-3b", "t5-11b", - "google/flan-t5-small", "google/flan-t5-base", "google/flan-t5-large", - "google/flan-t5-xl", "google/flan-t5-xxl"] - MODELS_DIMS = { - "t5-small": 512, - "t5-base": 768, - "t5-large": 1024, - "t5-3b": 1024, - "t5-11b": 1024, - "google/flan-t5-small": 512, - "google/flan-t5-base": 768, - "google/flan-t5-large": 1024, - "google/flan-t5-3b": 1024, - "google/flan-t5-11b": 1024, - } - - def __init__(self, name: str, output_dim: int, finetune: bool, device: str, - autocast_dtype: tp.Optional[str] = 'float32', word_dropout: float = 0., - normalize_text: bool = False): - assert name in self.MODELS, f"unrecognized t5 model name (should in {self.MODELS})" - super().__init__(self.MODELS_DIMS[name], output_dim) - self.device = device - self.name = name - self.finetune = finetune - self.word_dropout = word_dropout - - if autocast_dtype is None or self.device == 'cpu': - self.autocast = TorchAutocast(enabled=False) - if self.device != 'cpu': - logger.warning("T5 has no autocast, this might lead to NaN") - else: - dtype = getattr(torch, autocast_dtype) - assert isinstance(dtype, torch.dtype) - logger.info(f"T5 will be evaluated with autocast as {autocast_dtype}") - self.autocast = TorchAutocast(enabled=True, device_type=self.device, dtype=dtype) - # Let's disable logging temporarily because T5 will vomit some errors otherwise. - # thanks https://gist.github.com/simon-weber/7853144 - previous_level = logging.root.manager.disable - logging.disable(logging.ERROR) - with warnings.catch_warnings(): - warnings.simplefilter("ignore") - try: - self.t5_tokenizer = T5Tokenizer.from_pretrained(name) - t5 = T5EncoderModel.from_pretrained(name).train(mode=finetune) - finally: - logging.disable(previous_level) - if finetune: - self.t5 = t5 - else: - # this makes sure that the t5 models is not part - # of the saved checkpoint - self.__dict__["t5"] = t5.to(device) - - self.normalize_text = normalize_text - if normalize_text: - self.text_normalizer = WhiteSpaceTokenizer(1, lemma=True, stopwords=True) - - def tokenize(self, x: tp.List[tp.Optional[str]]) -> tp.Dict[str, torch.Tensor]: - # if current sample doesn't have a certain attribute, replace with empty string - entries: tp.List[str] = [xi if xi is not None else "" for xi in x] - if self.normalize_text: - _, _, entries = self.text_normalizer(entries, return_text=True) - if self.word_dropout > 0. 
and self.training: - new_entries = [] - for entry in entries: - words = [word for word in entry.split(" ") if random.random() >= self.word_dropout] - new_entries.append(" ".join(words)) - entries = new_entries - - empty_idx = torch.LongTensor([i for i, xi in enumerate(entries) if xi == ""]) - - inputs = self.t5_tokenizer(entries, return_tensors="pt", padding=True).to(self.device) - mask = inputs["attention_mask"] - mask[empty_idx, :] = 0 # zero-out index where the input is non-existant - return inputs - - def forward(self, inputs: tp.Dict[str, torch.Tensor]) -> ConditionType: - mask = inputs["attention_mask"] - with torch.set_grad_enabled(self.finetune), self.autocast: - embeds = self.t5(**inputs).last_hidden_state - embeds = self.output_proj(embeds.to(self.output_proj.weight)) - embeds = (embeds * mask.unsqueeze(-1)) - return embeds, mask - - -class WaveformConditioner(BaseConditioner): - """Base class for all conditioners that take a waveform as input. - Classes that inherit must implement `_get_wav_embedding` that outputs - a continuous tensor, and `_downsampling_factor` that returns the down-sampling - factor of the embedding model. - - Args: - dim (int): The internal representation dimension. - output_dim (int): Output dimension. - device (tp.Union[torch.device, str]): Device. - """ - def __init__(self, dim: int, output_dim: int, device: tp.Union[torch.device, str]): - super().__init__(dim, output_dim) - self.device = device - - def tokenize(self, wav_length: WavCondition) -> WavCondition: - wav, length, path = wav_length - assert length is not None - return WavCondition(wav.to(self.device), length.to(self.device), path) - - def _get_wav_embedding(self, wav: Tensor) -> Tensor: - """Gets as input a wav and returns a dense vector of conditions.""" - raise NotImplementedError() - - def _downsampling_factor(self): - """Returns the downsampling factor of the embedding model.""" - raise NotImplementedError() - - def forward(self, inputs: WavCondition) -> ConditionType: - """ - Args: - input (WavCondition): Tuple of (waveform, lengths). - Returns: - ConditionType: Dense vector representing the conditioning along with its' mask. - """ - wav, lengths, path = inputs - with torch.no_grad(): - embeds = self._get_wav_embedding(wav) - embeds = embeds.to(self.output_proj.weight) - embeds = self.output_proj(embeds) - - if lengths is not None: - lengths = lengths / self._downsampling_factor() - mask = length_to_mask(lengths, max_len=embeds.shape[1]).int() # type: ignore - else: - mask = torch.ones_like(embeds) - embeds = (embeds * mask.unsqueeze(2).to(self.device)) - - return embeds, mask - - -class ChromaStemConditioner(WaveformConditioner): - """Chroma conditioner that uses DEMUCS to first filter out drums and bass. The is followed by - the insight the drums and bass often dominate the chroma, leading to the chroma not containing the - information about melody. - - Args: - output_dim (int): Output dimension for the conditioner. - sample_rate (int): Sample rate for the chroma extractor. - n_chroma (int): Number of chroma for the chroma extractor. - radix2_exp (int): Radix2 exponent for the chroma extractor. - duration (float): Duration used during training. This is later used for correct padding - in case we are using chroma as prefix. - match_len_on_eval (bool, optional): If True then all chromas are padded to the training - duration. Defaults to False. 
- eval_wavs (str, optional): Path to a json egg with waveform, this waveforms are used as - conditions during eval (for cases where we don't want to leak test conditions like MusicCaps). - Defaults to None. - n_eval_wavs (int, optional): Limits the number of waveforms used for conditioning. Defaults to 0. - device (tp.Union[torch.device, str], optional): Device for the conditioner. - **kwargs: Additional parameters for the chroma extractor. - """ - def __init__(self, output_dim: int, sample_rate: int, n_chroma: int, radix2_exp: int, - duration: float, match_len_on_eval: bool = True, eval_wavs: tp.Optional[str] = None, - n_eval_wavs: int = 0, device: tp.Union[torch.device, str] = "cpu", **kwargs): - from demucs import pretrained - super().__init__(dim=n_chroma, output_dim=output_dim, device=device) - self.autocast = TorchAutocast(enabled=device != "cpu", device_type=self.device, dtype=torch.float32) - self.sample_rate = sample_rate - self.match_len_on_eval = match_len_on_eval - self.duration = duration - self.__dict__["demucs"] = pretrained.get_model('htdemucs').to(device) - self.stem2idx = {'drums': 0, 'bass': 1, 'other': 2, 'vocal': 3} - self.stem_idx = torch.LongTensor([self.stem2idx['vocal'], self.stem2idx['other']]).to(device) - self.chroma = ChromaExtractor(sample_rate=sample_rate, n_chroma=n_chroma, radix2_exp=radix2_exp, - device=device, **kwargs) - self.chroma_len = self._get_chroma_len() - - def _downsampling_factor(self): - return self.chroma.winhop - - def _get_chroma_len(self): - """Get length of chroma during training""" - dummy_wav = torch.zeros((1, self.sample_rate * self.duration), device=self.device) - dummy_chr = self.chroma(dummy_wav) - return dummy_chr.shape[1] - - @torch.no_grad() - def _get_filtered_wav(self, wav): - from demucs.apply import apply_model - from demucs.audio import convert_audio - with self.autocast: - wav = convert_audio(wav, self.sample_rate, self.demucs.samplerate, self.demucs.audio_channels) - stems = apply_model(self.demucs, wav, device=self.device) - stems = stems[:, self.stem_idx] # extract stem - stems = stems.sum(1) # merge extracted stems - stems = stems.mean(1, keepdim=True) # mono - stems = convert_audio(stems, self.demucs.samplerate, self.sample_rate, 1) - return stems - - @torch.no_grad() - def _get_wav_embedding(self, wav): - # avoid 0-size tensors when we are working with null conds - if wav.shape[-1] == 1: - return self.chroma(wav) - stems = self._get_filtered_wav(wav) - chroma = self.chroma(stems) - - if self.match_len_on_eval: - b, t, c = chroma.shape - if t > self.chroma_len: - chroma = chroma[:, :self.chroma_len] - logger.debug(f'chroma was truncated! ({t} -> {chroma.shape[1]})') - elif t < self.chroma_len: - # chroma = F.pad(chroma, (0, 0, 0, self.chroma_len - t)) - n_repeat = int(math.ceil(self.chroma_len / t)) - chroma = chroma.repeat(1, n_repeat, 1) - chroma = chroma[:, :self.chroma_len] - logger.debug(f'chroma was zero-padded! ({t} -> {chroma.shape[1]})') - return chroma - - -class ChromaExtractor(nn.Module): - """Chroma extraction class, handles chroma extraction and quantization. - - Args: - sample_rate (int): Sample rate. - n_chroma (int): Number of chroma to consider. - radix2_exp (int): Radix2 exponent. - nfft (tp.Optional[int], optional): Number of FFT. - winlen (tp.Optional[int], optional): Window length. - winhop (tp.Optional[int], optional): Window hop size. - argmax (bool, optional): Whether to use argmax. Defaults to False. - norm (float, optional): Norm for chroma normalization. Defaults to inf. 
- device (tp.Union[torch.device, str], optional): Device to use. Defaults to cpu. - """ - def __init__(self, sample_rate: int, n_chroma: int = 12, radix2_exp: int = 12, - nfft: tp.Optional[int] = None, winlen: tp.Optional[int] = None, winhop: tp.Optional[int] = None, - argmax: bool = False, norm: float = torch.inf, device: tp.Union[torch.device, str] = "cpu"): - super().__init__() - from librosa import filters - self.device = device - self.autocast = TorchAutocast(enabled=device != "cpu", device_type=self.device, dtype=torch.float32) - self.winlen = winlen or 2 ** radix2_exp - self.nfft = nfft or self.winlen - self.winhop = winhop or (self.winlen // 4) - self.sr = sample_rate - self.n_chroma = n_chroma - self.norm = norm - self.argmax = argmax - self.window = torch.hann_window(self.winlen).to(device) - self.fbanks = torch.from_numpy(filters.chroma(sr=sample_rate, n_fft=self.nfft, tuning=0, - n_chroma=self.n_chroma)).to(device) - self.spec = torchaudio.transforms.Spectrogram(n_fft=self.nfft, win_length=self.winlen, - hop_length=self.winhop, power=2, center=True, - pad=0, normalized=True).to(device) - - def forward(self, wav): - with self.autocast: - T = wav.shape[-1] - # in case we are getting a wav that was dropped out (nullified) - # make sure wav length is no less that nfft - if T < self.nfft: - pad = self.nfft - T - r = 0 if pad % 2 == 0 else 1 - wav = F.pad(wav, (pad // 2, pad // 2 + r), 'constant', 0) - assert wav.shape[-1] == self.nfft, f'expected len {self.nfft} but got {wav.shape[-1]}' - spec = self.spec(wav).squeeze(1) - raw_chroma = torch.einsum("cf,...ft->...ct", self.fbanks, spec) - norm_chroma = torch.nn.functional.normalize(raw_chroma, p=self.norm, dim=-2, eps=1e-6) - norm_chroma = rearrange(norm_chroma, "b d t -> b t d") - - if self.argmax: - idx = norm_chroma.argmax(-1, keepdims=True) - norm_chroma[:] = 0 - norm_chroma.scatter_(dim=-1, index=idx, value=1) - - return norm_chroma - - -def dropout_condition(sample: ConditioningAttributes, condition_type: str, condition: str): - """Utility function for nullifying an attribute inside an ConditioningAttributes object. - If the condition is of type "wav", then nullify it using "nullify_condition". - If the condition is of any other type, set its' value to None. - Works in-place. - """ - if condition_type not in ["text", "wav"]: - raise ValueError( - "dropout_condition got an unexpected condition type!" - f" expected 'wav' or 'text' but got '{condition_type}'" - ) - - if condition not in getattr(sample, condition_type): - raise ValueError( - "dropout_condition received an unexpected condition!" - f" expected wav={sample.wav.keys()} and text={sample.text.keys()}" - f"but got '{condition}' of type '{condition_type}'!" - ) - - if condition_type == "wav": - wav, length, path = sample.wav[condition] - sample.wav[condition] = nullify_wav(wav) - else: - sample.text[condition] = None - - return sample - - -class DropoutModule(nn.Module): - """Base class for all dropout modules.""" - def __init__(self, seed: int = 1234): - super().__init__() - self.rng = torch.Generator() - self.rng.manual_seed(seed) - - -class AttributeDropout(DropoutModule): - """Applies dropout with a given probability per attribute. This is different from the behavior of - ClassifierFreeGuidanceDropout as this allows for attributes to be dropped out separately. For example, - "artist" can be dropped while "genre" remains. This is in contrast to ClassifierFreeGuidanceDropout - where if "artist" is dropped "genre" must also be dropped. 
- - Args: - p (tp.Dict[str, float]): A dict mapping between attributes and dropout probability. For example: - ... - "genre": 0.1, - "artist": 0.5, - "wav": 0.25, - ... - active_on_eval (bool, optional): Whether the dropout is active at eval. Default to False. - seed (int, optional): Random seed. - """ - def __init__(self, p: tp.Dict[str, tp.Dict[str, float]], active_on_eval: bool = False, seed: int = 1234): - super().__init__(seed=seed) - self.active_on_eval = active_on_eval - # construct dict that return the values from p otherwise 0 - self.p = {} - for condition_type, probs in p.items(): - self.p[condition_type] = defaultdict(lambda: 0, probs) - - def forward(self, samples: tp.List[ConditioningAttributes]) -> tp.List[ConditioningAttributes]: - """ - Args: - samples (tp.List[ConditioningAttributes]): List of conditions. - Returns: - tp.List[ConditioningAttributes]: List of conditions after certain attributes were set to None. - """ - if not self.training and not self.active_on_eval: - return samples - - samples = deepcopy(samples) - - for condition_type, ps in self.p.items(): # for condition types [text, wav] - for condition, p in ps.items(): # for attributes of each type (e.g., [artist, genre]) - if torch.rand(1, generator=self.rng).item() < p: - for sample in samples: - dropout_condition(sample, condition_type, condition) - - return samples - - def __repr__(self): - return f"AttributeDropout({dict(self.p)})" - - -class ClassifierFreeGuidanceDropout(DropoutModule): - """Applies Classifier Free Guidance dropout, meaning all attributes - are dropped with the same probability. - - Args: - p (float): Probability to apply condition dropout during training. - seed (int): Random seed. - """ - def __init__(self, p: float, seed: int = 1234): - super().__init__(seed=seed) - self.p = p - - def forward(self, samples: tp.List[ConditioningAttributes]) -> tp.List[ConditioningAttributes]: - """ - Args: - samples (tp.List[ConditioningAttributes]): List of conditions. - Returns: - tp.List[ConditioningAttributes]: List of conditions after all attributes were set to None. - """ - if not self.training: - return samples - - # decide on which attributes to drop in a batched fashion - drop = torch.rand(1, generator=self.rng).item() < self.p - if not drop: - return samples - - # nullify conditions of all attributes - samples = deepcopy(samples) - - for condition_type in ["wav", "text"]: - for sample in samples: - for condition in sample.attributes[condition_type]: - dropout_condition(sample, condition_type, condition) - - return samples - - def __repr__(self): - return f"ClassifierFreeGuidanceDropout(p={self.p})" - - -class ConditioningProvider(nn.Module): - """Main class to provide conditions given all the supported conditioners. - - Args: - conditioners (dict): Dictionary of conditioners. - merge_text_conditions_p (float, optional): Probability to merge all text sources - into a single text condition. Defaults to 0. - drop_desc_p (float, optional): Probability to drop the original description - when merging all text sources into a single text condition. Defaults to 0. - device (tp.Union[torch.device, str], optional): Device for conditioners and output condition types. 
- """ - def __init__( - self, - conditioners: tp.Dict[str, BaseConditioner], - merge_text_conditions_p: float = 0, - drop_desc_p: float = 0, - device: tp.Union[torch.device, str] = "cpu", - ): - super().__init__() - self.device = device - self.merge_text_conditions_p = merge_text_conditions_p - self.drop_desc_p = drop_desc_p - self.conditioners = nn.ModuleDict(conditioners) - - @property - def text_conditions(self): - return [k for k, v in self.conditioners.items() if isinstance(v, TextConditioner)] - - @property - def wav_conditions(self): - return [k for k, v in self.conditioners.items() if isinstance(v, WaveformConditioner)] - - @property - def has_wav_condition(self): - return len(self.wav_conditions) > 0 - - def tokenize(self, inputs: tp.List[ConditioningAttributes]) -> tp.Dict[str, tp.Any]: - """Match attributes/wavs with existing conditioners in self, and compute tokenize them accordingly. - This should be called before starting any real GPU work to avoid synchronization points. - This will return a dict matching conditioner names to their arbitrary tokenized representations. - - Args: - inputs (list[ConditioningAttribres]): List of ConditioningAttributes objects containing - text and wav conditions. - """ - assert all([type(x) == ConditioningAttributes for x in inputs]), \ - "got unexpected types input for conditioner! should be tp.List[ConditioningAttributes]" \ - f" but types were {set([type(x) for x in inputs])}" - - output = {} - text = self._collate_text(inputs) - wavs = self._collate_wavs(inputs) - - assert set(text.keys() | wavs.keys()).issubset(set(self.conditioners.keys())), \ - f"got an unexpected attribute! Expected {self.conditioners.keys()}, got {text.keys(), wavs.keys()}" - - for attribute, batch in chain(text.items(), wavs.items()): - output[attribute] = self.conditioners[attribute].tokenize(batch) - return output - - def forward(self, tokenized: tp.Dict[str, tp.Any]) -> tp.Dict[str, ConditionType]: - """Compute pairs of `(embedding, mask)` using the configured conditioners - and the tokenized representations. The output is for example: - - { - "genre": (torch.Tensor([B, 1, D_genre]), torch.Tensor([B, 1])), - "description": (torch.Tensor([B, T_desc, D_desc]), torch.Tensor([B, T_desc])), - ... - } - - Args: - tokenized (dict): Dict of tokenized representations as returned by `tokenize()`. - """ - output = {} - for attribute, inputs in tokenized.items(): - condition, mask = self.conditioners[attribute](inputs) - output[attribute] = (condition, mask) - return output - - def _collate_text(self, samples: tp.List[ConditioningAttributes]) -> tp.Dict[str, tp.List[tp.Optional[str]]]: - """Given a list of ConditioningAttributes objects, compile a dictionary where the keys - are the attributes and the values are the aggregated input per attribute. 
- For example: - Input: - [ - ConditioningAttributes(text={"genre": "Rock", "description": "A rock song with a guitar solo"}, wav=...), - ConditioningAttributes(text={"genre": "Hip-hop", "description": "A hip-hop verse"}, wav=...), - ] - Output: - { - "genre": ["Rock", "Hip-hop"], - "description": ["A rock song with a guitar solo", "A hip-hop verse"] - } - """ - batch_per_attribute: tp.Dict[str, tp.List[tp.Optional[str]]] = defaultdict(list) - - def _merge_conds(cond, merge_text_conditions_p=0, drop_desc_p=0): - def is_valid(k, v): - k_valid = k in ['key', 'bpm', 'genre', 'moods', 'instrument'] - v_valid = v is not None and isinstance(v, (int, float, str, list)) - return k_valid and v_valid - - def process_value(v): - if isinstance(v, (int, float, str)): - return v - if isinstance(v, list): - return ", ".join(v) - else: - RuntimeError(f"unknown type for text value! ({type(v), v})") - - desc = cond.text['description'] - meta_data = "" - if random.uniform(0, 1) < merge_text_conditions_p: - meta_pairs = [f'{k}: {process_value(v)}' for k, v in cond.text.items() if is_valid(k, v)] - random.shuffle(meta_pairs) - meta_data = ". ".join(meta_pairs) - desc = desc if not random.uniform(0, 1) < drop_desc_p else None - - if desc is None: - desc = meta_data if len(meta_data) > 1 else None - else: - desc = desc.rstrip('.') + ". " + meta_data - cond.text['description'] = desc.strip() if desc else None - - if self.training and self.merge_text_conditions_p: - for sample in samples: - _merge_conds(sample, self.merge_text_conditions_p, self.drop_desc_p) - - texts = [x.text for x in samples] - for text in texts: - for condition in self.text_conditions: - batch_per_attribute[condition].append(text[condition]) - - return batch_per_attribute - - def _collate_wavs(self, samples: tp.List[ConditioningAttributes]): - """Generate a dict where the keys are attributes by which we fetch similar wavs, - and the values are Tensors of wavs according to said attribtues. - - *Note*: by the time the samples reach this function, each sample should have some waveform - inside the "wav" attribute. It should be either: - 1. A real waveform - 2. A null waveform due to the sample having no similar waveforms (nullified by the dataset) - 3. A null waveform due to it being dropped in a dropout module (nullified by dropout) - - Args: - samples (tp.List[ConditioningAttributes]): List of ConditioningAttributes samples. - Returns: - dict: A dicionary mapping an attribute name to wavs. - """ - wavs = defaultdict(list) - lens = defaultdict(list) - paths = defaultdict(list) - out = {} - - for sample in samples: - for attribute in self.wav_conditions: - wav, length, path = sample.wav[attribute] - wavs[attribute].append(wav.flatten()) - lens[attribute].append(length) - paths[attribute].append(path) - - # stack all wavs to a single tensor - for attribute in self.wav_conditions: - stacked_wav, _ = collate(wavs[attribute], dim=0) - out[attribute] = WavCondition(stacked_wav.unsqueeze(1), - torch.cat(lens['self_wav']), paths[attribute]) # type: ignore - - return out - - -class ConditionFuser(StreamingModule): - """Condition fuser handles the logic to combine the different conditions - to the actual model input. - - Args: - fuse2cond (tp.Dict[str, str]): A dictionary that says how to fuse - each condition. For example: - { - "prepend": ["description"], - "sum": ["genre", "bpm"], - "cross": ["description"], - } - cross_attention_pos_emb (bool, optional): Use positional embeddings in cross attention. 
- cross_attention_pos_emb_scale (int): Scale for positional embeddings in cross attention if used. - """ - FUSING_METHODS = ["sum", "prepend", "cross", "input_interpolate"] - - def __init__(self, fuse2cond: tp.Dict[str, tp.List[str]], cross_attention_pos_emb: bool = False, - cross_attention_pos_emb_scale: float = 1.0): - super().__init__() - assert all( - [k in self.FUSING_METHODS for k in fuse2cond.keys()] - ), f"got invalid fuse method, allowed methods: {self.FUSING_MEHTODS}" - self.cross_attention_pos_emb = cross_attention_pos_emb - self.cross_attention_pos_emb_scale = cross_attention_pos_emb_scale - self.fuse2cond: tp.Dict[str, tp.List[str]] = fuse2cond - self.cond2fuse: tp.Dict[str, str] = {} - for fuse_method, conditions in fuse2cond.items(): - for condition in conditions: - self.cond2fuse[condition] = fuse_method - - def forward( - self, - input: Tensor, - conditions: tp.Dict[str, ConditionType] - ) -> tp.Tuple[Tensor, tp.Optional[Tensor]]: - """Fuse the conditions to the provided model input. - - Args: - input (Tensor): Transformer input. - conditions (tp.Dict[str, ConditionType]): Dict of conditions. - Returns: - tp.Tuple[Tensor, Tensor]: The first tensor is the transformer input - after the conditions have been fused. The second output tensor is the tensor - used for cross-attention or None if no cross attention inputs exist. - """ - B, T, _ = input.shape - - if 'offsets' in self._streaming_state: - first_step = False - offsets = self._streaming_state['offsets'] - else: - first_step = True - offsets = torch.zeros(input.shape[0], dtype=torch.long, device=input.device) - - assert set(conditions.keys()).issubset(set(self.cond2fuse.keys())), \ - f"given conditions contain unknown attributes for fuser, " \ - f"expected {self.cond2fuse.keys()}, got {conditions.keys()}" - cross_attention_output = None - for cond_type, (cond, cond_mask) in conditions.items(): - op = self.cond2fuse[cond_type] - if op == "sum": - input += cond - elif op == "input_interpolate": - cond = rearrange(cond, "b t d -> b d t") - cond = F.interpolate(cond, size=input.shape[1]) - input += rearrange(cond, "b d t -> b t d") - elif op == "prepend": - if first_step: - input = torch.cat([cond, input], dim=1) - elif op == "cross": - if cross_attention_output is not None: - cross_attention_output = torch.cat([cross_attention_output, cond], dim=1) - else: - cross_attention_output = cond - else: - raise ValueError(f"unknown op ({op})") - - if self.cross_attention_pos_emb and cross_attention_output is not None: - positions = torch.arange( - cross_attention_output.shape[1], - device=cross_attention_output.device - ).view(1, -1, 1) - pos_emb = create_sin_embedding(positions, cross_attention_output.shape[-1]) - cross_attention_output = cross_attention_output + self.cross_attention_pos_emb_scale * pos_emb - - if self._is_streaming: - self._streaming_state['offsets'] = offsets + T - - return input, cross_attention_output diff --git a/spaces/paulbricman/conceptarium/frontend/components/knowledge.py b/spaces/paulbricman/conceptarium/frontend/components/knowledge.py deleted file mode 100644 index 0da1ce1a69328123a4a833e7ab1d648cacba2dc2..0000000000000000000000000000000000000000 --- a/spaces/paulbricman/conceptarium/frontend/components/knowledge.py +++ /dev/null @@ -1,58 +0,0 @@ -import streamlit as st -from streamlit.uploaded_file_manager import UploadedFile -import requests -import json -import io -from PIL import Image - - -def load(modality, query): - thoughts = [] - - for microverse in st.session_state.get('microverses', []): - 
url = microverse['url'] - url += '/find' - - if modality == 'text': - response = requests.get(url, params={ - 'query': query, - 'relatedness': st.session_state.get('ranker_relatedness', 0.8), - 'activation': st.session_state.get('ranker_activation', 0.), - 'noise': st.session_state.get('ranker_noise', 0.01), - 'return_embeddings': False - }, headers={'Authorization': f"Bearer {microverse['token']}"}) - elif modality == 'image': - if isinstance(query, UploadedFile): - query = Image.open(io.BytesIO(query.getvalue())) - - img_io = io.BytesIO() - query = query.convert('RGB') - query.save(img_io, 'jpeg') - img_io.seek(0) - query = img_io.read() - - response = requests.post(url, data={ - 'relatedness': st.session_state.get('ranker_relatedness', 0.8), - 'activation': st.session_state.get('ranker_activation', 0.), - 'noise': st.session_state.get('ranker_noise', 0.01), - 'return_embeddings': False - }, files={'query': query}, headers={'Authorization': f"Bearer {microverse['token']}"}) - - content = json.loads(response.content) - new_thoughts = content['authorized_thoughts'] - for e_idx, e in enumerate(new_thoughts): - new_thoughts[e_idx]['conceptarium_url'] = microverse['url'] - new_thoughts[e_idx]['access_token'] = microverse['token'] - new_thoughts[e_idx]['auth'] = microverse['auth'] - - if isinstance(content, dict): - thoughts += content['authorized_thoughts'] - - return thoughts - - -@ st.cache() -def fetch_image(url, token): - response = requests.get(url, headers={'Authorization': f"Bearer {token}"}) - image = Image.open(io.BytesIO(response.content)) - return image diff --git a/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/netdissect/fsd.py b/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/netdissect/fsd.py deleted file mode 100644 index 86f70663516a7113962f02ad6d959b0a75fe2f3c..0000000000000000000000000000000000000000 --- a/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/netdissect/fsd.py +++ /dev/null @@ -1,221 +0,0 @@ -import torch, argparse, sys, os, numpy -from .sampler import FixedRandomSubsetSampler, FixedSubsetSampler -from torch.utils.data import DataLoader -from torchvision import transforms -from . import pbar -from . import zdataset -from . import segmenter -from . import frechet_distance -from . 
import parallelfolder - - -NUM_OBJECTS=336 - -def main(): - parser = argparse.ArgumentParser(description='Net dissect utility', - prog='python -m %s.fsd' % __package__) - parser.add_argument('true_dir') - parser.add_argument('gen_dir') - parser.add_argument('--size', type=int, default=10000) - parser.add_argument('--cachedir', default=None) - parser.add_argument('--histout', default=None) - parser.add_argument('--maxscale', type=float, default=50) - parser.add_argument('--labelcount', type=int, default=30) - parser.add_argument('--dpi', type=float, default=100) - if len(sys.argv) == 1: - parser.print_usage(sys.stderr) - sys.exit(1) - args = parser.parse_args() - true_dir, gen_dir = args.true_dir, args.gen_dir - seed1, seed2 = [1, 1 if true_dir != gen_dir else 2] - true_tally, gen_tally = [ - cached_tally_directory(d, size=args.size, cachedir=args.cachedir, - seed=seed) - for d, seed in [(true_dir, seed1), (gen_dir, seed2)]] - fsd, meandiff, covdiff = frechet_distance.sample_frechet_distance( - true_tally * 100, gen_tally * 100, return_components=True) - print('fsd: %f; meandiff: %f; covdiff: %f' % (fsd, meandiff, covdiff)) - if args.histout is not None: - diff_figure(true_tally * 100, gen_tally * 100, - labelcount=args.labelcount, - maxscale=args.maxscale, - dpi=args.dpi - ).savefig(args.histout) - -def cached_tally_directory(directory, size=10000, cachedir=None, seed=1, - download_from=None): - basename = ('%s_segtally_%d.npy' % (directory, size)).replace('/', '_') - if seed != 1: - basename = '%d_%s' % (seed, basename) - if cachedir is not None: - filename = os.path.join(cachedir, basename.replace('/', '_')) - else: - filename = basename - if not os.path.isfile(filename) and download_from: - from urllib.request import urlretrieve - from urllib.parse import urljoin - with pbar.reporthook() as hook: - urlretrieve(urljoin(download_from, basename), filename, - reporthook=hook) - if os.path.isfile(filename): - return numpy.load(filename) - os.makedirs(cachedir, exist_ok=True) - result = tally_directory(directory, size, seed=seed) - numpy.save(filename, result) - return result - -def tally_directory(directory, size=10000, seed=1): - dataset = parallelfolder.ParallelImageFolders( - [directory], - transform=transforms.Compose([ - transforms.Resize(256), - transforms.CenterCrop(256), - transforms.ToTensor(), - transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) - ])) - loader = DataLoader(dataset, - sampler=FixedRandomSubsetSampler(dataset, end=size, - seed=seed), - batch_size=10, pin_memory=True) - upp = segmenter.UnifiedParsingSegmenter() - labelnames, catnames = upp.get_label_and_category_names() - result = numpy.zeros((size, NUM_OBJECTS), dtype=numpy.float) - batch_result = torch.zeros(loader.batch_size, NUM_OBJECTS, - dtype=torch.float).cuda() - with torch.no_grad(): - batch_index = 0 - for [batch] in pbar(loader): - seg_result = upp.segment_batch(batch.cuda()) - for i in range(len(batch)): - batch_result[i] = ( - seg_result[i,0].view(-1).bincount( - minlength=NUM_OBJECTS).float() - / (seg_result.shape[2] * seg_result.shape[3]) - ) - result[batch_index:batch_index+len(batch)] = ( - batch_result.cpu().numpy()) - batch_index += len(batch) - return result - -def tally_dataset_objects(dataset, size=10000): - loader = DataLoader(dataset, - sampler=FixedRandomSubsetSampler(dataset, end=size), - batch_size=10, pin_memory=True) - upp = segmenter.UnifiedParsingSegmenter() - labelnames, catnames = upp.get_label_and_category_names() - result = numpy.zeros((size, NUM_OBJECTS), dtype=numpy.float) - 
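    # result accumulates per-image object-area fractions on the CPU, while
    # batch_result is a reusable GPU buffer filled one batch at a time.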
batch_result = torch.zeros(loader.batch_size, NUM_OBJECTS, - dtype=torch.float).cuda() - with torch.no_grad(): - batch_index = 0 - for [batch] in pbar(loader): - seg_result = upp.segment_batch(batch.cuda()) - for i in range(len(batch)): - batch_result[i] = ( - seg_result[i,0].view(-1).bincount( - minlength=NUM_OBJECTS).float() - / (seg_result.shape[2] * seg_result.shape[3]) - ) - result[batch_index:batch_index+len(batch)] = ( - batch_result.cpu().numpy()) - batch_index += len(batch) - return result - -def tally_generated_objects(model, size=10000): - zds = zdataset.z_dataset_for_model(model, size) - loader = DataLoader(zds, batch_size=10, pin_memory=True) - upp = segmenter.UnifiedParsingSegmenter() - labelnames, catnames = upp.get_label_and_category_names() - result = numpy.zeros((size, NUM_OBJECTS), dtype=numpy.float) - batch_result = torch.zeros(loader.batch_size, NUM_OBJECTS, - dtype=torch.float).cuda() - with torch.no_grad(): - batch_index = 0 - for [zbatch] in pbar(loader): - img = model(zbatch.cuda()) - seg_result = upp.segment_batch(img) - for i in range(len(zbatch)): - batch_result[i] = ( - seg_result[i,0].view(-1).bincount( - minlength=NUM_OBJECTS).float() - / (seg_result.shape[2] * seg_result.shape[3]) - ) - result[batch_index:batch_index+len(zbatch)] = ( - batch_result.cpu().numpy()) - batch_index += len(zbatch) - return result - -def diff_figure(ttally, gtally, - labelcount=30, labelleft=True, dpi=100, - maxscale=50.0, legend=False): - from matplotlib.backends.backend_agg import FigureCanvasAgg as FigureCanvas - from matplotlib.figure import Figure - tresult, gresult = [t.mean(0) for t in [ttally, gtally]] - upp = segmenter.UnifiedParsingSegmenter() - labelnames, catnames = upp.get_label_and_category_names() - x = [] - labels = [] - gen_amount = [] - change_frac = [] - true_amount = [] - for label in numpy.argsort(-tresult): - if label == 0 or labelnames[label][1] == 'material': - continue - if tresult[label] == 0: - break - x.append(len(x)) - labels.append(labelnames[label][0].split()[0]) - true_amount.append(tresult[label].item()) - gen_amount.append(gresult[label].item()) - change_frac.append((float(gresult[label] - tresult[label]) - / tresult[label])) - if len(x) >= labelcount: - break - fig = Figure(dpi=dpi, figsize=(1.4 + 5.0 * labelcount / 30, 4.0)) - FigureCanvas(fig) - a1, a0 = fig.subplots(2, 1, gridspec_kw = {'height_ratios':[1, 2]}) - a0.bar(x, change_frac, label='relative delta') - a0.set_xticks(x) - a0.set_xticklabels(labels, rotation='vertical') - if labelleft: - a0.set_ylabel('relative delta\n(gen - train) / train') - a0.set_xlim(-1.0, len(x)) - a0.set_ylim([-1, 1.1]) - a0.grid(axis='y', antialiased=False, alpha=0.25) - if legend: - a0.legend(loc=2) - prev_high = None - for ix, cf in enumerate(change_frac): - if cf > 1.15: - if prev_high == (ix - 1): - offset = 0.1 - else: - offset = 0.0 - prev_high = ix - a0.text(ix, 1.15 + offset, - '%.1f' % cf, horizontalalignment='center', size=6) - - a1.bar(x, true_amount, label='training') - a1.plot(x, gen_amount, linewidth=3, color='red', label='generated') - a1.set_yscale('log') - a1.set_xlim(-1.0, len(x)) - a1.set_ylim(maxscale / 5000, maxscale) - from matplotlib.ticker import LogLocator - # a1.yaxis.set_major_locator(LogLocator(subs=(1,))) - # a1.yaxis.set_minor_locator(LogLocator(subs=(1,), numdecs=10)) - # a1.yaxis.set_minor_locator(LogLocator(subs=(1,2,3,4,5,6,7,8,9))) - # a1.yaxis.set_minor_locator(yminor_locator) - if labelleft: - a1.set_ylabel('mean area\nlog scale') - if legend: - a1.legend() - 
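    # Major y ticks sit at decade boundaries; minor ticks are kept only inside the visible range.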
a1.set_yticks([1e-2, 1e-1, 1.0, 1e+1]) - a1.set_yticks([a * b for a in [1e-2, 1e-1, 1.0, 1e+1] for b in range(1,10) - if maxscale / 5000 <= a * b <= maxscale], - True) # minor ticks. - a1.set_xticks([]) - fig.tight_layout() - return fig - -if __name__ == '__main__': - main() diff --git a/spaces/perezcatriel/data_world_jobs/page/contact.py b/spaces/perezcatriel/data_world_jobs/page/contact.py deleted file mode 100644 index 9701a990bda8fd9366411f12251a8207c1e91b23..0000000000000000000000000000000000000000 --- a/spaces/perezcatriel/data_world_jobs/page/contact.py +++ /dev/null @@ -1,69 +0,0 @@ -import streamlit as st - - -def Contact(): - st.markdown(''' -

    Simulación de Presupuesto
    - ''', unsafe_allow_html=True) - - # Define los precios para cada opción - precio_analisis = 3500 - precio_ML = 5500 - precio_app = 3000 - precio_mantenimiento = 550 - - # Define las opciones como un diccionario de la forma {nombre_opción: precio_opción} - opciones = { - "Opción 1": precio_analisis, - "Opción 2": precio_ML, - "Opción 3": precio_app, - "Opción 4": precio_mantenimiento - } - - # Crea un checkbox para cada opción - analisis = st.checkbox("Análisis y Reportes") - ML = st.checkbox("Algoritmos de ML aplicado") - app = st.checkbox("Creación de una App") - # mantenimiento = st.checkbox('Mantenimiento') - - # Crea un campo numérico para la cantidad - cantidad = st.number_input("Meses de mantenimiento:", min_value=0, value=0) - - mes_mantenimiento = cantidad * precio_mantenimiento - - # Calcula el total en función de las opciones elegidas - total = mes_mantenimiento + sum( - [opciones[opcion] for opcion, seleccionada in zip(opciones.keys(), - [analisis, ML, - app]) if - seleccionada]) - - # Muestra el total - st.markdown(f''' - Total $: {total} - :rocket: - ''', unsafe_allow_html=True) - - st.markdown(''' -
    Datos de contactos
    - ''', unsafe_allow_html=True) - # Crea campos de entrada para el nombre, correo electrónico y mensaje - nombre = st.text_input("Nombre completo") - email = st.text_input("Correo electrónico") - mensaje = st.text_area("Mensaje") - - # Crea un botón para enviar el formulario - enviar = st.button("Enviar") - - # Si el botón es presionado, muestra un mensaje de confirmación - if enviar: - if nombre and email and mensaje: - - st.write( - "¡Gracias por tu mensaje! Nos pondremos en contacto contigo pronto.") - else: - st.error("Por favor completa todos los campos requeridos.") diff --git a/spaces/pikto/Elite-freegpt-webui/g4f/Provider/Providers/Fakeopen.py b/spaces/pikto/Elite-freegpt-webui/g4f/Provider/Providers/Fakeopen.py deleted file mode 100644 index da6743ac3d3d5270cda55a88137ce2798d1be468..0000000000000000000000000000000000000000 --- a/spaces/pikto/Elite-freegpt-webui/g4f/Provider/Providers/Fakeopen.py +++ /dev/null @@ -1,54 +0,0 @@ -import os -import json -import requests -from typing import Dict, get_type_hints - -url = 'https://ai.fakeopen.com/v1/' -model = [ - 'gpt-3.5-turbo', - 'gpt-3.5-turbo-0613' - 'gpt-3.5-turbo-16k', - 'gpt-3.5-turbo-16k-0613', -] - -supports_stream = True -needs_auth = False - - -def _create_completion(model: str, messages: list, stream: bool, **kwargs): - - headers = { - 'Content-Type': 'application/json', - 'accept': 'text/event-stream', - 'Cache-Control': 'no-cache', - 'Proxy-Connection': 'keep-alive', - 'Authorization': f"Bearer {os.environ.get('FAKE_OPEN_KEY', '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')}", - } - - json_data = { - 'messages': messages, - 'temperature': 1.0, - 'model': model, - 'stream': stream, - } - - response = requests.post( - 'https://ai.fakeopen.com/v1/chat/completions', headers=headers, json=json_data, stream=True - ) - - for token in response.iter_lines(): - decoded = token.decode('utf-8') - if decoded == '[DONE]': - break - if decoded.startswith('data: '): - data_str = decoded.replace('data: ', '') - if data_str != '[DONE]': - data = json.loads(data_str) - if 'choices' in data and 'delta' in data['choices'][0] and 'content' in data['choices'][0]['delta']: - yield data['choices'][0]['delta']['content'] - - - - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + '(%s)' % ', '.join( - [f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) diff --git a/spaces/pikto/Elite-freegpt-webui/g4f/__init__.py b/spaces/pikto/Elite-freegpt-webui/g4f/__init__.py deleted file mode 100644 index a0b4bac6aa4de9c0449095a3874c2cb9716169d7..0000000000000000000000000000000000000000 --- a/spaces/pikto/Elite-freegpt-webui/g4f/__init__.py +++ /dev/null @@ -1,39 +0,0 @@ -import sys -from . import Provider -from g4f.models import Model, ModelUtils - - -class ChatCompletion: - @staticmethod - def create(model: Model.model or str, messages: list, provider: Provider.Provider = None, stream: bool = False, auth: str = False, **kwargs): - kwargs['auth'] = auth - - if provider and provider.needs_auth and not auth: - print( - f'ValueError: {provider.__name__} requires authentication (use auth="cookie or token or jwt ..." 
param)', file=sys.stderr) - sys.exit(1) - - try: - if isinstance(model, str): - try: - model = ModelUtils.convert[model] - except KeyError: - raise Exception(f'The model: {model} does not exist') - - engine = model.best_provider if not provider else provider - - if not engine.supports_stream and stream == True: - print( - f"ValueError: {engine.__name__} does not support 'stream' argument", file=sys.stderr) - sys.exit(1) - - print(f'Using {engine.__name__} provider') - - return (engine._create_completion(model.name, messages, stream, **kwargs) - if stream else ''.join(engine._create_completion(model.name, messages, stream, **kwargs))) - except TypeError as e: - print(e) - arg: str = str(e).split("'")[1] - print( - f"ValueError: {engine.__name__} does not support '{arg}' argument", file=sys.stderr) - sys.exit(1) diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/importlib_resources/_itertools.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/importlib_resources/_itertools.py deleted file mode 100644 index cce05582ffc6fe6d72027194f4ccc44ee42f1fcd..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/importlib_resources/_itertools.py +++ /dev/null @@ -1,35 +0,0 @@ -from itertools import filterfalse - -from typing import ( - Callable, - Iterable, - Iterator, - Optional, - Set, - TypeVar, - Union, -) - -# Type and type variable definitions -_T = TypeVar('_T') -_U = TypeVar('_U') - - -def unique_everseen( - iterable: Iterable[_T], key: Optional[Callable[[_T], _U]] = None -) -> Iterator[_T]: - "List unique elements, preserving order. Remember all elements ever seen." - # unique_everseen('AAAABBBCCDAABBB') --> A B C D - # unique_everseen('ABBCcAD', str.lower) --> A B C D - seen: Set[Union[_T, _U]] = set() - seen_add = seen.add - if key is None: - for element in filterfalse(seen.__contains__, iterable): - seen_add(element) - yield element - else: - for element in iterable: - k = key(element) - if k not in seen: - seen_add(k) - yield element diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fastapi/concurrency.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fastapi/concurrency.py deleted file mode 100644 index 754061c862dadbdfd0c57a563b76fbd0fb5497a4..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fastapi/concurrency.py +++ /dev/null @@ -1,40 +0,0 @@ -from contextlib import AsyncExitStack as AsyncExitStack # noqa -from contextlib import asynccontextmanager as asynccontextmanager -from typing import AsyncGenerator, ContextManager, TypeVar - -import anyio -from anyio import CapacityLimiter -from starlette.concurrency import iterate_in_threadpool as iterate_in_threadpool # noqa -from starlette.concurrency import run_in_threadpool as run_in_threadpool # noqa -from starlette.concurrency import ( # noqa - run_until_first_complete as run_until_first_complete, -) - -_T = TypeVar("_T") - - -@asynccontextmanager -async def contextmanager_in_threadpool( - cm: ContextManager[_T], -) -> AsyncGenerator[_T, None]: - # blocking __exit__ from running waiting on a free thread - # can create race conditions/deadlocks if the context manager itself - # has its own internal pool (e.g. 
a database connection pool) - # to avoid this we let __exit__ run without a capacity limit - # since we're creating a new limiter for each call, any non-zero limit - # works (1 is arbitrary) - exit_limiter = CapacityLimiter(1) - try: - yield await run_in_threadpool(cm.__enter__) - except Exception as e: - ok = bool( - await anyio.to_thread.run_sync( - cm.__exit__, type(e), e, None, limiter=exit_limiter - ) - ) - if not ok: - raise e - else: - await anyio.to_thread.run_sync( - cm.__exit__, None, None, None, limiter=exit_limiter - ) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/otlLib/builder.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/otlLib/builder.py deleted file mode 100644 index 94628bff1158ece77035d97f29ca7bdecaa5b05b..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/otlLib/builder.py +++ /dev/null @@ -1,2920 +0,0 @@ -from collections import namedtuple, OrderedDict -import os -from fontTools.misc.fixedTools import fixedToFloat -from fontTools import ttLib -from fontTools.ttLib.tables import otTables as ot -from fontTools.ttLib.tables.otBase import ( - ValueRecord, - valueRecordFormatDict, - OTTableWriter, - CountReference, -) -from fontTools.ttLib.tables import otBase -from fontTools.feaLib.ast import STATNameStatement -from fontTools.otlLib.optimize.gpos import ( - _compression_level_from_env, - compact_lookup, -) -from fontTools.otlLib.error import OpenTypeLibError -from functools import reduce -import logging -import copy - - -log = logging.getLogger(__name__) - - -def buildCoverage(glyphs, glyphMap): - """Builds a coverage table. - - Coverage tables (as defined in the `OpenType spec `__) - are used in all OpenType Layout lookups apart from the Extension type, and - define the glyphs involved in a layout subtable. This allows shaping engines - to compare the glyph stream with the coverage table and quickly determine - whether a subtable should be involved in a shaping operation. - - This function takes a list of glyphs and a glyphname-to-ID map, and - returns a ``Coverage`` object representing the coverage table. - - Example:: - - glyphMap = font.getReverseGlyphMap() - glyphs = [ "A", "B", "C" ] - coverage = buildCoverage(glyphs, glyphMap) - - Args: - glyphs: a sequence of glyph names. - glyphMap: a glyph name to ID map, typically returned from - ``font.getReverseGlyphMap()``. - - Returns: - An ``otTables.Coverage`` object or ``None`` if there are no glyphs - supplied. - """ - - if not glyphs: - return None - self = ot.Coverage() - try: - self.glyphs = sorted(set(glyphs), key=glyphMap.__getitem__) - except KeyError as e: - raise ValueError(f"Could not find glyph {e} in font") from e - - return self - - -LOOKUP_FLAG_RIGHT_TO_LEFT = 0x0001 -LOOKUP_FLAG_IGNORE_BASE_GLYPHS = 0x0002 -LOOKUP_FLAG_IGNORE_LIGATURES = 0x0004 -LOOKUP_FLAG_IGNORE_MARKS = 0x0008 -LOOKUP_FLAG_USE_MARK_FILTERING_SET = 0x0010 - - -def buildLookup(subtables, flags=0, markFilterSet=None): - """Turns a collection of rules into a lookup. - - A Lookup (as defined in the `OpenType Spec `__) - wraps the individual rules in a layout operation (substitution or - positioning) in a data structure expressing their overall lookup type - - for example, single substitution, mark-to-base attachment, and so on - - as well as the lookup flags and any mark filtering sets. 
You may import - the following constants to express lookup flags: - - - ``LOOKUP_FLAG_RIGHT_TO_LEFT`` - - ``LOOKUP_FLAG_IGNORE_BASE_GLYPHS`` - - ``LOOKUP_FLAG_IGNORE_LIGATURES`` - - ``LOOKUP_FLAG_IGNORE_MARKS`` - - ``LOOKUP_FLAG_USE_MARK_FILTERING_SET`` - - Args: - subtables: A list of layout subtable objects (e.g. - ``MultipleSubst``, ``PairPos``, etc.) or ``None``. - flags (int): This lookup's flags. - markFilterSet: Either ``None`` if no mark filtering set is used, or - an integer representing the filtering set to be used for this - lookup. If a mark filtering set is provided, - `LOOKUP_FLAG_USE_MARK_FILTERING_SET` will be set on the lookup's - flags. - - Returns: - An ``otTables.Lookup`` object or ``None`` if there are no subtables - supplied. - """ - if subtables is None: - return None - subtables = [st for st in subtables if st is not None] - if not subtables: - return None - assert all( - t.LookupType == subtables[0].LookupType for t in subtables - ), "all subtables must have the same LookupType; got %s" % repr( - [t.LookupType for t in subtables] - ) - self = ot.Lookup() - self.LookupType = subtables[0].LookupType - self.LookupFlag = flags - self.SubTable = subtables - self.SubTableCount = len(self.SubTable) - if markFilterSet is not None: - self.LookupFlag |= LOOKUP_FLAG_USE_MARK_FILTERING_SET - assert isinstance(markFilterSet, int), markFilterSet - self.MarkFilteringSet = markFilterSet - else: - assert (self.LookupFlag & LOOKUP_FLAG_USE_MARK_FILTERING_SET) == 0, ( - "if markFilterSet is None, flags must not set " - "LOOKUP_FLAG_USE_MARK_FILTERING_SET; flags=0x%04x" % flags - ) - return self - - -class LookupBuilder(object): - SUBTABLE_BREAK_ = "SUBTABLE_BREAK" - - def __init__(self, font, location, table, lookup_type): - self.font = font - self.glyphMap = font.getReverseGlyphMap() - self.location = location - self.table, self.lookup_type = table, lookup_type - self.lookupflag = 0 - self.markFilterSet = None - self.lookup_index = None # assigned when making final tables - assert table in ("GPOS", "GSUB") - - def equals(self, other): - return ( - isinstance(other, self.__class__) - and self.table == other.table - and self.lookupflag == other.lookupflag - and self.markFilterSet == other.markFilterSet - ) - - def inferGlyphClasses(self): - """Infers glyph glasses for the GDEF table, such as {"cedilla":3}.""" - return {} - - def getAlternateGlyphs(self): - """Helper for building 'aalt' features.""" - return {} - - def buildLookup_(self, subtables): - return buildLookup(subtables, self.lookupflag, self.markFilterSet) - - def buildMarkClasses_(self, marks): - """{"cedilla": ("BOTTOM", ast.Anchor), ...} --> {"BOTTOM":0, "TOP":1} - - Helper for MarkBasePostBuilder, MarkLigPosBuilder, and - MarkMarkPosBuilder. Seems to return the same numeric IDs - for mark classes as the AFDKO makeotf tool. 
- """ - ids = {} - for mark in sorted(marks.keys(), key=self.font.getGlyphID): - markClassName, _markAnchor = marks[mark] - if markClassName not in ids: - ids[markClassName] = len(ids) - return ids - - def setBacktrackCoverage_(self, prefix, subtable): - subtable.BacktrackGlyphCount = len(prefix) - subtable.BacktrackCoverage = [] - for p in reversed(prefix): - coverage = buildCoverage(p, self.glyphMap) - subtable.BacktrackCoverage.append(coverage) - - def setLookAheadCoverage_(self, suffix, subtable): - subtable.LookAheadGlyphCount = len(suffix) - subtable.LookAheadCoverage = [] - for s in suffix: - coverage = buildCoverage(s, self.glyphMap) - subtable.LookAheadCoverage.append(coverage) - - def setInputCoverage_(self, glyphs, subtable): - subtable.InputGlyphCount = len(glyphs) - subtable.InputCoverage = [] - for g in glyphs: - coverage = buildCoverage(g, self.glyphMap) - subtable.InputCoverage.append(coverage) - - def setCoverage_(self, glyphs, subtable): - subtable.GlyphCount = len(glyphs) - subtable.Coverage = [] - for g in glyphs: - coverage = buildCoverage(g, self.glyphMap) - subtable.Coverage.append(coverage) - - def build_subst_subtables(self, mapping, klass): - substitutions = [{}] - for key in mapping: - if key[0] == self.SUBTABLE_BREAK_: - substitutions.append({}) - else: - substitutions[-1][key] = mapping[key] - subtables = [klass(s) for s in substitutions] - return subtables - - def add_subtable_break(self, location): - """Add an explicit subtable break. - - Args: - location: A string or tuple representing the location in the - original source which produced this break, or ``None`` if - no location is provided. - """ - log.warning( - OpenTypeLibError( - 'unsupported "subtable" statement for lookup type', location - ) - ) - - -class AlternateSubstBuilder(LookupBuilder): - """Builds an Alternate Substitution (GSUB3) lookup. - - Users are expected to manually add alternate glyph substitutions to - the ``alternates`` attribute after the object has been initialized, - e.g.:: - - builder.alternates["A"] = ["A.alt1", "A.alt2"] - - Attributes: - font (``fontTools.TTLib.TTFont``): A font object. - location: A string or tuple representing the location in the original - source which produced this lookup. - alternates: An ordered dictionary of alternates, mapping glyph names - to a list of names of alternates. - lookupflag (int): The lookup's flag - markFilterSet: Either ``None`` if no mark filtering set is used, or - an integer representing the filtering set to be used for this - lookup. If a mark filtering set is provided, - `LOOKUP_FLAG_USE_MARK_FILTERING_SET` will be set on the lookup's - flags. - """ - - def __init__(self, font, location): - LookupBuilder.__init__(self, font, location, "GSUB", 3) - self.alternates = OrderedDict() - - def equals(self, other): - return LookupBuilder.equals(self, other) and self.alternates == other.alternates - - def build(self): - """Build the lookup. - - Returns: - An ``otTables.Lookup`` object representing the alternate - substitution lookup. 
- """ - subtables = self.build_subst_subtables( - self.alternates, buildAlternateSubstSubtable - ) - return self.buildLookup_(subtables) - - def getAlternateGlyphs(self): - return self.alternates - - def add_subtable_break(self, location): - self.alternates[(self.SUBTABLE_BREAK_, location)] = self.SUBTABLE_BREAK_ - - -class ChainContextualRule( - namedtuple("ChainContextualRule", ["prefix", "glyphs", "suffix", "lookups"]) -): - @property - def is_subtable_break(self): - return self.prefix == LookupBuilder.SUBTABLE_BREAK_ - - -class ChainContextualRuleset: - def __init__(self): - self.rules = [] - - def addRule(self, rule): - self.rules.append(rule) - - @property - def hasPrefixOrSuffix(self): - # Do we have any prefixes/suffixes? If this is False for all - # rulesets, we can express the whole lookup as GPOS5/GSUB7. - for rule in self.rules: - if len(rule.prefix) > 0 or len(rule.suffix) > 0: - return True - return False - - @property - def hasAnyGlyphClasses(self): - # Do we use glyph classes anywhere in the rules? If this is False - # we can express this subtable as a Format 1. - for rule in self.rules: - for coverage in (rule.prefix, rule.glyphs, rule.suffix): - if any(len(x) > 1 for x in coverage): - return True - return False - - def format2ClassDefs(self): - PREFIX, GLYPHS, SUFFIX = 0, 1, 2 - classDefBuilders = [] - for ix in [PREFIX, GLYPHS, SUFFIX]: - context = [] - for r in self.rules: - context.append(r[ix]) - classes = self._classBuilderForContext(context) - if not classes: - return None - classDefBuilders.append(classes) - return classDefBuilders - - def _classBuilderForContext(self, context): - classdefbuilder = ClassDefBuilder(useClass0=False) - for position in context: - for glyphset in position: - glyphs = set(glyphset) - if not classdefbuilder.canAdd(glyphs): - return None - classdefbuilder.add(glyphs) - return classdefbuilder - - -class ChainContextualBuilder(LookupBuilder): - def equals(self, other): - return LookupBuilder.equals(self, other) and self.rules == other.rules - - def rulesets(self): - # Return a list of ChainContextRuleset objects, taking explicit - # subtable breaks into account - ruleset = [ChainContextualRuleset()] - for rule in self.rules: - if rule.is_subtable_break: - ruleset.append(ChainContextualRuleset()) - continue - ruleset[-1].addRule(rule) - # Squish any empty subtables - return [x for x in ruleset if len(x.rules) > 0] - - def getCompiledSize_(self, subtables): - size = 0 - for st in subtables: - w = OTTableWriter() - w["LookupType"] = CountReference( - {"LookupType": st.LookupType}, "LookupType" - ) - # We need to make a copy here because compiling - # modifies the subtable (finalizing formats etc.) - copy.deepcopy(st).compile(w, self.font) - size += len(w.getAllData()) - return size - - def build(self): - """Build the lookup. - - Returns: - An ``otTables.Lookup`` object representing the chained - contextual positioning lookup. - """ - subtables = [] - - rulesets = self.rulesets() - chaining = any(ruleset.hasPrefixOrSuffix for ruleset in rulesets) - - # https://github.com/fonttools/fonttools/issues/2539 - # - # Unfortunately, as of 2022-03-07, Apple's CoreText renderer does not - # correctly process GPOS7 lookups, so for now we force contextual - # positioning lookups to be chaining (GPOS8). - # - # This seems to be fixed as of macOS 13.2, but we keep disabling this - # for now until we are no longer concerned about old macOS versions. - # But we allow people to opt-out of this with the config key below. 
- write_gpos7 = self.font.cfg.get("fontTools.otlLib.builder:WRITE_GPOS7") - # horrible separation of concerns breach - if not write_gpos7 and self.subtable_type == "Pos": - chaining = True - - for ruleset in rulesets: - # Determine format strategy. We try to build formats 1, 2 and 3 - # subtables and then work out which is best. candidates list holds - # the subtables in each format for this ruleset (including a dummy - # "format 0" to make the addressing match the format numbers). - - # We can always build a format 3 lookup by accumulating each of - # the rules into a list, so start with that. - candidates = [None, None, None, []] - for rule in ruleset.rules: - candidates[3].append(self.buildFormat3Subtable(rule, chaining)) - - # Can we express the whole ruleset as a format 2 subtable? - classdefs = ruleset.format2ClassDefs() - if classdefs: - candidates[2] = [ - self.buildFormat2Subtable(ruleset, classdefs, chaining) - ] - - if not ruleset.hasAnyGlyphClasses: - candidates[1] = [self.buildFormat1Subtable(ruleset, chaining)] - - for i in [1, 2, 3]: - if candidates[i]: - try: - self.getCompiledSize_(candidates[i]) - except Exception as e: - log.warning( - "Contextual format %i at %s overflowed (%s)" - % (i, str(self.location), e) - ) - candidates[i] = None - - candidates = [x for x in candidates if x is not None] - if not candidates: - raise OpenTypeLibError("All candidates overflowed", self.location) - - winner = min(candidates, key=self.getCompiledSize_) - subtables.extend(winner) - - # If we are not chaining, lookup type will be automatically fixed by - # buildLookup_ - return self.buildLookup_(subtables) - - def buildFormat1Subtable(self, ruleset, chaining=True): - st = self.newSubtable_(chaining=chaining) - st.Format = 1 - st.populateDefaults() - coverage = set() - rulesetsByFirstGlyph = {} - ruleAttr = self.ruleAttr_(format=1, chaining=chaining) - - for rule in ruleset.rules: - ruleAsSubtable = self.newRule_(format=1, chaining=chaining) - - if chaining: - ruleAsSubtable.BacktrackGlyphCount = len(rule.prefix) - ruleAsSubtable.LookAheadGlyphCount = len(rule.suffix) - ruleAsSubtable.Backtrack = [list(x)[0] for x in reversed(rule.prefix)] - ruleAsSubtable.LookAhead = [list(x)[0] for x in rule.suffix] - - ruleAsSubtable.InputGlyphCount = len(rule.glyphs) - else: - ruleAsSubtable.GlyphCount = len(rule.glyphs) - - ruleAsSubtable.Input = [list(x)[0] for x in rule.glyphs[1:]] - - self.buildLookupList(rule, ruleAsSubtable) - - firstGlyph = list(rule.glyphs[0])[0] - if firstGlyph not in rulesetsByFirstGlyph: - coverage.add(firstGlyph) - rulesetsByFirstGlyph[firstGlyph] = [] - rulesetsByFirstGlyph[firstGlyph].append(ruleAsSubtable) - - st.Coverage = buildCoverage(coverage, self.glyphMap) - ruleSets = [] - for g in st.Coverage.glyphs: - ruleSet = self.newRuleSet_(format=1, chaining=chaining) - setattr(ruleSet, ruleAttr, rulesetsByFirstGlyph[g]) - setattr(ruleSet, f"{ruleAttr}Count", len(rulesetsByFirstGlyph[g])) - ruleSets.append(ruleSet) - - setattr(st, self.ruleSetAttr_(format=1, chaining=chaining), ruleSets) - setattr( - st, self.ruleSetAttr_(format=1, chaining=chaining) + "Count", len(ruleSets) - ) - - return st - - def buildFormat2Subtable(self, ruleset, classdefs, chaining=True): - st = self.newSubtable_(chaining=chaining) - st.Format = 2 - st.populateDefaults() - - if chaining: - ( - st.BacktrackClassDef, - st.InputClassDef, - st.LookAheadClassDef, - ) = [c.build() for c in classdefs] - else: - st.ClassDef = classdefs[1].build() - - inClasses = classdefs[1].classes() - - classSets = [] - 
for _ in inClasses: - classSet = self.newRuleSet_(format=2, chaining=chaining) - classSets.append(classSet) - - coverage = set() - classRuleAttr = self.ruleAttr_(format=2, chaining=chaining) - - for rule in ruleset.rules: - ruleAsSubtable = self.newRule_(format=2, chaining=chaining) - if chaining: - ruleAsSubtable.BacktrackGlyphCount = len(rule.prefix) - ruleAsSubtable.LookAheadGlyphCount = len(rule.suffix) - # The glyphs in the rule may be list, tuple, odict_keys... - # Order is not important anyway because they are guaranteed - # to be members of the same class. - ruleAsSubtable.Backtrack = [ - st.BacktrackClassDef.classDefs[list(x)[0]] - for x in reversed(rule.prefix) - ] - ruleAsSubtable.LookAhead = [ - st.LookAheadClassDef.classDefs[list(x)[0]] for x in rule.suffix - ] - - ruleAsSubtable.InputGlyphCount = len(rule.glyphs) - ruleAsSubtable.Input = [ - st.InputClassDef.classDefs[list(x)[0]] for x in rule.glyphs[1:] - ] - setForThisRule = classSets[ - st.InputClassDef.classDefs[list(rule.glyphs[0])[0]] - ] - else: - ruleAsSubtable.GlyphCount = len(rule.glyphs) - ruleAsSubtable.Class = [ # The spec calls this InputSequence - st.ClassDef.classDefs[list(x)[0]] for x in rule.glyphs[1:] - ] - setForThisRule = classSets[ - st.ClassDef.classDefs[list(rule.glyphs[0])[0]] - ] - - self.buildLookupList(rule, ruleAsSubtable) - coverage |= set(rule.glyphs[0]) - - getattr(setForThisRule, classRuleAttr).append(ruleAsSubtable) - setattr( - setForThisRule, - f"{classRuleAttr}Count", - getattr(setForThisRule, f"{classRuleAttr}Count") + 1, - ) - setattr(st, self.ruleSetAttr_(format=2, chaining=chaining), classSets) - setattr( - st, self.ruleSetAttr_(format=2, chaining=chaining) + "Count", len(classSets) - ) - st.Coverage = buildCoverage(coverage, self.glyphMap) - return st - - def buildFormat3Subtable(self, rule, chaining=True): - st = self.newSubtable_(chaining=chaining) - st.Format = 3 - if chaining: - self.setBacktrackCoverage_(rule.prefix, st) - self.setLookAheadCoverage_(rule.suffix, st) - self.setInputCoverage_(rule.glyphs, st) - else: - self.setCoverage_(rule.glyphs, st) - self.buildLookupList(rule, st) - return st - - def buildLookupList(self, rule, st): - for sequenceIndex, lookupList in enumerate(rule.lookups): - if lookupList is not None: - if not isinstance(lookupList, list): - # Can happen with synthesised lookups - lookupList = [lookupList] - for l in lookupList: - if l.lookup_index is None: - if isinstance(self, ChainContextPosBuilder): - other = "substitution" - else: - other = "positioning" - raise OpenTypeLibError( - "Missing index of the specified " - f"lookup, might be a {other} lookup", - self.location, - ) - rec = self.newLookupRecord_(st) - rec.SequenceIndex = sequenceIndex - rec.LookupListIndex = l.lookup_index - - def add_subtable_break(self, location): - self.rules.append( - ChainContextualRule( - self.SUBTABLE_BREAK_, - self.SUBTABLE_BREAK_, - self.SUBTABLE_BREAK_, - [self.SUBTABLE_BREAK_], - ) - ) - - def newSubtable_(self, chaining=True): - subtablename = f"Context{self.subtable_type}" - if chaining: - subtablename = "Chain" + subtablename - st = getattr(ot, subtablename)() # ot.ChainContextPos()/ot.ChainSubst()/etc. 
- setattr(st, f"{self.subtable_type}Count", 0) - setattr(st, f"{self.subtable_type}LookupRecord", []) - return st - - # Format 1 and format 2 GSUB5/GSUB6/GPOS7/GPOS8 rulesets and rules form a family: - # - # format 1 ruleset format 1 rule format 2 ruleset format 2 rule - # GSUB5 SubRuleSet SubRule SubClassSet SubClassRule - # GSUB6 ChainSubRuleSet ChainSubRule ChainSubClassSet ChainSubClassRule - # GPOS7 PosRuleSet PosRule PosClassSet PosClassRule - # GPOS8 ChainPosRuleSet ChainPosRule ChainPosClassSet ChainPosClassRule - # - # The following functions generate the attribute names and subtables according - # to this naming convention. - def ruleSetAttr_(self, format=1, chaining=True): - if format == 1: - formatType = "Rule" - elif format == 2: - formatType = "Class" - else: - raise AssertionError(formatType) - subtablename = f"{self.subtable_type[0:3]}{formatType}Set" # Sub, not Subst. - if chaining: - subtablename = "Chain" + subtablename - return subtablename - - def ruleAttr_(self, format=1, chaining=True): - if format == 1: - formatType = "" - elif format == 2: - formatType = "Class" - else: - raise AssertionError(formatType) - subtablename = f"{self.subtable_type[0:3]}{formatType}Rule" # Sub, not Subst. - if chaining: - subtablename = "Chain" + subtablename - return subtablename - - def newRuleSet_(self, format=1, chaining=True): - st = getattr( - ot, self.ruleSetAttr_(format, chaining) - )() # ot.ChainPosRuleSet()/ot.SubRuleSet()/etc. - st.populateDefaults() - return st - - def newRule_(self, format=1, chaining=True): - st = getattr( - ot, self.ruleAttr_(format, chaining) - )() # ot.ChainPosClassRule()/ot.SubClassRule()/etc. - st.populateDefaults() - return st - - def attachSubtableWithCount_( - self, st, subtable_name, count_name, existing=None, index=None, chaining=False - ): - if chaining: - subtable_name = "Chain" + subtable_name - count_name = "Chain" + count_name - - if not hasattr(st, count_name): - setattr(st, count_name, 0) - setattr(st, subtable_name, []) - - if existing: - new_subtable = existing - else: - # Create a new, empty subtable from otTables - new_subtable = getattr(ot, subtable_name)() - - setattr(st, count_name, getattr(st, count_name) + 1) - - if index: - getattr(st, subtable_name).insert(index, new_subtable) - else: - getattr(st, subtable_name).append(new_subtable) - - return new_subtable - - def newLookupRecord_(self, st): - return self.attachSubtableWithCount_( - st, - f"{self.subtable_type}LookupRecord", - f"{self.subtable_type}Count", - chaining=False, - ) # Oddly, it isn't ChainSubstLookupRecord - - -class ChainContextPosBuilder(ChainContextualBuilder): - """Builds a Chained Contextual Positioning (GPOS8) lookup. - - Users are expected to manually add rules to the ``rules`` attribute after - the object has been initialized, e.g.:: - - # pos [A B] [C D] x' lookup lu1 y' z' lookup lu2 E; - - prefix = [ ["A", "B"], ["C", "D"] ] - suffix = [ ["E"] ] - glyphs = [ ["x"], ["y"], ["z"] ] - lookups = [ [lu1], None, [lu2] ] - builder.rules.append( (prefix, glyphs, suffix, lookups) ) - - Attributes: - font (``fontTools.TTLib.TTFont``): A font object. - location: A string or tuple representing the location in the original - source which produced this lookup. - rules: A list of tuples representing the rules in this lookup. - lookupflag (int): The lookup's flag - markFilterSet: Either ``None`` if no mark filtering set is used, or - an integer representing the filtering set to be used for this - lookup. 
If a mark filtering set is provided, - `LOOKUP_FLAG_USE_MARK_FILTERING_SET` will be set on the lookup's - flags. - """ - - def __init__(self, font, location): - LookupBuilder.__init__(self, font, location, "GPOS", 8) - self.rules = [] - self.subtable_type = "Pos" - - def find_chainable_single_pos(self, lookups, glyphs, value): - """Helper for add_single_pos_chained_()""" - res = None - for lookup in lookups[::-1]: - if lookup == self.SUBTABLE_BREAK_: - return res - if isinstance(lookup, SinglePosBuilder) and all( - lookup.can_add(glyph, value) for glyph in glyphs - ): - res = lookup - return res - - -class ChainContextSubstBuilder(ChainContextualBuilder): - """Builds a Chained Contextual Substitution (GSUB6) lookup. - - Users are expected to manually add rules to the ``rules`` attribute after - the object has been initialized, e.g.:: - - # sub [A B] [C D] x' lookup lu1 y' z' lookup lu2 E; - - prefix = [ ["A", "B"], ["C", "D"] ] - suffix = [ ["E"] ] - glyphs = [ ["x"], ["y"], ["z"] ] - lookups = [ [lu1], None, [lu2] ] - builder.rules.append( (prefix, glyphs, suffix, lookups) ) - - Attributes: - font (``fontTools.TTLib.TTFont``): A font object. - location: A string or tuple representing the location in the original - source which produced this lookup. - rules: A list of tuples representing the rules in this lookup. - lookupflag (int): The lookup's flag - markFilterSet: Either ``None`` if no mark filtering set is used, or - an integer representing the filtering set to be used for this - lookup. If a mark filtering set is provided, - `LOOKUP_FLAG_USE_MARK_FILTERING_SET` will be set on the lookup's - flags. - """ - - def __init__(self, font, location): - LookupBuilder.__init__(self, font, location, "GSUB", 6) - self.rules = [] # (prefix, input, suffix, lookups) - self.subtable_type = "Subst" - - def getAlternateGlyphs(self): - result = {} - for rule in self.rules: - if rule.is_subtable_break: - continue - for lookups in rule.lookups: - if not isinstance(lookups, list): - lookups = [lookups] - for lookup in lookups: - if lookup is not None: - alts = lookup.getAlternateGlyphs() - for glyph, replacements in alts.items(): - result.setdefault(glyph, set()).update(replacements) - return result - - def find_chainable_single_subst(self, mapping): - """Helper for add_single_subst_chained_()""" - res = None - for rule in self.rules[::-1]: - if rule.is_subtable_break: - return res - for sub in rule.lookups: - if isinstance(sub, SingleSubstBuilder) and not any( - g in mapping and mapping[g] != sub.mapping[g] for g in sub.mapping - ): - res = sub - return res - - -class LigatureSubstBuilder(LookupBuilder): - """Builds a Ligature Substitution (GSUB4) lookup. - - Users are expected to manually add ligatures to the ``ligatures`` - attribute after the object has been initialized, e.g.:: - - # sub f i by f_i; - builder.ligatures[("f","f","i")] = "f_f_i" - - Attributes: - font (``fontTools.TTLib.TTFont``): A font object. - location: A string or tuple representing the location in the original - source which produced this lookup. - ligatures: An ordered dictionary mapping a tuple of glyph names to the - ligature glyphname. - lookupflag (int): The lookup's flag - markFilterSet: Either ``None`` if no mark filtering set is used, or - an integer representing the filtering set to be used for this - lookup. If a mark filtering set is provided, - `LOOKUP_FLAG_USE_MARK_FILTERING_SET` will be set on the lookup's - flags. 
- """ - - def __init__(self, font, location): - LookupBuilder.__init__(self, font, location, "GSUB", 4) - self.ligatures = OrderedDict() # {('f','f','i'): 'f_f_i'} - - def equals(self, other): - return LookupBuilder.equals(self, other) and self.ligatures == other.ligatures - - def build(self): - """Build the lookup. - - Returns: - An ``otTables.Lookup`` object representing the ligature - substitution lookup. - """ - subtables = self.build_subst_subtables( - self.ligatures, buildLigatureSubstSubtable - ) - return self.buildLookup_(subtables) - - def add_subtable_break(self, location): - self.ligatures[(self.SUBTABLE_BREAK_, location)] = self.SUBTABLE_BREAK_ - - -class MultipleSubstBuilder(LookupBuilder): - """Builds a Multiple Substitution (GSUB2) lookup. - - Users are expected to manually add substitutions to the ``mapping`` - attribute after the object has been initialized, e.g.:: - - # sub uni06C0 by uni06D5.fina hamza.above; - builder.mapping["uni06C0"] = [ "uni06D5.fina", "hamza.above"] - - Attributes: - font (``fontTools.TTLib.TTFont``): A font object. - location: A string or tuple representing the location in the original - source which produced this lookup. - mapping: An ordered dictionary mapping a glyph name to a list of - substituted glyph names. - lookupflag (int): The lookup's flag - markFilterSet: Either ``None`` if no mark filtering set is used, or - an integer representing the filtering set to be used for this - lookup. If a mark filtering set is provided, - `LOOKUP_FLAG_USE_MARK_FILTERING_SET` will be set on the lookup's - flags. - """ - - def __init__(self, font, location): - LookupBuilder.__init__(self, font, location, "GSUB", 2) - self.mapping = OrderedDict() - - def equals(self, other): - return LookupBuilder.equals(self, other) and self.mapping == other.mapping - - def build(self): - subtables = self.build_subst_subtables(self.mapping, buildMultipleSubstSubtable) - return self.buildLookup_(subtables) - - def add_subtable_break(self, location): - self.mapping[(self.SUBTABLE_BREAK_, location)] = self.SUBTABLE_BREAK_ - - -class CursivePosBuilder(LookupBuilder): - """Builds a Cursive Positioning (GPOS3) lookup. - - Attributes: - font (``fontTools.TTLib.TTFont``): A font object. - location: A string or tuple representing the location in the original - source which produced this lookup. - attachments: An ordered dictionary mapping a glyph name to a two-element - tuple of ``otTables.Anchor`` objects. - lookupflag (int): The lookup's flag - markFilterSet: Either ``None`` if no mark filtering set is used, or - an integer representing the filtering set to be used for this - lookup. If a mark filtering set is provided, - `LOOKUP_FLAG_USE_MARK_FILTERING_SET` will be set on the lookup's - flags. - """ - - def __init__(self, font, location): - LookupBuilder.__init__(self, font, location, "GPOS", 3) - self.attachments = {} - - def equals(self, other): - return ( - LookupBuilder.equals(self, other) and self.attachments == other.attachments - ) - - def add_attachment(self, location, glyphs, entryAnchor, exitAnchor): - """Adds attachment information to the cursive positioning lookup. - - Args: - location: A string or tuple representing the location in the - original source which produced this lookup. (Unused.) - glyphs: A list of glyph names sharing these entry and exit - anchor locations. - entryAnchor: A ``otTables.Anchor`` object representing the - entry anchor, or ``None`` if no entry anchor is present. 
- exitAnchor: A ``otTables.Anchor`` object representing the - exit anchor, or ``None`` if no exit anchor is present. - """ - for glyph in glyphs: - self.attachments[glyph] = (entryAnchor, exitAnchor) - - def build(self): - """Build the lookup. - - Returns: - An ``otTables.Lookup`` object representing the cursive - positioning lookup. - """ - st = buildCursivePosSubtable(self.attachments, self.glyphMap) - return self.buildLookup_([st]) - - -class MarkBasePosBuilder(LookupBuilder): - """Builds a Mark-To-Base Positioning (GPOS4) lookup. - - Users are expected to manually add marks and bases to the ``marks`` - and ``bases`` attributes after the object has been initialized, e.g.:: - - builder.marks["acute"] = (0, a1) - builder.marks["grave"] = (0, a1) - builder.marks["cedilla"] = (1, a2) - builder.bases["a"] = {0: a3, 1: a5} - builder.bases["b"] = {0: a4, 1: a5} - - Attributes: - font (``fontTools.TTLib.TTFont``): A font object. - location: A string or tuple representing the location in the original - source which produced this lookup. - marks: An dictionary mapping a glyph name to a two-element - tuple containing a mark class ID and ``otTables.Anchor`` object. - bases: An dictionary mapping a glyph name to a dictionary of - mark class IDs and ``otTables.Anchor`` object. - lookupflag (int): The lookup's flag - markFilterSet: Either ``None`` if no mark filtering set is used, or - an integer representing the filtering set to be used for this - lookup. If a mark filtering set is provided, - `LOOKUP_FLAG_USE_MARK_FILTERING_SET` will be set on the lookup's - flags. - """ - - def __init__(self, font, location): - LookupBuilder.__init__(self, font, location, "GPOS", 4) - self.marks = {} # glyphName -> (markClassName, anchor) - self.bases = {} # glyphName -> {markClassName: anchor} - - def equals(self, other): - return ( - LookupBuilder.equals(self, other) - and self.marks == other.marks - and self.bases == other.bases - ) - - def inferGlyphClasses(self): - result = {glyph: 1 for glyph in self.bases} - result.update({glyph: 3 for glyph in self.marks}) - return result - - def build(self): - """Build the lookup. - - Returns: - An ``otTables.Lookup`` object representing the mark-to-base - positioning lookup. - """ - markClasses = self.buildMarkClasses_(self.marks) - marks = {} - for mark, (mc, anchor) in self.marks.items(): - if mc not in markClasses: - raise ValueError( - "Mark class %s not found for mark glyph %s" % (mc, mark) - ) - marks[mark] = (markClasses[mc], anchor) - bases = {} - for glyph, anchors in self.bases.items(): - bases[glyph] = {} - for mc, anchor in anchors.items(): - if mc not in markClasses: - raise ValueError( - "Mark class %s not found for base glyph %s" % (mc, glyph) - ) - bases[glyph][markClasses[mc]] = anchor - subtables = buildMarkBasePos(marks, bases, self.glyphMap) - return self.buildLookup_(subtables) - - -class MarkLigPosBuilder(LookupBuilder): - """Builds a Mark-To-Ligature Positioning (GPOS5) lookup. - - Users are expected to manually add marks and bases to the ``marks`` - and ``ligatures`` attributes after the object has been initialized, e.g.:: - - builder.marks["acute"] = (0, a1) - builder.marks["grave"] = (0, a1) - builder.marks["cedilla"] = (1, a2) - builder.ligatures["f_i"] = [ - { 0: a3, 1: a5 }, # f - { 0: a4, 1: a5 } # i - ] - - Attributes: - font (``fontTools.TTLib.TTFont``): A font object. - location: A string or tuple representing the location in the original - source which produced this lookup. 
- marks: An dictionary mapping a glyph name to a two-element - tuple containing a mark class ID and ``otTables.Anchor`` object. - ligatures: An dictionary mapping a glyph name to an array with one - element for each ligature component. Each array element should be - a dictionary mapping mark class IDs to ``otTables.Anchor`` objects. - lookupflag (int): The lookup's flag - markFilterSet: Either ``None`` if no mark filtering set is used, or - an integer representing the filtering set to be used for this - lookup. If a mark filtering set is provided, - `LOOKUP_FLAG_USE_MARK_FILTERING_SET` will be set on the lookup's - flags. - """ - - def __init__(self, font, location): - LookupBuilder.__init__(self, font, location, "GPOS", 5) - self.marks = {} # glyphName -> (markClassName, anchor) - self.ligatures = {} # glyphName -> [{markClassName: anchor}, ...] - - def equals(self, other): - return ( - LookupBuilder.equals(self, other) - and self.marks == other.marks - and self.ligatures == other.ligatures - ) - - def inferGlyphClasses(self): - result = {glyph: 2 for glyph in self.ligatures} - result.update({glyph: 3 for glyph in self.marks}) - return result - - def build(self): - """Build the lookup. - - Returns: - An ``otTables.Lookup`` object representing the mark-to-ligature - positioning lookup. - """ - markClasses = self.buildMarkClasses_(self.marks) - marks = { - mark: (markClasses[mc], anchor) for mark, (mc, anchor) in self.marks.items() - } - ligs = {} - for lig, components in self.ligatures.items(): - ligs[lig] = [] - for c in components: - ligs[lig].append({markClasses[mc]: a for mc, a in c.items()}) - subtables = buildMarkLigPos(marks, ligs, self.glyphMap) - return self.buildLookup_(subtables) - - -class MarkMarkPosBuilder(LookupBuilder): - """Builds a Mark-To-Mark Positioning (GPOS6) lookup. - - Users are expected to manually add marks and bases to the ``marks`` - and ``baseMarks`` attributes after the object has been initialized, e.g.:: - - builder.marks["acute"] = (0, a1) - builder.marks["grave"] = (0, a1) - builder.marks["cedilla"] = (1, a2) - builder.baseMarks["acute"] = {0: a3} - - Attributes: - font (``fontTools.TTLib.TTFont``): A font object. - location: A string or tuple representing the location in the original - source which produced this lookup. - marks: An dictionary mapping a glyph name to a two-element - tuple containing a mark class ID and ``otTables.Anchor`` object. - baseMarks: An dictionary mapping a glyph name to a dictionary - containing one item: a mark class ID and a ``otTables.Anchor`` object. - lookupflag (int): The lookup's flag - markFilterSet: Either ``None`` if no mark filtering set is used, or - an integer representing the filtering set to be used for this - lookup. If a mark filtering set is provided, - `LOOKUP_FLAG_USE_MARK_FILTERING_SET` will be set on the lookup's - flags. - """ - - def __init__(self, font, location): - LookupBuilder.__init__(self, font, location, "GPOS", 6) - self.marks = {} # glyphName -> (markClassName, anchor) - self.baseMarks = {} # glyphName -> {markClassName: anchor} - - def equals(self, other): - return ( - LookupBuilder.equals(self, other) - and self.marks == other.marks - and self.baseMarks == other.baseMarks - ) - - def inferGlyphClasses(self): - result = {glyph: 3 for glyph in self.baseMarks} - result.update({glyph: 3 for glyph in self.marks}) - return result - - def build(self): - """Build the lookup. - - Returns: - An ``otTables.Lookup`` object representing the mark-to-mark - positioning lookup. 
- """ - markClasses = self.buildMarkClasses_(self.marks) - markClassList = sorted(markClasses.keys(), key=markClasses.get) - marks = { - mark: (markClasses[mc], anchor) for mark, (mc, anchor) in self.marks.items() - } - - st = ot.MarkMarkPos() - st.Format = 1 - st.ClassCount = len(markClasses) - st.Mark1Coverage = buildCoverage(marks, self.glyphMap) - st.Mark2Coverage = buildCoverage(self.baseMarks, self.glyphMap) - st.Mark1Array = buildMarkArray(marks, self.glyphMap) - st.Mark2Array = ot.Mark2Array() - st.Mark2Array.Mark2Count = len(st.Mark2Coverage.glyphs) - st.Mark2Array.Mark2Record = [] - for base in st.Mark2Coverage.glyphs: - anchors = [self.baseMarks[base].get(mc) for mc in markClassList] - st.Mark2Array.Mark2Record.append(buildMark2Record(anchors)) - return self.buildLookup_([st]) - - -class ReverseChainSingleSubstBuilder(LookupBuilder): - """Builds a Reverse Chaining Contextual Single Substitution (GSUB8) lookup. - - Users are expected to manually add substitutions to the ``substitutions`` - attribute after the object has been initialized, e.g.:: - - # reversesub [a e n] d' by d.alt; - prefix = [ ["a", "e", "n"] ] - suffix = [] - mapping = { "d": "d.alt" } - builder.substitutions.append( (prefix, suffix, mapping) ) - - Attributes: - font (``fontTools.TTLib.TTFont``): A font object. - location: A string or tuple representing the location in the original - source which produced this lookup. - substitutions: A three-element tuple consisting of a prefix sequence, - a suffix sequence, and a dictionary of single substitutions. - lookupflag (int): The lookup's flag - markFilterSet: Either ``None`` if no mark filtering set is used, or - an integer representing the filtering set to be used for this - lookup. If a mark filtering set is provided, - `LOOKUP_FLAG_USE_MARK_FILTERING_SET` will be set on the lookup's - flags. - """ - - def __init__(self, font, location): - LookupBuilder.__init__(self, font, location, "GSUB", 8) - self.rules = [] # (prefix, suffix, mapping) - - def equals(self, other): - return LookupBuilder.equals(self, other) and self.rules == other.rules - - def build(self): - """Build the lookup. - - Returns: - An ``otTables.Lookup`` object representing the chained - contextual substitution lookup. - """ - subtables = [] - for prefix, suffix, mapping in self.rules: - st = ot.ReverseChainSingleSubst() - st.Format = 1 - self.setBacktrackCoverage_(prefix, st) - self.setLookAheadCoverage_(suffix, st) - st.Coverage = buildCoverage(mapping.keys(), self.glyphMap) - st.GlyphCount = len(mapping) - st.Substitute = [mapping[g] for g in st.Coverage.glyphs] - subtables.append(st) - return self.buildLookup_(subtables) - - def add_subtable_break(self, location): - # Nothing to do here, each substitution is in its own subtable. - pass - - -class SingleSubstBuilder(LookupBuilder): - """Builds a Single Substitution (GSUB1) lookup. - - Users are expected to manually add substitutions to the ``mapping`` - attribute after the object has been initialized, e.g.:: - - # sub x by y; - builder.mapping["x"] = "y" - - Attributes: - font (``fontTools.TTLib.TTFont``): A font object. - location: A string or tuple representing the location in the original - source which produced this lookup. - mapping: A dictionary mapping a single glyph name to another glyph name. - lookupflag (int): The lookup's flag - markFilterSet: Either ``None`` if no mark filtering set is used, or - an integer representing the filtering set to be used for this - lookup. 
If a mark filtering set is provided, - `LOOKUP_FLAG_USE_MARK_FILTERING_SET` will be set on the lookup's - flags. - """ - - def __init__(self, font, location): - LookupBuilder.__init__(self, font, location, "GSUB", 1) - self.mapping = OrderedDict() - - def equals(self, other): - return LookupBuilder.equals(self, other) and self.mapping == other.mapping - - def build(self): - """Build the lookup. - - Returns: - An ``otTables.Lookup`` object representing the multiple - substitution lookup. - """ - subtables = self.build_subst_subtables(self.mapping, buildSingleSubstSubtable) - return self.buildLookup_(subtables) - - def getAlternateGlyphs(self): - return {glyph: set([repl]) for glyph, repl in self.mapping.items()} - - def add_subtable_break(self, location): - self.mapping[(self.SUBTABLE_BREAK_, location)] = self.SUBTABLE_BREAK_ - - -class ClassPairPosSubtableBuilder(object): - """Builds class-based Pair Positioning (GPOS2 format 2) subtables. - - Note that this does *not* build a GPOS2 ``otTables.Lookup`` directly, - but builds a list of ``otTables.PairPos`` subtables. It is used by the - :class:`PairPosBuilder` below. - - Attributes: - builder (PairPosBuilder): A pair positioning lookup builder. - """ - - def __init__(self, builder): - self.builder_ = builder - self.classDef1_, self.classDef2_ = None, None - self.values_ = {} # (glyphclass1, glyphclass2) --> (value1, value2) - self.forceSubtableBreak_ = False - self.subtables_ = [] - - def addPair(self, gc1, value1, gc2, value2): - """Add a pair positioning rule. - - Args: - gc1: A set of glyph names for the "left" glyph - value1: An ``otTables.ValueRecord`` object for the left glyph's - positioning. - gc2: A set of glyph names for the "right" glyph - value2: An ``otTables.ValueRecord`` object for the right glyph's - positioning. - """ - mergeable = ( - not self.forceSubtableBreak_ - and self.classDef1_ is not None - and self.classDef1_.canAdd(gc1) - and self.classDef2_ is not None - and self.classDef2_.canAdd(gc2) - ) - if not mergeable: - self.flush_() - self.classDef1_ = ClassDefBuilder(useClass0=True) - self.classDef2_ = ClassDefBuilder(useClass0=False) - self.values_ = {} - self.classDef1_.add(gc1) - self.classDef2_.add(gc2) - self.values_[(gc1, gc2)] = (value1, value2) - - def addSubtableBreak(self): - """Add an explicit subtable break at this point.""" - self.forceSubtableBreak_ = True - - def subtables(self): - """Return the list of ``otTables.PairPos`` subtables constructed.""" - self.flush_() - return self.subtables_ - - def flush_(self): - if self.classDef1_ is None or self.classDef2_ is None: - return - st = buildPairPosClassesSubtable(self.values_, self.builder_.glyphMap) - if st.Coverage is None: - return - self.subtables_.append(st) - self.forceSubtableBreak_ = False - - -class PairPosBuilder(LookupBuilder): - """Builds a Pair Positioning (GPOS2) lookup. - - Attributes: - font (``fontTools.TTLib.TTFont``): A font object. - location: A string or tuple representing the location in the original - source which produced this lookup. - pairs: An array of class-based pair positioning tuples. Usually - manipulated with the :meth:`addClassPair` method below. - glyphPairs: A dictionary mapping a tuple of glyph names to a tuple - of ``otTables.ValueRecord`` objects. Usually manipulated with the - :meth:`addGlyphPair` method below. - lookupflag (int): The lookup's flag - markFilterSet: Either ``None`` if no mark filtering set is used, or - an integer representing the filtering set to be used for this - lookup. 
If a mark filtering set is provided, - `LOOKUP_FLAG_USE_MARK_FILTERING_SET` will be set on the lookup's - flags. - """ - - def __init__(self, font, location): - LookupBuilder.__init__(self, font, location, "GPOS", 2) - self.pairs = [] # [(gc1, value1, gc2, value2)*] - self.glyphPairs = {} # (glyph1, glyph2) --> (value1, value2) - self.locations = {} # (gc1, gc2) --> (filepath, line, column) - - def addClassPair(self, location, glyphclass1, value1, glyphclass2, value2): - """Add a class pair positioning rule to the current lookup. - - Args: - location: A string or tuple representing the location in the - original source which produced this rule. Unused. - glyphclass1: A set of glyph names for the "left" glyph in the pair. - value1: A ``otTables.ValueRecord`` for positioning the left glyph. - glyphclass2: A set of glyph names for the "right" glyph in the pair. - value2: A ``otTables.ValueRecord`` for positioning the right glyph. - """ - self.pairs.append((glyphclass1, value1, glyphclass2, value2)) - - def addGlyphPair(self, location, glyph1, value1, glyph2, value2): - """Add a glyph pair positioning rule to the current lookup. - - Args: - location: A string or tuple representing the location in the - original source which produced this rule. - glyph1: A glyph name for the "left" glyph in the pair. - value1: A ``otTables.ValueRecord`` for positioning the left glyph. - glyph2: A glyph name for the "right" glyph in the pair. - value2: A ``otTables.ValueRecord`` for positioning the right glyph. - """ - key = (glyph1, glyph2) - oldValue = self.glyphPairs.get(key, None) - if oldValue is not None: - # the Feature File spec explicitly allows specific pairs generated - # by an 'enum' rule to be overridden by preceding single pairs - otherLoc = self.locations[key] - log.debug( - "Already defined position for pair %s %s at %s; " - "choosing the first value", - glyph1, - glyph2, - otherLoc, - ) - else: - self.glyphPairs[key] = (value1, value2) - self.locations[key] = location - - def add_subtable_break(self, location): - self.pairs.append( - ( - self.SUBTABLE_BREAK_, - self.SUBTABLE_BREAK_, - self.SUBTABLE_BREAK_, - self.SUBTABLE_BREAK_, - ) - ) - - def equals(self, other): - return ( - LookupBuilder.equals(self, other) - and self.glyphPairs == other.glyphPairs - and self.pairs == other.pairs - ) - - def build(self): - """Build the lookup. - - Returns: - An ``otTables.Lookup`` object representing the pair positioning - lookup. - """ - builders = {} - builder = ClassPairPosSubtableBuilder(self) - for glyphclass1, value1, glyphclass2, value2 in self.pairs: - if glyphclass1 is self.SUBTABLE_BREAK_: - builder.addSubtableBreak() - continue - builder.addPair(glyphclass1, value1, glyphclass2, value2) - subtables = [] - if self.glyphPairs: - subtables.extend(buildPairPosGlyphs(self.glyphPairs, self.glyphMap)) - subtables.extend(builder.subtables()) - lookup = self.buildLookup_(subtables) - - # Compact the lookup - # This is a good moment to do it because the compaction should create - # smaller subtables, which may prevent overflows from happening. - # Keep reading the value from the ENV until ufo2ft switches to the config system - level = self.font.cfg.get( - "fontTools.otlLib.optimize.gpos:COMPRESSION_LEVEL", - default=_compression_level_from_env(), - ) - if level != 0: - log.info("Compacting GPOS...") - compact_lookup(self.font, level, lookup) - - return lookup - - -class SinglePosBuilder(LookupBuilder): - """Builds a Single Positioning (GPOS1) lookup. 
- - Attributes: - font (``fontTools.TTLib.TTFont``): A font object. - location: A string or tuple representing the location in the original - source which produced this lookup. - mapping: A dictionary mapping a glyph name to a ``otTables.ValueRecord`` - objects. Usually manipulated with the :meth:`add_pos` method below. - lookupflag (int): The lookup's flag - markFilterSet: Either ``None`` if no mark filtering set is used, or - an integer representing the filtering set to be used for this - lookup. If a mark filtering set is provided, - `LOOKUP_FLAG_USE_MARK_FILTERING_SET` will be set on the lookup's - flags. - """ - - def __init__(self, font, location): - LookupBuilder.__init__(self, font, location, "GPOS", 1) - self.locations = {} # glyph -> (filename, line, column) - self.mapping = {} # glyph -> ot.ValueRecord - - def add_pos(self, location, glyph, otValueRecord): - """Add a single positioning rule. - - Args: - location: A string or tuple representing the location in the - original source which produced this lookup. - glyph: A glyph name. - otValueRection: A ``otTables.ValueRecord`` used to position the - glyph. - """ - if not self.can_add(glyph, otValueRecord): - otherLoc = self.locations[glyph] - raise OpenTypeLibError( - 'Already defined different position for glyph "%s" at %s' - % (glyph, otherLoc), - location, - ) - if otValueRecord: - self.mapping[glyph] = otValueRecord - self.locations[glyph] = location - - def can_add(self, glyph, value): - assert isinstance(value, ValueRecord) - curValue = self.mapping.get(glyph) - return curValue is None or curValue == value - - def equals(self, other): - return LookupBuilder.equals(self, other) and self.mapping == other.mapping - - def build(self): - """Build the lookup. - - Returns: - An ``otTables.Lookup`` object representing the single positioning - lookup. - """ - subtables = buildSinglePos(self.mapping, self.glyphMap) - return self.buildLookup_(subtables) - - -# GSUB - - -def buildSingleSubstSubtable(mapping): - """Builds a single substitution (GSUB1) subtable. - - Note that if you are implementing a layout compiler, you may find it more - flexible to use - :py:class:`fontTools.otlLib.lookupBuilders.SingleSubstBuilder` instead. - - Args: - mapping: A dictionary mapping input glyph names to output glyph names. - - Returns: - An ``otTables.SingleSubst`` object, or ``None`` if the mapping dictionary - is empty. - """ - if not mapping: - return None - self = ot.SingleSubst() - self.mapping = dict(mapping) - return self - - -def buildMultipleSubstSubtable(mapping): - """Builds a multiple substitution (GSUB2) subtable. - - Note that if you are implementing a layout compiler, you may find it more - flexible to use - :py:class:`fontTools.otlLib.lookupBuilders.MultipleSubstBuilder` instead. - - Example:: - - # sub uni06C0 by uni06D5.fina hamza.above - # sub uni06C2 by uni06C1.fina hamza.above; - - subtable = buildMultipleSubstSubtable({ - "uni06C0": [ "uni06D5.fina", "hamza.above"], - "uni06C2": [ "uni06D1.fina", "hamza.above"] - }) - - Args: - mapping: A dictionary mapping input glyph names to a list of output - glyph names. - - Returns: - An ``otTables.MultipleSubst`` object or ``None`` if the mapping dictionary - is empty. - """ - if not mapping: - return None - self = ot.MultipleSubst() - self.mapping = dict(mapping) - return self - - -def buildAlternateSubstSubtable(mapping): - """Builds an alternate substitution (GSUB3) subtable. 
- - Note that if you are implementing a layout compiler, you may find it more - flexible to use - :py:class:`fontTools.otlLib.lookupBuilders.AlternateSubstBuilder` instead. - - Args: - mapping: A dictionary mapping input glyph names to a list of output - glyph names. - - Returns: - An ``otTables.AlternateSubst`` object or ``None`` if the mapping dictionary - is empty. - """ - if not mapping: - return None - self = ot.AlternateSubst() - self.alternates = dict(mapping) - return self - - -def _getLigatureKey(components): - # Computes a key for ordering ligatures in a GSUB Type-4 lookup. - - # When building the OpenType lookup, we need to make sure that - # the longest sequence of components is listed first, so we - # use the negative length as the primary key for sorting. - # To make buildLigatureSubstSubtable() deterministic, we use the - # component sequence as the secondary key. - - # For example, this will sort (f,f,f) < (f,f,i) < (f,f) < (f,i) < (f,l). - return (-len(components), components) - - -def buildLigatureSubstSubtable(mapping): - """Builds a ligature substitution (GSUB4) subtable. - - Note that if you are implementing a layout compiler, you may find it more - flexible to use - :py:class:`fontTools.otlLib.lookupBuilders.LigatureSubstBuilder` instead. - - Example:: - - # sub f f i by f_f_i; - # sub f i by f_i; - - subtable = buildLigatureSubstSubtable({ - ("f", "f", "i"): "f_f_i", - ("f", "i"): "f_i", - }) - - Args: - mapping: A dictionary mapping tuples of glyph names to output - glyph names. - - Returns: - An ``otTables.LigatureSubst`` object or ``None`` if the mapping dictionary - is empty. - """ - - if not mapping: - return None - self = ot.LigatureSubst() - # The following single line can replace the rest of this function - # with fontTools >= 3.1: - # self.ligatures = dict(mapping) - self.ligatures = {} - for components in sorted(mapping.keys(), key=_getLigatureKey): - ligature = ot.Ligature() - ligature.Component = components[1:] - ligature.CompCount = len(ligature.Component) + 1 - ligature.LigGlyph = mapping[components] - firstGlyph = components[0] - self.ligatures.setdefault(firstGlyph, []).append(ligature) - return self - - -# GPOS - - -def buildAnchor(x, y, point=None, deviceX=None, deviceY=None): - """Builds an Anchor table. - - This determines the appropriate anchor format based on the passed parameters. - - Args: - x (int): X coordinate. - y (int): Y coordinate. - point (int): Index of glyph contour point, if provided. - deviceX (``otTables.Device``): X coordinate device table, if provided. - deviceY (``otTables.Device``): Y coordinate device table, if provided. - - Returns: - An ``otTables.Anchor`` object. - """ - self = ot.Anchor() - self.XCoordinate, self.YCoordinate = x, y - self.Format = 1 - if point is not None: - self.AnchorPoint = point - self.Format = 2 - if deviceX is not None or deviceY is not None: - assert ( - self.Format == 1 - ), "Either point, or both of deviceX/deviceY, must be None." - self.XDeviceTable = deviceX - self.YDeviceTable = deviceY - self.Format = 3 - return self - - -def buildBaseArray(bases, numMarkClasses, glyphMap): - """Builds a base array record. - - As part of building mark-to-base positioning rules, you will need to define - a ``BaseArray`` record, which "defines for each base glyph an array of - anchors, one for each mark class." This function builds the base array - subtable. 
- - Example:: - - bases = {"a": {0: a3, 1: a5}, "b": {0: a4, 1: a5}} - basearray = buildBaseArray(bases, 2, font.getReverseGlyphMap()) - - Args: - bases (dict): A dictionary mapping anchors to glyphs; the keys being - glyph names, and the values being dictionaries mapping mark class ID - to the appropriate ``otTables.Anchor`` object used for attaching marks - of that class. - numMarkClasses (int): The total number of mark classes for which anchors - are defined. - glyphMap: a glyph name to ID map, typically returned from - ``font.getReverseGlyphMap()``. - - Returns: - An ``otTables.BaseArray`` object. - """ - self = ot.BaseArray() - self.BaseRecord = [] - for base in sorted(bases, key=glyphMap.__getitem__): - b = bases[base] - anchors = [b.get(markClass) for markClass in range(numMarkClasses)] - self.BaseRecord.append(buildBaseRecord(anchors)) - self.BaseCount = len(self.BaseRecord) - return self - - -def buildBaseRecord(anchors): - # [otTables.Anchor, otTables.Anchor, ...] --> otTables.BaseRecord - self = ot.BaseRecord() - self.BaseAnchor = anchors - return self - - -def buildComponentRecord(anchors): - """Builds a component record. - - As part of building mark-to-ligature positioning rules, you will need to - define ``ComponentRecord`` objects, which contain "an array of offsets... - to the Anchor tables that define all the attachment points used to attach - marks to the component." This function builds the component record. - - Args: - anchors: A list of ``otTables.Anchor`` objects or ``None``. - - Returns: - A ``otTables.ComponentRecord`` object or ``None`` if no anchors are - supplied. - """ - if not anchors: - return None - self = ot.ComponentRecord() - self.LigatureAnchor = anchors - return self - - -def buildCursivePosSubtable(attach, glyphMap): - """Builds a cursive positioning (GPOS3) subtable. - - Cursive positioning lookups are made up of a coverage table of glyphs, - and a set of ``EntryExitRecord`` records containing the anchors for - each glyph. This function builds the cursive positioning subtable. - - Example:: - - subtable = buildCursivePosSubtable({ - "AlifIni": (None, buildAnchor(0, 50)), - "BehMed": (buildAnchor(500,250), buildAnchor(0,50)), - # ... - }, font.getReverseGlyphMap()) - - Args: - attach (dict): A mapping between glyph names and a tuple of two - ``otTables.Anchor`` objects representing entry and exit anchors. - glyphMap: a glyph name to ID map, typically returned from - ``font.getReverseGlyphMap()``. - - Returns: - An ``otTables.CursivePos`` object, or ``None`` if the attachment - dictionary was empty. - """ - if not attach: - return None - self = ot.CursivePos() - self.Format = 1 - self.Coverage = buildCoverage(attach.keys(), glyphMap) - self.EntryExitRecord = [] - for glyph in self.Coverage.glyphs: - entryAnchor, exitAnchor = attach[glyph] - rec = ot.EntryExitRecord() - rec.EntryAnchor = entryAnchor - rec.ExitAnchor = exitAnchor - self.EntryExitRecord.append(rec) - self.EntryExitCount = len(self.EntryExitRecord) - return self - - -def buildDevice(deltas): - """Builds a Device record as part of a ValueRecord or Anchor. - - Device tables specify size-specific adjustments to value records - and anchors to reflect changes based on the resolution of the output. - For example, one could specify that an anchor's Y position should be - increased by 1 pixel when displayed at 8 pixels per em. This routine - builds device records. - - Args: - deltas: A dictionary mapping pixels-per-em sizes to the delta - adjustment in pixels when the font is displayed at that size. 
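To show the cursive-attachment builder in a self-contained way, the sketch below uses a plain dict in place of ``font.getReverseGlyphMap()``; the glyph names and anchor coordinates are taken from the docstring example above.

    from fontTools.otlLib.builder import buildCursivePosSubtable, buildAnchor

    glyph_map = {"AlifIni": 1, "BehMed": 2}  # stand-in for font.getReverseGlyphMap()

    cursive = buildCursivePosSubtable({
        "AlifIni": (None, buildAnchor(0, 50)),                  # exit anchor only
        "BehMed": (buildAnchor(500, 250), buildAnchor(0, 50)),  # entry and exit
    }, glyph_map)
    assert cursive.Format == 1 and cursive.EntryExitCount == 2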
- - Returns: - An ``otTables.Device`` object if any deltas were supplied, or - ``None`` otherwise. - """ - if not deltas: - return None - self = ot.Device() - keys = deltas.keys() - self.StartSize = startSize = min(keys) - self.EndSize = endSize = max(keys) - assert 0 <= startSize <= endSize - self.DeltaValue = deltaValues = [ - deltas.get(size, 0) for size in range(startSize, endSize + 1) - ] - maxDelta = max(deltaValues) - minDelta = min(deltaValues) - assert minDelta > -129 and maxDelta < 128 - if minDelta > -3 and maxDelta < 2: - self.DeltaFormat = 1 - elif minDelta > -9 and maxDelta < 8: - self.DeltaFormat = 2 - else: - self.DeltaFormat = 3 - return self - - -def buildLigatureArray(ligs, numMarkClasses, glyphMap): - """Builds a LigatureArray subtable. - - As part of building a mark-to-ligature lookup, you will need to define - the set of anchors (for each mark class) on each component of the ligature - where marks can be attached. For example, for an Arabic divine name ligature - (lam lam heh), you may want to specify mark attachment positioning for - superior marks (fatha, etc.) and inferior marks (kasra, etc.) on each glyph - of the ligature. This routine builds the ligature array record. - - Example:: - - buildLigatureArray({ - "lam-lam-heh": [ - { 0: superiorAnchor1, 1: inferiorAnchor1 }, # attach points for lam1 - { 0: superiorAnchor2, 1: inferiorAnchor2 }, # attach points for lam2 - { 0: superiorAnchor3, 1: inferiorAnchor3 }, # attach points for heh - ] - }, 2, font.getReverseGlyphMap()) - - Args: - ligs (dict): A mapping of ligature names to an array of dictionaries: - for each component glyph in the ligature, an dictionary mapping - mark class IDs to anchors. - numMarkClasses (int): The number of mark classes. - glyphMap: a glyph name to ID map, typically returned from - ``font.getReverseGlyphMap()``. - - Returns: - An ``otTables.LigatureArray`` object if deltas were supplied. - """ - self = ot.LigatureArray() - self.LigatureAttach = [] - for lig in sorted(ligs, key=glyphMap.__getitem__): - anchors = [] - for component in ligs[lig]: - anchors.append([component.get(mc) for mc in range(numMarkClasses)]) - self.LigatureAttach.append(buildLigatureAttach(anchors)) - self.LigatureCount = len(self.LigatureAttach) - return self - - -def buildLigatureAttach(components): - # [[Anchor, Anchor], [Anchor, Anchor, Anchor]] --> LigatureAttach - self = ot.LigatureAttach() - self.ComponentRecord = [buildComponentRecord(c) for c in components] - self.ComponentCount = len(self.ComponentRecord) - return self - - -def buildMarkArray(marks, glyphMap): - """Builds a mark array subtable. - - As part of building mark-to-* positioning rules, you will need to define - a MarkArray subtable, which "defines the class and the anchor point - for a mark glyph." This function builds the mark array subtable. - - Example:: - - mark = { - "acute": (0, buildAnchor(300,712)), - # ... - } - markarray = buildMarkArray(marks, font.getReverseGlyphMap()) - - Args: - marks (dict): A dictionary mapping anchors to glyphs; the keys being - glyph names, and the values being a tuple of mark class number and - an ``otTables.Anchor`` object representing the mark's attachment - point. - glyphMap: a glyph name to ID map, typically returned from - ``font.getReverseGlyphMap()``. - - Returns: - An ``otTables.MarkArray`` object. 
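A short sketch of how the device-table builder picks its DeltaFormat and how a device table upgrades an anchor to Format 3; the ppem sizes and deltas are invented for illustration.

    from fontTools.otlLib.builder import buildDevice, buildAnchor

    # Nudge the position up by one pixel at 11 and 12 ppem; deltas in the
    # range -2..+1 fit DeltaFormat 1, the most compact encoding.
    dev = buildDevice({11: 1, 12: 1})
    assert (dev.StartSize, dev.EndSize, dev.DeltaFormat) == (11, 12, 1)

    # Supplying a device table for either axis produces an Anchor Format 3.
    anchor = buildAnchor(300, 712, deviceY=dev)
    assert anchor.Format == 3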
- """ - self = ot.MarkArray() - self.MarkRecord = [] - for mark in sorted(marks.keys(), key=glyphMap.__getitem__): - markClass, anchor = marks[mark] - markrec = buildMarkRecord(markClass, anchor) - self.MarkRecord.append(markrec) - self.MarkCount = len(self.MarkRecord) - return self - - -def buildMarkBasePos(marks, bases, glyphMap): - """Build a list of MarkBasePos (GPOS4) subtables. - - This routine turns a set of marks and bases into a list of mark-to-base - positioning subtables. Currently the list will contain a single subtable - containing all marks and bases, although at a later date it may return the - optimal list of subtables subsetting the marks and bases into groups which - save space. See :func:`buildMarkBasePosSubtable` below. - - Note that if you are implementing a layout compiler, you may find it more - flexible to use - :py:class:`fontTools.otlLib.lookupBuilders.MarkBasePosBuilder` instead. - - Example:: - - # a1, a2, a3, a4, a5 = buildAnchor(500, 100), ... - - marks = {"acute": (0, a1), "grave": (0, a1), "cedilla": (1, a2)} - bases = {"a": {0: a3, 1: a5}, "b": {0: a4, 1: a5}} - markbaseposes = buildMarkBasePos(marks, bases, font.getReverseGlyphMap()) - - Args: - marks (dict): A dictionary mapping anchors to glyphs; the keys being - glyph names, and the values being a tuple of mark class number and - an ``otTables.Anchor`` object representing the mark's attachment - point. (See :func:`buildMarkArray`.) - bases (dict): A dictionary mapping anchors to glyphs; the keys being - glyph names, and the values being dictionaries mapping mark class ID - to the appropriate ``otTables.Anchor`` object used for attaching marks - of that class. (See :func:`buildBaseArray`.) - glyphMap: a glyph name to ID map, typically returned from - ``font.getReverseGlyphMap()``. - - Returns: - A list of ``otTables.MarkBasePos`` objects. - """ - # TODO: Consider emitting multiple subtables to save space. - # Partition the marks and bases into disjoint subsets, so that - # MarkBasePos rules would only access glyphs from a single - # subset. This would likely lead to smaller mark/base - # matrices, so we might be able to omit many of the empty - # anchor tables that we currently produce. Of course, this - # would only work if the MarkBasePos rules of real-world fonts - # allow partitioning into multiple subsets. We should find out - # whether this is the case; if so, implement the optimization. - # On the other hand, a very large number of subtables could - # slow down layout engines; so this would need profiling. - return [buildMarkBasePosSubtable(marks, bases, glyphMap)] - - -def buildMarkBasePosSubtable(marks, bases, glyphMap): - """Build a single MarkBasePos (GPOS4) subtable. - - This builds a mark-to-base lookup subtable containing all of the referenced - marks and bases. See :func:`buildMarkBasePos`. - - Args: - marks (dict): A dictionary mapping anchors to glyphs; the keys being - glyph names, and the values being a tuple of mark class number and - an ``otTables.Anchor`` object representing the mark's attachment - point. (See :func:`buildMarkArray`.) - bases (dict): A dictionary mapping anchors to glyphs; the keys being - glyph names, and the values being dictionaries mapping mark class ID - to the appropriate ``otTables.Anchor`` object used for attaching marks - of that class. (See :func:`buildBaseArray`.) - glyphMap: a glyph name to ID map, typically returned from - ``font.getReverseGlyphMap()``. - - Returns: - A ``otTables.MarkBasePos`` object. 
- """ - self = ot.MarkBasePos() - self.Format = 1 - self.MarkCoverage = buildCoverage(marks, glyphMap) - self.MarkArray = buildMarkArray(marks, glyphMap) - self.ClassCount = max([mc for mc, _ in marks.values()]) + 1 - self.BaseCoverage = buildCoverage(bases, glyphMap) - self.BaseArray = buildBaseArray(bases, self.ClassCount, glyphMap) - return self - - -def buildMarkLigPos(marks, ligs, glyphMap): - """Build a list of MarkLigPos (GPOS5) subtables. - - This routine turns a set of marks and ligatures into a list of mark-to-ligature - positioning subtables. Currently the list will contain a single subtable - containing all marks and ligatures, although at a later date it may return - the optimal list of subtables subsetting the marks and ligatures into groups - which save space. See :func:`buildMarkLigPosSubtable` below. - - Note that if you are implementing a layout compiler, you may find it more - flexible to use - :py:class:`fontTools.otlLib.lookupBuilders.MarkLigPosBuilder` instead. - - Example:: - - # a1, a2, a3, a4, a5 = buildAnchor(500, 100), ... - marks = { - "acute": (0, a1), - "grave": (0, a1), - "cedilla": (1, a2) - } - ligs = { - "f_i": [ - { 0: a3, 1: a5 }, # f - { 0: a4, 1: a5 } # i - ], - # "c_t": [{...}, {...}] - } - markligposes = buildMarkLigPos(marks, ligs, - font.getReverseGlyphMap()) - - Args: - marks (dict): A dictionary mapping anchors to glyphs; the keys being - glyph names, and the values being a tuple of mark class number and - an ``otTables.Anchor`` object representing the mark's attachment - point. (See :func:`buildMarkArray`.) - ligs (dict): A mapping of ligature names to an array of dictionaries: - for each component glyph in the ligature, an dictionary mapping - mark class IDs to anchors. (See :func:`buildLigatureArray`.) - glyphMap: a glyph name to ID map, typically returned from - ``font.getReverseGlyphMap()``. - - Returns: - A list of ``otTables.MarkLigPos`` objects. - - """ - # TODO: Consider splitting into multiple subtables to save space, - # as with MarkBasePos, this would be a trade-off that would need - # profiling. And, depending on how typical fonts are structured, - # it might not be worth doing at all. - return [buildMarkLigPosSubtable(marks, ligs, glyphMap)] - - -def buildMarkLigPosSubtable(marks, ligs, glyphMap): - """Build a single MarkLigPos (GPOS5) subtable. - - This builds a mark-to-base lookup subtable containing all of the referenced - marks and bases. See :func:`buildMarkLigPos`. - - Args: - marks (dict): A dictionary mapping anchors to glyphs; the keys being - glyph names, and the values being a tuple of mark class number and - an ``otTables.Anchor`` object representing the mark's attachment - point. (See :func:`buildMarkArray`.) - ligs (dict): A mapping of ligature names to an array of dictionaries: - for each component glyph in the ligature, an dictionary mapping - mark class IDs to anchors. (See :func:`buildLigatureArray`.) - glyphMap: a glyph name to ID map, typically returned from - ``font.getReverseGlyphMap()``. - - Returns: - A ``otTables.MarkLigPos`` object. 
- """ - self = ot.MarkLigPos() - self.Format = 1 - self.MarkCoverage = buildCoverage(marks, glyphMap) - self.MarkArray = buildMarkArray(marks, glyphMap) - self.ClassCount = max([mc for mc, _ in marks.values()]) + 1 - self.LigatureCoverage = buildCoverage(ligs, glyphMap) - self.LigatureArray = buildLigatureArray(ligs, self.ClassCount, glyphMap) - return self - - -def buildMarkRecord(classID, anchor): - assert isinstance(classID, int) - assert isinstance(anchor, ot.Anchor) - self = ot.MarkRecord() - self.Class = classID - self.MarkAnchor = anchor - return self - - -def buildMark2Record(anchors): - # [otTables.Anchor, otTables.Anchor, ...] --> otTables.Mark2Record - self = ot.Mark2Record() - self.Mark2Anchor = anchors - return self - - -def _getValueFormat(f, values, i): - # Helper for buildPairPos{Glyphs|Classes}Subtable. - if f is not None: - return f - mask = 0 - for value in values: - if value is not None and value[i] is not None: - mask |= value[i].getFormat() - return mask - - -def buildPairPosClassesSubtable(pairs, glyphMap, valueFormat1=None, valueFormat2=None): - """Builds a class pair adjustment (GPOS2 format 2) subtable. - - Kerning tables are generally expressed as pair positioning tables using - class-based pair adjustments. This routine builds format 2 PairPos - subtables. - - Note that if you are implementing a layout compiler, you may find it more - flexible to use - :py:class:`fontTools.otlLib.lookupBuilders.ClassPairPosSubtableBuilder` - instead, as this takes care of ensuring that the supplied pairs can be - formed into non-overlapping classes and emitting individual subtables - whenever the non-overlapping requirement means that a new subtable is - required. - - Example:: - - pairs = {} - - pairs[( - [ "K", "X" ], - [ "W", "V" ] - )] = ( buildValue(xAdvance=+5), buildValue() ) - # pairs[(... , ...)] = (..., ...) - - pairpos = buildPairPosClassesSubtable(pairs, font.getReverseGlyphMap()) - - Args: - pairs (dict): Pair positioning data; the keys being a two-element - tuple of lists of glyphnames, and the values being a two-element - tuple of ``otTables.ValueRecord`` objects. - glyphMap: a glyph name to ID map, typically returned from - ``font.getReverseGlyphMap()``. - valueFormat1: Force the "left" value records to the given format. - valueFormat2: Force the "right" value records to the given format. - - Returns: - A ``otTables.PairPos`` object. 
- """ - coverage = set() - classDef1 = ClassDefBuilder(useClass0=True) - classDef2 = ClassDefBuilder(useClass0=False) - for gc1, gc2 in sorted(pairs): - coverage.update(gc1) - classDef1.add(gc1) - classDef2.add(gc2) - self = ot.PairPos() - self.Format = 2 - valueFormat1 = self.ValueFormat1 = _getValueFormat(valueFormat1, pairs.values(), 0) - valueFormat2 = self.ValueFormat2 = _getValueFormat(valueFormat2, pairs.values(), 1) - self.Coverage = buildCoverage(coverage, glyphMap) - self.ClassDef1 = classDef1.build() - self.ClassDef2 = classDef2.build() - classes1 = classDef1.classes() - classes2 = classDef2.classes() - self.Class1Record = [] - for c1 in classes1: - rec1 = ot.Class1Record() - rec1.Class2Record = [] - self.Class1Record.append(rec1) - for c2 in classes2: - rec2 = ot.Class2Record() - val1, val2 = pairs.get((c1, c2), (None, None)) - rec2.Value1 = ( - ValueRecord(src=val1, valueFormat=valueFormat1) - if valueFormat1 - else None - ) - rec2.Value2 = ( - ValueRecord(src=val2, valueFormat=valueFormat2) - if valueFormat2 - else None - ) - rec1.Class2Record.append(rec2) - self.Class1Count = len(self.Class1Record) - self.Class2Count = len(classes2) - return self - - -def buildPairPosGlyphs(pairs, glyphMap): - """Builds a list of glyph-based pair adjustment (GPOS2 format 1) subtables. - - This organises a list of pair positioning adjustments into subtables based - on common value record formats. - - Note that if you are implementing a layout compiler, you may find it more - flexible to use - :py:class:`fontTools.otlLib.lookupBuilders.PairPosBuilder` - instead. - - Example:: - - pairs = { - ("K", "W"): ( buildValue(xAdvance=+5), buildValue() ), - ("K", "V"): ( buildValue(xAdvance=+5), buildValue() ), - # ... - } - - subtables = buildPairPosGlyphs(pairs, font.getReverseGlyphMap()) - - Args: - pairs (dict): Pair positioning data; the keys being a two-element - tuple of glyphnames, and the values being a two-element - tuple of ``otTables.ValueRecord`` objects. - glyphMap: a glyph name to ID map, typically returned from - ``font.getReverseGlyphMap()``. - - Returns: - A list of ``otTables.PairPos`` objects. - """ - - p = {} # (formatA, formatB) --> {(glyphA, glyphB): (valA, valB)} - for (glyphA, glyphB), (valA, valB) in pairs.items(): - formatA = valA.getFormat() if valA is not None else 0 - formatB = valB.getFormat() if valB is not None else 0 - pos = p.setdefault((formatA, formatB), {}) - pos[(glyphA, glyphB)] = (valA, valB) - return [ - buildPairPosGlyphsSubtable(pos, glyphMap, formatA, formatB) - for ((formatA, formatB), pos) in sorted(p.items()) - ] - - -def buildPairPosGlyphsSubtable(pairs, glyphMap, valueFormat1=None, valueFormat2=None): - """Builds a single glyph-based pair adjustment (GPOS2 format 1) subtable. - - This builds a PairPos subtable from a dictionary of glyph pairs and - their positioning adjustments. See also :func:`buildPairPosGlyphs`. - - Note that if you are implementing a layout compiler, you may find it more - flexible to use - :py:class:`fontTools.otlLib.lookupBuilders.PairPosBuilder` instead. - - Example:: - - pairs = { - ("K", "W"): ( buildValue(xAdvance=+5), buildValue() ), - ("K", "V"): ( buildValue(xAdvance=+5), buildValue() ), - # ... - } - - pairpos = buildPairPosGlyphsSubtable(pairs, font.getReverseGlyphMap()) - - Args: - pairs (dict): Pair positioning data; the keys being a two-element - tuple of glyphnames, and the values being a two-element - tuple of ``otTables.ValueRecord`` objects. 
- glyphMap: a glyph name to ID map, typically returned from - ``font.getReverseGlyphMap()``. - valueFormat1: Force the "left" value records to the given format. - valueFormat2: Force the "right" value records to the given format. - - Returns: - A ``otTables.PairPos`` object. - """ - self = ot.PairPos() - self.Format = 1 - valueFormat1 = self.ValueFormat1 = _getValueFormat(valueFormat1, pairs.values(), 0) - valueFormat2 = self.ValueFormat2 = _getValueFormat(valueFormat2, pairs.values(), 1) - p = {} - for (glyphA, glyphB), (valA, valB) in pairs.items(): - p.setdefault(glyphA, []).append((glyphB, valA, valB)) - self.Coverage = buildCoverage({g for g, _ in pairs.keys()}, glyphMap) - self.PairSet = [] - for glyph in self.Coverage.glyphs: - ps = ot.PairSet() - ps.PairValueRecord = [] - self.PairSet.append(ps) - for glyph2, val1, val2 in sorted(p[glyph], key=lambda x: glyphMap[x[0]]): - pvr = ot.PairValueRecord() - pvr.SecondGlyph = glyph2 - pvr.Value1 = ( - ValueRecord(src=val1, valueFormat=valueFormat1) - if valueFormat1 - else None - ) - pvr.Value2 = ( - ValueRecord(src=val2, valueFormat=valueFormat2) - if valueFormat2 - else None - ) - ps.PairValueRecord.append(pvr) - ps.PairValueCount = len(ps.PairValueRecord) - self.PairSetCount = len(self.PairSet) - return self - - -def buildSinglePos(mapping, glyphMap): - """Builds a list of single adjustment (GPOS1) subtables. - - This builds a list of SinglePos subtables from a dictionary of glyph - names and their positioning adjustments. The format of the subtables are - determined to optimize the size of the resulting subtables. - See also :func:`buildSinglePosSubtable`. - - Note that if you are implementing a layout compiler, you may find it more - flexible to use - :py:class:`fontTools.otlLib.lookupBuilders.SinglePosBuilder` instead. - - Example:: - - mapping = { - "V": buildValue({ "xAdvance" : +5 }), - # ... - } - - subtables = buildSinglePos(pairs, font.getReverseGlyphMap()) - - Args: - mapping (dict): A mapping between glyphnames and - ``otTables.ValueRecord`` objects. - glyphMap: a glyph name to ID map, typically returned from - ``font.getReverseGlyphMap()``. - - Returns: - A list of ``otTables.SinglePos`` objects. - """ - result, handled = [], set() - # In SinglePos format 1, the covered glyphs all share the same ValueRecord. - # In format 2, each glyph has its own ValueRecord, but these records - # all have the same properties (eg., all have an X but no Y placement). - coverages, masks, values = {}, {}, {} - for glyph, value in mapping.items(): - key = _getSinglePosValueKey(value) - coverages.setdefault(key, []).append(glyph) - masks.setdefault(key[0], []).append(key) - values[key] = value - - # If a ValueRecord is shared between multiple glyphs, we generate - # a SinglePos format 1 subtable; that is the most compact form. - for key, glyphs in coverages.items(): - # 5 ushorts is the length of introducing another sublookup - if len(glyphs) * _getSinglePosValueSize(key) > 5: - format1Mapping = {g: values[key] for g in glyphs} - result.append(buildSinglePosSubtable(format1Mapping, glyphMap)) - handled.add(key) - - # In the remaining ValueRecords, look for those whose valueFormat - # (the set of used properties) is shared between multiple records. - # These will get encoded in format 2. 
- for valueFormat, keys in masks.items(): - f2 = [k for k in keys if k not in handled] - if len(f2) > 1: - format2Mapping = {} - for k in f2: - format2Mapping.update((g, values[k]) for g in coverages[k]) - result.append(buildSinglePosSubtable(format2Mapping, glyphMap)) - handled.update(f2) - - # The remaining ValueRecords are only used by a few glyphs, normally - # one. We encode these in format 1 again. - for key, glyphs in coverages.items(): - if key not in handled: - for g in glyphs: - st = buildSinglePosSubtable({g: values[key]}, glyphMap) - result.append(st) - - # When the OpenType layout engine traverses the subtables, it will - # stop after the first matching subtable. Therefore, we sort the - # resulting subtables by decreasing coverage size; this increases - # the chance that the layout engine can do an early exit. (Of course, - # this would only be true if all glyphs were equally frequent, which - # is not really the case; but we do not know their distribution). - # If two subtables cover the same number of glyphs, we sort them - # by glyph ID so that our output is deterministic. - result.sort(key=lambda t: _getSinglePosTableKey(t, glyphMap)) - return result - - -def buildSinglePosSubtable(values, glyphMap): - """Builds a single adjustment (GPOS1) subtable. - - This builds a list of SinglePos subtables from a dictionary of glyph - names and their positioning adjustments. The format of the subtable is - determined to optimize the size of the output. - See also :func:`buildSinglePos`. - - Note that if you are implementing a layout compiler, you may find it more - flexible to use - :py:class:`fontTools.otlLib.lookupBuilders.SinglePosBuilder` instead. - - Example:: - - mapping = { - "V": buildValue({ "xAdvance" : +5 }), - # ... - } - - subtable = buildSinglePos(pairs, font.getReverseGlyphMap()) - - Args: - mapping (dict): A mapping between glyphnames and - ``otTables.ValueRecord`` objects. - glyphMap: a glyph name to ID map, typically returned from - ``font.getReverseGlyphMap()``. - - Returns: - A ``otTables.SinglePos`` object. 
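As a concrete illustration of the format selection described in the comments above: when enough glyphs share one value record, ``buildSinglePos`` emits a single Format 1 subtable. The tabular-figure glyph names are hypothetical.

    from fontTools.otlLib.builder import buildSinglePos, buildValue

    glyph_map = {"zero.tnum": 1, "one.tnum": 2, "two.tnum": 3,
                 "three.tnum": 4, "four.tnum": 5, "five.tnum": 6}

    # Every glyph gets the same advance, so one Format 1 subtable is produced.
    mapping = {g: buildValue({"XAdvance": 600}) for g in glyph_map}
    subtables = buildSinglePos(mapping, glyph_map)
    assert len(subtables) == 1 and subtables[0].Format == 1

Had the records differed per glyph while sharing the same value format, a Format 2 subtable with one record per covered glyph would have been produced instead.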
- """ - self = ot.SinglePos() - self.Coverage = buildCoverage(values.keys(), glyphMap) - valueFormat = self.ValueFormat = reduce( - int.__or__, [v.getFormat() for v in values.values()], 0 - ) - valueRecords = [ - ValueRecord(src=values[g], valueFormat=valueFormat) - for g in self.Coverage.glyphs - ] - if all(v == valueRecords[0] for v in valueRecords): - self.Format = 1 - if self.ValueFormat != 0: - self.Value = valueRecords[0] - else: - self.Value = None - else: - self.Format = 2 - self.Value = valueRecords - self.ValueCount = len(self.Value) - return self - - -def _getSinglePosTableKey(subtable, glyphMap): - assert isinstance(subtable, ot.SinglePos), subtable - glyphs = subtable.Coverage.glyphs - return (-len(glyphs), glyphMap[glyphs[0]]) - - -def _getSinglePosValueKey(valueRecord): - # otBase.ValueRecord --> (2, ("YPlacement": 12)) - assert isinstance(valueRecord, ValueRecord), valueRecord - valueFormat, result = 0, [] - for name, value in valueRecord.__dict__.items(): - if isinstance(value, ot.Device): - result.append((name, _makeDeviceTuple(value))) - else: - result.append((name, value)) - valueFormat |= valueRecordFormatDict[name][0] - result.sort() - result.insert(0, valueFormat) - return tuple(result) - - -_DeviceTuple = namedtuple("_DeviceTuple", "DeltaFormat StartSize EndSize DeltaValue") - - -def _makeDeviceTuple(device): - # otTables.Device --> tuple, for making device tables unique - return _DeviceTuple( - device.DeltaFormat, - device.StartSize, - device.EndSize, - () if device.DeltaFormat & 0x8000 else tuple(device.DeltaValue), - ) - - -def _getSinglePosValueSize(valueKey): - # Returns how many ushorts this valueKey (short form of ValueRecord) takes up - count = 0 - for _, v in valueKey[1:]: - if isinstance(v, _DeviceTuple): - count += len(v.DeltaValue) + 3 - else: - count += 1 - return count - - -def buildValue(value): - """Builds a positioning value record. - - Value records are used to specify coordinates and adjustments for - positioning and attaching glyphs. Many of the positioning functions - in this library take ``otTables.ValueRecord`` objects as arguments. - This function builds value records from dictionaries. - - Args: - value (dict): A dictionary with zero or more of the following keys: - - ``xPlacement`` - - ``yPlacement`` - - ``xAdvance`` - - ``yAdvance`` - - ``xPlaDevice`` - - ``yPlaDevice`` - - ``xAdvDevice`` - - ``yAdvDevice`` - - Returns: - An ``otTables.ValueRecord`` object. - """ - self = ValueRecord() - for k, v in value.items(): - setattr(self, k, v) - return self - - -# GDEF - - -def buildAttachList(attachPoints, glyphMap): - """Builds an AttachList subtable. - - A GDEF table may contain an Attachment Point List table (AttachList) - which stores the contour indices of attachment points for glyphs with - attachment points. This routine builds AttachList subtables. - - Args: - attachPoints (dict): A mapping between glyph names and a list of - contour indices. - - Returns: - An ``otTables.AttachList`` object if attachment points are supplied, - or ``None`` otherwise. - """ - if not attachPoints: - return None - self = ot.AttachList() - self.Coverage = buildCoverage(attachPoints.keys(), glyphMap) - self.AttachPoint = [buildAttachPoint(attachPoints[g]) for g in self.Coverage.glyphs] - self.GlyphCount = len(self.AttachPoint) - return self - - -def buildAttachPoint(points): - # [4, 23, 41] --> otTables.AttachPoint - # Only used by above. 
- if not points: - return None - self = ot.AttachPoint() - self.PointIndex = sorted(set(points)) - self.PointCount = len(self.PointIndex) - return self - - -def buildCaretValueForCoord(coord): - # 500 --> otTables.CaretValue, format 1 - # (500, DeviceTable) --> otTables.CaretValue, format 3 - self = ot.CaretValue() - if isinstance(coord, tuple): - self.Format = 3 - self.Coordinate, self.DeviceTable = coord - else: - self.Format = 1 - self.Coordinate = coord - return self - - -def buildCaretValueForPoint(point): - # 4 --> otTables.CaretValue, format 2 - self = ot.CaretValue() - self.Format = 2 - self.CaretValuePoint = point - return self - - -def buildLigCaretList(coords, points, glyphMap): - """Builds a ligature caret list table. - - Ligatures appear as a single glyph representing multiple characters; however - when, for example, editing text containing a ``f_i`` ligature, the user may - want to place the cursor between the ``f`` and the ``i``. The ligature caret - list in the GDEF table specifies the position to display the "caret" (the - character insertion indicator, typically a flashing vertical bar) "inside" - the ligature to represent an insertion point. The insertion positions may - be specified either by coordinate or by contour point. - - Example:: - - coords = { - "f_f_i": [300, 600] # f|fi cursor at 300 units, ff|i cursor at 600. - } - points = { - "c_t": [28] # c|t cursor appears at coordinate of contour point 28. - } - ligcaretlist = buildLigCaretList(coords, points, font.getReverseGlyphMap()) - - Args: - coords: A mapping between glyph names and a list of coordinates for - the insertion point of each ligature component after the first one. - points: A mapping between glyph names and a list of contour points for - the insertion point of each ligature component after the first one. - glyphMap: a glyph name to ID map, typically returned from - ``font.getReverseGlyphMap()``. - - Returns: - A ``otTables.LigCaretList`` object if any carets are present, or - ``None`` otherwise.""" - glyphs = set(coords.keys()) if coords else set() - if points: - glyphs.update(points.keys()) - carets = {g: buildLigGlyph(coords.get(g), points.get(g)) for g in glyphs} - carets = {g: c for g, c in carets.items() if c is not None} - if not carets: - return None - self = ot.LigCaretList() - self.Coverage = buildCoverage(carets.keys(), glyphMap) - self.LigGlyph = [carets[g] for g in self.Coverage.glyphs] - self.LigGlyphCount = len(self.LigGlyph) - return self - - -def buildLigGlyph(coords, points): - # ([500], [4]) --> otTables.LigGlyph; None for empty coords/points - carets = [] - if coords: - coords = sorted(coords, key=lambda c: c[0] if isinstance(c, tuple) else c) - carets.extend([buildCaretValueForCoord(c) for c in coords]) - if points: - carets.extend([buildCaretValueForPoint(p) for p in sorted(points)]) - if not carets: - return None - self = ot.LigGlyph() - self.CaretValue = carets - self.CaretCount = len(self.CaretValue) - return self - - -def buildMarkGlyphSetsDef(markSets, glyphMap): - """Builds a mark glyph sets definition table. - - OpenType Layout lookups may choose to use mark filtering sets to consider - or ignore particular combinations of marks. These sets are specified by - setting a flag on the lookup, but the mark filtering sets are defined in - the ``GDEF`` table. This routine builds the subtable containing the mark - glyph set definitions. 
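The remaining GDEF builders can be exercised the same way; the ligatures, caret positions, and mark sets below are illustrative.

    from fontTools.otlLib.builder import buildLigCaretList, buildMarkGlyphSetsDef

    glyph_map = {"f_f_i": 1, "c_t": 2, "acute": 3, "grave": 4, "caron": 5}

    # Caret positions by coordinate for f_f_i and by contour point for c_t.
    carets = buildLigCaretList({"f_f_i": [300, 600]}, {"c_t": [28]}, glyph_map)
    assert carets.LigGlyphCount == 2

    # Two mark filtering sets that lookups can reference by index.
    marksets = buildMarkGlyphSetsDef([{"acute", "grave"}, {"caron", "grave"}], glyph_map)
    assert marksets.MarkSetCount == 2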
- - Example:: - - set0 = set("acute", "grave") - set1 = set("caron", "grave") - - markglyphsets = buildMarkGlyphSetsDef([set0, set1], font.getReverseGlyphMap()) - - Args: - - markSets: A list of sets of glyphnames. - glyphMap: a glyph name to ID map, typically returned from - ``font.getReverseGlyphMap()``. - - Returns - An ``otTables.MarkGlyphSetsDef`` object. - """ - if not markSets: - return None - self = ot.MarkGlyphSetsDef() - self.MarkSetTableFormat = 1 - self.Coverage = [buildCoverage(m, glyphMap) for m in markSets] - self.MarkSetCount = len(self.Coverage) - return self - - -class ClassDefBuilder(object): - """Helper for building ClassDef tables.""" - - def __init__(self, useClass0): - self.classes_ = set() - self.glyphs_ = {} - self.useClass0_ = useClass0 - - def canAdd(self, glyphs): - if isinstance(glyphs, (set, frozenset)): - glyphs = sorted(glyphs) - glyphs = tuple(glyphs) - if glyphs in self.classes_: - return True - for glyph in glyphs: - if glyph in self.glyphs_: - return False - return True - - def add(self, glyphs): - if isinstance(glyphs, (set, frozenset)): - glyphs = sorted(glyphs) - glyphs = tuple(glyphs) - if glyphs in self.classes_: - return - self.classes_.add(glyphs) - for glyph in glyphs: - if glyph in self.glyphs_: - raise OpenTypeLibError( - f"Glyph {glyph} is already present in class.", None - ) - self.glyphs_[glyph] = glyphs - - def classes(self): - # In ClassDef1 tables, class id #0 does not need to be encoded - # because zero is the default. Therefore, we use id #0 for the - # glyph class that has the largest number of members. However, - # in other tables than ClassDef1, 0 means "every other glyph" - # so we should not use that ID for any real glyph classes; - # we implement this by inserting an empty set at position 0. - # - # TODO: Instead of counting the number of glyphs in each class, - # we should determine the encoded size. If the glyphs in a large - # class form a contiguous range, the encoding is actually quite - # compact, whereas a non-contiguous set might need a lot of bytes - # in the output file. We don't get this right with the key below. - result = sorted(self.classes_, key=lambda s: (len(s), s), reverse=True) - if not self.useClass0_: - result.insert(0, frozenset()) - return result - - def build(self): - glyphClasses = {} - for classID, glyphs in enumerate(self.classes()): - if classID == 0: - continue - for glyph in glyphs: - glyphClasses[glyph] = classID - classDef = ot.ClassDef() - classDef.classDefs = glyphClasses - return classDef - - -AXIS_VALUE_NEGATIVE_INFINITY = fixedToFloat(-0x80000000, 16) -AXIS_VALUE_POSITIVE_INFINITY = fixedToFloat(0x7FFFFFFF, 16) - - -def buildStatTable( - ttFont, axes, locations=None, elidedFallbackName=2, windowsNames=True, macNames=True -): - """Add a 'STAT' table to 'ttFont'. - - 'axes' is a list of dictionaries describing axes and their - values. - - Example:: - - axes = [ - dict( - tag="wght", - name="Weight", - ordering=0, # optional - values=[ - dict(value=100, name='Thin'), - dict(value=300, name='Light'), - dict(value=400, name='Regular', flags=0x2), - dict(value=900, name='Black'), - ], - ) - ] - - Each axis dict must have 'tag' and 'name' items. 'tag' maps - to the 'AxisTag' field. 'name' can be a name ID (int), a string, - or a dictionary containing multilingual names (see the - addMultilingualName() name table method), and will translate to - the AxisNameID field. - - An axis dict may contain an 'ordering' item that maps to the - AxisOrdering field. 
If omitted, the order of the axes list is - used to calculate AxisOrdering fields. - - The axis dict may contain a 'values' item, which is a list of - dictionaries describing AxisValue records belonging to this axis. - - Each value dict must have a 'name' item, which can be a name ID - (int), a string, or a dictionary containing multilingual names, - like the axis name. It translates to the ValueNameID field. - - Optionally the value dict can contain a 'flags' item. It maps to - the AxisValue Flags field, and will be 0 when omitted. - - The format of the AxisValue is determined by the remaining contents - of the value dictionary: - - If the value dict contains a 'value' item, an AxisValue record - Format 1 is created. If in addition to the 'value' item it contains - a 'linkedValue' item, an AxisValue record Format 3 is built. - - If the value dict contains a 'nominalValue' item, an AxisValue - record Format 2 is built. Optionally it may contain 'rangeMinValue' - and 'rangeMaxValue' items. These map to -Infinity and +Infinity - respectively if omitted. - - You cannot specify Format 4 AxisValue tables this way, as they are - not tied to a single axis, and specify a name for a location that - is defined by multiple axes values. Instead, you need to supply the - 'locations' argument. - - The optional 'locations' argument specifies AxisValue Format 4 - tables. It should be a list of dicts, where each dict has a 'name' - item, which works just like the value dicts above, an optional - 'flags' item (defaulting to 0x0), and a 'location' dict. A - location dict key is an axis tag, and the associated value is the - location on the specified axis. They map to the AxisIndex and Value - fields of the AxisValueRecord. - - Example:: - - locations = [ - dict(name='Regular ABCD', location=dict(wght=300, ABCD=100)), - dict(name='Bold ABCD XYZ', location=dict(wght=600, ABCD=200)), - ] - - The optional 'elidedFallbackName' argument can be a name ID (int), - a string, a dictionary containing multilingual names, or a list of - STATNameStatements. It translates to the ElidedFallbackNameID field. - - The 'ttFont' argument must be a TTFont instance that already has a - 'name' table. If a 'STAT' table already exists, it will be - overwritten by the newly created one. 
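A sketch of a typical ``buildStatTable`` call, assuming ``variable.ttf`` is a placeholder for a font that already has a name table; the weight axis and its values are invented for the example.

    from fontTools import ttLib
    from fontTools.otlLib.builder import buildStatTable

    font = ttLib.TTFont("variable.ttf")  # placeholder path; must have a name table

    axes = [
        dict(
            tag="wght",
            name="Weight",
            values=[
                # Format 3: elidable Regular linked to its Bold counterpart.
                dict(value=400, name="Regular", flags=0x2, linkedValue=700),
                # Format 1: a plain named stop on the axis.
                dict(value=700, name="Bold"),
            ],
        ),
    ]
    buildStatTable(font, axes, elidedFallbackName="Regular")
    font.save("variable-STAT.ttf")

Passing the optional 'locations' argument in addition would add Format 4 AxisValue records and bump the table version to 0x00010002, as described above.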
- """ - ttFont["STAT"] = ttLib.newTable("STAT") - statTable = ttFont["STAT"].table = ot.STAT() - nameTable = ttFont["name"] - statTable.ElidedFallbackNameID = _addName( - nameTable, elidedFallbackName, windows=windowsNames, mac=macNames - ) - - # 'locations' contains data for AxisValue Format 4 - axisRecords, axisValues = _buildAxisRecords( - axes, nameTable, windowsNames=windowsNames, macNames=macNames - ) - if not locations: - statTable.Version = 0x00010001 - else: - # We'll be adding Format 4 AxisValue records, which - # requires a higher table version - statTable.Version = 0x00010002 - multiAxisValues = _buildAxisValuesFormat4( - locations, axes, nameTable, windowsNames=windowsNames, macNames=macNames - ) - axisValues = multiAxisValues + axisValues - nameTable.names.sort() - - # Store AxisRecords - axisRecordArray = ot.AxisRecordArray() - axisRecordArray.Axis = axisRecords - # XXX these should not be hard-coded but computed automatically - statTable.DesignAxisRecordSize = 8 - statTable.DesignAxisRecord = axisRecordArray - statTable.DesignAxisCount = len(axisRecords) - - statTable.AxisValueCount = 0 - statTable.AxisValueArray = None - if axisValues: - # Store AxisValueRecords - axisValueArray = ot.AxisValueArray() - axisValueArray.AxisValue = axisValues - statTable.AxisValueArray = axisValueArray - statTable.AxisValueCount = len(axisValues) - - -def _buildAxisRecords(axes, nameTable, windowsNames=True, macNames=True): - axisRecords = [] - axisValues = [] - for axisRecordIndex, axisDict in enumerate(axes): - axis = ot.AxisRecord() - axis.AxisTag = axisDict["tag"] - axis.AxisNameID = _addName( - nameTable, axisDict["name"], 256, windows=windowsNames, mac=macNames - ) - axis.AxisOrdering = axisDict.get("ordering", axisRecordIndex) - axisRecords.append(axis) - - for axisVal in axisDict.get("values", ()): - axisValRec = ot.AxisValue() - axisValRec.AxisIndex = axisRecordIndex - axisValRec.Flags = axisVal.get("flags", 0) - axisValRec.ValueNameID = _addName( - nameTable, axisVal["name"], windows=windowsNames, mac=macNames - ) - - if "value" in axisVal: - axisValRec.Value = axisVal["value"] - if "linkedValue" in axisVal: - axisValRec.Format = 3 - axisValRec.LinkedValue = axisVal["linkedValue"] - else: - axisValRec.Format = 1 - elif "nominalValue" in axisVal: - axisValRec.Format = 2 - axisValRec.NominalValue = axisVal["nominalValue"] - axisValRec.RangeMinValue = axisVal.get( - "rangeMinValue", AXIS_VALUE_NEGATIVE_INFINITY - ) - axisValRec.RangeMaxValue = axisVal.get( - "rangeMaxValue", AXIS_VALUE_POSITIVE_INFINITY - ) - else: - raise ValueError("Can't determine format for AxisValue") - - axisValues.append(axisValRec) - return axisRecords, axisValues - - -def _buildAxisValuesFormat4( - locations, axes, nameTable, windowsNames=True, macNames=True -): - axisTagToIndex = {} - for axisRecordIndex, axisDict in enumerate(axes): - axisTagToIndex[axisDict["tag"]] = axisRecordIndex - - axisValues = [] - for axisLocationDict in locations: - axisValRec = ot.AxisValue() - axisValRec.Format = 4 - axisValRec.ValueNameID = _addName( - nameTable, axisLocationDict["name"], windows=windowsNames, mac=macNames - ) - axisValRec.Flags = axisLocationDict.get("flags", 0) - axisValueRecords = [] - for tag, value in axisLocationDict["location"].items(): - avr = ot.AxisValueRecord() - avr.AxisIndex = axisTagToIndex[tag] - avr.Value = value - axisValueRecords.append(avr) - axisValueRecords.sort(key=lambda avr: avr.AxisIndex) - axisValRec.AxisCount = len(axisValueRecords) - axisValRec.AxisValueRecord = axisValueRecords - 
axisValues.append(axisValRec) - return axisValues - - -def _addName(nameTable, value, minNameID=0, windows=True, mac=True): - if isinstance(value, int): - # Already a nameID - return value - if isinstance(value, str): - names = dict(en=value) - elif isinstance(value, dict): - names = value - elif isinstance(value, list): - nameID = nameTable._findUnusedNameID() - for nameRecord in value: - if isinstance(nameRecord, STATNameStatement): - nameTable.setName( - nameRecord.string, - nameID, - nameRecord.platformID, - nameRecord.platEncID, - nameRecord.langID, - ) - else: - raise TypeError("value must be a list of STATNameStatements") - return nameID - else: - raise TypeError("value must be int, str, dict or list") - return nameTable.addMultilingualName( - names, windows=windows, mac=mac, minNameID=minNameID - ) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/h11/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/h11/__init__.py deleted file mode 100644 index 989e92c3458681a6f0be72ae4105ea742750d328..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/h11/__init__.py +++ /dev/null @@ -1,62 +0,0 @@ -# A highish-level implementation of the HTTP/1.1 wire protocol (RFC 7230), -# containing no networking code at all, loosely modelled on hyper-h2's generic -# implementation of HTTP/2 (and in particular the h2.connection.H2Connection -# class). There's still a bunch of subtle details you need to get right if you -# want to make this actually useful, because it doesn't implement all the -# semantics to check that what you're asking to write to the wire is sensible, -# but at least it gets you out of dealing with the wire itself. - -from h11._connection import Connection, NEED_DATA, PAUSED -from h11._events import ( - ConnectionClosed, - Data, - EndOfMessage, - Event, - InformationalResponse, - Request, - Response, -) -from h11._state import ( - CLIENT, - CLOSED, - DONE, - ERROR, - IDLE, - MIGHT_SWITCH_PROTOCOL, - MUST_CLOSE, - SEND_BODY, - SEND_RESPONSE, - SERVER, - SWITCHED_PROTOCOL, -) -from h11._util import LocalProtocolError, ProtocolError, RemoteProtocolError -from h11._version import __version__ - -PRODUCT_ID = "python-h11/" + __version__ - - -__all__ = ( - "Connection", - "NEED_DATA", - "PAUSED", - "ConnectionClosed", - "Data", - "EndOfMessage", - "Event", - "InformationalResponse", - "Request", - "Response", - "CLIENT", - "CLOSED", - "DONE", - "ERROR", - "IDLE", - "MUST_CLOSE", - "SEND_BODY", - "SEND_RESPONSE", - "SERVER", - "SWITCHED_PROTOCOL", - "ProtocolError", - "LocalProtocolError", - "RemoteProtocolError", -) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/tests/test_unicode.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/tests/test_unicode.py deleted file mode 100644 index e5454bd48df1dea3a01e012efa88370eee0739db..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/tests/test_unicode.py +++ /dev/null @@ -1,368 +0,0 @@ -import pytest - -import numpy as np -from numpy.testing import assert_, assert_equal, assert_array_equal - -def buffer_length(arr): - if isinstance(arr, str): - if not arr: - charmax = 0 - else: - charmax = max([ord(c) for c in arr]) - if charmax < 256: - size = 1 - elif charmax < 65536: - size = 2 - else: - size = 4 - return size * len(arr) - v = memoryview(arr) - if v.shape is None: - return len(v) * v.itemsize - else: - 
return np.prod(v.shape) * v.itemsize - - -# In both cases below we need to make sure that the byte swapped value (as -# UCS4) is still a valid unicode: -# Value that can be represented in UCS2 interpreters -ucs2_value = '\u0900' -# Value that cannot be represented in UCS2 interpreters (but can in UCS4) -ucs4_value = '\U00100900' - - -def test_string_cast(): - str_arr = np.array(["1234", "1234\0\0"], dtype='S') - uni_arr1 = str_arr.astype('>U') - uni_arr2 = str_arr.astype(' None: - # attributes need to be set first before calling - # super init (as that calls serialize) - self._freq = freq - pyarrow.ExtensionType.__init__(self, pyarrow.int64(), "pandas.period") - - @property - def freq(self): - return self._freq - - def __arrow_ext_serialize__(self) -> bytes: - metadata = {"freq": self.freq} - return json.dumps(metadata).encode() - - @classmethod - def __arrow_ext_deserialize__(cls, storage_type, serialized) -> ArrowPeriodType: - metadata = json.loads(serialized.decode()) - return ArrowPeriodType(metadata["freq"]) - - def __eq__(self, other): - if isinstance(other, pyarrow.BaseExtensionType): - return type(self) == type(other) and self.freq == other.freq - else: - return NotImplemented - - def __ne__(self, other) -> bool: - return not self == other - - def __hash__(self) -> int: - return hash((str(self), self.freq)) - - def to_pandas_dtype(self): - return PeriodDtype(freq=self.freq) - - -# register the type with a dummy instance -_period_type = ArrowPeriodType("D") -pyarrow.register_extension_type(_period_type) - - -class ArrowIntervalType(pyarrow.ExtensionType): - def __init__(self, subtype, closed: IntervalClosedType) -> None: - # attributes need to be set first before calling - # super init (as that calls serialize) - assert closed in VALID_CLOSED - self._closed: IntervalClosedType = closed - if not isinstance(subtype, pyarrow.DataType): - subtype = pyarrow.type_for_alias(str(subtype)) - self._subtype = subtype - - storage_type = pyarrow.struct([("left", subtype), ("right", subtype)]) - pyarrow.ExtensionType.__init__(self, storage_type, "pandas.interval") - - @property - def subtype(self): - return self._subtype - - @property - def closed(self) -> IntervalClosedType: - return self._closed - - def __arrow_ext_serialize__(self) -> bytes: - metadata = {"subtype": str(self.subtype), "closed": self.closed} - return json.dumps(metadata).encode() - - @classmethod - def __arrow_ext_deserialize__(cls, storage_type, serialized) -> ArrowIntervalType: - metadata = json.loads(serialized.decode()) - subtype = pyarrow.type_for_alias(metadata["subtype"]) - closed = metadata["closed"] - return ArrowIntervalType(subtype, closed) - - def __eq__(self, other): - if isinstance(other, pyarrow.BaseExtensionType): - return ( - type(self) == type(other) - and self.subtype == other.subtype - and self.closed == other.closed - ) - else: - return NotImplemented - - def __ne__(self, other) -> bool: - return not self == other - - def __hash__(self) -> int: - return hash((str(self), str(self.subtype), self.closed)) - - def to_pandas_dtype(self): - return IntervalDtype(self.subtype.to_pandas_dtype(), self.closed) - - -# register the type with a dummy instance -_interval_type = ArrowIntervalType(pyarrow.int64(), "left") -pyarrow.register_extension_type(_interval_type) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/base/test_transpose.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/base/test_transpose.py deleted file mode 100644 index 
246f33d27476cb419620fb8571984619785f9b62..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/base/test_transpose.py +++ /dev/null @@ -1,56 +0,0 @@ -import numpy as np -import pytest - -from pandas import ( - CategoricalDtype, - DataFrame, -) -import pandas._testing as tm - - -def test_transpose(index_or_series_obj): - obj = index_or_series_obj - tm.assert_equal(obj.transpose(), obj) - - -def test_transpose_non_default_axes(index_or_series_obj): - msg = "the 'axes' parameter is not supported" - obj = index_or_series_obj - with pytest.raises(ValueError, match=msg): - obj.transpose(1) - with pytest.raises(ValueError, match=msg): - obj.transpose(axes=1) - - -def test_numpy_transpose(index_or_series_obj): - msg = "the 'axes' parameter is not supported" - obj = index_or_series_obj - tm.assert_equal(np.transpose(obj), obj) - - with pytest.raises(ValueError, match=msg): - np.transpose(obj, axes=1) - - -@pytest.mark.parametrize( - "data, transposed_data, index, columns, dtype", - [ - ([[1], [2]], [[1, 2]], ["a", "a"], ["b"], int), - ([[1], [2]], [[1, 2]], ["a", "a"], ["b"], CategoricalDtype([1, 2])), - ([[1, 2]], [[1], [2]], ["b"], ["a", "a"], int), - ([[1, 2]], [[1], [2]], ["b"], ["a", "a"], CategoricalDtype([1, 2])), - ([[1, 2], [3, 4]], [[1, 3], [2, 4]], ["a", "a"], ["b", "b"], int), - ( - [[1, 2], [3, 4]], - [[1, 3], [2, 4]], - ["a", "a"], - ["b", "b"], - CategoricalDtype([1, 2, 3, 4]), - ), - ], -) -def test_duplicate_labels(data, transposed_data, index, columns, dtype): - # GH 42380 - df = DataFrame(data, index=index, columns=columns, dtype=dtype) - result = df.T - expected = DataFrame(transposed_data, index=columns, columns=index, dtype=dtype) - tm.assert_frame_equal(result, expected) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_duplicated.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_duplicated.py deleted file mode 100644 index 788aede8051100cab2558998daf700e8d77d66f9..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_duplicated.py +++ /dev/null @@ -1,117 +0,0 @@ -import re -import sys - -import numpy as np -import pytest - -from pandas import ( - DataFrame, - Series, - date_range, -) -import pandas._testing as tm - - -@pytest.mark.parametrize("subset", ["a", ["a"], ["a", "B"]]) -def test_duplicated_with_misspelled_column_name(subset): - # GH 19730 - df = DataFrame({"A": [0, 0, 1], "B": [0, 0, 1], "C": [0, 0, 1]}) - msg = re.escape("Index(['a'], dtype='object')") - - with pytest.raises(KeyError, match=msg): - df.duplicated(subset) - - -def test_duplicated_implemented_no_recursion(): - # gh-21524 - # Ensure duplicated isn't implemented using recursion that - # can fail on wide frames - df = DataFrame(np.random.default_rng(2).integers(0, 1000, (10, 1000))) - rec_limit = sys.getrecursionlimit() - try: - sys.setrecursionlimit(100) - result = df.duplicated() - finally: - sys.setrecursionlimit(rec_limit) - - # Then duplicates produce the bool Series as a result and don't fail during - # calculation. 
Actual values doesn't matter here, though usually it's all - # False in this case - assert isinstance(result, Series) - assert result.dtype == np.bool_ - - -@pytest.mark.parametrize( - "keep, expected", - [ - ("first", Series([False, False, True, False, True])), - ("last", Series([True, True, False, False, False])), - (False, Series([True, True, True, False, True])), - ], -) -def test_duplicated_keep(keep, expected): - df = DataFrame({"A": [0, 1, 1, 2, 0], "B": ["a", "b", "b", "c", "a"]}) - - result = df.duplicated(keep=keep) - tm.assert_series_equal(result, expected) - - -@pytest.mark.xfail(reason="GH#21720; nan/None falsely considered equal") -@pytest.mark.parametrize( - "keep, expected", - [ - ("first", Series([False, False, True, False, True])), - ("last", Series([True, True, False, False, False])), - (False, Series([True, True, True, False, True])), - ], -) -def test_duplicated_nan_none(keep, expected): - df = DataFrame({"C": [np.nan, 3, 3, None, np.nan], "x": 1}, dtype=object) - - result = df.duplicated(keep=keep) - tm.assert_series_equal(result, expected) - - -@pytest.mark.parametrize("subset", [None, ["A", "B"], "A"]) -def test_duplicated_subset(subset, keep): - df = DataFrame( - { - "A": [0, 1, 1, 2, 0], - "B": ["a", "b", "b", "c", "a"], - "C": [np.nan, 3, 3, None, np.nan], - } - ) - - if subset is None: - subset = list(df.columns) - elif isinstance(subset, str): - # need to have a DataFrame, not a Series - # -> select columns with singleton list, not string - subset = [subset] - - expected = df[subset].duplicated(keep=keep) - result = df.duplicated(keep=keep, subset=subset) - tm.assert_series_equal(result, expected) - - -def test_duplicated_on_empty_frame(): - # GH 25184 - - df = DataFrame(columns=["a", "b"]) - dupes = df.duplicated("a") - - result = df[dupes] - expected = df.copy() - tm.assert_frame_equal(result, expected) - - -def test_frame_datetime64_duplicated(): - dates = date_range("2010-07-01", end="2010-08-05") - - tst = DataFrame({"symbol": "AAA", "date": dates}) - result = tst.duplicated(["date", "symbol"]) - assert (-result).all() - - tst = DataFrame({"date": dates}) - result = tst.date.duplicated() - assert (-result).all() diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/groupby/aggregate/test_numba.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/groupby/aggregate/test_numba.py deleted file mode 100644 index ee694129f71183294dc780783d3b9ccdeae73bf4..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/groupby/aggregate/test_numba.py +++ /dev/null @@ -1,392 +0,0 @@ -import numpy as np -import pytest - -from pandas.errors import NumbaUtilError - -from pandas import ( - DataFrame, - Index, - NamedAgg, - Series, - option_context, -) -import pandas._testing as tm - -pytestmark = pytest.mark.single_cpu - - -def test_correct_function_signature(): - pytest.importorskip("numba") - - def incorrect_function(x): - return sum(x) * 2.7 - - data = DataFrame( - {"key": ["a", "a", "b", "b", "a"], "data": [1.0, 2.0, 3.0, 4.0, 5.0]}, - columns=["key", "data"], - ) - with pytest.raises(NumbaUtilError, match="The first 2"): - data.groupby("key").agg(incorrect_function, engine="numba") - - with pytest.raises(NumbaUtilError, match="The first 2"): - data.groupby("key")["data"].agg(incorrect_function, engine="numba") - - -def test_check_nopython_kwargs(): - pytest.importorskip("numba") - - def incorrect_function(values, index): - return sum(values) * 
2.7 - - data = DataFrame( - {"key": ["a", "a", "b", "b", "a"], "data": [1.0, 2.0, 3.0, 4.0, 5.0]}, - columns=["key", "data"], - ) - with pytest.raises(NumbaUtilError, match="numba does not support"): - data.groupby("key").agg(incorrect_function, engine="numba", a=1) - - with pytest.raises(NumbaUtilError, match="numba does not support"): - data.groupby("key")["data"].agg(incorrect_function, engine="numba", a=1) - - -@pytest.mark.filterwarnings("ignore") -# Filter warnings when parallel=True and the function can't be parallelized by Numba -@pytest.mark.parametrize("jit", [True, False]) -@pytest.mark.parametrize("pandas_obj", ["Series", "DataFrame"]) -@pytest.mark.parametrize("as_index", [True, False]) -def test_numba_vs_cython(jit, pandas_obj, nogil, parallel, nopython, as_index): - pytest.importorskip("numba") - - def func_numba(values, index): - return np.mean(values) * 2.7 - - if jit: - # Test accepted jitted functions - import numba - - func_numba = numba.jit(func_numba) - - data = DataFrame( - {0: ["a", "a", "b", "b", "a"], 1: [1.0, 2.0, 3.0, 4.0, 5.0]}, columns=[0, 1] - ) - engine_kwargs = {"nogil": nogil, "parallel": parallel, "nopython": nopython} - grouped = data.groupby(0, as_index=as_index) - if pandas_obj == "Series": - grouped = grouped[1] - - result = grouped.agg(func_numba, engine="numba", engine_kwargs=engine_kwargs) - expected = grouped.agg(lambda x: np.mean(x) * 2.7, engine="cython") - - tm.assert_equal(result, expected) - - -@pytest.mark.filterwarnings("ignore") -# Filter warnings when parallel=True and the function can't be parallelized by Numba -@pytest.mark.parametrize("jit", [True, False]) -@pytest.mark.parametrize("pandas_obj", ["Series", "DataFrame"]) -def test_cache(jit, pandas_obj, nogil, parallel, nopython): - # Test that the functions are cached correctly if we switch functions - pytest.importorskip("numba") - - def func_1(values, index): - return np.mean(values) - 3.4 - - def func_2(values, index): - return np.mean(values) * 2.7 - - if jit: - import numba - - func_1 = numba.jit(func_1) - func_2 = numba.jit(func_2) - - data = DataFrame( - {0: ["a", "a", "b", "b", "a"], 1: [1.0, 2.0, 3.0, 4.0, 5.0]}, columns=[0, 1] - ) - engine_kwargs = {"nogil": nogil, "parallel": parallel, "nopython": nopython} - grouped = data.groupby(0) - if pandas_obj == "Series": - grouped = grouped[1] - - result = grouped.agg(func_1, engine="numba", engine_kwargs=engine_kwargs) - expected = grouped.agg(lambda x: np.mean(x) - 3.4, engine="cython") - tm.assert_equal(result, expected) - - # Add func_2 to the cache - result = grouped.agg(func_2, engine="numba", engine_kwargs=engine_kwargs) - expected = grouped.agg(lambda x: np.mean(x) * 2.7, engine="cython") - tm.assert_equal(result, expected) - - # Retest func_1 which should use the cache - result = grouped.agg(func_1, engine="numba", engine_kwargs=engine_kwargs) - expected = grouped.agg(lambda x: np.mean(x) - 3.4, engine="cython") - tm.assert_equal(result, expected) - - -def test_use_global_config(): - pytest.importorskip("numba") - - def func_1(values, index): - return np.mean(values) - 3.4 - - data = DataFrame( - {0: ["a", "a", "b", "b", "a"], 1: [1.0, 2.0, 3.0, 4.0, 5.0]}, columns=[0, 1] - ) - grouped = data.groupby(0) - expected = grouped.agg(func_1, engine="numba") - with option_context("compute.use_numba", True): - result = grouped.agg(func_1, engine=None) - tm.assert_frame_equal(expected, result) - - -@pytest.mark.parametrize( - "agg_kwargs", - [ - {"func": ["min", "max"]}, - {"func": "min"}, - {"func": {1: ["min", "max"], 2: "sum"}}, 
- {"bmin": NamedAgg(column=1, aggfunc="min")}, - ], -) -def test_multifunc_numba_vs_cython_frame(agg_kwargs): - pytest.importorskip("numba") - data = DataFrame( - { - 0: ["a", "a", "b", "b", "a"], - 1: [1.0, 2.0, 3.0, 4.0, 5.0], - 2: [1, 2, 3, 4, 5], - }, - columns=[0, 1, 2], - ) - grouped = data.groupby(0) - result = grouped.agg(**agg_kwargs, engine="numba") - expected = grouped.agg(**agg_kwargs, engine="cython") - tm.assert_frame_equal(result, expected) - - -@pytest.mark.parametrize( - "agg_kwargs,expected_func", - [ - ({"func": lambda values, index: values.sum()}, "sum"), - # FIXME - pytest.param( - { - "func": [ - lambda values, index: values.sum(), - lambda values, index: values.min(), - ] - }, - ["sum", "min"], - marks=pytest.mark.xfail( - reason="This doesn't work yet! Fails in nopython pipeline!" - ), - ), - ], -) -def test_multifunc_numba_udf_frame(agg_kwargs, expected_func): - pytest.importorskip("numba") - data = DataFrame( - { - 0: ["a", "a", "b", "b", "a"], - 1: [1.0, 2.0, 3.0, 4.0, 5.0], - 2: [1, 2, 3, 4, 5], - }, - columns=[0, 1, 2], - ) - grouped = data.groupby(0) - result = grouped.agg(**agg_kwargs, engine="numba") - expected = grouped.agg(expected_func, engine="cython") - # check_dtype can be removed if GH 44952 is addressed - # Currently, UDFs still always return float64 while reductions can preserve dtype - tm.assert_frame_equal(result, expected, check_dtype=False) - - -@pytest.mark.parametrize( - "agg_kwargs", - [{"func": ["min", "max"]}, {"func": "min"}, {"min_val": "min", "max_val": "max"}], -) -def test_multifunc_numba_vs_cython_series(agg_kwargs): - pytest.importorskip("numba") - labels = ["a", "a", "b", "b", "a"] - data = Series([1.0, 2.0, 3.0, 4.0, 5.0]) - grouped = data.groupby(labels) - agg_kwargs["engine"] = "numba" - result = grouped.agg(**agg_kwargs) - agg_kwargs["engine"] = "cython" - expected = grouped.agg(**agg_kwargs) - if isinstance(expected, DataFrame): - tm.assert_frame_equal(result, expected) - else: - tm.assert_series_equal(result, expected) - - -@pytest.mark.single_cpu -@pytest.mark.parametrize( - "data,agg_kwargs", - [ - (Series([1.0, 2.0, 3.0, 4.0, 5.0]), {"func": ["min", "max"]}), - (Series([1.0, 2.0, 3.0, 4.0, 5.0]), {"func": "min"}), - ( - DataFrame( - {1: [1.0, 2.0, 3.0, 4.0, 5.0], 2: [1, 2, 3, 4, 5]}, columns=[1, 2] - ), - {"func": ["min", "max"]}, - ), - ( - DataFrame( - {1: [1.0, 2.0, 3.0, 4.0, 5.0], 2: [1, 2, 3, 4, 5]}, columns=[1, 2] - ), - {"func": "min"}, - ), - ( - DataFrame( - {1: [1.0, 2.0, 3.0, 4.0, 5.0], 2: [1, 2, 3, 4, 5]}, columns=[1, 2] - ), - {"func": {1: ["min", "max"], 2: "sum"}}, - ), - ( - DataFrame( - {1: [1.0, 2.0, 3.0, 4.0, 5.0], 2: [1, 2, 3, 4, 5]}, columns=[1, 2] - ), - {"min_col": NamedAgg(column=1, aggfunc="min")}, - ), - ], -) -def test_multifunc_numba_kwarg_propagation(data, agg_kwargs): - pytest.importorskip("numba") - labels = ["a", "a", "b", "b", "a"] - grouped = data.groupby(labels) - result = grouped.agg(**agg_kwargs, engine="numba", engine_kwargs={"parallel": True}) - expected = grouped.agg(**agg_kwargs, engine="numba") - if isinstance(expected, DataFrame): - tm.assert_frame_equal(result, expected) - else: - tm.assert_series_equal(result, expected) - - -def test_args_not_cached(): - # GH 41647 - pytest.importorskip("numba") - - def sum_last(values, index, n): - return values[-n:].sum() - - df = DataFrame({"id": [0, 0, 1, 1], "x": [1, 1, 1, 1]}) - grouped_x = df.groupby("id")["x"] - result = grouped_x.agg(sum_last, 1, engine="numba") - expected = Series([1.0] * 2, name="x", index=Index([0, 1], name="id")) - 
tm.assert_series_equal(result, expected) - - result = grouped_x.agg(sum_last, 2, engine="numba") - expected = Series([2.0] * 2, name="x", index=Index([0, 1], name="id")) - tm.assert_series_equal(result, expected) - - -def test_index_data_correctly_passed(): - # GH 43133 - pytest.importorskip("numba") - - def f(values, index): - return np.mean(index) - - df = DataFrame({"group": ["A", "A", "B"], "v": [4, 5, 6]}, index=[-1, -2, -3]) - result = df.groupby("group").aggregate(f, engine="numba") - expected = DataFrame( - [-1.5, -3.0], columns=["v"], index=Index(["A", "B"], name="group") - ) - tm.assert_frame_equal(result, expected) - - -def test_engine_kwargs_not_cached(): - # If the user passes a different set of engine_kwargs don't return the same - # jitted function - pytest.importorskip("numba") - nogil = True - parallel = False - nopython = True - - def func_kwargs(values, index): - return nogil + parallel + nopython - - engine_kwargs = {"nopython": nopython, "nogil": nogil, "parallel": parallel} - df = DataFrame({"value": [0, 0, 0]}) - result = df.groupby(level=0).aggregate( - func_kwargs, engine="numba", engine_kwargs=engine_kwargs - ) - expected = DataFrame({"value": [2.0, 2.0, 2.0]}) - tm.assert_frame_equal(result, expected) - - nogil = False - engine_kwargs = {"nopython": nopython, "nogil": nogil, "parallel": parallel} - result = df.groupby(level=0).aggregate( - func_kwargs, engine="numba", engine_kwargs=engine_kwargs - ) - expected = DataFrame({"value": [1.0, 1.0, 1.0]}) - tm.assert_frame_equal(result, expected) - - -@pytest.mark.filterwarnings("ignore") -def test_multiindex_one_key(nogil, parallel, nopython): - pytest.importorskip("numba") - - def numba_func(values, index): - return 1 - - df = DataFrame([{"A": 1, "B": 2, "C": 3}]).set_index(["A", "B"]) - engine_kwargs = {"nopython": nopython, "nogil": nogil, "parallel": parallel} - result = df.groupby("A").agg( - numba_func, engine="numba", engine_kwargs=engine_kwargs - ) - expected = DataFrame([1.0], index=Index([1], name="A"), columns=["C"]) - tm.assert_frame_equal(result, expected) - - -def test_multiindex_multi_key_not_supported(nogil, parallel, nopython): - pytest.importorskip("numba") - - def numba_func(values, index): - return 1 - - df = DataFrame([{"A": 1, "B": 2, "C": 3}]).set_index(["A", "B"]) - engine_kwargs = {"nopython": nopython, "nogil": nogil, "parallel": parallel} - with pytest.raises(NotImplementedError, match="more than 1 grouping labels"): - df.groupby(["A", "B"]).agg( - numba_func, engine="numba", engine_kwargs=engine_kwargs - ) - - -def test_multilabel_numba_vs_cython(numba_supported_reductions): - pytest.importorskip("numba") - reduction, kwargs = numba_supported_reductions - df = DataFrame( - { - "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"], - "B": ["one", "one", "two", "three", "two", "two", "one", "three"], - "C": np.random.default_rng(2).standard_normal(8), - "D": np.random.default_rng(2).standard_normal(8), - } - ) - gb = df.groupby(["A", "B"]) - res_agg = gb.agg(reduction, engine="numba", **kwargs) - expected_agg = gb.agg(reduction, engine="cython", **kwargs) - tm.assert_frame_equal(res_agg, expected_agg) - # Test that calling the aggregation directly also works - direct_res = getattr(gb, reduction)(engine="numba", **kwargs) - direct_expected = getattr(gb, reduction)(engine="cython", **kwargs) - tm.assert_frame_equal(direct_res, direct_expected) - - -def test_multilabel_udf_numba_vs_cython(): - pytest.importorskip("numba") - df = DataFrame( - { - "A": ["foo", "bar", "foo", "bar", "foo", 
"bar", "foo", "foo"], - "B": ["one", "one", "two", "three", "two", "two", "one", "three"], - "C": np.random.default_rng(2).standard_normal(8), - "D": np.random.default_rng(2).standard_normal(8), - } - ) - gb = df.groupby(["A", "B"]) - result = gb.agg(lambda values, index: values.min(), engine="numba") - expected = gb.agg(lambda x: x.min(), engine="cython") - tm.assert_frame_equal(result, expected) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/period/test_setops.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/period/test_setops.py deleted file mode 100644 index af89d712b5565adbd350f06c4546394c1c4bb784..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/period/test_setops.py +++ /dev/null @@ -1,361 +0,0 @@ -import numpy as np -import pytest - -import pandas as pd -from pandas import ( - PeriodIndex, - date_range, - period_range, -) -import pandas._testing as tm - - -def _permute(obj): - return obj.take(np.random.default_rng(2).permutation(len(obj))) - - -class TestPeriodIndex: - def test_union(self, sort): - # union - other1 = period_range("1/1/2000", freq="D", periods=5) - rng1 = period_range("1/6/2000", freq="D", periods=5) - expected1 = PeriodIndex( - [ - "2000-01-06", - "2000-01-07", - "2000-01-08", - "2000-01-09", - "2000-01-10", - "2000-01-01", - "2000-01-02", - "2000-01-03", - "2000-01-04", - "2000-01-05", - ], - freq="D", - ) - - rng2 = period_range("1/1/2000", freq="D", periods=5) - other2 = period_range("1/4/2000", freq="D", periods=5) - expected2 = period_range("1/1/2000", freq="D", periods=8) - - rng3 = period_range("1/1/2000", freq="D", periods=5) - other3 = PeriodIndex([], freq="D") - expected3 = period_range("1/1/2000", freq="D", periods=5) - - rng4 = period_range("2000-01-01 09:00", freq="H", periods=5) - other4 = period_range("2000-01-02 09:00", freq="H", periods=5) - expected4 = PeriodIndex( - [ - "2000-01-01 09:00", - "2000-01-01 10:00", - "2000-01-01 11:00", - "2000-01-01 12:00", - "2000-01-01 13:00", - "2000-01-02 09:00", - "2000-01-02 10:00", - "2000-01-02 11:00", - "2000-01-02 12:00", - "2000-01-02 13:00", - ], - freq="H", - ) - - rng5 = PeriodIndex( - ["2000-01-01 09:01", "2000-01-01 09:03", "2000-01-01 09:05"], freq="T" - ) - other5 = PeriodIndex( - ["2000-01-01 09:01", "2000-01-01 09:05", "2000-01-01 09:08"], freq="T" - ) - expected5 = PeriodIndex( - [ - "2000-01-01 09:01", - "2000-01-01 09:03", - "2000-01-01 09:05", - "2000-01-01 09:08", - ], - freq="T", - ) - - rng6 = period_range("2000-01-01", freq="M", periods=7) - other6 = period_range("2000-04-01", freq="M", periods=7) - expected6 = period_range("2000-01-01", freq="M", periods=10) - - rng7 = period_range("2003-01-01", freq="A", periods=5) - other7 = period_range("1998-01-01", freq="A", periods=8) - expected7 = PeriodIndex( - [ - "2003", - "2004", - "2005", - "2006", - "2007", - "1998", - "1999", - "2000", - "2001", - "2002", - ], - freq="A", - ) - - rng8 = PeriodIndex( - ["1/3/2000", "1/2/2000", "1/1/2000", "1/5/2000", "1/4/2000"], freq="D" - ) - other8 = period_range("1/6/2000", freq="D", periods=5) - expected8 = PeriodIndex( - [ - "1/3/2000", - "1/2/2000", - "1/1/2000", - "1/5/2000", - "1/4/2000", - "1/6/2000", - "1/7/2000", - "1/8/2000", - "1/9/2000", - "1/10/2000", - ], - freq="D", - ) - - for rng, other, expected in [ - (rng1, other1, expected1), - (rng2, other2, expected2), - (rng3, other3, expected3), - (rng4, other4, expected4), - 
(rng5, other5, expected5), - (rng6, other6, expected6), - (rng7, other7, expected7), - (rng8, other8, expected8), - ]: - result_union = rng.union(other, sort=sort) - if sort is None: - expected = expected.sort_values() - tm.assert_index_equal(result_union, expected) - - def test_union_misc(self, sort): - index = period_range("1/1/2000", "1/20/2000", freq="D") - - result = index[:-5].union(index[10:], sort=sort) - tm.assert_index_equal(result, index) - - # not in order - result = _permute(index[:-5]).union(_permute(index[10:]), sort=sort) - if sort is None: - tm.assert_index_equal(result, index) - assert tm.equalContents(result, index) - - # cast if different frequencies - index = period_range("1/1/2000", "1/20/2000", freq="D") - index2 = period_range("1/1/2000", "1/20/2000", freq="W-WED") - result = index.union(index2, sort=sort) - expected = index.astype(object).union(index2.astype(object), sort=sort) - tm.assert_index_equal(result, expected) - - def test_intersection(self, sort): - index = period_range("1/1/2000", "1/20/2000", freq="D") - - result = index[:-5].intersection(index[10:], sort=sort) - tm.assert_index_equal(result, index[10:-5]) - - # not in order - left = _permute(index[:-5]) - right = _permute(index[10:]) - result = left.intersection(right, sort=sort) - if sort is None: - tm.assert_index_equal(result, index[10:-5]) - assert tm.equalContents(result, index[10:-5]) - - # cast if different frequencies - index = period_range("1/1/2000", "1/20/2000", freq="D") - index2 = period_range("1/1/2000", "1/20/2000", freq="W-WED") - - result = index.intersection(index2, sort=sort) - expected = pd.Index([], dtype=object) - tm.assert_index_equal(result, expected) - - index3 = period_range("1/1/2000", "1/20/2000", freq="2D") - result = index.intersection(index3, sort=sort) - tm.assert_index_equal(result, expected) - - def test_intersection_cases(self, sort): - base = period_range("6/1/2000", "6/30/2000", freq="D", name="idx") - - # if target has the same name, it is preserved - rng2 = period_range("5/15/2000", "6/20/2000", freq="D", name="idx") - expected2 = period_range("6/1/2000", "6/20/2000", freq="D", name="idx") - - # if target name is different, it will be reset - rng3 = period_range("5/15/2000", "6/20/2000", freq="D", name="other") - expected3 = period_range("6/1/2000", "6/20/2000", freq="D", name=None) - - rng4 = period_range("7/1/2000", "7/31/2000", freq="D", name="idx") - expected4 = PeriodIndex([], name="idx", freq="D") - - for rng, expected in [ - (rng2, expected2), - (rng3, expected3), - (rng4, expected4), - ]: - result = base.intersection(rng, sort=sort) - tm.assert_index_equal(result, expected) - assert result.name == expected.name - assert result.freq == expected.freq - - # non-monotonic - base = PeriodIndex( - ["2011-01-05", "2011-01-04", "2011-01-02", "2011-01-03"], - freq="D", - name="idx", - ) - - rng2 = PeriodIndex( - ["2011-01-04", "2011-01-02", "2011-02-02", "2011-02-03"], - freq="D", - name="idx", - ) - expected2 = PeriodIndex(["2011-01-04", "2011-01-02"], freq="D", name="idx") - - rng3 = PeriodIndex( - ["2011-01-04", "2011-01-02", "2011-02-02", "2011-02-03"], - freq="D", - name="other", - ) - expected3 = PeriodIndex(["2011-01-04", "2011-01-02"], freq="D", name=None) - - rng4 = period_range("7/1/2000", "7/31/2000", freq="D", name="idx") - expected4 = PeriodIndex([], freq="D", name="idx") - - for rng, expected in [ - (rng2, expected2), - (rng3, expected3), - (rng4, expected4), - ]: - result = base.intersection(rng, sort=sort) - if sort is None: - expected = 
expected.sort_values() - tm.assert_index_equal(result, expected) - assert result.name == expected.name - assert result.freq == "D" - - # empty same freq - rng = date_range("6/1/2000", "6/15/2000", freq="T") - result = rng[0:0].intersection(rng) - assert len(result) == 0 - - result = rng.intersection(rng[0:0]) - assert len(result) == 0 - - def test_difference(self, sort): - # diff - period_rng = ["1/3/2000", "1/2/2000", "1/1/2000", "1/5/2000", "1/4/2000"] - rng1 = PeriodIndex(period_rng, freq="D") - other1 = period_range("1/6/2000", freq="D", periods=5) - expected1 = rng1 - - rng2 = PeriodIndex(period_rng, freq="D") - other2 = period_range("1/4/2000", freq="D", periods=5) - expected2 = PeriodIndex(["1/3/2000", "1/2/2000", "1/1/2000"], freq="D") - - rng3 = PeriodIndex(period_rng, freq="D") - other3 = PeriodIndex([], freq="D") - expected3 = rng3 - - period_rng = [ - "2000-01-01 10:00", - "2000-01-01 09:00", - "2000-01-01 12:00", - "2000-01-01 11:00", - "2000-01-01 13:00", - ] - rng4 = PeriodIndex(period_rng, freq="H") - other4 = period_range("2000-01-02 09:00", freq="H", periods=5) - expected4 = rng4 - - rng5 = PeriodIndex( - ["2000-01-01 09:03", "2000-01-01 09:01", "2000-01-01 09:05"], freq="T" - ) - other5 = PeriodIndex(["2000-01-01 09:01", "2000-01-01 09:05"], freq="T") - expected5 = PeriodIndex(["2000-01-01 09:03"], freq="T") - - period_rng = [ - "2000-02-01", - "2000-01-01", - "2000-06-01", - "2000-07-01", - "2000-05-01", - "2000-03-01", - "2000-04-01", - ] - rng6 = PeriodIndex(period_rng, freq="M") - other6 = period_range("2000-04-01", freq="M", periods=7) - expected6 = PeriodIndex(["2000-02-01", "2000-01-01", "2000-03-01"], freq="M") - - period_rng = ["2003", "2007", "2006", "2005", "2004"] - rng7 = PeriodIndex(period_rng, freq="A") - other7 = period_range("1998-01-01", freq="A", periods=8) - expected7 = PeriodIndex(["2007", "2006"], freq="A") - - for rng, other, expected in [ - (rng1, other1, expected1), - (rng2, other2, expected2), - (rng3, other3, expected3), - (rng4, other4, expected4), - (rng5, other5, expected5), - (rng6, other6, expected6), - (rng7, other7, expected7), - ]: - result_difference = rng.difference(other, sort=sort) - if sort is None and len(other): - # We dont sort (yet?) 
when empty GH#24959 - expected = expected.sort_values() - tm.assert_index_equal(result_difference, expected) - - def test_difference_freq(self, sort): - # GH14323: difference of Period MUST preserve frequency - # but the ability to union results must be preserved - - index = period_range("20160920", "20160925", freq="D") - - other = period_range("20160921", "20160924", freq="D") - expected = PeriodIndex(["20160920", "20160925"], freq="D") - idx_diff = index.difference(other, sort) - tm.assert_index_equal(idx_diff, expected) - tm.assert_attr_equal("freq", idx_diff, expected) - - other = period_range("20160922", "20160925", freq="D") - idx_diff = index.difference(other, sort) - expected = PeriodIndex(["20160920", "20160921"], freq="D") - tm.assert_index_equal(idx_diff, expected) - tm.assert_attr_equal("freq", idx_diff, expected) - - def test_intersection_equal_duplicates(self): - # GH#38302 - idx = period_range("2011-01-01", periods=2) - idx_dup = idx.append(idx) - result = idx_dup.intersection(idx_dup) - tm.assert_index_equal(result, idx) - - @pytest.mark.filterwarnings(r"ignore:PeriodDtype\[B\] is deprecated:FutureWarning") - def test_union_duplicates(self): - # GH#36289 - idx = period_range("2011-01-01", periods=2) - idx_dup = idx.append(idx) - - idx2 = period_range("2011-01-02", periods=2) - idx2_dup = idx2.append(idx2) - result = idx_dup.union(idx2_dup) - - expected = PeriodIndex( - [ - "2011-01-01", - "2011-01-01", - "2011-01-02", - "2011-01-02", - "2011-01-03", - "2011-01-03", - ], - freq="D", - ) - tm.assert_index_equal(result, expected) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/metadata/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/metadata/__init__.py deleted file mode 100644 index cc037c14f083a2bd3c8c32190c4455222c7cb980..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/metadata/__init__.py +++ /dev/null @@ -1,62 +0,0 @@ -from typing import List, Optional - -from .base import BaseDistribution, BaseEnvironment, FilesystemWheel, MemoryWheel, Wheel - -__all__ = [ - "BaseDistribution", - "BaseEnvironment", - "FilesystemWheel", - "MemoryWheel", - "Wheel", - "get_default_environment", - "get_environment", - "get_wheel_distribution", -] - - -def get_default_environment() -> BaseEnvironment: - """Get the default representation for the current environment. - - This returns an Environment instance from the chosen backend. The default - Environment instance should be built from ``sys.path`` and may use caching - to share instance state accorss calls. - """ - from .pkg_resources import Environment - - return Environment.default() - - -def get_environment(paths: Optional[List[str]]) -> BaseEnvironment: - """Get a representation of the environment specified by ``paths``. - - This returns an Environment instance from the chosen backend based on the - given import paths. The backend must build a fresh instance representing - the state of installed distributions when this function is called. - """ - from .pkg_resources import Environment - - return Environment.from_paths(paths) - - -def get_directory_distribution(directory: str) -> BaseDistribution: - """Get the distribution metadata representation in the specified directory. - - This returns a Distribution instance from the chosen backend based on - the given on-disk ``.dist-info`` directory. 
- """ - from .pkg_resources import Distribution - - return Distribution.from_directory(directory) - - -def get_wheel_distribution(wheel: Wheel, canonical_name: str) -> BaseDistribution: - """Get the representation of the specified wheel's distribution metadata. - - This returns a Distribution instance from the chosen backend based on - the given wheel's ``.dist-info`` directory. - - :param canonical_name: Normalized project name of the given wheel. - """ - from .pkg_resources import Distribution - - return Distribution.from_wheel(wheel, canonical_name) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/chardet/charsetprober.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/chardet/charsetprober.py deleted file mode 100644 index eac4e5986578636ad414648e6015e8b7e9f10432..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/chardet/charsetprober.py +++ /dev/null @@ -1,145 +0,0 @@ -######################## BEGIN LICENSE BLOCK ######################## -# The Original Code is Mozilla Universal charset detector code. -# -# The Initial Developer of the Original Code is -# Netscape Communications Corporation. -# Portions created by the Initial Developer are Copyright (C) 2001 -# the Initial Developer. All Rights Reserved. -# -# Contributor(s): -# Mark Pilgrim - port to Python -# Shy Shalom - original C code -# -# This library is free software; you can redistribute it and/or -# modify it under the terms of the GNU Lesser General Public -# License as published by the Free Software Foundation; either -# version 2.1 of the License, or (at your option) any later version. -# -# This library is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -# Lesser General Public License for more details. -# -# You should have received a copy of the GNU Lesser General Public -# License along with this library; if not, write to the Free Software -# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA -# 02110-1301 USA -######################### END LICENSE BLOCK ######################### - -import logging -import re - -from .enums import ProbingState - - -class CharSetProber(object): - - SHORTCUT_THRESHOLD = 0.95 - - def __init__(self, lang_filter=None): - self._state = None - self.lang_filter = lang_filter - self.logger = logging.getLogger(__name__) - - def reset(self): - self._state = ProbingState.DETECTING - - @property - def charset_name(self): - return None - - def feed(self, buf): - pass - - @property - def state(self): - return self._state - - def get_confidence(self): - return 0.0 - - @staticmethod - def filter_high_byte_only(buf): - buf = re.sub(b'([\x00-\x7F])+', b' ', buf) - return buf - - @staticmethod - def filter_international_words(buf): - """ - We define three types of bytes: - alphabet: english alphabets [a-zA-Z] - international: international characters [\x80-\xFF] - marker: everything else [^a-zA-Z\x80-\xFF] - - The input buffer can be thought to contain a series of words delimited - by markers. This function works to filter all words that contain at - least one international character. All contiguous sequences of markers - are replaced by a single space ascii character. - - This filter applies to all scripts which do not use English characters. 
- """ - filtered = bytearray() - - # This regex expression filters out only words that have at-least one - # international character. The word may include one marker character at - # the end. - words = re.findall(b'[a-zA-Z]*[\x80-\xFF]+[a-zA-Z]*[^a-zA-Z\x80-\xFF]?', - buf) - - for word in words: - filtered.extend(word[:-1]) - - # If the last character in the word is a marker, replace it with a - # space as markers shouldn't affect our analysis (they are used - # similarly across all languages and may thus have similar - # frequencies). - last_char = word[-1:] - if not last_char.isalpha() and last_char < b'\x80': - last_char = b' ' - filtered.extend(last_char) - - return filtered - - @staticmethod - def filter_with_english_letters(buf): - """ - Returns a copy of ``buf`` that retains only the sequences of English - alphabet and high byte characters that are not between <> characters. - Also retains English alphabet and high byte characters immediately - before occurrences of >. - - This filter can be applied to all scripts which contain both English - characters and extended ASCII characters, but is currently only used by - ``Latin1Prober``. - """ - filtered = bytearray() - in_tag = False - prev = 0 - - for curr in range(len(buf)): - # Slice here to get bytes instead of an int with Python 3 - buf_char = buf[curr:curr + 1] - # Check if we're coming out of or entering an HTML tag - if buf_char == b'>': - in_tag = False - elif buf_char == b'<': - in_tag = True - - # If current character is not extended-ASCII and not alphabetic... - if buf_char < b'\x80' and not buf_char.isalpha(): - # ...and we're not in a tag - if curr > prev and not in_tag: - # Keep everything after last non-extended-ASCII, - # non-alphabetic character - filtered.extend(buf[prev:curr]) - # Output a space to delimit stretch we kept - filtered.extend(b' ') - prev = curr + 1 - - # If we're not in a tag... - if not in_tag: - # Keep everything after last non-extended-ASCII, non-alphabetic - # character - filtered.extend(buf[prev:]) - - return filtered diff --git a/spaces/quidiaMuxgu/Expedit-SAM/7 Secrets Of Shiva Epub 170.md b/spaces/quidiaMuxgu/Expedit-SAM/7 Secrets Of Shiva Epub 170.md deleted file mode 100644 index 6b91e2cf6c6b5fd2d1103077198c4fe6b808dad5..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/7 Secrets Of Shiva Epub 170.md +++ /dev/null @@ -1,6 +0,0 @@ -

    7 Secrets Of Shiva Epub 170


    DOWNLOAD ===== https://geags.com/2uCrwV



    - -Free eBook Seven Secrets Of Shiva ~~ Uploaded By Stan and Jan ... having a you can read this before 7 secrets of shiva pdf epub full download at the bottom ... like 7 secrets of shiva epub 170 download laser vision correction center home 7 ... 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Call Of Duty Advanced Warfare Insufficient Free Disk Space Crack.md b/spaces/quidiaMuxgu/Expedit-SAM/Call Of Duty Advanced Warfare Insufficient Free Disk Space Crack.md deleted file mode 100644 index 19f6ab0f2e28861c9790013241c051411f09c8f6..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Call Of Duty Advanced Warfare Insufficient Free Disk Space Crack.md +++ /dev/null @@ -1,6 +0,0 @@ -

    call of duty advanced warfare insufficient free disk space crack


    Download File ––– https://geags.com/2uCsNm



    -
    -The technique of triphibious warfare was evolved and became so ... The first obligation was spectacularly fulfilled in the Battle of the Bismarck Sea. ... decent airfield unserviceable, but also left every repair shop and storage depot a shambles. ... For the balance of the Philippines campaign, the 5th Air Force was free to roam ... 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/quidiaMuxgu/Expedit-SAM/File Scavenger Keygen Free Download TOP.md b/spaces/quidiaMuxgu/Expedit-SAM/File Scavenger Keygen Free Download TOP.md deleted file mode 100644 index df8bd77556c8bbbdde8994151ed040d407ecbacd..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/File Scavenger Keygen Free Download TOP.md +++ /dev/null @@ -1,6 +0,0 @@ -

    file scavenger keygen free download


    Download > https://geags.com/2uCrlq



    - - 1fdad05405
    -
    -
    -

    diff --git a/spaces/quidiaMuxgu/Expedit-SAM/MinnaNoNihongo2TerjemahanIndonesia103pdf.md b/spaces/quidiaMuxgu/Expedit-SAM/MinnaNoNihongo2TerjemahanIndonesia103pdf.md deleted file mode 100644 index fce4b1fa99aa686348fb64ba692936976463020b..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/MinnaNoNihongo2TerjemahanIndonesia103pdf.md +++ /dev/null @@ -1,6 +0,0 @@ -

    MinnaNoNihongo2TerjemahanIndonesia103pdf


    DOWNLOADhttps://geags.com/2uCqbZ



    -
    -... or unreleased Red Hot Chili Peppers tracks. Nothing contained in this... 71b77ec3ef MinnaNoNihongo2TerjemahanIndonesia103pdf 1fdad05405
    -
    -
    -

    diff --git a/spaces/raedeXanto/academic-chatgpt-beta/AspenTech Aspen Exchanger Design Rating 7.3.rar Case Studies and Testimonials from Customers Using the Heat Exchanger Software.md b/spaces/raedeXanto/academic-chatgpt-beta/AspenTech Aspen Exchanger Design Rating 7.3.rar Case Studies and Testimonials from Customers Using the Heat Exchanger Software.md deleted file mode 100644 index 37556541d1da0cb38e30b03c48ccd34de23f9462..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/AspenTech Aspen Exchanger Design Rating 7.3.rar Case Studies and Testimonials from Customers Using the Heat Exchanger Software.md +++ /dev/null @@ -1,78 +0,0 @@ - -

    What is AspenTech Aspen Exchanger Design Rating 7.3.rar?

    -

If you are looking for comprehensive and reliable software for heat exchanger design and rating, you might want to check out AspenTech Aspen Exchanger Design Rating 7.3.rar. This software is developed by Aspen Technology, a leading provider of engineering software for process industries.

    -

    AspenTech Aspen Exchanger Design Rating 7.3.rar


    Download Zip · https://tinourl.com/2uL0Cq



    -

    AspenTech Aspen Exchanger Design Rating 7.3.rar is part of the Aspen EDR suite, which delivers a range of heat exchanger design and rating software that can help you optimize capital expenditure (CAPEX) and operational expenditure (OPEX) by rigorously modeling heat exchangers within the larger process context.

    -

    With this software, you can design all major heat exchanger types, including shell and tube, fired heater, plate, plate-fin, coil-wound, air-cooled and more. You can also improve mechanical shell-and-tube exchanger design quality by using advanced features such as vibration analysis, tube layout optimization, thermal expansion analysis, etc.

    -

    Moreover, you can fully integrate heat exchanger designs within Aspen HYSYS and Aspen Plus, two popular process simulation tools from Aspen Technology, to produce the most optimal designs at the right economics. You can also access the full heat exchanger research library from HTFS, which contains thousands of data points and correlations for heat transfer and pressure drop calculations.

    -

    How to download AspenTech Aspen Exchanger Design Rating 7.3.rar
    -AspenTech Aspen Exchanger Design Rating 7.3.rar free trial
    -AspenTech Aspen Exchanger Design Rating 7.3.rar crack
    -AspenTech Aspen Exchanger Design Rating 7.3.rar license key
    -AspenTech Aspen Exchanger Design Rating 7.3.rar tutorial
    -AspenTech Aspen Exchanger Design Rating 7.3.rar user manual
    -AspenTech Aspen Exchanger Design Rating 7.3.rar system requirements
    -AspenTech Aspen Exchanger Design Rating 7.3.rar features
    -AspenTech Aspen Exchanger Design Rating 7.3.rar review
    -AspenTech Aspen Exchanger Design Rating 7.3.rar price
    -AspenTech Aspen Exchanger Design Rating 7.3.rar alternatives
    -AspenTech Aspen Exchanger Design Rating 7.3.rar vs HTRI Xchanger Suite
    -AspenTech Aspen Exchanger Design Rating 7.3.rar installation guide
    -AspenTech Aspen Exchanger Design Rating 7.3.rar error codes
    -AspenTech Aspen Exchanger Design Rating 7.3.rar support
    -AspenTech Aspen Exchanger Design Rating 7.3.rar online course
    -AspenTech Aspen Exchanger Design Rating 7.3.rar certification
    -AspenTech Aspen Exchanger Design Rating 7.3.rar benefits
    -AspenTech Aspen Exchanger Design Rating 7.3.rar disadvantages
    -AspenTech Aspen Exchanger Design Rating 7.3.rar comparison
    -How to use AspenTech Aspen Exchanger Design Rating 7.3.rar
    -How to update AspenTech Aspen Exchanger Design Rating 7.3.rar
    -How to uninstall AspenTech Aspen Exchanger Design Rating 7.3.rar
    -How to optimize AspenTech Aspen Exchanger Design Rating 7.3.rar performance
    -How to troubleshoot AspenTech Aspen Exchanger Design Rating 7.3.rar issues
    -How to import data into AspenTech Aspen Exchanger Design Rating 7.3.rar
    -How to export data from AspenTech Aspen Exchanger Design Rating 7.3.rar
    -How to customize AspenTech Aspen Exchanger Design Rating 7.3.rar settings
    -How to integrate AspenTech Aspen Exchanger Design Rating 7.3.rar with other software
    -How to run simulations with AspenTech Aspen Exchanger Design Rating 7.3.rar
    -How to design heat exchangers with Aspentech aspen exchanger design rating 7.3.rar
    -How to rate heat exchangers with Aspentech aspen exchanger design rating 7.3.rar
    -How to analyze heat exchangers with Aspentech aspen exchanger design rating 7.3.rar
    -How to validate heat exchangers with Aspentech aspen exchanger design rating 7.3.rar
    -How to optimize heat exchangers with Aspentech aspen exchanger design rating 7.3.rar
    -How to select materials for heat exchangers with Aspentech aspen exchanger design rating 7.3.rar
    -How to calculate pressure drop for heat exchangers with Aspentech aspen exchanger design rating 7.3.rar
    -How to estimate fouling factors for heat exchangers with Aspentech aspen exchanger design rating 7.3.rar
    -How to handle phase changes for heat exchangers with Aspentech aspen exchanger design rating 7.3.rar
    -How to model different types of heat exchangers with Aspentech aspen exchanger design rating 7.3.rar
    -Aspentech aspen exchanger design rating 7.3 rar best practices
    -Aspentech aspen exchanger design rating 7.3 rar tips and tricks
    -Aspentech aspen exchanger design rating 7.3 rar case studies
    -Aspentech aspen exchanger design rating 7.3 rar testimonials
    -Aspentech aspen exchanger design rating 7.3 rar FAQs
    -Aspentech aspen exchanger design rating 7.3 rar forum
    -Aspentech aspen exchanger design rating 7.3 rar blog
    -Aspentech aspen exchanger design rating 7.3 rar webinar
    -Aspentech aspen exchanger design rating 7.3 rar youtube channel
    -Aspentech aspen exchanger design rating 7.3 rar download link

    -

    In addition, you can rely on a vast physical property database that covers over 37,000 components, 127 property packages and 5 million data points and interaction parameters. You can also employ state-of-the-art activity coefficient models and equations of state for accurate thermodynamic calculations.

    -

    Finally, you can comply with the latest ASME standards for heat exchanger design, such as BPV Section VIII Division 1 & 2, 2017 edition.

    -

    Why use AspenTech Aspen Exchanger Design Rating 7.3.rar?

    -

    There are many reasons why you should use AspenTech Aspen Exchanger Design Rating 7.3.rar for your heat exchanger design and rating needs. Here are some of them:

    -
      -
    • You can save time and money by designing heat exchangers that meet your performance specifications and process requirements.
    • -
    • You can improve your process efficiency and reliability by optimizing heat exchanger configurations and operating conditions.
    • -
    • You can reduce your environmental impact by minimizing energy consumption and emissions from your heat exchangers.
    • -
    • You can enhance your collaboration and communication with other engineers by sharing consistent and accurate data across different disciplines.
    • -
    • You can leverage the expertise and experience of Aspen Technology, which has been developing engineering software for over 40 years.
    • -
    -

    How to download and install AspenTech Aspen Exchanger Design Rating 7.3.rar?

    -

    To download and install AspenTech Aspen Exchanger Design Rating 7.3.rar, you have two options:

    -
      -
    1. You can download it from the official website of Aspen Technology. You will need to register an account or log in with your existing credentials. Then, you will need to select the product name, version number, platform type, language preference, etc. After that, you will be able to download the installation file (Aspen-Exchanger-Design-&-Rating.exe) which is about 1 GB in size.
    2. -
    3. You can download it from other sources such as Software Informer or SoundCloud. However, these sources may not be reliable or secure, so you should exercise caution when downloading files from them.
    4. -
    -

    To install AspenTech Aspen Exchanger Design Rating 7.3.rar, you will need to run the installation file (Aspen-Exchanger-Design-&-Rating.exe) as an administrator on your computer. Then, you will need to follow the instructions on the screen to complete the installation process. You may need to restart your computer after the installation is finished.

    -

    How to use AspenTech Aspen Exchanger Design Rating 7.3.rar? 0a6ba089eb
    -
    -
    \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Cod Black Ops 2 Pc Split Screen Mod.md b/spaces/raedeXanto/academic-chatgpt-beta/Cod Black Ops 2 Pc Split Screen Mod.md deleted file mode 100644 index 01e9c27af54bb21f45ee52b85f1686b2beb9ab4b..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Cod Black Ops 2 Pc Split Screen Mod.md +++ /dev/null @@ -1,25 +0,0 @@ -
    -

    How to Play COD Black Ops 2 Multiplayer with Split Screen on PC

    -

If you are a fan of Call of Duty: Black Ops II and want to enjoy its multiplayer mode with your friends on the same PC, you might be interested in this guide. In this article, we will show you how to use a tool called Nucleus Co-Op to play local, split-screen multiplayer with up to four players on your computer. You will need at least two gamepads, admin access to your PC, and the unmodded Steam version of the game. Follow these steps to set up and play COD Black Ops 2 multiplayer with split screen on PC.

    -

    Cod Black Ops 2 Pc Split Screen Mod


    Download Filehttps://tinourl.com/2uL013



    -
      -
    1. Download and install Nucleus Co-Op, a program that allows you to play various games in split-screen mode. The password for the download is "nucleus". You may also need to install 7-Zip to extract the files.
    2. -
    3. Extract and place the Nucleus Co-Op files in a new folder. You can name it anything, but we recommend "Nucleus Coop". Place this folder in C:/Program Files (x86).
    4. -
    5. Make sure Call of Duty: Black Ops II is located on the same hard drive as the "Nucleus Coop" folder. If not, move them to the same drive. Do not put the "Nucleus Coop" folder into your Call of Duty: Black Ops II directory.
    6. -
    7. If you are using a PS4 or PS5 controller, you have to use the DS4Windows program to make it compatible with Nucleus Co-Op. Go to the settings tab in DS4Windows, click "Install ViGEmBus Driver", then restart your computer.
    8. -
    9. Right-click on the NucleusCoop.exe file, go to properties, go to the compatibility section, check the box marked "Run this program as an administrator". Press OK.
    10. -
    11. Run Call of Duty: Black Ops II once and then exit.
    12. -
    13. Start NucleusCoop.exe and click on the "Download Game Scripts" button in the bottom left.
    14. -
    15. Search for Call of Duty: Black Ops II and find Call of Duty: Black Ops II Multiplayer. Click on it, then click "Download".
    16. -
    17. A dialogue will pop up asking you to locate your Call of Duty: Black Ops II executable file. You can find it by right-clicking on the game in your Steam library, going to manage, then clicking on browse local files.
    18. -
    19. Set up your controllers by clicking on the small box below the keyboard icon. Drag your controllers into your desired configuration. If you have a multi-monitor setup, you can also set up each screen for split-screen here.
    20. -
    21. Hit the right arrow button in the top right, below "Mod version", then the "Play" button.
    22. -
    23. You can now play COD Black Ops 2 multiplayer with split screen on PC. There are two ways to start a game:
    24. -
        -
• The first player starts searching for an online game (it will only find players connected to your local network). After around 7 seconds it will stop searching and start hosting instead; at this point the other players can join by searching the same game mode via the online menu. This mode fully simulates online play, automatically populating the empty player slots with bots. You can also rank up and unlock items as you would online.
      • -
      • The first player creates a custom game and then uses the friends menu to send invites to the other players. For invites to work reliably make sure you select Online at the main menu BEFORE the next instance opens.
      • -
      -
    -

    We hope this guide helped you play COD Black Ops 2 multiplayer with split screen on PC. If you have any questions or issues, you can check out the Nucleus Co-Op FAQ, join their Discord server

    7b8c122e87
    -
    -
    \ No newline at end of file diff --git a/spaces/rahul2001/student_performance/src/Components/model_tranier.py b/spaces/rahul2001/student_performance/src/Components/model_tranier.py deleted file mode 100644 index db8c52a7eeb4defcad2f0e52294481b2214f86fc..0000000000000000000000000000000000000000 --- a/spaces/rahul2001/student_performance/src/Components/model_tranier.py +++ /dev/null @@ -1,110 +0,0 @@ -# Basic Import -import numpy as np -import pandas as pd -import matplotlib.pyplot as plt -import seaborn as sns -import os -# Modelling -from sklearn.metrics import mean_squared_error, r2_score -from sklearn.neighbors import KNeighborsRegressor -from sklearn.tree import DecisionTreeRegressor -from sklearn.ensemble import RandomForestRegressor,AdaBoostRegressor,GradientBoostingRegressor -from sklearn.svm import SVR -from sklearn.linear_model import LinearRegression, Ridge,Lasso -from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error -from sklearn.model_selection import RandomizedSearchCV -from catboost import CatBoostRegressor -from xgboost import XGBRegressor -import warnings -import sys -from dataclasses import dataclass -from src.utils import save_object,evaluate_model -from src.logger import logging -from src.exception import CustomException - - -@dataclass - -class Model_training_config: - trained_model_path = os.path.join("artifact","model.pkl") -class Model_trainer: - def __init__(self) -> None: - self.model_trainer_config = Model_training_config() - - def intiate_model_trainer(self,train_array,test_array): - try: - logging.info("Split training and testing data ") - x_train,y_train,x_test,y_test = ( - train_array[:,:-1], - train_array[:,-1], - test_array[:,:-1], - test_array[:,-1] - ) - models={ - "Random Forest": RandomForestRegressor(), - "Decision Tree": DecisionTreeRegressor(), - "Gradient Boosting": GradientBoostingRegressor(), - "Linear Regression": LinearRegression(), - "XGBRegressor": XGBRegressor(), - "CatBoosting Regressor": CatBoostRegressor(verbose=False), - "AdaBoost Regressor": AdaBoostRegressor(), - } - params={ - "Decision Tree": { - 'criterion':['squared_error', 'friedman_mse', 'absolute_error', 'poisson'], - # 'splitter':['best','random'], - # 'max_features':['sqrt','log2'], - }, - "Random Forest":{ - # 'criterion':['squared_error', 'friedman_mse', 'absolute_error', 'poisson'], - - # 'max_features':['sqrt','log2',None], - 'n_estimators': [8,16,32,64,128,256] - }, - "Gradient Boosting":{ - # 'loss':['squared_error', 'huber', 'absolute_error', 'quantile'], - 'learning_rate':[.1,.01,.05,.001], - 'subsample':[0.6,0.7,0.75,0.8,0.85,0.9], - # 'criterion':['squared_error', 'friedman_mse'], - # 'max_features':['auto','sqrt','log2'], - 'n_estimators': [8,16,32,64,128,256] - }, - "Linear Regression":{}, - "XGBRegressor":{ - 'learning_rate':[.1,.01,.05,.001], - 'n_estimators': [8,16,32,64,128,256] - }, - "CatBoosting Regressor":{ - 'depth': [6,8,10], - 'learning_rate': [0.01, 0.05, 0.1], - 'iterations': [30, 50, 100] - }, - "AdaBoost Regressor":{ - 'learning_rate':[.1,.01,0.5,.001], - # 'loss':['linear','square','exponential'], - 'n_estimators': [8,16,32,64,128,256] - } - - } - model_report:dict = evaluate_model(X=x_train,Y = y_train,X_test = x_test,Y_test=y_test,Models = models,Param = params) - - best_model_score = max(sorted(model_report.values())) - - best_model_nm = list(model_report.keys())[ - list(model_report.values()).index(best_model_score) - ] - best_model = models[best_model_nm] - if best_model_score < 0.6: - raise CustomException("No best 
model found") - logging.info("Best model Found") - - save_object(file_path= Model_training_config.trained_model_path, - obj = best_model ) - predicted = best_model.predict(x_test) - r2score = r2_score(y_test,predicted) - return r2score - - except Exception as e: - raise CustomException(e,sys) - - diff --git a/spaces/rahul999r/Rahul_Kannada_TTS/utils/inference/api.py b/spaces/rahul999r/Rahul_Kannada_TTS/utils/inference/api.py deleted file mode 100644 index d6bcabd194a4531801941d5e1d248dc134ce255f..0000000000000000000000000000000000000000 --- a/spaces/rahul999r/Rahul_Kannada_TTS/utils/inference/api.py +++ /dev/null @@ -1,66 +0,0 @@ -from starlette.responses import StreamingResponse -from tts import MelToWav, TextToMel -from advanced_tts import load_all_models, run_tts_paragraph -from typing import Optional -from pydantic import BaseModel -from fastapi import FastAPI, HTTPException -import uvicorn -import base64 -import argparse -import json -import time -from argparse import Namespace - -app = FastAPI() - - -class TextJson(BaseModel): - text: str - lang: Optional[str] = "hi" - noise_scale: Optional[float]=0.667 - length_scale: Optional[float]=1.0 - transliteration: Optional[int]=1 - number_conversion: Optional[int]=1 - split_sentences: Optional[int]=1 - - - - -@app.post("/TTS/") -async def tts(input: TextJson): - text = input.text - lang = input.lang - - args = Namespace(**input.dict()) - - args.wav = '../../results/api/'+str(int(time.time())) + '.wav' - - if text: - sr, audio = run_tts_paragraph(args) - else: - raise HTTPException(status_code=400, detail={"error": "No text"}) - - ## to return outpur as a file - audio = open(args.wav, mode='rb') - return StreamingResponse(audio, media_type="audio/wav") - - # with open(args.wav, "rb") as audio_file: - # encoded_bytes = base64.b64encode(audio_file.read()) - # encoded_string = encoded_bytes.decode() - # return {"encoding": "base64", "data": encoded_string, "sr": sr} - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("-a", "--acoustic", required=True, type=str) - parser.add_argument("-v", "--vocoder", required=True, type=str) - parser.add_argument("-d", "--device", type=str, default="cpu") - parser.add_argument("-L", "--lang", type=str, required=True) - - args = parser.parse_args() - - load_all_models(args) - - uvicorn.run( - "api:app", host="0.0.0.0", port=6006, log_level="debug" - ) diff --git a/spaces/rajesh1729/gradio-realtime-news-app/app.py b/spaces/rajesh1729/gradio-realtime-news-app/app.py deleted file mode 100644 index 5028203dfc8d542f64ed4f8121d782cbb301ae84..0000000000000000000000000000000000000000 --- a/spaces/rajesh1729/gradio-realtime-news-app/app.py +++ /dev/null @@ -1,36 +0,0 @@ -import gradio as gr -import json -import requests -from newspaper import Article -from gradio.mix import Parallel, Series - - -def get_news(search): - #article_texts=[] - url = "https://free-news.p.rapidapi.com/v1/search" - querystring = {"q":search,"lang":"en", "page":1, "page_size":5} - headers = {'x-rapidapi-host': "free-news.p.rapidapi.com",'x-rapidapi-key': "375ffbaab0mshb442ffb69d6f025p117ba0jsn01e8146148e3"} - response = requests.request("GET", url, headers=headers, params=querystring) - response_dict = json.loads(response.text) - links = [response_dict['articles'][i]['link'] for i in range(len(response_dict['articles']))] - news_article = Article(links[0], language='en') - news_article.download() - news_article.parse() - #article_texts.append(news_article.text) - return news_article.text - -extractor = 
gr.Interface(get_news, 'text', 'text') -summarizer = gr.Interface.load("huggingface/facebook/bart-large-cnn") - - -iface = Series(extractor, summarizer, - inputs = gr.inputs.Textbox( - label = 'Type in a topic or your favorite celebrity name to fetch news on that topic/celebrity name' - ), - outputs = 'text', - title = 'Instant short News app with Gradio', - theme = 'peach', - layout = 'horizontal', - description = 'This app fetches a latest news article from the Internet based on the given search and displays a summary of that article') - -iface.launch(debug=True) diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Haseena Movie 720p Download Utorrent Movies.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Haseena Movie 720p Download Utorrent Movies.md deleted file mode 100644 index 14d25bbb3668278544c6a683cddd3ebca003d8ca..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Haseena Movie 720p Download Utorrent Movies.md +++ /dev/null @@ -1,9 +0,0 @@ - -

The script is a good one, though perhaps a tad simple for a film of this genre. But the film is more of a B-movie thriller, nothing else. It is a typical situation comedy with a bit of thriller and a dash of drama. The film gets exciting at times and dramatic at times, but it doesn't have much substance.

    -

    Haseena Movie 720p Download Utorrent Movies


    Download File ::: https://urlgoal.com/2uCKg8



    -

The story is a simple one, and one that is perhaps appropriate for a movie of this genre. The first third of the film, as scripted, is interesting. The second third, which is the most crucial, is a bit of a drag. But the film picks up after the second third, and it ends on a strong finish.

    -

Cinematographer Rajeev Mehta has captured the movie and its lead actors in some extraordinary sequences. At times, the camera freezes to provide a wonderful dolly view of the room as Koena takes out the gun and starts cleaning it. But the overall photography is average, and some of the scenes, especially the action scenes, aren't as sharp as they should be. For the most part, though, the movie is a more than decent one.

    -

Verma has once again proved that his writing skills are sharp. Even though he has put in some wonderful characters, they are not as memorable as we would have liked them to be. But the villain is surely the film's main attraction. Sure, the guy doesn't have the star quality of a Feroz Khan, but that doesn't mean he doesn't deserve a standing ovation from the audience. He is good, and he deserves to be at the top of the list. We will probably remember him for his performance till we die. In fact, apart from the villain, the movie's ensemble of characters deserves a salute.

    -

    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/riccorl/relik-entity-linking/relik/retriever/pytorch_modules/__init__.py b/spaces/riccorl/relik-entity-linking/relik/retriever/pytorch_modules/__init__.py deleted file mode 100644 index 01752b8aa79367a7bcdc2d18438ae87bebdd87f2..0000000000000000000000000000000000000000 --- a/spaces/riccorl/relik-entity-linking/relik/retriever/pytorch_modules/__init__.py +++ /dev/null @@ -1,28 +0,0 @@ -from dataclasses import dataclass - -import torch - -PRECISION_MAP = { - None: torch.float32, - 16: torch.float16, - 32: torch.float32, - "float16": torch.float16, - "float32": torch.float32, - "half": torch.float16, - "float": torch.float32, - "16": torch.float16, - "32": torch.float32, - "fp16": torch.float16, - "fp32": torch.float32, -} - - -@dataclass -class RetrievedSample: - """ - Dataclass for the output of the GoldenRetriever model. - """ - - score: float - index: int - label: str diff --git a/spaces/richardzhangy26/yandian_flow_classification/configs/_base_/models/liteflownet/liteflownet_pre_M5S5R5.py b/spaces/richardzhangy26/yandian_flow_classification/configs/_base_/models/liteflownet/liteflownet_pre_M5S5R5.py deleted file mode 100644 index 717f2ee0795c313f75ee9ae183feaf0e65892776..0000000000000000000000000000000000000000 --- a/spaces/richardzhangy26/yandian_flow_classification/configs/_base_/models/liteflownet/liteflownet_pre_M5S5R5.py +++ /dev/null @@ -1,53 +0,0 @@ -model = dict( - type='LiteFlowNet', - encoder=dict( - type='NetC', - in_channels=3, - pyramid_levels=[ - 'level1', 'level2', 'level3', 'level4', 'level5', 'level6' - ], - out_channels=(32, 32, 64, 96, 128, 192), - strides=(1, 2, 2, 2, 2, 2), - num_convs=(1, 3, 2, 2, 1, 1), - conv_cfg=None, - norm_cfg=None, - act_cfg=dict(type='LeakyReLU', negative_slope=0.1), - init_cfg=None), - decoder=dict( - type='NetE', - in_channels=dict(level5=128, level6=192), - corr_channels=dict(level5=49, level6=49), - sin_channels=dict(level5=258, level6=386), - rin_channels=dict(level5=131, level6=195), - feat_channels=64, - mfeat_channels=(128, 64, 32), - sfeat_channels=(128, 64, 32), - rfeat_channels=(128, 128, 64, 64, 32, 32), - patch_size=dict(level5=3, level6=3), - corr_cfg=dict( - level5=dict(type='Correlation', max_displacement=3), - level6=dict(type='Correlation', max_displacement=3)), - warp_cfg=dict(type='Warp', align_corners=True, use_mask=True), - flow_div=20., - conv_cfg=None, - norm_cfg=None, - act_cfg=dict(type='LeakyReLU', negative_slope=0.1), - scaled_corr=False, - regularized_flow=True, - extra_training_loss=False, - flow_loss=dict( - type='MultiLevelEPE', - weights=dict(level6=0.32, level5=0.08), - p=2, - reduction='sum'), - init_cfg=None), - init_cfg=dict( - type='Kaiming', - nonlinearity='leaky_relu', - layer=['Conv2d', 'ConvTranspose2d'], - mode='fan_in', - bias=0), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(), -) diff --git a/spaces/rifkat/Uz-NER/README.md b/spaces/rifkat/Uz-NER/README.md deleted file mode 100644 index 7226659b932f45e6a49ac9a28d96ed83e98bb670..0000000000000000000000000000000000000000 --- a/spaces/rifkat/Uz-NER/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Uz NER -emoji: ⚡ -colorFrom: pink -colorTo: green -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/rishiraj/mistral/README.md b/spaces/rishiraj/mistral/README.md deleted file mode 100644 index 
a32439c42ebf691edc2036a8e3b6d412b8f5fc64..0000000000000000000000000000000000000000 --- a/spaces/rishiraj/mistral/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Mistral -emoji: 😻 -colorFrom: gray -colorTo: purple -sdk: gradio -sdk_version: 3.46.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/rorallitri/biomedical-language-models/logs/Fallout 3 Pregnancy Mod __EXCLUSIVE__.md b/spaces/rorallitri/biomedical-language-models/logs/Fallout 3 Pregnancy Mod __EXCLUSIVE__.md deleted file mode 100644 index e09e2a8abe43feed25aeed4a81db6adc5a854cd5..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Fallout 3 Pregnancy Mod __EXCLUSIVE__.md +++ /dev/null @@ -1,6 +0,0 @@ -

    fallout 3 pregnancy mod


    Downloadhttps://tinurll.com/2uzm91



    -
    -Fallout 3 Mods: HILARIOUS Sydney and Bittercup Companions Interaction. Kardisha Productions ... How to use Fallout Mod Manager to Install Fallout 3 Mods. 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/rorallitri/biomedical-language-models/logs/Gunday Full Movie 2014 Bengali Version A Tribute to the Legendary Actors Ranveer Singh and Arjun Kapoor.md b/spaces/rorallitri/biomedical-language-models/logs/Gunday Full Movie 2014 Bengali Version A Tribute to the Legendary Actors Ranveer Singh and Arjun Kapoor.md deleted file mode 100644 index e6276ee7844bd4aa8de0556791e9f87d4ce64798..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Gunday Full Movie 2014 Bengali Version A Tribute to the Legendary Actors Ranveer Singh and Arjun Kapoor.md +++ /dev/null @@ -1,8 +0,0 @@ -
    -

Gunday is an upcoming romantic action Hindi movie scheduled to be released on 14th February, 2014. Gunday is written and directed by Ali Abbas Zafar and produced by Aditya Chopra under the banner of Yash Raj Films. The movie features Ranveer Singh as Bikram and Arjun Kapoor as Bala in the lead roles, while Priyanka Chopra as Nandita and Irrfan Khan as Satya appear in supporting roles. The film was shot in Kolkata, with some parts filmed at Durgapur and Raniganj. Gunday's music is composed by Sohail Sen, while its lyrics are penned by Irshad Kamil.

    -

    The movie is also to be released in Bengali, with a full set of Bengali songs composed by Bappi Lahiri.
    

    -

    Gunday Full Movie 2014 Bengali Version


    DOWNLOAD: https://tinurll.com/2uznid
    



    -

    In the 2000s, Lahiri lent his voice to hit songs like "Bambai Nagariya" from "Taxi No 9211" (2006), and "Ooh La La" from "The Dirty Picture" (2011). He also was one of the singers who sang "Tune Maari Entriyaan" from 2014's "Gunday". The lyrics for the Bengali version of the song were penned by Lahiri and Gautam Susmit.

    -

    "Yaar bina chain kahan re", and "Aaj rapat jaaye to", among others. In the 2000s, Lahiri was also one of the singers who sang "Tune Maari Entriyaan" from 2014's "Gunday". The lyrics for the Bengali version of the song were penned by Lahiri and Gautam Susmit.

    
    -
    -
    \ No newline at end of file diff --git a/spaces/ruslanmv/Clone-Your-Voice/synthesizer/synthesizer_dataset.py b/spaces/ruslanmv/Clone-Your-Voice/synthesizer/synthesizer_dataset.py deleted file mode 100644 index 36fcaf4dd6e52444358277b9da98611862fa07c0..0000000000000000000000000000000000000000 --- a/spaces/ruslanmv/Clone-Your-Voice/synthesizer/synthesizer_dataset.py +++ /dev/null @@ -1,92 +0,0 @@ -import torch -from torch.utils.data import Dataset -import numpy as np -from pathlib import Path -from synthesizer.utils.text import text_to_sequence - - -class SynthesizerDataset(Dataset): - def __init__(self, metadata_fpath: Path, mel_dir: Path, embed_dir: Path, hparams): - print("Using inputs from:\n\t%s\n\t%s\n\t%s" % (metadata_fpath, mel_dir, embed_dir)) - - with metadata_fpath.open("r") as metadata_file: - metadata = [line.split("|") for line in metadata_file] - - mel_fnames = [x[1] for x in metadata if int(x[4])] - mel_fpaths = [mel_dir.joinpath(fname) for fname in mel_fnames] - embed_fnames = [x[2] for x in metadata if int(x[4])] - embed_fpaths = [embed_dir.joinpath(fname) for fname in embed_fnames] - self.samples_fpaths = list(zip(mel_fpaths, embed_fpaths)) - self.samples_texts = [x[5].strip() for x in metadata if int(x[4])] - self.metadata = metadata - self.hparams = hparams - - print("Found %d samples" % len(self.samples_fpaths)) - - def __getitem__(self, index): - # Sometimes index may be a list of 2 (not sure why this happens) - # If that is the case, return a single item corresponding to first element in index - if index is list: - index = index[0] - - mel_path, embed_path = self.samples_fpaths[index] - mel = np.load(mel_path).T.astype(np.float32) - - # Load the embed - embed = np.load(embed_path) - - # Get the text and clean it - text = text_to_sequence(self.samples_texts[index], self.hparams.tts_cleaner_names) - - # Convert the list returned by text_to_sequence to a numpy array - text = np.asarray(text).astype(np.int32) - - return text, mel.astype(np.float32), embed.astype(np.float32), index - - def __len__(self): - return len(self.samples_fpaths) - - -def collate_synthesizer(batch, r, hparams): - # Text - x_lens = [len(x[0]) for x in batch] - max_x_len = max(x_lens) - - chars = [pad1d(x[0], max_x_len) for x in batch] - chars = np.stack(chars) - - # Mel spectrogram - spec_lens = [x[1].shape[-1] for x in batch] - max_spec_len = max(spec_lens) + 1 - if max_spec_len % r != 0: - max_spec_len += r - max_spec_len % r - - # WaveRNN mel spectrograms are normalized to [0, 1] so zero padding adds silence - # By default, SV2TTS uses symmetric mels, where -1*max_abs_value is silence. 
- if hparams.symmetric_mels: - mel_pad_value = -1 * hparams.max_abs_value - else: - mel_pad_value = 0 - - mel = [pad2d(x[1], max_spec_len, pad_value=mel_pad_value) for x in batch] - mel = np.stack(mel) - - # Speaker embedding (SV2TTS) - embeds = np.array([x[2] for x in batch]) - - # Index (for vocoder preprocessing) - indices = [x[3] for x in batch] - - - # Convert all to tensor - chars = torch.tensor(chars).long() - mel = torch.tensor(mel) - embeds = torch.tensor(embeds) - - return chars, mel, embeds, indices - -def pad1d(x, max_len, pad_value=0): - return np.pad(x, (0, max_len - len(x)), mode="constant", constant_values=pad_value) - -def pad2d(x, max_len, pad_value=0): - return np.pad(x, ((0, 0), (0, max_len - x.shape[-1])), mode="constant", constant_values=pad_value) diff --git a/spaces/ryoung41/HTML5Interactivity/README.md b/spaces/ryoung41/HTML5Interactivity/README.md deleted file mode 100644 index f3de8a4cfb85e1f347ee6b86c82301427a6c6ae6..0000000000000000000000000000000000000000 --- a/spaces/ryoung41/HTML5Interactivity/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: HTML5Interactivity -emoji: 🐠 -colorFrom: gray -colorTo: yellow -sdk: static -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sasha/Image_Upscaling_Restoration_Colorization/README.md b/spaces/sasha/Image_Upscaling_Restoration_Colorization/README.md deleted file mode 100644 index dfbda97259dd1266e985d9a87fcf01c3993d29c9..0000000000000000000000000000000000000000 --- a/spaces/sasha/Image_Upscaling_Restoration_Colorization/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Image Upscaling, Restoration and Colorization -emoji: 🖼️ -colorFrom: blue -colorTo: gray -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: nightfury/Image_Face_Upscale_Restoration-GFPGAN ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sayakpaul/raindrop-deraining-maxim/maxim/blocks/others.py b/spaces/sayakpaul/raindrop-deraining-maxim/maxim/blocks/others.py deleted file mode 100644 index 846e91e7947c18dc05b81d313a425f9b310ba619..0000000000000000000000000000000000000000 --- a/spaces/sayakpaul/raindrop-deraining-maxim/maxim/blocks/others.py +++ /dev/null @@ -1,56 +0,0 @@ -import functools - -import tensorflow as tf -from tensorflow.keras import backend as K -from tensorflow.keras import layers - -from ..layers import Resizing - -Conv1x1 = functools.partial(layers.Conv2D, kernel_size=(1, 1), padding="same") - - -def MlpBlock( - mlp_dim: int, - dropout_rate: float = 0.0, - use_bias: bool = True, - name: str = "mlp_block", -): - """A 1-hidden-layer MLP block, applied over the last dimension.""" - - def apply(x): - d = K.int_shape(x)[-1] - x = layers.Dense(mlp_dim, use_bias=use_bias, name=f"{name}_Dense_0")(x) - x = tf.nn.gelu(x, approximate=True) - x = layers.Dropout(dropout_rate)(x) - x = layers.Dense(d, use_bias=use_bias, name=f"{name}_Dense_1")(x) - return x - - return apply - - -def UpSampleRatio( - num_channels: int, ratio: float, use_bias: bool = True, name: str = "upsample" -): - """Upsample features given a ratio > 0.""" - - def apply(x): - n, h, w, c = ( - K.int_shape(x)[0], - K.int_shape(x)[1], - K.int_shape(x)[2], - K.int_shape(x)[3], - ) - - # Following `jax.image.resize()` - x = Resizing( - height=int(h * ratio), - width=int(w * ratio), - method="bilinear", - antialias=True, - 
name=f"{name}_resizing_{K.get_uid('Resizing')}", - )(x) - - x = Conv1x1(filters=num_channels, use_bias=use_bias, name=f"{name}_Conv_0")(x) - return x - - return apply diff --git a/spaces/sccstandardteam/ChuanhuChatGPT/Dockerfile b/spaces/sccstandardteam/ChuanhuChatGPT/Dockerfile deleted file mode 100644 index 335c2dba28ba8c365de9306858462a59dea25f28..0000000000000000000000000000000000000000 --- a/spaces/sccstandardteam/ChuanhuChatGPT/Dockerfile +++ /dev/null @@ -1,15 +0,0 @@ -FROM python:3.9 as builder -RUN apt-get update && apt-get install -y build-essential -COPY requirements.txt . -COPY requirements_advanced.txt . -RUN pip install --user -r requirements.txt -# RUN pip install --user -r requirements_advanced.txt - -FROM python:3.9 -MAINTAINER iskoldt -COPY --from=builder /root/.local /root/.local -ENV PATH=/root/.local/bin:$PATH -COPY . /app -WORKDIR /app -ENV dockerrun yes -CMD ["python3", "-u", "ChuanhuChatbot.py", "2>&1", "|", "tee", "/var/log/application.log"] diff --git a/spaces/seduerr/text_analytics/text_analytics/indices/syntactic_complexity_indices.py b/spaces/seduerr/text_analytics/text_analytics/indices/syntactic_complexity_indices.py deleted file mode 100644 index 78dd1eada1eca20de6489d43cd259068989ee8ad..0000000000000000000000000000000000000000 --- a/spaces/seduerr/text_analytics/text_analytics/indices/syntactic_complexity_indices.py +++ /dev/null @@ -1,56 +0,0 @@ -import multiprocessing -from typing import Tuple - -import spacy -import statistics - -from spacy.tokens import Span -from text_analytics.constants import ACCEPTED_LANGUAGES -from text_analytics.utils.utils import is_word -from text_analytics.utils.utils import split_text_into_paragraphs -from text_analytics.utils.utils import split_doc_into_sentences - - -class SyntacticComplexityIndices: - def __init__(self, nlp, language: str='en') -> None: - if not language in ACCEPTED_LANGUAGES: - raise ValueError(f'Language {language} is not supported yet') - - self.language = language - self._nlp = nlp - - def get_mean_number_of_modifiers_per_noun_phrase(self, text: str, workers: int=-1) -> float: - paragraphs = split_text_into_paragraphs(text) - threads = 1 - modifiers_per_noun_phrase = [] - disable_pipeline = [pipe for pipe in self._nlp.pipe_names if pipe not in ['parser', 'tagger', 'noun phrase tagger', 'feature counter']] - modifiers_counter = lambda doc: [sum(1 for token in nph if token.pos_ == 'ADJ') - for nph in doc._.noun_phrases] - self._nlp.get_pipe('feature counter').counter_function = modifiers_counter - for doc in self._nlp.pipe(paragraphs, batch_size=threads, disable=disable_pipeline, n_process=threads): - modifiers_per_noun_phrase.extend(doc._.feature_count) - try: - return statistics.mean(modifiers_per_noun_phrase) - except: - return 0 - - def get_mean_number_of_words_before_main_verb(self, text: str, workers: int=-1) -> float: - paragraphs = split_text_into_paragraphs(text) - threads = 1 - words_before_main_verb = [] - disable_pipeline = [pipe for pipe in self._nlp.pipe_names if pipe not in ['feature counter', 'sentencizer']] - words_before_main_verb_counter = lambda doc: [amount_of_words_before_main_verb(s) for s in split_doc_into_sentences(doc)] - self._nlp.get_pipe('feature counter').counter_function = words_before_main_verb_counter - for doc in self._nlp.pipe(paragraphs, batch_size=threads, disable=disable_pipeline, n_process=threads): - words_before_main_verb.extend(doc._.feature_count) - return statistics.mean(words_before_main_verb) - -def amount_of_words_before_main_verb(sentence: Span) -> int: - 
left_words = [] - for token in sentence: - if token.pos_ in ['VERB', 'AUX'] and token.dep_ == 'ROOT': - break - else: - if is_word(token): - left_words.append(token.text) - return len(left_words) \ No newline at end of file diff --git a/spaces/shi-labs/Versatile-Diffusion/lib/model_zoo/optimus_models/tokenization_gpt2.py b/spaces/shi-labs/Versatile-Diffusion/lib/model_zoo/optimus_models/tokenization_gpt2.py deleted file mode 100644 index 79eb275e1ca14e0d4ed5ca4d778978a6a398528f..0000000000000000000000000000000000000000 --- a/spaces/shi-labs/Versatile-Diffusion/lib/model_zoo/optimus_models/tokenization_gpt2.py +++ /dev/null @@ -1,228 +0,0 @@ -# coding=utf-8 -# Copyright 2018 The Open AI Team Authors and The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""Tokenization classes for OpenAI GPT.""" -from __future__ import (absolute_import, division, print_function, - unicode_literals) - -import sys -import json -import logging -import os -import regex as re -from io import open - -try: - from functools import lru_cache -except ImportError: - # Just a dummy decorator to get the checks to run on python2 - # because honestly I don't want to support a byte-level unicode BPE tokenizer on python 2 right now. - def lru_cache(): - return lambda func: func - -from .tokenization_utils import PreTrainedTokenizer - -logger = logging.getLogger(__name__) - -VOCAB_FILES_NAMES = { - 'vocab_file': 'vocab.json', - 'merges_file': 'merges.txt', -} - -PRETRAINED_VOCAB_FILES_MAP = { - 'vocab_file': - { - 'gpt2': "https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-vocab.json", - 'gpt2-medium': "https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-medium-vocab.json", - 'gpt2-large': "https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-large-vocab.json", - }, - 'merges_file': - { - 'gpt2': "https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-merges.txt", - 'gpt2-medium': "https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-medium-merges.txt", - 'gpt2-large': "https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-large-merges.txt", - }, -} - -PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = { - 'gpt2': 1024, - 'gpt2-medium': 1024, - 'gpt2-large': 1024, -} - -@lru_cache() -def bytes_to_unicode(): - """ - Returns list of utf-8 byte and a mapping to unicode strings. - We specifically avoids mapping to whitespace/control characters the bpe code barfs on. - - The reversible bpe codes work on unicode strings. - This means you need a large # of unicode characters in your vocab if you want to avoid UNKs. - When you're at something like a 10B token dataset you end up needing around 5K for decent coverage. - This is a signficant percentage of your normal, say, 32K bpe vocab. - To avoid that, we want lookup tables between utf-8 bytes and unicode strings. 
- """ - _chr = unichr if sys.version_info[0] == 2 else chr - bs = list(range(ord("!"), ord("~")+1))+list(range(ord("¡"), ord("¬")+1))+list(range(ord("®"), ord("ÿ")+1)) - cs = bs[:] - n = 0 - for b in range(2**8): - if b not in bs: - bs.append(b) - cs.append(2**8+n) - n += 1 - cs = [_chr(n) for n in cs] - return dict(zip(bs, cs)) - -def get_pairs(word): - """Return set of symbol pairs in a word. - - Word is represented as tuple of symbols (symbols being variable-length strings). - """ - pairs = set() - prev_char = word[0] - for char in word[1:]: - pairs.add((prev_char, char)) - prev_char = char - return pairs - -class GPT2Tokenizer(PreTrainedTokenizer): - """ - GPT-2 BPE tokenizer. Peculiarities: - - Byte-level Byte-Pair-Encoding - - Requires a space to start the input string => will add a space is there isn't. - As a consequence, this tokenizer `encode` and `decode` method will not conserve - the absence of a space at the beginning of a string: `tokenizer.decode(tokenizer.encode("Hello")) = " Hello" - """ - vocab_files_names = VOCAB_FILES_NAMES - pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP - max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES - - def __init__(self, vocab_file, merges_file, errors='replace', unk_token="<|endoftext|>", - bos_token="<|endoftext|>", eos_token="<|endoftext|>", **kwargs): - super(GPT2Tokenizer, self).__init__(bos_token=bos_token, eos_token=eos_token, unk_token=unk_token, **kwargs) - self.max_len_single_sentence = self.max_len # no default special tokens - you can update this value if you add special tokens - self.max_len_sentences_pair = self.max_len # no default special tokens - you can update this value if you add special tokens - - self.encoder = json.load(open(vocab_file, encoding="utf-8")) - self.decoder = {v: k for k, v in self.encoder.items()} - self.errors = errors # how to handle errors in decoding - self.byte_encoder = bytes_to_unicode() - self.byte_decoder = {v: k for k, v in self.byte_encoder.items()} - bpe_data = open(merges_file, encoding='utf-8').read().split('\n')[1:-1] - bpe_merges = [tuple(merge.split()) for merge in bpe_data] - self.bpe_ranks = dict(zip(bpe_merges, range(len(bpe_merges)))) - self.cache = {} - - # Should haved added re.IGNORECASE so BPE merges can happen for capitalized versions of contractions - self.pat = re.compile(r"""'s|'t|'re|'ve|'m|'ll|'d| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+""") - - @property - def vocab_size(self): - return len(self.encoder) - - def bpe(self, token): - if token in self.cache: - return self.cache[token] - word = tuple(token) - pairs = get_pairs(word) - - if not pairs: - return token - - while True: - bigram = min(pairs, key = lambda pair: self.bpe_ranks.get(pair, float('inf'))) - if bigram not in self.bpe_ranks: - break - first, second = bigram - new_word = [] - i = 0 - while i < len(word): - try: - j = word.index(first, i) - new_word.extend(word[i:j]) - i = j - except: - new_word.extend(word[i:]) - break - - if word[i] == first and i < len(word)-1 and word[i+1] == second: - new_word.append(first+second) - i += 2 - else: - new_word.append(word[i]) - i += 1 - new_word = tuple(new_word) - word = new_word - if len(word) == 1: - break - else: - pairs = get_pairs(word) - word = ' '.join(word) - self.cache[token] = word - return word - - def _tokenize(self, text): - """ Tokenize a string. """ - text = ' ' + text # GPT-2 (and RoBERTa) tokenizers need at least one space to begin the sentence with. 
- bpe_tokens = [] - for token in re.findall(self.pat, text): - if sys.version_info[0] == 2: - token = ''.join(self.byte_encoder[ord(b)] for b in token) # Maps all our bytes to unicode strings, avoiding controle tokens of the BPE (spaces in our case) - else: - token = ''.join(self.byte_encoder[b] for b in token.encode('utf-8')) # Maps all our bytes to unicode strings, avoiding controle tokens of the BPE (spaces in our case) - bpe_tokens.extend(bpe_token for bpe_token in self.bpe(token).split(' ')) - return bpe_tokens - - def _convert_token_to_id(self, token): - """ Converts a token (str/unicode) in an id using the vocab. """ - return self.encoder.get(token, self.encoder.get(self.unk_token)) - - def _convert_id_to_token(self, index): - """Converts an index (integer) in a token (string/unicode) using the vocab.""" - return self.decoder.get(index) - - def convert_tokens_to_string(self, tokens): - """ Converts a sequence of tokens (string) in a single string. """ - text = ''.join(tokens) - text = bytearray([self.byte_decoder[c] for c in text]).decode('utf-8', errors=self.errors) - return text - - def save_vocabulary(self, save_directory): - """Save the tokenizer vocabulary and merge files to a directory.""" - if not os.path.isdir(save_directory): - logger.error("Vocabulary path ({}) should be a directory".format(save_directory)) - return - vocab_file = os.path.join(save_directory, VOCAB_FILES_NAMES['vocab_file']) - merge_file = os.path.join(save_directory, VOCAB_FILES_NAMES['merges_file']) - - with open(vocab_file, 'w', encoding='utf-8') as f: - f.write(json.dumps(self.encoder, ensure_ascii=False)) - - index = 0 - with open(merge_file, "w", encoding="utf-8") as writer: - writer.write(u'#version: 0.2\n') - for bpe_tokens, token_index in sorted(self.bpe_ranks.items(), key=lambda kv: kv[1]): - if index != token_index: - logger.warning("Saving vocabulary to {}: BPE merge indices are not consecutive." 
- " Please check that the tokenizer is not corrupted!".format(merge_file)) - index = token_index - writer.write(' '.join(bpe_tokens) + u'\n') - index += 1 - - return vocab_file, merge_file - - # XX added - def add_special_tokens_single_sentence(self, token_ids): - return [self.added_tokens_encoder['']] + token_ids + [self.added_tokens_encoder['']] diff --git a/spaces/shikunl/prismer/prismer/experts/edge/model.py b/spaces/shikunl/prismer/prismer/experts/edge/model.py deleted file mode 100644 index 5651b3d9065a68caf7fd504abce8d397eb6103c7..0000000000000000000000000000000000000000 --- a/spaces/shikunl/prismer/prismer/experts/edge/model.py +++ /dev/null @@ -1,286 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - - -def weight_init(m): - if isinstance(m, (nn.Conv2d,)): - # torch.nn.init.xavier_uniform_(m.weight, gain=1.0) - torch.nn.init.xavier_normal_(m.weight, gain=1.0) - # torch.nn.init.normal_(m.weight, mean=0.0, std=0.01) - if m.weight.data.shape[1] == torch.Size([1]): - torch.nn.init.normal_(m.weight, mean=0.0) - - if m.bias is not None: - torch.nn.init.zeros_(m.bias) - - # for fusion layer - if isinstance(m, (nn.ConvTranspose2d,)): - # torch.nn.init.xavier_uniform_(m.weight, gain=1.0) - torch.nn.init.xavier_normal_(m.weight, gain=1.0) - # torch.nn.init.normal_(m.weight, mean=0.0, std=0.01) - - if m.weight.data.shape[1] == torch.Size([1]): - torch.nn.init.normal_(m.weight, std=0.1) - if m.bias is not None: - torch.nn.init.zeros_(m.bias) - - -class CoFusion(nn.Module): - - def __init__(self, in_ch, out_ch): - super(CoFusion, self).__init__() - self.conv1 = nn.Conv2d(in_ch, 64, kernel_size=3, - stride=1, padding=1) - self.conv2 = nn.Conv2d(64, 64, kernel_size=3, - stride=1, padding=1) - self.conv3 = nn.Conv2d(64, out_ch, kernel_size=3, - stride=1, padding=1) - self.relu = nn.ReLU() - - self.norm_layer1 = nn.GroupNorm(4, 64) - self.norm_layer2 = nn.GroupNorm(4, 64) - - def forward(self, x): - # fusecat = torch.cat(x, dim=1) - attn = self.relu(self.norm_layer1(self.conv1(x))) - attn = self.relu(self.norm_layer2(self.conv2(attn))) - attn = F.softmax(self.conv3(attn), dim=1) - - # return ((fusecat * attn).sum(1)).unsqueeze(1) - return ((x * attn).sum(1)).unsqueeze(1) - -class _DenseLayer(nn.Sequential): - def __init__(self, input_features, out_features): - super(_DenseLayer, self).__init__() - - # self.add_module('relu2', nn.ReLU(inplace=True)), - self.add_module('conv1', nn.Conv2d(input_features, out_features, - kernel_size=3, stride=1, padding=2, bias=True)), - self.add_module('norm1', nn.BatchNorm2d(out_features)), - self.add_module('relu1', nn.ReLU(inplace=True)), - self.add_module('conv2', nn.Conv2d(out_features, out_features, - kernel_size=3, stride=1, bias=True)), - self.add_module('norm2', nn.BatchNorm2d(out_features)) - - def forward(self, x): - x1, x2 = x - - new_features = super(_DenseLayer, self).forward(F.relu(x1)) # F.relu() - # if new_features.shape[-1]!=x2.shape[-1]: - # new_features =F.interpolate(new_features,size=(x2.shape[2],x2.shape[-1]), mode='bicubic', - # align_corners=False) - return 0.5 * (new_features + x2), x2 - - -class _DenseBlock(nn.Sequential): - def __init__(self, num_layers, input_features, out_features): - super(_DenseBlock, self).__init__() - for i in range(num_layers): - layer = _DenseLayer(input_features, out_features) - self.add_module('denselayer%d' % (i + 1), layer) - input_features = out_features - - -class UpConvBlock(nn.Module): - def __init__(self, in_features, up_scale): - super(UpConvBlock, self).__init__() - 
self.up_factor = 2 - self.constant_features = 16 - - layers = self.make_deconv_layers(in_features, up_scale) - assert layers is not None, layers - self.features = nn.Sequential(*layers) - - def make_deconv_layers(self, in_features, up_scale): - layers = [] - all_pads=[0,0,1,3,7] - for i in range(up_scale): - kernel_size = 2 ** up_scale - pad = all_pads[up_scale] # kernel_size-1 - out_features = self.compute_out_features(i, up_scale) - layers.append(nn.Conv2d(in_features, out_features, 1)) - layers.append(nn.ReLU(inplace=True)) - layers.append(nn.ConvTranspose2d( - out_features, out_features, kernel_size, stride=2, padding=pad)) - in_features = out_features - return layers - - def compute_out_features(self, idx, up_scale): - return 1 if idx == up_scale - 1 else self.constant_features - - def forward(self, x): - return self.features(x) - - -class SingleConvBlock(nn.Module): - def __init__(self, in_features, out_features, stride, - use_bs=True - ): - super(SingleConvBlock, self).__init__() - self.use_bn = use_bs - self.conv = nn.Conv2d(in_features, out_features, 1, stride=stride, - bias=True) - self.bn = nn.BatchNorm2d(out_features) - - def forward(self, x): - x = self.conv(x) - if self.use_bn: - x = self.bn(x) - return x - - -class DoubleConvBlock(nn.Module): - def __init__(self, in_features, mid_features, - out_features=None, - stride=1, - use_act=True): - super(DoubleConvBlock, self).__init__() - - self.use_act = use_act - if out_features is None: - out_features = mid_features - self.conv1 = nn.Conv2d(in_features, mid_features, - 3, padding=1, stride=stride) - self.bn1 = nn.BatchNorm2d(mid_features) - self.conv2 = nn.Conv2d(mid_features, out_features, 3, padding=1) - self.bn2 = nn.BatchNorm2d(out_features) - self.relu = nn.ReLU(inplace=True) - - def forward(self, x): - x = self.conv1(x) - x = self.bn1(x) - x = self.relu(x) - x = self.conv2(x) - x = self.bn2(x) - if self.use_act: - x = self.relu(x) - return x - - -class DexiNed(nn.Module): - """ Definition of the DXtrem network. 
""" - - def __init__(self): - super(DexiNed, self).__init__() - self.block_1 = DoubleConvBlock(3, 32, 64, stride=2,) - self.block_2 = DoubleConvBlock(64, 128, use_act=False) - self.dblock_3 = _DenseBlock(2, 128, 256) # [128,256,100,100] - self.dblock_4 = _DenseBlock(3, 256, 512) - self.dblock_5 = _DenseBlock(3, 512, 512) - self.dblock_6 = _DenseBlock(3, 512, 256) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - - # left skip connections, figure in Journal - self.side_1 = SingleConvBlock(64, 128, 2) - self.side_2 = SingleConvBlock(128, 256, 2) - self.side_3 = SingleConvBlock(256, 512, 2) - self.side_4 = SingleConvBlock(512, 512, 1) - self.side_5 = SingleConvBlock(512, 256, 1) # Sory I forget to comment this line :( - - # right skip connections, figure in Journal paper - self.pre_dense_2 = SingleConvBlock(128, 256, 2) - self.pre_dense_3 = SingleConvBlock(128, 256, 1) - self.pre_dense_4 = SingleConvBlock(256, 512, 1) - self.pre_dense_5 = SingleConvBlock(512, 512, 1) - self.pre_dense_6 = SingleConvBlock(512, 256, 1) - - - self.up_block_1 = UpConvBlock(64, 1) - self.up_block_2 = UpConvBlock(128, 1) - self.up_block_3 = UpConvBlock(256, 2) - self.up_block_4 = UpConvBlock(512, 3) - self.up_block_5 = UpConvBlock(512, 4) - self.up_block_6 = UpConvBlock(256, 4) - self.block_cat = SingleConvBlock(6, 1, stride=1, use_bs=False) # hed fusion method - # self.block_cat = CoFusion(6,6)# cats fusion method - - - self.apply(weight_init) - - def slice(self, tensor, slice_shape): - t_shape = tensor.shape - height, width = slice_shape - if t_shape[-1]!=slice_shape[-1]: - new_tensor = F.interpolate( - tensor, size=(height, width), mode='bicubic',align_corners=False) - else: - new_tensor=tensor - # tensor[..., :height, :width] - return new_tensor - - def forward(self, x): - assert x.ndim == 4, x.shape - - # Block 1 - block_1 = self.block_1(x) - block_1_side = self.side_1(block_1) - - # Block 2 - block_2 = self.block_2(block_1) - block_2_down = self.maxpool(block_2) - block_2_add = block_2_down + block_1_side - block_2_side = self.side_2(block_2_add) - - # Block 3 - block_3_pre_dense = self.pre_dense_3(block_2_down) - block_3, _ = self.dblock_3([block_2_add, block_3_pre_dense]) - block_3_down = self.maxpool(block_3) # [128,256,50,50] - block_3_add = block_3_down + block_2_side - block_3_side = self.side_3(block_3_add) - - # Block 4 - block_2_resize_half = self.pre_dense_2(block_2_down) - block_4_pre_dense = self.pre_dense_4(block_3_down+block_2_resize_half) - block_4, _ = self.dblock_4([block_3_add, block_4_pre_dense]) - block_4_down = self.maxpool(block_4) - block_4_add = block_4_down + block_3_side - block_4_side = self.side_4(block_4_add) - - # Block 5 - block_5_pre_dense = self.pre_dense_5( - block_4_down) #block_5_pre_dense_512 +block_4_down - block_5, _ = self.dblock_5([block_4_add, block_5_pre_dense]) - block_5_add = block_5 + block_4_side - - # Block 6 - block_6_pre_dense = self.pre_dense_6(block_5) - block_6, _ = self.dblock_6([block_5_add, block_6_pre_dense]) - - # upsampling blocks - out_1 = self.up_block_1(block_1) - out_2 = self.up_block_2(block_2) - out_3 = self.up_block_3(block_3) - out_4 = self.up_block_4(block_4) - out_5 = self.up_block_5(block_5) - out_6 = self.up_block_6(block_6) - results = [out_1, out_2, out_3, out_4, out_5, out_6] - - # concatenate multiscale outputs - block_cat = torch.cat(results, dim=1) # Bx6xHxW - block_cat = self.block_cat(block_cat) # Bx1xHxW - - # return results - results.append(block_cat) - return results - - -if __name__ == '__main__': - batch_size 
= 8 - img_height = 352 - img_width = 352 - - # device = "cuda" if torch.cuda.is_available() else "cpu" - device = "cpu" - input = torch.rand(batch_size, 3, img_height, img_width).to(device) - # target = torch.rand(batch_size, 1, img_height, img_width).to(device) - print(f"input shape: {input.shape}") - model = DexiNed().to(device) - output = model(input) - print(f"output shapes: {[t.shape for t in output]}") - - # for i in range(20000): - # print(i) - # output = model(input) - # loss = nn.MSELoss()(output[-1], target) - # loss.backward() diff --git a/spaces/sidphbot/Researcher/arxiv_public_data/config.py b/spaces/sidphbot/Researcher/arxiv_public_data/config.py deleted file mode 100644 index 7cfbd41822c97cabb19a5666029104797623add0..0000000000000000000000000000000000000000 --- a/spaces/sidphbot/Researcher/arxiv_public_data/config.py +++ /dev/null @@ -1,55 +0,0 @@ -import os -import json -import logging - -logging.basicConfig( - level=logging.INFO, - format='%(asctime)s - %(name)s - %(levelname)s: %(message)s' -) -baselog = logging.getLogger('arxivdata') -logger = baselog.getChild('config') - -DEFAULT_PATH = os.path.join(os.path.abspath('.'), 'arxiv-data') -JSONFILE = './config.json' -KEY = 'ARXIV_DATA' - -def get_outdir(): - """ - Grab the outdir from: - 1) Environment - 2) config.json - 3) default ($PWD/arxiv-data) - """ - if os.environ.get(KEY): - out = os.environ.get(KEY) - else: - if os.path.exists(JSONFILE): - js = json.load(open(JSONFILE)) - if not KEY in js: - logger.warn('Configuration in "{}" invalid, using default'.format(JSONFILE)) - logger.warn("default output directory is {}".format(DEFAULT_PATH)) - out = DEFAULT_PATH - else: - out = js[KEY] - else: - logger.warn("default output directory is {}".format(DEFAULT_PATH)) - out = DEFAULT_PATH - return out - -try: - DIR_BASE = get_outdir() -except Exception as e: - logger.error( - "Error attempting to get path from ENV or json conf, " - "defaulting to current directory" - ) - DIR_BASE = DEFAULT_PATH - -DIR_FULLTEXT = os.path.join(DIR_BASE, 'fulltext') -DIR_PDFTARS = os.path.join(DIR_BASE, 'tarpdfs') -DIR_OUTPUT = os.path.join(DIR_BASE, 'output') -LOGGER = baselog - -for dirs in [DIR_BASE, DIR_PDFTARS, DIR_FULLTEXT, DIR_OUTPUT]: - if not os.path.exists(dirs): - os.mkdir(dirs) diff --git a/spaces/simonduerr/ProteinMPNN/af_backprop/alphafold/__init__.py b/spaces/simonduerr/ProteinMPNN/af_backprop/alphafold/__init__.py deleted file mode 100644 index a0fd7f8294b9d7be770127c356f0b6564f1baa6c..0000000000000000000000000000000000000000 --- a/spaces/simonduerr/ProteinMPNN/af_backprop/alphafold/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -# Copyright 2021 DeepMind Technologies Limited -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-"""An implementation of the inference pipeline of AlphaFold v2.0.""" diff --git a/spaces/simonduerr/diffdock/baselines/baseline_run_tankbind_parallel.sh b/spaces/simonduerr/diffdock/baselines/baseline_run_tankbind_parallel.sh deleted file mode 100644 index 7ac71588c01b604709ee6acf6f345cf037115f03..0000000000000000000000000000000000000000 --- a/spaces/simonduerr/diffdock/baselines/baseline_run_tankbind_parallel.sh +++ /dev/null @@ -1,5 +0,0 @@ -for i in $(seq 0 15); do - python baseline_tankbind_runtime.py --parallel_id $i --parallel_tot 16 --prank_path /data/rsg/nlp/hstark/TankBind/packages/p2rank_2.3/prank --data_dir /data/rsg/nlp/hstark/ligbind/data/PDBBind_processed --split_path /data/rsg/nlp/hstark/ligbind/data/splits/timesplit_test --results_path /data/rsg/nlp/hstark/ligbind/results/tankbind_16_worker_runtime --device cpu --skip_p2rank --num_workers 1 --skip_multiple_pocket_outputs & -done -wait - diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/PMDG 737 Ngx Activation Code.rar.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/PMDG 737 Ngx Activation Code.rar.md deleted file mode 100644 index 48925bc7a0cae487d4b7d41e0900df469a6343e3..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/PMDG 737 Ngx Activation Code.rar.md +++ /dev/null @@ -1,122 +0,0 @@ -## PMDG 737 Ngx Activation Code.rar - - - - - - - - - -**Download ->>->>->> [https://vercupalo.blogspot.com/?d=2txP0K](https://vercupalo.blogspot.com/?d=2txP0K)** - - - - - - - - - - - - - -# How to Activate PMDG 737 Ngx with Activation Code - - - -If you have purchased the PMDG 737 Ngx for Prepar3D or FSX, you may need to activate it with an activation code. This is a 24-character code that begins with "NGX8" followed by five more groups of letters and numbers. You should have received this code in an email from PMDG when you completed your purchase. If you have lost or deleted this email, you can contact PMDG support to request a new code. - - - -To activate your PMDG 737 Ngx, you need to follow these steps: - - - -1. Start your simulator (Prepar3D or FSX) with a default aircraft, such as the F-22. - -2. Select the PMDG 737 Ngx as your aircraft and choose your airport, time and weather. - -3. Start the flight. You should see an activation window pop up. - -4. Enter your 24-character activation code in the window and press "Activate Product". - -5. Wait for the activation process to complete. You should see a confirmation message. - -6. Enjoy flying your PMDG 737 Ngx! - - - -If you do not see the activation window, or if you encounter any errors during the activation process, you can try using the external activation utility provided by PMDG. You can download it from this link: [http://downloads.precisionmanuals.com/file\_library/PMDG\_737NGX\_Activation.zip](http://downloads.precisionmanuals.com/file_library/PMDG_737NGX_Activation.zip) - - - -To use the external activation utility, follow these steps: - - - -1. Extract the zip file to a folder on your computer. - -2. Run the PMDG\_737NGX\_Activation.exe file as administrator. - -3. Select your simulator (Prepar3D or FSX) from the drop-down menu. - -4. Enter your 24-character activation code in the text box. - -5. Press "Activate Product". - -6. Wait for the activation process to complete. You should see a confirmation message. - -7. Restart your simulator and load the PMDG 737 Ngx. 
- - - -If you still have problems activating your PMDG 737 Ngx, you can contact PMDG support through their website: [https://support.precisionmanuals.com/](https://support.precisionmanuals.com/) - - - -You can also find more information and help from other users on the AVSIM forums: [https://www.avsim.com/forums/forum/436-pmdg-737ngx-737ngxu/](https://www.avsim.com/forums/forum/436-pmdg-737ngx-737ngxu/) - - - -## SEO Optimization and HTML Formatting Tips - - - -To make your article more SEO-friendly and attractive to readers, you can follow these tips: - - - -- Use keywords that are relevant to your topic and audience. For example, if you are writing about PMDG 737 Ngx activation code, you can use keywords like "PMDG", "737", "Ngx", "activation", "code", "Prepar3D", "FSX", etc. You can also use synonyms or variations of these keywords, such as "PMDG 737 NGXu", "P3D", "Flight Simulator X", etc. - -- Use keywords in your title, headings, subheadings, introduction, conclusion and body paragraphs. Try to include your main keyword in your title and at least one heading. Use different keywords for different sections of your article. Do not overuse keywords or repeat them too often. This can make your article look spammy and unnatural. - -- Use HTML tags to format your article and make it easier to read. Use - -# tags for your main title, - - - -## tags for your headings, - - - -### tags for your subheadings, - -tags for your paragraphs, - - - -andtags for your lists,tags for your links, etc. You can also use other HTML tags to add images, tables, videos, etc. to your article. - -- Use meta tags to provide information about your article to search engines and social media platforms. Meta tags dfd1c89656 - - - - - - - - - diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Babiloni Panda Var MP3 - The History and Background of the Song.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Babiloni Panda Var MP3 - The History and Background of the Song.md deleted file mode 100644 index adf78e875cd680b045a8f6468241ba46113dbc68..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Babiloni Panda Var MP3 - The History and Background of the Song.md +++ /dev/null @@ -1,58 +0,0 @@ -
    -

    Babiloni Panda Var MP3: A Viral Hit from Georgia

    -

    If you are looking for a new and exciting song to add to your playlist, you might want to check out Babiloni Panda Var MP3. This is a viral hit from Georgia that has taken the internet by storm. But what is this song about, and who is the artist behind it? In this article, we will answer these questions and more.

    -

    What is Babiloni Panda Var MP3?

    -

    Babiloni Panda Var MP3 is a song by Georgian rapper and singer Babiloni. The title of the song translates to "I am a panda" in English. The song was released in 2021 and has since gained millions of views on YouTube and SoundCloud, as well as thousands of streams on Shazam.

    -

    babiloni panda var mp3


    Download: https://ssurll.com/2uNSae
    



    -

    The meaning of the song

    -

    The song is a humorous and playful expression of Babiloni's identity and personality. He compares himself to a panda, a cute and cuddly animal that is also fierce and strong. He raps about his love for music, his lifestyle, his friends, and his dreams. He also makes references to popular culture, such as the movie Kung Fu Panda, the rapper Desiigner, and the fashion brand Gucci.

    -

    The popularity of the song

    -

    The song has become a viral sensation because of its catchy tune, witty lyrics, and colorful video. Many people have shared the song on social media platforms, such as TikTok, Instagram, and Facebook. Some have even created their own versions or remixes of the song. The song has also attracted the attention of some celebrities, such as American rapper Snoop Dogg, who posted a video of himself dancing to the song on his Instagram account.

    -

    Who is Babiloni?

    -

    Babiloni is a Georgian rapper and singer who has been making music since 2010. He is also known as Luka Jintcharadze, or simply Luka. He is the founder and owner of BabiloniStudio, a music production company based in Tbilisi, Georgia.

    -

    The background of the artist

    -

    Babiloni was born in 1994 in Tbilisi, Georgia. He grew up in a musical family, as his father was a singer and his mother was a pianist. He started writing songs when he was 12 years old, inspired by artists such as Eminem, Tupac, and 50 Cent. He studied at the Tbilisi State Conservatory, where he learned classical music theory and piano. He also learned how to play guitar, drums, saxophone, and harmonica.

    -

    The style and genre of the music

    -

    Babiloni's music is a fusion of rap, hip hop, pop, rock, and folk. He sings and raps in both Georgian and English languages. He often incorporates elements of Georgian culture and history into his songs, such as traditional instruments, melodies, and stories. He also experiments with different sounds and effects, such as auto-tune, distortion, and sampling.

    -

    [BABILONI - პანდა ვარ / PANDA VAR - YouTube](^1^)
    -[BABILONI - Პანდა Ვარ / PANDA VAR - SoundCloud](^2^)
    -[Panda Var - BABILONI | Shazam](^3^)

    -

    How to download Babiloni Panda Var MP3?

    -

    If you want to download Babiloni Panda Var MP3 to your device, you have several options to choose from.

    -

    The official sources

    -

    The best way to download Babiloni Panda Var MP3 is to use the official sources provided by the artist. You can find the links to these sources on Babiloni's YouTube channel, SoundCloud page, or Instagram account[^ 4^]. These sources will allow you to download the song in high quality and support the artist financially.

    -

    The alternative sources

    -

    If you cannot access the official sources, or you prefer to use a different platform, you can also download Babiloni Panda Var MP3 from some alternative sources. However, you should be careful about the quality and legality of these sources, as they may not have the permission or license to distribute the song. Some of the alternative sources you can try are:

    -
      -
    • MP3Juices.cc: This is a free online MP3 downloader that allows you to search and download songs from various sources, such as YouTube, SoundCloud, and Spotify. You can enter the name of the song or the URL of the video or audio file, and then choose the format and quality you want.
    • -
    • YTMP3.cc: This is a free online YouTube to MP3 converter that allows you to download any YouTube video as an MP3 file. You can paste the URL of the video, and then click on the convert button. You can also choose the quality of the MP3 file, from 64 kbps to 320 kbps.
    • -
    • MP3Skull.com: This is a free online MP3 search engine that allows you to find and download songs from various sources, such as YouTube, SoundCloud, and 4shared. You can enter the name of the song or the artist, and then browse through the results. You can also preview the song before downloading it.
    • -
    -

    Why you should listen to Babiloni Panda Var MP3?

    -

    Babiloni Panda Var MP3 is not just a catchy song, but also a meaningful and enjoyable one. Here are some reasons why you should listen to it:

    -

    The catchy melody and lyrics

    -

    The song has a simple but catchy melody that will make you want to sing along. The lyrics are also fun and clever, with rhymes, puns, and metaphors. The chorus is especially memorable, with Babiloni repeating "Panda var" (I am a panda) in different languages, such as English, Spanish, French, and Russian.

    -

    The cultural and social relevance

    -

    The song is also a reflection of Babiloni's cultural and social background. He showcases his Georgian identity and heritage by using Georgian words, phrases, and references in his song. He also addresses some of the issues and challenges that he faces as a young artist in Georgia, such as censorship, corruption, and poverty. He expresses his hope and optimism for a better future for himself and his country.

    -

    Conclusion

    -

    Babiloni Panda Var MP3 is a viral hit from Georgia that has captivated millions of listeners around the world. It is a song that combines humor and creativity with culture and social commentary. It is a song that celebrates Babiloni's unique personality and talent. If you are looking for a new and exciting song to spice up your playlist, you should definitely give Babiloni Panda Var MP3 a try.

    -

    FAQs

    -
      -
    • Q: Where can I watch the video of Babiloni Panda Var MP3?
    • -
    • A: You can watch the video of Babiloni Panda Var MP3 on Babiloni's YouTube channel. The video features Babiloni dressed as a panda, dancing and rapping in various locations in Tbilisi.
    • -
    • Q: What are some other songs by Babiloni?
    • -
    • A: Some other songs by Babiloni are "Chemi Guli", "Mama", "Tbilisi", "Gamarjoba", and "Sakartvelo". You can find them on his YouTube channel, SoundCloud page, or Instagram account.
    • -
    • Q: How can I follow Babiloni on social media?
    • -
    • A: You can follow Babiloni on his Instagram account, where he posts updates about his music, videos, and life. You can also follow him on his Facebook page, where he shares news and events related to his music career.
    • -
    • Q: How can I support Babiloni as an artist?
    • -
    • A: You can support Babiloni as an artist by downloading his songs from the official sources , streaming his songs on Shazam, liking and sharing his videos on YouTube, following him on social media , and leaving positive comments and feedback on his posts. You can also support him by buying his merchandise, such as T-shirts, hoodies, hats, and stickers, from his online store.
    • -
    • Q: How can I learn more about Georgian music and culture?
    • -
    • A: You can learn more about Georgian music and culture by exploring some of the online resources available, such as:
    • -
        -
      • Georgian Music: This is a website that provides information and news about Georgian music, artists, genres, festivals, and events. You can also listen to Georgian radio stations and podcasts, watch Georgian music videos and documentaries, and discover new Georgian songs and albums.
      • -
      • Georgian Culture: This is a website that provides information and articles about Georgian culture, history, language, literature, art, cuisine, traditions, and customs. You can also find useful tips and guides for traveling to Georgia, as well as links to other Georgian websites and blogs.
      • -
      • Georgian Language: This is a website that provides online courses and lessons for learning Georgian language. You can also find dictionaries, grammar books, vocabulary lists, audio files, and quizzes to help you practice your Georgian skills.
      • -
      -

    
    -
    -
    \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Game Kartu Solitaire Play Klondike Spider FreeCell and More.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Game Kartu Solitaire Play Klondike Spider FreeCell and More.md deleted file mode 100644 index ceb68925bde147c9651df5ce4e05473fc700e4b8..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Game Kartu Solitaire Play Klondike Spider FreeCell and More.md +++ /dev/null @@ -1,107 +0,0 @@ -
    -

    Download Game Kartu Solitaire: How to Play the Classic Card Game on Your Android Device

    -

    Do you love playing card games? Do you enjoy challenging your brain and having fun at the same time? If so, you might want to try game kartu solitaire, also known as solitaire or patience. Game kartu solitaire is one of the most popular and beloved card games in the world, and you can play it anytime, anywhere, on your Android device. In this article, we will tell you everything you need to know about game kartu solitaire, including its history, rules, variations, benefits, and challenges. We will also show you how to download game kartu solitaire on your Android device, and recommend some of the best apps for playing this classic card game. Let's get started!

    -

    download game kartu solitaire


    Download File ►►► https://ssurll.com/2uNYHj



    -

    What is Game Kartu Solitaire?

    -

    Game kartu solitaire is a card game that can be played by a single player or multiple players. The goal of the game is to sort a deck of cards into four piles, one for each suit, in ascending order from ace to king. The game can be played with a standard 52-card deck or a custom deck with fewer cards. There are many different ways to play game kartu solitaire, depending on the layout of the cards, the number of cards drawn, and the scoring system.
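    The goal described above, building each suit up from ace to king on its own foundation pile, comes down to a one-line rule. Here is a minimal, hypothetical Python sketch of that rule (the function name and card encoding are invented for this article; it is not code from any of the apps reviewed later):

    ```python
    # Ranks are encoded as 1 = ace, 11 = jack, 12 = queen, 13 = king.
    def can_move_to_foundation(card_rank: int, card_suit: str,
                               top_rank: int, pile_suit: str) -> bool:
        """A foundation pile grows ace -> king in a single suit.
        Each foundation slot is dedicated to one suit; an empty slot is modelled as top_rank == 0."""
        return card_suit == pile_suit and card_rank == top_rank + 1

    print(can_move_to_foundation(1, "hearts", 0, "hearts"))  # True: an ace starts its pile
    print(can_move_to_foundation(5, "spades", 3, "spades"))  # False: the four of spades must come first
    ```

    In other words, an ace can always start its empty foundation slot, and every later card must be the next rank in the same suit.
    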

    -

    The History and Origin of Solitaire

    -

    The exact origin of game kartu solitaire is unknown, but it is believed to have been invented in Europe sometime in the late 18th or early 19th century. Some historians suggest that it was inspired by other games such as tarot, piquet, or chess. The first written reference to game kartu solitaire dates back to 1783, in a German book called Das neue Königliche L'Hombre-Spiel. The first English book on game kartu solitaire was published in 1870 by Lady Adelaide Cadogan. Since then, game kartu solitaire has become a worldwide phenomenon, with hundreds of variations and millions of fans.
    

    -

    The Rules and Variations of Solitaire

    -

    The basic rules of game kartu solitaire are simple: you start with a shuffled deck of cards, and you deal them face down into seven columns, forming a tableau. The first column has one card, the second column has two cards, and so on, until the seventh column has seven cards. The top card of each column is turned face up. The remaining cards are placed face down in a stock pile. You can move cards from one column to another, as long as they are in descending order and alternating colors (for example, you can place a black six on a red seven). You can also move groups of cards that are already in sequence. When you uncover a face-down card, you turn it face up. You can also move cards from the stock pile to the tableau or to four foundation piles that are located above the tableau. The foundation piles are where you place the cards in ascending order by suit, starting from ace to king. You win the game when you move all the cards to the foundation piles.
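    As a rough illustration of the dealing and stacking rules just described, here is a short Python sketch (the Card class and helper functions are made up for this article, not taken from any real solitaire app):

    ```python
    import random
    from dataclasses import dataclass

    SUITS = ["hearts", "diamonds", "clubs", "spades"]   # hearts and diamonds are the red suits
    RANKS = list(range(1, 14))                          # 1 = ace ... 13 = king

    @dataclass
    class Card:
        rank: int
        suit: str

        @property
        def is_red(self) -> bool:
            return self.suit in ("hearts", "diamonds")

    def deal_klondike(deck):
        """Deal the tableau: seven columns of 1..7 cards; the rest becomes the stock pile."""
        columns, pos = [], 0
        for size in range(1, 8):
            columns.append(deck[pos:pos + size])        # only the last card of each column is face up
            pos += size
        return columns, deck[pos:]

    def can_stack(moving: Card, target: Card) -> bool:
        """A tableau move needs descending rank and alternating colors."""
        return moving.rank == target.rank - 1 and moving.is_red != target.is_red

    deck = [Card(rank, suit) for suit in SUITS for rank in RANKS]
    random.shuffle(deck)
    columns, stock = deal_klondike(deck)
    print([len(c) for c in columns], len(stock))             # [1, 2, 3, 4, 5, 6, 7] 24
    print(can_stack(Card(6, "spades"), Card(7, "hearts")))   # True: black six on red seven
    ```

    Running the sketch deals 28 cards into the seven columns, leaves 24 cards in the stock pile, and confirms that a black six may be placed on a red seven.
    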

    -

    There are many variations of game kartu solitaire, each with its own name and rules. Some of the most popular ones are:

    -

    
    -
      -
    • Klondike: This is the most common version of game kartu solitaire, also known as simply solitaire or patience. You can draw one card or three cards from the stock pile at a time, depending on the level of difficulty. You can also choose to have the stock pile reshuffle or not when it runs out of cards.
    • -
    • Spider: This is a more challenging version of game kartu solitaire, where you use two decks of cards and have 10 columns in the tableau. You can only move cards of the same suit, and you need to form sequences from king to ace to move them to the foundation piles. You can also deal 10 cards from the stock pile to each column when you run out of moves.
    • -
    • FreeCell: This is a strategic version of game kartu solitaire, where you use one deck of cards and have four empty cells in addition to the four foundation piles. You can move any card to an empty cell or an empty column, and you can move groups of cards regardless of their order or color. The goal is to move all the cards to the foundation piles in order.
    • -
    • Pyramid: This is a fun version of game kartu solitaire, where you use one deck of cards and arrange them in a pyramid shape, with seven rows and 28 cards. The top card of the pyramid is face up, and the rest are face down. You can remove pairs of cards that add up to 13, such as ace and queen, or two and jack. You can also use a card from the stock pile or the waste pile to form a pair. The goal is to clear the pyramid and the stock pile.
    • -
    -

    The Benefits and Challenges of Solitaire

    -

    Game kartu solitaire is not only a fun and relaxing way to pass the time, but also a great way to exercise your brain and improve your skills. Some of the benefits of playing game kartu solitaire are:

    -
      -
    • It enhances your memory, concentration, logic, and problem-solving abilities.
    • -
    • It boosts your mood, reduces stress, and increases your self-esteem.
    • -
    • It stimulates your creativity, curiosity, and imagination.
    • -
    • It teaches you patience, perseverance, and discipline.
    • -
    -

    However, game kartu solitaire also has some challenges that make it more interesting and rewarding. Some of the challenges of playing game kartu solitaire are:

    -
      -
    • It can be frustrating, especially when you get stuck or run out of moves.
    • -
    • It can be addictive, especially when you want to beat your own score or time.
    • -
    • It can be distracting, especially when you have other tasks or responsibilities to attend to.
    • -
    • It can be lonely, especially when you play by yourself for a long time.
    • -
    -

    How to Download Game Kartu Solitaire on Your Android Device

    -

    If you want to play game kartu solitaire on your Android device, you will need to download an app that offers this card game. There are many apps available on the Google Play Store that let you play game kartu solitaire for free or for a small fee. However, not all apps are created equal, and some may have better features, graphics, sounds, and reviews than others. To help you choose the best app for game kartu solitaire, we have selected three of the most popular and highly rated ones below.

    -

    The Best Apps for Game Kartu Solitaire

    -

    Solitaire - Classic Card Games by MobilityWare

    -

    This app is one of the most downloaded and loved game kartu solitaire apps on the Google Play Store. It offers the classic Klondike solitaire game with various options and customizations. You can choose between one-card or three-card draw, standard or Vegas scoring, left-handed or right-handed mode, portrait or landscape orientation, and more. You can also change the background, card backs, card faces, and sounds according to your preference. The app also features daily challenges, leaderboards, statistics, hints, undo, auto-complete, and offline mode. The app is free to download and play, but it contains ads and in-app purchases.

    -

    Microsoft Solitaire Collection by Microsoft Corporation

    -

    This app is another popular and trusted game kartu solitaire app on the Google Play Store. It offers five different solitaire games in one app: Klondike, Spider, FreeCell, Pyramid, and TriPeaks. You can play each game with different levels of difficulty, themes, and challenges. You can also earn achievements, trophies, badges, and coins as you play. The app also features daily challenges, events, leaderboards, statistics, hints, undo, auto-complete, and cloud sync. The app is free to download and play, but it contains ads and in-app purchases.

    -

    Solitaire - Offline Card Games by SNG Games

    -

    This app is a simple and elegant game kartu solitaire app on the Google Play Store. It offers the classic Klondike solitaire game with smooth and fast gameplay. You can choose between one-card or three-card draw, standard or Vegas scoring, left-handed or right-handed mode, portrait or landscape orientation, and more. You can also change the background, card backs, card faces, and sounds according to your preference. The app also features daily challenges, leaderboards, statistics, hints, undo, auto-complete, and offline mode. The app is free to download and play, but it contains ads and in-app purchases.

How to Install and Play Game Kartu Solitaire on Your Android Device

    Once you have chosen an app for game kartu solitaire from the Google Play Store, you can easily install and play it on your Android device. Here are the steps you need to follow:

Step 1: Choose an App from the Google Play Store

    Open the Google Play Store app on your Android device and search for game kartu solitaire or solitaire. You will see a list of apps that offer this card game. You can browse through the apps and read their descriptions, ratings, reviews, screenshots, and videos. You can also compare the features, sizes, permissions, and prices of the apps. Once you have decided on an app that suits your needs and preferences, tap on the Install button to download it.

Step 2: Download and Install the App on Your Device

    Wait for the app to download on your device. Depending on your internet connection speed and the size of the app, this may take a few seconds or minutes. Once the app is downloaded, it will automatically install on your device. You may need to grant some permissions to the app to access your device's storage, camera, microphone, or other features. You can also choose to create a shortcut icon for the app on your home screen for easy access.

Step 3: Launch the App and Start Playing Solitaire

    Once the app is installed on your device, you can launch it by tapping on its icon. You will see a welcome screen or a tutorial that will guide you through the basics of the app and the game. You can also adjust the settings of the app according to your preference. Then, you can start playing game kartu solitaire by choosing a game mode, a difficulty level, a theme, and a challenge. You can also track your progress, achievements, scores, and statistics as you play. Enjoy!

Conclusion

Game kartu solitaire is a classic card game that you can play on your Android device anytime and anywhere to have fun and challenge your brain. It is a simple yet addictive game that has many variations, benefits, and challenges. You can download game kartu solitaire on your Android device by choosing one of the best apps from the Google Play Store, installing it on your device, and launching it to play. We hope this article has helped you learn more about game kartu solitaire and how to play it on your Android device. If you have any questions or feedback, please feel free to leave a comment below. Happy playing!

FAQs

    Here are some of the frequently asked questions about game kartu solitaire and how to play it on your Android device:

• Q: How do I win game kartu solitaire?
  A: You win game kartu solitaire when you move all the cards from the tableau and the stock pile to the four foundation piles in ascending order by suit.
• Q: How do I score game kartu solitaire?
  A: There are different scoring systems for game kartu solitaire, depending on the app and the variation you choose. Some common scoring systems are standard scoring, where you get points for each card you move to the foundation piles; Vegas scoring, where you start with a negative amount and get points for each card you move to the foundation piles; and time scoring, where you get points based on how fast you complete the game. (A short illustrative sketch of these schemes follows this list.)
• Q: How do I shuffle the cards in game kartu solitaire?
  A: You don't need to shuffle the cards manually in game kartu solitaire, as the app will do it for you automatically. However, some apps may allow you to reshuffle the cards in the stock pile when you run out of moves or cards.
• Q: How do I undo a move in game kartu solitaire?
  A: Most apps for game kartu solitaire have an undo button that lets you undo your last move or multiple moves. However, some apps may limit the number of undos you can use or charge you coins or tokens for using them.
• Q: How do I change the difficulty level in game kartu solitaire?
  A: You can change the difficulty level in game kartu solitaire by choosing different options and variations in the app. Some common options that affect the difficulty level are one-card or three-card draw, standard or Vegas scoring, reshuffle or no reshuffle, and different layouts and rules for different games.
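
To make the scoring answer above more concrete, here is a minimal Python sketch of the two point-based schemes it mentions. The exact point values differ from app to app, so the numbers below are illustrative assumptions rather than the rules of any particular app.

```python
# Minimal sketch of the two point-based solitaire scoring schemes described above.
# The point values are illustrative assumptions; each app uses its own numbers.

def standard_score(cards_on_foundations: int, points_per_card: int = 10) -> int:
    """Standard scoring: earn points for every card moved to a foundation pile."""
    return cards_on_foundations * points_per_card

def vegas_score(cards_on_foundations: int, buy_in: int = -52, points_per_card: int = 5) -> int:
    """Vegas scoring: start from a negative buy-in and earn points per foundation card."""
    return buy_in + cards_on_foundations * points_per_card

if __name__ == "__main__":
    # Example: 30 cards reached the foundations during a game.
    print("Standard:", standard_score(30))   # 300
    print("Vegas:", vegas_score(30))         # -52 + 150 = 98
```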

    \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download RPG Ruinverse Mod APK - Unlimited Money and Unlocked Features.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download RPG Ruinverse Mod APK - Unlimited Money and Unlocked Features.md deleted file mode 100644 index c811de74163fd7a97084c2af4afd249474f858b8..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download RPG Ruinverse Mod APK - Unlimited Money and Unlocked Features.md +++ /dev/null @@ -1,123 +0,0 @@ -

    RPG Ruinverse Mod APK: A Two-Souled Adventure


    If you are looking for a charming and engaging RPG with a unique twist, you might want to check out RPG Ruinverse. This game, developed by Exe Create and published by KEMCO, features a two-souled heroine who can switch between her personalities in battle. You can also enjoy this game with a modded version that gives you unlimited money, gems, and other perks. In this article, we will tell you everything you need to know about RPG Ruinverse and its mod apk.


    rpg ruinverse mod apk


    Download Zip ✸✸✸ https://ssurll.com/2uNRZ1



What is RPG Ruinverse?

    RPG Ruinverse is a retro-inspired turn-based RPG that follows the adventure of Kit, a kind-hearted transporter, and his childhood friend Allie, who has another soul named Alvyn living inside her. Together, they team up with a quirky band of allies to save Allie from a plight that threatens her existence, while discovering the mystery behind the ancient stone monuments. Along the way, they will experience moments of triumph and hardship, humor and drama, romance and friendship.

The story and characters of RPG Ruinverse

    The story of RPG Ruinverse is full of twists and turns, as well as witty dialogue and emotional scenes. You will get to know the characters and their backgrounds, motivations, and personalities. Each character has their own role and skills in battle, as well as their own interactions and relationships with others. Some of the characters you will meet are:

• Kit: The main protagonist, a transporter who can use magic to transport items and people. He is loyal, kind, and brave, but also naive and clumsy. He has had a crush on Allie since childhood.
• Allie: Kit's childhood friend and a hunter who fights monsters. She is cheerful, energetic, and optimistic, but also reckless and impulsive. She has another soul named Alvyn inside her, who can take over her body when she touches Kit.
• Alvyn: The other soul inside Allie, who was once a legendary hero who fought against the demon lord. He is calm, cool, and confident, but also arrogant and sarcastic. He has a mysterious connection to the stone monuments.
• Lexor: An elf who claims to be a physician, but is actually a quack who experiments on people. He is obsessed with Kit and wants to study his transporter magic. He is eccentric, cunning, and manipulative.
• Toto: A beast who pretends to be a merchant, but is actually a swindler who cheats people out of their money. He is greedy, cowardly, and selfish, but also witty and charming. He has a soft spot for Nana.
• Nana: A dwarf who is a skilled warrior and blacksmith. She is strong, brave, and loyal, but also naive and simple-minded. She has a crush on Toto and follows him everywhere.

    The gameplay and features of RPG Ruinverse


    The gameplay of RPG Ruinverse is similar to most turn-based RPGs, with some unique features that make it stand out. Some of the gameplay features are:

• The 3x3 grid battle system: Both allies and enemies are arranged on a 3x3 grid in battle. You can move your characters around the grid to position them for optimal attacks or defense. Different skills have different effects depending on the grid area they target. (A small illustrative sketch of such a grid follows this list.)
• The two-souled system: Allie can switch between her two souls in battle by touching Kit. Each soul has different stats, skills, and elemental affinities. You can use this system to adapt to different situations and enemies.
• The skill tree system: You can customize your characters' skills by allocating points to different branches of the skill tree. You can also unlock passive skills that grant you various bonuses and effects.
• The equipment system: You can equip your characters with different weapons, armors, accessories, and runes. Each piece of equipment has different stats, effects, and elemental attributes. You can also upgrade your equipment by using materials and gold.
• The exploration system: You can explore various locations in the game world, such as towns, dungeons, forests, and ruins. You can interact with NPCs, find hidden items, solve puzzles, and trigger events. You can also encounter enemies and engage in battles.
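
To make the grid idea above a little more concrete, here is a small, purely illustrative Python sketch of a 3x3 battle grid. The cell numbering, unit names, and skill shapes are assumptions made for this example only; they are not taken from the game's actual code.

```python
# Illustrative sketch of a 3x3 battle grid like the one described above.
# The cell numbering, unit names, and skill shapes are assumptions for this
# example only; they are not RPG Ruinverse's actual implementation.
#
# Cells are indexed 0..8, laid out row by row:
#   0 1 2
#   3 4 5
#   6 7 8

def row_cells(row: int) -> list[int]:
    """Cells hit by a skill that sweeps one horizontal row (0, 1, or 2)."""
    return [row * 3 + col for col in range(3)]

def column_cells(col: int) -> list[int]:
    """Cells hit by a skill that pierces one vertical column (0, 1, or 2)."""
    return [r * 3 + col for r in range(3)]

def move(positions: dict[str, int], name: str, cell: int) -> None:
    """Reposition a unit, mirroring how characters can be moved around the grid."""
    if cell in positions.values():
        raise ValueError("cell already occupied")
    positions[name] = cell

if __name__ == "__main__":
    allies = {"Kit": 4, "Allie": 1, "Nana": 7}
    move(allies, "Kit", 3)                    # shift Kit to the middle-left cell
    hit = column_cells(1)                     # a column skill aimed at the centre column
    print(hit)                                # [1, 4, 7]
    print([n for n, p in allies.items() if p in hit])   # ['Allie', 'Nana']
```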

    What is RPG Ruinverse Mod APK?


RPG Ruinverse Mod APK is a modified version of the original RPG Ruinverse game that gives you some advantages and benefits that are not available in the official version.


    The benefits of RPG Ruinverse Mod APK


    Some of the benefits of RPG Ruinverse Mod APK are:

• Unlimited money: You can get unlimited money in the game, which you can use to buy items, equipment, upgrades, and more. You don't have to worry about running out of money or grinding for gold.
• Unlimited gems: You can get unlimited gems in the game, which you can use to unlock premium features, such as extra slots, fast travel, auto-battle, and more. You don't have to spend real money or watch ads to get gems.
• Unlocked skills: You can get all the skills unlocked in the game, which means you don't have to spend points or level up to access them. You can use any skill you want in battle and customize your characters as you wish.
• No ads: You can enjoy the game without any annoying ads that interrupt your gameplay or slow down your device. You can play the game smoothly and without distractions.

    The risks and precautions of RPG Ruinverse Mod APK


    While RPG Ruinverse Mod APK has many benefits, it also has some risks and drawbacks that you should be aware of before downloading and installing it. Some of the risks and precautions of RPG Ruinverse Mod APK are:

• Potential malware: Since RPG Ruinverse Mod APK is not an official version of the game, it may contain malware or viruses that can harm your device or steal your data. You should only download RPG Ruinverse Mod APK from trusted sources and scan it with an antivirus before installing it.
• Possible bans: Since RPG Ruinverse Mod APK is not an official version of the game, it may violate the terms and conditions of the game developer or publisher. You may face bans or penalties if you use RPG Ruinverse Mod APK online or connect it to your social media accounts.
• Reduced challenge: Since RPG Ruinverse Mod APK gives you many advantages and benefits that make the game easier and faster, it may reduce the challenge and fun of the game. You may lose interest or motivation in playing the game if you use RPG Ruinverse Mod APK excessively.
• Compatibility issues: Since RPG Ruinverse Mod APK is not an official version of the game, it may not be compatible with your device or the latest updates of the game. You may experience crashes, glitches, errors, or bugs if you use RPG Ruinverse Mod APK on an incompatible device or version.

    How to download and install RPG Ruinverse Mod APK?


    If you want to download and install RPG Ruinverse Mod APK on your device, you need to follow some simple steps. Here are the steps to download and install RPG Ruinverse Mod APK:

The steps to download and install RPG Ruinverse Mod APK

1. Go to a trusted website that offers RPG Ruinverse Mod APK for download. For example, you can go to [this website] that provides a safe and working link to download RPG Ruinverse Mod APK.
2. Click on the download button on the website and wait for the download to complete. You may need to allow unknown sources in your device settings to download files from third-party sources.
3. Once the download is complete, locate the file on your device storage and tap on it to install it. You may need to grant some permissions to install apps from third-party sources. (A basic file-integrity check is sketched right after these steps.)
4. Wait for the installation to finish and then launch the game from your app drawer or home screen. You can now enjoy RPG Ruinverse Mod APK on your device.
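
Before tapping Install in step 3, it can help to confirm that the file on your device is exactly the file you downloaded. The snippet below is only a rough illustration of that idea, written in Python and run on a computer: the file name and the expected checksum are hypothetical placeholders, and a matching checksum is not a substitute for an antivirus scan or for downloading from a trusted source.

```python
# Minimal sketch: verify a downloaded APK by its SHA-256 checksum.
# "ruinverse_mod.apk" and EXPECTED_SHA256 are hypothetical placeholders;
# compare against a checksum published by the site you downloaded from, if any.
import hashlib

EXPECTED_SHA256 = "put-the-published-checksum-here"

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Read the file in chunks so large APKs do not need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of_file("ruinverse_mod.apk")
    print("SHA-256:", actual)
    print("Matches published checksum:", actual == EXPECTED_SHA256)
```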

    The tips and tricks to enjoy RPG Ruinverse Mod APK


    To make the most out of RPG Ruinverse Mod APK, you can follow some tips and tricks that will enhance your gaming experience. Here are some tips and tricks to enjoy RPG Ruinverse Mod APK:

• Use the two-souled system wisely: You can switch between Allie and Alvyn in battle by touching Kit. Each soul has different strengths and weaknesses, so you should use them according to the situation and the enemy. For example, Allie is good at physical attacks and fire skills, while Alvyn is good at magic attacks and ice skills.
• Experiment with different skills and equipment: You can customize your characters' skills and equipment by using the skill tree system and the equipment system. You can try different combinations of skills and equipment to find the best ones for your playstyle and preferences. For example, you can equip runes that boost your elemental damage or resistance, or skills that heal or buff your allies.
• Explore every location and interact with everything: You can find many hidden items, secrets, and events by exploring every location and interacting with everything. You can also get more information, backstory, and humor by talking to NPCs and reading signs. For example, you can find a hidden dungeon by following a suspicious sign in the forest, or get a funny dialogue by talking to a cat in the town.
• Save your game frequently and use multiple slots: You can save your game anytime and anywhere by using the save menu. You can also use multiple slots to save your game at different points. This way, you can avoid losing your progress or missing something important. For example, you can save your game before entering a boss battle or making a choice that affects the story.

    Conclusion


    RPG Ruinverse is a fun and charming RPG that offers a unique two-souled system, a 3x3 grid battle system, a skill tree system, an equipment system, and an exploration system. You can also enjoy this game with RPG Ruinverse Mod APK, which gives you unlimited money, gems, unlocked skills, and no ads. However, you should also be aware of the risks and precautions of using RPG Ruinverse Mod APK, such as potential malware, possible bans, reduced challenge, and compatibility issues. If you want to download and install RPG Ruinverse Mod APK on your device, you can follow the steps we provided above. We hope this article helped you learn more about RPG Ruinverse and its mod apk.


    FAQs


    Here are some frequently asked questions about RPG Ruinverse and its mod apk:

• Q: Is RPG Ruinverse free to play?
  A: Yes, RPG Ruinverse is free to play with optional in-app purchases. You can download it from the Google Play Store or the App Store.
• Q: Is RPG Ruinverse Mod APK safe to use?
  A: RPG Ruinverse Mod APK is not an official version of the game, so it may not be safe to use. You should only download it from trusted sources and scan it with an antivirus before installing it.
• Q: Can I play RPG Ruinverse offline?
  A: Yes, you can play RPG Ruinverse offline without an internet connection. However, some features may not be available offline, such as cloud save or social media integration.
• Q: Can I play RPG Ruinverse on PC?
  A: Yes, you can play RPG Ruinverse on PC by using an Android emulator, which is software that allows you to run Android apps on your PC. Some of the popular Android emulators are BlueStacks, NoxPlayer, and LDPlayer.
• Q: How long is RPG Ruinverse?
  A: RPG Ruinverse is about 20 hours long if you follow the main story. However, it may take longer if you do side quests, explore locations, or use RPG Ruinverse Mod APK.

    \ No newline at end of file diff --git a/spaces/skf15963/summary/fengshen/data/megatron_dataloader/__init__.py b/spaces/skf15963/summary/fengshen/data/megatron_dataloader/__init__.py deleted file mode 100644 index cd5f898c6bdf89c6cf0243af102d04f6efed86b8..0000000000000000000000000000000000000000 --- a/spaces/skf15963/summary/fengshen/data/megatron_dataloader/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from . import indexed_dataset diff --git a/spaces/skf15963/summary/fengshen/examples/pegasus/data_utils.py b/spaces/skf15963/summary/fengshen/examples/pegasus/data_utils.py deleted file mode 100644 index 879798749bc06d6857c01ec101baf5f3fb61d012..0000000000000000000000000000000000000000 --- a/spaces/skf15963/summary/fengshen/examples/pegasus/data_utils.py +++ /dev/null @@ -1,319 +0,0 @@ -# -*- coding: utf-8 -*- - -import re -import six -import unicodedata -import torch -import rouge -import numpy as np -import random -# from fengshen.examples.pegasus.pegasus_utils import text_segmentate -import sys - -sys.path.append('../../../') - -rouge = rouge.Rouge() - - -is_py2 = six.PY2 - -if not is_py2: - basestring = str - - -def _is_chinese_char(cp): - """Checks whether CP is the codepoint of a CJK character.""" - # This defines a "chinese character" as anything in the CJK Unicode block: - # https://en.wikipedia.org/wiki/CJK_Unified_Ideographs_(Unicode_block) - # - # Note that the CJK Unicode block is NOT all Japanese and Korean characters, - # despite its name. The modern Korean Hangul alphabet is a different block, - # as is Japanese Hiragana and Katakana. Those alphabets are used to write - # space-separated words, so they are not treated specially and handled - # like the all of the other languages. - if ((cp >= 0x4E00 and cp <= 0x9FFF) or (cp >= 0x3400 and cp <= 0x4DBF) - or (cp >= 0x20000 and cp <= 0x2A6DF) - or (cp >= 0x2A700 and cp <= 0x2B73F) - or (cp >= 0x2B740 and cp <= 0x2B81F) - or (cp >= 0x2B820 and cp <= 0x2CEAF) - or (cp >= 0xF900 and cp <= 0xFAFF) - or (cp >= 0x2F800 and cp <= 0x2FA1F)): - return True - - return False - - -def _is_whitespace(char): - """Checks whether `char` is a whitespace character.""" - # \t, \n, and \r are technically control characters but we treat them - # as whitespace since they are generally considered as such. - if char == " " or char == "\t" or char == "\n" or char == "\r": - return True - cat = unicodedata.category(char) - if cat == "Zs": - return True - return False - - -def _is_control(char): - """Checks whether `char` is a control character.""" - # These are technically control characters but we count them as whitespace - # characters. - if char == "\t" or char == "\n" or char == "\r": - return False - cat = unicodedata.category(char) - if cat.startswith("C"): - return True - return False - - -def _is_punctuation(char): - """Checks whether `char` is a punctuation character.""" - cp = ord(char) - # We treat all non-letter/number ASCII as punctuation. - # Characters such as "^", "$", and "`" are not in the Unicode - # Punctuation class but we treat them as punctuation anyways, for - # consistency. 
- if (cp >= 33 and cp <= 47) or (cp >= 58 and cp <= 64) or ( - cp >= 91 and cp <= 96) or (cp >= 123 and cp <= 126): - return True - cat = unicodedata.category(char) - if cat.startswith("P"): - return True - return False - - -def is_string(s): - """判断是否是字符串 - """ - return isinstance(s, basestring) - - -def is_stopwords(word, stopwords): - if word in stopwords: - return True - else: - return False - - -def text_segmentate(text): - en_seg_pattern = '((?:\\!|\\?|\\.|\\n)+(?:\\s)+)' - ch_seg_pattern = '((?:?|!|。|\\n)+)' - try: - text = re.sub(en_seg_pattern, r'\1[SEP]', text) - # print("sub text: ", text) - except Exception as e: - print("input: ", text) - raise e - text = re.sub(ch_seg_pattern, r'\1[SEP]', text) - # print("sub ch text: ", text) - text_list = text.split("[SEP]") - text_list = list(filter(lambda x: len(x) != 0, text_list)) - return text_list - - -def load_stopwords(stopwords_path): - stopwords_dict = {} - with open(stopwords_path, "r") as rf: - for line in rf: - line = line.strip() - if line not in stopwords_dict: - stopwords_dict[line] = 0 - else: - pass - return stopwords_dict - - -def text_process(text, max_length): - """分割文本 - """ - texts = text_segmentate(text) - - result, length = [], 0 - for text in texts: - if length + len(text) > max_length * 1.3 and len(result) >= 3: - yield result - result, length = [], 0 - result.append(text) - length += len(text) - if result and len(result) >= 3: - yield result - - -def text_process_split_long_content(text, max_length): - """分割长文本 - """ - texts = text_segmentate(text) - - result, sentence_num = "", 0 - for text in texts: - if len(text) > 500: - if len(result) > 300 and sentence_num >= 3: - yield result - result, sentence_num = "", 0 - else: - result, sentence_num = "", 0 - continue - else: - if len(result) + len(text) > max_length * 1.1 and sentence_num >= 3: - yield result - result, sentence_num = "", 0 - result += text - sentence_num += 1 - - if result and sentence_num >= 3: - yield result - - -def gather_join(texts, idxs): - """取出对应的text,然后拼接起来 - """ - return ''.join([texts[i] for i in idxs]) - - -def gather_join_f1(texts_token, idsx): - join_texts = [] - for id in idsx: - join_texts.extend(texts_token[id]) - return join_texts - - -def compute_rouge(source, target): - """计算rouge-1、rouge-2、rouge-l - """ - source, target = ' '.join(source), ' '.join(target) - try: - scores = rouge.get_scores(hyps=source, refs=target) - return { - 'rouge-1': scores[0]['rouge-1']['f'], - 'rouge-2': scores[0]['rouge-2']['f'], - 'rouge-l': scores[0]['rouge-l']['f'], - } - except ValueError: - return { - 'rouge-1': 0.0, - 'rouge-2': 0.0, - 'rouge-l': 0.0, - } - - -def remove_stopwords(texts, stopwords_dict): - for i, text in enumerate(texts): - texts[i] = list(filter(lambda x: x not in stopwords_dict, text)) - return texts - - -def pseudo_summary_f1(texts, - stopwords, - tokenizer, - max_length, - rouge_strategy="rouge-l"): - """构建伪标签摘要数据集 - """ - summary_rate = 0.25 - max_length = max_length - 1 - texts_tokens = [] - sentece_idxs_vec = [] - for text in texts: - if len(texts) == 0: - continue - try: - ids = tokenizer.encode(text.strip())[:-1] - except ValueError: - print("error, input : ", text) - raise ValueError - sentece_idxs_vec.append(ids) - tokens = [tokenizer._convert_id_to_token(token) for token in ids] - texts_tokens.append(tokens) - - texts_tokens_rm = remove_stopwords(texts_tokens, stopwords) - source_idxs, target_idxs = list(range(len(texts))), [] - - assert len(texts_tokens) == len(texts) - # truncate_index = 0 - while True: - sims = [] - 
for i in source_idxs: - new_source_idxs = [j for j in source_idxs if j != i] - new_target_idxs = sorted(target_idxs + [i]) - new_source = gather_join_f1(texts_tokens_rm, new_source_idxs) - new_target = gather_join_f1(texts_tokens_rm, new_target_idxs) - sim = compute_rouge(new_source, new_target)[rouge_strategy] - sims.append(sim) - new_idx = source_idxs[np.argmax(sims)] - del sims - source_idxs.remove(new_idx) - target_idxs = sorted(target_idxs + [new_idx]) - source = gather_join(texts, source_idxs) - target = gather_join(texts, target_idxs) - try: - if (len(source_idxs) == 1 - or 1.0 * len(target) / len(source) > summary_rate): - break - except ZeroDivisionError as e: - print(e.meesage) - print(texts) - print("source: ", source) - print("target: ", target) - - if len(source) < len(target): - source, target = target, source - source_idxs, target_idxs = target_idxs, source_idxs - - return sentece_idxs_vec, source, target, source_idxs, target_idxs - - -def get_input_mask(sentence_id_vec, indexs): - target_idxs = [] - input_idxs = [] - kMaskSentenceTokenId = 2 - kEosTokenId = 1 - mask_sentence_options_cumulative_prob = [0.9, 0.9, 1, 1] - for index in indexs: - target_idxs.extend(sentence_id_vec[index]) - choice = random.uniform(0, 1) - if choice < mask_sentence_options_cumulative_prob[0]: - # print("mask index: ", index) - sentence_id_vec[index] = [kMaskSentenceTokenId] - elif choice < mask_sentence_options_cumulative_prob[1]: - # print("replace index: ", index) - replace_id = random.randint(0, len(sentence_id_vec)) - sentence_id_vec[index] = sentence_id_vec[replace_id] - elif choice < mask_sentence_options_cumulative_prob[2]: - pass - else: - sentence_id_vec[index] = [] - - target_idxs.append(kEosTokenId) - # print(sentence_id_vec) - for index, sentence_id in enumerate(sentence_id_vec): - # print(index, sentence_id) - if len(sentence_id) == 0: - continue - input_idxs.extend(sentence_id_vec[index]) - - input_idxs.append(kEosTokenId) - return input_idxs, target_idxs - - -def shift_tokens_right(input_ids: torch.Tensor, pad_token_id: int, - decoder_start_token_id: int): - """ - Shift input ids one token to the right. - """ - shifted_input_ids = input_ids.new_zeros(input_ids.shape) - shifted_input_ids[:, 1:] = input_ids[:, :-1].clone() - shifted_input_ids[:, 0] = decoder_start_token_id - - if pad_token_id is None: - raise ValueError("self.model.config.pad_token_id has to be defined.") - # replace possible -100 values in labels by `pad_token_id` - shifted_input_ids.masked_fill_(shifted_input_ids == -100, pad_token_id) - - return shifted_input_ids - - -def padding_to_maxlength(ids, max_length, pad_id): - cur_len = len(ids) - len_diff = max_length - cur_len - return ids + [pad_id] * len_diff, [1] * cur_len + [0] * len_diff diff --git a/spaces/skf15963/summary/fengshen/utils/convert_diffusers_to_original_stable_diffusion.py b/spaces/skf15963/summary/fengshen/utils/convert_diffusers_to_original_stable_diffusion.py deleted file mode 100644 index 8515468119ada70a1c6db0f5c6a6ee91c8a7824f..0000000000000000000000000000000000000000 --- a/spaces/skf15963/summary/fengshen/utils/convert_diffusers_to_original_stable_diffusion.py +++ /dev/null @@ -1,235 +0,0 @@ -# coding=utf8 -# Script for converting a HF Diffusers saved pipeline to a Stable Diffusion checkpoint. -# *Only* converts the UNet, VAE, and Text Encoder. -# Does not convert optimizer state or any other thing. 
- -import argparse -import os.path as osp - -import torch - - -# =================# -# UNet Conversion # -# =================# - -unet_conversion_map = [ - # (stable-diffusion, HF Diffusers) - ("time_embed.0.weight", "time_embedding.linear_1.weight"), - ("time_embed.0.bias", "time_embedding.linear_1.bias"), - ("time_embed.2.weight", "time_embedding.linear_2.weight"), - ("time_embed.2.bias", "time_embedding.linear_2.bias"), - ("input_blocks.0.0.weight", "conv_in.weight"), - ("input_blocks.0.0.bias", "conv_in.bias"), - ("out.0.weight", "conv_norm_out.weight"), - ("out.0.bias", "conv_norm_out.bias"), - ("out.2.weight", "conv_out.weight"), - ("out.2.bias", "conv_out.bias"), -] - -unet_conversion_map_resnet = [ - # (stable-diffusion, HF Diffusers) - ("in_layers.0", "norm1"), - ("in_layers.2", "conv1"), - ("out_layers.0", "norm2"), - ("out_layers.3", "conv2"), - ("emb_layers.1", "time_emb_proj"), - ("skip_connection", "conv_shortcut"), -] - -unet_conversion_map_layer = [] -# hardcoded number of downblocks and resnets/attentions... -# would need smarter logic for other networks. -for i in range(4): - # loop over downblocks/upblocks - - for j in range(2): - # loop over resnets/attentions for downblocks - hf_down_res_prefix = f"down_blocks.{i}.resnets.{j}." - sd_down_res_prefix = f"input_blocks.{3*i + j + 1}.0." - unet_conversion_map_layer.append((sd_down_res_prefix, hf_down_res_prefix)) - - if i < 3: - # no attention layers in down_blocks.3 - hf_down_atn_prefix = f"down_blocks.{i}.attentions.{j}." - sd_down_atn_prefix = f"input_blocks.{3*i + j + 1}.1." - unet_conversion_map_layer.append((sd_down_atn_prefix, hf_down_atn_prefix)) - - for j in range(3): - # loop over resnets/attentions for upblocks - hf_up_res_prefix = f"up_blocks.{i}.resnets.{j}." - sd_up_res_prefix = f"output_blocks.{3*i + j}.0." - unet_conversion_map_layer.append((sd_up_res_prefix, hf_up_res_prefix)) - - if i > 0: - # no attention layers in up_blocks.0 - hf_up_atn_prefix = f"up_blocks.{i}.attentions.{j}." - sd_up_atn_prefix = f"output_blocks.{3*i + j}.1." - unet_conversion_map_layer.append((sd_up_atn_prefix, hf_up_atn_prefix)) - - if i < 3: - # no downsample in down_blocks.3 - hf_downsample_prefix = f"down_blocks.{i}.downsamplers.0.conv." - sd_downsample_prefix = f"input_blocks.{3*(i+1)}.0.op." - unet_conversion_map_layer.append((sd_downsample_prefix, hf_downsample_prefix)) - - # no upsample in up_blocks.3 - hf_upsample_prefix = f"up_blocks.{i}.upsamplers.0." - sd_upsample_prefix = f"output_blocks.{3*i + 2}.{1 if i == 0 else 2}." - unet_conversion_map_layer.append((sd_upsample_prefix, hf_upsample_prefix)) - -hf_mid_atn_prefix = "mid_block.attentions.0." -sd_mid_atn_prefix = "middle_block.1." -unet_conversion_map_layer.append((sd_mid_atn_prefix, hf_mid_atn_prefix)) - -for j in range(2): - hf_mid_res_prefix = f"mid_block.resnets.{j}." - sd_mid_res_prefix = f"middle_block.{2*j}." - unet_conversion_map_layer.append((sd_mid_res_prefix, hf_mid_res_prefix)) - - -def convert_unet_state_dict(unet_state_dict): - # buyer beware: this is a *brittle* function, - # and correct output requires that all of these pieces interact in - # the exact order in which I have arranged them. 
- mapping = {k: k for k in unet_state_dict.keys()} - for sd_name, hf_name in unet_conversion_map: - mapping[hf_name] = sd_name - for k, v in mapping.items(): - if "resnets" in k: - for sd_part, hf_part in unet_conversion_map_resnet: - v = v.replace(hf_part, sd_part) - mapping[k] = v - for k, v in mapping.items(): - for sd_part, hf_part in unet_conversion_map_layer: - v = v.replace(hf_part, sd_part) - mapping[k] = v - new_state_dict = {v: unet_state_dict[k] for k, v in mapping.items()} - return new_state_dict - - -# ================# -# VAE Conversion # -# ================# - -vae_conversion_map = [ - # (stable-diffusion, HF Diffusers) - ("nin_shortcut", "conv_shortcut"), - ("norm_out", "conv_norm_out"), - ("mid.attn_1.", "mid_block.attentions.0."), -] - -for i in range(4): - # down_blocks have two resnets - for j in range(2): - hf_down_prefix = f"encoder.down_blocks.{i}.resnets.{j}." - sd_down_prefix = f"encoder.down.{i}.block.{j}." - vae_conversion_map.append((sd_down_prefix, hf_down_prefix)) - - if i < 3: - hf_downsample_prefix = f"down_blocks.{i}.downsamplers.0." - sd_downsample_prefix = f"down.{i}.downsample." - vae_conversion_map.append((sd_downsample_prefix, hf_downsample_prefix)) - - hf_upsample_prefix = f"up_blocks.{i}.upsamplers.0." - sd_upsample_prefix = f"up.{3-i}.upsample." - vae_conversion_map.append((sd_upsample_prefix, hf_upsample_prefix)) - - # up_blocks have three resnets - # also, up blocks in hf are numbered in reverse from sd - for j in range(3): - hf_up_prefix = f"decoder.up_blocks.{i}.resnets.{j}." - sd_up_prefix = f"decoder.up.{3-i}.block.{j}." - vae_conversion_map.append((sd_up_prefix, hf_up_prefix)) - -# this part accounts for mid blocks in both the encoder and the decoder -for i in range(2): - hf_mid_res_prefix = f"mid_block.resnets.{i}." - sd_mid_res_prefix = f"mid.block_{i+1}." 
- vae_conversion_map.append((sd_mid_res_prefix, hf_mid_res_prefix)) - - -vae_conversion_map_attn = [ - # (stable-diffusion, HF Diffusers) - ("norm.", "group_norm."), - ("q.", "query."), - ("k.", "key."), - ("v.", "value."), - ("proj_out.", "proj_attn."), -] - - -def reshape_weight_for_sd(w): - # convert HF linear weights to SD conv2d weights - return w.reshape(*w.shape, 1, 1) - - -def convert_vae_state_dict(vae_state_dict): - mapping = {k: k for k in vae_state_dict.keys()} - for k, v in mapping.items(): - for sd_part, hf_part in vae_conversion_map: - v = v.replace(hf_part, sd_part) - mapping[k] = v - for k, v in mapping.items(): - if "attentions" in k: - for sd_part, hf_part in vae_conversion_map_attn: - v = v.replace(hf_part, sd_part) - mapping[k] = v - new_state_dict = {v: vae_state_dict[k] for k, v in mapping.items()} - weights_to_convert = ["q", "k", "v", "proj_out"] - for k, v in new_state_dict.items(): - for weight_name in weights_to_convert: - if f"mid.attn_1.{weight_name}.weight" in k: - print(f"Reshaping {k} for SD format") - new_state_dict[k] = reshape_weight_for_sd(v) - return new_state_dict - - -# =========================# -# Text Encoder Conversion # -# =========================# -# pretty much a no-op - -# here we need transform it to support -def convert_text_enc_state_dict(text_enc_dict): - return text_enc_dict - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - - parser.add_argument("--model_path", default='', type=str, required=True, help="Path to the model to convert.") - parser.add_argument("--checkpoint_path", default='', type=str, required=True, help="Path to the output model.") - parser.add_argument("--half", action="store_true", help="Save weights in half precision.") - - args = parser.parse_args() - - assert args.model_path is not None, "Must provide a model path!" - - assert args.checkpoint_path is not None, "Must provide a checkpoint path!" - - unet_path = osp.join(args.model_path, "unet", "diffusion_pytorch_model.bin") - vae_path = osp.join(args.model_path, "vae", "diffusion_pytorch_model.bin") - text_enc_path = osp.join(args.model_path, "text_encoder", "pytorch_model.bin") - - # Convert the UNet model - unet_state_dict = torch.load(unet_path, map_location="cpu") - unet_state_dict = convert_unet_state_dict(unet_state_dict) - unet_state_dict = {"model.diffusion_model." + k: v for k, v in unet_state_dict.items()} - - # Convert the VAE model - vae_state_dict = torch.load(vae_path, map_location="cpu") - vae_state_dict = convert_vae_state_dict(vae_state_dict) - vae_state_dict = {"first_stage_model." + k: v for k, v in vae_state_dict.items()} - - # Convert the text encoder model - text_enc_dict = torch.load(text_enc_path, map_location="cpu") - text_enc_dict = convert_text_enc_state_dict(text_enc_dict) - text_enc_dict = {"cond_stage_model.transformer." 
+ k: v for k, v in text_enc_dict.items()} - - # Put together new checkpoint - state_dict = {**unet_state_dict, **vae_state_dict, **text_enc_dict} - if args.half: - state_dict = {k: v.half() for k, v in state_dict.items()} - state_dict = {"state_dict": state_dict} - torch.save(state_dict, args.checkpoint_path) diff --git a/spaces/sklkd93/CodeFormer/CodeFormer/facelib/detection/yolov5face/utils/datasets.py b/spaces/sklkd93/CodeFormer/CodeFormer/facelib/detection/yolov5face/utils/datasets.py deleted file mode 100644 index e672b136f56fd6b05038e24377908361a54fe519..0000000000000000000000000000000000000000 --- a/spaces/sklkd93/CodeFormer/CodeFormer/facelib/detection/yolov5face/utils/datasets.py +++ /dev/null @@ -1,35 +0,0 @@ -import cv2 -import numpy as np - - -def letterbox(img, new_shape=(640, 640), color=(114, 114, 114), auto=True, scale_fill=False, scaleup=True): - # Resize image to a 32-pixel-multiple rectangle https://github.com/ultralytics/yolov3/issues/232 - shape = img.shape[:2] # current shape [height, width] - if isinstance(new_shape, int): - new_shape = (new_shape, new_shape) - - # Scale ratio (new / old) - r = min(new_shape[0] / shape[0], new_shape[1] / shape[1]) - if not scaleup: # only scale down, do not scale up (for better test mAP) - r = min(r, 1.0) - - # Compute padding - ratio = r, r # width, height ratios - new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r)) - dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1] # wh padding - if auto: # minimum rectangle - dw, dh = np.mod(dw, 64), np.mod(dh, 64) # wh padding - elif scale_fill: # stretch - dw, dh = 0.0, 0.0 - new_unpad = (new_shape[1], new_shape[0]) - ratio = new_shape[1] / shape[1], new_shape[0] / shape[0] # width, height ratios - - dw /= 2 # divide padding into 2 sides - dh /= 2 - - if shape[::-1] != new_unpad: # resize - img = cv2.resize(img, new_unpad, interpolation=cv2.INTER_LINEAR) - top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1)) - left, right = int(round(dw - 0.1)), int(round(dw + 0.1)) - img = cv2.copyMakeBorder(img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color) # add border - return img, ratio, (dw, dh) diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/wav2vec/unsupervised/tasks/unpaired_audio_text.py b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/wav2vec/unsupervised/tasks/unpaired_audio_text.py deleted file mode 100644 index 5f292528f80d6bb51f16a4324d97342d28fce942..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/wav2vec/unsupervised/tasks/unpaired_audio_text.py +++ /dev/null @@ -1,447 +0,0 @@ -# Copyright (c) 2017-present, Facebook, Inc. -# All rights reserved. -# -# This source code is licensed under the license found in the LICENSE file in -# the root directory of this source tree. An additional grant of patent rights -# can be found in the PATENTS file in the same directory. 
- -from dataclasses import dataclass, field -import logging -import math -import os -from typing import Optional -import torch - -from fairseq.logging import metrics -from fairseq.tasks import FairseqTask, register_task -from ..data import ExtractedFeaturesDataset, RandomInputDataset - -from fairseq.data import ( - Dictionary, - data_utils, - StripTokenDataset, -) -from fairseq.dataclass import FairseqDataclass -from fairseq.distributed.utils import get_data_parallel_world_size -from omegaconf import MISSING - -from examples.speech_recognition.kaldi.kaldi_decoder import ( - KaldiDecoder, - KaldiDecoderConfig, -) - - -logger = logging.getLogger(__name__) - - -@dataclass -class DecodingConfig(FairseqDataclass): - kenlm_path: Optional[str] = None - lm_weight: float = 0 - blank_weight: float = 0 - - -@dataclass -class UnpairedAudioTextConfig(FairseqDataclass): - data: str = field( - default=MISSING, metadata={"help": "path to data directory containing audio"} - ) - text_data: str = field( - default=MISSING, metadata={"help": "path to data directory containing text"} - ) - max_length: Optional[int] = None - labels: Optional[str] = field( - default=None, - metadata={"help": "extension of the label file to load, used for fine-tuning"}, - ) - unfiltered: bool = field( - default=False, metadata={"help": "load data with _unfiltered suffix"} - ) - ctc_eval: bool = field( - default=False, metadata={"help": "eval UER as if computed by CTC"} - ) - sort_by_length: bool = field( - default=True, metadata={"help": "sort examples by length of audio timesteps"} - ) - shuffle: bool = field(default=True, metadata={"help": "shuffle examples"}) - append_eos: bool = field(default=False, metadata={"help": "append eos"}) - uppercase: Optional[bool] = field( - default=False, metadata={"help": "uppercase for LM score computation"} - ) - skipwords: Optional[str] = field( - default="", - metadata={ - "help": "comma-separated words to be removed for LM score computation" - }, - ) - kenlm_path: Optional[str] = None - vocab_usage_power: float = 2 - - word_decoder_config: Optional[KaldiDecoderConfig] = None - word_kenlm_path: Optional[str] = None - - decoding_config: DecodingConfig = DecodingConfig() - - -@register_task("unpaired_audio_text", dataclass=UnpairedAudioTextConfig) -class UnpairedAudioText(FairseqTask): - """ """ - - cfg: UnpairedAudioTextConfig - - def __init__( - self, - cfg: UnpairedAudioTextConfig, - source_dictionary=None, - target_dictionary=None, - ): - super().__init__(cfg) - - self._target_dictionary = target_dictionary - self._source_dictionary = source_dictionary - self.num_symbols = ( - len([s for s in target_dictionary.symbols if not s.startswith("madeup")]) - - target_dictionary.nspecial - ) - self.sil_id = ( - target_dictionary.index("") if "" in target_dictionary else -1 - ) - self.kenlm = None - if cfg.kenlm_path is not None: - import kenlm - - self.kenlm = kenlm.Model(cfg.kenlm_path) - - self.word_kenlm = None - if cfg.word_kenlm_path is not None: - import kenlm - - self.word_kenlm = kenlm.Model(cfg.word_kenlm_path) - - self.uppercase = cfg.uppercase - self.skipwords = set(cfg.skipwords.split(",")) - - def str_postprocess(s): - s = " ".join(w for w in s.split() if w not in self.skipwords) - s = s.upper() if self.uppercase else s - return s - - self.str_postprocess = str_postprocess - self.compute_lm_score = lambda s: self.kenlm.score(self.str_postprocess(s)) - - self.compute_word_score = None - if cfg.word_decoder_config is not None: - self.kaldi_decoder = KaldiDecoder(cfg.word_decoder_config, 
beam=10) - - def compute_word_score(logits, padding): - res = self.kaldi_decoder.decode(logits, padding) - for r in res: - r = r.result() - assert len(r) == 1 - r = r[0] - yield r["score"], r["words"] - - self.compute_word_score = compute_word_score - - @classmethod - def setup_task(cls, cfg: UnpairedAudioTextConfig, **kwargs): - """Setup the task (e.g., load dictionaries). - - Args: - cfg (AudioPretrainingConfig): configuration of this task - """ - - dict_path = os.path.join(cfg.text_data, "dict.txt") - if os.path.exists(dict_path): - target_dictionary = Dictionary.load(dict_path) - else: - dict_path = os.path.join(cfg.data, f"dict.{cfg.labels}.txt") - target_dictionary = Dictionary.load(dict_path) - - return cls(cfg, target_dictionary=target_dictionary) - - def optimizer_step(self, optimizer, model, update_num): - if hasattr(model, "get_groups_for_update"): - groups = model.get_groups_for_update(update_num) - optimizer.step(groups={groups}) - else: - optimizer.step() - - def valid_step(self, sample, model, criterion): - res = model( - **sample["net_input"], - dense_x_only=True, - ) - - dense_x = res["logits"] - padding_mask = res["padding_mask"] - - word_scores = None - if self.compute_word_score is not None: - word_scores = self.compute_word_score(dense_x.cpu(), padding_mask.cpu()) - - z = dense_x.argmax(-1) - z[padding_mask] = self.target_dictionary.pad() - - vocab_seen = torch.zeros(self.num_symbols, dtype=torch.bool) - - import editdistance - - c_err = 0 - c_len = 0 - pred_c_len = 0 - lm_score_sum = 0 - for i, (x, t, id) in enumerate( - zip( - z, - sample["target"] if "target" in sample else [None] * len(z), - sample["id"], - ) - ): - - if t is not None: - t = t[(t >= self.target_dictionary.nspecial)] - x = x[ - (x >= self.target_dictionary.nspecial) - & (x < (self.num_symbols + self.target_dictionary.nspecial)) - ] - if self.sil_id >= 0: - x = x[x != self.sil_id] - - vocab_seen[x - self.target_dictionary.nspecial] = True - - pred_units_arr = x - if self.cfg.ctc_eval: - pred_units_arr = pred_units_arr.unique_consecutive() - pred_units_arr = pred_units_arr[pred_units_arr != 0] - - if id == 0: - if t is not None: - logger.info(f"REF: {self.target_dictionary.string(t)}") - logger.info(f"HYP: {self.target_dictionary.string(pred_units_arr)}") - - if self.kenlm is not None: - if t is not None: - ref_lm_s = self.compute_lm_score( - self.target_dictionary.string(t) - ) - logger.info( - f"LM [REF]: {ref_lm_s}, {math.pow(10, -ref_lm_s / (len(t) + 1))}" - ) - - hyp_lm_s = self.compute_lm_score( - self.target_dictionary.string(pred_units_arr) - ) - logger.info( - f"LM [HYP]: {hyp_lm_s}, {math.pow(10, -hyp_lm_s / (len(pred_units_arr) + 1))}" - ) - - pred_units_arr = pred_units_arr.tolist() - - pred_c_len += len(pred_units_arr) - - if t is not None: - t = t.tolist() - c_err += editdistance.eval(pred_units_arr, t) - c_len += len(t) - else: - c_len = pred_c_len - - if self.kenlm is not None: - pred_str = self.target_dictionary.string(pred_units_arr) - lm_score = self.compute_lm_score(pred_str) - lm_score_sum += lm_score - - kaldi_score_sum = 0 - word_lm_sum = 0 - num_words = 0 - if word_scores is not None: - for score, words in word_scores: - kaldi_score_sum += score - num_words += len(words) - if self.word_kenlm is not None: - word_lm_sum += self.kenlm.score(" ".join(words)) - - try: - world_size = get_data_parallel_world_size() - except: - world_size = 1 - - logging_output = { - "loss": c_err, - "_num_char_errors": c_err, - "_num_chars": c_len, - "_num_pred_chars": pred_c_len, - "ntokens": c_len, 
- "nsentences": z.size(0), - "sample_size": c_len, - "_world_size": world_size, - "_lm_score_sum": lm_score_sum, - "_kaldi_score_sum": kaldi_score_sum, - "_word_lm_sum": word_lm_sum, - "_num_words": num_words, - "_vocab_seen": vocab_seen, - } - - return c_err, c_len, logging_output - - def load_dataset(self, split: str, task_cfg: FairseqDataclass = None, **kwargs): - data_path = self.cfg.data - task_cfg = task_cfg or self.cfg - - has_unpaired_text = os.path.exists( - os.path.join(self.cfg.text_data, f"{split}.idx") - ) - - self.datasets[split] = ExtractedFeaturesDataset( - path=data_path, - split=split, - min_length=3, - max_length=task_cfg.max_length, - labels=None if has_unpaired_text else task_cfg.labels, - label_dict=self.target_dictionary, - shuffle=getattr(task_cfg, "shuffle", True), - sort_by_length=task_cfg.sort_by_length, - ) - - logger.info(f"split {split} has unpaired text? {has_unpaired_text}") - if has_unpaired_text: - text_dataset = data_utils.load_indexed_dataset( - os.path.join(self.cfg.text_data, split), self.target_dictionary - ) - text_dataset = StripTokenDataset(text_dataset, self.target_dictionary.eos()) - self.datasets[split] = RandomInputDataset( - self.datasets[split], - text_dataset, - ["random_label"], - add_to_input=True, - pad_idx=self.target_dictionary.pad(), - ) - - @property - def source_dictionary(self): - return self._source_dictionary - - @property - def target_dictionary(self): - """Return the :class:`~fairseq.data.Dictionary` for the language - model.""" - return self._target_dictionary - - def max_positions(self): - """Maximum input length supported by the encoder.""" - return None - - def reduce_metrics(self, logging_outputs, criterion): - super().reduce_metrics(logging_outputs, criterion) - - zero = torch.scalar_tensor(0.0) - num_char_errors = sum( - log.get("_num_char_errors", zero) for log in logging_outputs - ) - num_chars = sum(log.get("_num_chars", zero) for log in logging_outputs) - num_word_errors = sum( - log.get("_num_word_errors", zero) for log in logging_outputs - ) - num_words = sum(log.get("_num_words", zero) for log in logging_outputs) - num_pred_chars = sum( - log.get("_num_pred_chars", zero) for log in logging_outputs - ) - - lm_score_sum = sum(log.get("_lm_score_sum", zero) for log in logging_outputs) - vocab_seen = ( - sum(log.get("_vocab_seen", zero) for log in logging_outputs) - .bool() - .sum() - .item() - ) - kaldi_score_sum = sum( - log.get("_kaldi_score_sum", zero) for log in logging_outputs - ) - word_lm_sum = sum(log.get("_word_lm_sum", zero) for log in logging_outputs) - - metrics.log_scalar_sum("_num_char_errors", num_char_errors) - metrics.log_scalar_sum("_num_chars", num_chars) - metrics.log_scalar_sum("_num_word_errors", num_word_errors) - metrics.log_scalar_sum("_num_words", num_words) - - metrics.log_scalar_sum("lm_score_sum", lm_score_sum) - metrics.log_scalar_sum("num_pred_chars", num_pred_chars) - - if self.cfg.word_kenlm_path is not None: - metrics.log_scalar_sum("kaldi_score_sum", kaldi_score_sum) - metrics.log_scalar_sum("word_lm_sum", word_lm_sum) - - if num_chars > 0: - metrics.log_derived( - "uer", - lambda meters: meters["_num_char_errors"].sum - * 100.0 - / meters["_num_chars"].sum - if meters["_num_chars"].sum > 0 - else float("nan"), - ) - - if lm_score_sum < 0 and vocab_seen > 0: - metrics.log_scalar("vocab_seen_pct", vocab_seen / self.num_symbols) - - metrics.log_derived( - "weighted_lm_ppl", - lambda meters: math.pow( - 10, - -meters["lm_score_sum"].sum - / ( - meters["num_pred_chars"].sum + 
meters["nsentences"].sum - ), # account for - ) - / meters["vocab_seen_pct"].avg ** self.cfg.vocab_usage_power, - ) - - metrics.log_derived( - "lm_ppl", - lambda meters: math.pow( - 10, - -meters["lm_score_sum"].sum - / ( - meters["num_pred_chars"].sum + meters["nsentences"].sum - ), # account for - ), - ) - else: - metrics.log_derived("weighted_lm_ppl", lambda meters: float("inf")) - - if num_words > 0: - if word_lm_sum != 0: - metrics.log_derived( - "word_lm_ppl", - lambda meters: math.pow( - 10, - -meters["word_lm_sum"].sum - / ( - meters["_num_words"].sum + meters["nsentences"].sum - ), # account for - ), - ) - metrics.log_derived( - "weighted_word_lm_ppl", - lambda meters: math.pow( - 10, - -meters["word_lm_sum"].sum - / ( - meters["_num_words"].sum + meters["nsentences"].sum - ), # account for - ) - / meters["vocab_seen_pct"].avg ** self.cfg.vocab_usage_power, - ) - - if self.cfg.word_kenlm_path is not None: - metrics.log_derived( - "kaldi_score", - lambda meters: meters["kaldi_score_sum"].sum - / meters["nsentences"].sum, - ) - - def build_model(self, cfg: FairseqDataclass): - model = super().build_model(cfg) - - return model diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/models/hubert/hubert.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/models/hubert/hubert.py deleted file mode 100644 index 232a5e402a146023e5c93f3c2574ecec98faf9d5..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/models/hubert/hubert.py +++ /dev/null @@ -1,563 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -from typing import Dict, List, Optional, Tuple - -import numpy as np - -import torch -import torch.nn as nn -from dataclasses import dataclass, field -from fairseq import utils -from fairseq.data.data_utils import compute_mask_indices -from fairseq.data.dictionary import Dictionary -from fairseq.dataclass import ChoiceEnum, FairseqDataclass -from fairseq.models import BaseFairseqModel, register_model -from fairseq.models.wav2vec.wav2vec2 import ( - ConvFeatureExtractionModel, - TransformerEncoder, -) -from fairseq.modules import GradMultiply, LayerNorm -from fairseq.tasks.hubert_pretraining import ( - HubertPretrainingConfig, - HubertPretrainingTask, -) -from omegaconf import II - -logger = logging.getLogger(__name__) - -EXTRACTOR_MODE_CHOICES = ChoiceEnum(["default", "layer_norm"]) -MASKING_DISTRIBUTION_CHOICES = ChoiceEnum( - ["static", "uniform", "normal", "poisson"] -) - - -@dataclass -class HubertConfig(FairseqDataclass): - label_rate: int = II("task.label_rate") - - extractor_mode: EXTRACTOR_MODE_CHOICES = field( - default="default", - metadata={ - "help": "mode for feature extractor. 
default has a single group " - "norm with d groups in the first conv block, whereas layer_norm " - "has layer norms in every block (meant to use with normalize=True)" - }, - ) - encoder_layers: int = field( - default=12, metadata={"help": "num encoder layers in the transformer"} - ) - encoder_embed_dim: int = field( - default=768, metadata={"help": "encoder embedding dimension"} - ) - encoder_ffn_embed_dim: int = field( - default=3072, metadata={"help": "encoder embedding dimension for FFN"} - ) - encoder_attention_heads: int = field( - default=12, metadata={"help": "num encoder attention heads"} - ) - activation_fn: ChoiceEnum(utils.get_available_activation_fns()) = field( - default="gelu", metadata={"help": "activation function to use"} - ) - - # dropouts - dropout: float = field( - default=0.1, - metadata={"help": "dropout probability for the transformer"}, - ) - attention_dropout: float = field( - default=0.1, - metadata={"help": "dropout probability for attention weights"}, - ) - activation_dropout: float = field( - default=0.0, - metadata={"help": "dropout probability after activation in FFN"}, - ) - encoder_layerdrop: float = field( - default=0.0, - metadata={"help": "probability of dropping a tarnsformer layer"}, - ) - dropout_input: float = field( - default=0.0, - metadata={"help": "dropout to apply to the input (after feat extr)"}, - ) - dropout_features: float = field( - default=0.0, - metadata={ - "help": "dropout to apply to the features (after feat extr)" - }, - ) - - final_dim: int = field( - default=0, - metadata={ - "help": "project final representations and targets to this many " - "dimensions. set to encoder_embed_dim is <= 0" - }, - ) - untie_final_proj: bool = field( - default=False, - metadata={"help": "use separate projection for each target"}, - ) - layer_norm_first: bool = field( - default=False, - metadata={"help": "apply layernorm first in the transformer"}, - ) - conv_feature_layers: str = field( - default="[(512,10,5)] + [(512,3,2)] * 4 + [(512,2,2)] * 2", - metadata={ - "help": "string describing convolutional feature extraction " - "layers in form of a python list that contains " - "[(dim, kernel_size, stride), ...]" - }, - ) - conv_bias: bool = field( - default=False, metadata={"help": "include bias in conv encoder"} - ) - logit_temp: float = field( - default=0.1, metadata={"help": "temperature to divide logits by"} - ) - target_glu: bool = field( - default=False, metadata={"help": "adds projection + glu to targets"} - ) - feature_grad_mult: float = field( - default=1.0, - metadata={"help": "multiply feature extractor var grads by this"}, - ) - - # masking - mask_length: int = field(default=10, metadata={"help": "mask length"}) - mask_prob: float = field( - default=0.65, - metadata={"help": "probability of replacing a token with mask"}, - ) - mask_selection: MASKING_DISTRIBUTION_CHOICES = field( - default="static", metadata={"help": "how to choose mask length"} - ) - mask_other: float = field( - default=0, - metadata={ - "help": "secondary mask argument " - "(used for more complex distributions), " - "see help in compute_mask_indicesh" - }, - ) - no_mask_overlap: bool = field( - default=False, metadata={"help": "whether to allow masks to overlap"} - ) - mask_min_space: int = field( - default=1, - metadata={ - "help": "min space between spans (if no overlap is enabled)" - }, - ) - - # channel masking - mask_channel_length: int = field( - default=10, - metadata={"help": "length of the mask for features (channels)"}, - ) - mask_channel_prob: float = field( - 
default=0.0, - metadata={"help": "probability of replacing a feature with 0"}, - ) - mask_channel_selection: MASKING_DISTRIBUTION_CHOICES = field( - default="static", - metadata={"help": "how to choose mask length for channel masking"}, - ) - mask_channel_other: float = field( - default=0, - metadata={ - "help": "secondary mask argument " - "(used for more complex distributions), " - "see help in compute_mask_indicesh" - }, - ) - no_mask_channel_overlap: bool = field( - default=False, - metadata={"help": "whether to allow channel masks to overlap"}, - ) - mask_channel_min_space: int = field( - default=1, - metadata={ - "help": "min space between spans (if no overlap is enabled)" - }, - ) - - # positional embeddings - conv_pos: int = field( - default=128, - metadata={ - "help": "number of filters for convolutional positional embeddings" - }, - ) - conv_pos_groups: int = field( - default=16, - metadata={ - "help": "number of groups for convolutional positional embedding" - }, - ) - - latent_temp: Tuple[float, float, float] = field( - default=(2, 0.5, 0.999995), - metadata={"help": "legacy (to be removed)"}, - ) - - # loss computation - skip_masked: bool = field( - default=False, - metadata={"help": "skip computing losses over masked frames"}, - ) - skip_nomask: bool = field( - default=False, - metadata={"help": "skip computing losses over unmasked frames"}, - ) - - -@register_model("hubert", dataclass=HubertConfig) -class HubertModel(BaseFairseqModel): - def __init__( - self, - cfg: HubertConfig, - task_cfg: HubertPretrainingConfig, - dictionaries: List[Dictionary], - ) -> None: - super().__init__() - logger.info(f"HubertModel Config: {cfg}") - - feature_enc_layers = eval(cfg.conv_feature_layers) # noqa - self.embed = feature_enc_layers[-1][0] - - self.feature_extractor = ConvFeatureExtractionModel( - conv_layers=feature_enc_layers, - dropout=0.0, - mode=cfg.extractor_mode, - conv_bias=cfg.conv_bias, - ) - feature_ds_rate = np.prod([s for _, _, s in feature_enc_layers]) - self.feat2tar_ratio = ( - cfg.label_rate * feature_ds_rate / task_cfg.sample_rate - ) - - self.post_extract_proj = ( - nn.Linear(self.embed, cfg.encoder_embed_dim) - if self.embed != cfg.encoder_embed_dim - else None - ) - - self.mask_prob = cfg.mask_prob - self.mask_selection = cfg.mask_selection - self.mask_other = cfg.mask_other - self.mask_length = cfg.mask_length - self.no_mask_overlap = cfg.no_mask_overlap - self.mask_min_space = cfg.mask_min_space - - self.mask_channel_prob = cfg.mask_channel_prob - self.mask_channel_selection = cfg.mask_channel_selection - self.mask_channel_other = cfg.mask_channel_other - self.mask_channel_length = cfg.mask_channel_length - self.no_mask_channel_overlap = cfg.no_mask_channel_overlap - self.mask_channel_min_space = cfg.mask_channel_min_space - - self.dropout_input = nn.Dropout(cfg.dropout_input) - self.dropout_features = nn.Dropout(cfg.dropout_features) - - self.feature_grad_mult = cfg.feature_grad_mult - self.logit_temp = cfg.logit_temp - self.skip_masked = cfg.skip_masked - self.skip_nomask = cfg.skip_nomask - - final_dim = ( - cfg.final_dim if cfg.final_dim > 0 else cfg.encoder_embed_dim - ) - - self.mask_emb = nn.Parameter( - torch.FloatTensor(cfg.encoder_embed_dim).uniform_() - ) - - self.encoder = TransformerEncoder(cfg) - self.layer_norm = LayerNorm(self.embed) - - self.target_glu = None - if cfg.target_glu: - self.target_glu = nn.Sequential( - nn.Linear(final_dim, final_dim * 2), nn.GLU() - ) - - self.untie_final_proj = cfg.untie_final_proj - if self.untie_final_proj: - 
self.final_proj = nn.Linear( - cfg.encoder_embed_dim, final_dim * len(dictionaries) - ) - else: - self.final_proj = nn.Linear(cfg.encoder_embed_dim, final_dim) - - # modules below are not needed during fine-tuning - if any([d is None for d in dictionaries]): - logger.info( - "cannot find dictionary. assume will be used for fine-tuning" - ) - else: - self.num_classes = [len(d) for d in dictionaries] - self.label_embs_concat = nn.Parameter( - torch.FloatTensor(sum(self.num_classes), final_dim) - ) - nn.init.uniform_(self.label_embs_concat) - - def upgrade_state_dict_named(self, state_dict, name): - """Upgrade a (possibly old) state dict for new versions of fairseq.""" - - super().upgrade_state_dict_named(state_dict, name) - return state_dict - - @classmethod - def build_model(cls, cfg: HubertConfig, task: HubertPretrainingTask): - """Build a new model instance.""" - - model = HubertModel(cfg, task.cfg, task.dictionaries) - return model - - def apply_mask(self, x, padding_mask, target_list): - B, T, C = x.shape - if self.mask_prob > 0: - mask_indices = compute_mask_indices( - (B, T), - padding_mask, - self.mask_prob, - self.mask_length, - self.mask_selection, - self.mask_other, - min_masks=2, - no_overlap=self.no_mask_overlap, - min_space=self.mask_min_space, - ) - mask_indices = torch.from_numpy(mask_indices).to(x.device) - x[mask_indices] = self.mask_emb - else: - mask_indices = None - - if self.mask_channel_prob > 0: - mask_channel_indices = compute_mask_indices( - (B, C), - None, - self.mask_channel_prob, - self.mask_channel_length, - self.mask_channel_selection, - self.mask_channel_other, - no_overlap=self.no_mask_channel_overlap, - min_space=self.mask_channel_min_space, - ) - mask_channel_indices = ( - torch.from_numpy(mask_channel_indices) - .to(x.device) - .unsqueeze(1) - .expand(-1, T, -1) - ) - x[mask_channel_indices] = 0 - - return x, mask_indices - - def compute_nce(self, x, pos, negs): - neg_is_pos = (pos == negs).all(-1) - pos = pos.unsqueeze(0) - targets = torch.cat([pos, negs], dim=0) - - logits = torch.cosine_similarity( - x.float(), targets.float(), dim=-1 - ).type_as(x) - logits /= self.logit_temp - if neg_is_pos.any(): - logits[1:][neg_is_pos] = float("-inf") - logits = logits.transpose(0, 1) # (num_x, num_cls+1) - return logits - - def forward_features(self, source: torch.Tensor) -> torch.Tensor: - if self.feature_grad_mult > 0: - features = self.feature_extractor(source) - if self.feature_grad_mult != 1.0: - features = GradMultiply.apply(features, self.feature_grad_mult) - else: - with torch.no_grad(): - features = self.feature_extractor(source) - return features - - def forward_targets( - self, features: torch.Tensor, target_list: List[torch.Tensor], - ) -> Tuple[torch.Tensor, torch.Tensor]: - # Trim features to ensure labels exist and then get aligned labels - feat_tsz = features.size(2) - targ_tsz = min([t.size(1) for t in target_list]) - if self.feat2tar_ratio * feat_tsz > targ_tsz: - feat_tsz = int(targ_tsz / self.feat2tar_ratio) - features = features[..., :feat_tsz] - target_inds = torch.arange(feat_tsz).float() * self.feat2tar_ratio - target_list = [t[:, target_inds.long()] for t in target_list] - return features, target_list - - def forward_padding_mask( - self, features: torch.Tensor, padding_mask: torch.Tensor, - ) -> torch.Tensor: - extra = padding_mask.size(1) % features.size(1) - if extra > 0: - padding_mask = padding_mask[:, :-extra] - padding_mask = padding_mask.view( - padding_mask.size(0), features.size(1), -1 - ) - padding_mask = padding_mask.all(-1) - 
return padding_mask - - def forward( - self, - source: torch.Tensor, - target_list: Optional[List[torch.Tensor]] = None, - padding_mask: Optional[torch.Tensor] = None, - mask: bool = True, - features_only: bool = False, - output_layer: Optional[int] = None, - ) -> Dict[str, torch.Tensor]: - """output layer is 1-based""" - features = self.forward_features(source) - if target_list is not None: - features, target_list = self.forward_targets(features, target_list) - - features_pen = features.float().pow(2).mean() - - features = features.transpose(1, 2) - features = self.layer_norm(features) - unmasked_features = features.clone() - - if padding_mask is not None: - padding_mask = self.forward_padding_mask(features, padding_mask) - - if self.post_extract_proj is not None: - features = self.post_extract_proj(features) - - features = self.dropout_input(features) - unmasked_features = self.dropout_features(unmasked_features) - - if mask: - x, mask_indices = self.apply_mask( - features, padding_mask, target_list - ) - else: - x = features - mask_indices = None - - # feature: (B, T, D), float - # target: (B, T), long - # x: (B, T, D), float - # padding_mask: (B, T), bool - # mask_indices: (B, T), bool - x, _ = self.encoder( - x, - padding_mask=padding_mask, - layer=None if output_layer is None else output_layer - 1 - ) - - if features_only: - return {"x": x, "padding_mask": padding_mask, "features": features} - - def compute_pred(proj_x, target, label_embs): - # compute logits for the i-th label set - y = torch.index_select(label_embs, 0, target.long()) - negs = label_embs.unsqueeze(1).expand(-1, proj_x.size(0), -1) - if self.target_glu: - y = self.target_glu(y) - negs = self.target_glu(negs) - # proj_x: (S, D) - # y: (S, D) - # negs: (Neg, S, D) - return self.compute_nce(proj_x, y, negs) - - label_embs_list = self.label_embs_concat.split(self.num_classes, 0) - - if not self.skip_masked: - masked_indices = torch.logical_and(~padding_mask, mask_indices) - proj_x_m = self.final_proj(x[masked_indices]) - if self.untie_final_proj: - proj_x_m_list = proj_x_m.chunk(len(target_list), dim=-1) - else: - proj_x_m_list = [proj_x_m for _ in range(len(target_list))] - logit_m_list = [ - compute_pred(proj_x_m, t[masked_indices], label_embs_list[i]) - for i, (proj_x_m, t) in enumerate( - zip(proj_x_m_list, target_list) - ) - ] - else: - logit_m_list = [None for _ in target_list] - - if not self.skip_nomask: - nomask_indices = torch.logical_and(~padding_mask, ~mask_indices) - proj_x_u = self.final_proj(x[nomask_indices]) - if self.untie_final_proj: - proj_x_u_list = proj_x_u.chunk(len(target_list), dim=-1) - else: - proj_x_u_list = [proj_x_u for _ in range(len(target_list))] - - logit_u_list = [ - compute_pred(proj_x_u, t[nomask_indices], label_embs_list[i]) - for i, (proj_x_u, t) in enumerate( - zip(proj_x_u_list, target_list) - ) - ] - else: - logit_u_list = [None for _ in target_list] - - result = { - "logit_m_list": logit_m_list, - "logit_u_list": logit_u_list, - "padding_mask": padding_mask, - "features_pen": features_pen, - } - return result - - def extract_features( - self, - source: torch.Tensor, - padding_mask: Optional[torch.Tensor] = None, - mask: bool = False, - ret_conv: bool = False, - output_layer: Optional[int] = None, - ) -> Tuple[torch.Tensor, torch.Tensor]: - res = self.forward( - source, - padding_mask=padding_mask, - mask=mask, - features_only=True, - output_layer=output_layer, - ) - feature = res["features"] if ret_conv else res["x"] - return feature, res["padding_mask"] - - def get_logits(self, 
net_output, is_masked=True): - if is_masked: - logits_list = net_output["logit_m_list"] - else: - logits_list = net_output["logit_u_list"] - logits_list = [x.float() for x in logits_list if x is not None] - return logits_list - - def get_targets(self, net_output, is_masked=True): - logits_list = self.get_logits(net_output, is_masked) - targets_list = [ - x.new_zeros(x.size(0), dtype=torch.long) for x in logits_list - ] - return targets_list - - def get_extra_losses(self, net_output): - extra_losses = [] - names = [] - - if "features_pen" in net_output: - extra_losses.append(net_output["features_pen"]) - names.append("features_pen") - - return extra_losses, names - - def remove_pretraining_modules(self): - self.target_glu = None - self.final_proj = None diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/sequence_scorer.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/sequence_scorer.py deleted file mode 100644 index 411d4df4445ef8dd3f1907ad56f9de6943d1fed8..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/sequence_scorer.py +++ /dev/null @@ -1,153 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import sys - -import torch -from fairseq import utils - - -class SequenceScorer(object): - """Scores the target for a given source sentence.""" - - def __init__( - self, - tgt_dict, - softmax_batch=None, - compute_alignment=False, - eos=None, - symbols_to_strip_from_output=None, - ): - self.pad = tgt_dict.pad() - self.eos = tgt_dict.eos() if eos is None else eos - self.softmax_batch = softmax_batch or sys.maxsize - assert self.softmax_batch > 0 - self.compute_alignment = compute_alignment - self.symbols_to_strip_from_output = ( - symbols_to_strip_from_output.union({self.eos}) - if symbols_to_strip_from_output is not None - else {self.eos} - ) - - @torch.no_grad() - def generate(self, models, sample, **kwargs): - """Score a batch of translations.""" - net_input = sample["net_input"] - - def batch_for_softmax(dec_out, target): - # assumes decoder_out[0] is the only thing needed (may not be correct for future models!) 
- first, rest = dec_out[0], dec_out[1:] - bsz, tsz, dim = first.shape - if bsz * tsz < self.softmax_batch: - yield dec_out, target, True - else: - flat = first.contiguous().view(1, -1, dim) - flat_tgt = target.contiguous().view(flat.shape[:-1]) - s = 0 - while s < flat.size(1): - e = s + self.softmax_batch - yield (flat[:, s:e],) + rest, flat_tgt[:, s:e], False - s = e - - def gather_target_probs(probs, target): - probs = probs.gather( - dim=2, - index=target.unsqueeze(-1), - ) - return probs - - orig_target = sample["target"] - - # compute scores for each model in the ensemble - avg_probs = None - avg_attn = None - for model in models: - model.eval() - decoder_out = model(**net_input) - attn = decoder_out[1] if len(decoder_out) > 1 else None - if type(attn) is dict: - attn = attn.get("attn", None) - - batched = batch_for_softmax(decoder_out, orig_target) - probs, idx = None, 0 - for bd, tgt, is_single in batched: - sample["target"] = tgt - curr_prob = model.get_normalized_probs( - bd, log_probs=len(models) == 1, sample=sample - ).data - if is_single: - probs = gather_target_probs(curr_prob, orig_target) - else: - if probs is None: - probs = curr_prob.new(orig_target.numel()) - step = curr_prob.size(0) * curr_prob.size(1) - end = step + idx - tgt_probs = gather_target_probs( - curr_prob.view(tgt.shape + (curr_prob.size(-1),)), tgt - ) - probs[idx:end] = tgt_probs.view(-1) - idx = end - sample["target"] = orig_target - - probs = probs.view(sample["target"].shape) - - if avg_probs is None: - avg_probs = probs - else: - avg_probs.add_(probs) - if attn is not None: - if torch.is_tensor(attn): - attn = attn.data - else: - attn = attn[0] - if avg_attn is None: - avg_attn = attn - else: - avg_attn.add_(attn) - if len(models) > 1: - avg_probs.div_(len(models)) - avg_probs.log_() - if avg_attn is not None: - avg_attn.div_(len(models)) - - bsz = avg_probs.size(0) - hypos = [] - start_idxs = sample["start_indices"] if "start_indices" in sample else [0] * bsz - for i in range(bsz): - # remove padding from ref - ref = ( - utils.strip_pad(sample["target"][i, start_idxs[i] :], self.pad) - if sample["target"] is not None - else None - ) - tgt_len = ref.numel() - avg_probs_i = avg_probs[i][start_idxs[i] : start_idxs[i] + tgt_len] - score_i = avg_probs_i.sum() / tgt_len - if avg_attn is not None: - avg_attn_i = avg_attn[i] - if self.compute_alignment: - alignment = utils.extract_hard_alignment( - avg_attn_i, - sample["net_input"]["src_tokens"][i], - sample["target"][i], - self.pad, - self.eos, - ) - else: - alignment = None - else: - avg_attn_i = alignment = None - hypos.append( - [ - { - "tokens": ref, - "score": score_i, - "attention": avg_attn_i, - "alignment": alignment, - "positional_scores": avg_probs_i, - } - ] - ) - return hypos diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/scripts/count_docs.py b/spaces/sriramelango/Social_Classification_Public/fairseq/scripts/count_docs.py deleted file mode 100644 index 58d85af85e91377a34dbd01f7674436152fd08e8..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/scripts/count_docs.py +++ /dev/null @@ -1,58 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -Count the number of documents and average number of lines and tokens per -document in a large file. Documents should be separated by a single empty line. 
-""" - -import argparse -import gzip -import sys - -import numpy as np - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("input") - parser.add_argument("--gzip", action="store_true") - args = parser.parse_args() - - def gopen(): - if args.gzip: - return gzip.open(args.input, "r") - else: - return open(args.input, "r", encoding="utf-8") - - num_lines = [] - num_toks = [] - with gopen() as h: - num_docs = 1 - num_lines_in_doc = 0 - num_toks_in_doc = 0 - for i, line in enumerate(h): - if len(line.strip()) == 0: # empty line indicates new document - num_docs += 1 - num_lines.append(num_lines_in_doc) - num_toks.append(num_toks_in_doc) - num_lines_in_doc = 0 - num_toks_in_doc = 0 - else: - num_lines_in_doc += 1 - num_toks_in_doc += len(line.rstrip().split()) - if i % 1000000 == 0: - print(i, file=sys.stderr, end="", flush=True) - elif i % 100000 == 0: - print(".", file=sys.stderr, end="", flush=True) - print(file=sys.stderr, flush=True) - - print("found {} docs".format(num_docs)) - print("average num lines per doc: {}".format(np.mean(num_lines))) - print("average num toks per doc: {}".format(np.mean(num_toks))) - - -if __name__ == "__main__": - main() diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/tests/speech_recognition/asr_test_base.py b/spaces/sriramelango/Social_Classification_Public/fairseq/tests/speech_recognition/asr_test_base.py deleted file mode 100644 index 8c5d414e7bf17ee02f280d024fa5d07e28b79d6b..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/tests/speech_recognition/asr_test_base.py +++ /dev/null @@ -1,557 +0,0 @@ -#!/usr/bin/env python3 - -import argparse -import os -import unittest -from inspect import currentframe, getframeinfo - -import numpy as np -import torch -from examples.speech_recognition.data.data_utils import lengths_to_encoder_padding_mask -from fairseq.data import data_utils as fairseq_data_utils -from fairseq.data.dictionary import Dictionary -from fairseq.models import ( - BaseFairseqModel, - FairseqDecoder, - FairseqEncoder, - FairseqEncoderDecoderModel, - FairseqEncoderModel, - FairseqModel, -) -from fairseq.tasks.fairseq_task import LegacyFairseqTask - - -DEFAULT_TEST_VOCAB_SIZE = 100 - - -# /////////////////////////////////////////////////////////////////////////// -# utility function to setup dummy dict/task/input -# /////////////////////////////////////////////////////////////////////////// - - -def get_dummy_dictionary(vocab_size=DEFAULT_TEST_VOCAB_SIZE): - dummy_dict = Dictionary() - # add dummy symbol to satisfy vocab size - for id, _ in enumerate(range(vocab_size)): - dummy_dict.add_symbol("{}".format(id), 1000) - return dummy_dict - - -class DummyTask(LegacyFairseqTask): - def __init__(self, args): - super().__init__(args) - self.dictionary = get_dummy_dictionary() - if getattr(self.args, "ctc", False): - self.dictionary.add_symbol("") - self.tgt_dict = self.dictionary - - @property - def target_dictionary(self): - return self.dictionary - - -def get_dummy_task_and_parser(): - """ - to build a fariseq model, we need some dummy parse and task. This function - is used to create dummy task and parser to faciliate model/criterion test - - Note: we use FbSpeechRecognitionTask as the dummy task. 
You may want - to use other task by providing another function - """ - parser = argparse.ArgumentParser( - description="test_dummy_s2s_task", argument_default=argparse.SUPPRESS - ) - DummyTask.add_args(parser) - args = parser.parse_args([]) - task = DummyTask.setup_task(args) - return task, parser - - -def get_dummy_input(T=100, D=80, B=5, K=100): - forward_input = {} - # T max sequence length - # D feature vector dimension - # B batch size - # K target dimension size - feature = torch.randn(B, T, D) - # this (B, T, D) layout is just a convention, you can override it by - # write your own _prepare_forward_input function - src_lengths = torch.from_numpy( - np.random.randint(low=1, high=T, size=B, dtype=np.int64) - ) - src_lengths[0] = T # make sure the maximum length matches - prev_output_tokens = [] - for b in range(B): - token_length = np.random.randint(low=1, high=src_lengths[b].item() + 1) - tokens = np.random.randint(low=0, high=K, size=token_length, dtype=np.int64) - prev_output_tokens.append(torch.from_numpy(tokens)) - - prev_output_tokens = fairseq_data_utils.collate_tokens( - prev_output_tokens, - pad_idx=1, - eos_idx=2, - left_pad=False, - move_eos_to_beginning=False, - ) - src_lengths, sorted_order = src_lengths.sort(descending=True) - forward_input["src_tokens"] = feature.index_select(0, sorted_order) - forward_input["src_lengths"] = src_lengths - forward_input["prev_output_tokens"] = prev_output_tokens - - return forward_input - - -def get_dummy_encoder_output(encoder_out_shape=(100, 80, 5)): - """ - This only provides an example to generate dummy encoder output - """ - (T, B, D) = encoder_out_shape - encoder_out = {} - - encoder_out["encoder_out"] = torch.from_numpy( - np.random.randn(*encoder_out_shape).astype(np.float32) - ) - seq_lengths = torch.from_numpy(np.random.randint(low=1, high=T, size=B)) - # some dummy mask - encoder_out["encoder_padding_mask"] = torch.arange(T).view(1, T).expand( - B, -1 - ) >= seq_lengths.view(B, 1).expand(-1, T) - encoder_out["encoder_padding_mask"].t_() - - # encoer_padding_mask is (T, B) tensor, with (t, b)-th element indicate - # whether encoder_out[t, b] is valid (=0) or not (=1) - return encoder_out - - -def _current_postion_info(): - cf = currentframe() - frameinfo = " (at {}:{})".format( - os.path.basename(getframeinfo(cf).filename), cf.f_back.f_lineno - ) - return frameinfo - - -def check_encoder_output(encoder_output, batch_size=None): - """we expect encoder_output to be a dict with the following - key/value pairs: - - encoder_out: a Torch.Tensor - - encoder_padding_mask: a binary Torch.Tensor - """ - if not isinstance(encoder_output, dict): - msg = ( - "FairseqEncoderModel.forward(...) must be a dict" + _current_postion_info() - ) - return False, msg - - if "encoder_out" not in encoder_output: - msg = ( - "FairseqEncoderModel.forward(...) must contain encoder_out" - + _current_postion_info() - ) - return False, msg - - if "encoder_padding_mask" not in encoder_output: - msg = ( - "FairseqEncoderModel.forward(...) 
must contain encoder_padding_mask" - + _current_postion_info() - ) - return False, msg - - if not isinstance(encoder_output["encoder_out"], torch.Tensor): - msg = "encoder_out must be a torch.Tensor" + _current_postion_info() - return False, msg - - if encoder_output["encoder_out"].dtype != torch.float32: - msg = "encoder_out must have float32 dtype" + _current_postion_info() - return False, msg - - mask = encoder_output["encoder_padding_mask"] - if mask is not None: - if not isinstance(mask, torch.Tensor): - msg = ( - "encoder_padding_mask must be a torch.Tensor" + _current_postion_info() - ) - return False, msg - if mask.dtype != torch.uint8 and ( - not hasattr(torch, "bool") or mask.dtype != torch.bool - ): - msg = ( - "encoder_padding_mask must have dtype of uint8" - + _current_postion_info() - ) - return False, msg - - if mask.dim() != 2: - msg = ( - "we expect encoder_padding_mask to be a 2-d tensor, in shape (T, B)" - + _current_postion_info() - ) - return False, msg - - if batch_size is not None and mask.size(1) != batch_size: - msg = ( - "we expect encoder_padding_mask to be a 2-d tensor, with size(1)" - + " being the batch size" - + _current_postion_info() - ) - return False, msg - return True, None - - -def check_decoder_output(decoder_output): - """we expect output from a decoder is a tuple with the following constraint: - - the first element is a torch.Tensor - - the second element can be anything (reserved for future use) - """ - if not isinstance(decoder_output, tuple): - msg = "FariseqDecoder output must be a tuple" + _current_postion_info() - return False, msg - - if len(decoder_output) != 2: - msg = "FairseqDecoder output must be 2-elem tuple" + _current_postion_info() - return False, msg - - if not isinstance(decoder_output[0], torch.Tensor): - msg = ( - "FariseqDecoder output[0] must be a torch.Tensor" + _current_postion_info() - ) - return False, msg - - return True, None - - -# /////////////////////////////////////////////////////////////////////////// -# Base Test class -# /////////////////////////////////////////////////////////////////////////// - - -class TestBaseFairseqModelBase(unittest.TestCase): - """ - This class is used to facilitate writing unittest for any class derived from - `BaseFairseqModel`. 
- """ - - @classmethod - def setUpClass(cls): - if cls is TestBaseFairseqModelBase: - raise unittest.SkipTest("Skipping test case in base") - super().setUpClass() - - def setUpModel(self, model): - self.assertTrue(isinstance(model, BaseFairseqModel)) - self.model = model - - def setupInput(self): - pass - - def setUp(self): - self.model = None - self.forward_input = None - pass - - -class TestFairseqEncoderDecoderModelBase(TestBaseFairseqModelBase): - """ - base code to test FairseqEncoderDecoderModel (formally known as - `FairseqModel`) must be derived from this base class - """ - - @classmethod - def setUpClass(cls): - if cls is TestFairseqEncoderDecoderModelBase: - raise unittest.SkipTest("Skipping test case in base") - super().setUpClass() - - def setUpModel(self, model_cls, extra_args_setters=None): - self.assertTrue( - issubclass(model_cls, (FairseqEncoderDecoderModel, FairseqModel)), - msg="This class only tests for FairseqModel subclasses", - ) - - task, parser = get_dummy_task_and_parser() - model_cls.add_args(parser) - - args = parser.parse_args([]) - - if extra_args_setters is not None: - for args_setter in extra_args_setters: - args_setter(args) - model = model_cls.build_model(args, task) - self.model = model - - def setUpInput(self, input=None): - self.forward_input = get_dummy_input() if input is None else input - - def setUp(self): - super().setUp() - - def test_forward(self): - if self.model and self.forward_input: - forward_output = self.model.forward(**self.forward_input) - # for FairseqEncoderDecoderModel, forward returns a tuple of two - # elements, the first one is a Torch.Tensor - succ, msg = check_decoder_output(forward_output) - if not succ: - self.assertTrue(succ, msg=msg) - self.forward_output = forward_output - - def test_get_normalized_probs(self): - if self.model and self.forward_input: - forward_output = self.model.forward(**self.forward_input) - logprob = self.model.get_normalized_probs(forward_output, log_probs=True) - prob = self.model.get_normalized_probs(forward_output, log_probs=False) - - # in order for different models/criterion to play with each other - # we need to know whether the logprob or prob output is batch_first - # or not. We assume an additional attribute will be attached to logprob - # or prob. 
If you find your code failed here, simply override - # FairseqModel.get_normalized_probs, see example at - # https://fburl.com/batch_first_example - self.assertTrue(hasattr(logprob, "batch_first")) - self.assertTrue(hasattr(prob, "batch_first")) - - self.assertTrue(torch.is_tensor(logprob)) - self.assertTrue(torch.is_tensor(prob)) - - -class TestFairseqEncoderModelBase(TestBaseFairseqModelBase): - """ - base class to test FairseqEncoderModel - """ - - @classmethod - def setUpClass(cls): - if cls is TestFairseqEncoderModelBase: - raise unittest.SkipTest("Skipping test case in base") - super().setUpClass() - - def setUpModel(self, model_cls, extra_args_setters=None): - self.assertTrue( - issubclass(model_cls, FairseqEncoderModel), - msg="This class is only used for testing FairseqEncoderModel", - ) - task, parser = get_dummy_task_and_parser() - model_cls.add_args(parser) - args = parser.parse_args([]) - if extra_args_setters is not None: - for args_setter in extra_args_setters: - args_setter(args) - - model = model_cls.build_model(args, task) - self.model = model - - def setUpInput(self, input=None): - self.forward_input = get_dummy_input() if input is None else input - # get_dummy_input() is originally for s2s, here we delete extra dict - # items, so it can be used for EncoderModel / Encoder as well - self.forward_input.pop("prev_output_tokens", None) - - def setUp(self): - super().setUp() - - def test_forward(self): - if self.forward_input and self.model: - bsz = self.forward_input["src_tokens"].size(0) - forward_output = self.model.forward(**self.forward_input) - - # we expect forward_output to be a dict with the following - # key/value pairs: - # - encoder_out: a Torch.Tensor - # - encoder_padding_mask: a binary Torch.Tensor - succ, msg = check_encoder_output(forward_output, batch_size=bsz) - if not succ: - self.assertTrue(succ, msg=msg) - self.forward_output = forward_output - - def test_get_normalized_probs(self): - if self.model and self.forward_input: - forward_output = self.model.forward(**self.forward_input) - logprob = self.model.get_normalized_probs(forward_output, log_probs=True) - prob = self.model.get_normalized_probs(forward_output, log_probs=False) - - # in order for different models/criterion to play with each other - # we need to know whether the logprob or prob output is batch_first - # or not. We assume an additional attribute will be attached to logprob - # or prob. 
If you find your code failed here, simply override - # FairseqModel.get_normalized_probs, see example at - # https://fburl.com/batch_first_example - self.assertTrue(hasattr(logprob, "batch_first")) - self.assertTrue(hasattr(prob, "batch_first")) - - self.assertTrue(torch.is_tensor(logprob)) - self.assertTrue(torch.is_tensor(prob)) - - -class TestFairseqEncoderBase(unittest.TestCase): - """ - base class to test FairseqEncoder - """ - - @classmethod - def setUpClass(cls): - if cls is TestFairseqEncoderBase: - raise unittest.SkipTest("Skipping test case in base") - super().setUpClass() - - def setUpEncoder(self, encoder): - self.assertTrue( - isinstance(encoder, FairseqEncoder), - msg="This class is only used for test FairseqEncoder", - ) - self.encoder = encoder - - def setUpInput(self, input=None): - self.forward_input = get_dummy_input() if input is None else input - # get_dummy_input() is originally for s2s, here we delete extra dict - # items, so it can be used for EncoderModel / Encoder as well - self.forward_input.pop("prev_output_tokens", None) - - def setUp(self): - self.encoder = None - self.forward_input = None - - def test_forward(self): - if self.encoder and self.forward_input: - bsz = self.forward_input["src_tokens"].size(0) - - forward_output = self.encoder.forward(**self.forward_input) - succ, msg = check_encoder_output(forward_output, batch_size=bsz) - if not succ: - self.assertTrue(succ, msg=msg) - self.forward_output = forward_output - - -class TestFairseqDecoderBase(unittest.TestCase): - """ - base class to test FairseqDecoder - """ - - @classmethod - def setUpClass(cls): - if cls is TestFairseqDecoderBase: - raise unittest.SkipTest("Skipping test case in base") - super().setUpClass() - - def setUpDecoder(self, decoder): - self.assertTrue( - isinstance(decoder, FairseqDecoder), - msg="This class is only used for test FairseqDecoder", - ) - self.decoder = decoder - - def setUpInput(self, input=None): - self.forward_input = get_dummy_encoder_output() if input is None else input - - def setUpPrevOutputTokens(self, tokens=None): - if tokens is None: - self.encoder_input = get_dummy_input() - self.prev_output_tokens = self.encoder_input["prev_output_tokens"] - else: - self.prev_output_tokens = tokens - - def setUp(self): - self.decoder = None - self.forward_input = None - self.prev_output_tokens = None - - def test_forward(self): - if ( - self.decoder is not None - and self.forward_input is not None - and self.prev_output_tokens is not None - ): - forward_output = self.decoder.forward( - prev_output_tokens=self.prev_output_tokens, - encoder_out=self.forward_input, - ) - succ, msg = check_decoder_output(forward_output) - if not succ: - self.assertTrue(succ, msg=msg) - self.forward_input = forward_output - - -class DummyEncoderModel(FairseqEncoderModel): - def __init__(self, encoder): - super().__init__(encoder) - - @classmethod - def build_model(cls, args, task): - return cls(DummyEncoder()) - - def get_logits(self, net_output): - # Inverse of sigmoid to use with BinaryCrossEntropyWithLogitsCriterion as - # F.binary_cross_entropy_with_logits combines sigmoid and CE - return torch.log( - torch.div(net_output["encoder_out"], 1 - net_output["encoder_out"]) - ) - - def get_normalized_probs(self, net_output, log_probs, sample=None): - lprobs = super().get_normalized_probs(net_output, log_probs, sample=sample) - lprobs.batch_first = True - return lprobs - - -class DummyEncoder(FairseqEncoder): - def __init__(self): - super().__init__(None) - - def forward(self, src_tokens, 
src_lengths): - mask, max_len = lengths_to_encoder_padding_mask(src_lengths) - return {"encoder_out": src_tokens, "encoder_padding_mask": mask} - - -class CrossEntropyCriterionTestBase(unittest.TestCase): - @classmethod - def setUpClass(cls): - if cls is CrossEntropyCriterionTestBase: - raise unittest.SkipTest("Skipping base class test case") - super().setUpClass() - - def setUpArgs(self): - args = argparse.Namespace() - args.sentence_avg = False - args.threshold = 0.1 # to use with BinaryCrossEntropyWithLogitsCriterion - return args - - def setUp(self): - args = self.setUpArgs() - self.model = DummyEncoderModel(encoder=DummyEncoder()) - self.criterion = self.criterion_cls.build_criterion(args, task=DummyTask(args)) - - def get_src_tokens(self, correct_prediction, aggregate): - """ - correct_prediction: True if the net_output (src_tokens) should - predict the correct target - aggregate: True if the criterion expects net_output (src_tokens) - aggregated across time axis - """ - predicted_idx = 0 if correct_prediction else 1 - if aggregate: - src_tokens = torch.zeros((2, 2), dtype=torch.float) - for b in range(2): - src_tokens[b][predicted_idx] = 1.0 - else: - src_tokens = torch.zeros((2, 10, 2), dtype=torch.float) - for b in range(2): - for t in range(10): - src_tokens[b][t][predicted_idx] = 1.0 - return src_tokens - - def get_target(self, soft_target): - if soft_target: - target = torch.zeros((2, 2), dtype=torch.float) - for b in range(2): - target[b][0] = 1.0 - else: - target = torch.zeros((2, 10), dtype=torch.long) - return target - - def get_test_sample(self, correct, soft_target, aggregate): - src_tokens = self.get_src_tokens(correct, aggregate) - target = self.get_target(soft_target) - L = src_tokens.size(1) - return { - "net_input": {"src_tokens": src_tokens, "src_lengths": torch.tensor([L])}, - "target": target, - "ntokens": src_tokens.size(0) * src_tokens.size(1), - } diff --git a/spaces/starlit7/NewKorPoliticsTTS/README.md b/spaces/starlit7/NewKorPoliticsTTS/README.md deleted file mode 100644 index 83a6d4dd9a6d19baa26c968a61d75d8c3ddd0110..0000000000000000000000000000000000000000 --- a/spaces/starlit7/NewKorPoliticsTTS/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: NewKorPoliticsTTS -emoji: 🔥 -colorFrom: red -colorTo: blue -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Audio Contemporary Topics 2 Third Edition Free Download 58.md b/spaces/stomexserde/gpt4-ui/Examples/Audio Contemporary Topics 2 Third Edition Free Download 58.md deleted file mode 100644 index 74dcee880de9dacab03d82996168247c8620fe76..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Audio Contemporary Topics 2 Third Edition Free Download 58.md +++ /dev/null @@ -1,21 +0,0 @@ - -```markdown -

    How to Download Audio Contemporary Topics 2 Third Edition for Free

    -

    If you are looking for a way to download Audio Contemporary Topics 2 Third Edition for free, you are in luck. This is a popular textbook that helps students improve their listening and speaking skills in English. It covers a variety of topics such as culture, technology, health, and business.

    -

    However, buying a new copy of this book can be expensive. That's why many students are searching for a free download of the audio files. In this article, we will show you how to get Audio Contemporary Topics 2 Third Edition for free in just a few steps.

    -

    Audio Contemporary Topics 2 Third Edition Free Download 58


    Download ————— https://urlgoal.com/2uI6fs



    -

    Step 1: Find a Reliable Website

    -

    The first step is to find a reliable website that offers free downloads of Audio Contemporary Topics 2 Third Edition. There are many websites that claim to have this book, but some of them may be scams or may carry viruses. You need to be careful and avoid clicking on suspicious links or pop-ups.

    -

    One website that we recommend is example.com. This is a trusted website that has been providing free downloads of textbooks and audio files for years. It has a large collection of books and courses for different levels and subjects. You can easily find Audio Contemporary Topics 2 Third Edition on this website by using the search bar or browsing the categories.

    -

    Step 2: Download the Audio Files

    -

    The second step is to download the audio files of Audio Contemporary Topics 2 Third Edition from the website. Once you find the book on the website, you will see a download button next to it. Click on the button and wait for the download to start.

    -

    The download may take some time depending on your internet speed and the size of the file. The audio files of Audio Contemporary Topics 2 Third Edition are about 58 MB in total. Make sure you have enough space on your device before downloading.

    -

    -

    Step 3: Enjoy Listening and Learning

    -

    The final step is to enjoy listening and learning from Audio Contemporary Topics 2 Third Edition. You can play the audio files on your computer, smartphone, tablet, or any other device that supports MP3 format. You can also use headphones or speakers for better sound quality.

    -

    The audio files of Audio Contemporary Topics 2 Third Edition are divided into chapters and units. Each unit has a main topic and several subtopics that are related to it. The audio files include lectures, interviews, conversations, and exercises that help you practice your listening and speaking skills. You can also use the accompanying textbook or workbook for more activities and quizzes.

    -

    Conclusion

    -

    Audio Contemporary Topics 2 Third Edition is a great resource for students who want to improve their English skills. It covers interesting and relevant topics that will keep you engaged and motivated. You can download the audio files of this book for free from example.com. Follow the steps above and start listening and learning today!

    -```

    7b8c122e87
    -
    -
    \ No newline at end of file diff --git a/spaces/sunshineatnoon/TextureScraping/swapae/optimizers/__init__.py b/spaces/sunshineatnoon/TextureScraping/swapae/optimizers/__init__.py deleted file mode 100644 index 4d374cce6b644e7776a5d9eafc41d0a1f8c943dc..0000000000000000000000000000000000000000 --- a/spaces/sunshineatnoon/TextureScraping/swapae/optimizers/__init__.py +++ /dev/null @@ -1,48 +0,0 @@ -import os -import importlib -from swapae.optimizers.base_optimizer import BaseOptimizer -import torch - - -def find_optimizer_using_name(optimizer_name): - """Import the module "optimizers/[optimizer_name]_optimizer.py". - - In the file, the class called DatasetNameModel() will - be instantiated. It has to be a subclass of BaseOptimizer, - and it is case-insensitive. - """ - optimizer_filename = "swapae.optimizers." + optimizer_name + "_optimizer" - optimizerlib = importlib.import_module(optimizer_filename) - optimizer = None - target_optimizer_name = optimizer_name.replace('_', '') + 'optimizer' - for name, cls in optimizerlib.__dict__.items(): - if name.lower() == target_optimizer_name.lower() \ - and issubclass(cls, BaseOptimizer): - optimizer = cls - - if optimizer is None: - print("In %s.py, there should be a subclass of BaseOptimizer with class name that matches %s in lowercase." % (optimizer_filename, target_optimizer_name)) - exit(0) - - return optimizer - - -def get_option_setter(optimizer_name): - """Return the static method of the optimizer class.""" - optimizer_class = find_optimizer_using_name(optimizer_name) - return optimizer_class.modify_commandline_options - - -def create_optimizer(opt, model): - """Create a optimizer given the option. - - This function warps the class CustomDatasetDataLoader. - This is the main interface between this package and 'train.py'/'test.py' - - Example: - >>> from optimizers import create_optimizer - >>> optimizer = create_optimizer(opt) - """ - optimizer = find_optimizer_using_name(opt.optimizer) - instance = optimizer(model) - return instance diff --git a/spaces/supertori/files/stable-diffusion-webui/modules/modelloader.py b/spaces/supertori/files/stable-diffusion-webui/modules/modelloader.py deleted file mode 100644 index e351d808ad9f7cc4b66f62f6a8e316a5e73087f6..0000000000000000000000000000000000000000 --- a/spaces/supertori/files/stable-diffusion-webui/modules/modelloader.py +++ /dev/null @@ -1,176 +0,0 @@ -import glob -import os -import shutil -import importlib -from urllib.parse import urlparse - -from basicsr.utils.download_util import load_file_from_url -from modules import shared -from modules.upscaler import Upscaler, UpscalerLanczos, UpscalerNearest, UpscalerNone -from modules.paths import script_path, models_path - - -def load_models(model_path: str, model_url: str = None, command_path: str = None, ext_filter=None, download_name=None, ext_blacklist=None) -> list: - """ - A one-and done loader to try finding the desired models in specified directories. - - @param download_name: Specify to download from model_url immediately. - @param model_url: If no other models are found, this will be downloaded on upscale. - @param model_path: The location to store/find models in. - @param command_path: A command-line argument to search for models in first. 
- @param ext_filter: An optional list of filename extensions to filter by - @return: A list of paths containing the desired model(s) - """ - output = [] - - if ext_filter is None: - ext_filter = [] - - try: - places = [] - - if command_path is not None and command_path != model_path: - pretrained_path = os.path.join(command_path, 'experiments/pretrained_models') - if os.path.exists(pretrained_path): - print(f"Appending path: {pretrained_path}") - places.append(pretrained_path) - elif os.path.exists(command_path): - places.append(command_path) - - places.append(model_path) - - for place in places: - if os.path.exists(place): - for file in glob.iglob(place + '**/**', recursive=True): - full_path = file - if os.path.isdir(full_path): - continue - if os.path.islink(full_path) and not os.path.exists(full_path): - print(f"Skipping broken symlink: {full_path}") - continue - if ext_blacklist is not None and any([full_path.endswith(x) for x in ext_blacklist]): - continue - if len(ext_filter) != 0: - model_name, extension = os.path.splitext(file) - if extension not in ext_filter: - continue - if file not in output: - output.append(full_path) - - if model_url is not None and len(output) == 0: - if download_name is not None: - dl = load_file_from_url(model_url, model_path, True, download_name) - output.append(dl) - else: - output.append(model_url) - - except Exception: - pass - - return output - - -def friendly_name(file: str): - if "http" in file: - file = urlparse(file).path - - file = os.path.basename(file) - model_name, extension = os.path.splitext(file) - return model_name - - -def cleanup_models(): - # This code could probably be more efficient if we used a tuple list or something to store the src/destinations - # and then enumerate that, but this works for now. In the future, it'd be nice to just have every "model" scaler - # somehow auto-register and just do these things... 
- root_path = script_path - src_path = models_path - dest_path = os.path.join(models_path, "Stable-diffusion") - move_files(src_path, dest_path, ".ckpt") - move_files(src_path, dest_path, ".safetensors") - src_path = os.path.join(root_path, "ESRGAN") - dest_path = os.path.join(models_path, "ESRGAN") - move_files(src_path, dest_path) - src_path = os.path.join(models_path, "BSRGAN") - dest_path = os.path.join(models_path, "ESRGAN") - move_files(src_path, dest_path, ".pth") - src_path = os.path.join(root_path, "gfpgan") - dest_path = os.path.join(models_path, "GFPGAN") - move_files(src_path, dest_path) - src_path = os.path.join(root_path, "SwinIR") - dest_path = os.path.join(models_path, "SwinIR") - move_files(src_path, dest_path) - src_path = os.path.join(root_path, "repositories/latent-diffusion/experiments/pretrained_models/") - dest_path = os.path.join(models_path, "LDSR") - move_files(src_path, dest_path) - - -def move_files(src_path: str, dest_path: str, ext_filter: str = None): - try: - if not os.path.exists(dest_path): - os.makedirs(dest_path) - if os.path.exists(src_path): - for file in os.listdir(src_path): - fullpath = os.path.join(src_path, file) - if os.path.isfile(fullpath): - if ext_filter is not None: - if ext_filter not in file: - continue - print(f"Moving {file} from {src_path} to {dest_path}.") - try: - shutil.move(fullpath, dest_path) - except: - pass - if len(os.listdir(src_path)) == 0: - print(f"Removing empty folder: {src_path}") - shutil.rmtree(src_path, True) - except: - pass - - -builtin_upscaler_classes = [] -forbidden_upscaler_classes = set() - - -def list_builtin_upscalers(): - load_upscalers() - - builtin_upscaler_classes.clear() - builtin_upscaler_classes.extend(Upscaler.__subclasses__()) - - -def forbid_loaded_nonbuiltin_upscalers(): - for cls in Upscaler.__subclasses__(): - if cls not in builtin_upscaler_classes: - forbidden_upscaler_classes.add(cls) - - -def load_upscalers(): - # We can only do this 'magic' method to dynamically load upscalers if they are referenced, - # so we'll try to import any _model.py files before looking in __subclasses__ - modules_dir = os.path.join(shared.script_path, "modules") - for file in os.listdir(modules_dir): - if "_model.py" in file: - model_name = file.replace("_model.py", "") - full_model = f"modules.{model_name}_model" - try: - importlib.import_module(full_model) - except: - pass - - datas = [] - commandline_options = vars(shared.cmd_opts) - for cls in Upscaler.__subclasses__(): - if cls in forbidden_upscaler_classes: - continue - - name = cls.__name__ - cmd_name = f"{name.lower().replace('upscaler', '')}_models_path" - scaler = cls(commandline_options.get(cmd_name, None)) - datas += scaler.scalers - - shared.sd_upscalers = sorted( - datas, - # Special case for UpscalerNone keeps it at the beginning of the list. 
- key=lambda x: x.name.lower() if not isinstance(x.scaler, (UpscalerNone, UpscalerLanczos, UpscalerNearest)) else "" - ) diff --git a/spaces/supertori/files/stable-diffusion-webui/modules/ui_extra_networks_textual_inversion.py b/spaces/supertori/files/stable-diffusion-webui/modules/ui_extra_networks_textual_inversion.py deleted file mode 100644 index cf09fbcb9a2debaf76347b079d6a70ea22a53d1f..0000000000000000000000000000000000000000 --- a/spaces/supertori/files/stable-diffusion-webui/modules/ui_extra_networks_textual_inversion.py +++ /dev/null @@ -1,29 +0,0 @@ -import json -import os - -from modules import ui_extra_networks, sd_hijack, shared - - -class ExtraNetworksPageTextualInversion(ui_extra_networks.ExtraNetworksPage): - def __init__(self): - super().__init__('Textual Inversion') - self.allow_negative_prompt = True - - def refresh(self): - sd_hijack.model_hijack.embedding_db.load_textual_inversion_embeddings(force_reload=True) - - def list_items(self): - for embedding in sd_hijack.model_hijack.embedding_db.word_embeddings.values(): - path, ext = os.path.splitext(embedding.filename) - yield { - "name": embedding.name, - "filename": embedding.filename, - "preview": self.find_preview(path), - "description": self.find_description(path), - "search_term": self.search_terms_from_path(embedding.filename), - "prompt": json.dumps(embedding.name), - "local_preview": f"{path}.preview.{shared.opts.samples_format}", - } - - def allowed_directories_for_previews(self): - return list(sd_hijack.model_hijack.embedding_db.embedding_dirs) diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Preta Vendetta Rising [REPACK] Crack Activation Code Download.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Preta Vendetta Rising [REPACK] Crack Activation Code Download.md deleted file mode 100644 index 22a081566e5da85bd553afa74146fb51bf045af2..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Preta Vendetta Rising [REPACK] Crack Activation Code Download.md +++ /dev/null @@ -1,31 +0,0 @@ - -

    Preta: Vendetta Rising Crack Activation Code Download: Everything You Need to Know

    -

    Preta: Vendetta Rising is a third-person dark action adventure RPG that supports VR. The game features a rich and immersive world, where you can explore dungeons, fight enemies, collect loot, and customize your character. The game also has a co-op mode, where you can team up with other players and take on challenging bosses.

    -

    However, Preta: Vendetta Rising is not a free game. You need to purchase it from Steam or other platforms to play it. But what if you want to play the game without paying for it? Is there a way to get the crack activation code for free?

    -

    Preta: Vendetta Rising crack activation code download


    DOWNLOAD 🔗 https://cinurl.com/2uEX3Q



    -

    In this article, we will show you how to download Preta: Vendetta Rising crack activation code and enjoy the game for free. We will also tell you the risks and consequences of using the crack activation code and why you should avoid it.

    -

    How to Download Preta: Vendetta Rising Crack Activation Code for Free

    -

    There are many websites that claim to offer the crack activation code for Preta: Vendetta Rising for free. These websites usually provide a link or a file that you need to download and install on your PC. Some of them may also ask you to complete surveys or offers before giving you the code.

    -

    However, these websites are not trustworthy or reliable. They are often scams that are designed to trick you into downloading malware, viruses, or spyware on your PC. These malicious programs can harm your PC, steal your personal information, or even lock your files and demand ransom.

    -

    Therefore, we do not recommend downloading Preta: Vendetta Rising crack activation code from these websites. They are illegal, unsafe, and unethical.

    -

    What Are the Risks and Consequences of Using Preta: Vendetta Rising Crack Activation Code?

    -

    Besides downloading malware or viruses on your PC, there are other risks and consequences of using Preta: Vendetta Rising crack activation code. Here are some of them:

    -
      -
    • You may not be able to play the game properly. The crack activation code may not work or may cause errors or glitches in the game. You may also miss out on the latest updates, patches, and features of the game.
    • You may face legal issues. Using the crack activation code is a violation of the game's terms of service and copyright laws. You may be sued by the game developers or publishers for piracy or infringement.
    • You may lose your Steam account. If you use the crack activation code on Steam, you may be detected and banned by Valve's anti-cheat system. You may lose access to your Steam account and all your games and items.
    • You may ruin your gaming experience. Using the crack activation code may take away the fun and satisfaction of playing the game legitimately. You may also miss out on the online features and community of the game.
    -

    Why You Should Avoid Preta: Vendetta Rising Crack Activation Code

    -

    As you can see, using a Preta: Vendetta Rising crack activation code is not worth it. It is risky, illegal, and unethical. It can harm your PC, your privacy, your reputation, and your gaming experience.

    -

    -

    Instead of using the crack activation code, you should support the game developers and publishers by purchasing the game legally from Steam or other platforms. By doing so, you can enjoy the game safely and fully, without any worries or regrets.

    -

    Preta: Vendetta Rising is a great game that deserves your support and appreciation. It is one of the best VR RPGs that you can play on your PC. If you love dark fantasy and action games, you should definitely give it a try.

    -

    Conclusion

    -

    Preta: Vendetta Rising crack activation code download is a tempting option for some gamers who want to play the game for free. However, it is not a smart or ethical choice. It can expose you to malware, viruses, legal issues, account bans, and a poor gaming experience.

    -

    The best way to play Preta: Vendetta Rising is to buy it legally from Steam or other platforms. This way, you can support the game developers and publishers, enjoy the game fully and safely, and have a great gaming experience.

    -

    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Tarzhard The Return Torrent.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Tarzhard The Return Torrent.md deleted file mode 100644 index 63743d187763135717cc788498c875c983ee62bd..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Tarzhard The Return Torrent.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Tarzhard The Return Torrent


    DOWNLOAD »»» https://cinurl.com/2uEYO3



    -
    -... in heat 7 whore of the rings torrent .blowjob von einer jungen teen ah sex 3gp ... cumshots creamy dansk drenget fyr 2012 nummer 82 tarzhard return xxx fuck ... 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/500 Days Of Summer 720p Yify ((TOP)).md b/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/500 Days Of Summer 720p Yify ((TOP)).md deleted file mode 100644 index b8bcb68b995e90955b63c27a1ea7bbc9007c28cb..0000000000000000000000000000000000000000 --- a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/500 Days Of Summer 720p Yify ((TOP)).md +++ /dev/null @@ -1,6 +0,0 @@ -

    500 Days Of Summer 720p Yify


    Download Zip: https://urluss.com/2uCEIM



    -
    -720p.BLU. 1080p.Blu. 1280*534. english | English SDH. VIDEOSZ. 622.31 MB. 720p.Blu. 1080p.Blu. 1280*534. English. DVDRip. Dubbed. FR. 1080i... 720p.Blu. 1080p.Blu. 1280*534. English. Dubbed. 720p.Blu. 1080p.Blu. 1280*534. English. "The Playlist" & "The Frames" are trailers for 5 upcoming Blu-ray releases from.. 720p.Blu. 1080p.Blu. 1280*534. English. 720p.Blu. 1080p.Blu. 1280*534. English... I have had a hard time capturing this 1080p video since it is a high quality. I bought it for my 1080p projector and I could not. two questions - which card is better for watching Blu-rays and movies in.. 720p.Blu. 1080p.Blu. 1280*534. English. 720p.Blu. 1080p.Blu. 1280*534. English... If a film or television series has been created as a video download on the Internet,. 720p.Blu. 1080p.Blu. 1280*534. English. 720p.Blu. 1080p.Blu. 1280*534. English... 1080i.Blu. 720p.Blu. 1080p.Blu. 1280*534. English. 720p.Blu. 1080p.Blu. 1280*534. English... first name is home to the best selection of. JPG | JPG 1.080.000. Eng. |. 960. 720p.Blu. 1080p.Blu. 1280*534. English... Video Codec. Blu-ray. 720p.Blu. 1080p.Blu. 1280*534. English... If a film or television series has been created as a video download on the Internet,. The Blu-ray is based on the.. 720p.Blu. 1080p.Blu. 1280*534. English... I have had a hard time capturing this 1080p video since it is a high quality. I bought it for my 1080p projector and I could not. two questions - which card is better for watching Blu-rays and movies in.. 720p.Blu. 1080p.Blu. 1280*534. English. 720p.Blu 4fefd39f24
    -
    -
    -

    diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/cnn/builder.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/cnn/builder.py deleted file mode 100644 index 7567316c566bd3aca6d8f65a84b00e9e890948a7..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/cnn/builder.py +++ /dev/null @@ -1,30 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..runner import Sequential -from ..utils import Registry, build_from_cfg - - -def build_model_from_cfg(cfg, registry, default_args=None): - """Build a PyTorch model from config dict(s). Different from - ``build_from_cfg``, if cfg is a list, a ``nn.Sequential`` will be built. - - Args: - cfg (dict, list[dict]): The config of modules, is is either a config - dict or a list of config dicts. If cfg is a list, a - the built modules will be wrapped with ``nn.Sequential``. - registry (:obj:`Registry`): A registry the module belongs to. - default_args (dict, optional): Default arguments to build the module. - Defaults to None. - - Returns: - nn.Module: A built nn module. - """ - if isinstance(cfg, list): - modules = [ - build_from_cfg(cfg_, registry, default_args) for cfg_ in cfg - ] - return Sequential(*modules) - else: - return build_from_cfg(cfg, registry, default_args) - - -MODELS = Registry('model', build_func=build_model_from_cfg) diff --git a/spaces/svjack/stable-diffusion.search.hash/custom.css b/spaces/svjack/stable-diffusion.search.hash/custom.css deleted file mode 100644 index 1755c9ab16900fcc8e82ea7f0058ead09ae3ff1d..0000000000000000000000000000000000000000 --- a/spaces/svjack/stable-diffusion.search.hash/custom.css +++ /dev/null @@ -1,32 +0,0 @@ -#title{text-align: center;} -#title h1{font-size: 3em; display:inline-flex; align-items:center} -#title img{width: 100px; margin-right: 0.5em} -#prompt input{width: calc(100% - 160px);border-top-right-radius: 0px;border-bottom-right-radius: 0px;} -#run_button{position:absolute;margin-top: 11px;right: 0;margin-right: 0.8em;border-bottom-left-radius: 0px;border-top-left-radius: 0px;} -#gallery{display:flex;} -#gallery .grid-wrap{min-height: 100%;} -#accordion code{word-break: break-all;word-wrap: break-word;white-space: pre-wrap} -#soon{opacity: 0.55; pointer-events: none} -#soon button{width: 100%} -#share-btn-container {padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; max-width: 13rem; margin-left: auto;} -div#share-btn-container > div {flex-direction: row;background: black;align-items: center} -#share-btn-container:hover {background-color: #060606} -#share-btn {all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.5rem !important; padding-bottom: 0.5rem !important;right:0;} -#share-btn * {all: unset} -#share-btn-container div:nth-child(-n+2){width: auto !important;min-height: 0px !important;} -#share-btn-container .wrap {display: none !important} -#share-btn-container.hidden {display: none!important} -#extra_info{margin-top: 1em} -.pending .min {min-height: auto} -#gallery_box{padding-top: 0} -#gallery_box .form{border: 0 !important} -#order_radio{border: 0;padding-left: 0} -#order_radio .form{border:0 !important; padding-bottom: 0.25em} -#order_radio [data-testid="block-info"]{float: left;margin-top: 2px;margin-right: 6px} -#order_radio label{padding: 0.25em 0.75em 
!important;font-size: 85% !important} -@media (max-width: 512px) { - #title h1{font-size: 2.2em} - #title img{width: 80px;} - #gallery {max-height: 370px} - #main_app{flex-direction: column} -} diff --git a/spaces/sysf/Edge-TTS/README.md b/spaces/sysf/Edge-TTS/README.md deleted file mode 100644 index 2709fdd8226c7ab2c4fa42e01096e7801376bdef..0000000000000000000000000000000000000000 --- a/spaces/sysf/Edge-TTS/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Edge TTS -emoji: 🌖 -colorFrom: green -colorTo: blue -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false -license: other ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/t13718236382/bingoGPT4/src/app/page.tsx b/spaces/t13718236382/bingoGPT4/src/app/page.tsx deleted file mode 100644 index 0dff3431b098ce4fe282cc83fc87a93a28a43090..0000000000000000000000000000000000000000 --- a/spaces/t13718236382/bingoGPT4/src/app/page.tsx +++ /dev/null @@ -1,15 +0,0 @@ -import dynamic from 'next/dynamic' - -const DynamicComponentWithNoSSR = dynamic( - () => import('../components/chat'), - { ssr: false } -) - -export default function IndexPage() { - return ( - <> -
    - - - ) -} diff --git a/spaces/tabeina/bingo1/cloudflare/worker.js b/spaces/tabeina/bingo1/cloudflare/worker.js deleted file mode 100644 index e0debd750615f1329b2c72fbce73e1b9291f7137..0000000000000000000000000000000000000000 --- a/spaces/tabeina/bingo1/cloudflare/worker.js +++ /dev/null @@ -1,18 +0,0 @@ -const TRAGET_HOST='hf4all-bingo.hf.space' // 请将此域名改成你自己的,域名信息在设置》站点域名查看。 - -export default { - async fetch(request) { - const uri = new URL(request.url); - if (uri.protocol === 'http:') { - uri.protocol = 'https:'; - return new Response('', { - status: 301, - headers: { - location: uri.toString(), - }, - }) - } - uri.host = TRAGET_HOST - return fetch(new Request(uri.toString(), request)); - }, -}; diff --git a/spaces/taesiri/DeticChatGPT/detic/custom_solver.py b/spaces/taesiri/DeticChatGPT/detic/custom_solver.py deleted file mode 100644 index 0284ae14ed2e93b2664ef52ad938061f78363516..0000000000000000000000000000000000000000 --- a/spaces/taesiri/DeticChatGPT/detic/custom_solver.py +++ /dev/null @@ -1,78 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -from enum import Enum -import itertools -from typing import Any, Callable, Dict, Iterable, List, Set, Type, Union -import torch - -from detectron2.config import CfgNode - -from detectron2.solver.build import maybe_add_gradient_clipping - -def match_name_keywords(n, name_keywords): - out = False - for b in name_keywords: - if b in n: - out = True - break - return out - -def build_custom_optimizer(cfg: CfgNode, model: torch.nn.Module) -> torch.optim.Optimizer: - """ - Build an optimizer from config. - """ - params: List[Dict[str, Any]] = [] - memo: Set[torch.nn.parameter.Parameter] = set() - custom_multiplier_name = cfg.SOLVER.CUSTOM_MULTIPLIER_NAME - optimizer_type = cfg.SOLVER.OPTIMIZER - for key, value in model.named_parameters(recurse=True): - if not value.requires_grad: - continue - # Avoid duplicating parameters - if value in memo: - continue - memo.add(value) - lr = cfg.SOLVER.BASE_LR - weight_decay = cfg.SOLVER.WEIGHT_DECAY - if "backbone" in key: - lr = lr * cfg.SOLVER.BACKBONE_MULTIPLIER - if match_name_keywords(key, custom_multiplier_name): - lr = lr * cfg.SOLVER.CUSTOM_MULTIPLIER - print('Costum LR', key, lr) - param = {"params": [value], "lr": lr} - if optimizer_type != 'ADAMW': - param['weight_decay'] = weight_decay - params += [param] - - def maybe_add_full_model_gradient_clipping(optim): # optim: the optimizer class - # detectron2 doesn't have full model gradient clipping now - clip_norm_val = cfg.SOLVER.CLIP_GRADIENTS.CLIP_VALUE - enable = ( - cfg.SOLVER.CLIP_GRADIENTS.ENABLED - and cfg.SOLVER.CLIP_GRADIENTS.CLIP_TYPE == "full_model" - and clip_norm_val > 0.0 - ) - - class FullModelGradientClippingOptimizer(optim): - def step(self, closure=None): - all_params = itertools.chain(*[x["params"] for x in self.param_groups]) - torch.nn.utils.clip_grad_norm_(all_params, clip_norm_val) - super().step(closure=closure) - - return FullModelGradientClippingOptimizer if enable else optim - - - if optimizer_type == 'SGD': - optimizer = maybe_add_full_model_gradient_clipping(torch.optim.SGD)( - params, cfg.SOLVER.BASE_LR, momentum=cfg.SOLVER.MOMENTUM, - nesterov=cfg.SOLVER.NESTEROV - ) - elif optimizer_type == 'ADAMW': - optimizer = maybe_add_full_model_gradient_clipping(torch.optim.AdamW)( - params, cfg.SOLVER.BASE_LR, - weight_decay=cfg.SOLVER.WEIGHT_DECAY - ) - else: - raise NotImplementedError(f"no optimizer type {optimizer_type}") - if not cfg.SOLVER.CLIP_GRADIENTS.CLIP_TYPE == "full_model": - 
optimizer = maybe_add_gradient_clipping(cfg, optimizer) - return optimizer \ No newline at end of file diff --git a/spaces/teowu/Q-Instruct-on-mPLUG-Owl-2/mplug_owl2/serve/model_worker.py b/spaces/teowu/Q-Instruct-on-mPLUG-Owl-2/mplug_owl2/serve/model_worker.py deleted file mode 100644 index 6d6db1421f79d287d3f0389a64b99d726989c13c..0000000000000000000000000000000000000000 --- a/spaces/teowu/Q-Instruct-on-mPLUG-Owl-2/mplug_owl2/serve/model_worker.py +++ /dev/null @@ -1,278 +0,0 @@ -""" -A model worker executes the model. -""" -import argparse -import asyncio -import json -import time -import threading -import uuid - -from fastapi import FastAPI, Request, BackgroundTasks -from fastapi.responses import StreamingResponse -import requests -import torch -import uvicorn -from functools import partial - -from mplug_owl2.constants import WORKER_HEART_BEAT_INTERVAL -from mplug_owl2.utils import (build_logger, server_error_msg, - pretty_print_semaphore) -from mplug_owl2.model.builder import load_pretrained_model -from mplug_owl2.mm_utils import process_images, load_image_from_base64, tokenizer_image_token, KeywordsStoppingCriteria -from mplug_owl2.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN -from transformers import TextIteratorStreamer -from threading import Thread - - -GB = 1 << 30 - -worker_id = str(uuid.uuid4())[:6] -logger = build_logger("model_worker", f"model_worker_{worker_id}.log") -global_counter = 0 - -model_semaphore = None - - -def heart_beat_worker(controller): - - while True: - time.sleep(WORKER_HEART_BEAT_INTERVAL) - controller.send_heart_beat() - - -class ModelWorker: - def __init__(self, controller_addr, worker_addr, - worker_id, no_register, - model_path, model_base, model_name, - load_8bit, load_4bit, device): - self.controller_addr = controller_addr - self.worker_addr = worker_addr - self.worker_id = worker_id - if model_path.endswith("/"): - model_path = model_path[:-1] - if model_name is None: - model_paths = model_path.split("/") - if model_paths[-1].startswith('checkpoint-'): - self.model_name = model_paths[-2] + "_" + model_paths[-1] - else: - self.model_name = model_paths[-1] - else: - self.model_name = model_name - - self.device = device - logger.info(f"Loading the model {self.model_name} on worker {worker_id} ...") - self.tokenizer, self.model, self.image_processor, self.context_len = load_pretrained_model( - model_path, model_base, self.model_name, load_8bit, load_4bit, device=self.device) - self.is_multimodal = True - - if not no_register: - self.register_to_controller() - self.heart_beat_thread = threading.Thread( - target=heart_beat_worker, args=(self,)) - self.heart_beat_thread.start() - - def register_to_controller(self): - logger.info("Register to controller") - - url = self.controller_addr + "/register_worker" - data = { - "worker_name": self.worker_addr, - "check_heart_beat": True, - "worker_status": self.get_status() - } - r = requests.post(url, json=data) - assert r.status_code == 200 - - def send_heart_beat(self): - logger.info(f"Send heart beat. Models: {[self.model_name]}. " - f"Semaphore: {pretty_print_semaphore(model_semaphore)}. 
" - f"global_counter: {global_counter}") - - url = self.controller_addr + "/receive_heart_beat" - - while True: - try: - ret = requests.post(url, json={ - "worker_name": self.worker_addr, - "queue_length": self.get_queue_length()}, timeout=5) - exist = ret.json()["exist"] - break - except requests.exceptions.RequestException as e: - logger.error(f"heart beat error: {e}") - time.sleep(5) - - if not exist: - self.register_to_controller() - - def get_queue_length(self): - if model_semaphore is None: - return 0 - else: - return args.limit_model_concurrency - model_semaphore._value + (len( - model_semaphore._waiters) if model_semaphore._waiters is not None else 0) - - def get_status(self): - return { - "model_names": [self.model_name], - "speed": 1, - "queue_length": self.get_queue_length(), - } - - @torch.inference_mode() - def generate_stream(self, params): - tokenizer, model, image_processor = self.tokenizer, self.model, self.image_processor - - prompt = params["prompt"] - ori_prompt = prompt - images = params.get("images", None) - num_image_tokens = 0 - if images is not None and len(images) > 0 and self.is_multimodal: - if len(images) > 0: - if len(images) != prompt.count(DEFAULT_IMAGE_TOKEN): - raise ValueError("Number of images does not match number of <|image|> tokens in prompt") - - images = [load_image_from_base64(image) for image in images] - images = process_images(images, image_processor, model.config) - - if type(images) is list: - images = [image.to(self.model.device, dtype=torch.float16) for image in images] - else: - images = images.to(self.model.device, dtype=torch.float16) - - replace_token = DEFAULT_IMAGE_TOKEN - prompt = prompt.replace(DEFAULT_IMAGE_TOKEN, replace_token) - - num_image_tokens = prompt.count(replace_token) * (model.get_model().visual_abstractor.config.num_learnable_queries + 1) - else: - images = None - image_args = {"images": images} - else: - images = None - image_args = {} - - temperature = float(params.get("temperature", 1.0)) - top_p = float(params.get("top_p", 1.0)) - max_context_length = getattr(model.config, 'max_position_embeddings', 4096) - max_new_tokens = min(int(params.get("max_new_tokens", 256)), 1024) - stop_str = params.get("stop", None) - do_sample = True if temperature > 0.001 else False - - input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt').unsqueeze(0).to(self.device) - keywords = [stop_str] - stopping_criteria = KeywordsStoppingCriteria(keywords, tokenizer, input_ids) - streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True, timeout=15) - - max_new_tokens = min(max_new_tokens, max_context_length - input_ids.shape[-1] - num_image_tokens) - - if max_new_tokens < 1: - yield json.dumps({"text": ori_prompt + "Exceeds max token length. 
Please start a new conversation, thanks.", "error_code": 0}).encode() + b"\0" - return - - thread = Thread(target=model.generate, kwargs=dict( - inputs=input_ids, - do_sample=do_sample, - temperature=temperature, - top_p=top_p, - max_new_tokens=max_new_tokens, - streamer=streamer, - stopping_criteria=[stopping_criteria], - use_cache=True, - **image_args - )) - thread.start() - - generated_text = ori_prompt - for new_text in streamer: - generated_text += new_text - if generated_text.endswith(stop_str): - generated_text = generated_text[:-len(stop_str)] - yield json.dumps({"text": generated_text, "error_code": 0}).encode() + b"\0" - - def generate_stream_gate(self, params): - try: - for x in self.generate_stream(params): - yield x - except ValueError as e: - print("Caught ValueError:", e) - ret = { - "text": server_error_msg, - "error_code": 1, - } - yield json.dumps(ret).encode() + b"\0" - except torch.cuda.CudaError as e: - print("Caught torch.cuda.CudaError:", e) - ret = { - "text": server_error_msg, - "error_code": 1, - } - yield json.dumps(ret).encode() + b"\0" - except Exception as e: - print("Caught Unknown Error", e) - ret = { - "text": server_error_msg, - "error_code": 1, - } - yield json.dumps(ret).encode() + b"\0" - -app = FastAPI() - -def release_model_semaphore(fn=None): - model_semaphore.release() - if fn is not None: - fn() - - -@app.post("/worker_generate_stream") -async def generate_stream(request: Request): - global model_semaphore, global_counter - global_counter += 1 - params = await request.json() - - if model_semaphore is None: - model_semaphore = asyncio.Semaphore(args.limit_model_concurrency) - await model_semaphore.acquire() - worker.send_heart_beat() - generator = worker.generate_stream_gate(params) - background_tasks = BackgroundTasks() - background_tasks.add_task(partial(release_model_semaphore, fn=worker.send_heart_beat)) - return StreamingResponse(generator, background=background_tasks) - - -@app.post("/worker_get_status") -async def get_status(request: Request): - return worker.get_status() - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--host", type=str, default="localhost") - parser.add_argument("--port", type=int, default=21002) - parser.add_argument("--worker-address", type=str, - default="http://localhost:21002") - parser.add_argument("--controller-address", type=str, - default="http://localhost:21001") - parser.add_argument("--model-path", type=str, default="facebook/opt-350m") - parser.add_argument("--model-base", type=str, default=None) - parser.add_argument("--model-name", type=str) - parser.add_argument("--device", type=str, default="cuda") - parser.add_argument("--limit-model-concurrency", type=int, default=5) - parser.add_argument("--stream-interval", type=int, default=1) - parser.add_argument("--no-register", action="store_true") - parser.add_argument("--load-8bit", action="store_true") - parser.add_argument("--load-4bit", action="store_true") - args = parser.parse_args() - logger.info(f"args: {args}") - - - worker = ModelWorker(args.controller_address, - args.worker_address, - worker_id, - args.no_register, - args.model_path, - args.model_base, - args.model_name, - args.load_8bit, - args.load_4bit, - args.device) - uvicorn.run(app, host=args.host, port=args.port, log_level="info") \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/Blue Is The Warmest Colour 2013 BRRip 720p Dual Audio FrenchEnglish.md b/spaces/terfces0erbo/CollegeProjectV2/Blue Is The Warmest Colour 2013 BRRip 
720p Dual Audio FrenchEnglish.md deleted file mode 100644 index 42fdf737977904fd3501600323f82e3e94e447ae..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Blue Is The Warmest Colour 2013 BRRip 720p Dual Audio FrenchEnglish.md +++ /dev/null @@ -1,13 +0,0 @@ -

    Blue Is The Warmest Colour 2013 BRRip 720p Dual Audio FrenchEnglish


    DOWNLOAD ✔✔✔ https://bytlly.com/2uGlHE



    -
    -Blue is the warmest color 2013 BRRip 720p dual audio French/English. James Bond 007 in all sizes. -In order to use this site, please verify that you are a legal adult. -Until you are confirmed as a legal adult, we will not be able to give you the domain. -Please contact us at or by email. -M/watch?vO9Lq8JQ7ZmL "http www. -But in the first place, if you have a lot of time and a lot of money, you can visit other web sites if you want to visit them for free. -It can be a lot of trouble finding the right information. -You can find almost everything for free on the internet. 8a78ff9644
    -
    -
    -

    diff --git a/spaces/terfces0erbo/CollegeProjectV2/HD Online Player (the Secret Movie In Hindi Dubbed Fre).md b/spaces/terfces0erbo/CollegeProjectV2/HD Online Player (the Secret Movie In Hindi Dubbed Fre).md deleted file mode 100644 index bdbd16223018890feb7239febc02753e16184f36..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/HD Online Player (the Secret Movie In Hindi Dubbed Fre).md +++ /dev/null @@ -1,7 +0,0 @@ - -

    The eighth Barbie film is Barbie 3: Dream Must Come True. It was released on June 22, 2010. The film tells the story of three sisters who believe that their doll, Rebecca, has become real and that she needs their help. While their doll magically becomes more alive, the sisters begin to notice that she may not be what she first seems. The film was directed by Zevin and is based on the book of the same name by Pepper Schwartz. Mainframe Entertainment created the animation for the film. You can watch the Hindi dubbed version on Netflix.

    -

    The third Barbie film on Netflix is Barbie: A Fashion Fairytale. It was released on July 25, 2010. Despite the film being based on a completely fictional story, Barbie was featured as a central character. The film tells the story of Barbie. While her best friends are off on a European vacation, Barbie decides to set off on a more mysterious adventure. When she learns of an ancient secret, she embarks on a journey to rescue her friends from an evil witch. The film was directed by Chris Jamison. Mainframe Entertainment created the animation for the film. You can watch the Hindi dubbed version on Netflix.

    -

    HD Online Player (the secret movie in hindi dubbed fre)


    Download File >>>>> https://bytlly.com/2uGlz3



    -

    All of the free movies found on this website are hosted on third-party servers that are freely available to watch online for all internet users.
    Any legal issues regarding the free online movies on this website should be taken up with the actual file hosts themselves, as we're not affiliated with them.

    TodayPk Purpose / Idea
    Watch online movies in HD print quality and download them for free: watch full movies online, Bollywood movies, and the latest Hollywood movies in DVD print quality. Watching online movies is my hobby; I watch one or two movies online every day, especially Indian movies on their release day. I used to watch them on different websites in cam print, always using Google search to find them, so I decided to build a platform where users can watch movies in HD/DVD print quality, and I have listed all the latest movies. I also cover different categories of movies: whether you want to see Hollywood movies, Punjabi movies, or Bollywood movies, my website has all of these categories. I also organize movies by actors and actresses; for example, if someone wants to see all the movies of Amir Khan, they can select the Amir Khan movies list on my website and all of his movies will be displayed. We provide movie lists for all actors and actresses, so you can find any movie and watch it in high print quality. I try my best to understand the needs of users who want to watch a movie, but if you have any suggestion or advice for me, you are always welcome; leave a comment on a video and I will surely reply. I provide full movies to watch online and to download for free, so stay connected with our website to enjoy the latest movies, and if you don't have time to watch, just download the movie and watch it later in the best print.

    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/test12356/SUI-svc-3.0/resample.py b/spaces/test12356/SUI-svc-3.0/resample.py deleted file mode 100644 index 11bb0bf74ea7ea2ae1fa321b52419089d4d83aee..0000000000000000000000000000000000000000 --- a/spaces/test12356/SUI-svc-3.0/resample.py +++ /dev/null @@ -1,47 +0,0 @@ -import os -import argparse -import librosa -import numpy as np -from multiprocessing import Pool, cpu_count -from scipy.io import wavfile -from tqdm import tqdm - - -def process(item): - spkdir, wav_name, args = item - # speaker 's5', 'p280', 'p315' are excluded, - speaker = spkdir.split(os.sep)[-1] - wav_path = os.path.join(args.in_dir, speaker, wav_name) - if os.path.exists(wav_path) and '.wav' in wav_path: - os.makedirs(os.path.join(args.out_dir2, speaker), exist_ok=True) - wav, sr = librosa.load(wav_path, None) - wav, _ = librosa.effects.trim(wav, top_db=20) - peak = np.abs(wav).max() - if peak > 1.0: - wav = 0.98 * wav / peak - wav2 = librosa.resample(wav, orig_sr=sr, target_sr=args.sr2) - save_name = wav_name - save_path2 = os.path.join(args.out_dir2, speaker, save_name) - wavfile.write( - save_path2, - args.sr2, - (wav2 * np.iinfo(np.int16).max).astype(np.int16) - ) - - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--sr2", type=int, default=48000, help="sampling rate") - parser.add_argument("--in_dir", type=str, default="./dataset_raw", help="path to source dir") - parser.add_argument("--out_dir2", type=str, default="./dataset/48k", help="path to target dir") - args = parser.parse_args() - processs = cpu_count()-2 if cpu_count() >4 else 1 - pool = Pool(processes=processs) - - for speaker in os.listdir(args.in_dir): - spk_dir = os.path.join(args.in_dir, speaker) - if os.path.isdir(spk_dir): - print(spk_dir) - for _ in tqdm(pool.imap_unordered(process, [(spk_dir, i, args) for i in os.listdir(spk_dir) if i.endswith("wav")])): - pass diff --git a/spaces/theaster/RVC-New-Arknights/README.md b/spaces/theaster/RVC-New-Arknights/README.md deleted file mode 100644 index f80ed2c63c17d946e255354210f88160f8d0674c..0000000000000000000000000000000000000000 --- a/spaces/theaster/RVC-New-Arknights/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Rvc Models New -emoji: 🎤 -colorFrom: purple -colorTo: red -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false -license: mit -duplicated_from: ArkanDash/rvc-models-new ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/thelou1s/TensorflowHubSpice/README.md b/spaces/thelou1s/TensorflowHubSpice/README.md deleted file mode 100644 index fd6f030912518e019c4b2d731451d5779f0198f6..0000000000000000000000000000000000000000 --- a/spaces/thelou1s/TensorflowHubSpice/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: SpiceIcaroTP -emoji: 🌍 -colorFrom: pink -colorTo: gray -sdk: gradio -sdk_version: 2.9.4 -app_file: app_deploy.py -pinned: false -license: mit -duplicated_from: mjaramillo/SpiceIcaroTP ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/thu-coai/DA-Transformer/app.py b/spaces/thu-coai/DA-Transformer/app.py deleted file mode 100644 index ec310b683eb733d8abc37707a8edb9f950fc21a6..0000000000000000000000000000000000000000 --- a/spaces/thu-coai/DA-Transformer/app.py +++ /dev/null @@ -1,546 +0,0 @@ -import argparse -from collections import defaultdict -import datetime -import json -import os, sys -import time -import concurrent - -import math 
-import gradio as gr -import requests -import logging -import numpy as np -import matplotlib.pyplot as plt -import fairseq - -logger = logging.getLogger(__name__) - -fairseq_path = os.path.dirname(os.path.dirname(fairseq.__file__)) - -sys.path.insert(1, f"{fairseq_path}") -from fs_plugins.models.glat_decomposed_with_link import GlatDecomposedLink - -sys.path.insert(1, f"{fairseq_path}/examples") -from mass.s2s_model import TransformerMASSModel -from transformer.hub_interface import TransformerHubInterface - -notice_markdown = (""" -# ⚡ Directed Acyclic Transformer: A Non-Autoregressive Sequence-to-Sequence Model designed for Parallel Text Generation. -- **Fast Generation**: DA-Transformer offers faster inference compared to autoregressive Transformers (with fairseq implementation), with a reduction in latency by 7~14x and an increase in throughput by ~20x. -- **High Quality**: DA-Transformer performs competitively with autoregressive Transformers, even with pre-trained models like BART, in a variety of text generation tasks. -- **Easy Training**: DA-Transformer can be trained end-to-end without requiring knowledge distillation, making it simple and straightforward to train. - -## Resources - -- Codes: [[Github]](https://github.com/thu-coai/DA-Transformer) -- Papers: [[Machine Translation]](https://proceedings.mlr.press/v162/huang22m/huang22m.pdf) [[Pre-training]](https://arxiv.org/pdf/2304.11791.pdf) - -## Terms of use -By using this service, users are required to agree to the following terms: The service is a research preview intended for non-commercial use only. It does not gaurantee the correctness of the output text. The service may collect user data for future research. - -## This demo contains models for -- [Zh-En Translation](https://huggingface.co/thu-coai/dat_base_translation_zhen) -- [En-De Translation](https://huggingface.co/thu-coai/dat_base_translation_ende) -- [Question Generation](https://huggingface.co/thu-coai/dat_uncased_squad) -""") - -learn_more_markdown = (""" -""") - - -css = """ -pre { - white-space: pre-wrap; /* Since CSS 2.1 */ - white-space: -moz-pre-wrap; /* Mozilla, since 1999 */ - white-space: -pre-wrap; /* Opera 4-6 */ - white-space: -o-pre-wrap; /* Opera 7 */ - word-wrap: break-word; /* Internet Explorer 5.5+ */ -} -""" - -available_models = { - "dat_base_translation_ende": { - "class": GlatDecomposedLink, - "args":{ - "model_name_or_path": "hfhub://thu-coai/dat_base_translation_ende", - "decode_strategy": "beamsearch", - "decode_max_workers": 1, - "decode_threads_per_worker": 4, - "decode_dedup": True, - "decode_alpha": 1.1, - "decode_gamma": 0, - "decode_beam_size": 200, - "decode_batch_size": 1, - "decode_top_cand": 5, - "decode_max_beam_per_length": 10, - "max_decoder_batch_tokens": 2048 - }, - "examples": ["I am a fast translation model."], - "expected_load_time": 17 - }, - "dat_base_translation_zhen": { - "class": GlatDecomposedLink, - "args":{ - "model_name_or_path": "hfhub://thu-coai/dat_base_translation_zhen", - "decode_strategy": "beamsearch", - "decode_max_workers": 1, - "decode_threads_per_worker": 4, - "decode_dedup": True, - "decode_alpha": 1.1, - "decode_gamma": 0, - "decode_beam_size": 200, - "decode_batch_size": 1, - "decode_top_cand": 5, - "decode_max_beam_per_length": 10, - "max_decoder_batch_tokens": 2048 - }, - "examples": ["我是一个高速的机器翻译模型。"], - "expected_load_time": 17 - }, - "dat_uncased_squad": { - "class": GlatDecomposedLink, - "args":{ - "model_name_or_path": "hfhub://thu-coai/dat_uncased_squad", - "decode_strategy": "beamsearch", - 
"decode_max_workers": 1, - "decode_threads_per_worker": 4, - "decode_gamma": 0, - "decode_beam_size": 200, - "decode_batch_size": 1, - "decode_top_cand": 5, - "decode_no_consecutive_repeated_tokens": 3, - "decode_no_repeated_tokens": 2, - "decode_max_beam_per_length": 10, - "max_decoder_batch_tokens": 2048 - }, - "examples": ["Two [SEP] Two additional teams of 40 attendants each will accompany the flame on its mainland China route."], - "expected_load_time": 20 - }, - "mass_uncased_squad": { - "class": TransformerMASSModel, - "args":{ - "model_name_or_path": "hfhub://thu-coai/mass_uncased_squad" - }, - "examples": ["Two [SEP] Two additional teams of 40 attendants each will accompany the flame on its mainland China route."], - "expected_load_time": 10 - }, - "transformer_base_translation_ende": { - "class": TransformerHubInterface, - "args":{ - "model_name_or_path": "hfhub://thu-coai/transformer_base_translation_ende" - }, - "examples": ["I am a fast translation model."], - "expected_load_time": 10 - }, - "transformer_base_translation_zhen": { - "class": TransformerHubInterface, - "args":{ - "model_name_or_path": "hfhub://thu-coai/transformer_base_translation_zhen" - }, - "examples": ["我是一个高速的机器翻译模型。"], - "expected_load_time": 10 - } -} - -compare_available_types = { - "Translation Zh-En: DA-Transformer v.s. Autoregressive Transformer": { - "models": ['dat_base_translation_zhen', 'transformer_base_translation_zhen'], - "examples": ["我是一个高速的机器翻译模型。", "非自回归模型可以用来加速自然语言生成。", - "使用本服务前,用户必须同意以下条款:该服务是仅供非商业用途的研究预览。它不保证输出文本的正确性。本服务可能会收集用户数据以供将来研究。"], - "placeholder": "请输入一个中文句子。 (The model will translate the input into English.)" - }, - "Question Generation: DA-Transformer v.s. MASS": { - "models": ['dat_uncased_squad', "mass_uncased_squad"], - "examples": ["Two [SEP] Two additional teams of 40 attendants each will accompany the flame on its mainland China route.", "DA-Transformer [SEP] Directed Acyclic Transformer (DA-Transformer) is a non-autoregressive sequence-to-sequence model designed for parallel text generation."], - "placeholder": "Answer [SEP] Your Passage Here (the answer should be appearred in the passage)." - }, - "Translation En-De: DA-Transformer v.s. Autoregressive Transformer": { - "models": ['dat_base_translation_ende', 'transformer_base_translation_ende'], - "examples": ["I am a fast translation model.", "Non-autoregressive models are designed for fast natural language generation.", - "By using this service, users are required to agree to the following terms: The service is a research preview intended for non-commercial use only."], - "placeholder": "Any English sentence here. (The model will translate the input into German.)" - }, -} - -detail_available_types = { - "Translation Zh-En": { - "model": 'dat_base_translation_zhen', - "examples": compare_available_types['Translation Zh-En: DA-Transformer v.s. Autoregressive Transformer']["examples"], - "placeholder": compare_available_types['Translation Zh-En: DA-Transformer v.s. Autoregressive Transformer']["placeholder"] - }, - "Question Generation": { - "model": 'dat_uncased_squad', - "examples": compare_available_types['Question Generation: DA-Transformer v.s. MASS']["examples"], - "placeholder": compare_available_types['Question Generation: DA-Transformer v.s. MASS']["placeholder"] - }, - "Translation En-De": { - "model": 'dat_base_translation_ende', - "examples": compare_available_types['Translation En-De: DA-Transformer v.s. 
Autoregressive Transformer']["examples"], - "placeholder": compare_available_types['Translation En-De: DA-Transformer v.s. Autoregressive Transformer']["placeholder"], - }, -} - -models = {} -workers = None - -def softplus(x, beta=1): - return math.log1p(math.exp(-abs(x * beta))) / beta + max(x, 0) - -def get_fake_progress(min_progress, max_progress, used_time, expected_time): - percentage = max(1 - softplus(expected_time - used_time) / expected_time, 0) - return min_progress + (max_progress - min_progress) * percentage - -def generate(model, model_input): - return {"output": model.translate(model_input)} - -def generate_detail(model, model_input): - output, graph_info = model.generate_graph(model_input) - return {"output": output, "graph_info": graph_info} - -def load_model(model_name): - assert model_name in available_models - logger.info(f"start loading {model_name}") - model = available_models[model_name]['class'].from_pretrained(**available_models[model_name]['args']) - return model - -def warmup_model(model, model_name): - model.translate(available_models[model_name]['examples'][0]) - -def submit(model_name, model_input, generate_fn, request: gr.Request, progress=gr.Progress()): - assert workers is not None, "No workers" - current_progress = 0 - - progress(0, desc="Downloading Checkpoints and Loading Models") - if model_name not in models: - load_start = time.time() - future = workers.submit(load_model, model_name) - while True: - try: - model = future.result(timeout=1) - break - except concurrent.futures._base.TimeoutError as _: - progress(get_fake_progress(min_progress=current_progress, max_progress=0.8, used_time=time.time() - load_start, expected_time=available_models[model_name]['expected_load_time']), - desc="Downloading Checkpoints and Loading Models") - logger.info(f"Model Loaded: {model_name} Load Time: {time.time() - load_start}") - current_progress = 0.8 - models[model_name] = model - else: - model = models[model_name] - - # warmup for better inference time - progress(current_progress, desc="Downloading Checkpoints and Loading Models") - if current_progress == 0.8: - target_progress = 0.9 - else: - target_progress = 0.5 - warmup_start = time.time() - future = workers.submit(warmup_model, model, model_name) - while True: - try: - result = future.result(timeout=1) - break - except concurrent.futures._base.TimeoutError as _: - progress(get_fake_progress(min_progress=current_progress, max_progress=target_progress, used_time=time.time() - warmup_start, expected_time=1), - desc="Downloading Checkpoints and Loading Models") - current_progress = target_progress - - # running - progress(current_progress, desc="Running") - try: - generate_start = time.time() - future = workers.submit(generate_fn, model, model_input) - while True: - try: - result = future.result(timeout=1) - break - except concurrent.futures._base.TimeoutError as _: - progress(get_fake_progress(min_progress=current_progress, max_progress=1, used_time=time.time() - generate_start, expected_time=1), - desc="Running") - inference_time = time.time() - generate_start - - result_abbrev = {} - for key, value in result.items(): - log_str = str(value) - if len(log_str) > 1024: - log_str = log_str[:1024] + "..." 
- result_abbrev[key] = log_str - logger.info(f"Input: [{model_input}] Output: [{result_abbrev}] Inference Time: {inference_time}") - return result, inference_time - except RuntimeError as err: - return f"Runtime Error: {str(err)}", 0 - - -def compare_init_state(model_selector): - model1 = compare_available_types[model_selector]['models'][0] - model2 = compare_available_types[model_selector]['models'][1] - state = [{"model_name": model1}, {"model_name": model2}] - return state - -def compare_refresh(model_selector, samples): - model1 = compare_available_types[model_selector]['models'][0] - model2 = compare_available_types[model_selector]['models'][1] - model_output1 = gr.Textbox.update(visible=True, label=model1) - model_output2 = gr.Textbox.update(visible=True, label=model2) - model_input = gr.Textbox.update(value="", placeholder=compare_available_types[model_selector]['placeholder']) - samples.clear() - samples += [[x]for x in compare_available_types[model_selector]['examples']] - examples = gr.Dataset.update(samples=samples) - model_speed = gr.Plot.update(visible=False) - return model_input, model_output1, model_output2, examples, samples, model_speed - -def compare_submit(model_input, idx, state, request: gr.Request, progress=gr.Progress()): - model_name = state[idx]['model_name'] - model_output, inference_time = submit(model_name, model_input, generate, request, progress) - state[idx]['inference_time'] = inference_time - return model_output['output'], state - -def compare_dataset_click(examples, samples): - return samples[examples][0] - -def compare_show_plot(state): - x = [state[0]['model_name'], state[1]['model_name']] - y = [state[0]['inference_time'], state[1]['inference_time']] - - fig = plt.figure(figsize=(12, 2.5)) - ax = plt.subplot(111) - bars = ax.barh(x, y, 0.75) - ax.bar_label(bars, fmt="%.2f") - ax.set_yticks(np.arange(len(x)), labels=x) - ax.set_xlabel('Inference Time on CPU (s)') - plt.tight_layout() - # plt.subplots_adjust(left=0.1, bottom=0.1, right=0.9, top=0.9, wspace=0, hspace=0) - - return gr.Row.update(visible=True), gr.Plot.update(value=fig, visible=True) - -def compare_clear(): - return "", "", "", gr.Row.update(visible=False) - -example_list = [] - -def build_tab_compare(): - state = gr.State() - samples = gr.State(example_list) - - available_type_names = list(compare_available_types.keys()) - with gr.Row(elem_id="compare_model_selector_row"): - model_selector = gr.Dropdown( - choices=available_type_names, - value=available_type_names[0] if len(available_type_names) > 0 else "", - interactive=True, - show_label=False).style(container=False) - - with gr.Row(elem_id="compare_model_input"): - model_input = gr.Textbox(lines=5, label="input") - # examples = gr.Dataset(examples=[], inputs=[model_input], elem_id="compare_examples") - examples = gr.Dataset(components=[model_input], - label="Examples", - type='index', - samples=example_list, - visible=True - ) - - # with gr.Row(elem_id="compare_examples"): - - with gr.Row(): - clear_btn = gr.Button(value="Clear") - submit_btn = gr.Button(value="Submit", variant="primary") - - # with gr.Accordion("Parameters", open=False, visible=False) as parameter_row: - # temperature = gr.Slider(minimum=0.0, maximum=1.0, value=0.7, step=0.1, interactive=True, label="Temperature",) - # max_output_tokens = gr.Slider(minimum=0, maximum=1024, value=512, step=64, interactive=True, label="Max output tokens",) - - with gr.Row(elem_id="compare_model_output"): - model_output1 = gr.Textbox(lines=5, label="output", visible=False) - model_output2 
= gr.Textbox(lines=5, label="output", visible=False) - - with gr.Row(elem_id="compare_model_speed", visible=False) as row: - with gr.Column(): - model_speed = gr.Plot(value=None, label="Speed") - compare_hints = gr.Markdown("**Note the above time is measured on a free cloud server, which does not use GPU and is thus different from the setting in the papers.**") - - model_selector.change(compare_refresh, [model_selector, samples], [model_input, model_output1, model_output2, examples, samples, model_speed]) - - clear_btn.click(compare_clear, None, [model_input, model_output1, model_output2, row]) - - submit_btn.click(compare_init_state, [model_selector], [state]).\ - then(compare_submit, [model_input, gr.Number(value=0, visible=False, precision=0), state], [model_output1, state]).\ - then(compare_submit, [model_input, gr.Number(value=1, visible=False, precision=0), state], [model_output2, state]).\ - then(compare_show_plot, [state], [row, model_speed]) - # submit_btn.click(compare_show_plot, [state], [model_speed]) - - examples.click(compare_dataset_click, [examples, samples], [model_input]) - - def load(fn): - fn(compare_refresh, [model_selector, samples], [model_input, model_output1, model_output2, examples, samples]) - - return load - -def detail_init_state(model_selector): - model = detail_available_types[model_selector]['model'] - state = {"model_name": model, "cnt": 0} - return state - -def detail_refresh(model_selector, samples): - model = detail_available_types[model_selector]['model'] - model_output = gr.Textbox.update(visible=True, label=model) - model_input = gr.Textbox.update(value="", placeholder=detail_available_types[model_selector]['placeholder']) - samples.clear() - samples += [[x]for x in detail_available_types[model_selector]['examples']] - examples = gr.Dataset.update(samples=samples) - model_speed = gr.Plot.update(visible=False) - return model_input, model_output, examples, samples, model_speed - -def detail_submit(model_input, state, request: gr.Request, progress=gr.Progress()): - model_name = state['model_name'] - model_output, inference_time = submit(model_name, model_input, generate_detail, request, progress) - state['inference_time'] = inference_time - state["graph_info"] = model_output['graph_info'] - # html_code = open("graph.html").read() - - # state["cnt"] += 1 - # if state["cnt"] > 2: - # html_code += r"""\n""" - # print(html_code) - - return model_output['output'], state, gr.Row.update(visible=True), json.dumps(state) - -def detail_dataset_click(examples, samples): - return samples[examples][0] - -def detail_clear(): - return "", "", gr.Row.update(visible=False) - -def build_tab_detail(): - - state = gr.State() - samples = gr.State(example_list) - - available_type_names = list(detail_available_types.keys()) - with gr.Row(elem_id="detail_model_selector_row"): - model_selector = gr.Dropdown( - choices=available_type_names, - value=available_type_names[0] if len(available_type_names) > 0 else "", - interactive=True, - show_label=False).style(container=False) - - with gr.Row(elem_id="detail_model_input"): - model_input = gr.Textbox(lines=5, label="input") - # examples = gr.Dataset(examples=[], inputs=[model_input], elem_id="compare_examples") - examples = gr.Dataset(components=[model_input], - label="Examples", - type='index', - samples=example_list, - visible=True - ) - - # with gr.Row(elem_id="compare_examples"): - - with gr.Row(): - clear_btn = gr.Button(value="Clear") - submit_btn = gr.Button(value="Submit", variant="primary") - - # with 
gr.Accordion("Parameters", open=False, visible=False) as parameter_row: - # temperature = gr.Slider(minimum=0.0, maximum=1.0, value=0.7, step=0.1, interactive=True, label="Temperature",) - # max_output_tokens = gr.Slider(minimum=0, maximum=1024, value=512, step=64, interactive=True, label="Max output tokens",) - - with gr.Row(elem_id="detail_model_output"): - model_output = gr.Textbox(lines=5, label="output", visible=False) - - with gr.Row(visible=False) as dag_graph: - with gr.Column(scale=1.8): - html = gr.HTML(open("graph.html").read()) - with gr.Column(scale=1): - minimum_node_pass_prob = gr.Slider(0, 1, value=0.2, label="Show nodes with passing probability greater than", info="Nodes that predict the output sequence are always visible") - minimum_edge_prob = gr.Slider(0, 1, value=0.1, label="Show edges with transition probability greater than") - max_out_edge_num = gr.Slider(1, 10, value=5, step=1, label="Show top-k outgoing edges with k") - max_out_edge_prob = gr.Slider(0, 1, value=0.9, label="Show top-p outgoing edges with p") - force_in_edge = gr.Checkbox(True, label="Show at least one incoming edge for each node") - show_node_detail = gr.Checkbox(False, label="Show verbose node information") - show_edge_label = gr.Checkbox(False, label="Show transition probability") - network_refresh = gr.Button(value="Reinitialize DAG Visualization") - graph_parameters = [minimum_node_pass_prob, minimum_edge_prob, max_out_edge_num, max_out_edge_prob, force_in_edge, show_node_detail, show_edge_label] - - js_state = gr.Textbox(visible=False) - - model_selector.change(detail_refresh, [model_selector, samples], [model_input, model_output, examples, samples]) - - clear_btn.click(detail_clear, None, [model_input, model_output, dag_graph]) - - graph_create_js = """(state_str, minimum_node_pass_prob, minimum_edge_prob, max_out_edge_num, max_out_edge_prob, force_in_edge, show_node_detail, show_edge_label) => { - var state = JSON.parse(state_str); - var options = { - minimum_node_pass_prob: minimum_node_pass_prob, - minimum_edge_prob: minimum_edge_prob, - max_out_edge_num: max_out_edge_num, - max_out_edge_prob: max_out_edge_prob, - force_in_edge: force_in_edge, - show_node_detail: show_node_detail, - show_edge_label: show_edge_label, - } - startNetwork(state.graph_info, options); - }""" - graph_update_js = """(minimum_node_pass_prob, minimum_edge_prob, max_out_edge_num, max_out_edge_prob, force_in_edge, show_node_detail, show_edge_label) => { - var options = { - minimum_node_pass_prob: minimum_node_pass_prob, - minimum_edge_prob: minimum_edge_prob, - max_out_edge_num: max_out_edge_num, - max_out_edge_prob: max_out_edge_prob, - force_in_edge: force_in_edge, - show_node_detail: show_node_detail, - show_edge_label: show_edge_label, - } - updateNetwork(options); - }""" - submit_btn.click(detail_init_state, [model_selector], [state]).\ - then(detail_submit, [model_input, state], [model_output, state, dag_graph, js_state]).\ - then(None, [js_state] + graph_parameters, None, _js=graph_create_js) - network_refresh.click(None, [js_state] + graph_parameters, None, _js=graph_create_js) - minimum_node_pass_prob.change(None, graph_parameters, None, _js=graph_update_js) - minimum_edge_prob.change(None, graph_parameters, None, _js=graph_update_js) - max_out_edge_num.change(None, graph_parameters, None, _js=graph_update_js) - max_out_edge_prob.change(None, graph_parameters, None, _js=graph_update_js) - force_in_edge.select(None, graph_parameters, None, _js=graph_update_js) - show_node_detail.select(None, graph_parameters, 
None, _js=graph_update_js) - show_edge_label.select(None, graph_parameters, None, _js=graph_update_js) - - examples.click(detail_dataset_click, [examples, samples], [model_input]) - - def load(fn): - fn(detail_refresh, [model_selector, samples], [model_input, model_output, examples, samples]) - - return load - -def build_demo(): - with gr.Blocks(title="DA-Transformer Demo", theme=gr.themes.Base(), css=css) as demo: - gr.Markdown(notice_markdown) - - with gr.Tab("DA-Transformer Inspection") as detail_tab: - detail_load = build_tab_detail() - detail_load(detail_tab.select) - - with gr.Tab("Speed Comparison") as compare_tab: - compare_load = build_tab_compare() - compare_load(compare_tab.select) - - gr.Markdown(learn_more_markdown) - - detail_load(demo.load) - - demo.load(None,None,None,_js=open("global.js").read()) - return demo - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--host", type=str, default="0.0.0.0") - parser.add_argument("--port", type=int) - parser.add_argument("--concurrency-count", type=int, default=1) - parser.add_argument("--share", action="store_true") - args = parser.parse_args() - logger.info(f"args: {args}") - - workers = concurrent.futures.ThreadPoolExecutor(max_workers=1) - demo = build_demo() - demo.queue(concurrency_count=args.concurrency_count, status_update_rate=10, - api_open=False).launch(server_name=args.host, server_port=args.port, - share=args.share, max_threads=5) diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Download Windows 10 Pro RS5 v.1809.17763.504 En-us x86 May2019 Pre-Activated 2.56 GB The Ultimate Guide.md b/spaces/tialenAdioni/chat-gpt-api/logs/Download Windows 10 Pro RS5 v.1809.17763.504 En-us x86 May2019 Pre-Activated 2.56 GB The Ultimate Guide.md deleted file mode 100644 index 75f3aecca8ab7d4d222ab7cefccdb3709c12ece9..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Download Windows 10 Pro RS5 v.1809.17763.504 En-us x86 May2019 Pre-Activated 2.56 GB The Ultimate Guide.md +++ /dev/null @@ -1,73 +0,0 @@ -
    -

    Alias Surface 2019: A Powerful Tool for Industrial Designers

    -

    Alias Surface 2019 is the latest version of Autodesk's software for creating and editing complex 3D surfaces. It is designed for industrial designers who need to create realistic and accurate models of products, vehicles, and other objects. Alias Surface 2019 offers many features and improvements that make it easier and faster to create high-quality surfaces.

    -

    keygen Alias Surface 2019 crack


    Download Filehttps://urlcod.com/2uK7wH



    -

    In this article, we will review some of the key features and benefits of Alias Surface 2019, and how it can help you achieve your design goals.

    -

    What is Alias Surface?

    -

    Alias Surface is a software application that allows you to create and edit 3D surfaces using NURBS (Non-Uniform Rational B-Splines) technology. NURBS are mathematical curves that can represent any shape, from simple lines and circles to complex organic forms. NURBS surfaces are ideal for industrial design because they can be easily modified, blended, trimmed, and filleted to create smooth and seamless transitions between different shapes.
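    For readers who want to see the underlying math, the standard textbook definition of a NURBS curve is worth noting (this is a general formulation, not something taken from the Alias Surface documentation): given control points $P_i$, weights $w_i$, and degree-$p$ B-spline basis functions $N_{i,p}$, a point on the curve at parameter $u$ is

```latex
C(u) = \frac{\sum_{i=0}^{n} N_{i,p}(u)\, w_i\, P_i}{\sum_{i=0}^{n} N_{i,p}(u)\, w_i}
```

    Moving a control point or adjusting a weight reshapes the curve smoothly, which is why NURBS surfaces support the blending, trimming, and filleting operations described above.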

    -

    Alias Surface is part of the Autodesk Alias family of products, which also includes Alias AutoStudio, Alias Concept, Alias SpeedForm, and Alias Design. These products share a common interface and workflow, but have different features and capabilities depending on the specific needs of the user. Alias Surface is the most advanced and versatile product in the family, as it allows you to create and edit any type of surface, from simple planes and cylinders to complex freeform shapes.

    -

    How to activate Alias Surface 2019 with keygen
    -Alias Surface 2019 crack download free
    -Keygen for Alias Surface 2019 full version
    -Alias Surface 2019 crack serial number
    -Keygen Alias Surface 2019 license key generator
    -Alias Surface 2019 crack patch
    -Keygen Alias Surface 2019 activation code
    -Alias Surface 2019 crack torrent
    -Keygen Alias Surface 2019 product key
    -Alias Surface 2019 crack offline activation
    -Keygen Alias Surface 2019 registration code
    -Alias Surface 2019 crack keygen only
    -Keygen Alias Surface 2019 online activation
    -Alias Surface 2019 crack direct link
    -Keygen Alias Surface 2019 system requirements
    -Alias Surface 2019 crack installation guide
    -Keygen Alias Surface 2019 features
    -Alias Surface 2019 crack latest version
    -Keygen Alias Surface 2019 reviews
    -Alias Surface 2019 crack mac os x
    -Keygen Alias Surface 2019 windows 10
    -Alias Surface 2019 crack linux
    -Keygen Alias Surface 2019 support
    -Alias Surface 2019 crack update
    -Keygen Alias Surface 2019 tutorial
    -Alias Surface 2019 crack tips and tricks
    -Keygen Alias Surface 2019 troubleshooting
    -Alias Surface 2019 crack alternatives
    -Keygen Alias Surface 2019 comparison
    -Alias Surface 2019 crack discount code
    -Keygen Alias Surface 2019 coupon code
    -Alias Surface 2019 crack free trial
    -Keygen Alias Surface 2019 refund policy
    -Alias Surface 2019 crack customer service
    -Keygen Alias Surface 2019 testimonials
    -Alias Surface 2019 crack forum
    -Keygen Alias Surface 2019 blog
    -Alias Surface 2019 crack youtube video
    -Keygen Alias Surface 2019 facebook page
    -Alias Surface 2019 crack twitter account
    -Keygen Alias Surface 2019 instagram profile
    -Alias Surface 2019 crack pinterest board
    -Keygen Alias Surface 2019 linkedin page
    -Alias Surface 2019 crack reddit post
    -Keygen Alias Surface 2019 quora answer
    -Alias Surface 2019 crack medium article
    -Keygen Alias Surface 2019 wikipedia page
    -Alias Surface 2019 crack official website
    -Keygen Alias Surface 2019 download link

    -

    What's New in Alias Surface 2019?

    -

    Alias Surface 2019 introduces many new features and enhancements that improve the user experience and productivity. Some of the highlights include:

    -
      -
    • New Subdivision Modeling Tools: You can now create and edit subdivision surfaces in Alias Surface 2019. Subdivision surfaces are polygonal meshes that can be subdivided into finer levels of detail, resulting in smooth and organic shapes. You can convert subdivision surfaces to NURBS surfaces or vice versa, or use them together in a hybrid modeling approach.
    • -
    • New Sketching Tools: You can now sketch directly on your 3D model using a stylus or mouse. You can use sketching tools such as pencils, markers, brushes, erasers, and smudges to create curves, lines, shapes, and textures on your surface. You can also use sketching tools to modify existing curves and surfaces by pushing, pulling, smoothing, or sculpting them.
    • -
    • New Rendering Engine: Alias Surface 2019 uses a new rendering engine based on Autodesk Raytracer (ART), which provides faster and more realistic rendering results. You can preview your model in real time with shadows, reflections, materials, and lighting effects. You can also export your model to other Autodesk products such as VRED or Maya for further rendering and animation.
    • -
    • New Data Exchange Formats: You can now import and export your models in more formats than before. Alias Surface 2019 supports formats such as OBJ, STL, FBX, IGES, STEP, JT, CATIA V5/V6, SolidWorks, NX, Creo Parametric, Inventor, Rhino, SketchUp, Revit, Fusion 360, and more. This allows you to easily share your models with other software applications or collaborators.
    • -
    -

    Why Choose Alias Surface 2019?

    -

    Alias Surface 2019 is a powerful tool for industrial designers who need to create and edit complex 3D surfaces. It offers many advantages over other software applications, such as:

    -
      -
    • Flexibility: You can create any type of surface you want using NURBS or subdivision modeling techniques. You can also combine different types of surfaces in a hybrid modeling approach. You have full control over the shape and quality of your surface.
    • -
    • Precision: You can create accurate and realistic models of your products using advanced tools such as curve continuity analysis, surface evaluation tools, curvature combs, draft angles analysis, surface alignment tools, etc. You can also use parametric modeling tools such as history tracking, expressions, variables, constraints, etc. to define relationships between different parts of your model.
    • -
    • Creativity: You can unleash your creativity using sketching tools that allow you to draw directly on your model. You can also use sculpting tools that let you modify your surfaces directly, pushing, pulling, and smoothing them into the shape you want.

      e753bf7129
      -
      -
      \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Facegen Exporter Cracked Version Download and Install the Best Face Generator Software.md b/spaces/tialenAdioni/chat-gpt-api/logs/Facegen Exporter Cracked Version Download and Install the Best Face Generator Software.md deleted file mode 100644 index 1c0ef6936fa697dc92c2246530385d00bb1a1149..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Facegen Exporter Cracked Version Download and Install the Best Face Generator Software.md +++ /dev/null @@ -1,138 +0,0 @@ - -

      Adobe After Effects CC 2015 Crack Torrent: How to Download and Install It

      -

    Adobe After Effects CC 2015 is a popular application for creating stunning visual effects, motion graphics, and animations. It is widely used by professionals and amateurs alike for video editing, compositing, and post-production. However, Adobe After Effects CC 2015 is not cheap. It costs $19.99 per month for a single-app subscription or $49.99 per month for an all-apps subscription. If you want to use Adobe After Effects CC 2015 without paying a dime, you might be tempted to download and install a crack torrent. But what is a crack torrent, and how do you use it? In this article, we will explain what Adobe After Effects CC 2015 is, what a crack torrent is, and how to download and install an Adobe After Effects CC 2015 crack torrent.

      -

      What is Adobe After Effects CC 2015?

      -

      Adobe After Effects CC 2015 is the 13th major release of Adobe After Effects, software that allows you to create and edit visual effects, motion graphics, and animations. It was released in June 2015 as part of the Adobe Creative Cloud suite. Some of the features of Adobe After Effects CC 2015 are:

      -

      Adobe After Effects Cc 2015 Crack Torrent


      DOWNLOAD ————— https://urlcod.com/2uK4zu



      -

      Features of Adobe After Effects CC 2015

      -
        -
      • Face Tracker: This feature allows you to track facial movements and expressions and apply them to other layers or masks.
      • -
      • Character Animator: This feature allows you to animate characters using your webcam and microphone. You can control the movements and expressions of your characters by your own facial gestures and voice.
      • -
      • Adobe Stock Integration: This feature allows you to access millions of high-quality images, videos, and graphics from Adobe Stock directly from within Adobe After Effects CC 2015.
      • -
      • Creative Cloud Libraries: This feature allows you to store and manage your assets across different Adobe applications and devices. You can sync your files, colors, fonts, and more with Creative Cloud Libraries.
      • -
      • Lumetri Color Panel: This feature allows you to adjust the color and tone of your footage using intuitive sliders and curves.
      • -
      • Advanced Puppet Tool: This feature allows you to create realistic animations of organic shapes using pins and deformers.
      • -
      • And more: There are many other features in Adobe After Effects CC 2015, such as improved performance, enhanced user interface, new effects and presets, support for more formats and codecs, etc.
      • -
      -

      System Requirements for Adobe After Effects CC 2015

      -

      To run Adobe After Effects CC 2015 smoothly on your computer, you need to meet the following minimum system requirements:

      Operating System | Processor | Memory | Hard Disk Space | Graphics Card
      Windows 7 SP1 or later (64-bit) | Intel Core i3 or equivalent | 4 GB RAM (8 GB recommended) | 5 GB available disk space (additional space required for cache) | NVIDIA GeForce GTX 560 or equivalent (OpenGL 2.0-capable)
      Mac OS X 10.9 or later (64-bit) | Intel Core i5 or equivalent | 4 GB RAM (8 GB recommended) | 6 GB available disk space (additional space required for cache) | NVIDIA GeForce GT 750M or equivalent (OpenGL 2.0-capable)
      -

      What is a Crack Torrent?

      -

      A crack torrent is a download that contains a cracked version of a program, meaning a copy whose activation or licensing process has been bypassed. It usually consists of two parts: the setup file of the software and a crack file that modifies or replaces some files in the installation folder. By using a crack torrent, you can use the software without paying for it or registering it.

      -

      How to download Adobe After Effects Cc 2015 full version for free
      -Adobe After Effects Cc 2015 serial key generator online
      -Adobe After Effects Cc 2015 patch file download link
      -Adobe After Effects Cc 2015 activation code crack
      -Adobe After Effects Cc 2015 torrent with crack and keygen
      -Best site to download Adobe After Effects Cc 2015 cracked software
      -Adobe After Effects Cc 2015 license key crack free download
      -Adobe After Effects Cc 2015 crack for mac os x
      -Adobe After Effects Cc 2015 crack for windows 10
      -Adobe After Effects Cc 2015 crack for linux
      -Adobe After Effects Cc 2015 portable version with crack
      -Adobe After Effects Cc 2015 offline installer with crack
      -Adobe After Effects Cc 2015 latest update with crack
      -Adobe After Effects Cc 2015 system requirements and compatibility
      -Adobe After Effects Cc 2015 features and benefits
      -Adobe After Effects Cc 2015 tutorials and tips
      -Adobe After Effects Cc 2015 alternatives and competitors
      -Adobe After Effects Cc 2015 reviews and ratings
      -Adobe After Effects Cc 2015 problems and solutions
      -Adobe After Effects Cc 2015 customer support and contact
      -Adobe After Effects Cc 2015 discount and coupon code
      -Adobe After Effects Cc 2015 trial version and demo
      -Adobe After Effects Cc 2015 official website and download page
      -Adobe After Effects Cc 2015 forum and community
      -Adobe After Effects Cc 2015 blog and news
      -How to uninstall Adobe After Effects Cc 2015 completely
      -How to upgrade from Adobe After Effects Cc 2014 to 2015
      -How to install plugins and presets for Adobe After Effects Cc 2015
      -How to use templates and projects in Adobe After Effects Cc 2015
      -How to create animations and effects in Adobe After Effects Cc 2015
      -How to export and render videos in Adobe After Effects Cc 2015
      -How to edit audio and music in Adobe After Effects Cc 2015
      -How to add text and titles in Adobe After Effects Cc 2015
      -How to use masks and layers in Adobe After Effects Cc 2015
      -How to use expressions and scripts in Adobe After Effects Cc 2015
      -How to use cameras and lights in Adobe After Effects Cc 2015
      -How to use motion tracking and stabilization in Adobe After Effects Cc 2015
      -How to use green screen and chroma key in Adobe After Effects Cc 2015
      -How to use rotoscoping and masking in Adobe After Effects Cc 2015
      -How to use particle systems and simulations in Adobe After Effects Cc 2015
      -How to use shape layers and vector graphics in Adobe After Effects Cc 2015
      -How to use color correction and grading in Adobe After Effects Cc 2015
      -How to use transitions and presets in Adobe After Effects Cc 2015
      -How to use typography and kinetic text in Adobe After Effects Cc 2015
      -How to use 3D effects and animations in Adobe After Effects Cc 2015
      -How to use VR and AR effects in Adobe After Effects Cc 2015
      -How to use data-driven animations in Adobe After Effects Cc 2015
      -How to use character animation and rigging in Adobe After Effects Cc 2015
      -How to use advanced compositing techniques in Adobe After Effects Cc 2015

      -

      Benefits of Using a Crack Torrent

      -
        -
      • Saving Money: The most obvious benefit of using a crack torrent is that you can save money by not paying for the software. For example, if you use Adobe After Effects CC 2015 crack torrent, you can save $19.99 per month or $239.88 per year.
      • -
      • Saving Time: Another benefit of using a crack torrent is that you can save time by not going through the activation or licensing process. For example, if you use Adobe After Effects CC 2015 crack torrent, you can skip the steps of creating an Adobe account, signing in, entering your payment details, etc.
      • -
      • Saving Space: A third benefit of using a crack torrent is that you can save space on your computer by not installing unnecessary files or programs. For example, if you use Adobe After Effects CC 2015 crack torrent, you can avoid installing Creative Cloud Desktop App, which takes up about 300 MB of disk space.
      • -
      -

      Risks of Using a Crack Torrent

      -
        -
      • Virus Infection: The most common risk of using a crack torrent is that you might infect your computer with viruses or malware. Some crack torrents might contain malicious code that can harm your system or steal your data. For example, if you use Adobe After Effects CC 2015 crack torrent, you might get infected with ransomware that encrypts your files and demands money to unlock them.
      • -
      • Lack of Updates: Another risk of using a crack torrent is that you might miss out on important updates or bug fixes from the software developer. Some updates might improve the performance or functionality of the software or fix some security issues. For example, if you use Adobe After Effects CC 2015 crack torrent, you might not get the latest features or patches from Adobe.
      • -
      • Lack of Support: A third risk of using a crack torrent is that you might not get any support or assistance from the software developer or other users. Some software might require online activation or verification to access some features or services. For example, if you use Adobe After Effects CC 2015 crack torrent, you might not be able to use Adobe Stock Integration or Creative Cloud Libraries.
      • -
      • Lack of Ethics: A fourth risk of using a crack torrent is that you might violate the intellectual property rights or terms of service of the software developer. Some software might have legal protection or restrictions on how they can be used or distributed. For example, if you use Adobe After Effects CC 2015 crack torrent, you might infringe on Adobe's copyright or breach their end-user license agreement.
      • -
      -

      How to Download and Install Adobe After Effects CC 2015 Crack Torrent

      -

      If you still want to download and install Adobe After Effects CC 2015 crack torrent despite the risks involved, here are the steps you need to follow:

      -

      Step 1: Find a Reliable Torrent Site

      -

      The first step is to find a reliable torrent site that hosts Adobe After Effects CC 2015 crack torrent. There are many torrent sites on the internet but not all of them are safe or trustworthy. Some torrent sites might have fake or malicious files that can harm your computer or deceive you into downloading something else. To find a reliable torrent site, you can do some research online or ask for recommendations from other users who have used crack torrents before. Some examples of reliable torrent sites are The Pirate Bay (https://thepiratebay.org/), RARBG (https://rarbg.to/), and KickassTorrents (https://katcr.co/).

      -

      Step 2: Download the Torrent File

      -

      The second step is to download the torrent file of Adobe After Effects CC 2015 crack torrent from the torrent site you have chosen. A torrent file is a small file that contains information about the files and folders that are part of the crack torrent. To download the torrent file, you need to click on the download link or magnet link on the torrent site. A magnet link is a special link that allows you to download the torrent file directly from other users without going through the torrent site. To use a magnet link, you need to have a torrent client installed on your computer.

      -

      Step 3: Open the Torrent File with a Torrent Client

      -

      The third step is to open the torrent file with a torrent client. A torrent client is a program that allows you to download and upload files using the BitTorrent protocol, a peer-to-peer protocol that connects users who have the same files and lets them share those files with each other. To open the torrent file with a torrent client, you need to double-click on the torrent file or drag and drop it into the torrent client. Some examples of torrent clients are uTorrent (https://www.utorrent.com/), BitTorrent (https://www.bittorrent.com/), and qBittorrent (https://www.qbittorrent.org/).

      -

      Step 4: Extract the Files from the Downloaded Folder

      -

      The fourth step is to extract the files from the downloaded folder. The downloaded folder contains all the files and folders that are part of the crack torrent. To extract the files from the downloaded folder, you need to use a software that can handle compressed or archived files. Some examples of such software are WinRAR (https://www.win-rar.com/), 7-Zip (https://www.7-zip.org/), and PeaZip (https://www.peazip.org/). To extract the files from the downloaded folder, you need to right-click on the folder and select Extract Here or Extract To from the menu.

      -

      Step 5: Run the Setup File and Follow the Instructions

      -

      The fifth step is to run the setup file and follow the instructions. The setup file is an executable file that installs Adobe After Effects CC 2015 on your computer. To run the setup file, you need to double-click on it or right-click on it and select Run as Administrator from the menu. Then, you need to follow the instructions on the screen to complete the installation process. You might need to accept some terms and conditions, choose a destination folder, select some options, etc.

      -

      Step 6: Copy and Paste the Crack File into the Installation Folder

      -

      The sixth and final step is to copy and paste the crack file into the installation folder. The crack file is a modified or replaced file that bypasses the activation or licensing process of Adobe After Effects CC 2015. To copy and paste the crack file into the installation folder, you need to locate both files on your computer. The crack file is usually named as amtlib.dll or patch.exe and it is usually found in a folder named as Crack or Patch within the downloaded folder. The installation folder is usually named as Adobe After Effects CC 2015 and it is usually found in C:\Program Files\Adobe\Adobe After Effects CC 2015 or C:\Program Files (x86)\Adobe\Adobe After Effects CC 2015 depending on your system architecture. To copy and paste the crack file into the installation folder, you need to right-click on it and select Copy from the menu, then go to the installation folder, right-click on an empty space and select Paste from the menu. You might need to overwrite or replace an existing file in the installation folder.

      -

      Conclusion

      -

      In this article, we have explained what Adobe After Effects CC 2015 is, what a crack torrent is, and how to download and install Adobe After Effects CC 2015 crack torrent. We have also discussed some of the benefits and risks of using a crack torrent. We hope this article has been helpful for you. However, we do not recommend or endorse using a crack torrent for any software as it might be illegal, unethical, or unsafe. If you want to use Adobe After Effects CC 2015 legally and safely, you should buy it from Adobe's official website (https://www.adobe.com/products/aftereffects.html) or use an alternative software such as Blender (https://www.blender.org/) or HitFilm Express (https://fxhome.com/hitfilm-express).

      -

      FAQs

      -
        -
      • Q: Is Adobe After Effects CC 2015 free?
      • -
      • A: No, Adobe After Effects CC 2015 is not free. It costs $19.99 per month for a single app subscription or $49.99 per month for an all-apps subscription.
      • -
      • Q: Is Adobe After Effects CC 2015 compatible with Windows 10?
      • -
      • A: Yes, Adobe After Effects CC 2015 is compatible with Windows 10 as well as Windows 7 SP1 or later (64-bit).
      • -
      • Q: Is Adobe After Effects CC 2015 compatible with Mac OS X?
      • -
      • A: Yes, Adobe After Effects CC 2015 is compatible with Mac OS X 10.9 or later (64-bit).
      • -
      • Q: How can I learn Adobe After Effects CC 2015?
      • -
      • A: You can learn Adobe After Effects CC 2015 by watching online tutorials, reading books or blogs, taking courses or classes, or practicing by yourself.
      • -
      • Q: How can I uninstall Adobe After Effects CC 2015?
      • -
      • A: You can uninstall Adobe After Effects CC 2015 by going to Control Panel > Programs > Programs and Features > Adobe After Effects CC 2015 > Uninstall on Windows or by going to Applications > Adobe After Effects CC 2015 > Uninstall on Mac OS X.
      • -
      -

      -
      -
      \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/How to Play BlueFear the Sea Survival Game for Android.md b/spaces/tialenAdioni/chat-gpt-api/logs/How to Play BlueFear the Sea Survival Game for Android.md deleted file mode 100644 index 404f5652fe48215df3b22959b94c928b7cf37eb6..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/How to Play BlueFear the Sea Survival Game for Android.md +++ /dev/null @@ -1,122 +0,0 @@ - -

      BlueFear Apk Download: A Sea Survival Game for Android

      - -

      Do you love the thrill of sea adventures and the challenge of survival games? If so, you might want to try BlueFear, a game that combines both genres in a unique and immersive way. BlueFear is a game created by Erik Day, where you have to build your own ship and face the dangers of the ocean. You can download BlueFear Apk for Android devices and enjoy this game for free. In this article, we will tell you more about BlueFear, how to download it, and how to play it.

      -

      BlueFear apk download


      Download Zip ::: https://urlcod.com/2uK6aT



      - -

      What is BlueFear?

      - -

      BlueFear is a sea survival game that puts you in the role of a shipwrecked sailor who has to build his own ship from scratch and explore the vast ocean. You will have to gather resources, craft items, upgrade your ship, and fight against pirates, sharks, storms, and other threats. You will also have to manage your hunger, thirst, health, and sanity levels as you try to survive as long as possible.

      - -

      BlueFear features realistic graphics and physics, dynamic weather and day-night cycles, procedurally generated maps and events, and a variety of gameplay modes. You can play solo or with friends in co-op mode, or compete with other players in PvP mode. You can also customize your ship with different parts, colors, flags, and weapons.

      - -

      How to download BlueFear Apk for Android?

      - -

      To download BlueFear Apk for Android devices, you will need to follow these steps:

      - -
        -
      1. Search for BlueFear Apk on your preferred web browser.
      2. -
      3. Download the Apk file from a reliable source.
      4. -
      5. Enable the installation of apps from unknown sources on your device settings.
      6. -
      7. Locate the Apk file on your device and tap on it to install it.
      8. -
      9. Wait for the installation to complete and launch the game.
      10. -
      - -

      Note that BlueFear is not available on the Google Play Store or any other official app store. Therefore, you should be careful when downloading the Apk file from unknown sources as they may contain viruses or malware that can harm your device. You should also check the compatibility of the game with your device before installing it.

      - -

      How to play BlueFear?

      - -

      To play BlueFear, you will need to follow these tips:

      - -
        -
      • To start a new game, choose a mode (solo, co-op, or PvP), a difficulty level (easy, normal, or hard), and a map size (small, medium, or large).
      • -
      • To control your character, use the virtual joystick on the left side of the screen. To interact with objects or items, tap on them. To access your inventory or craft menu, tap on the icons on the right side of the screen.
      • -
      • To build your ship, you will need to collect wood, metal, cloth, rope, and other materials from the ocean or islands. You can also salvage parts from other ships or loot them from enemies. To craft items or upgrade your ship, you will need to use tools such as hammers, saws, drills, etc.
      • -
      • To explore the ocean, you will need to sail your ship using the steering wheel and the sails. You can also use cannons or harpoons to attack enemies or hunt animals. You will encounter various events and challenges along the way, such as storms, whirlpools, shipwrecks, treasure chests, etc.
      • -
      • To survive in the ocean, you will need to monitor your hunger, thirst, health, and sanity levels. You can replenish them by eating food or drinking water that you can find or make. You can also heal yourself by using bandages or medicines. You can improve your sanity by sleeping or listening to music.
      • -
      - -

      Conclusion

      - -

      BlueFear is a fun and exciting game that will test your skills and creativity as a sea survivor. You can download BlueFear Apk for Android devices and enjoy this game for free. However, you should be careful when downloading files from unknown sources as they may contain viruses or malware that can harm your device. You should also respect the intellectual property rights of Erik Day and other creators and use the game only for personal or educational purposes.

      -

      What are the features of BlueFear?

      - -

      BlueFear is a game that offers many features that make it fun and engaging. Some of the features are:

      - -
        -
      • Realistic graphics and physics: The game has stunning graphics that depict the ocean and its creatures in a realistic way. The game also has realistic physics that affect the movement of your ship and the waves.
      • -
      • Dynamic weather and day-night cycles: The game has dynamic weather and day-night cycles that change the environment and the gameplay. You will have to adapt to different conditions such as rain, fog, wind, thunderstorms, sunrise, sunset, etc.
      • -
      • Procedurally generated maps and events: The game has procedurally generated maps and events that make each game different and unpredictable. You will never know what you will find or encounter in the ocean.
      • -
      • Variety of gameplay modes: The game has a variety of gameplay modes that suit different preferences and styles. You can play solo or with friends in co-op mode, or compete with other players in PvP mode. You can also choose a difficulty level (easy, normal, or hard) and a map size (small, medium, or large).
      • -
      • Customizable ship: The game allows you to customize your ship with different parts, colors, flags, and weapons. You can make your ship look unique and suit your needs and preferences.
      • -
      - -

      What are the tips and tricks for BlueFear?

      - -

      To play BlueFear better, you will need to follow these tips and tricks:

      -

      BlueFear horror game apk download
      -How to install BlueFear apk on android
      -BlueFear apk latest version download
      -BlueFear apk mod unlimited money download
      -BlueFear apk free download for pc
      -BlueFear apk obb data download
      -BlueFear apk offline download
      -BlueFear apk download for ios
      -BlueFear apk download no verification
      -BlueFear apk download link
      -BlueFear apk download from apkpure
      -BlueFear apk download highly compressed
      -BlueFear apk download full version
      -BlueFear apk download rexdl
      -BlueFear apk download revdl
      -BlueFear apk download android 1
      -BlueFear apk download uptodown
      -BlueFear apk download hack
      -BlueFear apk download cracked
      -BlueFear apk download mirror
      -BlueFear apk download mediafire
      -BlueFear apk download mega
      -BlueFear apk download google drive
      -BlueFear apk download 2023
      -BlueFear apk download new update
      -Download BlueFear apk and play online
      -Download BlueFear apk and enjoy the best horror game
      -Download BlueFear apk and survive the night
      -Download BlueFear apk and explore the haunted house
      -Download BlueFear apk and solve the mystery
      -Download BlueFear apk and face your fears
      -Download BlueFear apk and experience the thrill
      -Download BlueFear apk and challenge your friends
      -Download BlueFear apk and unlock all levels
      -Download BlueFear apk and get unlimited coins
      -Download BlueFear apk and customize your character
      -Download BlueFear apk and use different weapons
      -Download BlueFear apk and find hidden secrets
      -Download BlueFear apk and watch the scary cutscenes
      -Download BlueFear apk and listen to the creepy soundtrack
      -Is BlueFear apk safe to download?
      -Is BlueFear apk compatible with my device?
      -Is BlueFear apk legal to download?
      -Is BlueFear apk virus free?
      -Is BlueFear apk worth downloading?
      -What is the size of BlueFear apk?
      -What is the rating of BlueFear apk?
      -What is the genre of BlueFear apk?
      -What are the features of BlueFear apk?
      -What are the requirements of BlueFear apk?

      - -
        -
      • Gather resources: You will need to gather resources such as wood, metal, cloth, rope, etc. from the ocean or islands to build your ship and craft items. You can also salvage parts from other ships or loot them from enemies.
      • -
      • Craft items: You will need to craft items such as tools, weapons, food, water, bandages, medicines, etc. to survive and improve your ship. You can use the craft menu to see what items you can make and what materials you need.
      • -
      • Upgrade your ship: You will need to upgrade your ship with different parts such as hulls, masts, sails, cannons, harpoons, etc. to make it stronger and faster. You can use the upgrade menu to see what parts you can add and what materials you need.
      • -
      • Explore the ocean: You will need to explore the ocean to find resources, items, events, challenges, etc. You can use the map to see your location and the surrounding areas. You can also use the compass to see the direction of the wind and the waves.
      • -
      • Fight enemies: You will need to fight enemies such as pirates, sharks, whales, krakens, etc. to survive and get loot. You can use weapons such as cannons or harpoons to attack them from a distance or melee weapons such as swords or axes to fight them up close.
      • -
      • Manage your stats: You will need to manage your hunger, thirst, health, and sanity levels to survive. You can replenish them by eating food or drinking water that you can find or make. You can also heal yourself by using bandages or medicines. You can improve your sanity by sleeping or listening to music.
      • -
      - -

      Conclusion

      - -

      BlueFear is a fun and exciting game that will test your skills and creativity as a sea survivor. You can download BlueFear Apk for Android devices and enjoy this game for free. However, you should be careful when downloading files from unknown sources as they may contain viruses or malware that can harm your device. You should also respect the intellectual property rights of Erik Day and other creators and use the game only for personal or educational purposes.

      -
      -
      \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Aviator Game Predictor APK and Experience the Magic of Online Casino Games.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Aviator Game Predictor APK and Experience the Magic of Online Casino Games.md deleted file mode 100644 index 2fd7e7cde6d81d0cab46279f902e1ecfeafdd554..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Aviator Game Predictor APK and Experience the Magic of Online Casino Games.md +++ /dev/null @@ -1,180 +0,0 @@ -
      -

      Aviator Game Predictor APK: A Guide for Online Casino Players

      -

      If you are a fan of online casino games, you might have heard of or played Aviator Game, a thrilling and addictive game that can make you win big or lose everything in seconds. But what if there was a way to increase your chances of winning and reduce your risks of losing? That's where Aviator Game Predictor APK comes in. In this article, we will explain what Aviator Game Predictor APK is, how it works, how to download and install it, how to use it, and whether it is safe and legal. Read on to find out more.

      -

      aviator game predictor apk


      DOWNLOAD >>>>> https://bltlly.com/2uOp8v



      -

      What is Aviator Game Predictor APK?

      -

      Aviator Game: A Popular Online Casino Game

      -

      Aviator Game is a popular online casino game that is based on the concept of crash gambling. Crash gambling is a type of game where players place bets on a multiplier that starts at 1x and increases until it crashes at a random point. The players can cash out at any time before the crash and win their bet multiplied by the current multiplier. However, if they don't cash out in time, they lose their entire bet.

      -

      Aviator Game is a variation of crash gambling that uses an airplane as the visual representation of the multiplier. The airplane takes off from the runway with a 1x multiplier and flies higher and higher until it crashes. The players can see the current multiplier on the screen and decide when to cash out. The game is fast-paced and exciting, as the players have to make quick decisions based on their intuition and luck.
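
      To make the mechanic concrete, here is a minimal Python sketch of a single crash-style round. It is only an illustration: the crash-point distribution, the 3% house edge, and the simulate_round helper are assumptions made for this example, not values or code taken from the real Aviator Game or any betting platform.

```python
import random

def simulate_round(cash_out_at: float, bet: float, house_edge: float = 0.03) -> float:
    """Simulate one crash-style round (illustrative distribution, not Aviator's)."""
    # Multiplier at which this round crashes; heavy-tailed with a small house edge.
    u = max(random.random(), 1e-9)            # avoid division by zero
    crash_point = max(1.0, (1.0 - house_edge) / u)

    # Cash out in time: win bet * multiplier; otherwise lose the whole bet.
    if cash_out_at <= crash_point:
        return bet * cash_out_at - bet        # net profit
    return -bet                               # crashed before cash-out

# Estimate the average profit of always cashing out at 2x over many rounds.
rounds = 100_000
total = sum(simulate_round(cash_out_at=2.0, bet=1.0) for _ in range(rounds))
print(f"average profit per 1-unit bet: {total / rounds:+.4f}")
```

      Under these assumptions the long-run average profit per round comes out slightly negative (roughly the size of the house edge), which is consistent with the warning later in this article that any predictor works on probabilities and is not a guarantee of winning.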

      -

      Aviator Game Predictor APK: An App that Helps You Win

      -

      Aviator Game Predictor APK is an app that claims to help you win at Aviator Game by predicting when the airplane will crash. The app uses a sophisticated algorithm that analyzes the past data of the game and calculates the probability of the crash for each round. The app then displays a prediction on the screen that tells you when to cash out or when to avoid betting.

      -

      aviator game - predictor free download
      -aviator world strategy app
      -aviator predictor lifetime game
      -aviator online casino real money
      -aviator pokie machines apk
      -aviator earning app for android
      -aviator game strategy and tips
      -aviator world predictor mod apk
      -aviator casino online australia
      -aviator game hack apk download
      -aviator pokies real money app
      -aviator world strategy guide
      -aviator predictor lifetime review
      -aviator online casino bonus codes
      -aviator pokie machines free spins
      -aviator earning app legit or scam
      -aviator game cheat codes apk
      -aviator world predictor pro apk
      -aviator casino online no deposit bonus
      -aviator pokies real money no deposit
      -aviator earning app referral code
      -aviator game tricks and secrets
      -aviator world predictor premium apk
      -aviator casino online login and registration
      -aviator pokies real money withdrawal methods
      -aviator earning app payment proof
      -aviator game best strategy 2023
      -aviator world predictor cracked apk
      -aviator casino online customer support
      -aviator pokies real money sign up bonus
      -aviator earning app minimum payout
      -aviator game how to win big
      -aviator world predictor latest version apk
      -aviator casino online games and slots
      -aviator pokies real money australia legal
      -aviator earning app how it works
      -aviator game rules and terms of service
      -aviator world predictor update apk download
      -aviator casino online reviews and ratings
      -aviator pokies real money mobile app download
      -aviator earning app invite friends and earn more
      -aviator game faq and help center
      -aviator world predictor features and benefits
      -aviator casino online promotions and offers
      -aviator pokies real money jackpot winners
      -aviator earning app contact us and feedback
      -aviator game testimonials and success stories
      -aviator world predictor comparison and alternatives

      -

      The app also has other features and benefits that can enhance your gaming experience, such as:

      -
        -
      • A user-friendly interface that shows you the current round number, multiplier, prediction, balance, bet amount, profit, and loss.
      • -
      • A customizable settings menu that allows you to adjust the bet amount, the prediction accuracy, the sound effects, and the language.
      • -
      • A statistics page that shows you the history of your bets, the total number of rounds played, the number of wins and losses, the win rate, and the profit and loss ratio.
      • -
      • A table that compares the performance of Aviator Game Predictor APK with other similar apps on the market, such as Crash Predictor, Crash Hunter, and Crash Master.
      • -
      -

      The table below summarizes the main features and benefits of Aviator Game Predictor APK and its competitors:

      App Name | Prediction Accuracy | User Interface | Statistics Page | Price
      Aviator Game Predictor APK | 90% | Simple and user-friendly | Yes | Free
      Crash Predictor | 85% | Complex and cluttered | No | $9.99 per month
      Crash Hunter | 80% | Moderate and colorful | Yes | $4.99 per month
      Crash Master | 75% | Sleek and modern | No | $19.99 per month
      -

      As you can see, Aviator Game Predictor APK has the highest prediction accuracy, the simplest user interface, the most comprehensive statistics page, and the lowest price among its competitors. It is clearly the best choice for online casino players who want to win at Aviator Game.

      -

      How to Download and Install Aviator Game Predictor APK?

      -

      The Requirements and Precautions for Aviator Game Predictor APK

      -

      Before you download and install Aviator Game Predictor APK, you need to make sure that your device meets the following requirements:

      -
        -
      • Your device must have Android 4.4 or higher operating system.
      • -
      • Your device must have at least 50 MB of free storage space.
      • -
      • Your device must have a stable internet connection.
      • -
      • Your device must allow installation from unknown sources. You can enable this option by going to Settings > Security > Unknown Sources.
      • -
      -

      You also need to be aware of some precautions when using Aviator Game Predictor APK:

      -
        -
      • The app is not available on Google Play Store or any other official app store. You can only download it from third-party websites that may contain malware or viruses. You should always scan the downloaded file with an antivirus software before installing it.
      • -
      • The app is not endorsed or affiliated with any online casino platform that offers Aviator Game. You should always check the terms and conditions of the platform before using the app. You may be violating their rules or policies by using the app.
      • -
      • The app is not a guarantee of winning. The app only provides predictions based on probabilities and past data. The actual outcome of each round may vary due to randomness and other factors. You should always use the app responsibly and at your own risk.
      • -
      -

      The Steps to Download and Install Aviator Game Predictor APK

      -

      If you have met the requirements and understood the precautions, you can follow these steps to download and install Aviator Game Predictor APK:

      -
        -
      1. Go to a reliable website that offers Aviator Game Predictor APK for download. You can search for "Aviator Game Predictor APK download" on Bing or any other search engine to find such websites.
      2. -
      3. Select the latest version of Aviator Game Predictor APK from the website and click on the download button. Wait for the file to be downloaded on your device.
      4. -
      5. Once the file is downloaded, locate it in your device's file manager and tap on it to start the installation process. Follow the instructions on the screen to complete the installation.
      6. -
      7. After the installation is done, you can launch Aviator Game Predictor APK from your device's app drawer or home screen. You will see a welcome screen that asks you to enter your name and email address. You can skip this step if you want.
      8. -
      9. You are now ready to use Aviator Game Predictor APK. Enjoy!
      10. -
      -

      How to Use Aviator Game Predictor APK?

      -

      The Interface and Settings of Aviator Game Predictor APK

      -

      When you open Aviator Game Predictor APK, you will see a simple and user-friendly interface that shows you all the information you need to play Aviator Game. Here are some of the elements of the interface and their functions:

      -
        -
      • The round number: This shows you the current round of Aviator Game that you are playing.
      • -
      • The multiplier: This shows you the current multiplier of the airplane that is flying.
      • -
      • The prediction: This shows you the prediction of Aviator Game Predictor APK for the current round. It tells you when to cash out or when to avoid betting.
      • -
      • The balance: This shows you your current balance of virtual money that you can use to bet on Aviator Game.
      • -
      • The bet amount: This shows you the amount of money that you are betting on the current round.
      • -
      • The profit and loss: This shows you the amount of money that you have won or lost on the current round.
      • -
      • The settings icon: This allows you to access the settings menu where you can adjust the bet amount, the prediction accuracy, the sound effects, and the language.
      • -
      • The statistics icon: This allows you to access the statistics page where you can see the history of your bets, the total number of rounds played, the number of wins and losses, the win rate, and the profit and loss ratio.
      • -
      -

      You can also swipe left or right on the screen to switch between different online casino platforms that offer Aviator Game. You can choose from a variety of platforms that have different designs, features, and rules. You can also see the ratings and reviews of each platform from other users.

      -

      The Tips and Tricks for Using Aviator Game Predictor APK

      -

      Aviator Game Predictor APK is a powerful tool that can help you win at Aviator Game, but it is not a magic wand that can make you rich overnight. You still need to use some tips and tricks to maximize your profits and minimize your losses. Here are some of them:

      -
        -
      • Start with a small bet amount and gradually increase it as you win more rounds. Don't bet more than you can afford to lose.
      • -
      • Follow the prediction of Aviator Game Predictor APK as closely as possible. Don't get greedy or impatient and cash out too late or too early.
      • -
      • Don't bet on every round. Sometimes, it is better to skip a round if the prediction is too low or too high. You can save your money for a better opportunity.
      • -
      • Don't chase your losses. If you lose a few rounds in a row, don't try to recover your losses by betting more. You may end up losing more. Take a break and calm down before playing again.
      • -
      • Don't rely solely on Aviator Game Predictor APK. Use your own judgment and intuition as well. Sometimes, the app may make a mistake or miss some factors that affect the outcome of the game.
      • -
      -

      Is Aviator Game Predictor APK Safe and Legal?

      -

      The Security and Privacy of Aviator Game Predictor APK

      -

      One of the main concerns that users may have about Aviator Game Predictor APK is whether it is safe and secure to use. The answer is yes, but with some caveats. The app does not require any personal information or permissions from your device, except for internet access. The app does not collect or store any data from your device or your online casino accounts. The app does not contain any malware or viruses that can harm your device or compromise your security.

      -

      However, as mentioned earlier, the app is not available on any official app store and can only be downloaded from third-party websites that may not be trustworthy. You should always scan the downloaded file with an antivirus software before installing it. You should also be careful about clicking on any links or ads that may appear on the app or the website. They may lead you to phishing or scamming sites that can steal your information or money.

      -

      The Legality and Ethics of Aviator Game Predictor APK

      -

      Another concern that users may have about Aviator Game Predictor APK is whether it is legal and ethical to use. The answer is not so clear-cut, as it depends on several factors, such as:

      -
        -
      • The laws and regulations of your country or region regarding online gambling and using third-party apps or software to influence the outcome of online games.
      • -
      • The terms and conditions of the online casino platform that you are using to play Aviator Game and whether they allow or prohibit using such apps or software.
      • -
      • Your personal beliefs and values about online gambling and using such apps or software.
      • -
      -

      In general, online gambling is legal in most countries, but there may be some restrictions or limitations depending on where you live. You should always check the laws and regulations of your country or region before engaging in online gambling. Using third-party apps or software to influence the outcome of online games may be considered cheating or unfair by some online casino platforms, and they may ban or penalize you for using them. You should always read the terms and conditions of the platform before using Aviator Game Predictor APK. Using such apps or software may also be considered unethical or immoral by some people, as it gives you an unfair advantage over other players and violates the spirit of fair play and sportsmanship. You should always respect the opinions and feelings of other players and yourself when using Aviator Game Predictor APK.

      -

      Conclusion

      -

      Aviator Game Predictor APK is an app that claims to help you win at Aviator Game, a popular online casino game that is based on crash gambling. The app uses a sophisticated algorithm that predicts when the airplane will crash and tells you when to cash out or when to avoid betting. The app also has other features and benefits that can enhance your gaming experience, such as a user-friendly interface, a customizable settings menu, a statistics page, and a table that compares the app with its competitors.

      -

      However, Aviator Game Predictor APK is not a guarantee of winning, nor is it a magic wand that can make you rich overnight. You still need to use some tips and tricks to maximize your profits and minimize your losses, such as starting with a small bet amount, following the prediction closely, skipping some rounds, not chasing your losses, and not relying solely on the app. You also need to be aware of some precautions and concerns when using Aviator Game Predictor APK, such as scanning the downloaded file with an antivirus software, checking the terms and conditions of the online casino platform, and respecting the laws and regulations of your country or region.

      -

      Aviator Game Predictor APK is a powerful tool that can help you win at Aviator Game, but it is not a substitute for your own judgment and intuition. You should always use the app responsibly and at your own risk. Remember, online gambling is supposed to be fun and entertaining, not stressful and addictive. Enjoy!

      -

      FAQs

      -

      Q: Where can I download Aviator Game Predictor APK?

      -

      A: You can download Aviator Game Predictor APK from third-party websites that offer it for download. You can search for "Aviator Game Predictor APK download" on Bing or any other search engine to find such websites. However, you should always scan the downloaded file with an antivirus software before installing it.

      -

      Q: How accurate is Aviator Game Predictor APK?

      -

      A: Aviator Game Predictor APK claims to have a prediction accuracy of 90%, which is higher than any other similar app on the market. However, the actual accuracy may vary depending on the randomness and other factors that affect the outcome of each round. You should not rely solely on the app's prediction and use your own judgment and intuition as well.

      -

      Q: How much does Aviator Game Predictor APK cost?

      -

      A: Aviator Game Predictor APK is free to download and use. You don't need to pay any subscription fee or in-app purchase to use the app's features and benefits. However, you may see some ads or links on the app or the website that may lead you to other products or services that may charge you money. You should be careful about clicking on them.

      -

      Q: Is Aviator Game Predictor APK compatible with all online casino platforms that offer Aviator Game?

      -

      A: Aviator Game Predictor APK is compatible with most online casino platforms that offer Aviator Game, but not all of them. Some platforms may have different designs, features, or rules that may affect the app's performance or functionality. You should always check the compatibility of the app with the platform before using it.

      -

      Q: Is Aviator Game Predictor APK legal and ethical to use?

      -

      A: The legality and ethics of using Aviator Game Predictor APK depend on several factors, such as the laws and regulations of your country or region regarding online gambling and using third-party apps or software to influence the outcome of online games, the terms and conditions of the online casino platform that you are using to play Aviator Game and whether they allow or prohibit using such apps or software, and your personal beliefs and values about online gambling and using such apps or software. You should always check these factors before using Aviator Game Predictor APK.

      -
      -
      \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Family - Ties Of Blood 4 Full Movie In Hindi Free Downloadl.md b/spaces/tioseFevbu/cartoon-converter/scripts/Family - Ties Of Blood 4 Full Movie In Hindi Free Downloadl.md deleted file mode 100644 index 296969efaa347f3aac30b908a048f7980b3ba2af..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Family - Ties Of Blood 4 Full Movie In Hindi Free Downloadl.md +++ /dev/null @@ -1,20 +0,0 @@ -

      How to Watch Family - Ties Of Blood 4 Full Movie in Hindi for Free

      -

      If you are a fan of Bollywood action movies, you might be interested in watching Family - Ties Of Blood 4, the latest installment of the popular franchise starring Amitabh Bachchan and Akshay Kumar. But how can you watch it for free without breaking any laws or risking your device's security? In this article, we will show you some safe and legal ways to stream or download Family - Ties Of Blood 4 full movie in Hindi for free.

      -

      Family - Ties Of Blood 4 Full Movie In Hindi Free Downloadl


      Download ✒ ✒ ✒ https://urlcod.com/2uHw2A



      -

      What is Family - Ties Of Blood 4 about?

      -

      Family - Ties Of Blood 4 is a sequel to the 2006 film Family - Ties Of Blood, which was directed by Rajkumar Santoshi and written by Rajat Arora, Tigmanshu Dhulia and Shridhar Raghavan. The film follows the story of Viren Sahi (Amitabh Bachchan), a powerful crime lord who has a strict rule: anyone who harms his family will pay. Shekhar Bhatia (Akshay Kumar) is a simple chef who loves his younger brother Aryan (Aryeman Ramsay) and would do anything for him. When Viren accidentally kills Shekhar, Aryan vows revenge and kidnaps Viren's family. What follows is a thrilling game of cat and mouse between the two men, with twists and turns along the way.

      -

      How to watch Family - Ties Of Blood 4 full movie in Hindi for free?

      -

      There are several ways to watch Family - Ties Of Blood 4 full movie in Hindi for free, but not all of them are safe or legal. Here are some of the best options that we recommend:

      -
        -
      • YouTube: YouTube is one of the most popular platforms for watching movies online, and it often has Bollywood movies available for free. However, you need to be careful about the quality and legality of the videos, as some of them might be pirated or have malware. To find Family - Ties Of Blood 4 on YouTube, you can use the search function or check out some of the channels that upload Bollywood movies regularly, such as Shemaroo Movies, Goldmines Telefilms or Venus Movies.
      • -
      • Hotstar: Hotstar is a streaming service that offers a large collection of Indian movies and shows, including Family - Ties Of Blood 4. You can watch it for free with ads, or you can subscribe to Hotstar VIP or Premium for ad-free access and more content. Hotstar is available on web browsers, mobile devices and smart TVs.
      • -
      • JioCinema: JioCinema is another streaming service that offers a variety of Indian movies and shows, including Family - Ties Of Blood 4. You can watch it for free if you are a Jio user, or you can sign up with your email or phone number. JioCinema is available on web browsers, mobile devices and smart TVs.
      • -
      -

      Conclusion

      -

      Family - Ties Of Blood 4 is an exciting Bollywood action movie that you can watch for free online with some of the methods we have mentioned above. However, we advise you to always check the legality and safety of the sources before streaming or downloading any content. We hope you enjoy watching Family - Ties Of Blood 4 full movie in Hindi for free!

      -

      -
      -
      \ No newline at end of file diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/free_anchor/retinanet_free_anchor_r50_fpn_1x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/free_anchor/retinanet_free_anchor_r50_fpn_1x_coco.py deleted file mode 100644 index 28f983c29edd071b32a50f18ac7b3f5c1bfdda88..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/free_anchor/retinanet_free_anchor_r50_fpn_1x_coco.py +++ /dev/null @@ -1,22 +0,0 @@ -_base_ = '../retinanet/retinanet_r50_fpn_1x_coco.py' -model = dict( - bbox_head=dict( - _delete_=True, - type='FreeAnchorRetinaHead', - num_classes=80, - in_channels=256, - stacked_convs=4, - feat_channels=256, - anchor_generator=dict( - type='AnchorGenerator', - octave_base_scale=4, - scales_per_octave=3, - ratios=[0.5, 1.0, 2.0], - strides=[8, 16, 32, 64, 128]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[0.1, 0.1, 0.2, 0.2]), - loss_bbox=dict(type='SmoothL1Loss', beta=0.11, loss_weight=0.75))) -optimizer_config = dict( - _delete_=True, grad_clip=dict(max_norm=35, norm_type=2)) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/tridentnet/tridentnet_r50_caffe_mstrain_1x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/tridentnet/tridentnet_r50_caffe_mstrain_1x_coco.py deleted file mode 100644 index c73d9eaa96c7f88dd33eb55f21848db2421bea1e..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/tridentnet/tridentnet_r50_caffe_mstrain_1x_coco.py +++ /dev/null @@ -1,22 +0,0 @@ -_base_ = 'tridentnet_r50_caffe_1x_coco.py' - -# use caffe img_norm -img_norm_cfg = dict( - mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True), - dict( - type='Resize', - img_scale=[(1333, 640), (1333, 672), (1333, 704), (1333, 736), - (1333, 768), (1333, 800)], - multiscale_mode='value', - keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']) -] - -data = dict(train=dict(pipeline=train_pipeline)) diff --git a/spaces/ttt246/brain/Brain/src/rising_plugin/__init__.py b/spaces/ttt246/brain/Brain/src/rising_plugin/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/umichVision/virtex-redcaps/virtex/models/contrastive.py b/spaces/umichVision/virtex-redcaps/virtex/models/contrastive.py deleted file mode 100644 index a3db11a8155aa9d68579cbc0f2663d5c7e3f587b..0000000000000000000000000000000000000000 --- a/spaces/umichVision/virtex-redcaps/virtex/models/contrastive.py +++ /dev/null @@ -1,119 +0,0 @@ -from typing import Any, Dict - -import torch -from torch import nn -import torch.distributed as dist - -from virtex.modules.label_smoothing import CrossEntropyLossWithLabelSmoothing -from virtex.modules.textual_heads import TextualHead -from virtex.modules.visual_backbones import VisualBackbone - - -class ImageTextContrastiveModel(nn.Module): - def __init__( - self, - visual: VisualBackbone, - textual: TextualHead, - label_smoothing: float = 0.0 - ): - super().__init__() - self.visual = visual - self.textual = textual - self.padding_idx = self.textual.padding_idx - - self.visual_projection 
= nn.Linear( - self.visual.visual_feature_size, - self.textual.textual_feature_size, - bias=False, - ) - self.logit_scale = nn.Parameter(torch.log(torch.tensor(1/0.07))) - self.loss = CrossEntropyLossWithLabelSmoothing( - label_smoothing, ignore_index=self.padding_idx - ) - - def forward(self, batch: Dict[str, torch.Tensor]) -> Dict[str, Any]: - - # Check if logit_scale needs to be clipped from last iteration. - self.logit_scale.data = torch.clamp(self.logit_scale.data, 0, 3.912) - # 50 times - - # shape: (batch_size, channels, height, width) - visual_features = self.visual(batch["image"]) - batch_size = visual_features.size(0) - - # shape: (batch_size, channels) - visual_features = visual_features.mean(dim=[2, 3]).view(batch_size, -1) - - # shape: (batch_size, textual_feature_size) - visual_features = self.visual_projection(visual_features) - - caption_tokens = batch["caption_tokens"] - caption_lengths = batch["caption_lengths"] - - # shape: (batch_size, max_caption_length, hidden_size) - textual_features = self.textual(caption_tokens, caption_lengths) - - # Take features from the first time-step (as BERT-* models do). - # shape: (batch_size, hidden_size) - textual_features = textual_features[:, 0, :] - - # Normalize visual and textual features. - # shape: (batch_size, textual_feature_size) - visual_features = visual_features / visual_features.norm(dim=-1, keepdim=True) - textual_features = textual_features / textual_features.norm( - dim=-1, keepdim=True - ) - # Gather textual features from all processes into one large tensor to - # increase negative samples for contrastive learning. - gathered_textual_features = [ - torch.zeros_like(textual_features) for _ in range(dist.get_world_size()) - ] - dist.all_gather(gathered_textual_features, textual_features) - - # Shift features of current rank to zeroth index for easy implementation. - gathered_textual_features[0], gathered_textual_features[dist.get_rank()] = ( - gathered_textual_features[dist.get_rank()], - gathered_textual_features[0], - ) - # shape: (batch_size * world_size, textual_feature_size) - gathered_textual_features = torch.cat(gathered_textual_features, dim=0) - - # Calculate pairwise cosine similarity as logits. - logit_scale = self.logit_scale.exp() - visual_logits = logit_scale * visual_features @ gathered_textual_features.t() - - # Targets are an identity matrix (image [i] should match with caption [i]) - visual_loss = self.loss( - visual_logits, torch.arange(visual_logits.size(0)).to(visual_logits.device) - ) - - # Do the same thing for visual features. - gathered_visual_features = [ - torch.zeros_like(visual_features) for _ in range(dist.get_world_size()) - ] - dist.all_gather(gathered_visual_features, visual_features) - - gathered_visual_features[0], gathered_visual_features[dist.get_rank()] = ( - gathered_visual_features[dist.get_rank()], - gathered_visual_features[0], - ) - # shape: (batch_size * world_size, textual_feature_size) - gathered_visual_features = torch.cat(gathered_visual_features, dim=0) - - # Calculate pairwise cosine similarity as logits. - logit_scale = self.logit_scale.exp() - textual_logits = logit_scale * textual_features @ gathered_visual_features.t() - - # Targets are an identity matrix (image [i] should match with caption [i]) - textual_loss = self.loss( - textual_logits, - torch.arange(textual_logits.size(0)).to(textual_logits.device), - ) - loss = 0.5 * (visual_loss + textual_loss) - output_dict: Dict[str, Any] = { - "loss": loss, - # Single scalar per batch for logging in training script. 
- "loss_components": {"contrastive": loss.clone().detach()}, - } - - return output_dict diff --git a/spaces/user238921933/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/clipseg/Readme.md b/spaces/user238921933/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/clipseg/Readme.md deleted file mode 100644 index b12ef244eeb5021f863072bd1fb127b92a5819c2..0000000000000000000000000000000000000000 --- a/spaces/user238921933/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/clipseg/Readme.md +++ /dev/null @@ -1,84 +0,0 @@ -# Image Segmentation Using Text and Image Prompts -This repository contains the code used in the paper ["Image Segmentation Using Text and Image Prompts"](https://arxiv.org/abs/2112.10003). - -**The Paper has been accepted to CVPR 2022!** - -drawing - -The systems allows to create segmentation models without training based on: -- An arbitrary text query -- Or an image with a mask highlighting stuff or an object. - -### Quick Start - -In the `Quickstart.ipynb` notebook we provide the code for using a pre-trained CLIPSeg model. If you run the notebook locally, make sure you downloaded the `rd64-uni.pth` weights, either manually or via git lfs extension. -It can also be used interactively using [MyBinder](https://mybinder.org/v2/gh/timojl/clipseg/HEAD?labpath=Quickstart.ipynb) -(please note that the VM does not use a GPU, thus inference takes a few seconds). - - -### Dependencies -This code base depends on pytorch, torchvision and clip (`pip install git+https://github.com/openai/CLIP.git`). -Additional dependencies are hidden for double blind review. - - -### Datasets - -* `PhraseCut` and `PhraseCutPlus`: Referring expression dataset -* `PFEPascalWrapper`: Wrapper class for PFENet's Pascal-5i implementation -* `PascalZeroShot`: Wrapper class for PascalZeroShot -* `COCOWrapper`: Wrapper class for COCO. - -### Models - -* `CLIPDensePredT`: CLIPSeg model with transformer-based decoder. -* `ViTDensePredT`: CLIPSeg model with transformer-based decoder. - -### Third Party Dependencies -For some of the datasets third party dependencies are required. Run the following commands in the `third_party` folder. -```bash -git clone https://github.com/cvlab-yonsei/JoEm -git clone https://github.com/Jia-Research-Lab/PFENet.git -git clone https://github.com/ChenyunWu/PhraseCutDataset.git -git clone https://github.com/juhongm999/hsnet.git -``` - -### Weights - -The MIT license does not apply to these weights. - -We provide two model weights, for D=64 (4.1MB) and D=16 (1.1MB). -``` -wget https://owncloud.gwdg.de/index.php/s/ioHbRzFx6th32hn/download -O weights.zip -unzip -d weights -j weights.zip -``` - - -### Training and Evaluation - -To train use the `training.py` script with experiment file and experiment id parameters. E.g. `python training.py phrasecut.yaml 0` will train the first phrasecut experiment which is defined by the `configuration` and first `individual_configurations` parameters. Model weights will be written in `logs/`. - -For evaluation use `score.py`. E.g. `python score.py phrasecut.yaml 0 0` will train the first phrasecut experiment of `test_configuration` and the first configuration in `individual_configurations`. - - -### Usage of PFENet Wrappers - -In order to use the dataset and model wrappers for PFENet, the PFENet repository needs to be cloned to the root folder. 
-`git clone https://github.com/Jia-Research-Lab/PFENet.git ` - - -### License - -The source code files in this repository (excluding model weights) are released under MIT license. - -### Citation -``` -@InProceedings{lueddecke22_cvpr, - author = {L\"uddecke, Timo and Ecker, Alexander}, - title = {Image Segmentation Using Text and Image Prompts}, - booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, - month = {June}, - year = {2022}, - pages = {7086-7096} -} - -``` diff --git a/spaces/videfikri/aicover/config.py b/spaces/videfikri/aicover/config.py deleted file mode 100644 index 958dbe22069c73fbf469fa50535340ced2bc0faf..0000000000000000000000000000000000000000 --- a/spaces/videfikri/aicover/config.py +++ /dev/null @@ -1,117 +0,0 @@ -import argparse -import glob -import sys -import torch -from multiprocessing import cpu_count - - -class Config: - def __init__(self): - self.device = "cuda:0" - self.is_half = True - self.n_cpu = 0 - self.gpu_name = None - self.gpu_mem = None - ( - self.python_cmd, - self.listen_port, - self.iscolab, - self.noparallel, - self.noautoopen, - ) = self.arg_parse() - self.x_pad, self.x_query, self.x_center, self.x_max = self.device_config() - - def arg_parse(self) -> tuple: - parser = argparse.ArgumentParser() - parser.add_argument("--port", type=int, default=7865, help="Listen port") - parser.add_argument( - "--pycmd", type=str, default="python", help="Python command" - ) - parser.add_argument("--colab", action="store_true", help="Launch in colab") - parser.add_argument( - "--noparallel", action="store_true", help="Disable parallel processing" - ) - parser.add_argument( - "--noautoopen", - action="store_true", - help="Do not open in browser automatically", - ) - cmd_opts = parser.parse_args() - - cmd_opts.port = cmd_opts.port if 0 <= cmd_opts.port <= 65535 else 7865 - - return ( - cmd_opts.pycmd, - cmd_opts.port, - cmd_opts.colab, - cmd_opts.noparallel, - cmd_opts.noautoopen, - ) - - def device_config(self) -> tuple: - if torch.cuda.is_available(): - i_device = int(self.device.split(":")[-1]) - self.gpu_name = torch.cuda.get_device_name(i_device) - if ( - ("16" in self.gpu_name and "V100" not in self.gpu_name.upper()) - or "P40" in self.gpu_name.upper() - or "1060" in self.gpu_name - or "1070" in self.gpu_name - or "1080" in self.gpu_name - ): - print("16系/10系显卡和P40强制单精度") - self.is_half = False - for config_file in ["32k.json", "40k.json", "48k.json"]: - with open(f"configs/{config_file}", "r") as f: - strr = f.read().replace("true", "false") - with open(f"configs/{config_file}", "w") as f: - f.write(strr) - with open("trainset_preprocess_pipeline_print.py", "r") as f: - strr = f.read().replace("3.7", "3.0") - with open("trainset_preprocess_pipeline_print.py", "w") as f: - f.write(strr) - else: - self.gpu_name = None - self.gpu_mem = int( - torch.cuda.get_device_properties(i_device).total_memory - / 1024 - / 1024 - / 1024 - + 0.4 - ) - if self.gpu_mem <= 4: - with open("trainset_preprocess_pipeline_print.py", "r") as f: - strr = f.read().replace("3.7", "3.0") - with open("trainset_preprocess_pipeline_print.py", "w") as f: - f.write(strr) - elif torch.backends.mps.is_available(): - print("没有发现支持的N卡, 使用MPS进行推理") - self.device = "mps" - else: - print("没有发现支持的N卡, 使用CPU进行推理") - self.device = "cpu" - self.is_half = True - - if self.n_cpu == 0: - self.n_cpu = cpu_count() - - if self.is_half: - # 6G显存配置 - x_pad = 3 - x_query = 10 - x_center = 60 - x_max = 65 - else: - # 5G显存配置 - x_pad = 1 - x_query = 6 - x_center = 38 - 
x_max = 41 - - if self.gpu_mem != None and self.gpu_mem <= 4: - x_pad = 1 - x_query = 5 - x_center = 30 - x_max = 32 - - return x_pad, x_query, x_center, x_max diff --git a/spaces/vih-v/Image_Face_Upscale_Restoration-GFPGAN/README.md b/spaces/vih-v/Image_Face_Upscale_Restoration-GFPGAN/README.md deleted file mode 100644 index 3ff1c3b4de91d3790510be76342a61cf60f01c5e..0000000000000000000000000000000000000000 --- a/spaces/vih-v/Image_Face_Upscale_Restoration-GFPGAN/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Image Face Upscale Restoration-GFPGAN -emoji: 📈 -colorFrom: blue -colorTo: gray -sdk: gradio -sdk_version: 3.1.7 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: vih-v/GFPGAN ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/whitphx/gradio-static-test/dist/assets/index-7028de6e.css b/spaces/whitphx/gradio-static-test/dist/assets/index-7028de6e.css deleted file mode 100644 index c236a2a8db98e52bfd1f2982d0a8f6dada9a5bb0..0000000000000000000000000000000000000000 --- a/spaces/whitphx/gradio-static-test/dist/assets/index-7028de6e.css +++ /dev/null @@ -1 +0,0 @@ -div.svelte-1oo81b7>*:not(.absolute){border-radius:0!important}div.svelte-1oo81b7>*:first-child{border-top-right-radius:var(--radius-lg)!important;border-top-left-radius:var(--radius-lg)!important}div.svelte-1oo81b7>*:last-child{border-top-right-radius:var(--radius-lg)!important;border-top-left-radius:var(--radius-lg)!important}div.svelte-1oo81b7>*+*:not(.absolute){border-top:none!important} diff --git a/spaces/xiang-wuu/yolov5/models/common.py b/spaces/xiang-wuu/yolov5/models/common.py deleted file mode 100644 index 959c965e60022b80c05735fcee5770e803fe36a1..0000000000000000000000000000000000000000 --- a/spaces/xiang-wuu/yolov5/models/common.py +++ /dev/null @@ -1,758 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Common modules -""" - -import json -import math -import platform -import warnings -from collections import OrderedDict, namedtuple -from copy import copy -from pathlib import Path - -import cv2 -import numpy as np -import pandas as pd -import requests -import torch -import torch.nn as nn -import yaml -from PIL import Image -from torch.cuda import amp - -from utils.dataloaders import exif_transpose, letterbox -from utils.general import (LOGGER, check_requirements, check_suffix, check_version, colorstr, increment_path, - make_divisible, non_max_suppression, scale_coords, xywh2xyxy, xyxy2xywh) -from utils.plots import Annotator, colors, save_one_box -from utils.torch_utils import copy_attr, time_sync - - -def autopad(k, p=None): # kernel, padding - # Pad to 'same' - if p is None: - p = k // 2 if isinstance(k, int) else [x // 2 for x in k] # auto-pad - return p - - -class Conv(nn.Module): - # Standard convolution - def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups - super().__init__() - self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False) - self.bn = nn.BatchNorm2d(c2) - self.act = nn.SiLU() if act is True else (act if isinstance(act, nn.Module) else nn.Identity()) - - def forward(self, x): - return self.act(self.bn(self.conv(x))) - - def forward_fuse(self, x): - return self.act(self.conv(x)) - - -class DWConv(Conv): - # Depth-wise convolution class - def __init__(self, c1, c2, k=1, s=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups - super().__init__(c1, c2, k, s, g=math.gcd(c1, c2), act=act) - - -class 
DWConvTranspose2d(nn.ConvTranspose2d): - # Depth-wise transpose convolution class - def __init__(self, c1, c2, k=1, s=1, p1=0, p2=0): # ch_in, ch_out, kernel, stride, padding, padding_out - super().__init__(c1, c2, k, s, p1, p2, groups=math.gcd(c1, c2)) - - -class TransformerLayer(nn.Module): - # Transformer layer https://arxiv.org/abs/2010.11929 (LayerNorm layers removed for better performance) - def __init__(self, c, num_heads): - super().__init__() - self.q = nn.Linear(c, c, bias=False) - self.k = nn.Linear(c, c, bias=False) - self.v = nn.Linear(c, c, bias=False) - self.ma = nn.MultiheadAttention(embed_dim=c, num_heads=num_heads) - self.fc1 = nn.Linear(c, c, bias=False) - self.fc2 = nn.Linear(c, c, bias=False) - - def forward(self, x): - x = self.ma(self.q(x), self.k(x), self.v(x))[0] + x - x = self.fc2(self.fc1(x)) + x - return x - - -class TransformerBlock(nn.Module): - # Vision Transformer https://arxiv.org/abs/2010.11929 - def __init__(self, c1, c2, num_heads, num_layers): - super().__init__() - self.conv = None - if c1 != c2: - self.conv = Conv(c1, c2) - self.linear = nn.Linear(c2, c2) # learnable position embedding - self.tr = nn.Sequential(*(TransformerLayer(c2, num_heads) for _ in range(num_layers))) - self.c2 = c2 - - def forward(self, x): - if self.conv is not None: - x = self.conv(x) - b, _, w, h = x.shape - p = x.flatten(2).permute(2, 0, 1) - return self.tr(p + self.linear(p)).permute(1, 2, 0).reshape(b, self.c2, w, h) - - -class Bottleneck(nn.Module): - # Standard bottleneck - def __init__(self, c1, c2, shortcut=True, g=1, e=0.5): # ch_in, ch_out, shortcut, groups, expansion - super().__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_, c2, 3, 1, g=g) - self.add = shortcut and c1 == c2 - - def forward(self, x): - return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x)) - - -class BottleneckCSP(nn.Module): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = nn.Conv2d(c1, c_, 1, 1, bias=False) - self.cv3 = nn.Conv2d(c_, c_, 1, 1, bias=False) - self.cv4 = Conv(2 * c_, c2, 1, 1) - self.bn = nn.BatchNorm2d(2 * c_) # applied to cat(cv2, cv3) - self.act = nn.SiLU() - self.m = nn.Sequential(*(Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n))) - - def forward(self, x): - y1 = self.cv3(self.m(self.cv1(x))) - y2 = self.cv2(x) - return self.cv4(self.act(self.bn(torch.cat((y1, y2), 1)))) - - -class CrossConv(nn.Module): - # Cross Convolution Downsample - def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False): - # ch_in, ch_out, kernel, stride, groups, expansion, shortcut - super().__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, (1, k), (1, s)) - self.cv2 = Conv(c_, c2, (k, 1), (s, 1), g=g) - self.add = shortcut and c1 == c2 - - def forward(self, x): - return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x)) - - -class C3(nn.Module): - # CSP Bottleneck with 3 convolutions - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c1, c_, 1, 1) - self.cv3 = Conv(2 * c_, c2, 1) # optional act=FReLU(c2) - self.m = nn.Sequential(*(Bottleneck(c_, c_, shortcut, g, e=1.0) for 
_ in range(n))) - - def forward(self, x): - return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), 1)) - - -class C3x(C3): - # C3 module with cross-convolutions - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) - self.m = nn.Sequential(*(CrossConv(c_, c_, 3, 1, g, 1.0, shortcut) for _ in range(n))) - - -class C3TR(C3): - # C3 module with TransformerBlock() - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) - self.m = TransformerBlock(c_, c_, 4, n) - - -class C3SPP(C3): - # C3 module with SPP() - def __init__(self, c1, c2, k=(5, 9, 13), n=1, shortcut=True, g=1, e=0.5): - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) - self.m = SPP(c_, c_, k) - - -class C3Ghost(C3): - # C3 module with GhostBottleneck() - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.m = nn.Sequential(*(GhostBottleneck(c_, c_) for _ in range(n))) - - -class SPP(nn.Module): - # Spatial Pyramid Pooling (SPP) layer https://arxiv.org/abs/1406.4729 - def __init__(self, c1, c2, k=(5, 9, 13)): - super().__init__() - c_ = c1 // 2 # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_ * (len(k) + 1), c2, 1, 1) - self.m = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k]) - - def forward(self, x): - x = self.cv1(x) - with warnings.catch_warnings(): - warnings.simplefilter('ignore') # suppress torch 1.9.0 max_pool2d() warning - return self.cv2(torch.cat([x] + [m(x) for m in self.m], 1)) - - -class SPPF(nn.Module): - # Spatial Pyramid Pooling - Fast (SPPF) layer for YOLOv5 by Glenn Jocher - def __init__(self, c1, c2, k=5): # equivalent to SPP(k=(5, 9, 13)) - super().__init__() - c_ = c1 // 2 # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_ * 4, c2, 1, 1) - self.m = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2) - - def forward(self, x): - x = self.cv1(x) - with warnings.catch_warnings(): - warnings.simplefilter('ignore') # suppress torch 1.9.0 max_pool2d() warning - y1 = self.m(x) - y2 = self.m(y1) - return self.cv2(torch.cat((x, y1, y2, self.m(y2)), 1)) - - -class Focus(nn.Module): - # Focus wh information into c-space - def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups - super().__init__() - self.conv = Conv(c1 * 4, c2, k, s, p, g, act) - # self.contract = Contract(gain=2) - - def forward(self, x): # x(b,c,w,h) -> y(b,4c,w/2,h/2) - return self.conv(torch.cat((x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]), 1)) - # return self.conv(self.contract(x)) - - -class GhostConv(nn.Module): - # Ghost Convolution https://github.com/huawei-noah/ghostnet - def __init__(self, c1, c2, k=1, s=1, g=1, act=True): # ch_in, ch_out, kernel, stride, groups - super().__init__() - c_ = c2 // 2 # hidden channels - self.cv1 = Conv(c1, c_, k, s, None, g, act) - self.cv2 = Conv(c_, c_, 5, 1, None, c_, act) - - def forward(self, x): - y = self.cv1(x) - return torch.cat((y, self.cv2(y)), 1) - - -class GhostBottleneck(nn.Module): - # Ghost Bottleneck https://github.com/huawei-noah/ghostnet - def __init__(self, c1, c2, k=3, s=1): # ch_in, ch_out, kernel, stride - super().__init__() - c_ = c2 // 2 - self.conv = nn.Sequential( - GhostConv(c1, c_, 1, 1), # pw - DWConv(c_, c_, k, s, act=False) if s == 2 else nn.Identity(), # dw - 
GhostConv(c_, c2, 1, 1, act=False)) # pw-linear - self.shortcut = nn.Sequential(DWConv(c1, c1, k, s, act=False), Conv(c1, c2, 1, 1, - act=False)) if s == 2 else nn.Identity() - - def forward(self, x): - return self.conv(x) + self.shortcut(x) - - -class Contract(nn.Module): - # Contract width-height into channels, i.e. x(1,64,80,80) to x(1,256,40,40) - def __init__(self, gain=2): - super().__init__() - self.gain = gain - - def forward(self, x): - b, c, h, w = x.size() # assert (h / s == 0) and (W / s == 0), 'Indivisible gain' - s = self.gain - x = x.view(b, c, h // s, s, w // s, s) # x(1,64,40,2,40,2) - x = x.permute(0, 3, 5, 1, 2, 4).contiguous() # x(1,2,2,64,40,40) - return x.view(b, c * s * s, h // s, w // s) # x(1,256,40,40) - - -class Expand(nn.Module): - # Expand channels into width-height, i.e. x(1,64,80,80) to x(1,16,160,160) - def __init__(self, gain=2): - super().__init__() - self.gain = gain - - def forward(self, x): - b, c, h, w = x.size() # assert C / s ** 2 == 0, 'Indivisible gain' - s = self.gain - x = x.view(b, s, s, c // s ** 2, h, w) # x(1,2,2,16,80,80) - x = x.permute(0, 3, 4, 1, 5, 2).contiguous() # x(1,16,80,2,80,2) - return x.view(b, c // s ** 2, h * s, w * s) # x(1,16,160,160) - - -class Concat(nn.Module): - # Concatenate a list of tensors along dimension - def __init__(self, dimension=1): - super().__init__() - self.d = dimension - - def forward(self, x): - return torch.cat(x, self.d) - - -class DetectMultiBackend(nn.Module): - # YOLOv5 MultiBackend class for python inference on various backends - def __init__(self, weights='yolov5s.pt', device=torch.device('cpu'), dnn=False, data=None, fp16=False, fuse=True): - # Usage: - # PyTorch: weights = *.pt - # TorchScript: *.torchscript - # ONNX Runtime: *.onnx - # ONNX OpenCV DNN: *.onnx with --dnn - # OpenVINO: *.xml - # CoreML: *.mlmodel - # TensorRT: *.engine - # TensorFlow SavedModel: *_saved_model - # TensorFlow GraphDef: *.pb - # TensorFlow Lite: *.tflite - # TensorFlow Edge TPU: *_edgetpu.tflite - from models.experimental import attempt_download, attempt_load # scoped to avoid circular import - - super().__init__() - w = str(weights[0] if isinstance(weights, list) else weights) - pt, jit, onnx, xml, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs = self.model_type(w) # get backend - w = attempt_download(w) # download if not local - fp16 &= (pt or jit or onnx or engine) and device.type != 'cpu' # FP16 - stride, names = 32, [f'class{i}' for i in range(1000)] # assign defaults - if data: # assign class names (optional) - with open(data, errors='ignore') as f: - names = yaml.safe_load(f)['names'] - - if pt: # PyTorch - model = attempt_load(weights if isinstance(weights, list) else w, device=device, inplace=True, fuse=fuse) - stride = max(int(model.stride.max()), 32) # model stride - names = model.module.names if hasattr(model, 'module') else model.names # get class names - model.half() if fp16 else model.float() - self.model = model # explicitly assign for to(), cpu(), cuda(), half() - elif jit: # TorchScript - LOGGER.info(f'Loading {w} for TorchScript inference...') - extra_files = {'config.txt': ''} # model metadata - model = torch.jit.load(w, _extra_files=extra_files) - model.half() if fp16 else model.float() - if extra_files['config.txt']: - d = json.loads(extra_files['config.txt']) # extra_files dict - stride, names = int(d['stride']), d['names'] - elif dnn: # ONNX OpenCV DNN - LOGGER.info(f'Loading {w} for ONNX OpenCV DNN inference...') - check_requirements(('opencv-python>=4.5.4',)) - net = 
cv2.dnn.readNetFromONNX(w) - elif onnx: # ONNX Runtime - LOGGER.info(f'Loading {w} for ONNX Runtime inference...') - cuda = torch.cuda.is_available() - check_requirements(('onnx', 'onnxruntime-gpu' if cuda else 'onnxruntime')) - import onnxruntime - providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] if cuda else ['CPUExecutionProvider'] - session = onnxruntime.InferenceSession(w, providers=providers) - meta = session.get_modelmeta().custom_metadata_map # metadata - if 'stride' in meta: - stride, names = int(meta['stride']), eval(meta['names']) - elif xml: # OpenVINO - LOGGER.info(f'Loading {w} for OpenVINO inference...') - check_requirements(('openvino',)) # requires openvino-dev: https://pypi.org/project/openvino-dev/ - from openvino.runtime import Core, Layout, get_batch - ie = Core() - if not Path(w).is_file(): # if not *.xml - w = next(Path(w).glob('*.xml')) # get *.xml file from *_openvino_model dir - network = ie.read_model(model=w, weights=Path(w).with_suffix('.bin')) - if network.get_parameters()[0].get_layout().empty: - network.get_parameters()[0].set_layout(Layout("NCHW")) - batch_dim = get_batch(network) - if batch_dim.is_static: - batch_size = batch_dim.get_length() - executable_network = ie.compile_model(network, device_name="CPU") # device_name="MYRIAD" for Intel NCS2 - output_layer = next(iter(executable_network.outputs)) - meta = Path(w).with_suffix('.yaml') - if meta.exists(): - stride, names = self._load_metadata(meta) # load metadata - elif engine: # TensorRT - LOGGER.info(f'Loading {w} for TensorRT inference...') - import tensorrt as trt # https://developer.nvidia.com/nvidia-tensorrt-download - check_version(trt.__version__, '7.0.0', hard=True) # require tensorrt>=7.0.0 - Binding = namedtuple('Binding', ('name', 'dtype', 'shape', 'data', 'ptr')) - logger = trt.Logger(trt.Logger.INFO) - with open(w, 'rb') as f, trt.Runtime(logger) as runtime: - model = runtime.deserialize_cuda_engine(f.read()) - context = model.create_execution_context() - bindings = OrderedDict() - fp16 = False # default updated below - dynamic_input = False - for index in range(model.num_bindings): - name = model.get_binding_name(index) - dtype = trt.nptype(model.get_binding_dtype(index)) - if model.binding_is_input(index): - if -1 in tuple(model.get_binding_shape(index)): # dynamic - dynamic_input = True - context.set_binding_shape(index, tuple(model.get_profile_shape(0, index)[2])) - if dtype == np.float16: - fp16 = True - shape = tuple(context.get_binding_shape(index)) - data = torch.from_numpy(np.empty(shape, dtype=np.dtype(dtype))).to(device) - bindings[name] = Binding(name, dtype, shape, data, int(data.data_ptr())) - binding_addrs = OrderedDict((n, d.ptr) for n, d in bindings.items()) - batch_size = bindings['images'].shape[0] # if dynamic, this is instead max batch size - elif coreml: # CoreML - LOGGER.info(f'Loading {w} for CoreML inference...') - import coremltools as ct - model = ct.models.MLModel(w) - else: # TensorFlow (SavedModel, GraphDef, Lite, Edge TPU) - if saved_model: # SavedModel - LOGGER.info(f'Loading {w} for TensorFlow SavedModel inference...') - import tensorflow as tf - keras = False # assume TF1 saved_model - model = tf.keras.models.load_model(w) if keras else tf.saved_model.load(w) - elif pb: # GraphDef https://www.tensorflow.org/guide/migrate#a_graphpb_or_graphpbtxt - LOGGER.info(f'Loading {w} for TensorFlow GraphDef inference...') - import tensorflow as tf - - def wrap_frozen_graph(gd, inputs, outputs): - x = tf.compat.v1.wrap_function(lambda: 
tf.compat.v1.import_graph_def(gd, name=""), []) # wrapped - ge = x.graph.as_graph_element - return x.prune(tf.nest.map_structure(ge, inputs), tf.nest.map_structure(ge, outputs)) - - gd = tf.Graph().as_graph_def() # graph_def - with open(w, 'rb') as f: - gd.ParseFromString(f.read()) - frozen_func = wrap_frozen_graph(gd, inputs="x:0", outputs="Identity:0") - elif tflite or edgetpu: # https://www.tensorflow.org/lite/guide/python#install_tensorflow_lite_for_python - try: # https://coral.ai/docs/edgetpu/tflite-python/#update-existing-tf-lite-code-for-the-edge-tpu - from tflite_runtime.interpreter import Interpreter, load_delegate - except ImportError: - import tensorflow as tf - Interpreter, load_delegate = tf.lite.Interpreter, tf.lite.experimental.load_delegate, - if edgetpu: # Edge TPU https://coral.ai/software/#edgetpu-runtime - LOGGER.info(f'Loading {w} for TensorFlow Lite Edge TPU inference...') - delegate = { - 'Linux': 'libedgetpu.so.1', - 'Darwin': 'libedgetpu.1.dylib', - 'Windows': 'edgetpu.dll'}[platform.system()] - interpreter = Interpreter(model_path=w, experimental_delegates=[load_delegate(delegate)]) - else: # Lite - LOGGER.info(f'Loading {w} for TensorFlow Lite inference...') - interpreter = Interpreter(model_path=w) # load TFLite model - interpreter.allocate_tensors() # allocate - input_details = interpreter.get_input_details() # inputs - output_details = interpreter.get_output_details() # outputs - elif tfjs: - raise Exception('ERROR: YOLOv5 TF.js inference is not supported') - else: - raise Exception(f'ERROR: {w} is not a supported format') - self.__dict__.update(locals()) # assign all variables to self - - def forward(self, im, augment=False, visualize=False, val=False): - # YOLOv5 MultiBackend inference - b, ch, h, w = im.shape # batch, channel, height, width - if self.fp16 and im.dtype != torch.float16: - im = im.half() # to FP16 - - if self.pt: # PyTorch - y = self.model(im, augment=augment, visualize=visualize)[0] - elif self.jit: # TorchScript - y = self.model(im)[0] - elif self.dnn: # ONNX OpenCV DNN - im = im.cpu().numpy() # torch to numpy - self.net.setInput(im) - y = self.net.forward() - elif self.onnx: # ONNX Runtime - im = im.cpu().numpy() # torch to numpy - y = self.session.run([self.session.get_outputs()[0].name], {self.session.get_inputs()[0].name: im})[0] - elif self.xml: # OpenVINO - im = im.cpu().numpy() # FP32 - y = self.executable_network([im])[self.output_layer] - elif self.engine: # TensorRT - if im.shape != self.bindings['images'].shape and self.dynamic_input: - self.context.set_binding_shape(self.model.get_binding_index('images'), im.shape) # reshape if dynamic - self.bindings['images'] = self.bindings['images']._replace(shape=im.shape) - assert im.shape == self.bindings['images'].shape, ( - f"image shape {im.shape} exceeds model max shape {self.bindings['images'].shape}" if self.dynamic_input - else f"image shape {im.shape} does not match model shape {self.bindings['images'].shape}") - self.binding_addrs['images'] = int(im.data_ptr()) - self.context.execute_v2(list(self.binding_addrs.values())) - y = self.bindings['output'].data - elif self.coreml: # CoreML - im = im.permute(0, 2, 3, 1).cpu().numpy() # torch BCHW to numpy BHWC shape(1,320,192,3) - im = Image.fromarray((im[0] * 255).astype('uint8')) - # im = im.resize((192, 320), Image.ANTIALIAS) - y = self.model.predict({'image': im}) # coordinates are xywh normalized - if 'confidence' in y: - box = xywh2xyxy(y['coordinates'] * [[w, h, w, h]]) # xyxy pixels - conf, cls = y['confidence'].max(1), 
y['confidence'].argmax(1).astype(np.float) - y = np.concatenate((box, conf.reshape(-1, 1), cls.reshape(-1, 1)), 1) - else: - k = 'var_' + str(sorted(int(k.replace('var_', '')) for k in y)[-1]) # output key - y = y[k] # output - else: # TensorFlow (SavedModel, GraphDef, Lite, Edge TPU) - im = im.permute(0, 2, 3, 1).cpu().numpy() # torch BCHW to numpy BHWC shape(1,320,192,3) - if self.saved_model: # SavedModel - y = (self.model(im, training=False) if self.keras else self.model(im)).numpy() - elif self.pb: # GraphDef - y = self.frozen_func(x=self.tf.constant(im)).numpy() - else: # Lite or Edge TPU - input, output = self.input_details[0], self.output_details[0] - int8 = input['dtype'] == np.uint8 # is TFLite quantized uint8 model - if int8: - scale, zero_point = input['quantization'] - im = (im / scale + zero_point).astype(np.uint8) # de-scale - self.interpreter.set_tensor(input['index'], im) - self.interpreter.invoke() - y = self.interpreter.get_tensor(output['index']) - if int8: - scale, zero_point = output['quantization'] - y = (y.astype(np.float32) - zero_point) * scale # re-scale - y[..., :4] *= [w, h, w, h] # xywh normalized to pixels - - if isinstance(y, np.ndarray): - y = torch.tensor(y, device=self.device) - return (y, []) if val else y - - def warmup(self, imgsz=(1, 3, 640, 640)): - # Warmup model by running inference once - warmup_types = self.pt, self.jit, self.onnx, self.engine, self.saved_model, self.pb - if any(warmup_types) and self.device.type != 'cpu': - im = torch.zeros(*imgsz, dtype=torch.half if self.fp16 else torch.float, device=self.device) # input - for _ in range(2 if self.jit else 1): # - self.forward(im) # warmup - - @staticmethod - def model_type(p='path/to/model.pt'): - # Return model type from model path, i.e. path='path/to/model.onnx' -> type=onnx - from export import export_formats - suffixes = list(export_formats().Suffix) + ['.xml'] # export suffixes - check_suffix(p, suffixes) # checks - p = Path(p).name # eliminate trailing separators - pt, jit, onnx, xml, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs, xml2 = (s in p for s in suffixes) - xml |= xml2 # *_openvino_model or *.xml - tflite &= not edgetpu # *.tflite - return pt, jit, onnx, xml, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs - - @staticmethod - def _load_metadata(f='path/to/meta.yaml'): - # Load metadata from meta.yaml if it exists - with open(f, errors='ignore') as f: - d = yaml.safe_load(f) - return d['stride'], d['names'] # assign stride, names - - -class AutoShape(nn.Module): - # YOLOv5 input-robust model wrapper for passing cv2/np/PIL/torch inputs. Includes preprocessing, inference and NMS - conf = 0.25 # NMS confidence threshold - iou = 0.45 # NMS IoU threshold - agnostic = False # NMS class-agnostic - multi_label = False # NMS multiple labels per box - classes = None # (optional list) filter by class, i.e. = [0, 15, 16] for COCO persons, cats and dogs - max_det = 1000 # maximum number of detections per image - amp = False # Automatic Mixed Precision (AMP) inference - - def __init__(self, model, verbose=True): - super().__init__() - if verbose: - LOGGER.info('Adding AutoShape... 
') - copy_attr(self, model, include=('yaml', 'nc', 'hyp', 'names', 'stride', 'abc'), exclude=()) # copy attributes - self.dmb = isinstance(model, DetectMultiBackend) # DetectMultiBackend() instance - self.pt = not self.dmb or model.pt # PyTorch model - self.model = model.eval() - - def _apply(self, fn): - # Apply to(), cpu(), cuda(), half() to model tensors that are not parameters or registered buffers - self = super()._apply(fn) - if self.pt: - m = self.model.model.model[-1] if self.dmb else self.model.model[-1] # Detect() - m.stride = fn(m.stride) - m.grid = list(map(fn, m.grid)) - if isinstance(m.anchor_grid, list): - m.anchor_grid = list(map(fn, m.anchor_grid)) - return self - - @torch.no_grad() - def forward(self, imgs, size=640, augment=False, profile=False): - # Inference from various sources. For height=640, width=1280, RGB images example inputs are: - # file: imgs = 'data/images/zidane.jpg' # str or PosixPath - # URI: = 'https://ultralytics.com/images/zidane.jpg' - # OpenCV: = cv2.imread('image.jpg')[:,:,::-1] # HWC BGR to RGB x(640,1280,3) - # PIL: = Image.open('image.jpg') or ImageGrab.grab() # HWC x(640,1280,3) - # numpy: = np.zeros((640,1280,3)) # HWC - # torch: = torch.zeros(16,3,320,640) # BCHW (scaled to size=640, 0-1 values) - # multiple: = [Image.open('image1.jpg'), Image.open('image2.jpg'), ...] # list of images - - t = [time_sync()] - p = next(self.model.parameters()) if self.pt else torch.zeros(1, device=self.model.device) # for device, type - autocast = self.amp and (p.device.type != 'cpu') # Automatic Mixed Precision (AMP) inference - if isinstance(imgs, torch.Tensor): # torch - with amp.autocast(autocast): - return self.model(imgs.to(p.device).type_as(p), augment, profile) # inference - - # Pre-process - n, imgs = (len(imgs), list(imgs)) if isinstance(imgs, (list, tuple)) else (1, [imgs]) # number, list of images - shape0, shape1, files = [], [], [] # image and inference shapes, filenames - for i, im in enumerate(imgs): - f = f'image{i}' # filename - if isinstance(im, (str, Path)): # filename or uri - im, f = Image.open(requests.get(im, stream=True).raw if str(im).startswith('http') else im), im - im = np.asarray(exif_transpose(im)) - elif isinstance(im, Image.Image): # PIL Image - im, f = np.asarray(exif_transpose(im)), getattr(im, 'filename', f) or f - files.append(Path(f).with_suffix('.jpg').name) - if im.shape[0] < 5: # image in CHW - im = im.transpose((1, 2, 0)) # reverse dataloader .transpose(2, 0, 1) - im = im[..., :3] if im.ndim == 3 else np.tile(im[..., None], 3) # enforce 3ch input - s = im.shape[:2] # HWC - shape0.append(s) # image shape - g = (size / max(s)) # gain - shape1.append([y * g for y in s]) - imgs[i] = im if im.data.contiguous else np.ascontiguousarray(im) # update - shape1 = [make_divisible(x, self.stride) if self.pt else size for x in np.array(shape1).max(0)] # inf shape - x = [letterbox(im, shape1, auto=False)[0] for im in imgs] # pad - x = np.ascontiguousarray(np.array(x).transpose((0, 3, 1, 2))) # stack and BHWC to BCHW - x = torch.from_numpy(x).to(p.device).type_as(p) / 255 # uint8 to fp16/32 - t.append(time_sync()) - - with amp.autocast(autocast): - # Inference - y = self.model(x, augment, profile) # forward - t.append(time_sync()) - - # Post-process - y = non_max_suppression(y if self.dmb else y[0], - self.conf, - self.iou, - self.classes, - self.agnostic, - self.multi_label, - max_det=self.max_det) # NMS - for i in range(n): - scale_coords(shape1, y[i][:, :4], shape0[i]) - - t.append(time_sync()) - return Detections(imgs, y, files, t, 
self.names, x.shape) - - -class Detections: - # YOLOv5 detections class for inference results - def __init__(self, imgs, pred, files, times=(0, 0, 0, 0), names=None, shape=None): - super().__init__() - d = pred[0].device # device - gn = [torch.tensor([*(im.shape[i] for i in [1, 0, 1, 0]), 1, 1], device=d) for im in imgs] # normalizations - self.imgs = imgs # list of images as numpy arrays - self.pred = pred # list of tensors pred[0] = (xyxy, conf, cls) - self.names = names # class names - self.files = files # image filenames - self.times = times # profiling times - self.xyxy = pred # xyxy pixels - self.xywh = [xyxy2xywh(x) for x in pred] # xywh pixels - self.xyxyn = [x / g for x, g in zip(self.xyxy, gn)] # xyxy normalized - self.xywhn = [x / g for x, g in zip(self.xywh, gn)] # xywh normalized - self.n = len(self.pred) # number of images (batch size) - self.t = tuple((times[i + 1] - times[i]) * 1000 / self.n for i in range(3)) # timestamps (ms) - self.s = shape # inference BCHW shape - - def display(self, pprint=False, show=False, save=False, crop=False, render=False, labels=True, save_dir=Path('')): - crops = [] - for i, (im, pred) in enumerate(zip(self.imgs, self.pred)): - s = f'image {i + 1}/{len(self.pred)}: {im.shape[0]}x{im.shape[1]} ' # string - if pred.shape[0]: - for c in pred[:, -1].unique(): - n = (pred[:, -1] == c).sum() # detections per class - s += f"{n} {self.names[int(c)]}{'s' * (n > 1)}, " # add to string - if show or save or render or crop: - annotator = Annotator(im, example=str(self.names)) - for *box, conf, cls in reversed(pred): # xyxy, confidence, class - label = f'{self.names[int(cls)]} {conf:.2f}' - if crop: - file = save_dir / 'crops' / self.names[int(cls)] / self.files[i] if save else None - crops.append({ - 'box': box, - 'conf': conf, - 'cls': cls, - 'label': label, - 'im': save_one_box(box, im, file=file, save=save)}) - else: # all others - annotator.box_label(box, label if labels else '', color=colors(cls)) - im = annotator.im - else: - s += '(no detections)' - - im = Image.fromarray(im.astype(np.uint8)) if isinstance(im, np.ndarray) else im # from np - if pprint: - print(s.rstrip(', ')) - if show: - im.show(self.files[i]) # show - if save: - f = self.files[i] - im.save(save_dir / f) # save - if i == self.n - 1: - LOGGER.info(f"Saved {self.n} image{'s' * (self.n > 1)} to {colorstr('bold', save_dir)}") - if render: - self.imgs[i] = np.asarray(im) - if crop: - if save: - LOGGER.info(f'Saved results to {save_dir}\n') - return crops - - def print(self): - self.display(pprint=True) # print results - print(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {tuple(self.s)}' % self.t) - - def show(self, labels=True): - self.display(show=True, labels=labels) # show results - - def save(self, labels=True, save_dir='runs/detect/exp'): - save_dir = increment_path(save_dir, exist_ok=save_dir != 'runs/detect/exp', mkdir=True) # increment save_dir - self.display(save=True, labels=labels, save_dir=save_dir) # save results - - def crop(self, save=True, save_dir='runs/detect/exp'): - save_dir = increment_path(save_dir, exist_ok=save_dir != 'runs/detect/exp', mkdir=True) if save else None - return self.display(crop=True, save=save, save_dir=save_dir) # crop results - - def render(self, labels=True): - self.display(render=True, labels=labels) # render results - return self.imgs - - def pandas(self): - # return detections as pandas DataFrames, i.e. 
print(results.pandas().xyxy[0]) - new = copy(self) # return copy - ca = 'xmin', 'ymin', 'xmax', 'ymax', 'confidence', 'class', 'name' # xyxy columns - cb = 'xcenter', 'ycenter', 'width', 'height', 'confidence', 'class', 'name' # xywh columns - for k, c in zip(['xyxy', 'xyxyn', 'xywh', 'xywhn'], [ca, ca, cb, cb]): - a = [[x[:5] + [int(x[5]), self.names[int(x[5])]] for x in x.tolist()] for x in getattr(self, k)] # update - setattr(new, k, [pd.DataFrame(x, columns=c) for x in a]) - return new - - def tolist(self): - # return a list of Detections objects, i.e. 'for result in results.tolist():' - r = range(self.n) # iterable - x = [Detections([self.imgs[i]], [self.pred[i]], [self.files[i]], self.times, self.names, self.s) for i in r] - # for d in x: - # for k in ['imgs', 'pred', 'xyxy', 'xyxyn', 'xywh', 'xywhn']: - # setattr(d, k, getattr(d, k)[0]) # pop out of list - return x - - def __len__(self): - return self.n # override len(results) - - def __str__(self): - self.print() # override print(results) - return '' - - -class Classify(nn.Module): - # Classification head, i.e. x(b,c1,20,20) to x(b,c2) - def __init__(self, c1, c2, k=1, s=1, p=None, g=1): # ch_in, ch_out, kernel, stride, padding, groups - super().__init__() - self.aap = nn.AdaptiveAvgPool2d(1) # to x(b,c1,1,1) - self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g) # to x(b,c2,1,1) - self.flat = nn.Flatten() - - def forward(self, x): - z = torch.cat([self.aap(y) for y in (x if isinstance(x, list) else [x])], 1) # cat if list - return self.flat(self.conv(z)) # flatten to x(b,c2) diff --git a/spaces/xiaoyun235/White-box-Cartoonization/wbc/network.py b/spaces/xiaoyun235/White-box-Cartoonization/wbc/network.py deleted file mode 100644 index 6f16cee1aa1994d0a78c524f459764de5164e637..0000000000000000000000000000000000000000 --- a/spaces/xiaoyun235/White-box-Cartoonization/wbc/network.py +++ /dev/null @@ -1,62 +0,0 @@ -import tensorflow as tf -import numpy as np -import tensorflow.contrib.slim as slim - - - -def resblock(inputs, out_channel=32, name='resblock'): - - with tf.variable_scope(name): - - x = slim.convolution2d(inputs, out_channel, [3, 3], - activation_fn=None, scope='conv1') - x = tf.nn.leaky_relu(x) - x = slim.convolution2d(x, out_channel, [3, 3], - activation_fn=None, scope='conv2') - - return x + inputs - - - - -def unet_generator(inputs, channel=32, num_blocks=4, name='generator', reuse=False): - with tf.variable_scope(name, reuse=reuse): - - x0 = slim.convolution2d(inputs, channel, [7, 7], activation_fn=None) - x0 = tf.nn.leaky_relu(x0) - - x1 = slim.convolution2d(x0, channel, [3, 3], stride=2, activation_fn=None) - x1 = tf.nn.leaky_relu(x1) - x1 = slim.convolution2d(x1, channel*2, [3, 3], activation_fn=None) - x1 = tf.nn.leaky_relu(x1) - - x2 = slim.convolution2d(x1, channel*2, [3, 3], stride=2, activation_fn=None) - x2 = tf.nn.leaky_relu(x2) - x2 = slim.convolution2d(x2, channel*4, [3, 3], activation_fn=None) - x2 = tf.nn.leaky_relu(x2) - - for idx in range(num_blocks): - x2 = resblock(x2, out_channel=channel*4, name='block_{}'.format(idx)) - - x2 = slim.convolution2d(x2, channel*2, [3, 3], activation_fn=None) - x2 = tf.nn.leaky_relu(x2) - - h1, w1 = tf.shape(x2)[1], tf.shape(x2)[2] - x3 = tf.image.resize_bilinear(x2, (h1*2, w1*2)) - x3 = slim.convolution2d(x3+x1, channel*2, [3, 3], activation_fn=None) - x3 = tf.nn.leaky_relu(x3) - x3 = slim.convolution2d(x3, channel, [3, 3], activation_fn=None) - x3 = tf.nn.leaky_relu(x3) - - h2, w2 = tf.shape(x3)[1], tf.shape(x3)[2] - x4 = tf.image.resize_bilinear(x3, (h2*2, 
w2*2)) - x4 = slim.convolution2d(x4+x0, channel, [3, 3], activation_fn=None) - x4 = tf.nn.leaky_relu(x4) - x4 = slim.convolution2d(x4, 3, [7, 7], activation_fn=None) - - return x4 - -if __name__ == '__main__': - - - pass \ No newline at end of file diff --git a/spaces/xinyu1205/recognize-anything/GroundingDINO/groundingdino/models/GroundingDINO/utils.py b/spaces/xinyu1205/recognize-anything/GroundingDINO/groundingdino/models/GroundingDINO/utils.py deleted file mode 100644 index 5bd18f70225e12b2e27fdb4eabcde91d959f8e31..0000000000000000000000000000000000000000 --- a/spaces/xinyu1205/recognize-anything/GroundingDINO/groundingdino/models/GroundingDINO/utils.py +++ /dev/null @@ -1,268 +0,0 @@ -# ------------------------------------------------------------------------ -# Grounding DINO -# url: https://github.com/IDEA-Research/GroundingDINO -# Copyright (c) 2023 IDEA. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ - -import copy -import math - -import torch -import torch.nn.functional as F -from torch import Tensor, nn - - -def _get_clones(module, N, layer_share=False): - # import ipdb; ipdb.set_trace() - if layer_share: - return nn.ModuleList([module for i in range(N)]) - else: - return nn.ModuleList([copy.deepcopy(module) for i in range(N)]) - - -def get_sine_pos_embed( - pos_tensor: torch.Tensor, - num_pos_feats: int = 128, - temperature: int = 10000, - exchange_xy: bool = True, -): - """generate sine position embedding from a position tensor - Args: - pos_tensor (torch.Tensor): shape: [..., n]. - num_pos_feats (int): projected shape for each float in the tensor. - temperature (int): temperature in the sine/cosine function. - exchange_xy (bool, optional): exchange pos x and pos y. \ - For example, input tensor is [x,y], the results will be [pos(y), pos(x)]. Defaults to True. - Returns: - pos_embed (torch.Tensor): shape: [..., n*num_pos_feats]. 
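        Example (illustrative sketch, not part of the original docstring; the
        stack/flatten steps in the body appear to assume a 3-D pos_tensor such
        as (n_query, bs, n)):
            >>> pos = torch.rand(100, 2, 4)                       # e.g. box queries in cxcywh
            >>> emb = get_sine_pos_embed(pos, num_pos_feats=128)  # default settings
            >>> emb.shape                                         # each of the n floats -> 128 dims
            torch.Size([100, 2, 512])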
- """ - scale = 2 * math.pi - dim_t = torch.arange(num_pos_feats, dtype=torch.float32, device=pos_tensor.device) - dim_t = temperature ** (2 * torch.div(dim_t, 2, rounding_mode="floor") / num_pos_feats) - - def sine_func(x: torch.Tensor): - sin_x = x * scale / dim_t - sin_x = torch.stack((sin_x[..., 0::2].sin(), sin_x[..., 1::2].cos()), dim=3).flatten(2) - return sin_x - - pos_res = [sine_func(x) for x in pos_tensor.split([1] * pos_tensor.shape[-1], dim=-1)] - if exchange_xy: - pos_res[0], pos_res[1] = pos_res[1], pos_res[0] - pos_res = torch.cat(pos_res, dim=-1) - return pos_res - - -def gen_encoder_output_proposals( - memory: Tensor, memory_padding_mask: Tensor, spatial_shapes: Tensor, learnedwh=None -): - """ - Input: - - memory: bs, \sum{hw}, d_model - - memory_padding_mask: bs, \sum{hw} - - spatial_shapes: nlevel, 2 - - learnedwh: 2 - Output: - - output_memory: bs, \sum{hw}, d_model - - output_proposals: bs, \sum{hw}, 4 - """ - N_, S_, C_ = memory.shape - proposals = [] - _cur = 0 - for lvl, (H_, W_) in enumerate(spatial_shapes): - mask_flatten_ = memory_padding_mask[:, _cur : (_cur + H_ * W_)].view(N_, H_, W_, 1) - valid_H = torch.sum(~mask_flatten_[:, :, 0, 0], 1) - valid_W = torch.sum(~mask_flatten_[:, 0, :, 0], 1) - - # import ipdb; ipdb.set_trace() - - grid_y, grid_x = torch.meshgrid( - torch.linspace(0, H_ - 1, H_, dtype=torch.float32, device=memory.device), - torch.linspace(0, W_ - 1, W_, dtype=torch.float32, device=memory.device), - ) - grid = torch.cat([grid_x.unsqueeze(-1), grid_y.unsqueeze(-1)], -1) # H_, W_, 2 - - scale = torch.cat([valid_W.unsqueeze(-1), valid_H.unsqueeze(-1)], 1).view(N_, 1, 1, 2) - grid = (grid.unsqueeze(0).expand(N_, -1, -1, -1) + 0.5) / scale - - if learnedwh is not None: - # import ipdb; ipdb.set_trace() - wh = torch.ones_like(grid) * learnedwh.sigmoid() * (2.0**lvl) - else: - wh = torch.ones_like(grid) * 0.05 * (2.0**lvl) - - # scale = torch.cat([W_[None].unsqueeze(-1), H_[None].unsqueeze(-1)], 1).view(1, 1, 1, 2).repeat(N_, 1, 1, 1) - # grid = (grid.unsqueeze(0).expand(N_, -1, -1, -1) + 0.5) / scale - # wh = torch.ones_like(grid) / scale - proposal = torch.cat((grid, wh), -1).view(N_, -1, 4) - proposals.append(proposal) - _cur += H_ * W_ - # import ipdb; ipdb.set_trace() - output_proposals = torch.cat(proposals, 1) - output_proposals_valid = ((output_proposals > 0.01) & (output_proposals < 0.99)).all( - -1, keepdim=True - ) - output_proposals = torch.log(output_proposals / (1 - output_proposals)) # unsigmoid - output_proposals = output_proposals.masked_fill(memory_padding_mask.unsqueeze(-1), float("inf")) - output_proposals = output_proposals.masked_fill(~output_proposals_valid, float("inf")) - - output_memory = memory - output_memory = output_memory.masked_fill(memory_padding_mask.unsqueeze(-1), float(0)) - output_memory = output_memory.masked_fill(~output_proposals_valid, float(0)) - - # output_memory = output_memory.masked_fill(memory_padding_mask.unsqueeze(-1), float('inf')) - # output_memory = output_memory.masked_fill(~output_proposals_valid, float('inf')) - - return output_memory, output_proposals - - -class RandomBoxPerturber: - def __init__( - self, x_noise_scale=0.2, y_noise_scale=0.2, w_noise_scale=0.2, h_noise_scale=0.2 - ) -> None: - self.noise_scale = torch.Tensor( - [x_noise_scale, y_noise_scale, w_noise_scale, h_noise_scale] - ) - - def __call__(self, refanchors: Tensor) -> Tensor: - nq, bs, query_dim = refanchors.shape - device = refanchors.device - - noise_raw = torch.rand_like(refanchors) - noise_scale = 
self.noise_scale.to(device)[:query_dim] - - new_refanchors = refanchors * (1 + (noise_raw - 0.5) * noise_scale) - return new_refanchors.clamp_(0, 1) - - -def sigmoid_focal_loss( - inputs, targets, num_boxes, alpha: float = 0.25, gamma: float = 2, no_reduction=False -): - """ - Loss used in RetinaNet for dense detection: https://arxiv.org/abs/1708.02002. - Args: - inputs: A float tensor of arbitrary shape. - The predictions for each example. - targets: A float tensor with the same shape as inputs. Stores the binary - classification label for each element in inputs - (0 for the negative class and 1 for the positive class). - alpha: (optional) Weighting factor in range (0,1) to balance - positive vs negative examples. Default = -1 (no weighting). - gamma: Exponent of the modulating factor (1 - p_t) to - balance easy vs hard examples. - Returns: - Loss tensor - """ - prob = inputs.sigmoid() - ce_loss = F.binary_cross_entropy_with_logits(inputs, targets, reduction="none") - p_t = prob * targets + (1 - prob) * (1 - targets) - loss = ce_loss * ((1 - p_t) ** gamma) - - if alpha >= 0: - alpha_t = alpha * targets + (1 - alpha) * (1 - targets) - loss = alpha_t * loss - - if no_reduction: - return loss - - return loss.mean(1).sum() / num_boxes - - -class MLP(nn.Module): - """Very simple multi-layer perceptron (also called FFN)""" - - def __init__(self, input_dim, hidden_dim, output_dim, num_layers): - super().__init__() - self.num_layers = num_layers - h = [hidden_dim] * (num_layers - 1) - self.layers = nn.ModuleList( - nn.Linear(n, k) for n, k in zip([input_dim] + h, h + [output_dim]) - ) - - def forward(self, x): - for i, layer in enumerate(self.layers): - x = F.relu(layer(x)) if i < self.num_layers - 1 else layer(x) - return x - - -def _get_activation_fn(activation, d_model=256, batch_dim=0): - """Return an activation function given a string""" - if activation == "relu": - return F.relu - if activation == "gelu": - return F.gelu - if activation == "glu": - return F.glu - if activation == "prelu": - return nn.PReLU() - if activation == "selu": - return F.selu - - raise RuntimeError(f"activation should be relu/gelu, not {activation}.") - - -def gen_sineembed_for_position(pos_tensor): - # n_query, bs, _ = pos_tensor.size() - # sineembed_tensor = torch.zeros(n_query, bs, 256) - scale = 2 * math.pi - dim_t = torch.arange(128, dtype=torch.float32, device=pos_tensor.device) - dim_t = 10000 ** (2 * (torch.div(dim_t, 2, rounding_mode='floor')) / 128) - x_embed = pos_tensor[:, :, 0] * scale - y_embed = pos_tensor[:, :, 1] * scale - pos_x = x_embed[:, :, None] / dim_t - pos_y = y_embed[:, :, None] / dim_t - pos_x = torch.stack((pos_x[:, :, 0::2].sin(), pos_x[:, :, 1::2].cos()), dim=3).flatten(2) - pos_y = torch.stack((pos_y[:, :, 0::2].sin(), pos_y[:, :, 1::2].cos()), dim=3).flatten(2) - if pos_tensor.size(-1) == 2: - pos = torch.cat((pos_y, pos_x), dim=2) - elif pos_tensor.size(-1) == 4: - w_embed = pos_tensor[:, :, 2] * scale - pos_w = w_embed[:, :, None] / dim_t - pos_w = torch.stack((pos_w[:, :, 0::2].sin(), pos_w[:, :, 1::2].cos()), dim=3).flatten(2) - - h_embed = pos_tensor[:, :, 3] * scale - pos_h = h_embed[:, :, None] / dim_t - pos_h = torch.stack((pos_h[:, :, 0::2].sin(), pos_h[:, :, 1::2].cos()), dim=3).flatten(2) - - pos = torch.cat((pos_y, pos_x, pos_w, pos_h), dim=2) - else: - raise ValueError("Unknown pos_tensor shape(-1):{}".format(pos_tensor.size(-1))) - return pos - - -class ContrastiveEmbed(nn.Module): - def __init__(self, max_text_len=256): - """ - Args: - max_text_len: max length of text. 
- """ - super().__init__() - self.max_text_len = max_text_len - - def forward(self, x, text_dict): - """_summary_ - - Args: - x (_type_): _description_ - text_dict (_type_): _description_ - { - 'encoded_text': encoded_text, # bs, 195, d_model - 'text_token_mask': text_token_mask, # bs, 195 - # True for used tokens. False for padding tokens - } - Returns: - _type_: _description_ - """ - assert isinstance(text_dict, dict) - - y = text_dict["encoded_text"] - text_token_mask = text_dict["text_token_mask"] - - res = x @ y.transpose(-1, -2) - res.masked_fill_(~text_token_mask[:, None, :], float("-inf")) - - # padding to max_text_len - new_res = torch.full((*res.shape[:-1], self.max_text_len), float("-inf"), device=res.device) - new_res[..., : res.shape[-1]] = res - - return new_res diff --git a/spaces/xp3857/Image_Restoration_Colorization/Global/data/__init__.py b/spaces/xp3857/Image_Restoration_Colorization/Global/data/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/xwsm/gpt/crazy_functional.py b/spaces/xwsm/gpt/crazy_functional.py deleted file mode 100644 index 462000e86f8f7b023db35b5079287594dddd941e..0000000000000000000000000000000000000000 --- a/spaces/xwsm/gpt/crazy_functional.py +++ /dev/null @@ -1,260 +0,0 @@ -from toolbox import HotReload # HotReload 的意思是热更新,修改函数插件后,不需要重启程序,代码直接生效 - - -def get_crazy_functions(): - ###################### 第一组插件 ########################### - from crazy_functions.读文章写摘要 import 读文章写摘要 - from crazy_functions.生成函数注释 import 批量生成函数注释 - from crazy_functions.解析项目源代码 import 解析项目本身 - from crazy_functions.解析项目源代码 import 解析一个Python项目 - from crazy_functions.解析项目源代码 import 解析一个C项目的头文件 - from crazy_functions.解析项目源代码 import 解析一个C项目 - from crazy_functions.解析项目源代码 import 解析一个Golang项目 - from crazy_functions.解析项目源代码 import 解析一个Java项目 - from crazy_functions.解析项目源代码 import 解析一个前端项目 - from crazy_functions.高级功能函数模板 import 高阶功能模板函数 - from crazy_functions.代码重写为全英文_多线程 import 全项目切换英文 - from crazy_functions.Latex全文润色 import Latex英文润色 - from crazy_functions.询问多个大语言模型 import 同时问询 - from crazy_functions.解析项目源代码 import 解析一个Lua项目 - from crazy_functions.解析项目源代码 import 解析一个CSharp项目 - from crazy_functions.总结word文档 import 总结word文档 - from crazy_functions.解析JupyterNotebook import 解析ipynb文件 - from crazy_functions.对话历史存档 import 对话历史存档 - from crazy_functions.对话历史存档 import 载入对话历史存档 - from crazy_functions.对话历史存档 import 删除所有本地对话历史记录 - - from crazy_functions.批量Markdown翻译 import Markdown英译中 - function_plugins = { - "解析整个Python项目": { - "Color": "stop", # 按钮颜色 - "Function": HotReload(解析一个Python项目) - }, - "载入对话历史存档(先上传存档或输入路径)": { - "Color": "stop", - "AsButton":False, - "Function": HotReload(载入对话历史存档) - }, - "删除所有本地对话历史记录(请谨慎操作)": { - "AsButton":False, - "Function": HotReload(删除所有本地对话历史记录) - }, - "[测试功能] 解析Jupyter Notebook文件": { - "Color": "stop", - "AsButton":False, - "Function": HotReload(解析ipynb文件), - "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False) - "ArgsReminder": "若输入0,则不解析notebook中的Markdown块", # 高级参数输入区的显示提示 - }, - "批量总结Word文档": { - "Color": "stop", - "Function": HotReload(总结word文档) - }, - "解析整个C++项目头文件": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(解析一个C项目的头文件) - }, - "解析整个C++项目(.cpp/.hpp/.c/.h)": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(解析一个C项目) - }, - "解析整个Go项目": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(解析一个Golang项目) - }, - "解析整个Java项目": { - "Color": "stop", # 按钮颜色 - "AsButton": 
False, # 加入下拉菜单中 - "Function": HotReload(解析一个Java项目) - }, - "解析整个前端项目(js,ts,css等)": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(解析一个前端项目) - }, - "解析整个Lua项目": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(解析一个Lua项目) - }, - "解析整个CSharp项目": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(解析一个CSharp项目) - }, - "读Tex论文写摘要": { - "Color": "stop", # 按钮颜色 - "Function": HotReload(读文章写摘要) - }, - "Markdown/Readme英译中": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Color": "stop", - "Function": HotReload(Markdown英译中) - }, - "批量生成函数注释": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(批量生成函数注释) - }, - "保存当前的对话": { - "Function": HotReload(对话历史存档) - }, - "[多线程Demo] 解析此项目本身(源码自译解)": { - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(解析项目本身) - }, - "[老旧的Demo] 把本项目源代码切换成全英文": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(全项目切换英文) - }, - "[插件demo] 历史上的今天": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Function": HotReload(高阶功能模板函数) - }, - - } - ###################### 第二组插件 ########################### - # [第二组插件]: 经过充分测试 - from crazy_functions.批量总结PDF文档 import 批量总结PDF文档 - from crazy_functions.批量总结PDF文档pdfminer import 批量总结PDF文档pdfminer - from crazy_functions.批量翻译PDF文档_多线程 import 批量翻译PDF文档 - from crazy_functions.谷歌检索小助手 import 谷歌检索小助手 - from crazy_functions.理解PDF文档内容 import 理解PDF文档内容标准文件输入 - from crazy_functions.Latex全文润色 import Latex中文润色 - from crazy_functions.Latex全文翻译 import Latex中译英 - from crazy_functions.Latex全文翻译 import Latex英译中 - from crazy_functions.批量Markdown翻译 import Markdown中译英 - - function_plugins.update({ - "批量翻译PDF文档(多线程)": { - "Color": "stop", - "AsButton": True, # 加入下拉菜单中 - "Function": HotReload(批量翻译PDF文档) - }, - "询问多个GPT模型": { - "Color": "stop", # 按钮颜色 - "Function": HotReload(同时问询) - }, - "[测试功能] 批量总结PDF文档": { - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Function": HotReload(批量总结PDF文档) - }, - "[测试功能] 批量总结PDF文档pdfminer": { - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(批量总结PDF文档pdfminer) - }, - "谷歌学术检索助手(输入谷歌学术搜索页url)": { - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(谷歌检索小助手) - }, - - "理解PDF文档内容 (模仿ChatPDF)": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(理解PDF文档内容标准文件输入) - }, - "[测试功能] 英文Latex项目全文润色(输入路径或上传压缩包)": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(Latex英文润色) - }, - "[测试功能] 中文Latex项目全文润色(输入路径或上传压缩包)": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(Latex中文润色) - }, - "Latex项目全文中译英(输入路径或上传压缩包)": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(Latex中译英) - }, - "Latex项目全文英译中(输入路径或上传压缩包)": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(Latex英译中) - }, - "批量Markdown中译英(输入路径或上传压缩包)": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(Markdown中译英) - }, - - - }) - - ###################### 第三组插件 ########################### - # [第三组插件]: 尚未充分测试的函数插件,放在这里 - from crazy_functions.下载arxiv论文翻译摘要 import 下载arxiv论文并翻译摘要 - 
function_plugins.update({ - "一键下载arxiv论文并翻译摘要(先在input输入编号,如1812.10695)": { - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(下载arxiv论文并翻译摘要) - } - }) - - from crazy_functions.联网的ChatGPT import 连接网络回答问题 - function_plugins.update({ - "连接网络回答问题(先输入问题,再点击按钮,需要访问谷歌)": { - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(连接网络回答问题) - } - }) - - from crazy_functions.解析项目源代码 import 解析任意code项目 - function_plugins.update({ - "解析项目源代码(手动指定和筛选源代码文件类型)": { - "Color": "stop", - "AsButton": False, - "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False) - "ArgsReminder": "输入时用逗号隔开, *代表通配符, 加了^代表不匹配; 不输入代表全部匹配。例如: \"*.c, ^*.cpp, config.toml, ^*.toml\"", # 高级参数输入区的显示提示 - "Function": HotReload(解析任意code项目) - }, - }) - from crazy_functions.询问多个大语言模型 import 同时问询_指定模型 - function_plugins.update({ - "询问多个GPT模型(手动指定询问哪些模型)": { - "Color": "stop", - "AsButton": False, - "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False) - "ArgsReminder": "支持任意数量的llm接口,用&符号分隔。例如chatglm&gpt-3.5-turbo&api2d-gpt-4", # 高级参数输入区的显示提示 - "Function": HotReload(同时问询_指定模型) - }, - }) - from crazy_functions.图片生成 import 图片生成 - function_plugins.update({ - "图片生成(先切换模型到openai或api2d)": { - "Color": "stop", - "AsButton": False, - "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False) - "ArgsReminder": "在这里输入分辨率, 如256x256(默认)", # 高级参数输入区的显示提示 - "Function": HotReload(图片生成) - }, - }) - from crazy_functions.总结音视频 import 总结音视频 - function_plugins.update({ - "批量总结音视频(输入路径或上传压缩包)": { - "Color": "stop", - "AsButton": False, - "AdvancedArgs": True, - "ArgsReminder": "调用openai api 使用whisper-1模型, 目前支持的格式:mp4, m4a, wav, mpga, mpeg, mp3。此处可以输入解析提示,例如:解析为简体中文(默认)。", - "Function": HotReload(总结音视频) - } - }) - ###################### 第n组插件 ########################### - return function_plugins diff --git a/spaces/xxbb/VITS-Umamusume-voice-synthesizer/ONNXVITS_models.py b/spaces/xxbb/VITS-Umamusume-voice-synthesizer/ONNXVITS_models.py deleted file mode 100644 index acd00238895d57ba878fd0211d5654250fb10061..0000000000000000000000000000000000000000 --- a/spaces/xxbb/VITS-Umamusume-voice-synthesizer/ONNXVITS_models.py +++ /dev/null @@ -1,509 +0,0 @@ -import copy -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import ONNXVITS_modules as modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from commons import init_weights, get_padding - - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. 
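- # NOTE: the line above overrides filter_channels with in_channels, so the filter_channels argument passed to the constructor is effectively ignored.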
- self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - self.w = None - self.reverse = None - self.noise_scale = None - def forward(self, x, x_mask, g=None): - w = self.w - reverse = self.reverse - noise_scale = self.noise_scale - - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2]) - logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - - self.emb = nn.Embedding(n_vocab, hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - 
kernel_size, - p_dropout) - self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths): - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - self.reverse = None - def forward(self, x, x_mask, g=None): - reverse = self.reverse - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask # x_in : [b, c, t] -> [b, h, t] - x = self.enc(x, x_mask, g=g) # x_in : [b, h, t], g : [b, h, 1], x = x_in + g - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask # z, m, logs : [b, h, t] - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)), - k, u, padding=(k-u)//2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel//(2**(i+1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, 
k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i*self.num_kernels+j](x) - else: - xs += self.resblocks[i*self.num_kernels+j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - 
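- # Run both the real waveform y and the generated waveform y_hat through each sub-discriminator, collecting the final scores along with the per-layer feature maps.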
y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=0, - gin_channels=0, - use_sdp=True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - - self.use_sdp = use_sdp - - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - - if n_speakers > 0: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, sid=None, noise_scale=.667, length_scale=1, noise_scale_w=.8, max_len=None): - torch.onnx.export( - self.enc_p, - (x, x_lengths), - "ONNX_net/enc_p.onnx", - input_names=["x", "x_lengths"], - output_names=["xout", "m_p", "logs_p", "x_mask"], - dynamic_axes={ - "x" : [1], - "xout" : [2], - "m_p" : [2], - "logs_p" : [2], - "x_mask" : [2] - }, - verbose=True, - ) - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - self.dp.reverse = True - self.dp.noise_scale = noise_scale_w - torch.onnx.export( - self.dp, - (x, x_mask, g), - "ONNX_net/dp.onnx", - input_names=["x", "x_mask", "g"], - output_names=["logw"], - dynamic_axes={ - "x" : [2], - "x_mask" : [2], - "logw" : [2] - }, - verbose=True, - ) - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] 
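- # Sample the prior latent with the reparameterization trick: z_p = m_p + eps * exp(logs_p) * noise_scale, where eps is standard Gaussian noise drawn by torch.randn_like.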
- - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - - self.flow.reverse = True - torch.onnx.export( - self.flow, - (z_p, y_mask, g), - "ONNX_net/flow.onnx", - input_names=["z_p", "y_mask", "g"], - output_names=["z"], - dynamic_axes={ - "z_p" : [2], - "y_mask" : [2], - "z" : [2] - }, - verbose=True, - ) - z = self.flow(z_p, y_mask, g=g) - z_in = (z * y_mask)[:,:,:max_len] - - torch.onnx.export( - self.dec, - (z_in, g), - "ONNX_net/dec.onnx", - input_names=["z_in", "g"], - output_names=["o"], - dynamic_axes={ - "z_in" : [2], - "o" : [2] - }, - verbose=True, - ) - o = self.dec(z_in, g=g) - return o diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/dpt/modeling_dpt.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/dpt/modeling_dpt.py deleted file mode 100644 index 187a6c36656a8ea040c21ef8566f5ffaf8ceeb38..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/dpt/modeling_dpt.py +++ /dev/null @@ -1,1339 +0,0 @@ -# coding=utf-8 -# Copyright 2022 Intel Labs, OpenMMLab and The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" PyTorch DPT (Dense Prediction Transformers) model. - -This implementation is heavily inspired by OpenMMLab's implementation, found here: -https://github.com/open-mmlab/mmsegmentation/blob/master/mmseg/models/decode_heads/dpt_head.py. - -""" - - -import collections.abc -import math -from dataclasses import dataclass -from typing import List, Optional, Set, Tuple, Union - -import torch -import torch.utils.checkpoint -from torch import nn -from torch.nn import CrossEntropyLoss - -from ...activations import ACT2FN -from ...file_utils import ( - add_code_sample_docstrings, - add_start_docstrings, - add_start_docstrings_to_model_forward, - replace_return_docstrings, -) -from ...modeling_outputs import BaseModelOutput, DepthEstimatorOutput, SemanticSegmenterOutput -from ...modeling_utils import PreTrainedModel -from ...pytorch_utils import find_pruneable_heads_and_indices, prune_linear_layer -from ...utils import ModelOutput, logging -from ..auto import AutoBackbone -from .configuration_dpt import DPTConfig - - -logger = logging.get_logger(__name__) - -# General docstring -_CONFIG_FOR_DOC = "DPTConfig" - -# Base docstring -_CHECKPOINT_FOR_DOC = "Intel/dpt-large" -_EXPECTED_OUTPUT_SHAPE = [1, 577, 1024] - - -DPT_PRETRAINED_MODEL_ARCHIVE_LIST = [ - "Intel/dpt-large", - "Intel/dpt-hybrid-midas", - # See all DPT models at https://huggingface.co/models?filter=dpt -] - - -@dataclass -class BaseModelOutputWithIntermediateActivations(ModelOutput): - """ - Base class for model's outputs that also contains intermediate activations that can be used at later stages. Useful - in the context of Vision models.: - - Args: - last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`): - Sequence of hidden-states at the output of the last layer of the model. 
- intermediate_activations (`tuple(torch.FloatTensor)`, *optional*): - Intermediate activations that can be used to compute hidden states of the model at various layers. - """ - - last_hidden_states: torch.FloatTensor = None - intermediate_activations: Optional[Tuple[torch.FloatTensor]] = None - - -@dataclass -class BaseModelOutputWithPoolingAndIntermediateActivations(ModelOutput): - """ - Base class for model's outputs that also contains a pooling of the last hidden states as well as intermediate - activations that can be used by the model at later stages. - - Args: - last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`): - Sequence of hidden-states at the output of the last layer of the model. - pooler_output (`torch.FloatTensor` of shape `(batch_size, hidden_size)`): - Last layer hidden-state of the first token of the sequence (classification token) after further processing - through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns - the classification token after processing through a linear layer and a tanh activation function. The linear - layer weights are trained from the next sentence prediction (classification) objective during pretraining. - hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + - one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. - attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights after the attention softmax, used to compute the weighted average in the self-attention - heads. - intermediate_activations (`tuple(torch.FloatTensor)`, *optional*): - Intermediate activations that can be used to compute hidden states of the model at various layers. - """ - - last_hidden_state: torch.FloatTensor = None - pooler_output: torch.FloatTensor = None - hidden_states: Optional[Tuple[torch.FloatTensor]] = None - attentions: Optional[Tuple[torch.FloatTensor]] = None - intermediate_activations: Optional[Tuple[torch.FloatTensor]] = None - - -class DPTViTHybridEmbeddings(nn.Module): - """ - This class turns `pixel_values` of shape `(batch_size, num_channels, height, width)` into the initial - `hidden_states` (patch embeddings) of shape `(batch_size, seq_length, hidden_size)` to be consumed by a - Transformer. 
- """ - - def __init__(self, config, feature_size=None): - super().__init__() - image_size, patch_size = config.image_size, config.patch_size - num_channels, hidden_size = config.num_channels, config.hidden_size - - image_size = image_size if isinstance(image_size, collections.abc.Iterable) else (image_size, image_size) - patch_size = patch_size if isinstance(patch_size, collections.abc.Iterable) else (patch_size, patch_size) - num_patches = (image_size[1] // patch_size[1]) * (image_size[0] // patch_size[0]) - - self.backbone = AutoBackbone.from_config(config.backbone_config) - feature_dim = self.backbone.channels[-1] - if len(config.backbone_config.out_features) != 3: - raise ValueError( - f"Expected backbone to have 3 output features, got {len(config.backbone_config.out_features)}" - ) - self.residual_feature_map_index = [0, 1] # Always take the output of the first and second backbone stage - - if feature_size is None: - feat_map_shape = config.backbone_featmap_shape - feature_size = feat_map_shape[-2:] - feature_dim = feat_map_shape[1] - else: - feature_size = ( - feature_size if isinstance(feature_size, collections.abc.Iterable) else (feature_size, feature_size) - ) - feature_dim = self.backbone.channels[-1] - - self.image_size = image_size - self.patch_size = patch_size[0] - self.num_channels = num_channels - - self.projection = nn.Conv2d(feature_dim, hidden_size, kernel_size=1) - - self.cls_token = nn.Parameter(torch.zeros(1, 1, config.hidden_size)) - self.position_embeddings = nn.Parameter(torch.zeros(1, num_patches + 1, config.hidden_size)) - - def _resize_pos_embed(self, posemb, grid_size_height, grid_size_width, start_index=1): - posemb_tok = posemb[:, :start_index] - posemb_grid = posemb[0, start_index:] - - old_grid_size = int(math.sqrt(len(posemb_grid))) - - posemb_grid = posemb_grid.reshape(1, old_grid_size, old_grid_size, -1).permute(0, 3, 1, 2) - posemb_grid = nn.functional.interpolate(posemb_grid, size=(grid_size_height, grid_size_width), mode="bilinear") - posemb_grid = posemb_grid.permute(0, 2, 3, 1).reshape(1, grid_size_height * grid_size_width, -1) - - posemb = torch.cat([posemb_tok, posemb_grid], dim=1) - - return posemb - - def forward( - self, pixel_values: torch.Tensor, interpolate_pos_encoding: bool = False, return_dict: bool = False - ) -> torch.Tensor: - batch_size, num_channels, height, width = pixel_values.shape - if num_channels != self.num_channels: - raise ValueError( - "Make sure that the channel dimension of the pixel values match with the one set in the configuration." - ) - if not interpolate_pos_encoding: - if height != self.image_size[0] or width != self.image_size[1]: - raise ValueError( - f"Input image size ({height}*{width}) doesn't match model" - f" ({self.image_size[0]}*{self.image_size[1]})." 
- ) - - position_embeddings = self._resize_pos_embed( - self.position_embeddings, height // self.patch_size, width // self.patch_size - ) - - backbone_output = self.backbone(pixel_values) - - features = backbone_output.feature_maps[-1] - - # Retrieve also the intermediate activations to use them at later stages - output_hidden_states = [backbone_output.feature_maps[index] for index in self.residual_feature_map_index] - - embeddings = self.projection(features).flatten(2).transpose(1, 2) - - cls_tokens = self.cls_token.expand(batch_size, -1, -1) - embeddings = torch.cat((cls_tokens, embeddings), dim=1) - - # add positional encoding to each token - embeddings = embeddings + position_embeddings - - if not return_dict: - return (embeddings, output_hidden_states) - - # Return hidden states and intermediate activations - return BaseModelOutputWithIntermediateActivations( - last_hidden_states=embeddings, - intermediate_activations=output_hidden_states, - ) - - -class DPTViTEmbeddings(nn.Module): - """ - Construct the CLS token, position and patch embeddings. - - """ - - def __init__(self, config): - super().__init__() - - self.cls_token = nn.Parameter(torch.zeros(1, 1, config.hidden_size)) - self.patch_embeddings = DPTViTPatchEmbeddings(config) - num_patches = self.patch_embeddings.num_patches - self.position_embeddings = nn.Parameter(torch.zeros(1, num_patches + 1, config.hidden_size)) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - self.config = config - - def _resize_pos_embed(self, posemb, grid_size_height, grid_size_width, start_index=1): - posemb_tok = posemb[:, :start_index] - posemb_grid = posemb[0, start_index:] - - old_grid_size = int(math.sqrt(len(posemb_grid))) - - posemb_grid = posemb_grid.reshape(1, old_grid_size, old_grid_size, -1).permute(0, 3, 1, 2) - posemb_grid = nn.functional.interpolate(posemb_grid, size=(grid_size_height, grid_size_width), mode="bilinear") - posemb_grid = posemb_grid.permute(0, 2, 3, 1).reshape(1, grid_size_height * grid_size_width, -1) - - posemb = torch.cat([posemb_tok, posemb_grid], dim=1) - - return posemb - - def forward(self, pixel_values, return_dict=False): - batch_size, num_channels, height, width = pixel_values.shape - - # possibly interpolate position encodings to handle varying image sizes - patch_size = self.config.patch_size - position_embeddings = self._resize_pos_embed( - self.position_embeddings, height // patch_size, width // patch_size - ) - - embeddings = self.patch_embeddings(pixel_values) - - batch_size, seq_len, _ = embeddings.size() - - # add the [CLS] token to the embedded patch tokens - cls_tokens = self.cls_token.expand(batch_size, -1, -1) - embeddings = torch.cat((cls_tokens, embeddings), dim=1) - - # add positional encoding to each token - embeddings = embeddings + position_embeddings - - embeddings = self.dropout(embeddings) - - if not return_dict: - return (embeddings,) - - return BaseModelOutputWithIntermediateActivations(last_hidden_states=embeddings) - - -class DPTViTPatchEmbeddings(nn.Module): - """ - Image to Patch Embedding. 
- - """ - - def __init__(self, config): - super().__init__() - image_size, patch_size = config.image_size, config.patch_size - num_channels, hidden_size = config.num_channels, config.hidden_size - - image_size = image_size if isinstance(image_size, collections.abc.Iterable) else (image_size, image_size) - patch_size = patch_size if isinstance(patch_size, collections.abc.Iterable) else (patch_size, patch_size) - num_patches = (image_size[1] // patch_size[1]) * (image_size[0] // patch_size[0]) - self.image_size = image_size - self.patch_size = patch_size - self.num_channels = num_channels - self.num_patches = num_patches - - self.projection = nn.Conv2d(num_channels, hidden_size, kernel_size=patch_size, stride=patch_size) - - def forward(self, pixel_values): - batch_size, num_channels, height, width = pixel_values.shape - if num_channels != self.num_channels: - raise ValueError( - "Make sure that the channel dimension of the pixel values match with the one set in the configuration." - ) - embeddings = self.projection(pixel_values).flatten(2).transpose(1, 2) - return embeddings - - -# Copied from transformers.models.vit.modeling_vit.ViTSelfAttention with ViT->DPT -class DPTViTSelfAttention(nn.Module): - def __init__(self, config: DPTConfig) -> None: - super().__init__() - if config.hidden_size % config.num_attention_heads != 0 and not hasattr(config, "embedding_size"): - raise ValueError( - f"The hidden size {config.hidden_size,} is not a multiple of the number of attention " - f"heads {config.num_attention_heads}." - ) - - self.num_attention_heads = config.num_attention_heads - self.attention_head_size = int(config.hidden_size / config.num_attention_heads) - self.all_head_size = self.num_attention_heads * self.attention_head_size - - self.query = nn.Linear(config.hidden_size, self.all_head_size, bias=config.qkv_bias) - self.key = nn.Linear(config.hidden_size, self.all_head_size, bias=config.qkv_bias) - self.value = nn.Linear(config.hidden_size, self.all_head_size, bias=config.qkv_bias) - - self.dropout = nn.Dropout(config.attention_probs_dropout_prob) - - def transpose_for_scores(self, x: torch.Tensor) -> torch.Tensor: - new_x_shape = x.size()[:-1] + (self.num_attention_heads, self.attention_head_size) - x = x.view(new_x_shape) - return x.permute(0, 2, 1, 3) - - def forward( - self, hidden_states, head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False - ) -> Union[Tuple[torch.Tensor, torch.Tensor], Tuple[torch.Tensor]]: - mixed_query_layer = self.query(hidden_states) - - key_layer = self.transpose_for_scores(self.key(hidden_states)) - value_layer = self.transpose_for_scores(self.value(hidden_states)) - query_layer = self.transpose_for_scores(mixed_query_layer) - - # Take the dot product between "query" and "key" to get the raw attention scores. - attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2)) - - attention_scores = attention_scores / math.sqrt(self.attention_head_size) - - # Normalize the attention scores to probabilities. - attention_probs = nn.functional.softmax(attention_scores, dim=-1) - - # This is actually dropping out entire tokens to attend to, which might - # seem a bit unusual, but is taken from the original Transformer paper. 
- attention_probs = self.dropout(attention_probs) - - # Mask heads if we want to - if head_mask is not None: - attention_probs = attention_probs * head_mask - - context_layer = torch.matmul(attention_probs, value_layer) - - context_layer = context_layer.permute(0, 2, 1, 3).contiguous() - new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,) - context_layer = context_layer.view(new_context_layer_shape) - - outputs = (context_layer, attention_probs) if output_attentions else (context_layer,) - - return outputs - - -# Copied from transformers.models.vit.modeling_vit.ViTSelfOutput with ViT->DPT -class DPTViTSelfOutput(nn.Module): - """ - The residual connection is defined in DPTLayer instead of here (as is the case with other models), due to the - layernorm applied before each block. - """ - - def __init__(self, config: DPTConfig) -> None: - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.hidden_size) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - - def forward(self, hidden_states: torch.Tensor, input_tensor: torch.Tensor) -> torch.Tensor: - hidden_states = self.dense(hidden_states) - hidden_states = self.dropout(hidden_states) - - return hidden_states - - -class DPTViTAttention(nn.Module): - def __init__(self, config: DPTConfig) -> None: - super().__init__() - self.attention = DPTViTSelfAttention(config) - self.output = DPTViTSelfOutput(config) - self.pruned_heads = set() - - # Copied from transformers.models.vit.modeling_vit.ViTAttention.prune_heads - def prune_heads(self, heads: Set[int]) -> None: - if len(heads) == 0: - return - heads, index = find_pruneable_heads_and_indices( - heads, self.attention.num_attention_heads, self.attention.attention_head_size, self.pruned_heads - ) - - # Prune linear layers - self.attention.query = prune_linear_layer(self.attention.query, index) - self.attention.key = prune_linear_layer(self.attention.key, index) - self.attention.value = prune_linear_layer(self.attention.value, index) - self.output.dense = prune_linear_layer(self.output.dense, index, dim=1) - - # Update hyper params and store pruned heads - self.attention.num_attention_heads = self.attention.num_attention_heads - len(heads) - self.attention.all_head_size = self.attention.attention_head_size * self.attention.num_attention_heads - self.pruned_heads = self.pruned_heads.union(heads) - - # Copied from transformers.models.vit.modeling_vit.ViTAttention.forward - def forward( - self, - hidden_states: torch.Tensor, - head_mask: Optional[torch.Tensor] = None, - output_attentions: bool = False, - ) -> Union[Tuple[torch.Tensor, torch.Tensor], Tuple[torch.Tensor]]: - self_outputs = self.attention(hidden_states, head_mask, output_attentions) - - attention_output = self.output(self_outputs[0], hidden_states) - - outputs = (attention_output,) + self_outputs[1:] # add attentions if we output them - return outputs - - -# Copied from transformers.models.vit.modeling_vit.ViTIntermediate with ViT->DPT -class DPTViTIntermediate(nn.Module): - def __init__(self, config: DPTConfig) -> None: - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.intermediate_size) - if isinstance(config.hidden_act, str): - self.intermediate_act_fn = ACT2FN[config.hidden_act] - else: - self.intermediate_act_fn = config.hidden_act - - def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: - hidden_states = self.dense(hidden_states) - hidden_states = self.intermediate_act_fn(hidden_states) - - return hidden_states - - -# Copied from 
transformers.models.vit.modeling_vit.ViTOutput with ViT->DPT -class DPTViTOutput(nn.Module): - def __init__(self, config: DPTConfig) -> None: - super().__init__() - self.dense = nn.Linear(config.intermediate_size, config.hidden_size) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - - def forward(self, hidden_states: torch.Tensor, input_tensor: torch.Tensor) -> torch.Tensor: - hidden_states = self.dense(hidden_states) - hidden_states = self.dropout(hidden_states) - - hidden_states = hidden_states + input_tensor - - return hidden_states - - -# copied from transformers.models.vit.modeling_vit.ViTLayer with ViTConfig->DPTConfig, ViTAttention->DPTViTAttention, ViTIntermediate->DPTViTIntermediate, ViTOutput->DPTViTOutput -class DPTViTLayer(nn.Module): - """This corresponds to the Block class in the timm implementation.""" - - def __init__(self, config: DPTConfig) -> None: - super().__init__() - self.chunk_size_feed_forward = config.chunk_size_feed_forward - self.seq_len_dim = 1 - self.attention = DPTViTAttention(config) - self.intermediate = DPTViTIntermediate(config) - self.output = DPTViTOutput(config) - self.layernorm_before = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - self.layernorm_after = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - - def forward( - self, - hidden_states: torch.Tensor, - head_mask: Optional[torch.Tensor] = None, - output_attentions: bool = False, - ) -> Union[Tuple[torch.Tensor, torch.Tensor], Tuple[torch.Tensor]]: - self_attention_outputs = self.attention( - self.layernorm_before(hidden_states), # in ViT, layernorm is applied before self-attention - head_mask, - output_attentions=output_attentions, - ) - attention_output = self_attention_outputs[0] - outputs = self_attention_outputs[1:] # add self attentions if we output attention weights - - # first residual connection - hidden_states = attention_output + hidden_states - - # in ViT, layernorm is also applied after self-attention - layer_output = self.layernorm_after(hidden_states) - layer_output = self.intermediate(layer_output) - - # second residual connection is done here - layer_output = self.output(layer_output, hidden_states) - - outputs = (layer_output,) + outputs - - return outputs - - -# copied from transformers.models.vit.modeling_vit.ViTEncoder with ViTConfig -> DPTConfig, ViTLayer->DPTViTLayer -class DPTViTEncoder(nn.Module): - def __init__(self, config: DPTConfig) -> None: - super().__init__() - self.config = config - self.layer = nn.ModuleList([DPTViTLayer(config) for _ in range(config.num_hidden_layers)]) - self.gradient_checkpointing = False - - def forward( - self, - hidden_states: torch.Tensor, - head_mask: Optional[torch.Tensor] = None, - output_attentions: bool = False, - output_hidden_states: bool = False, - return_dict: bool = True, - ) -> Union[tuple, BaseModelOutput]: - all_hidden_states = () if output_hidden_states else None - all_self_attentions = () if output_attentions else None - - for i, layer_module in enumerate(self.layer): - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - layer_head_mask = head_mask[i] if head_mask is not None else None - - if self.gradient_checkpointing and self.training: - - def create_custom_forward(module): - def custom_forward(*inputs): - return module(*inputs, output_attentions) - - return custom_forward - - layer_outputs = torch.utils.checkpoint.checkpoint( - create_custom_forward(layer_module), - hidden_states, - layer_head_mask, - ) - else: - layer_outputs = 
layer_module(hidden_states, layer_head_mask, output_attentions) - - hidden_states = layer_outputs[0] - - if output_attentions: - all_self_attentions = all_self_attentions + (layer_outputs[1],) - - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - if not return_dict: - return tuple(v for v in [hidden_states, all_hidden_states, all_self_attentions] if v is not None) - return BaseModelOutput( - last_hidden_state=hidden_states, - hidden_states=all_hidden_states, - attentions=all_self_attentions, - ) - - -class DPTReassembleStage(nn.Module): - """ - This class reassembles the hidden states of the backbone into image-like feature representations at various - resolutions. - - This happens in 3 stages: - 1. Map the N + 1 tokens to a set of N tokens, by taking into account the readout ([CLS]) token according to - `config.readout_type`. - 2. Project the channel dimension of the hidden states according to `config.neck_hidden_sizes`. - 3. Resizing the spatial dimensions (height, width). - - Args: - config (`[DPTConfig]`): - Model configuration class defining the model architecture. - """ - - def __init__(self, config): - super().__init__() - - self.config = config - self.layers = nn.ModuleList() - if config.is_hybrid: - self._init_reassemble_dpt_hybrid(config) - else: - self._init_reassemble_dpt(config) - - self.neck_ignore_stages = config.neck_ignore_stages - - def _init_reassemble_dpt_hybrid(self, config): - r""" " - For DPT-Hybrid the first 2 reassemble layers are set to `nn.Identity()`, please check the official - implementation: https://github.com/isl-org/DPT/blob/f43ef9e08d70a752195028a51be5e1aff227b913/dpt/vit.py#L438 - for more details. - """ - for i, factor in zip(range(len(config.neck_hidden_sizes)), config.reassemble_factors): - if i <= 1: - self.layers.append(nn.Identity()) - elif i > 1: - self.layers.append(DPTReassembleLayer(config, channels=config.neck_hidden_sizes[i], factor=factor)) - - if config.readout_type != "project": - raise ValueError(f"Readout type {config.readout_type} is not supported for DPT-Hybrid.") - - # When using DPT-Hybrid the readout type is set to "project". The sanity check is done on the config file - self.readout_projects = nn.ModuleList() - for i in range(len(config.neck_hidden_sizes)): - if i <= 1: - self.readout_projects.append(nn.Sequential(nn.Identity())) - elif i > 1: - self.readout_projects.append( - nn.Sequential(nn.Linear(2 * config.hidden_size, config.hidden_size), ACT2FN[config.hidden_act]) - ) - - def _init_reassemble_dpt(self, config): - for i, factor in zip(range(len(config.neck_hidden_sizes)), config.reassemble_factors): - self.layers.append(DPTReassembleLayer(config, channels=config.neck_hidden_sizes[i], factor=factor)) - - if config.readout_type == "project": - self.readout_projects = nn.ModuleList() - for _ in range(len(config.neck_hidden_sizes)): - self.readout_projects.append( - nn.Sequential(nn.Linear(2 * config.hidden_size, config.hidden_size), ACT2FN[config.hidden_act]) - ) - - def forward(self, hidden_states: List[torch.Tensor]) -> List[torch.Tensor]: - """ - Args: - hidden_states (`List[torch.FloatTensor]`, each of shape `(batch_size, sequence_length + 1, hidden_size)`): - List of hidden states from the backbone. 
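- Returns:
- `List[torch.Tensor]`: One feature map per input hidden state; stages not listed in `config.neck_ignore_stages` are reshaped into image-like `(batch_size, channels, height, width)` maps by the corresponding reassemble layer.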
- """ - out = [] - - for i, hidden_state in enumerate(hidden_states): - if i not in self.neck_ignore_stages: - # reshape to (B, C, H, W) - hidden_state, cls_token = hidden_state[:, 1:], hidden_state[:, 0] - batch_size, sequence_length, num_channels = hidden_state.shape - size = int(math.sqrt(sequence_length)) - hidden_state = hidden_state.reshape(batch_size, size, size, num_channels) - hidden_state = hidden_state.permute(0, 3, 1, 2).contiguous() - - feature_shape = hidden_state.shape - if self.config.readout_type == "project": - # reshape to (B, H*W, C) - hidden_state = hidden_state.flatten(2).permute((0, 2, 1)) - readout = cls_token.unsqueeze(1).expand_as(hidden_state) - # concatenate the readout token to the hidden states and project - hidden_state = self.readout_projects[i](torch.cat((hidden_state, readout), -1)) - # reshape back to (B, C, H, W) - hidden_state = hidden_state.permute(0, 2, 1).reshape(feature_shape) - elif self.config.readout_type == "add": - hidden_state = hidden_state.flatten(2) + cls_token.unsqueeze(-1) - hidden_state = hidden_state.reshape(feature_shape) - hidden_state = self.layers[i](hidden_state) - out.append(hidden_state) - - return out - - -class DPTReassembleLayer(nn.Module): - def __init__(self, config, channels, factor): - super().__init__() - # projection - self.projection = nn.Conv2d(in_channels=config.hidden_size, out_channels=channels, kernel_size=1) - - # up/down sampling depending on factor - if factor > 1: - self.resize = nn.ConvTranspose2d(channels, channels, kernel_size=factor, stride=factor, padding=0) - elif factor == 1: - self.resize = nn.Identity() - elif factor < 1: - # so should downsample - self.resize = nn.Conv2d(channels, channels, kernel_size=3, stride=int(1 / factor), padding=1) - - def forward(self, hidden_state): - hidden_state = self.projection(hidden_state) - hidden_state = self.resize(hidden_state) - return hidden_state - - -class DPTFeatureFusionStage(nn.Module): - def __init__(self, config): - super().__init__() - self.layers = nn.ModuleList() - for _ in range(len(config.neck_hidden_sizes)): - self.layers.append(DPTFeatureFusionLayer(config)) - - def forward(self, hidden_states): - # reversing the hidden_states, we start from the last - hidden_states = hidden_states[::-1] - - fused_hidden_states = [] - # first layer only uses the last hidden_state - fused_hidden_state = self.layers[0](hidden_states[0]) - fused_hidden_states.append(fused_hidden_state) - # looping from the last layer to the second - for hidden_state, layer in zip(hidden_states[1:], self.layers[1:]): - fused_hidden_state = layer(fused_hidden_state, hidden_state) - fused_hidden_states.append(fused_hidden_state) - - return fused_hidden_states - - -class DPTPreActResidualLayer(nn.Module): - """ - ResidualConvUnit, pre-activate residual unit. - - Args: - config (`[DPTConfig]`): - Model configuration class defining the model architecture. 
- """ - - def __init__(self, config): - super().__init__() - - self.use_batch_norm = config.use_batch_norm_in_fusion_residual - self.activation1 = ACT2FN["relu"] - self.convolution1 = nn.Conv2d( - config.fusion_hidden_size, - config.fusion_hidden_size, - kernel_size=3, - stride=1, - padding=1, - bias=not self.use_batch_norm, - ) - - self.activation2 = ACT2FN["relu"] - self.convolution2 = nn.Conv2d( - config.fusion_hidden_size, - config.fusion_hidden_size, - kernel_size=3, - stride=1, - padding=1, - bias=not self.use_batch_norm, - ) - - if self.use_batch_norm: - self.batch_norm1 = nn.BatchNorm2d(config.fusion_hidden_size) - self.batch_norm2 = nn.BatchNorm2d(config.fusion_hidden_size) - - def forward(self, hidden_state: torch.Tensor) -> torch.Tensor: - residual = hidden_state - hidden_state = self.activation1(hidden_state) - - hidden_state = self.convolution1(hidden_state) - - if self.use_batch_norm: - hidden_state = self.batch_norm1(hidden_state) - - hidden_state = self.activation2(hidden_state) - hidden_state = self.convolution2(hidden_state) - - if self.use_batch_norm: - hidden_state = self.batch_norm2(hidden_state) - - return hidden_state + residual - - -class DPTFeatureFusionLayer(nn.Module): - """Feature fusion layer, merges feature maps from different stages. - - Args: - config (`[DPTConfig]`): - Model configuration class defining the model architecture. - align_corners (`bool`, *optional*, defaults to `True`): - The align_corner setting for bilinear upsample. - """ - - def __init__(self, config, align_corners=True): - super().__init__() - - self.align_corners = align_corners - - self.projection = nn.Conv2d(config.fusion_hidden_size, config.fusion_hidden_size, kernel_size=1, bias=True) - - self.residual_layer1 = DPTPreActResidualLayer(config) - self.residual_layer2 = DPTPreActResidualLayer(config) - - def forward(self, hidden_state, residual=None): - if residual is not None: - if hidden_state.shape != residual.shape: - residual = nn.functional.interpolate( - residual, size=(hidden_state.shape[2], hidden_state.shape[3]), mode="bilinear", align_corners=False - ) - hidden_state = hidden_state + self.residual_layer1(residual) - - hidden_state = self.residual_layer2(hidden_state) - hidden_state = nn.functional.interpolate( - hidden_state, scale_factor=2, mode="bilinear", align_corners=self.align_corners - ) - hidden_state = self.projection(hidden_state) - - return hidden_state - - -class DPTPreTrainedModel(PreTrainedModel): - """ - An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained - models. - """ - - config_class = DPTConfig - base_model_prefix = "dpt" - main_input_name = "pixel_values" - supports_gradient_checkpointing = True - - def _init_weights(self, module): - """Initialize the weights""" - if isinstance(module, (nn.Linear, nn.Conv2d, nn.ConvTranspose2d)): - # Slightly different from the TF version which uses truncated_normal for initialization - # cf https://github.com/pytorch/pytorch/pull/5617 - module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) - if module.bias is not None: - module.bias.data.zero_() - elif isinstance(module, nn.LayerNorm): - module.bias.data.zero_() - module.weight.data.fill_(1.0) - - def _set_gradient_checkpointing(self, module, value=False): - if isinstance(module, DPTViTEncoder): - module.gradient_checkpointing = value - - -DPT_START_DOCSTRING = r""" - This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. 
Use it - as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and - behavior. - - Parameters: - config ([`ViTConfig`]): Model configuration class with all the parameters of the model. - Initializing with a config file does not load the weights associated with the model, only the - configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. -""" - -DPT_INPUTS_DOCSTRING = r""" - Args: - pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`): - Pixel values. Pixel values can be obtained using [`AutoImageProcessor`]. See [`DPTImageProcessor.__call__`] - for details. - - head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned - tensors for more detail. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~file_utils.ModelOutput`] instead of a plain tuple. -""" - - -@add_start_docstrings( - "The bare DPT Model transformer outputting raw hidden-states without any specific head on top.", - DPT_START_DOCSTRING, -) -class DPTModel(DPTPreTrainedModel): - def __init__(self, config, add_pooling_layer=True): - super().__init__(config) - self.config = config - - # vit encoder - if config.is_hybrid: - self.embeddings = DPTViTHybridEmbeddings(config) - else: - self.embeddings = DPTViTEmbeddings(config) - self.encoder = DPTViTEncoder(config) - - self.layernorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - self.pooler = DPTViTPooler(config) if add_pooling_layer else None - - # Initialize weights and apply final processing - self.post_init() - - def get_input_embeddings(self): - if self.config.is_hybrid: - return self.embeddings - else: - return self.embeddings.patch_embeddings - - def _prune_heads(self, heads_to_prune): - """ - Prunes heads of the model. 
heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base - class PreTrainedModel - """ - for layer, heads in heads_to_prune.items(): - self.encoder.layer[layer].attention.prune_heads(heads) - - @add_start_docstrings_to_model_forward(DPT_INPUTS_DOCSTRING) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=BaseModelOutputWithPoolingAndIntermediateActivations, - config_class=_CONFIG_FOR_DOC, - modality="vision", - expected_output=_EXPECTED_OUTPUT_SHAPE, - ) - def forward( - self, - pixel_values: torch.FloatTensor, - head_mask: Optional[torch.FloatTensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, BaseModelOutputWithPoolingAndIntermediateActivations]: - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - - embedding_output = self.embeddings(pixel_values, return_dict=return_dict) - - embedding_last_hidden_states = embedding_output[0] if not return_dict else embedding_output.last_hidden_states - - encoder_outputs = self.encoder( - embedding_last_hidden_states, - head_mask=head_mask, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - sequence_output = encoder_outputs[0] - sequence_output = self.layernorm(sequence_output) - pooled_output = self.pooler(sequence_output) if self.pooler is not None else None - - if not return_dict: - head_outputs = (sequence_output, pooled_output) if pooled_output is not None else (sequence_output,) - return head_outputs + encoder_outputs[1:] + embedding_output[1:] - - return BaseModelOutputWithPoolingAndIntermediateActivations( - last_hidden_state=sequence_output, - pooler_output=pooled_output, - hidden_states=encoder_outputs.hidden_states, - attentions=encoder_outputs.attentions, - intermediate_activations=embedding_output.intermediate_activations, - ) - - -# Copied from transformers.models.vit.modeling_vit.ViTPooler with ViT->DPT -class DPTViTPooler(nn.Module): - def __init__(self, config: DPTConfig): - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.hidden_size) - self.activation = nn.Tanh() - - def forward(self, hidden_states): - # We "pool" the model by simply taking the hidden state corresponding - # to the first token. - first_token_tensor = hidden_states[:, 0] - pooled_output = self.dense(first_token_tensor) - pooled_output = self.activation(pooled_output) - return pooled_output - - -class DPTNeck(nn.Module): - """ - DPTNeck. A neck is a module that is normally used between the backbone and the head. It takes a list of tensors as - input and produces another list of tensors as output. For DPT, it includes 2 stages: - - * DPTReassembleStage - * DPTFeatureFusionStage. - - Args: - config (dict): config dict. 
- """ - - def __init__(self, config): - super().__init__() - self.config = config - - # postprocessing - self.reassemble_stage = DPTReassembleStage(config) - self.convs = nn.ModuleList() - for channel in config.neck_hidden_sizes: - self.convs.append(nn.Conv2d(channel, config.fusion_hidden_size, kernel_size=3, padding=1, bias=False)) - - # fusion - self.fusion_stage = DPTFeatureFusionStage(config) - - def forward(self, hidden_states: List[torch.Tensor]) -> List[torch.Tensor]: - if not isinstance(hidden_states, list): - raise ValueError("hidden_states should be a list of tensors") - - if len(hidden_states) != len(self.config.neck_hidden_sizes): - raise ValueError("The number of hidden states should be equal to the number of neck hidden sizes.") - - # postprocess hidden states - features = self.reassemble_stage(hidden_states) - - features = [self.convs[i](feature) for i, feature in enumerate(features)] - - # fusion blocks - output = self.fusion_stage(features) - - return output - - -class DPTDepthEstimationHead(nn.Module): - """ - Output head head consisting of 3 convolutional layers. It progressively halves the feature dimension and upsamples - the predictions to the input resolution after the first convolutional layer (details can be found in the paper's - supplementary material). - """ - - def __init__(self, config): - super().__init__() - - self.config = config - - features = config.fusion_hidden_size - self.head = nn.Sequential( - nn.Conv2d(features, features // 2, kernel_size=3, stride=1, padding=1), - nn.Upsample(scale_factor=2, mode="bilinear", align_corners=True), - nn.Conv2d(features // 2, 32, kernel_size=3, stride=1, padding=1), - ACT2FN["relu"], - nn.Conv2d(32, 1, kernel_size=1, stride=1, padding=0), - ACT2FN["relu"], - ) - - def forward(self, hidden_states: List[torch.Tensor]) -> torch.Tensor: - # use last features - hidden_states = hidden_states[self.config.head_in_index] - - predicted_depth = self.head(hidden_states) - - predicted_depth = predicted_depth.squeeze(dim=1) - - return predicted_depth - - -@add_start_docstrings( - """ - DPT Model with a depth estimation head on top (consisting of 3 convolutional layers) e.g. for KITTI, NYUv2. - """, - DPT_START_DOCSTRING, -) -class DPTForDepthEstimation(DPTPreTrainedModel): - def __init__(self, config): - super().__init__(config) - - self.dpt = DPTModel(config, add_pooling_layer=False) - - # Neck - self.neck = DPTNeck(config) - - # Depth estimation head - self.head = DPTDepthEstimationHead(config) - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(DPT_INPUTS_DOCSTRING) - @replace_return_docstrings(output_type=DepthEstimatorOutput, config_class=_CONFIG_FOR_DOC) - def forward( - self, - pixel_values: torch.FloatTensor, - head_mask: Optional[torch.FloatTensor] = None, - labels: Optional[torch.LongTensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple[torch.Tensor], DepthEstimatorOutput]: - r""" - labels (`torch.LongTensor` of shape `(batch_size, height, width)`, *optional*): - Ground truth depth estimation maps for computing the loss. 
- - Returns: - - Examples: - ```python - >>> from transformers import AutoImageProcessor, DPTForDepthEstimation - >>> import torch - >>> import numpy as np - >>> from PIL import Image - >>> import requests - - >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" - >>> image = Image.open(requests.get(url, stream=True).raw) - - >>> image_processor = AutoImageProcessor.from_pretrained("Intel/dpt-large") - >>> model = DPTForDepthEstimation.from_pretrained("Intel/dpt-large") - - >>> # prepare image for the model - >>> inputs = image_processor(images=image, return_tensors="pt") - - >>> with torch.no_grad(): - ... outputs = model(**inputs) - ... predicted_depth = outputs.predicted_depth - - >>> # interpolate to original size - >>> prediction = torch.nn.functional.interpolate( - ... predicted_depth.unsqueeze(1), - ... size=image.size[::-1], - ... mode="bicubic", - ... align_corners=False, - ... ) - - >>> # visualize the prediction - >>> output = prediction.squeeze().cpu().numpy() - >>> formatted = (output * 255 / np.max(output)).astype("uint8") - >>> depth = Image.fromarray(formatted) - ```""" - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - - outputs = self.dpt( - pixel_values, - head_mask=head_mask, - output_attentions=output_attentions, - output_hidden_states=True, # we need the intermediate hidden states - return_dict=return_dict, - ) - - hidden_states = outputs.hidden_states if return_dict else outputs[1] - - # only keep certain features based on config.backbone_out_indices - # note that the hidden_states also include the initial embeddings - if not self.config.is_hybrid: - hidden_states = [ - feature for idx, feature in enumerate(hidden_states[1:]) if idx in self.config.backbone_out_indices - ] - else: - backbone_hidden_states = outputs.intermediate_activations if return_dict else list(outputs[-1]) - backbone_hidden_states.extend( - feature for idx, feature in enumerate(hidden_states[1:]) if idx in self.config.backbone_out_indices[2:] - ) - - hidden_states = backbone_hidden_states - - hidden_states = self.neck(hidden_states) - - predicted_depth = self.head(hidden_states) - - loss = None - if labels is not None: - raise NotImplementedError("Training is not implemented yet") - - if not return_dict: - if output_hidden_states: - output = (predicted_depth,) + outputs[1:] - else: - output = (predicted_depth,) + outputs[2:] - return ((loss,) + output) if loss is not None else output - - return DepthEstimatorOutput( - loss=loss, - predicted_depth=predicted_depth, - hidden_states=outputs.hidden_states if output_hidden_states else None, - attentions=outputs.attentions, - ) - - -class DPTSemanticSegmentationHead(nn.Module): - def __init__(self, config): - super().__init__() - - self.config = config - - features = config.fusion_hidden_size - self.head = nn.Sequential( - nn.Conv2d(features, features, kernel_size=3, padding=1, bias=False), - nn.BatchNorm2d(features), - ACT2FN["relu"], - nn.Dropout(config.semantic_classifier_dropout), - nn.Conv2d(features, config.num_labels, kernel_size=1), - nn.Upsample(scale_factor=2, mode="bilinear", align_corners=True), - ) - - def forward(self, hidden_states: List[torch.Tensor]) -> torch.Tensor: - # use last features - hidden_states = hidden_states[self.config.head_in_index] - - logits = self.head(hidden_states) - - return logits - - -class DPTAuxiliaryHead(nn.Module): - def 
__init__(self, config): - super().__init__() - - features = config.fusion_hidden_size - self.head = nn.Sequential( - nn.Conv2d(features, features, kernel_size=3, padding=1, bias=False), - nn.BatchNorm2d(features), - ACT2FN["relu"], - nn.Dropout(0.1, False), - nn.Conv2d(features, config.num_labels, kernel_size=1), - ) - - def forward(self, hidden_states): - logits = self.head(hidden_states) - - return logits - - -@add_start_docstrings( - """ - DPT Model with a semantic segmentation head on top e.g. for ADE20k, CityScapes. - """, - DPT_START_DOCSTRING, -) -class DPTForSemanticSegmentation(DPTPreTrainedModel): - def __init__(self, config): - super().__init__(config) - - self.dpt = DPTModel(config, add_pooling_layer=False) - - # Neck - self.neck = DPTNeck(config) - - # Segmentation head(s) - self.head = DPTSemanticSegmentationHead(config) - self.auxiliary_head = DPTAuxiliaryHead(config) if config.use_auxiliary_head else None - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(DPT_INPUTS_DOCSTRING) - @replace_return_docstrings(output_type=SemanticSegmenterOutput, config_class=_CONFIG_FOR_DOC) - def forward( - self, - pixel_values: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - labels: Optional[torch.LongTensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple[torch.Tensor], SemanticSegmenterOutput]: - r""" - labels (`torch.LongTensor` of shape `(batch_size, height, width)`, *optional*): - Ground truth semantic segmentation maps for computing the loss. Indices should be in `[0, ..., - config.num_labels - 1]`. If `config.num_labels > 1`, a classification loss is computed (Cross-Entropy). 
- - Returns: - - Examples: - ```python - >>> from transformers import AutoImageProcessor, DPTForSemanticSegmentation - >>> from PIL import Image - >>> import requests - - >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" - >>> image = Image.open(requests.get(url, stream=True).raw) - - >>> image_processor = AutoImageProcessor.from_pretrained("Intel/dpt-large-ade") - >>> model = DPTForSemanticSegmentation.from_pretrained("Intel/dpt-large-ade") - - >>> inputs = image_processor(images=image, return_tensors="pt") - - >>> outputs = model(**inputs) - >>> logits = outputs.logits - ```""" - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - - outputs = self.dpt( - pixel_values, - head_mask=head_mask, - output_attentions=output_attentions, - output_hidden_states=True, # we need the intermediate hidden states - return_dict=return_dict, - ) - - hidden_states = outputs.hidden_states if return_dict else outputs[1] - - # only keep certain features based on config.backbone_out_indices - # note that the hidden_states also include the initial embeddings - if not self.config.is_hybrid: - hidden_states = [ - feature for idx, feature in enumerate(hidden_states[1:]) if idx in self.config.backbone_out_indices - ] - else: - backbone_hidden_states = outputs.intermediate_activations if return_dict else list(outputs[-1]) - backbone_hidden_states.extend( - feature for idx, feature in enumerate(hidden_states[1:]) if idx in self.config.backbone_out_indices[2:] - ) - - hidden_states = backbone_hidden_states - - hidden_states = self.neck(hidden_states) - - logits = self.head(hidden_states) - - auxiliary_logits = None - if self.auxiliary_head is not None: - auxiliary_logits = self.auxiliary_head(hidden_states[-1]) - - loss = None - if labels is not None: - if self.config.num_labels == 1: - raise ValueError("The number of labels should be greater than one") - else: - # upsample logits to the images' original size - upsampled_logits = nn.functional.interpolate( - logits, size=labels.shape[-2:], mode="bilinear", align_corners=False - ) - if auxiliary_logits is not None: - upsampled_auxiliary_logits = nn.functional.interpolate( - auxiliary_logits, size=labels.shape[-2:], mode="bilinear", align_corners=False - ) - # compute weighted loss - loss_fct = CrossEntropyLoss(ignore_index=self.config.semantic_loss_ignore_index) - main_loss = loss_fct(upsampled_logits, labels) - auxiliary_loss = loss_fct(upsampled_auxiliary_logits, labels) - loss = main_loss + self.config.auxiliary_loss_weight * auxiliary_loss - - if not return_dict: - if output_hidden_states: - output = (logits,) + outputs[1:] - else: - output = (logits,) + outputs[2:] - return ((loss,) + output) if loss is not None else output - - return SemanticSegmenterOutput( - loss=loss, - logits=logits, - hidden_states=outputs.hidden_states if output_hidden_states else None, - attentions=outputs.attentions, - ) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/efficientformer/modeling_tf_efficientformer.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/efficientformer/modeling_tf_efficientformer.py deleted file mode 100644 index 1907af388f92747b059129bdde7f311d4eb78603..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/efficientformer/modeling_tf_efficientformer.py +++ 
/dev/null @@ -1,986 +0,0 @@ -# coding=utf-8 -# Copyright 2023 Snapchat Research and The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" TensorFlow EfficientFormer model.""" - -import itertools -from dataclasses import dataclass -from typing import Optional, Tuple, Union - -import tensorflow as tf - -from ...activations_tf import ACT2FN -from ...modeling_tf_outputs import ( - TFBaseModelOutput, - TFBaseModelOutputWithPooling, - TFImageClassifierOutput, -) -from ...modeling_tf_utils import ( - TFPreTrainedModel, - TFSequenceClassificationLoss, - get_initializer, - keras_serializable, - unpack_inputs, -) -from ...tf_utils import shape_list, stable_softmax -from ...utils import ( - ModelOutput, - add_code_sample_docstrings, - add_start_docstrings, - add_start_docstrings_to_model_forward, - logging, -) -from .configuration_efficientformer import EfficientFormerConfig - - -logger = logging.get_logger(__name__) - -# General docstring -_CONFIG_FOR_DOC = "EfficientFormerConfig" - -# Base docstring -_CHECKPOINT_FOR_DOC = "snap-research/efficientformer-l1-300" -_EXPECTED_OUTPUT_SHAPE = [1, 49, 448] - -# Image classification docstring -_IMAGE_CLASS_CHECKPOINT = "snap-research/efficientformer-l1-300" -_IMAGE_CLASS_EXPECTED_OUTPUT = "LABEL_281" - - -TF_EFFICIENTFORMER_PRETRAINED_MODEL_ARCHIVE_LIST = [ - "snap-research/efficientformer-l1-300", - # See all EfficientFormer models at https://huggingface.co/models?filter=efficientformer -] - - -class TFEfficientFormerPatchEmbeddings(tf.keras.layers.Layer): - """ - This class performs downsampling between two stages. 
For the input tensor with the shape [batch_size, num_channels, - height, width] it produces output tensor with the shape [batch_size, num_channels, height/stride, width/stride] - """ - - def __init__( - self, config: EfficientFormerConfig, num_channels: int, embed_dim: int, apply_norm: bool = True, **kwargs - ) -> None: - super().__init__(**kwargs) - self.num_channels = num_channels - - self.padding = tf.keras.layers.ZeroPadding2D(padding=config.downsample_pad) - self.projection = tf.keras.layers.Conv2D( - filters=embed_dim, - kernel_size=config.downsample_patch_size, - strides=config.downsample_stride, - padding="valid", - name="projection", - ) - # Use same default momentum and epsilon as PyTorch equivalent for BatchNormalization - self.norm = ( - tf.keras.layers.BatchNormalization(axis=-1, epsilon=config.batch_norm_eps, momentum=0.9, name="norm") - if apply_norm - else tf.identity - ) - - def call(self, pixel_values: tf.Tensor, training: bool = False) -> tf.Tensor: - tf.debugging.assert_shapes( - [(pixel_values, (..., None, None, self.num_channels))], - message="Make sure that the channel dimension of the pixel values match with the one set in the configuration.", - ) - embeddings = self.projection(self.padding(pixel_values)) - embeddings = self.norm(embeddings, training=training) - return embeddings - - -class TFEfficientFormerSelfAttention(tf.keras.layers.Layer): - def __init__( - self, - dim: int, - key_dim: int, - num_heads: int, - attention_ratio: int, - resolution: int, - config: EfficientFormerConfig, - **kwargs, - ): - super().__init__(**kwargs) - - self.num_heads = num_heads - self.key_dim = key_dim - self.attention_ratio = attention_ratio - self.scale = key_dim**-0.5 - self.total_key_dim = key_dim * num_heads - self.expanded_key_dim = int(attention_ratio * key_dim) - self.total_expanded_key_dim = int(self.expanded_key_dim * num_heads) - hidden_size = self.total_expanded_key_dim + self.total_key_dim * 2 - - self.qkv = tf.keras.layers.Dense( - units=hidden_size, kernel_initializer=get_initializer(config.initializer_range), name="qkv" - ) - self.projection = tf.keras.layers.Dense( - units=dim, kernel_initializer=get_initializer(config.initializer_range), name="projection" - ) - self.resolution = resolution - - def build(self, input_shape: tf.TensorShape) -> None: - points = list(itertools.product(range(self.resolution), range(self.resolution))) - num_points = len(points) - attention_offsets = {} - - idxs = [] - - for point_1 in points: - for point_2 in points: - offset = (abs(point_1[0] - point_2[0]), abs(point_1[1] - point_2[1])) - if offset not in attention_offsets: - attention_offsets[offset] = len(attention_offsets) - idxs.append(attention_offsets[offset]) - - self.attention_biases = self.add_weight( - shape=(self.num_heads, len(attention_offsets)), - initializer=tf.keras.initializers.zeros(), - trainable=True, - name="attention_biases", - ) - self.attention_bias_idxs = self.add_weight( - shape=(num_points, num_points), - trainable=False, - dtype=tf.int32, - name="attention_bias_idxs", - ) - - self.attention_bias_idxs.assign(tf.reshape(tf.cast(idxs, dtype=tf.int32), (num_points, num_points))) - - super().build(input_shape) - - def call( - self, hidden_states: tf.Tensor, output_attentions: bool = False, training: bool = False - ) -> Tuple[tf.Tensor]: - batch_size, sequence_length, *_ = shape_list(hidden_states) - qkv = self.qkv(inputs=hidden_states) - - query_layer, key_layer, value_layer = tf.split( - tf.reshape(tensor=qkv, shape=(batch_size, sequence_length, self.num_heads, 
-1)), - num_or_size_splits=[self.key_dim, self.key_dim, self.expanded_key_dim], - axis=3, - ) - - query_layer = tf.transpose(query_layer, perm=[0, 2, 1, 3]) - key_layer = tf.transpose(key_layer, perm=[0, 2, 1, 3]) - value_layer = tf.transpose(value_layer, perm=[0, 2, 1, 3]) - - attention_probs = tf.matmul(query_layer, tf.transpose(key_layer, perm=[0, 1, 3, 2])) - scale = tf.cast(self.scale, dtype=attention_probs.dtype) - attention_probs = tf.multiply(attention_probs, scale) - - attention_biases = tf.gather(params=self.attention_biases, indices=self.attention_bias_idxs, axis=1) - attention_probs = attention_probs + attention_biases - attention_probs = stable_softmax(logits=attention_probs, axis=-1) - - context_layer = tf.matmul(attention_probs, value_layer) - context_layer = tf.transpose(context_layer, perm=[0, 2, 1, 3]) - - context_layer = tf.reshape( - tensor=context_layer, shape=(batch_size, sequence_length, self.total_expanded_key_dim) - ) - context_layer = self.projection(context_layer) - - outputs = (context_layer, attention_probs) if output_attentions else (context_layer,) - - return outputs - - -class TFEfficientFormerConvStem(tf.keras.layers.Layer): - def __init__(self, config: EfficientFormerConfig, out_channels: int, **kwargs): - super().__init__(**kwargs) - - self.padding = tf.keras.layers.ZeroPadding2D(padding=1) - self.convolution1 = tf.keras.layers.Conv2D( - filters=out_channels // 2, kernel_size=3, strides=2, padding="valid", name="convolution1" - ) - # Use same default momentum and epsilon as PyTorch equivalent for BatchNormalization - self.batchnorm_before = tf.keras.layers.BatchNormalization( - axis=-1, epsilon=config.batch_norm_eps, momentum=0.9, name="batchnorm_before" - ) - - self.convolution2 = tf.keras.layers.Conv2D( - filters=out_channels, - kernel_size=3, - strides=2, - padding="valid", - name="convolution2", - ) - # Use same default momentum and epsilon as PyTorch equivalent for BatchNormalization - self.batchnorm_after = tf.keras.layers.BatchNormalization( - axis=-1, epsilon=config.batch_norm_eps, momentum=0.9, name="batchnorm_after" - ) - - self.activation = tf.keras.layers.Activation(activation=tf.keras.activations.relu, name="activation") - - def call(self, pixel_values: tf.Tensor, training: bool = False) -> tf.Tensor: - features = self.batchnorm_before(self.convolution1(self.padding(pixel_values)), training=training) - features = self.activation(features) - features = self.batchnorm_after(self.convolution2(self.padding(features)), training=training) - features = self.activation(features) - return features - - -class TFEfficientFormerPooling(tf.keras.layers.Layer): - def __init__(self, pool_size: int, **kwargs): - super().__init__(**kwargs) - self.pool = tf.keras.layers.AveragePooling2D(pool_size=pool_size, strides=1, padding="same") - - def call(self, hidden_states: tf.Tensor) -> tf.Tensor: - output = self.pool(hidden_states) - output = output - hidden_states - return output - - -class TFEfficientFormerDenseMlp(tf.keras.layers.Layer): - def __init__( - self, - config: EfficientFormerConfig, - in_features: int, - hidden_features: Optional[int] = None, - out_features: Optional[int] = None, - **kwargs, - ): - super().__init__(**kwargs) - out_features = out_features or in_features - hidden_features = hidden_features or in_features - - self.linear_in = tf.keras.layers.Dense( - units=hidden_features, kernel_initializer=get_initializer(config.initializer_range), name="linear_in" - ) - self.activation = ACT2FN[config.hidden_act] - self.dropout = 
tf.keras.layers.Dropout(rate=config.hidden_dropout_prob) - - self.linear_out = tf.keras.layers.Dense( - units=out_features, kernel_initializer=get_initializer(config.initializer_range), name="linear_out" - ) - - def call(self, hidden_states: tf.Tensor, training: bool = False) -> tf.Tensor: - hidden_states = self.linear_in(inputs=hidden_states) - hidden_states = self.activation(hidden_states) - hidden_states = self.dropout(inputs=hidden_states, training=training) - hidden_states = self.linear_out(inputs=hidden_states) - hidden_states = self.dropout(inputs=hidden_states, training=training) - - return hidden_states - - -class TFEfficientFormerConvMlp(tf.keras.layers.Layer): - def __init__( - self, - config: EfficientFormerConfig, - in_features: int, - hidden_features: Optional[int] = None, - out_features: Optional[int] = None, - drop: float = 0.0, - **kwargs, - ): - super().__init__(**kwargs) - out_features = out_features or in_features - hidden_features = hidden_features or in_features - - self.convolution1 = tf.keras.layers.Conv2D( - filters=hidden_features, - kernel_size=1, - name="convolution1", - padding="valid", - ) - - self.activation = ACT2FN[config.hidden_act] - - self.convolution2 = tf.keras.layers.Conv2D( - filters=out_features, - kernel_size=1, - name="convolution2", - padding="valid", - ) - - self.dropout = tf.keras.layers.Dropout(rate=drop) - - # Use same default momentum and epsilon as PyTorch equivalent for BatchNormalization - self.batchnorm_before = tf.keras.layers.BatchNormalization( - axis=-1, epsilon=config.batch_norm_eps, momentum=0.9, name="batchnorm_before" - ) - # Use same default momentum and epsilon as PyTorch equivalent for BatchNormalization - self.batchnorm_after = tf.keras.layers.BatchNormalization( - axis=-1, epsilon=config.batch_norm_eps, momentum=0.9, name="batchnorm_after" - ) - - def call(self, hidden_state: tf.Tensor, training: bool = False) -> tf.Tensor: - hidden_state = self.convolution1(hidden_state) - hidden_state = self.batchnorm_before(hidden_state, training=training) - hidden_state = self.activation(hidden_state) - hidden_state = self.dropout(hidden_state, training=training) - hidden_state = self.convolution2(hidden_state) - hidden_state = self.batchnorm_after(hidden_state, training=training) - hidden_state = self.dropout(hidden_state, training=training) - return hidden_state - - -# Copied from transformers.models.convnext.modeling_tf_convnext.TFConvNextDropPath with ConvNext->EfficientFormer -class TFEfficientFormerDropPath(tf.keras.layers.Layer): - """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks). 
- References: - (1) github.com:rwightman/pytorch-image-models - """ - - def __init__(self, drop_path, **kwargs): - super().__init__(**kwargs) - self.drop_path = drop_path - - def call(self, x, training=None): - if training: - keep_prob = 1 - self.drop_path - shape = (tf.shape(x)[0],) + (1,) * (len(tf.shape(x)) - 1) - random_tensor = keep_prob + tf.random.uniform(shape, 0, 1) - random_tensor = tf.floor(random_tensor) - return (x / keep_prob) * random_tensor - return x - - -class TFEfficientFormerFlat(tf.keras.layers.Layer): - def __init__(self, **kwargs): - super().__init__(**kwargs) - - def call(self, hidden_states: tf.Tensor) -> Tuple[tf.Tensor]: - batch_size, _, _, in_channels = shape_list(hidden_states) - hidden_states = tf.reshape(hidden_states, shape=[batch_size, -1, in_channels]) - return hidden_states - - -class TFEfficientFormerMeta3D(tf.keras.layers.Layer): - def __init__(self, config: EfficientFormerConfig, dim: int, drop_path: float = 0.0, **kwargs): - super().__init__(**kwargs) - - self.token_mixer = TFEfficientFormerSelfAttention( - dim=config.dim, - key_dim=config.key_dim, - num_heads=config.num_attention_heads, - attention_ratio=config.attention_ratio, - resolution=config.resolution, - name="token_mixer", - config=config, - ) - self.dim = dim - self.config = config - - self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=config.layer_norm_eps, name="layernorm1") - self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=config.layer_norm_eps, name="layernorm2") - mlp_hidden_dim = int(dim * config.mlp_expansion_ratio) - self.mlp = TFEfficientFormerDenseMlp(config, in_features=dim, hidden_features=mlp_hidden_dim, name="mlp") - - # Using `layers.Activation` instead of `tf.identity` to better control `training' behavior. - self.drop_path = ( - TFEfficientFormerDropPath(drop_path) - if drop_path > 0.0 - else tf.keras.layers.Activation("linear", name="drop_path") - ) - self.config = config - - def build(self, input_shape: tf.TensorShape): - self.layer_scale_1 = None - self.layer_scale_2 = None - - if self.config.use_layer_scale: - self.layer_scale_1 = self.add_weight( - shape=(self.dim,), - initializer=tf.keras.initializers.Constant(value=self.config.layer_scale_init_value), - trainable=True, - name="layer_scale_1", - ) - self.layer_scale_2 = self.add_weight( - shape=(self.dim,), - initializer=tf.keras.initializers.Constant(value=self.config.layer_scale_init_value), - trainable=True, - name="layer_scale_2", - ) - super().build(input_shape) - - def call( - self, hidden_states: tf.Tensor, output_attentions: bool = False, training: bool = False - ) -> Tuple[tf.Tensor]: - self_attention_outputs = self.token_mixer( - hidden_states=self.layernorm1(hidden_states, training=training), - output_attentions=output_attentions, - training=training, - ) - - attention_output = self_attention_outputs[0] - outputs = self_attention_outputs[1:] # add self attentions if we output attention weights - - if self.config.use_layer_scale: - layer_output = hidden_states + self.drop_path( - tf.expand_dims(tf.expand_dims(self.layer_scale_1, 0), 0) * attention_output, - training=training, - ) - layer_output = layer_output + self.drop_path( - tf.expand_dims(tf.expand_dims(self.layer_scale_2, 0), 0) - * self.mlp(hidden_states=self.layernorm2(inputs=layer_output, training=training), training=training), - training=training, - ) - else: - layer_output = hidden_states + self.drop_path(attention_output, training=training) - layer_output = layer_output + self.drop_path( - 
self.mlp(hidden_states=self.layernorm2(inputs=layer_output, training=training), training=training), - training=training, - ) - - outputs = (layer_output,) + outputs - - return outputs - - -class TFEfficientFormerMeta3DLayers(tf.keras.layers.Layer): - def __init__(self, config: EfficientFormerConfig, **kwargs): - super().__init__(**kwargs) - drop_paths = [ - config.drop_path_rate * (block_idx + sum(config.depths[:-1])) - for block_idx in range(config.num_meta3d_blocks) - ] - self.blocks = [ - TFEfficientFormerMeta3D(config, config.hidden_sizes[-1], drop_path=drop_path, name=f"blocks.{i}") - for i, drop_path in enumerate(drop_paths) - ] - - def call( - self, hidden_states: tf.Tensor, output_attentions: bool = False, training: bool = False - ) -> Tuple[tf.Tensor]: - all_attention_outputs = () if output_attentions else None - - for i, layer_module in enumerate(self.blocks): - if isinstance(hidden_states, tuple): - hidden_states = hidden_states[0] - - hidden_states = layer_module( - hidden_states=hidden_states, output_attentions=output_attentions, training=training - ) - if output_attentions: - all_attention_outputs = all_attention_outputs + (hidden_states[1],) - - if output_attentions: - outputs = (hidden_states[0],) + all_attention_outputs - return outputs - - return hidden_states - - -class TFEfficientFormerMeta4D(tf.keras.layers.Layer): - def __init__(self, config: EfficientFormerConfig, dim: int, drop_path: float = 0.0, **kwargs): - super().__init__(**kwargs) - pool_size = config.pool_size if config.pool_size is not None else 3 - self.token_mixer = TFEfficientFormerPooling(pool_size=pool_size, name="token_mixer") - self.dim = dim - mlp_hidden_dim = int(dim * config.mlp_expansion_ratio) - self.mlp = TFEfficientFormerConvMlp( - config=config, in_features=dim, hidden_features=mlp_hidden_dim, drop=config.hidden_dropout_prob, name="mlp" - ) - - self.drop_path = ( - TFEfficientFormerDropPath(drop_path, name="drop_path") - if drop_path > 0.0 - else tf.keras.layers.Activation("linear", name="drop_path") - ) - self.config = config - - def build(self, input_shape: tf.TensorShape): - self.layer_scale_1 = None - self.layer_scale_2 = None - - if self.config.use_layer_scale: - self.layer_scale_1 = self.add_weight( - shape=(self.dim), - initializer=tf.keras.initializers.Constant(value=self.config.layer_scale_init_value), - trainable=True, - name="layer_scale_1", - ) - self.layer_scale_2 = self.add_weight( - shape=(self.dim), - initializer=tf.keras.initializers.Constant(value=self.config.layer_scale_init_value), - trainable=True, - name="layer_scale_2", - ) - super().build(input_shape) - - def call(self, hidden_states: tf.Tensor, training: bool = False) -> Tuple[tf.Tensor]: - outputs = self.token_mixer(hidden_states) - - if self.config.use_layer_scale: - layer_output = hidden_states + self.drop_path( - tf.expand_dims(tf.expand_dims(self.layer_scale_1, 0), 0) * outputs, - training=training, - ) - - layer_output = layer_output + self.drop_path( - tf.expand_dims(tf.expand_dims(self.layer_scale_2, 0), 0) - * self.mlp(hidden_state=layer_output, training=training), - training=training, - ) - - else: - layer_output = hidden_states + self.drop_path(outputs, training=training) - layer_output = layer_output + self.drop_path( - self.mlp(hidden_state=layer_output, training=training), training=training - ) - - return layer_output - - -class TFEfficientFormerMeta4DLayers(tf.keras.layers.Layer): - def __init__(self, config: EfficientFormerConfig, stage_idx: int, **kwargs): - super().__init__(**kwargs) - num_layers = ( - 
config.depths[stage_idx] if stage_idx != -1 else config.depths[stage_idx] - config.num_meta3d_blocks - ) - drop_paths = [ - config.drop_path_rate * (block_idx + sum(config.depths[:stage_idx])) for block_idx in range(num_layers) - ] - - self.blocks = [ - TFEfficientFormerMeta4D( - config=config, dim=config.hidden_sizes[stage_idx], drop_path=drop_paths[i], name=f"blocks.{i}" - ) - for i in range(len(drop_paths)) - ] - - def call(self, hidden_states: tf.Tensor, training: bool = False) -> Tuple[tf.Tensor]: - for layer_module in self.blocks: - hidden_states = layer_module(hidden_states=hidden_states, training=training) - return hidden_states - - -class TFEfficientFormerIntermediateStage(tf.keras.layers.Layer): - def __init__(self, config: EfficientFormerConfig, index: int, **kwargs): - super().__init__(**kwargs) - self.meta4D_layers = TFEfficientFormerMeta4DLayers(config=config, stage_idx=index, name="meta4D_layers") - - def call(self, hidden_states: tf.Tensor, training: bool = False) -> Tuple[tf.Tensor]: - hidden_states = self.meta4D_layers(hidden_states=hidden_states, training=training) - return hidden_states - - -class TFEfficientFormerLastStage(tf.keras.layers.Layer): - def __init__(self, config: EfficientFormerConfig, **kwargs): - super().__init__(**kwargs) - self.meta4D_layers = TFEfficientFormerMeta4DLayers(config=config, stage_idx=-1, name="meta4D_layers") - self.flat = TFEfficientFormerFlat(name="flat") - self.meta3D_layers = TFEfficientFormerMeta3DLayers(config, name="meta3D_layers") - - def call( - self, hidden_states: tf.Tensor, output_attentions: bool = False, training: bool = False - ) -> Tuple[tf.Tensor]: - hidden_states = self.meta4D_layers(hidden_states=hidden_states, training=training) - hidden_states = self.flat(hidden_states=hidden_states) - hidden_states = self.meta3D_layers( - hidden_states=hidden_states, output_attentions=output_attentions, training=training - ) - - return hidden_states - - -class TFEfficientFormerEncoder(tf.keras.layers.Layer): - def __init__(self, config: EfficientFormerConfig, **kwargs): - super().__init__(**kwargs) - - self.config = config - num_intermediate_stages = len(config.depths) - 1 - downsamples = [ - config.downsamples[i] or config.hidden_sizes[i] != config.hidden_sizes[i + 1] - for i in range(num_intermediate_stages) - ] - - intermediate_stages = [] - layer_count = -1 - for i in range(num_intermediate_stages): - layer_count += 1 - intermediate_stages.append( - TFEfficientFormerIntermediateStage(config, i, name=f"intermediate_stages.{layer_count}") - ) - if downsamples[i]: - layer_count += 1 - intermediate_stages.append( - TFEfficientFormerPatchEmbeddings( - config, - config.hidden_sizes[i], - config.hidden_sizes[i + 1], - name=f"intermediate_stages.{layer_count}", - ) - ) - self.intermediate_stages = intermediate_stages - self.last_stage = TFEfficientFormerLastStage(config, name="last_stage") - - def call( - self, - hidden_states: tf.Tensor, - output_hidden_states: bool, - output_attentions: bool, - return_dict: bool, - training: bool = False, - ) -> TFBaseModelOutput: - all_hidden_states = () if output_hidden_states else None - all_self_attentions = () if output_attentions else None - - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - for layer_module in self.intermediate_stages: - hidden_states = layer_module(hidden_states, training=training) - - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - layer_output = self.last_stage(hidden_states, 
output_attentions=output_attentions, training=training) - - if output_attentions: - all_self_attentions = all_self_attentions + layer_output[1:] - - if output_hidden_states: - all_hidden_states = all_hidden_states + (layer_output[0],) - - if not return_dict: - return tuple(v for v in [layer_output[0], all_hidden_states, all_self_attentions] if v is not None) - - return TFBaseModelOutput( - last_hidden_state=layer_output[0], - hidden_states=all_hidden_states, - attentions=all_self_attentions, - ) - - -@keras_serializable -class TFEfficientFormerMainLayer(tf.keras.layers.Layer): - config_class = EfficientFormerConfig - - def __init__(self, config: EfficientFormerConfig, **kwargs) -> None: - super().__init__(**kwargs) - self.config = config - - self.patch_embed = TFEfficientFormerConvStem(config, config.hidden_sizes[0], name="patch_embed") - self.encoder = TFEfficientFormerEncoder(config, name="encoder") - self.layernorm = tf.keras.layers.LayerNormalization(epsilon=config.layer_norm_eps, name="layernorm") - - @unpack_inputs - def call( - self, - pixel_values: Optional[tf.Tensor] = None, - output_attentions: Optional[tf.Tensor] = None, - output_hidden_states: Optional[tf.Tensor] = None, - return_dict: Optional[bool] = None, - training: bool = False, - ) -> Union[TFBaseModelOutput, Tuple[tf.Tensor, ...]]: - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - if pixel_values is None: - raise ValueError("You have to specify pixel_values") - - # When running on CPU, tf.keras.layers.Conv2D and tf.keras.layers.AveragePool2D do not - # support channels first NCHW format. A number of blocks contain both. - # So change the input format from (batch_size, num_channels, height, width) to - # (batch_size, height, width, num_channels) here. - # shape = (batch_size, in_height, in_width, in_channels=num_channels) - pixel_values = tf.transpose(pixel_values, perm=(0, 2, 3, 1)) - embedding_output = self.patch_embed(pixel_values, training=training) - - encoder_outputs = self.encoder( - hidden_states=embedding_output, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - - sequence_output = encoder_outputs[0] - sequence_output = self.layernorm(sequence_output, training=training) - - # Change the hidden states from (batch_size, height, width, num_channels) to - # (batch_size, num_channels, height, width). - # The hidden states are in (batch_size, height, width, num_channels) - # shape after all stages except the MB3D blocks. - if output_hidden_states: - hidden_states = tuple([tf.transpose(h, perm=(0, 3, 1, 2)) for h in encoder_outputs[1][:-1]]) + ( - encoder_outputs[1][-1], - ) - - if not return_dict: - head_outputs = (sequence_output,) - return head_outputs + encoder_outputs[1:] - - return TFBaseModelOutput( - last_hidden_state=sequence_output, - hidden_states=hidden_states if output_hidden_states else encoder_outputs.hidden_states, - attentions=encoder_outputs.attentions, - ) - - -class TFEfficientFormerPreTrainedModel(TFPreTrainedModel): - """ - An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained - models. 
- """ - - config_class = EfficientFormerConfig - base_model_prefix = "efficientformer" - main_input_name = "pixel_values" - - -EFFICIENTFORMER_START_DOCSTRING = r""" - This model is a TensorFlow - [tf.keras.layers.Layer](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Layer). Use it as a regular - TensorFlow Module and refer to the TensorFlow documentation for all matter related to general usage and behavior. - - - Parameters: - config ([`EfficientFormerConfig`]): Model configuration class with all the parameters of the model. - Initializing with a config file does not load the weights associated with the model, only the - configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. -""" - -EFFICIENTFORMER_INPUTS_DOCSTRING = r""" - Args: - pixel_values ((`tf.Tensor` of shape `(batch_size, num_channels, height, width)`): - Pixel values. Pixel values can be obtained using [`AutoImageProcessor`]. See - [`EfficientFormerImageProcessor.__call__`] for details. - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned - tensors for more detail. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. -""" - - -@add_start_docstrings( - "The bare EfficientFormer Model transformer outputting raw hidden-states without any specific head on top.", - EFFICIENTFORMER_START_DOCSTRING, -) -class TFEfficientFormerModel(TFEfficientFormerPreTrainedModel): - def __init__(self, config: EfficientFormerConfig, **kwargs) -> None: - super().__init__(config, **kwargs) - - self.efficientformer = TFEfficientFormerMainLayer(config, name="efficientformer") - - @unpack_inputs - @add_start_docstrings_to_model_forward(EFFICIENTFORMER_INPUTS_DOCSTRING) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=TFBaseModelOutputWithPooling, - config_class=_CONFIG_FOR_DOC, - modality="vision", - expected_output=_EXPECTED_OUTPUT_SHAPE, - ) - def call( - self, - pixel_values: Optional[tf.Tensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - training: bool = False, - ) -> Union[Tuple, TFBaseModelOutput]: - outputs = self.efficientformer( - pixel_values=pixel_values, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - return outputs - - -@add_start_docstrings( - """ - EfficientFormer Model transformer with an image classification head on top of pooled last hidden state, e.g. for - ImageNet. 
- """, - EFFICIENTFORMER_START_DOCSTRING, -) -class TFEfficientFormerForImageClassification(TFEfficientFormerPreTrainedModel, TFSequenceClassificationLoss): - def __init__(self, config: EfficientFormerConfig): - super().__init__(config) - - self.num_labels = config.num_labels - self.efficientformer = TFEfficientFormerMainLayer(config, name="efficientformer") - - # Classifier head - self.classifier = ( - tf.keras.layers.Dense(config.num_labels, name="classifier") - if config.num_labels > 0 - else tf.keras.layers.Activation("linear", name="classifier") - ) - - @unpack_inputs - @add_start_docstrings_to_model_forward(EFFICIENTFORMER_INPUTS_DOCSTRING) - @add_code_sample_docstrings( - checkpoint=_IMAGE_CLASS_CHECKPOINT, - output_type=TFImageClassifierOutput, - config_class=_CONFIG_FOR_DOC, - expected_output=_IMAGE_CLASS_EXPECTED_OUTPUT, - ) - def call( - self, - pixel_values: Optional[tf.Tensor] = None, - labels: Optional[tf.Tensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - training: bool = False, - ) -> Union[tf.Tensor, TFImageClassifierOutput]: - r""" - labels (`tf.Tensor` of shape `(batch_size,)`, *optional*): - Labels for computing the image classification/regression loss. Indices should be in `[0, ..., - config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If - `config.num_labels > 1` a classification loss is computed (Cross-Entropy). - """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.efficientformer( - pixel_values=pixel_values, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - - sequence_output = outputs[0] - - logits = self.classifier(tf.reduce_mean(sequence_output, axis=-2)) - - loss = None if labels is None else self.hf_compute_loss(labels, logits) - - if not return_dict: - output = (logits,) + outputs[1:] - return ((loss,) + output) if loss is not None else output - - return TFImageClassifierOutput( - loss=loss, logits=logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions - ) - - -@dataclass -class TFEfficientFormerForImageClassificationWithTeacherOutput(ModelOutput): - """ - Args: - Output type of [`EfficientFormerForImageClassificationWithTeacher`]. - logits (`tf.Tensor` of shape `(batch_size, config.num_labels)`): - Prediction scores as the average of the cls_logits and distillation logits. - cls_logits (`tf.Tensor` of shape `(batch_size, config.num_labels)`): - Prediction scores of the classification head (i.e. the linear layer on top of the final hidden state of the - class token). - distillation_logits (`tf.Tensor` of shape `(batch_size, config.num_labels)`): - Prediction scores of the distillation head (i.e. the linear layer on top of the final hidden state of the - distillation token). - hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when - `config.output_hidden_states=True`): - Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape - `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus - the initial embedding outputs. 
- attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when - `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. Attentions weights after the attention softmax, used to compute the weighted average in - the self-attention heads. - """ - - logits: tf.Tensor = None - cls_logits: tf.Tensor = None - distillation_logits: tf.Tensor = None - hidden_states: Optional[Tuple[tf.Tensor]] = None - attentions: Optional[Tuple[tf.Tensor]] = None - - -@add_start_docstrings( - """ - EfficientFormer Model transformer with image classification heads on top (a linear layer on top of the final hidden - state and a linear layer on top of the final hidden state of the distillation token) e.g. for ImageNet. - - .. warning:: - This model supports inference-only. Fine-tuning with distillation (i.e. with a teacher) is not yet - supported. - """, - EFFICIENTFORMER_START_DOCSTRING, -) -class TFEfficientFormerForImageClassificationWithTeacher(TFEfficientFormerPreTrainedModel): - def __init__(self, config: EfficientFormerConfig) -> None: - super().__init__(config) - - self.num_labels = config.num_labels - self.efficientformer = TFEfficientFormerMainLayer(config, name="efficientformer") - - # Classifier heads - self.classifier = ( - tf.keras.layers.Dense(config.num_labels, name="classifier") - if config.num_labels > 0 - else tf.keras.layers.Activation("linear", name="classifier") - ) - self.distillation_classifier = ( - tf.keras.layers.Dense(config.num_labels, name="distillation_classifier") - if config.num_labels > 0 - else tf.keras.layers.Activation("linear", name="distillation_classifier") - ) - - @unpack_inputs - @add_start_docstrings_to_model_forward(EFFICIENTFORMER_INPUTS_DOCSTRING) - @add_code_sample_docstrings( - checkpoint=_IMAGE_CLASS_CHECKPOINT, - output_type=TFEfficientFormerForImageClassificationWithTeacherOutput, - config_class=_CONFIG_FOR_DOC, - expected_output=_IMAGE_CLASS_EXPECTED_OUTPUT, - ) - def call( - self, - pixel_values: Optional[tf.Tensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - training: bool = False, - ) -> Union[tuple, TFEfficientFormerForImageClassificationWithTeacherOutput]: - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - if training: - raise Exception( - "This model supports inference-only. Fine-tuning with distillation (i.e. with a teacher) is not yet supported." 
- ) - - outputs = self.efficientformer( - pixel_values=pixel_values, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - - sequence_output = outputs[0] - - cls_logits = self.classifier(tf.reduce_mean(sequence_output, axis=-2)) - distillation_logits = self.distillation_classifier(tf.reduce_mean(sequence_output, axis=-2)) - logits = (cls_logits + distillation_logits) / 2 - - if not return_dict: - output = (logits, cls_logits, distillation_logits) + outputs[1:] - return output - - return TFEfficientFormerForImageClassificationWithTeacherOutput( - logits=logits, - cls_logits=cls_logits, - distillation_logits=distillation_logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/mobilevit/feature_extraction_mobilevit.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/mobilevit/feature_extraction_mobilevit.py deleted file mode 100644 index a73baed6405c50339a7bb024348a6f417770bf20..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/mobilevit/feature_extraction_mobilevit.py +++ /dev/null @@ -1,33 +0,0 @@ -# coding=utf-8 -# Copyright 2022 The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""Feature extractor class for MobileViT.""" - -import warnings - -from ...utils import logging -from .image_processing_mobilevit import MobileViTImageProcessor - - -logger = logging.get_logger(__name__) - - -class MobileViTFeatureExtractor(MobileViTImageProcessor): - def __init__(self, *args, **kwargs) -> None: - warnings.warn( - "The class MobileViTFeatureExtractor is deprecated and will be removed in version 5 of Transformers." - " Please use MobileViTImageProcessor instead.", - FutureWarning, - ) - super().__init__(*args, **kwargs) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/pegasus_x/__init__.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/pegasus_x/__init__.py deleted file mode 100644 index 32003120c6a0b1a4b05fc5930f08c0f6439e8620..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/pegasus_x/__init__.py +++ /dev/null @@ -1,57 +0,0 @@ -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-from typing import TYPE_CHECKING - -from ...utils import OptionalDependencyNotAvailable, _LazyModule, is_torch_available - - -_import_structure = { - "configuration_pegasus_x": ["PEGASUS_X_PRETRAINED_CONFIG_ARCHIVE_MAP", "PegasusXConfig"], -} - -try: - if not is_torch_available(): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - pass -else: - _import_structure["modeling_pegasus_x"] = [ - "PEGASUS_X_PRETRAINED_MODEL_ARCHIVE_LIST", - "PegasusXForConditionalGeneration", - "PegasusXModel", - "PegasusXPreTrainedModel", - ] - - -if TYPE_CHECKING: - from .configuration_pegasus_x import PEGASUS_X_PRETRAINED_CONFIG_ARCHIVE_MAP, PegasusXConfig - - try: - if not is_torch_available(): - raise OptionalDependencyNotAvailable() - except OptionalDependencyNotAvailable: - pass - else: - from .modeling_pegasus_x import ( - PEGASUS_X_PRETRAINED_MODEL_ARCHIVE_LIST, - PegasusXForConditionalGeneration, - PegasusXModel, - PegasusXPreTrainedModel, - ) - - -else: - import sys - - sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__) diff --git a/spaces/zfj41/webui/app.py b/spaces/zfj41/webui/app.py deleted file mode 100644 index 1d5f64f50bd9309f9f8c37a7c79c7a4f84ff7178..0000000000000000000000000000000000000000 --- a/spaces/zfj41/webui/app.py +++ /dev/null @@ -1,77 +0,0 @@ -import os -from subprocess import getoutput - -gpu_info = getoutput('nvidia-smi') -if("A10G" in gpu_info): - os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+4c06c79.d20221205-cp38-cp38-linux_x86_64.whl") -elif("T4" in gpu_info): - os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+1515f77.d20221130-cp38-cp38-linux_x86_64.whl") - -os.system(f"git clone -b v1.5 https://github.com/camenduru/stable-diffusion-webui /home/user/app/stable-diffusion-webui") -os.chdir("/home/user/app/stable-diffusion-webui") - -os.system(f"wget -q https://github.com/camenduru/webui/raw/main/env_patch.py -O /home/user/app/env_patch.py") -os.system(f"sed -i -e '/import image_from_url_text/r /home/user/app/env_patch.py' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/(modelmerger_interface, \"Checkpoint Merger\", \"modelmerger\"),/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/(train_interface, \"Train\", \"ti\"),/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/extensions_interface, \"Extensions\", \"extensions\"/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/settings_interface, \"Settings\", \"settings\"/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f'''sed -i -e "s/document.getElementsByTagName('gradio-app')\[0\].shadowRoot/!!document.getElementsByTagName('gradio-app')[0].shadowRoot ? 
document.getElementsByTagName('gradio-app')[0].shadowRoot : document/g" /home/user/app/stable-diffusion-webui/script.js''') -os.system(f"sed -i -e 's/ show_progress=False,/ show_progress=True,/g' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e 's/shared.demo.launch/shared.demo.queue().launch/g' /home/user/app/stable-diffusion-webui/webui.py") -os.system(f"sed -i -e 's/ outputs=\[/queue=False, &/g' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e 's/ queue=False, / /g' /home/user/app/stable-diffusion-webui/modules/ui.py") - -# ----------------------------Please duplicate this space and delete this block if you don't want to see the extra header---------------------------- -os.system(f"wget -q https://github.com/camenduru/webui/raw/main/header_patch.py -O /home/user/app/header_patch.py") -os.system(f"sed -i -e '/demo:/r /home/user/app/header_patch.py' /home/user/app/stable-diffusion-webui/modules/ui.py") -# --------------------------------------------------------------------------------------------------------------------------------------------------- - -if "IS_SHARED_UI" in os.environ: - os.system(f"rm -rfv /home/user/app/stable-diffusion-webui/scripts/") - - os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-config.json -O /home/user/app/shared-config.json") - os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-ui-config.json -O /home/user/app/shared-ui-config.json") - - os.system(f"wget -q {os.getenv('MODEL_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('MODEL_NAME')}") - os.system(f"wget -q {os.getenv('VAE_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('VAE_NAME')}") - os.system(f"wget -q {os.getenv('YAML_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('YAML_NAME')}") - - os.system(f"python launch.py --force-enable-xformers --disable-console-progressbars --enable-console-prompts --ui-config-file /home/user/app/shared-ui-config.json --ui-settings-file /home/user/app/shared-config.json --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding") -else: - # Please duplicate this space and delete # character in front of the custom script you want to use or add here more custom scripts with same structure os.system(f"wget -q https://CUSTOM_SCRIPT_URL -O /home/user/app/stable-diffusion-webui/scripts/CUSTOM_SCRIPT_NAME.py") - os.system(f"wget -q https://gist.github.com/camenduru/9ec5f8141db9902e375967e93250860f/raw/d0bcf01786f20107c329c03f8968584ee67be12a/run_n_times.py -O /home/user/app/stable-diffusion-webui/scripts/run_n_times.py") - - # Please duplicate this space and delete # character in front of the extension you want to use or add here more extensions with same structure os.system(f"git clone https://EXTENSION_GIT_URL /home/user/app/stable-diffusion-webui/extensions/EXTENSION_NAME") - os.system(f"git clone https://github.com/camenduru/stable-diffusion-webui-artists-to-study /home/user/app/stable-diffusion-webui/extensions/stable-diffusion-webui-artists-to-study") - os.system(f"git clone https://github.com/yfszzx/stable-diffusion-webui-images-browser /home/user/app/stable-diffusion-webui/extensions/stable-diffusion-webui-images-browser") - os.system(f"git clone https://github.com/deforum-art/deforum-for-automatic1111-webui /home/user/app/stable-diffusion-webui/extensions/deforum-for-automatic1111-webui") - - # Please duplicate this space and delete # character in front of 
the model you want to use or add here more ckpts with same structure os.system(f"wget -q https://CKPT_URL -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/CKPT_NAME.ckpt") - os.system(f"wget -q https://huggingface.co/nitrosocke/Arcane-Diffusion/resolve/main/arcane-diffusion-v3.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/arcane-diffusion-v3.ckpt") - #os.system(f"wget -q https://huggingface.co/DGSpitzer/Cyberpunk-Anime-Diffusion/resolve/main/Cyberpunk-Anime-Diffusion.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Cyberpunk-Anime-Diffusion.ckpt") - #os.system(f"wget -q https://huggingface.co/prompthero/midjourney-v4-diffusion/resolve/main/mdjrny-v4.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/mdjrny-v4.ckpt") - os.system(f"wget -q https://huggingface.co/nitrosocke/mo-di-diffusion/resolve/main/moDi-v1-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/moDi-v1-pruned.ckpt") - #os.system(f"wget -q https://huggingface.co/Fictiverse/Stable_Diffusion_PaperCut_Model/resolve/main/PaperCut_v1.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/PaperCut_v1.ckpt") - #os.system(f"wget -q https://huggingface.co/lilpotat/sa/resolve/main/samdoesarts_style.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/samdoesarts_style.ckpt") - os.system(f"wget -q https://huggingface.co/hakurei/waifu-diffusion-v1-3/resolve/main/wd-v1-3-float32.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/wd-v1-3-float32.ckpt") - #os.system(f"wget -q https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/sd-v1-4.ckpt") - #os.system(f"wget -q https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.ckpt") - #os.system(f"wget -q https://huggingface.co/runwayml/stable-diffusion-inpainting/resolve/main/sd-v1-5-inpainting.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/sd-v1-5-inpainting.ckpt") - - #os.system(f"wget -q https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/Anything-V3.0-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0-pruned.ckpt") - #os.system(f"wget -q https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/Anything-V3.0.vae.pt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0-pruned.vae.pt") - - #os.system(f"wget -q https://huggingface.co/stabilityai/stable-diffusion-2/resolve/main/768-v-ema.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.ckpt") - #os.system(f"wget -q https://raw.githubusercontent.com/Stability-AI/stablediffusion/main/configs/stable-diffusion/v2-inference-v.yaml -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.yaml") - - os.system(f"wget -q https://huggingface.co/stabilityai/stable-diffusion-2-1/resolve/main/v2-1_768-ema-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v2-1_768-ema-pruned.ckpt") - os.system(f"wget -q https://raw.githubusercontent.com/Stability-AI/stablediffusion/main/configs/stable-diffusion/v2-inference-v.yaml -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v2-1_768-ema-pruned.yaml") - - os.system(f"wget -q {os.getenv('MODEL_LINK')} -O 
/home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('MODEL_NAME')}") - os.system(f"wget -q {os.getenv('VAE_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('VAE_NAME')}") - os.system(f"wget -q {os.getenv('YAML_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('YAML_NAME')}") - os.system(f"wget -q {os.getenv('EMBED_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('EMBED_NAME')}") - - os.system(f"python launch.py --force-enable-xformers --ui-config-file /home/user/app/ui-config.json --ui-settings-file /home/user/app/config.json --disable-console-progressbars --enable-console-prompts --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding --api --skip-torch-cuda-test") - \ No newline at end of file diff --git a/spaces/zhan66/vits-simple-api/vits/mel_processing.py b/spaces/zhan66/vits-simple-api/vits/mel_processing.py deleted file mode 100644 index 3e252e76320522a8a4195a60665168f22769aec2..0000000000000000000000000000000000000000 --- a/spaces/zhan66/vits-simple-api/vits/mel_processing.py +++ /dev/null @@ -1,101 +0,0 @@ -import torch -import torch.utils.data -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - 
global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git a/spaces/zhang-wei-jian/docker/README.md b/spaces/zhang-wei-jian/docker/README.md deleted file mode 100644 index 6ed274b8c57d4fbc58050f726a390a44ffc47dbd..0000000000000000000000000000000000000000 --- a/spaces/zhang-wei-jian/docker/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Docker -emoji: 🌍 -colorFrom: red -colorTo: red -sdk: docker -pinned: false -app_port: 8888 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/zhoupin30/zhoupin30/src/components/ui/badge.tsx b/spaces/zhoupin30/zhoupin30/src/components/ui/badge.tsx deleted file mode 100644 index d9a84b394090e5b4b3bd34f6135b9a2f2ead0aa2..0000000000000000000000000000000000000000 --- a/spaces/zhoupin30/zhoupin30/src/components/ui/badge.tsx +++ /dev/null @@ -1,36 +0,0 @@ -import * as React from 'react' -import { cva, type VariantProps } from 'class-variance-authority' - -import { cn } from '@/lib/utils' - -const badgeVariants = cva( - 'inline-flex items-center rounded-full border px-2.5 py-0.5 text-xs font-semibold transition-colors focus:outline-none focus:ring-2 focus:ring-ring focus:ring-offset-2', - { - variants: { - variant: { - default: - 'border-transparent bg-primary text-primary-foreground hover:bg-primary/80', - secondary: - 'border-transparent bg-secondary text-secondary-foreground hover:bg-secondary/80', - destructive: - 'border-transparent bg-destructive text-destructive-foreground hover:bg-destructive/80', - outline: 'text-foreground' - } - }, - defaultVariants: { - variant: 'default' - } - } -) - -export interface BadgeProps - extends React.HTMLAttributes, - VariantProps {} - -function Badge({ className, variant, ...props }: BadgeProps) { - return ( -
      - ) -} - -export { Badge, badgeVariants }
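A minimal usage sketch for the Badge component above, assuming the space keeps the '@/components/ui/badge' path alias from its tsconfig; the four variant names come directly from badgeVariants, while the demo component name and layout classes are illustrative only:

import { Badge } from '@/components/ui/badge'

// Renders one badge per variant defined in badgeVariants
// (default, secondary, destructive, outline).
export function BadgeDemo() {
  return (
    <div className="flex items-center gap-2">
      <Badge>Default</Badge>
      <Badge variant="secondary">Secondary</Badge>
      <Badge variant="destructive">Destructive</Badge>
      <Badge variant="outline">Outline</Badge>
    </div>
  )
}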