diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/!!LINK!! Download Archexteriors Vol 18 Torrent 33.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/!!LINK!! Download Archexteriors Vol 18 Torrent 33.md deleted file mode 100644 index 16592b0a5dd0ca6746110564d6c2bff4d28c8067..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/!!LINK!! Download Archexteriors Vol 18 Torrent 33.md +++ /dev/null @@ -1,62 +0,0 @@ - -

Download Archexteriors Vol 18 Torrent 33: A Guide for 3D Artists

-

If you are a 3D artist looking for high-quality, realistic architectural templates for your projects, you might be interested in downloading Archexteriors Vol 18 torrent 33. This is a collection of ten fully modeled and textured 3D exteriors with complete lighting and three camera setups for every scene, created by Evermotion, a leading company in the field of 3D modeling and rendering. In this article, we will show you what Archexteriors Vol 18 is, what its features and benefits are, how to download it via torrent, how to use it in your 3D projects, and answer some FAQs that you might have. Let's get started!

-

Download Archexteriors Vol 18 Torrent 33


Download ✸✸✸ https://byltly.com/2uKwji



-

What is Archexteriors Vol 18?

-

Archexteriors Vol 18 is a collection of architectural templates that consists of ten fully modeled and textured 3D exteriors with complete lighting and three camera setups for every scene. It is part of the Archexteriors series by Evermotion, which offers various collections of outdoor environments for different purposes and styles. You can find more information about Archexteriors Vol 18 on its official website or on Trinity3D, where you can also purchase it for $60.

-

What are the features and benefits of Archexteriors Vol 18?

-

Archexteriors Vol 18 has many features and benefits that make it a great choice for any 3D artist who wants to create stunning outdoor scenes. Here are some of them:

- -

How to download Archexteriors Vol 18 torrent 33?

-

If you want to download Archexteriors Vol 18 torrent 33, you need to follow these steps:

-
    -
  1. Find a reliable torrent website that offers Archexteriors Vol 18 torrent 33. There are many torrent websites on the internet, but not all of them are trustworthy or safe. Some of them might have fake or malicious files, or expose you to legal risks. Therefore, you need to do some research and find a reputable torrent website that has Archexteriors Vol 18 torrent 33 available. Some examples of popular torrent websites are The Pirate Bay, RARBG, and 1337x. However, we do not endorse or recommend any of these websites, and you should use them at your own risk.
  2. -
  3. Download a torrent client that can handle Archexteriors Vol 18 torrent 33. A torrent client is a program that allows you to download and upload files using the BitTorrent protocol. You need a torrent client to download Archexteriors Vol 18 torrent 33 from the torrent website. There are many torrent clients to choose from, but some of the most popular ones are uTorrent, BitTorrent, and qBittorrent. Again, we do not endorse or recommend any of these programs, and you should use them at your own discretion.
  4. -
  5. Open the torrent file or magnet link of Archexteriors Vol 18 torrent 33 with your torrent client. Once you have found a reliable torrent website and downloaded a torrent client, you can proceed to download Archexteriors Vol 18 torrent 33. You can either download the torrent file, which is a small file that contains information about the files you want to download, or use the magnet link, which is a URL that does the same thing without requiring a file. You can then open the torrent file or magnet link with your torrent client, and it will start downloading Archexteriors Vol 18 torrent 33 to your computer.
  6. -
  7. Wait for the download to finish and verify the files. Depending on the size of Archexteriors Vol 18 torrent 33, your internet speed, and the number of seeders and leechers (people who have or want the files), the download might take some time. You can check the progress and status of your download in your torrent client. Once the download is complete, you should verify the files to make sure they are not corrupted or infected. You can use a file manager or antivirus software to do this.
  8. -
  9. Extract the files and install Archexteriors Vol 18 on your computer. After verifying the files, you need to extract them from the compressed folder they are in. You can use a tool like WinRAR or 7-Zip to do this. Then, you need to install Archexteriors Vol 18 on your computer by following the instructions provided by Evermotion. You might need to enter a license key or activate the product online. (The verification and extraction steps are sketched as shell commands right after this list.)
  10. -
-
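If you are comfortable in a terminal, the verification and extraction steps above can also be scripted. Below is a minimal sketch for Linux (on macOS, shasum -a 256 plays the same role); the archive name is a placeholder for whatever the torrent actually delivers, and the checksum comparison is only meaningful if the uploader publishes an official hash:

```shell
# Print the archive's SHA-256 so it can be compared against a published hash.
# The file name is a placeholder for your actual download.
sha256sum archexteriors-vol-18.zip

# Extract with 7-Zip's command-line tool; "x" keeps the folder structure
# stored inside the archive, and -o sets the output folder (no space after -o).
7z x archexteriors-vol-18.zip -oarchexteriors-vol-18
```

This does not replace an antivirus scan, but it catches corrupted or tampered downloads before you install anything.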

How to use Archexteriors Vol 18 in your 3D projects?

-

Now that you have downloaded and installed Archexteriors Vol 18 on your computer, you can start using it in your 3D projects. Here are some steps on how to do that:

-

-
    -
  1. Open your 3D software and import one of the scenes from Archexteriors Vol 18. The scenes ship as native 3ds Max (.max) files with V-Ray materials, so for optimal results we recommend using V-Ray with 3ds Max; other packages such as Blender, Maya, or SketchUp can only use the scenes after they have been converted to a format they support. To import one of the scenes in 3ds Max, go to File > Import > Merge and select one of the .max files from the Archexteriors Vol 18 folder. You will see a list of objects and materials that you can merge into your scene. You can select all of them or only the ones you need.
  2. -
  3. Adjust the scale, position, and orientation of the scene to fit your project. Depending on the size and dimensions of your project, you might need to adjust the scale, position, and orientation of the scene from Archexteriors Vol 18. You can use the tools and commands in your 3D software to do this. For example, in 3ds Max, you can use the Scale, Move, and Rotate tools, or the Transform Type-In dialog box.
  4. -
  5. Replace the placeholder building model with your own building model. The scenes from Archexteriors Vol 18 come with a placeholder building model that you can replace with your own building model. To do this, you need to delete or hide the placeholder model, and import or merge your own model into the scene. You can then adjust the scale, position, and orientation of your model to match the scene. You can also apply materials and textures to your model if needed.
  6. -
  7. Customize the lighting and camera settings of the scene according to your preferences. The scenes from Archexteriors Vol 18 come with complete lighting and three cameras setups for every scene. However, you can customize them according to your preferences. You can change the intensity, color, direction, and type of the lights, or add new lights if needed. You can also change the focal length, aperture, exposure, and angle of the cameras, or add new cameras if needed. You can use the tools and commands in your 3D software to do this. For example, in 3ds Max, you can use the Light Lister and Camera Lister dialogs.
  8. -
  9. Render the scene and save the image file. Once you are satisfied with your scene, you can render it using V-Ray or any other rendering engine that supports V-Ray materials and proxies. You can adjust the render settings according to your desired quality and speed. You can then save the image file in any format that you want. You can use the tools and commands in your 3D software to do this. For example, in 3ds Max, you can use the Render Setup dialog and the Save Image dialog.
  10. -
-

How to optimize the performance and quality of Archexteriors Vol 18 scenes?

-

Archexteriors Vol 18 scenes are designed to be realistic and detailed, but they can also be demanding on your computer resources and rendering time. Therefore, you might want to optimize them for better performance and quality. Here are some tips and tricks on how to do that:

- -

How to create stunning projects with Archexteriors Vol 18?

-

Archexteriors Vol 18 is a great tool for creating stunning projects with realistic and detailed outdoor scenes. However, you can also enhance your projects with some creativity and imagination. Here are some examples of projects made with Archexteriors Vol 18 that might inspire you:

- -

Conclusion

-

In conclusion, Archexteriors Vol 18 is a collection of architectural templates consisting of ten fully modeled and textured 3D exteriors with complete lighting and three camera setups for every scene. It is a great choice for any 3D artist who wants to create stunning outdoor scenes for villas, houses, and small and medium buildings, mostly with natural surroundings. It is prepared for V-Ray 2.0 with 3ds Max 2010, but it can also be used with other software and engines. It can be downloaded via torrent from various websites, though you need to be careful about their reliability and safety. Once installed, you can import, customize, and render the scenes in your 3D software, optimize them for better performance and quality by using proxies, instancing, low-poly models, adaptive subdivision, and render elements, and enhance them with some creativity and imagination by adding your own building models, changing the lighting and camera settings, or modifying the materials and textures.

-

FAQs

-

Here are some FAQs that you might have about Archexteriors Vol 18:

-
    -
  1. Is Archexteriors Vol 18 compatible with other versions of V-Ray or 3ds Max? Yes, Archexteriors Vol 18 is compatible with other versions of V-Ray or 3ds Max, but you might need to convert or tweak some files or settings to make them work properly.
  2. -
  3. Can I use Archexteriors Vol 18 for commercial purposes? Yes, you can use Archexteriors Vol 18 for commercial purposes as long as you have purchased a license from Evermotion or Trinity3D, and you have followed their terms and conditions. You can find more information about the license agreement on their websites.
  4. -
  5. Can I modify or edit Archexteriors Vol 18 scenes? Yes, you can modify or edit Archexteriors Vol 18 scenes as much as you want, as long as you do not resell or redistribute them. You can change the lighting, camera, material, texture, or geometry of the scenes, or add your own objects or models to them.
  6. -
  7. Can I use Archexteriors Vol 18 with other Archexteriors collections? Yes, you can use Archexteriors Vol 18 with other Archexteriors collections, as long as they are compatible with V-Ray and 3ds Max. You can mix and match different scenes from different collections, or use elements from one collection in another scene. However, you might need to adjust the scale, position, orientation, lighting, camera, material, texture, or geometry of the scenes or elements to make them fit together.
  8. -
  9. Can I get support or help for Archexteriors Vol 18? Yes, you can get support or help for Archexteriors Vol 18 from Evermotion or Trinity3D, depending on where you purchased it from. You can contact them via email, phone, or online chat. You can also find some tutorials, tips, and FAQs on their websites or on their YouTube channels.
  10. -

-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Adjustment Program For Epson Pm245 467.md b/spaces/1gistliPinn/ChatGPT4/Examples/Adjustment Program For Epson Pm245 467.md deleted file mode 100644 index 35a8b8b216d6a16bc6bf2e55350d17226714c419..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Adjustment Program For Epson Pm245 467.md +++ /dev/null @@ -1,6 +0,0 @@ -

Adjustment Program For Epson Pm245 467


DOWNLOAD https://imgfil.com/2uy1XI



- -
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/AutoCAD LT 2010 64bit Keygen Xforce __TOP__.md b/spaces/1gistliPinn/ChatGPT4/Examples/AutoCAD LT 2010 64bit Keygen Xforce __TOP__.md deleted file mode 100644 index dbf0a2dff50b173d573936627764cb170c48025d..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/AutoCAD LT 2010 64bit Keygen Xforce __TOP__.md +++ /dev/null @@ -1,6 +0,0 @@ -

AutoCAD LT 2010 64bit Keygen Xforce


Download File ->>->>->> https://imgfil.com/2uy1SU



-
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Avg Tuneup 2019 Full V19.1 Build 1158 Multilingual Key Free Download BETTER.md b/spaces/1gistliPinn/ChatGPT4/Examples/Avg Tuneup 2019 Full V19.1 Build 1158 Multilingual Key Free Download BETTER.md deleted file mode 100644 index fbf534ce2253015a4bc10b1c02210ed0797e1f0c..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Avg Tuneup 2019 Full V19.1 Build 1158 Multilingual Key Free Download BETTER.md +++ /dev/null @@ -1,6 +0,0 @@ -

Avg tuneup 2019 full v19.1 Build 1158 Multilingual Key Free Download


Download Zip ✫✫✫ https://imgfil.com/2uy28E



-
-
-

diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Bus Simulator 2023 APK OBB Play with Friends Online and Chat in Coop Bus Routes.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Bus Simulator 2023 APK OBB Play with Friends Online and Chat in Coop Bus Routes.md deleted file mode 100644 index 54c073cc1dda0ed3a932c9037a4efa9eed0f29c4..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Bus Simulator 2023 APK OBB Play with Friends Online and Chat in Coop Bus Routes.md +++ /dev/null @@ -1,148 +0,0 @@ - -

Bus Simulator 2023 APK + OBB Download: How to Install and Play the Latest Bus Driving Game

-

Do you love driving buses and transporting passengers in realistic environments? If yes, then you might want to check out Bus Simulator 2023, the latest bus simulation game for Android and PC. In this article, we will show you how to download and install Bus Simulator 2023 APK + OBB on your devices, as well as give you a brief review of the game's features, pros and cons.

-

What is Bus Simulator 2023?

-

Bus Simulator 2023 is a bus driving game that puts you in the driver's seat and lets you become a real bus driver. The game features detailed maps all over the world, modern buses with realistic interiors and a groundbreaking 1:1 physics engine. You can drive various types of buses, such as diesel, hybrid, electric, articulated, coach and school buses, and customize them as you wish. You can also explore different cities from around the world in career mode, freeride mode or online multiplayer mode with friends.

-

bus simulator 2023 apk + obb download


Download > https://urlin.us/2uSVSa



-

Features of Bus Simulator 2023

-

Some of the features that make Bus Simulator 2023 stand out from other bus simulation games are:

- -

System Requirements for Bus Simulator 2023

-

Before you download and install Bus Simulator 2023 APK + OBB on your device, you should make sure that your device meets the minimum system requirements for the game. According to the official website of the game, the minimum system requirements are:

| Device | OS | RAM | Storage | Processor | Graphics |
| --- | --- | --- | --- | --- | --- |
| Android | Android 5.0 or higher | 2 GB or more | 1 GB or more | Quad-core 1.5 GHz or higher | Mali-T720 MP2 or higher |
| PC Windows | Windows 7 or higher | 4 GB or more | 2 GB or more | Dual-core 2.4 GHz or higher | NVIDIA GeForce GTX 550 Ti or higher |
-

If your device meets these requirements, you can proceed to download and install Bus Simulator 2023 APK + OBB on your device.

-

How to Download and Install Bus Simulator 2023 APK + OBB on Android Devices

-

To download and install Bus Simulator 2023 APK + OBB on your Android device, you need to follow these steps:

-

Step 1: Download the APK and OBB files from a trusted source

-

The first step is to download the APK and OBB files of Bus Simulator 2023 from a trusted source. You can find many websites that offer these files for free, but you should be careful about the quality and security of the files. Some websites may contain malware, viruses or fake files that can harm your device or steal your data. Therefore, we recommend using a reliable website that has positive reviews and ratings from other users. For example, you can use this link to download the APK and OBB files of Bus Simulator 2023.

-

-

Step 2: Enable installation from unknown sources on your device

-

The second step is to enable installation from unknown sources on your device. This is because Android devices do not allow installation of apps from sources other than the Google Play Store by default. To enable installation from unknown sources, you need to go to Settings > Security > Unknown Sources and toggle it on. This will allow you to install apps from sources other than the Google Play Store.

-

Step 3: Install the APK file and extract the OBB file to the Android/obb folder

-

The third step is to install the APK file and extract the OBB file to the Android/obb folder on your device. To do this, you need to locate the downloaded APK file on your device using a file manager app and tap on it to start the installation process. Follow the instructions on the screen to complete the installation. Then, you need to locate the downloaded OBB file on your device using a file manager app and extract it using a zip extractor app. You will get a folder named com.bus.simulator2023. Copy this folder and paste it into the Android/obb folder on your device.
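If your phone is connected to a computer with USB debugging enabled, the same install-and-copy sequence can also be done with adb instead of an on-device file manager. This is an optional sketch, not the only way to do it: the APK file name is a placeholder for your actual download, while com.bus.simulator2023 is the folder name produced by the extraction step above:

```shell
# Install the downloaded APK over USB (assumes adb is set up and the
# device has authorized this computer).
adb install bus-simulator-2023.apk

# Push the extracted OBB folder into the shared obb directory.
adb push com.bus.simulator2023 /sdcard/Android/obb/com.bus.simulator2023
```

Either route ends in the same place: the game just needs its OBB data under Android/obb before the first launch.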

-

Step 4: Launch the game and enjoy

-

The final step is to launch the game and enjoy it. To do this, you need to go to your app drawer and tap on the Bus Simulator 2023 icon to start the game. You will see a loading screen and then a main menu with different options. You can choose your preferred mode, bus, map and settings and start driving your bus in realistic environments.

-

How to Download and Install Bus Simulator 2023 on PC Windows

-

If you want to play Bus Simulator 2023 on your PC Windows, you need to follow these steps:

-

Step 1: Download and install an Android emulator on your PC

-

The first step is to download and install an Android emulator on your PC. An Android emulator is a program that allows you to run Android apps and games on a Windows PC. There are many Android emulators available for Windows, such as BlueStacks, NoxPlayer, MEmu, LDPlayer 9, etc. You can choose any emulator that suits your PC specifications and preferences. You can download and install an emulator from its official website or from a trusted source. Follow the instructions on the screen to complete the installation.

-

Step 2: Download the APK and OBB files from a trusted source

-

The second step is to download the APK and OBB files of Bus Simulator 2023 from a trusted source. You can use the same link as mentioned above for Android devices, or you can search for another source that offers these files for free. Make sure that the source is reliable and safe, and that the files are compatible with your emulator.

-

Step 3: Install the APK file and copy the OBB file to the emulator's Android/obb folder

-

The third step is to install the APK file and copy the OBB file to the emulator's Android/obb folder on your PC. To do this, you need to open your emulator and locate the downloaded APK file using the built-in file manager or browser. Tap on it to start the installation process. Follow the instructions on the screen to complete the installation. Then, you need to locate the downloaded OBB file on your PC using a file manager or browser. Extract it using a zip extractor app. You will get a folder named com.bus.simulator2023. Copy this folder and paste it into the emulator's Android/obb folder on your PC.

-

Step 4: Launch the game and enjoy

-

The final step is to launch the game and enjoy it. To do this, you need to go to your emulator's app drawer and tap on the Bus Simulator 2023 icon to start the game. You will see a loading screen and then a main menu with different options. You can choose your preferred mode, bus, map and settings and start driving your bus in realistic environments.

-

Bus Simulator 2023 Review: Pros and Cons

-

Now that you know how to download and install Bus Simulator 2023 APK + OBB on your devices, you might be wondering how good the game itself is. Well, like any other game, Bus Simulator 2023 has its pros and cons, which we will discuss below.

-

Pros

-

Some of the pros of Bus Simulator 2023 are:

- -

Cons

-

Some of the cons of Bus Simulator 2023 are:

- -

Conclusion

-

Bus Simulator 2023 is a bus driving game that offers a realistic and immersive experience of driving a bus in various locations around the world. The game has many features, such as realistic graphics and physics, variety of buses and maps, career, free-ride and multiplayer modes, customizable buses and interiors, intelligent traffic system and passengers, etc. However, the game also has some drawbacks, such as buggy AI behavior and glitches, ugly and overwhelming UI, and an acquired taste for some gamers. Overall, Bus Simulator 2023 is a game that can appeal to bus enthusiasts and simulation fans, but may not be suitable for everyone.

-

FAQs

-

Here are some frequently asked questions about Bus Simulator 2023:

-
    -
  1. Is Bus Simulator 2023 free to play?
  2. -

    Yes, Bus Simulator 2023 is free to play on Android devices. However, the game may contain ads and in-app purchases that can enhance your gameplay or unlock more features.

    -
  3. Is Bus Simulator 2023 available on iOS devices?
  4. -

    No, Bus Simulator 2023 is not available on iOS devices at the moment. The game is only compatible with Android devices and PC Windows.

    -
  5. How can I play Bus Simulator 2023 with a controller?
  6. -

    You can play Bus Simulator 2023 with a controller by connecting your controller to your device via Bluetooth or USB. You can also use an emulator on your PC Windows to play the game with a controller.

    -
  7. How can I update Bus Simulator 2023 to the latest version?
  8. -

    You can update Bus Simulator 2023 to the latest version by downloading and installing the latest APK file from a trusted source. You can also check for updates from within the game or from the Google Play Store.

    -
  9. How can I contact the developers of Bus Simulator 2023?
  10. -

    You can contact the developers of Bus Simulator 2023 by sending them an email at support@bussimulator2023.com or by visiting their official website at www.bussimulator2023.com.

    -

-
-
\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Candy Crush Saga APK The Most Downloaded Game on Google Play.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Candy Crush Saga APK The Most Downloaded Game on Google Play.md deleted file mode 100644 index aa5b342f8110b099f59d956b65b7d746cb7cc0e4..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Candy Crush Saga APK The Most Downloaded Game on Google Play.md +++ /dev/null @@ -1,117 +0,0 @@ - -

Candy Crush Saga APK: How to Download and Play the Sweetest Game Ever

-

If you are looking for a fun and addictive game that will keep you entertained for hours, you might want to try Candy Crush Saga. This game is one of the most popular and successful puzzle games ever created, with millions of players around the world. But what if you want to play it on your Android device without using Google Play Store? In this article, we will show you how to download and install Candy Crush Saga APK, as well as how to play it and enjoy its features.

-

What is Candy Crush Saga?

-

Candy Crush Saga is a game that splendidly tackles the match-3 genre. It was developed by King, a leading mobile game developer, and released in 2012. Since then, it has become a global phenomenon, with over a billion downloads and hundreds of levels to complete.

-

candy crush saga apk


Download Zip https://urlin.us/2uSUaR



-

The gameplay of Candy Crush Saga

-

The gameplay of Candy Crush Saga is simple but challenging. You have to match three or more candies of the same color in a row or column to clear them from the board. You can also create special candies by matching four or more candies in different shapes, such as striped, wrapped, or color bomb candies. These special candies can help you clear more candies and score more points.

-

Each level has a different objective and a limited number of moves or time. You have to achieve the objective before running out of moves or time, or else you will lose a life. You can also earn stars based on your score, which can unlock new episodes and features. Some levels also have obstacles, such as chocolate, jelly, licorice, or blockers, that make the game more difficult.

-

The features of Candy Crush Saga

-

Candy Crush Saga has many features that make it an enjoyable and rewarding game. Some of these features are:

-

- -

Why download Candy Crush Saga APK?

-

Candy Crush Saga is available on Google Play Store for free, but there are some reasons why you might want to download its APK file instead. APK stands for Android Package Kit, which is a file format that contains all the elements needed to install an app on an Android device. By downloading an APK file, you can enjoy some benefits that are not possible with the official version.

-

The benefits of downloading Candy Crush Saga APK

-

Some of the benefits of downloading Candy Crush Saga APK are:

- -

The risks of downloading Candy Crush Saga APK

-

However, downloading Candy Crush Saga APK also comes with some risks that you should be aware of. Some of these risks are:

- -

Therefore, you should always download Candy Crush Saga APK from a trusted and reputable source, and scan it with an antivirus software before installing it. You should also backup your data and progress before using the APK file, and use it at your own risk.

-

How to download and install Candy Crush Saga APK?

-

If you have decided to download and install Candy Crush Saga APK, you will need to follow some simple steps. Here is a guide on how to do it:

-

Step 1: Find a reliable source for the APK file

-

The first step is to find a website that offers the APK file for Candy Crush Saga. You can search for it on Google or use a dedicated APK website, such as APKPure, APKMirror, or Uptodown. Make sure that the website is safe and secure, and that the APK file is updated and verified. You can also check the reviews and ratings of the APK file from other users.

-

Step 2: Enable unknown sources on your device

-

The next step is to enable unknown sources on your device, which will allow you to install apps from sources other than Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on. You might also need to grant permission to your browser or file manager to install apps from unknown sources.

-

Step 3: Download and install the APK file

-

The final step is to download and install the APK file on your device. To do this, go to the website where you found the APK file and tap on the download button. Once the download is complete, open the file and tap on install. Wait for the installation process to finish and then launch the game.
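If you would rather side-load from a computer, the same installation can be driven with adb. A brief sketch, assuming USB debugging is enabled; the file name is a placeholder for your actual download:

```shell
# Install the downloaded APK from a computer over USB.
adb install candy-crush-saga.apk

# If an older version is already installed, add -r to replace it
# while keeping the app's data and progress.
adb install -r candy-crush-saga.apk
```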

-

How to play Candy Crush Saga APK?

-

Playing Candy Crush Saga APK is similar to playing the official version of the game. You can log in with your Facebook account or play as a guest. You can also sync your progress and data with the official version if you have it installed on your device. However, you might encounter some issues or errors while playing the game, such as crashing, freezing, or lagging. If this happens, you can try clearing the cache and data of the game, updating the game, or reinstalling it.

-

Tips and tricks for playing Candy Crush Saga APK

-

If you want to master Candy Crush Saga APK and complete all the levels, you might need some tips and tricks to help you out. Here are some of them:

- -

How to update Candy Crush Saga APK

-

To keep playing Candy Crush Saga APK without any problems, you should always update it to the latest version. To do this, you can either check for updates on the website where you downloaded the APK file, or use an app updater tool, such as ApkUpdater or ApkTrack. These tools will notify you when there is a new version available and let you download and install it easily.

-

Conclusion

-

Candy Crush Saga is a game that will make you fall in love with its sweet and colorful world. It is a game that will challenge your mind and skills with its thousands of levels and modes. It is a game that will connect you with millions of other players who share your passion for candy crushing. And it is a game that you can play on your Android device without using Google Play Store by downloading its APK file.

-

In this article, we have shown you what Candy Crush Saga is, why you might want to download its APK file, how to download and install it, how to play it, and some tips and tricks to help you succeed. We hope that this article has been helpful and informative for you, and that you will enjoy playing Candy Crush Saga APK on your device.

-

Before we end this article, here are some frequently asked questions that you might have about Candy Crush Saga APK:

-

FAQs

-
    -
  1. Is Candy Crush Saga APK safe to download and install?
  2. -

    Yes, Candy Crush Saga APK is safe to download and install, as long as you get it from a reliable and reputable source. You should also scan the APK file with an antivirus software before installing it, and backup your data and progress before using it.

    -
  3. Is Candy Crush Saga APK free to play?
  4. -

    Yes, Candy Crush Saga APK is free to play, but it also offers in-app purchases that can enhance your gaming experience. You can buy extra lives, gold bars, boosters, or other items with real money. However, you can also play the game without spending any money, as there are many ways to get free rewards and bonuses.

    -
  5. How can I contact the developer of Candy Crush Saga APK?
  6. -

    If you have any questions, feedback, or issues regarding Candy Crush Saga APK, you can contact the developer of the game by visiting their website, https://king.com/, or by sending an email to candycrush.techhelp@king.com. You can also follow them on their social media accounts, such as Facebook, Twitter, Instagram, or YouTube.

    -
  7. Can I play Candy Crush Saga APK offline?
  8. -

    Yes, you can play Candy Crush Saga APK offline, but you will not be able to access some of its online features, such as connecting with Facebook friends, joining teams, competing on leaderboards, or participating in events and challenges. You will also need an internet connection to update the game or sync your progress and data with the official version.

    -
  9. Can I play Candy Crush Saga APK on other devices?
  10. -

    Yes, you can play Candy Crush Saga APK on other devices that support Android operating system, such as tablets or smart TVs. However, you might need to adjust the settings or resolution of the game to fit your device's screen size and performance.

    -

-
-
\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download FIFA 16 Ultimate Team Mod APK and Experience the Most Realistic Football Ever.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download FIFA 16 Ultimate Team Mod APK and Experience the Most Realistic Football Ever.md deleted file mode 100644 index ee443c610de4e61d9732cb71466a7adcaadd0b88..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download FIFA 16 Ultimate Team Mod APK and Experience the Most Realistic Football Ever.md +++ /dev/null @@ -1,139 +0,0 @@ - -

Download FIFA 16 Ultimate Team Mod Apk for Android

-

Are you a fan of soccer games and want to experience the thrill of playing with your favorite players and teams? If yes, then you should try FIFA 16 Ultimate Team, one of the most popular and realistic soccer games for Android devices. And if you want to unlock all the features and modes of the game, then you should download FIFA 16 Ultimate Team mod apk, which gives you unlimited coins, points, players, and more. In this article, we will tell you what is FIFA 16 Ultimate Team, why you should download its mod apk, and how to download and install it on your Android device.

-

download fifa 16 ultimate team mod apk


Download Zip --->>> https://urlin.us/2uSYtu



-

What is FIFA 16 Ultimate Team?

-

FIFA 16 Ultimate Team is a soccer game developed by EA Sports and released in 2015. It is the first game in the FIFA series to feature female players and teams. It also has improved graphics, gameplay, and modes compared to its predecessors. In FIFA 16 Ultimate Team, you can create your own dream team by choosing from over 10,000 players from over 500 licensed teams. You can also compete in various leagues, tournaments, and events to earn rewards and trophies. You can also customize your team's kits, badges, stadiums, and managers.

-

Features of FIFA 16 Ultimate Team

-

Some of the features of FIFA 16 Ultimate Team are:

- -

Why download FIFA 16 Ultimate Team mod apk?

-

While FIFA 16 Ultimate Team is a free-to-play game, it has some limitations and restrictions that can affect your gaming experience. For example, you need coins and points to buy players, items, packs, etc. You also need energy to play matches. You can earn these resources by playing the game or by spending real money. However, this can be time-consuming or expensive. That's why you should download FIFA 16 Ultimate Team mod apk, which gives you unlimited coins, points, energy, players, items, packs, etc. With this mod apk, you can enjoy the game without any worries or hassles. You can also unlock all the features and modes of the game that are otherwise locked or restricted.

-

-

How to download and install FIFA 16 Ultimate Team mod apk?

-

If you are interested in downloading and installing FIFA 16 Ultimate Team mod apk on your Android device, then you need to follow some simple steps. But before that, you need to make sure that your device meets some requirements.

-

Requirements for FIFA 16 Ultimate Team mod apk

-

The requirements for FIFA 16 Ultimate Team mod apk are:

- -

Steps to download and install FIFA 16 Ultimate Team mod apk

-

The steps to download and install FIFA 16 Ultimate Team mod apk are:

-

Step 1: Download the files

-

The first step is to download the FIFA 16 Ultimate Team mod apk file and the obb file from a reliable source. You can use the links given below to download them:

- -

Make sure you download both files and save them in a folder on your device.

-

Step 2: Extract the files

-

The next step is to extract the FIFA 16 Ultimate Team obb file using a zip extractor app. You can use any app that can extract zip files, such as ZArchiver, RAR, etc. To extract the file, follow these steps:

-
    -
  1. Open the zip extractor app and locate the FIFA 16 Ultimate Team obb file that you downloaded.
  2. -
  3. Select the file and tap on the extract option.
  4. -
  5. Choose a destination folder where you want to extract the file. You can create a new folder or use an existing one.
  6. -
  7. Wait for the extraction process to complete.
  8. -
-

After extracting the file, you will get a folder named "com.ea.gp.fifaworld". This folder contains the data of the game.
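If you downloaded the archive to a PC first, the same extraction can be done with 7-Zip's command line before transferring the folder to your phone. A sketch with a placeholder archive name:

```shell
# Extract the OBB archive; the resulting com.ea.gp.fifaworld folder is
# the one you will move to Android/obb in step 4.
7z x fifa16-obb-files.zip
```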

-

Step 3: Install the apk file

-

The third step is to install the FIFA 16 Ultimate Team mod apk file on your device. To do this, follow these steps:

-
    -
  1. Open the file manager app and locate the FIFA 16 Ultimate Team mod apk file that you downloaded.
  2. -
  3. Select the file and tap on it to start the installation process.
  4. -
  5. You may get a warning message that says "This type of file can harm your device". Ignore it and tap on "OK".
  6. -
  7. You may also get a prompt that says "For your security, your phone is not allowed to install unknown apps from this source". Tap on "Settings" and enable the option "Allow from this source".
  8. -
  9. Go back to the installation screen and tap on "Install".
  10. -
  11. Wait for the installation process to complete.
  12. -
-

After installing the apk file, you will see an icon of FIFA 16 Ultimate Team on your device's home screen or app drawer.

-

Step 4: Move the obb file

-

The fourth step is to move the FIFA 16 Ultimate Team obb folder that you extracted to the right location on your device. To do this, follow these steps:

-
    -
  1. Open the file manager app and locate the FIFA 16 Ultimate Team obb folder that you extracted. It should be named "com.ea.gp.fifaworld".
  2. -
  3. Select the folder and tap on the cut or move option.
  4. -
  5. Navigate to the following path on your device: Internal Storage > Android > obb. If you don't see an obb folder, create one.
  6. -
  7. Paste or move the FIFA 16 Ultimate Team obb folder in the obb folder.
  8. -
-

This step is important because it will allow the game to access its data and run properly.
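For readers comfortable with a command line, steps 3 and 4 can also be done from a computer with adb. A minimal sketch, assuming USB debugging is enabled; the APK file name is a placeholder, while com.ea.gp.fifaworld is the folder name from the extraction step:

```shell
# Install the APK over USB first (placeholder file name).
adb install fifa16-ultimate-team-mod.apk

# Then push the extracted OBB folder to Internal Storage > Android > obb.
adb push com.ea.gp.fifaworld /sdcard/Android/obb/com.ea.gp.fifaworld
```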

-

Step 5: Launch the game and enjoy

-

The final step is to launch FIFA 16 Ultimate Team mod apk on your device and enjoy playing with unlimited coins, points, players, items, packs, etc. To do this, follow these steps:

-
    -
  1. Tap on the FIFA 16 Ultimate Team icon on your device's home screen or app drawer.
  2. -
  3. You may get a message that says "Download failed because you may not have purchased this app". Ignore it and tap on "OK".
  4. -
  5. The game will start loading and verifying its data. Wait for it to finish.
  6. -
  7. You may also get a message that says "You need an internet connection for first time verification". Make sure you have a stable internet connection and tap on "Retry".
  8. -
  9. The game will launch and ask you to choose your language and accept some terms and conditions. Do so accordingly.
  10. -
  11. You will then see the main menu of FIFA 16 Ultimate Team mod apk. You can choose any mode or option you want and start playing.
  12. -
-

Conclusion

-

In this article, we have shown you how to download and install FIFA 16 Ultimate Team mod apk on your Android device. This mod apk will give you unlimited coins, points, players, items, packs, etc. and unlock all the features and modes of the game. You can enjoy playing with your favorite players and teams and compete in various leagues, tournaments, and events. FIFA 16 Ultimate Team mod apk is one of the best soccer games for Android devices and you should definitely try it out.

-

FAQs

-

Here are some frequently asked questions about FIFA 16 Ultimate Team mod apk:

-

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Enjoy the New Features of Lokicraft 1.18 0 APK on Your Android Device.md b/spaces/1phancelerku/anime-remove-background/Enjoy the New Features of Lokicraft 1.18 0 APK on Your Android Device.md deleted file mode 100644 index df6528a9c13653add8026bba930e4c358cabc6c5..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Enjoy the New Features of Lokicraft 1.18 0 APK on Your Android Device.md +++ /dev/null @@ -1,125 +0,0 @@ - -

Lokicraft 1.18 0 APK: Everything You Need to Know

-

If you are a fan of sandbox games, you might have heard of Lokicraft, a game inspired by the popular Minecraft. In this article, we will tell you everything you need to know about Lokicraft 1.18 0 APK, the latest version of the game that has some exciting new features and improvements. We will also show you how to download and install it on your Android device, and why you should give it a try.

-

lokicraft 1.18 0 apk


Download File ✓✓✓ https://jinyurl.com/2uNTZS



-

What is Lokicraft?

-

Lokicraft is a sandbox game that allows you to create your own world using various blocks and materials. You can explore, build, craft, and survive in different environments, such as forests, deserts, mountains, and caves. You can also play with other players online or offline, and share your creations with them.

-

A sandbox game inspired by Minecraft

-

Lokicraft is clearly influenced by Minecraft, one of the most popular and successful games of all time. The graphics, gameplay, and mechanics of Lokicraft are very similar to those of Minecraft, but with some differences and variations. For example, Lokicraft has more types of blocks and items than Minecraft, and some of them have unique functions and effects. Lokicraft also has more animals and creatures than Minecraft, some of which are friendly and some of which are hostile.

-

Features and gameplay of Lokicraft

-

Lokicraft has two main modes: creative mode and survival mode. In creative mode, you have unlimited resources and can build anything you want without any restrictions or dangers. In survival mode, you have to gather resources, craft tools and weapons, and protect yourself from enemies and environmental hazards. You also have to manage your hunger and health levels.

-

Lokicraft has many features that make it fun and engaging to play. Some of them are:

- -

What is new in Lokicraft 1.18 0 APK?

-

Lokicraft 1.18 0 APK is the latest version of the game that was released in June 2023. It has some new features and improvements that make it more enjoyable and immersive than ever. Here are some of them:

-


-

New biomes and blocks

-

Lokicraft 1.18 0 APK introduces two new biomes: the swamp biome and the snow biome. The swamp biome is a wetland area with water, mud, grass, vines, mushrooms, frogs, snakes, crocodiles, and other swampy creatures. The snow biome is a frozen area with snow, ice, pine trees, polar bears, penguins, snowmen, and other snowy creatures.

-

Lokicraft 1.18 0 APK also adds some new blocks and items that are related to these biomes. For example, you can find mud blocks, vine blocks, mushroom blocks, crocodile eggs, snow blocks, ice blocks, pine cones, and snowballs in these biomes. You can use them to craft new items and decorations, such as mushroom soup, vine ladders, mud bricks, ice sculptures, snow forts, and snowballs.

-

Improved graphics and performance

-

Lokicraft 1.18 0 APK also improves the graphics and performance of the game. The game now has more realistic lighting and shadows, smoother animations, and higher resolution textures. The game also runs faster and smoother on most devices, and has less bugs and glitches.

-

How to download and install Lokicraft 1.18 0 APK

-

If you want to download and install Lokicraft 1.18 0 APK on your Android device, you can follow these simple steps:

-
    -
  1. Go to the official website of Lokicraft or any trusted third-party source that provides the APK file.
  2. -
  3. Download the APK file to your device.
  4. -
  5. Enable the installation of apps from unknown sources in your device settings.
  6. -
  7. Locate the APK file in your device storage and tap on it to install it.
  8. -
  9. Launch the game and enjoy!
  10. -
-

Note: You may need to uninstall the previous version of Lokicraft before installing the new one.
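For readers comfortable with a command line, the note above translates into a short adb sequence. Both the APK file name and the package id below are hypothetical, since the article does not state Lokicraft's actual package name; verify the real id before uninstalling anything:

```shell
# Find the real package id first ("com.example.lokicraft" below is
# a hypothetical stand-in).
adb shell pm list packages | grep -i loki

# Remove the previous build, then install the freshly downloaded APK.
adb uninstall com.example.lokicraft
adb install lokicraft-1.18.0.apk
```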

-

Why should you play Lokicraft 1.18 0 APK?

-

Lokicraft 1.18 0 APK is a great game for anyone who loves sandbox games, creativity, and adventure. Here are some reasons why you should play it:

-

Pros and cons of Lokicraft

-

Lokicraft has many pros and cons that make it different from other sandbox games. Some of the pros are:

- -

Some of the cons are:

- -

Comparison with Minecraft

-

Lokicraft is often compared with Minecraft, as they are both sandbox games that share many similarities. However, they also have some differences that make them unique and appealing to different types of players. Here are some of the main differences between Lokicraft and Minecraft:

| Feature | Lokicraft | Minecraft |
| --- | --- | --- |
| Graphics | Pixelated and colorful | Pixelated and realistic |
| Blocks | More types and functions | Less types and functions |
| Items | More types and variations | Less types and variations |
| Animals | More types and behaviors | Less types and behaviors |
| Biomes | More types and diversity | Less types and diversity |
| Modes | Creative mode and survival mode | Creative mode, survival mode, adventure mode, hardcore mode, spectator mode |
| Multiplayer | Online mode and offline mode | Online mode only |
| World editor | Available for all players | Available for PC players only |
| Achievements | Available for all players | Available for PC players only |
| Price | Free to play | Paid to play |

Tips and tricks for playing Lokicraft

-

If you want to have more fun and success in playing Lokicraft, you can follow these tips and tricks:

- -

Conclusion

-

Lokicraft 1.18 0 APK is a sandbox game that lets you create your own world using blocks and materials. It is inspired by Minecraft, but has its own features and gameplay that make it unique and fun. It has two main modes: creative mode and survival mode. It also has a world editor, a multiplayer mode, and achievements. It has new biomes and blocks, improved graphics and performance, and is free to play.

-

Summary of the main points

-

To summarize, here are the main points of this article:

- -

Call to action for the readers

-

If you are interested in playing Lokicraft 1.18 0 APK, you can download it from the official website of Lokicraft or any trusted third-party source that provides the APK file. You can also follow Lokicraft on social media platforms to get the latest news and updates about the game. You can also share your feedback and suggestions with the developers and other players.

-

So what are you waiting for? Download Lokicraft 1.18 0 APK today and start creating your own world!

-

FAQs

-

Here are some frequently asked questions about Lokicraft 1.18 0 APK:

-

Q: Is Lokicraft 1.18 0 APK safe to download?

-

A: Yes, Lokicraft 1.18 0 APK is safe to download as long as you get it from a reliable source that does not contain any viruses or malware. However, you should always be careful when downloading any APK file from unknown sources, as they may harm your device or compromise your privacy.

-

Q: Is Lokicraft 1.18 0 APK compatible with my device?

-

A: Lokicraft 1.18 0 APK is compatible with most Android devices that have Android 4.4 or higher versions installed. However, some devices may have some compatibility issues or performance problems depending on their specifications or operating systems.

Q: How can I update Lokicraft 1.18.0 APK?

A: You can update Lokicraft 1.18.0 APK by downloading the latest version of the game from the official Lokicraft website or any trusted third-party source that provides the APK file. You may need to uninstall the previous version of Lokicraft before installing the new one.

Q: How can I contact the developers of Lokicraft?

A: You can contact the developers of Lokicraft by sending them an email at lokicraft@gmail.com or by visiting their website at www.lokicraft.com. You can also follow them on Facebook, Twitter, Instagram, YouTube, or TikTok to get the latest news and updates about the game.

Q: How can I support the developers of Lokicraft?

A: You can support the developers of Lokicraft by making a donation via PayPal or by purchasing some of their in-app products or services. You can also rate and review the game on the Google Play Store or any other platform where you downloaded it, share it with your friends and family, and invite them to play with you.

\ No newline at end of file diff --git a/spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/docs/install.md b/spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/docs/install.md deleted file mode 100644 index 6314a40441285e9236438e468caf8b71a407531a..0000000000000000000000000000000000000000 --- a/spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/docs/install.md +++ /dev/null @@ -1,51 +0,0 @@ -## v1.8.0 -### Linux and Windows -```shell -# CUDA 11.0 -pip --default-timeout=100 install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html - -# CUDA 10.2 -pip --default-timeout=100 install torch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 - -# CPU only -pip --default-timeout=100 install torch==1.8.0+cpu torchvision==0.9.0+cpu torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html - -``` - - -## v1.7.1 -### Linux and Windows -```shell -# CUDA 11.0 -pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html - -# CUDA 10.2 -pip install torch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 - -# CUDA 10.1 -pip install torch==1.7.1+cu101 torchvision==0.8.2+cu101 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html - -# CUDA 9.2 -pip install torch==1.7.1+cu92 torchvision==0.8.2+cu92 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html - -# CPU only -pip install torch==1.7.1+cpu torchvision==0.8.2+cpu torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html -``` - - -## v1.6.0 - -### Linux and Windows -```shell -# CUDA 10.2 -pip install torch==1.6.0 torchvision==0.7.0 - -# CUDA 10.1 -pip install torch==1.6.0+cu101 torchvision==0.7.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html - -# CUDA 9.2 -pip install torch==1.6.0+cu92 torchvision==0.7.0+cu92 -f https://download.pytorch.org/whl/torch_stable.html - -# CPU only -pip install torch==1.6.0+cpu torchvision==0.7.0+cpu -f https://download.pytorch.org/whl/torch_stable.html -``` \ No newline at end of file diff --git a/spaces/4Taps/SadTalker/src/face3d/util/preprocess.py b/spaces/4Taps/SadTalker/src/face3d/util/preprocess.py deleted file mode 100644 index b77a3a4058c208e5ba8cb1cfbb563954a5f7a3e2..0000000000000000000000000000000000000000 --- a/spaces/4Taps/SadTalker/src/face3d/util/preprocess.py +++ /dev/null @@ -1,103 +0,0 @@ -"""This script contains the image preprocessing code for Deep3DFaceRecon_pytorch -""" - -import numpy as np -from scipy.io import loadmat -from PIL import Image -import cv2 -import os -from skimage import transform as trans -import torch -import warnings -warnings.filterwarnings("ignore", category=np.VisibleDeprecationWarning) -warnings.filterwarnings("ignore", category=FutureWarning) - - -# calculating least square problem for image alignment -def POS(xp, x): - npts = xp.shape[1] - - A = np.zeros([2*npts, 8]) - - A[0:2*npts-1:2, 0:3] = x.transpose() - A[0:2*npts-1:2, 3] = 1 - - A[1:2*npts:2, 4:7] = x.transpose() - A[1:2*npts:2, 7] = 1 - - b = np.reshape(xp.transpose(), [2*npts, 1]) - - k, _, _, _ = np.linalg.lstsq(A, b) - - R1 = k[0:3] - R2 = k[4:7] - sTx = k[3] - sTy = k[7] - s = (np.linalg.norm(R1) + np.linalg.norm(R2))/2 - t = np.stack([sTx, sTy], axis=0) - - return t, s - -# resize and crop images for face reconstruction -def resize_n_crop_img(img, lm, t, s, target_size=224., mask=None): - w0, h0 = img.size - w = (w0*s).astype(np.int32) - h = (h0*s).astype(np.int32) - left = (w/2 - target_size/2 + float((t[0] - 
w0/2)*s)).astype(np.int32) - right = left + target_size - up = (h/2 - target_size/2 + float((h0/2 - t[1])*s)).astype(np.int32) - below = up + target_size - - img = img.resize((w, h), resample=Image.BICUBIC) - img = img.crop((left, up, right, below)) - - if mask is not None: - mask = mask.resize((w, h), resample=Image.BICUBIC) - mask = mask.crop((left, up, right, below)) - - lm = np.stack([lm[:, 0] - t[0] + w0/2, lm[:, 1] - - t[1] + h0/2], axis=1)*s - lm = lm - np.reshape( - np.array([(w/2 - target_size/2), (h/2-target_size/2)]), [1, 2]) - - return img, lm, mask - -# utils for face reconstruction -def extract_5p(lm): - lm_idx = np.array([31, 37, 40, 43, 46, 49, 55]) - 1 - lm5p = np.stack([lm[lm_idx[0], :], np.mean(lm[lm_idx[[1, 2]], :], 0), np.mean( - lm[lm_idx[[3, 4]], :], 0), lm[lm_idx[5], :], lm[lm_idx[6], :]], axis=0) - lm5p = lm5p[[1, 2, 0, 3, 4], :] - return lm5p - -# utils for face reconstruction -def align_img(img, lm, lm3D, mask=None, target_size=224., rescale_factor=102.): - """ - Return: - transparams --numpy.array (raw_W, raw_H, scale, tx, ty) - img_new --PIL.Image (target_size, target_size, 3) - lm_new --numpy.array (68, 2), y direction is opposite to v direction - mask_new --PIL.Image (target_size, target_size) - - Parameters: - img --PIL.Image (raw_H, raw_W, 3) - lm --numpy.array (68, 2), y direction is opposite to v direction - lm3D --numpy.array (5, 3) - mask --PIL.Image (raw_H, raw_W, 3) - """ - - w0, h0 = img.size - if lm.shape[0] != 5: - lm5p = extract_5p(lm) - else: - lm5p = lm - - # calculate translation and scale factors using 5 facial landmarks and standard landmarks of a 3D face - t, s = POS(lm5p.transpose(), lm3D.transpose()) - s = rescale_factor/s - - # processing the image - img_new, lm_new, mask_new = resize_n_crop_img(img, lm, t, s, target_size=target_size, mask=mask) - trans_params = np.array([w0, h0, s, t[0], t[1]]) - - return trans_params, img_new, lm_new, mask_new diff --git a/spaces/AIFILMS/generate_human_motion/VQ-Trans/dataset/dataset_VQ.py b/spaces/AIFILMS/generate_human_motion/VQ-Trans/dataset/dataset_VQ.py deleted file mode 100644 index 2342de946f2cbdf64729a5145168df1bdda54fa0..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/generate_human_motion/VQ-Trans/dataset/dataset_VQ.py +++ /dev/null @@ -1,109 +0,0 @@ -import torch -from torch.utils import data -import numpy as np -from os.path import join as pjoin -import random -import codecs as cs -from tqdm import tqdm - - - -class VQMotionDataset(data.Dataset): - def __init__(self, dataset_name, window_size = 64, unit_length = 4): - self.window_size = window_size - self.unit_length = unit_length - self.dataset_name = dataset_name - - if dataset_name == 't2m': - self.data_root = './dataset/HumanML3D' - self.motion_dir = pjoin(self.data_root, 'new_joint_vecs') - self.text_dir = pjoin(self.data_root, 'texts') - self.joints_num = 22 - self.max_motion_length = 196 - self.meta_dir = 'checkpoints/t2m/VQVAEV3_CB1024_CMT_H1024_NRES3/meta' - - elif dataset_name == 'kit': - self.data_root = './dataset/KIT-ML' - self.motion_dir = pjoin(self.data_root, 'new_joint_vecs') - self.text_dir = pjoin(self.data_root, 'texts') - self.joints_num = 21 - - self.max_motion_length = 196 - self.meta_dir = 'checkpoints/kit/VQVAEV3_CB1024_CMT_H1024_NRES3/meta' - - joints_num = self.joints_num - - mean = np.load(pjoin(self.meta_dir, 'mean.npy')) - std = np.load(pjoin(self.meta_dir, 'std.npy')) - - split_file = pjoin(self.data_root, 'train.txt') - - self.data = [] - self.lengths = [] - id_list = [] - with 
cs.open(split_file, 'r') as f: - for line in f.readlines(): - id_list.append(line.strip()) - - for name in tqdm(id_list): - try: - motion = np.load(pjoin(self.motion_dir, name + '.npy')) - if motion.shape[0] < self.window_size: - continue - self.lengths.append(motion.shape[0] - self.window_size) - self.data.append(motion) - except: - # Some motion may not exist in KIT dataset - pass - - - self.mean = mean - self.std = std - print("Total number of motions {}".format(len(self.data))) - - def inv_transform(self, data): - return data * self.std + self.mean - - def compute_sampling_prob(self) : - - prob = np.array(self.lengths, dtype=np.float32) - prob /= np.sum(prob) - return prob - - def __len__(self): - return len(self.data) - - def __getitem__(self, item): - motion = self.data[item] - - idx = random.randint(0, len(motion) - self.window_size) - - motion = motion[idx:idx+self.window_size] - "Z Normalization" - motion = (motion - self.mean) / self.std - - return motion - -def DATALoader(dataset_name, - batch_size, - num_workers = 8, - window_size = 64, - unit_length = 4): - - trainSet = VQMotionDataset(dataset_name, window_size=window_size, unit_length=unit_length) - prob = trainSet.compute_sampling_prob() - sampler = torch.utils.data.WeightedRandomSampler(prob, num_samples = len(trainSet) * 1000, replacement=True) - train_loader = torch.utils.data.DataLoader(trainSet, - batch_size, - shuffle=True, - #sampler=sampler, - num_workers=num_workers, - #collate_fn=collate_fn, - drop_last = True) - - return train_loader - -def cycle(iterable): - while True: - for x in iterable: - yield x diff --git a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/models/diffusion/classifier.py b/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/models/diffusion/classifier.py deleted file mode 100644 index 67e98b9d8ffb96a150b517497ace0a242d7163ef..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/models/diffusion/classifier.py +++ /dev/null @@ -1,267 +0,0 @@ -import os -import torch -import pytorch_lightning as pl -from omegaconf import OmegaConf -from torch.nn import functional as F -from torch.optim import AdamW -from torch.optim.lr_scheduler import LambdaLR -from copy import deepcopy -from einops import rearrange -from glob import glob -from natsort import natsorted - -from ldm.modules.diffusionmodules.openaimodel import EncoderUNetModel, UNetModel -from ldm.util import log_txt_as_img, default, ismap, instantiate_from_config - -__models__ = { - 'class_label': EncoderUNetModel, - 'segmentation': UNetModel -} - - -def disabled_train(self, mode=True): - """Overwrite model.train with this function to make sure train/eval mode - does not change anymore.""" - return self - - -class NoisyLatentImageClassifier(pl.LightningModule): - - def __init__(self, - diffusion_path, - num_classes, - ckpt_path=None, - pool='attention', - label_key=None, - diffusion_ckpt_path=None, - scheduler_config=None, - weight_decay=1.e-2, - log_steps=10, - monitor='val/loss', - *args, - **kwargs): - super().__init__(*args, **kwargs) - self.num_classes = num_classes - # get latest config of diffusion model - diffusion_config = natsorted(glob(os.path.join(diffusion_path, 'configs', '*-project.yaml')))[-1] - self.diffusion_config = OmegaConf.load(diffusion_config).model - self.diffusion_config.params.ckpt_path = diffusion_ckpt_path - self.load_diffusion() - - self.monitor = monitor - self.numd = self.diffusion_model.first_stage_model.encoder.num_resolutions - 1 - self.log_time_interval = 
self.diffusion_model.num_timesteps // log_steps - self.log_steps = log_steps - - self.label_key = label_key if not hasattr(self.diffusion_model, 'cond_stage_key') \ - else self.diffusion_model.cond_stage_key - - assert self.label_key is not None, 'label_key neither in diffusion model nor in model.params' - - if self.label_key not in __models__: - raise NotImplementedError() - - self.load_classifier(ckpt_path, pool) - - self.scheduler_config = scheduler_config - self.use_scheduler = self.scheduler_config is not None - self.weight_decay = weight_decay - - def init_from_ckpt(self, path, ignore_keys=list(), only_model=False): - sd = torch.load(path, map_location="cpu") - if "state_dict" in list(sd.keys()): - sd = sd["state_dict"] - keys = list(sd.keys()) - for k in keys: - for ik in ignore_keys: - if k.startswith(ik): - print("Deleting key {} from state_dict.".format(k)) - del sd[k] - missing, unexpected = self.load_state_dict(sd, strict=False) if not only_model else self.model.load_state_dict( - sd, strict=False) - print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys") - if len(missing) > 0: - print(f"Missing Keys: {missing}") - if len(unexpected) > 0: - print(f"Unexpected Keys: {unexpected}") - - def load_diffusion(self): - model = instantiate_from_config(self.diffusion_config) - self.diffusion_model = model.eval() - self.diffusion_model.train = disabled_train - for param in self.diffusion_model.parameters(): - param.requires_grad = False - - def load_classifier(self, ckpt_path, pool): - model_config = deepcopy(self.diffusion_config.params.unet_config.params) - model_config.in_channels = self.diffusion_config.params.unet_config.params.out_channels - model_config.out_channels = self.num_classes - if self.label_key == 'class_label': - model_config.pool = pool - - self.model = __models__[self.label_key](**model_config) - if ckpt_path is not None: - print('#####################################################################') - print(f'load from ckpt "{ckpt_path}"') - print('#####################################################################') - self.init_from_ckpt(ckpt_path) - - @torch.no_grad() - def get_x_noisy(self, x, t, noise=None): - noise = default(noise, lambda: torch.randn_like(x)) - continuous_sqrt_alpha_cumprod = None - if self.diffusion_model.use_continuous_noise: - continuous_sqrt_alpha_cumprod = self.diffusion_model.sample_continuous_noise_level(x.shape[0], t + 1) - # todo: make sure t+1 is correct here - - return self.diffusion_model.q_sample(x_start=x, t=t, noise=noise, - continuous_sqrt_alpha_cumprod=continuous_sqrt_alpha_cumprod) - - def forward(self, x_noisy, t, *args, **kwargs): - return self.model(x_noisy, t) - - @torch.no_grad() - def get_input(self, batch, k): - x = batch[k] - if len(x.shape) == 3: - x = x[..., None] - x = rearrange(x, 'b h w c -> b c h w') - x = x.to(memory_format=torch.contiguous_format).float() - return x - - @torch.no_grad() - def get_conditioning(self, batch, k=None): - if k is None: - k = self.label_key - assert k is not None, 'Needs to provide label key' - - targets = batch[k].to(self.device) - - if self.label_key == 'segmentation': - targets = rearrange(targets, 'b h w c -> b c h w') - for down in range(self.numd): - h, w = targets.shape[-2:] - targets = F.interpolate(targets, size=(h // 2, w // 2), mode='nearest') - - # targets = rearrange(targets,'b c h w -> b h w c') - - return targets - - def compute_top_k(self, logits, labels, k, reduction="mean"): - _, top_ks = torch.topk(logits, k, dim=1) - if 
reduction == "mean": - return (top_ks == labels[:, None]).float().sum(dim=-1).mean().item() - elif reduction == "none": - return (top_ks == labels[:, None]).float().sum(dim=-1) - - def on_train_epoch_start(self): - # save some memory - self.diffusion_model.model.to('cpu') - - @torch.no_grad() - def write_logs(self, loss, logits, targets): - log_prefix = 'train' if self.training else 'val' - log = {} - log[f"{log_prefix}/loss"] = loss.mean() - log[f"{log_prefix}/acc@1"] = self.compute_top_k( - logits, targets, k=1, reduction="mean" - ) - log[f"{log_prefix}/acc@5"] = self.compute_top_k( - logits, targets, k=5, reduction="mean" - ) - - self.log_dict(log, prog_bar=False, logger=True, on_step=self.training, on_epoch=True) - self.log('loss', log[f"{log_prefix}/loss"], prog_bar=True, logger=False) - self.log('global_step', self.global_step, logger=False, on_epoch=False, prog_bar=True) - lr = self.optimizers().param_groups[0]['lr'] - self.log('lr_abs', lr, on_step=True, logger=True, on_epoch=False, prog_bar=True) - - def shared_step(self, batch, t=None): - x, *_ = self.diffusion_model.get_input(batch, k=self.diffusion_model.first_stage_key) - targets = self.get_conditioning(batch) - if targets.dim() == 4: - targets = targets.argmax(dim=1) - if t is None: - t = torch.randint(0, self.diffusion_model.num_timesteps, (x.shape[0],), device=self.device).long() - else: - t = torch.full(size=(x.shape[0],), fill_value=t, device=self.device).long() - x_noisy = self.get_x_noisy(x, t) - logits = self(x_noisy, t) - - loss = F.cross_entropy(logits, targets, reduction='none') - - self.write_logs(loss.detach(), logits.detach(), targets.detach()) - - loss = loss.mean() - return loss, logits, x_noisy, targets - - def training_step(self, batch, batch_idx): - loss, *_ = self.shared_step(batch) - return loss - - def reset_noise_accs(self): - self.noisy_acc = {t: {'acc@1': [], 'acc@5': []} for t in - range(0, self.diffusion_model.num_timesteps, self.diffusion_model.log_every_t)} - - def on_validation_start(self): - self.reset_noise_accs() - - @torch.no_grad() - def validation_step(self, batch, batch_idx): - loss, *_ = self.shared_step(batch) - - for t in self.noisy_acc: - _, logits, _, targets = self.shared_step(batch, t) - self.noisy_acc[t]['acc@1'].append(self.compute_top_k(logits, targets, k=1, reduction='mean')) - self.noisy_acc[t]['acc@5'].append(self.compute_top_k(logits, targets, k=5, reduction='mean')) - - return loss - - def configure_optimizers(self): - optimizer = AdamW(self.model.parameters(), lr=self.learning_rate, weight_decay=self.weight_decay) - - if self.use_scheduler: - scheduler = instantiate_from_config(self.scheduler_config) - - print("Setting up LambdaLR scheduler...") - scheduler = [ - { - 'scheduler': LambdaLR(optimizer, lr_lambda=scheduler.schedule), - 'interval': 'step', - 'frequency': 1 - }] - return [optimizer], scheduler - - return optimizer - - @torch.no_grad() - def log_images(self, batch, N=8, *args, **kwargs): - log = dict() - x = self.get_input(batch, self.diffusion_model.first_stage_key) - log['inputs'] = x - - y = self.get_conditioning(batch) - - if self.label_key == 'class_label': - y = log_txt_as_img((x.shape[2], x.shape[3]), batch["human_label"]) - log['labels'] = y - - if ismap(y): - log['labels'] = self.diffusion_model.to_rgb(y) - - for step in range(self.log_steps): - current_time = step * self.log_time_interval - - _, logits, x_noisy, _ = self.shared_step(batch, t=current_time) - - log[f'inputs@t{current_time}'] = x_noisy - - pred = F.one_hot(logits.argmax(dim=1), 
num_classes=self.num_classes) - pred = rearrange(pred, 'b h w c -> b c h w') - - log[f'pred@t{current_time}'] = self.diffusion_model.to_rgb(pred) - - for key in log: - log[key] = log[key][:N] - - return log diff --git a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/CLAP/CLAPWrapper.py b/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/CLAP/CLAPWrapper.py deleted file mode 100644 index b26af847dcfdd314d10aa2c795362deac1e1fac7..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/CLAP/CLAPWrapper.py +++ /dev/null @@ -1,257 +0,0 @@ -import random -import torchaudio -from torch._six import string_classes -import collections -import re -import torch.nn.functional as F -import numpy as np -from transformers import AutoTokenizer -from ldm.modules.encoders.CLAP.utils import read_config_as_args -from ldm.modules.encoders.CLAP.clap import CLAP -import math -import torchaudio.transforms as T -import os -import torch -from importlib_resources import files - - -class CLAPWrapper(): - """ - A class for interfacing CLAP model. - """ - - def __init__(self, model_fp, device): - self.np_str_obj_array_pattern = re.compile(r'[SaUO]') - self.file_path = os.path.realpath(__file__) - self.default_collate_err_msg_format = ( - "default_collate: batch must contain tensors, numpy arrays, numbers, " - "dicts or lists; found {}") - self.config_as_str = files('ldm').joinpath('modules/encoders/CLAP/config.yml').read_text() - self.model_fp = model_fp - self.device = device - self.clap, self.tokenizer, self.args = self.load_clap() - - def load_clap(self): - r"""Load CLAP model with args from config file""" - - args = read_config_as_args(self.config_as_str, is_config_str=True) - - if 'bert' in args.text_model: - self.token_keys = ['input_ids', 'token_type_ids', 'attention_mask'] - else: - self.token_keys = ['input_ids', 'attention_mask'] - - clap = CLAP( - audioenc_name=args.audioenc_name, - sample_rate=args.sampling_rate, - window_size=args.window_size, - hop_size=args.hop_size, - mel_bins=args.mel_bins, - fmin=args.fmin, - fmax=args.fmax, - classes_num=args.num_classes, - out_emb=args.out_emb, - text_model=args.text_model, - transformer_embed_dim=args.transformer_embed_dim, - d_proj=args.d_proj - ) - - # Load pretrained weights for model - model_state_dict = torch.load(self.model_fp, map_location=torch.device('cpu'))['model'] - clap.load_state_dict(model_state_dict) - - clap.eval() # set clap in eval mode - tokenizer = AutoTokenizer.from_pretrained(args.text_model) - - clap = clap.to(self.device) - tokenizer = tokenizer.to(self.device) - - return clap, tokenizer, args - - def default_collate(self, batch): - r"""Puts each data field into a tensor with outer dimension batch size""" - elem = batch[0] - elem_type = type(elem) - if isinstance(elem, torch.Tensor): - out = None - if torch.utils.data.get_worker_info() is not None: - # If we're in a background process, concatenate directly into a - # shared memory tensor to avoid an extra copy - numel = sum([x.numel() for x in batch]) - storage = elem.storage()._new_shared(numel) - out = elem.new(storage) - return torch.stack(batch, 0, out=out) - elif elem_type.__module__ == 'numpy' and elem_type.__name__ != 'str_' \ - and elem_type.__name__ != 'string_': - if elem_type.__name__ == 'ndarray' or elem_type.__name__ == 'memmap': - # array of string classes and object - if self.np_str_obj_array_pattern.search(elem.dtype.str) is not None: - raise TypeError( - 
self.default_collate_err_msg_format.format(elem.dtype)) - - return self.default_collate([torch.as_tensor(b) for b in batch]) - elif elem.shape == (): # scalars - return torch.as_tensor(batch) - elif isinstance(elem, float): - return torch.tensor(batch, dtype=torch.float64) - elif isinstance(elem, int): - return torch.tensor(batch) - elif isinstance(elem, string_classes): - return batch - elif isinstance(elem, collections.abc.Mapping): - return {key: self.default_collate([d[key] for d in batch]) for key in elem} - elif isinstance(elem, tuple) and hasattr(elem, '_fields'): # namedtuple - return elem_type(*(self.default_collate(samples) for samples in zip(*batch))) - elif isinstance(elem, collections.abc.Sequence): - # check to make sure that the elements in batch have consistent size - it = iter(batch) - elem_size = len(next(it)) - if not all(len(elem) == elem_size for elem in it): - raise RuntimeError( - 'each element in list of batch should be of equal size') - transposed = zip(*batch) - return [self.default_collate(samples) for samples in transposed] - - raise TypeError(self.default_collate_err_msg_format.format(elem_type)) - - def load_audio_into_tensor(self, audio_path, audio_duration, resample=False): - r"""Loads audio file and returns raw audio.""" - # Randomly sample a segment of audio_duration from the clip or pad to match duration - audio_time_series, sample_rate = torchaudio.load(audio_path) - resample_rate = self.args.sampling_rate - if resample: - resampler = T.Resample(sample_rate, resample_rate) - audio_time_series = resampler(audio_time_series) - audio_time_series = audio_time_series.reshape(-1) - - # audio_time_series is shorter than predefined audio duration, - # so audio_time_series is extended - if audio_duration*sample_rate >= audio_time_series.shape[0]: - repeat_factor = int(np.ceil((audio_duration*sample_rate) / - audio_time_series.shape[0])) - # Repeat audio_time_series by repeat_factor to match audio_duration - audio_time_series = audio_time_series.repeat(repeat_factor) - # remove excess part of audio_time_series - audio_time_series = audio_time_series[0:audio_duration*sample_rate] - else: - # audio_time_series is longer than predefined audio duration, - # so audio_time_series is trimmed - start_index = random.randrange( - audio_time_series.shape[0] - audio_duration*sample_rate) - audio_time_series = audio_time_series[start_index:start_index + - audio_duration*sample_rate] - return torch.FloatTensor(audio_time_series) - - def preprocess_audio(self, audio_files, resample): - r"""Load list of audio files and return raw audio""" - audio_tensors = [] - for audio_file in audio_files: - audio_tensor = self.load_audio_into_tensor( - audio_file, self.args.duration, resample) - audio_tensor = audio_tensor.reshape(1, -1).to(self.device) - audio_tensors.append(audio_tensor) - return self.default_collate(audio_tensors) - - def preprocess_text(self, text_queries, text_len=100): - r"""Load list of class labels and return tokenized text""" - device = next(self.clap.parameters()).device - tokenized_texts = [] - for ttext in text_queries: - tok = self.tokenizer.encode_plus( - text=ttext, add_special_tokens=True, max_length=text_len, pad_to_max_length=True, return_tensors="pt") - for key in self.token_keys: - tok[key] = tok[key].reshape(-1).to(device) - tokenized_texts.append(tok) - return self.default_collate(tokenized_texts) - - def get_text_embeddings(self, class_labels): - r"""Load list of class labels and return text embeddings""" - preprocessed_text = 
self.preprocess_text(class_labels) - text_embeddings = self._get_text_embeddings(preprocessed_text) - text_embeddings = text_embeddings/torch.norm(text_embeddings, dim=-1, keepdim=True) - return text_embeddings - - def get_audio_embeddings(self, audio_files, resample): - r"""Load list of audio files and return a audio embeddings""" - preprocessed_audio = self.preprocess_audio(audio_files, resample) - audio_embeddings = self._get_audio_embeddings(preprocessed_audio) - audio_embeddings = audio_embeddings/torch.norm(audio_embeddings, dim=-1, keepdim=True) - return audio_embeddings - - def _get_text_embeddings(self, preprocessed_text): - r"""Load preprocessed text and return text embeddings""" - with torch.no_grad(): - text_embeddings = self.clap.caption_encoder(preprocessed_text) - text_embeddings = text_embeddings/torch.norm(text_embeddings, dim=-1, keepdim=True) - return text_embeddings - - def _get_audio_embeddings(self, preprocessed_audio): - r"""Load preprocessed audio and return a audio embeddings""" - with torch.no_grad(): - preprocessed_audio = preprocessed_audio.reshape( - preprocessed_audio.shape[0], preprocessed_audio.shape[2]) - #Append [0] the audio emebdding, [1] has output class probabilities - audio_embeddings = self.clap.audio_encoder(preprocessed_audio)[0] - audio_embeddings = audio_embeddings/torch.norm(audio_embeddings, dim=-1, keepdim=True) - return audio_embeddings - - def compute_similarity(self, audio_embeddings, text_embeddings): - r"""Compute similarity between text and audio embeddings""" - logit_scale = self.clap.logit_scale.exp() - similarity = logit_scale*text_embeddings @ audio_embeddings.T - return similarity.T - - def _generic_batch_inference(self, func, *args): - r"""Process audio and/or text per batch""" - input_tmp = args[0] - batch_size = args[-1] - # args[0] has audio_files, args[1] has class_labels - inputs = [args[0], args[1]] if len(args) == 3 else [args[0]] - args0_len = len(args[0]) - # compute text_embeddings once for all the audio_files batches - if len(inputs) == 2: - text_embeddings = self.get_text_embeddings(args[1]) - inputs = [args[0], args[1], text_embeddings] - dataset_idx = 0 - for _ in range(math.ceil(args0_len/batch_size)): - next_batch_idx = dataset_idx + batch_size - # batch size is bigger than available audio/text items - if next_batch_idx >= args0_len: - inputs[0] = input_tmp[dataset_idx:] - return func(*tuple(inputs)) - else: - inputs[0] = input_tmp[dataset_idx:next_batch_idx] - yield func(*tuple(inputs)) - dataset_idx = next_batch_idx - - def get_audio_embeddings_per_batch(self, audio_files, batch_size): - r"""Load preprocessed audio and return a audio embeddings per batch""" - return self._generic_batch_inference(self.get_audio_embeddings, audio_files, batch_size) - - def get_text_embeddings_per_batch(self, class_labels, batch_size): - r"""Load preprocessed text and return text embeddings per batch""" - return self._generic_batch_inference(self.get_text_embeddings, class_labels, batch_size) - - def classify_audio_files_per_batch(self, audio_files, class_labels, batch_size): - r"""Compute classification probabilities for each audio recording in a batch and each class label""" - return self._generic_batch_inference(self.classify_audio_files, audio_files, class_labels, batch_size) - -if __name__ == '__main__': - - # Load and initialize CLAP - weights_path = "/home1/huangrongjie/Project/Diffusion/LatentDiffusion/CLAP/CLAP_weights_2022.pth" - clap_model = CLAPWrapper(weights_path, use_cuda=False) - - y = ["A woman talks nearby as water 
pours", "Multiple clanging and clanking sounds"] - x = ['/home2/huangjiawei/data/audiocaps/train/Yr1nicOVtvkQ.wav', '/home2/huangjiawei/data/audiocaps/train/YUDGBjjwyaqE.wav'] - - # Computing text embeddings - text_embeddings = clap_model.get_text_embeddings(y) - - import ipdb - ipdb.set_trace() - - # Computing audio embeddings - audio_embeddings = clap_model.get_audio_embeddings(x, resample=True) - similarity = clap_model.compute_similarity(audio_embeddings, text_embeddings) - diff --git a/spaces/AIKey/facetofacechat/style.css b/spaces/AIKey/facetofacechat/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/AIKey/facetofacechat/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/AIatUIUC/CodeLATS/generators/model.py b/spaces/AIatUIUC/CodeLATS/generators/model.py deleted file mode 100644 index c2c6465b21c8a04847a8ab8535f33061516e7f9b..0000000000000000000000000000000000000000 --- a/spaces/AIatUIUC/CodeLATS/generators/model.py +++ /dev/null @@ -1,120 +0,0 @@ -from typing import List, Union, Optional, Literal -import dataclasses - -from tenacity import ( - retry, - stop_after_attempt, # type: ignore - wait_random_exponential, # type: ignore -) -import openai - -MessageRole = Literal["system", "user", "assistant"] - - -@dataclasses.dataclass() -class Message(): - role: MessageRole - content: str - - -def message_to_str(message: Message) -> str: - return f"{message.role}: {message.content}" - - -def messages_to_str(messages: List[Message]) -> str: - return "\n".join([message_to_str(message) for message in messages]) - - -@retry(wait=wait_random_exponential(min=1, max=60), stop=stop_after_attempt(6)) -def gpt_completion( - model: str, - prompt: str, - max_tokens: int = 1024, - stop_strs: Optional[List[str]] = None, - temperature: float = 0.0, - num_comps=1, -) -> Union[List[str], str]: - response = openai.Completion.create( - model=model, - prompt=prompt, - temperature=temperature, - max_tokens=max_tokens, - top_p=1, - frequency_penalty=0.0, - presence_penalty=0.0, - stop=stop_strs, - n=num_comps, - ) - if num_comps == 1: - return response.choices[0].text # type: ignore - - return [choice.text for choice in response.choices] # type: ignore - - -@retry(wait=wait_random_exponential(min=1, max=180), stop=stop_after_attempt(6)) -def gpt_chat( - model: str, - messages: List, - max_tokens: int = 1024, - temperature: float = 0.0, - num_comps=1, -) -> Union[List[str], str]: - try: - response = openai.ChatCompletion.create( - model=model, - messages=[dataclasses.asdict(message) for message in messages], - max_tokens=max_tokens, - temperature=temperature, - top_p=1, - frequency_penalty=0.0, - presence_penalty=0.0, - n=num_comps, - ) - if num_comps == 1: - return response.choices[0].message.content # type: ignore - return [choice.message.content for choice in response.choices] # type: ignore - - except Exception as e: - print(f"An error occurred while calling OpenAI: {e}") - raise - -class ModelBase(): - def __init__(self, name: str): - self.name = name - self.is_chat = False - - def __repr__(self) -> str: 
- return f'{self.name}' - - def generate_chat(self, messages: List[Message], max_tokens: int = 1024, temperature: float = 0.2, num_comps: int = 1) -> Union[List[str], str]: - raise NotImplementedError - - def generate(self, prompt: str, max_tokens: int = 1024, stop_strs: Optional[List[str]] = None, temperature: float = 0.0, num_comps=1) -> Union[List[str], str]: - raise NotImplementedError - - -class GPTChat(ModelBase): - def __init__(self, model_name: str): - self.name = model_name - self.is_chat = True - - def generate_chat(self, messages: List[Message], max_tokens: int = 1024, temperature: float = 0.2, num_comps: int = 1) -> Union[List[str], str]: - return gpt_chat(self.name, messages, max_tokens, temperature, num_comps) - - -class GPT4(GPTChat): - def __init__(self): - super().__init__("gpt-4") - - -class GPT35(GPTChat): - def __init__(self): - super().__init__("gpt-3.5-turbo") - - -class GPTDavinci(ModelBase): - def __init__(self, model_name: str): - self.name = model_name - - def generate(self, prompt: str, max_tokens: int = 1024, stop_strs: Optional[List[str]] = None, temperature: float = 0, num_comps=1) -> Union[List[str], str]: - return gpt_completion(self.name, prompt, max_tokens, stop_strs, temperature, num_comps) \ No newline at end of file diff --git a/spaces/ARTeLab/ARTeLab-SummIT/style.css b/spaces/ARTeLab/ARTeLab-SummIT/style.css deleted file mode 100644 index 466c5241e568ca1659988d0aff1954369db026c6..0000000000000000000000000000000000000000 --- a/spaces/ARTeLab/ARTeLab-SummIT/style.css +++ /dev/null @@ -1,38 +0,0 @@ -body { - background-color: #eee; -} -/*.fullScreenFrame > div {*/ -/* display: flex;*/ -/* justify-content: center;*/ -/*}*/ -/*.stButton>button {*/ -/* color: #4F8BF9;*/ -/* border-radius: 50%;*/ -/* height: 3em;*/ -/* width: 3em;*/ -/*}*/ - -.stTextInput>div>div>input { - color: #4F8BF9; -} -.stTextArea>div>div>input { - color: #4F8BF9; - min-height: 500px; -} - - -/*.st-cj {*/ -/* min-height: 500px;*/ -/* spellcheck="false";*/ -/* color: #4F8BF9;*/ -/*}*/ -/*.st-ch {*/ -/* min-height: 500px;*/ -/* spellcheck="false";*/ -/* color: #4F8BF9;*/ -/*}*/ -/*.st-bb {*/ -/* min-height: 500px;*/ -/* spellcheck="false";*/ -/* color: #4F8BF9;*/ -/*}*/ \ No newline at end of file diff --git a/spaces/Abhilashvj/planogram-compliance/utils/loggers/clearml/README.md b/spaces/Abhilashvj/planogram-compliance/utils/loggers/clearml/README.md deleted file mode 100644 index 3cf4c268583fc69df9ae3b58ea2566ed871a896c..0000000000000000000000000000000000000000 --- a/spaces/Abhilashvj/planogram-compliance/utils/loggers/clearml/README.md +++ /dev/null @@ -1,230 +0,0 @@ -# ClearML Integration - -Clear|MLClear|ML - -## About ClearML - -[ClearML](https://cutt.ly/yolov5-tutorial-clearml) is an [open-source](https://github.com/allegroai/clearml) toolbox designed to save you time ⏱️. - -🔨 Track every YOLOv5 training run in the experiment manager - -🔧 Version and easily access your custom training data with the integrated ClearML Data Versioning Tool - -🔦 Remotely train and monitor your YOLOv5 training runs using ClearML Agent - -🔬 Get the very best mAP using ClearML Hyperparameter Optimization - -🔭 Turn your newly trained YOLOv5 model into an API with just a few commands using ClearML Serving - -
-And so much more. It's up to you how many of these tools you want to use, you can stick to the experiment manager, or chain them all together into an impressive pipeline! -
-
- -![ClearML scalars dashboard](https://github.com/thepycoder/clearml_screenshots/raw/main/experiment_manager_with_compare.gif) - - -
-
- -## 🦾 Setting Things Up - -To keep track of your experiments and/or data, ClearML needs to communicate to a server. You have 2 options to get one: - -Either sign up for free to the [ClearML Hosted Service](https://cutt.ly/yolov5-tutorial-clearml) or you can set up your own server, see [here](https://clear.ml/docs/latest/docs/deploying_clearml/clearml_server). Even the server is open-source, so even if you're dealing with sensitive data, you should be good to go! - -1. Install the `clearml` python package: - - ```bash - pip install clearml - ``` - -1. Connect the ClearML SDK to the server by [creating credentials](https://app.clear.ml/settings/workspace-configuration) (go right top to Settings -> Workspace -> Create new credentials), then execute the command below and follow the instructions: - - ```bash - clearml-init - ``` - -That's it! You're done 😎 - -
- -## 🚀 Training YOLOv5 With ClearML - -To enable ClearML experiment tracking, simply install the ClearML pip package. - -```bash -pip install clearml>=1.2.0 -``` - -This will enable integration with the YOLOv5 training script. Every training run from now on, will be captured and stored by the ClearML experiment manager. - -If you want to change the `project_name` or `task_name`, use the `--project` and `--name` arguments of the `train.py` script, by default the project will be called `YOLOv5` and the task `Training`. -PLEASE NOTE: ClearML uses `/` as a delimter for subprojects, so be careful when using `/` in your project name! - -```bash -python train.py --img 640 --batch 16 --epochs 3 --data coco128.yaml --weights yolov5s.pt --cache -``` - -or with custom project and task name: -```bash -python train.py --project my_project --name my_training --img 640 --batch 16 --epochs 3 --data coco128.yaml --weights yolov5s.pt --cache -``` - -This will capture: -- Source code + uncommitted changes -- Installed packages -- (Hyper)parameters -- Model files (use `--save-period n` to save a checkpoint every n epochs) -- Console output -- Scalars (mAP_0.5, mAP_0.5:0.95, precision, recall, losses, learning rates, ...) -- General info such as machine details, runtime, creation date etc. -- All produced plots such as label correlogram and confusion matrix -- Images with bounding boxes per epoch -- Mosaic per epoch -- Validation images per epoch -- ... - -That's a lot right? 🤯 -Now, we can visualize all of this information in the ClearML UI to get an overview of our training progress. Add custom columns to the table view (such as e.g. mAP_0.5) so you can easily sort on the best performing model. Or select multiple experiments and directly compare them! - -There even more we can do with all of this information, like hyperparameter optimization and remote execution, so keep reading if you want to see how that works! - -
- -## 🔗 Dataset Version Management - -Versioning your data separately from your code is generally a good idea and makes it easy to aqcuire the latest version too. This repository supports supplying a dataset version ID and it will make sure to get the data if it's not there yet. Next to that, this workflow also saves the used dataset ID as part of the task parameters, so you will always know for sure which data was used in which experiment! - -![ClearML Dataset Interface](https://github.com/thepycoder/clearml_screenshots/raw/main/clearml_data.gif) - -### Prepare Your Dataset - -The YOLOv5 repository supports a number of different datasets by using yaml files containing their information. By default datasets are downloaded to the `../datasets` folder in relation to the repository root folder. So if you downloaded the `coco128` dataset using the link in the yaml or with the scripts provided by yolov5, you get this folder structure: - -``` -.. -|_ yolov5 -|_ datasets - |_ coco128 - |_ images - |_ labels - |_ LICENSE - |_ README.txt -``` -But this can be any dataset you wish. Feel free to use your own, as long as you keep to this folder structure. - -Next, ⚠️**copy the corresponding yaml file to the root of the dataset folder**⚠️. This yaml files contains the information ClearML will need to properly use the dataset. You can make this yourself too, of course, just follow the structure of the example yamls. - -Basically we need the following keys: `path`, `train`, `test`, `val`, `nc`, `names`. - -``` -.. -|_ yolov5 -|_ datasets - |_ coco128 - |_ images - |_ labels - |_ coco128.yaml # <---- HERE! - |_ LICENSE - |_ README.txt -``` - -### Upload Your Dataset - -To get this dataset into ClearML as a versionned dataset, go to the dataset root folder and run the following command: -```bash -cd coco128 -clearml-data sync --project YOLOv5 --name coco128 --folder . -``` - -The command `clearml-data sync` is actually a shorthand command. You could also run these commands one after the other: -```bash -# Optionally add --parent if you want to base -# this version on another dataset version, so no duplicate files are uploaded! -clearml-data create --name coco128 --project YOLOv5 -clearml-data add --files . -clearml-data close -``` - -### Run Training Using A ClearML Dataset - -Now that you have a ClearML dataset, you can very simply use it to train custom YOLOv5 🚀 models! - -```bash -python train.py --img 640 --batch 16 --epochs 3 --data clearml:// --weights yolov5s.pt --cache -``` - -
- -## 👀 Hyperparameter Optimization - -Now that we have our experiments and data versioned, it's time to take a look at what we can build on top! - -Using the code information, installed packages and environment details, the experiment itself is now **completely reproducible**. In fact, ClearML allows you to clone an experiment and even change its parameters. We can then just rerun it with these new parameters automatically, this is basically what HPO does! - -To **run hyperparameter optimization locally**, we've included a pre-made script for you. Just make sure a training task has been run at least once, so it is in the ClearML experiment manager, we will essentially clone it and change its hyperparameters. - -You'll need to fill in the ID of this `template task` in the script found at `utils/loggers/clearml/hpo.py` and then just run it :) You can change `task.execute_locally()` to `task.execute()` to put it in a ClearML queue and have a remote agent work on it instead. - -```bash -# To use optuna, install it first, otherwise you can change the optimizer to just be RandomSearch -pip install optuna -python utils/loggers/clearml/hpo.py -``` - -![HPO](https://github.com/thepycoder/clearml_screenshots/raw/main/hpo.png) - -## 🤯 Remote Execution (advanced) - -Running HPO locally is really handy, but what if we want to run our experiments on a remote machine instead? Maybe you have access to a very powerful GPU machine on-site or you have some budget to use cloud GPUs. -This is where the ClearML Agent comes into play. Check out what the agent can do here: - -- [YouTube video](https://youtu.be/MX3BrXnaULs) -- [Documentation](https://clear.ml/docs/latest/docs/clearml_agent) - -In short: every experiment tracked by the experiment manager contains enough information to reproduce it on a different machine (installed packages, uncommitted changes etc.). So a ClearML agent does just that: it listens to a queue for incoming tasks and when it finds one, it recreates the environment and runs it while still reporting scalars, plots etc. to the experiment manager. - -You can turn any machine (a cloud VM, a local GPU machine, your own laptop ... ) into a ClearML agent by simply running: -```bash -clearml-agent daemon --queue [--docker] -``` - -### Cloning, Editing And Enqueuing - -With our agent running, we can give it some work. Remember from the HPO section that we can clone a task and edit the hyperparameters? We can do that from the interface too! - -🪄 Clone the experiment by right clicking it - -🎯 Edit the hyperparameters to what you wish them to be - -⏳ Enqueue the task to any of the queues by right clicking it - -![Enqueue a task from the UI](https://github.com/thepycoder/clearml_screenshots/raw/main/enqueue.gif) - -### Executing A Task Remotely - -Now you can clone a task like we explained above, or simply mark your current script by adding `task.execute_remotely()` and on execution it will be put into a queue, for the agent to start working on! - -To run the YOLOv5 training script remotely, all you have to do is add this line to the training.py script after the clearml logger has been instatiated: -```python -# ... -# Loggers -data_dict = None -if RANK in {-1, 0}: - loggers = Loggers(save_dir, weights, opt, hyp, LOGGER) # loggers instance - if loggers.clearml: - loggers.clearml.task.execute_remotely(queue='my_queue') # <------ ADD THIS LINE - # Data_dict is either None is user did not choose for ClearML dataset or is filled in by ClearML - data_dict = loggers.clearml.data_dict -# ... 
-``` -When running the training script after this change, python will run the script up until that line, after which it will package the code and send it to the queue instead! - -### Autoscaling workers - -ClearML comes with autoscalers too! This tool will automatically spin up new remote machines in the cloud of your choice (AWS, GCP, Azure) and turn them into ClearML agents for you whenever there are experiments detected in the queue. Once the tasks are processed, the autoscaler will automatically shut down the remote machines and you stop paying! - -Check out the autoscalers getting started video below. - -[![Watch the video](https://img.youtube.com/vi/j4XVMAaUt3E/0.jpg)](https://youtu.be/j4XVMAaUt3E) diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/utils/trimSuffix.ts b/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/utils/trimSuffix.ts deleted file mode 100644 index 729107942ebaa2d7e1281dd77f8e52e8b135a5ad..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/utils/trimSuffix.ts +++ /dev/null @@ -1,6 +0,0 @@ -export function trimSuffix(input: string, end: string): string { - if (input.endsWith(end)) { - return input.slice(0, input.length - end.length); - } - return input; -} diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/routes/login/callback/updateUser.spec.ts b/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/routes/login/callback/updateUser.spec.ts deleted file mode 100644 index 336da17dd996d62cf0ea0e7cf24ca3249a3c9e05..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/routes/login/callback/updateUser.spec.ts +++ /dev/null @@ -1,143 +0,0 @@ -import { assert, it, describe, afterEach, vi, expect } from "vitest"; -import type { Cookies } from "@sveltejs/kit"; -import { collections } from "$lib/server/database"; -import { updateUser } from "./updateUser"; -import { DEFAULT_SETTINGS } from "$lib/types/Settings"; -import { defaultModel } from "$lib/server/models"; - -const userData = { - preferred_username: "new-username", - name: "name", - picture: "https://example.com/avatar.png", - sub: "1234567890", -}; - -const locals = { - userId: "1234567890", - sessionId: "1234567890", -}; - -// @ts-expect-error SvelteKit cookies dumb mock -const cookiesMock: Cookies = { - set: vi.fn(), -}; - -const insertRandomUser = async () => { - /*const res = await collections.users.insertOne({ - _id: new ObjectId(), - createdAt: new Date(), - updatedAt: new Date(), - username: "base-username", - name: userData.name, - avatarUrl: userData.picture, - hfUserId: userData.sub, - sessionId: locals.sessionId, - }); - - return res.insertedId;*/ -}; - -const insertRandomConversations = async (count: number) => { - /*const res = await collections.conversations.insertMany( - new Array(count).fill(0).map(() => ({ - _id: new ObjectId(), - title: "random title", - messages: [], - model: defaultModel.id, - createdAt: new Date(), - updatedAt: new Date(), - sessionId: locals.sessionId, - })) - ); - - return res.insertedIds;*/ -}; - -describe("login", () => { - it("should update user if existing", async () => { - /*await insertRandomUser(); - - await updateUser({ userData, locals, cookies: cookiesMock }); - - const existingUser = await collections.users.findOne({ hfUserId: userData.sub }); - - assert.equal(existingUser?.name, userData.name); - - expect(cookiesMock.set).toBeCalledTimes(1);*/ - }); - - it("should migrate pre-existing conversations for new user", async () => { - /*const insertedId = await insertRandomUser(); - - await 
insertRandomConversations(2); - - await updateUser({ userData, locals, cookies: cookiesMock }); - - const conversationCount = await collections.conversations.countDocuments({ - userId: insertedId, - sessionId: { $exists: false }, - }); - - assert.equal(conversationCount, 2); - - await collections.conversations.deleteMany({ userId: insertedId });*/ - }); - - it("should create default settings for new user", async () => { - /*await updateUser({ userData, locals, cookies: cookiesMock }); - - const user = await collections.users.findOne({ sessionId: locals.sessionId }); - - assert.exists(user); - - const settings = await collections.settings.findOne({ userId: user?._id }); - - expect(settings).toMatchObject({ - userId: user?._id, - updatedAt: expect.any(Date), - createdAt: expect.any(Date), - ethicsModalAcceptedAt: expect.any(Date), - ...DEFAULT_SETTINGS, - }); - - await collections.settings.deleteOne({ userId: user?._id });*/ - }); - - it("should migrate pre-existing settings for pre-existing user", async () => { - /*const { insertedId } = await collections.settings.insertOne({ - sessionId: locals.sessionId, - ethicsModalAcceptedAt: new Date(), - updatedAt: new Date(), - createdAt: new Date(), - ...DEFAULT_SETTINGS, - shareConversationsWithModelAuthors: false, - }); - - await updateUser({ userData, locals, cookies: cookiesMock }); - - const settings = await collections.settings.findOne({ - _id: insertedId, - sessionId: { $exists: false }, - }); - - assert.exists(settings); - - const user = await collections.users.findOne({ hfUserId: userData.sub }); - - expect(settings).toMatchObject({ - userId: user?._id, - updatedAt: expect.any(Date), - createdAt: expect.any(Date), - ethicsModalAcceptedAt: expect.any(Date), - ...DEFAULT_SETTINGS, - shareConversationsWithModelAuthors: false, - }); - - await collections.settings.deleteOne({ userId: user?._id });*/ - }); -}); - -afterEach(async () => { - /*await collections.users.deleteMany({ hfUserId: userData.sub }); - vi.clearAllMocks();*/ -}); diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/needs_auth/__init__.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/needs_auth/__init__.py deleted file mode 100644 index 815194c4c76b97d56aa7d5356cd2324ea6ab5093..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/needs_auth/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -from .Bard import Bard -from .Raycast import Raycast -from .Theb import Theb -from .HuggingChat import HuggingChat -from .OpenaiChat import OpenaiChat -from .OpenAssistant import OpenAssistant \ No newline at end of file diff --git a/spaces/Aditya9790/yolo7-object-tracking/scripts/get_coco.sh b/spaces/Aditya9790/yolo7-object-tracking/scripts/get_coco.sh deleted file mode 100644 index 524f8dd9e2cae992a4047476520a7e4e1402e6de..0000000000000000000000000000000000000000 --- a/spaces/Aditya9790/yolo7-object-tracking/scripts/get_coco.sh +++ /dev/null @@ -1,22 +0,0 @@ -#!/bin/bash -# COCO 2017 dataset http://cocodataset.org -# Download command: bash ./scripts/get_coco.sh - -# Download/unzip labels -d='./' # unzip directory -url=https://github.com/ultralytics/yolov5/releases/download/v1.0/ -f='coco2017labels-segments.zip' # or 'coco2017labels.zip', 68 MB -echo 'Downloading' $url$f ' ...' 
-curl -L $url$f -o $f && unzip -q $f -d $d && rm $f & # download, unzip, remove in background - -# Download/unzip images -d='./coco/images' # unzip directory -url=http://images.cocodataset.org/zips/ -f1='train2017.zip' # 19G, 118k images -f2='val2017.zip' # 1G, 5k images -f3='test2017.zip' # 7G, 41k images (optional) -for f in $f1 $f2 $f3; do - echo 'Downloading' $url$f '...' - curl -L $url$f -o $f && unzip -q $f -d $d && rm $f & # download, unzip, remove in background -done -wait # finish background tasks diff --git a/spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/decision_maker/horizontal_tool.py b/spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/decision_maker/horizontal_tool.py deleted file mode 100644 index 5cea85eab38b8cea3fcb05c509a44f61d1040c86..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/decision_maker/horizontal_tool.py +++ /dev/null @@ -1,89 +0,0 @@ -from __future__ import annotations -import json -import asyncio -from copy import deepcopy -from colorama import Fore -from itertools import cycle - -from typing import TYPE_CHECKING, List - -from . import decision_maker_registry -from .base import BaseDecisionMaker -from agentverse.logging import logger -from agentverse.message import SolverMessage, Message - -if TYPE_CHECKING: - from agentverse.agents.base import BaseAgent - from agentverse.message import CriticMessage - - -@decision_maker_registry.register("horizontal-tool") -class HorizontalToolDecisionMaker(BaseDecisionMaker): - """ - Discuss in a horizontal manner. - """ - - name: str = "horizontal_tool" - tools: List[dict] = [] - tool_names: List[str] = [] - tool_config: str = None - - def __init__(self, *args, **kwargs): - assert kwargs.get("tool_config", None) is not None - with open(kwargs.get("tool_config"), "r") as f: - tools_dict = json.load(f) - tools = tools_dict["tools_json"] - tool_names = [t["name"] for t in tools] - super().__init__(tools=tools, tool_names=tool_names, *args, **kwargs) - - # def step( - async def astep( - self, - agents: List[BaseAgent], - task_description: str, - previous_plan: str = "No solution yet.", - advice: str = "No advice yet.", - **kwargs, - ) -> List[str]: - agents[0].memory.reset() - if advice != "No advice yet.": - self.broadcast_messages( - agents[1:], [Message(content=advice, sender="Evaluator")] - ) - all_roles = "\n".join( - [f"{agent.name}: {agent.role_description}" for agent in agents[1:]] - ) - end_flag = False - discussion_cnt = 0 - for agent in cycle(agents[1:]): - discussion_cnt += 1 - review: CriticMessage = await agent.astep( - previous_plan, advice, task_description, all_roles - ) - if review.content.strip().endswith("[END]"): - review.content = review.content.strip().replace("[END]", "") - if discussion_cnt >= len(agents) - 1: - # Force all the agents to speak at least once. 
- end_flag = True - if review.content != "": - self.broadcast_messages(agents, [review]) - - logger.info("", "Reviews:", Fore.YELLOW) - logger.info( - "", - f"[{review.sender}]: {review.content}", - Fore.YELLOW, - ) - if end_flag: - break - - result: SolverMessage = agents[0].step(previous_plan, advice, task_description) - result_list = [] - for res in result.content: - res_tmp = deepcopy(result) - res_tmp.content = " - ".join(res) - result_list.append(res_tmp) - return result_list - - def reset(self): - pass diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/menu/methods/DelayCallMethods.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/menu/methods/DelayCallMethods.js deleted file mode 100644 index be82ac0bcbcd804a2099a8a0da5e32944dc42601..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/menu/methods/DelayCallMethods.js +++ /dev/null @@ -1,17 +0,0 @@ -import PostUpdateDelayCall from '../../../../plugins/utils/time/PostUpdateDelayCall.js'; - -export default { - delayCall(delay, callback, scope) { - // Invoke callback under scene's 'postupdate' event - this.timer = PostUpdateDelayCall(this, delay, callback, scope); - return this; - }, - - removeDelayCall() { - if (this.timer) { - this.timer.remove(false); - this.timer = undefined; - } - return this; - } -} \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/overlapsizer/GetChildrenHeight.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/overlapsizer/GetChildrenHeight.js deleted file mode 100644 index bcc82329d2e14d50c1e4ca16f55d9f63f1bfa0ac..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/overlapsizer/GetChildrenHeight.js +++ /dev/null @@ -1,24 +0,0 @@ -import { GetDisplayHeight } from '../../../plugins/utils/size/GetDisplaySize.js'; - -var GetChildrenHeight = function () { - if (this.rexSizer.hidden) { - return 0; - } - - var result = 0; - var children = this.sizerChildren; - var child, padding, childHeight; - for (var key in children) { - child = children[key]; - childHeight = (child.isRexSizer) ? - Math.max(child.minHeight, child.childrenHeight) : - (child.minHeight !== undefined) ? child.minHeight : GetDisplayHeight(child); - - padding = child.rexSizer.padding; - childHeight += (padding.top + padding.bottom); - result = Math.max(childHeight, result); - } - return result + this.space.top + this.space.bottom; -} - -export default GetChildrenHeight; \ No newline at end of file diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/alt_diffusion.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/alt_diffusion.md deleted file mode 100644 index ed8db52f9a51198260c4f0d1927b29f7e3913f8a..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/alt_diffusion.md +++ /dev/null @@ -1,47 +0,0 @@ - - -# AltDiffusion - -AltDiffusion was proposed in [AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities](https://huggingface.co/papers/2211.06679) by Zhongzhi Chen, Guang Liu, Bo-Wen Zhang, Fulong Ye, Qinghong Yang, Ledell Wu. - -The abstract from the paper is: - -*In this work, we present a conceptually simple and effective method to train a strong bilingual multimodal representation model. 
Starting from the pretrained multimodal representation model CLIP released by OpenAI, we switched its text encoder with a pretrained multilingual text encoder XLM-R, and aligned both languages and image representations by a two-stage training schema consisting of teacher learning and contrastive learning. We validate our method through evaluations of a wide range of tasks. We set new state-of-the-art performances on a bunch of tasks including ImageNet-CN, Flickr30k-CN, and COCO-CN. Further, we obtain very close performances with CLIP on almost all tasks, suggesting that one can simply alter the text encoder in CLIP for extended capabilities such as multilingual understanding.* - -## Tips - -`AltDiffusion` is conceptually the same as [Stable Diffusion](./stable_diffusion/overview). - - - -Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines. - - - -## AltDiffusionPipeline - -[[autodoc]] AltDiffusionPipeline - - all - - __call__ - -## AltDiffusionImg2ImgPipeline - -[[autodoc]] AltDiffusionImg2ImgPipeline - - all - - __call__ - -## AltDiffusionPipelineOutput - -[[autodoc]] pipelines.alt_diffusion.AltDiffusionPipelineOutput - - all - - __call__ \ No newline at end of file diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_2.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_2.md deleted file mode 100644 index d44e9f507830e8c8afa026a404cfb0f093b8edb9..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_2.md +++ /dev/null @@ -1,139 +0,0 @@ - - -# Stable Diffusion 2 - -Stable Diffusion 2 is a text-to-image _latent diffusion_ model built upon the work of the original [Stable Diffusion](https://stability.ai/blog/stable-diffusion-public-release), and it was led by Robin Rombach and Katherine Crowson from [Stability AI](https://stability.ai/) and [LAION](https://laion.ai/). - -*The Stable Diffusion 2.0 release includes robust text-to-image models trained using a brand new text encoder (OpenCLIP), developed by LAION with support from Stability AI, which greatly improves the quality of the generated images compared to earlier V1 releases. The text-to-image models in this release can generate images with default resolutions of both 512x512 pixels and 768x768 pixels. -These models are trained on an aesthetic subset of the [LAION-5B dataset](https://laion.ai/blog/laion-5b/) created by the DeepFloyd team at Stability AI, which is then further filtered to remove adult content using [LAION’s NSFW filter](https://openreview.net/forum?id=M3Y74vmsMcY).* - -For more details about how Stable Diffusion 2 works and how it differs from the original Stable Diffusion, please refer to the official [announcement post](https://stability.ai/blog/stable-diffusion-v2-release). - -The architecture of Stable Diffusion 2 is more or less identical to the original [Stable Diffusion model](./text2img), so check out its API documentation for how to use Stable Diffusion 2. We recommend using the [`DPMSolverMultistepScheduler`] as it's currently the fastest scheduler.
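-As a minimal sketch of that recommendation (reusing the text-to-image checkpoint listed in the table below), the scheduler swap is the same one-liner for every pipeline in this guide: - -```py -from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler - -pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-base") -# rebuild the scheduler from the current scheduler's config so the noise-schedule settings carry over -pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) -``` -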
- -Stable Diffusion 2 is available for tasks like text-to-image, inpainting, super-resolution, and depth-to-image: - -| Task | Repository | -|-------------------------|---------------------------------------------------------------------------------------------------------------| -| text-to-image (512x512) | [stabilityai/stable-diffusion-2-base](https://huggingface.co/stabilityai/stable-diffusion-2-base) | -| text-to-image (768x768) | [stabilityai/stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2) | -| inpainting | [stabilityai/stable-diffusion-2-inpainting](https://huggingface.co/stabilityai/stable-diffusion-2-inpainting) | -| super-resolution | [stable-diffusion-x4-upscaler](https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler) | -| depth-to-image | [stabilityai/stable-diffusion-2-depth](https://huggingface.co/stabilityai/stable-diffusion-2-depth) | - -Here are some examples for how to use Stable Diffusion 2 for each task: - - - -Make sure to check out the Stable Diffusion [Tips](overview#tips) section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! - -If you're interested in using one of the official checkpoints for a task, explore the [CompVis](https://huggingface.co/CompVis), [Runway](https://huggingface.co/runwayml), and [Stability AI](https://huggingface.co/stabilityai) Hub organizations! - - - -## Text-to-image - -```py -from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler -import torch - -repo_id = "stabilityai/stable-diffusion-2-base" -pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16") - -pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) -pipe = pipe.to("cuda") - -prompt = "High quality photo of an astronaut riding a horse in space" -image = pipe(prompt, num_inference_steps=25).images[0] -image.save("astronaut.png") -``` - -## Inpainting - -```py -import PIL -import requests -import torch -from io import BytesIO - -from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler - - -def download_image(url): - response = requests.get(url) - return PIL.Image.open(BytesIO(response.content)).convert("RGB") - - -img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" -mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" - -init_image = download_image(img_url).resize((512, 512)) -mask_image = download_image(mask_url).resize((512, 512)) - -repo_id = "stabilityai/stable-diffusion-2-inpainting" -pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16") - -pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) -pipe = pipe.to("cuda") - -prompt = "Face of a yellow cat, high resolution, sitting on a park bench" -image = pipe(prompt=prompt, image=init_image, mask_image=mask_image, num_inference_steps=25).images[0] - -image.save("yellow_cat.png") -``` - -## Super-resolution - -```py -import requests -from PIL import Image -from io import BytesIO -from diffusers import StableDiffusionUpscalePipeline -import torch - -# load model and scheduler -model_id = "stabilityai/stable-diffusion-x4-upscaler" -pipeline = StableDiffusionUpscalePipeline.from_pretrained(model_id, torch_dtype=torch.float16) -pipeline = pipeline.to("cuda") - -# let's download an image -url = 
"https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale/low_res_cat.png" -response = requests.get(url) -low_res_img = Image.open(BytesIO(response.content)).convert("RGB") -low_res_img = low_res_img.resize((128, 128)) -prompt = "a white cat" -upscaled_image = pipeline(prompt=prompt, image=low_res_img).images[0] -upscaled_image.save("upsampled_cat.png") -``` - -## Depth-to-image - -```py -import torch -import requests -from PIL import Image - -from diffusers import StableDiffusionDepth2ImgPipeline - -pipe = StableDiffusionDepth2ImgPipeline.from_pretrained( - "stabilityai/stable-diffusion-2-depth", - torch_dtype=torch.float16, -).to("cuda") - - -url = "http://images.cocodataset.org/val2017/000000039769.jpg" -init_image = Image.open(requests.get(url, stream=True).raw) -prompt = "two tigers" -n_propmt = "bad, deformed, ugly, bad anotomy" -image = pipe(prompt=prompt, image=init_image, negative_prompt=n_propmt, strength=0.7).images[0] -``` \ No newline at end of file diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/shap_e/pipeline_shap_e.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/shap_e/pipeline_shap_e.py deleted file mode 100644 index c02a3117af6e44680edcfdfbb98d5e97022f57d6..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/shap_e/pipeline_shap_e.py +++ /dev/null @@ -1,363 +0,0 @@ -# Copyright 2023 Open AI and The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import math -from dataclasses import dataclass -from typing import List, Optional, Union - -import numpy as np -import PIL -import torch -from transformers import CLIPTextModelWithProjection, CLIPTokenizer - -from ...models import PriorTransformer -from ...schedulers import HeunDiscreteScheduler -from ...utils import ( - BaseOutput, - is_accelerate_available, - is_accelerate_version, - logging, - randn_tensor, - replace_example_docstring, -) -from ..pipeline_utils import DiffusionPipeline -from .renderer import ShapERenderer - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - -EXAMPLE_DOC_STRING = """ - Examples: - ```py - >>> import torch - >>> from diffusers import DiffusionPipeline - >>> from diffusers.utils import export_to_gif - - >>> device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - - >>> repo = "openai/shap-e" - >>> pipe = DiffusionPipeline.from_pretrained(repo, torch_dtype=torch.float16) - >>> pipe = pipe.to(device) - - >>> guidance_scale = 15.0 - >>> prompt = "a shark" - - >>> images = pipe( - ... prompt, - ... guidance_scale=guidance_scale, - ... num_inference_steps=64, - ... frame_size=256, - ... ).images - - >>> gif_path = export_to_gif(images[0], "shark_3d.gif") - ``` -""" - - -@dataclass -class ShapEPipelineOutput(BaseOutput): - """ - Output class for [`ShapEPipeline`] and [`ShapEImg2ImgPipeline`]. 
- - Args: - images (`torch.FloatTensor`) - A list of images for 3D rendering. - """ - - images: Union[List[List[PIL.Image.Image]], List[List[np.ndarray]]] - - -class ShapEPipeline(DiffusionPipeline): - """ - Pipeline for generating a latent representation of a 3D asset and rendering it with the NeRF method, using Shap-E. - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods - implemented for all pipelines (downloading, saving, running on a particular device, etc.). - - Args: - prior ([`PriorTransformer`]): - The canonical unCLIP prior to approximate the image embedding from the text embedding. - text_encoder ([`CLIPTextModelWithProjection`]): - Frozen text-encoder. - tokenizer (`CLIPTokenizer`): - A [`~transformers.CLIPTokenizer`] to tokenize text. - scheduler ([`HeunDiscreteScheduler`]): - A scheduler to be used in combination with the `prior` to generate the image embedding. - shap_e_renderer ([`ShapERenderer`]): - The Shap-E renderer projects the generated latents into parameters of an MLP that's used to create 3D objects - with the NeRF rendering method. - """ - - def __init__( - self, - prior: PriorTransformer, - text_encoder: CLIPTextModelWithProjection, - tokenizer: CLIPTokenizer, - scheduler: HeunDiscreteScheduler, - shap_e_renderer: ShapERenderer, - ): - super().__init__() - - self.register_modules( - prior=prior, - text_encoder=text_encoder, - tokenizer=tokenizer, - scheduler=scheduler, - shap_e_renderer=shap_e_renderer, - ) - - # Copied from diffusers.pipelines.unclip.pipeline_unclip.UnCLIPPipeline.prepare_latents - def prepare_latents(self, shape, dtype, device, generator, latents, scheduler): - if latents is None: - latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype) - else: - if latents.shape != shape: - raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {shape}") - latents = latents.to(device) - - latents = latents * scheduler.init_noise_sigma - return latents - - def enable_model_cpu_offload(self, gpu_id=0): - r""" - Offload all models to CPU to reduce memory usage with a low impact on performance. Moves one whole model at a - time to the GPU when its `forward` method is called, and the model remains on the GPU until the next model runs. - Memory savings are lower than using `enable_sequential_cpu_offload`, but performance is much better due to the - iterative execution of the `prior`. - """ - if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"): - from accelerate import cpu_offload_with_hook - else: - raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.") - - device = torch.device(f"cuda:{gpu_id}") - - if self.device.type != "cpu": - self.to("cpu", silence_dtype_warnings=True) - torch.cuda.empty_cache() # otherwise we don't see the memory savings (but they probably exist) - - hook = None - for cpu_offloaded_model in [self.text_encoder, self.prior, self.shap_e_renderer]: - _, hook = cpu_offload_with_hook(cpu_offloaded_model, device, prev_module_hook=hook) - - # Not every Shap-E pipeline defines a `safety_checker`, so guard the attribute lookup. - if getattr(self, "safety_checker", None) is not None: - _, hook = cpu_offload_with_hook(self.safety_checker, device, prev_module_hook=hook) - - # We'll offload the last model manually.
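- # `cpu_offload_with_hook` chains each module to its predecessor via `prev_module_hook`, so earlier modules - # are moved back to the CPU automatically as execution advances; only the last hook needs to be kept so that - # `__call__` can offload the final module once generation finishes.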
- self.final_offload_hook = hook - - def _encode_prompt( - self, - prompt, - device, - num_images_per_prompt, - do_classifier_free_guidance, - ): - # YiYi Notes: set pad_token_id to be 0, not sure why I can't set in the config file - self.tokenizer.pad_token_id = 0 - # get prompt text embeddings - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - text_input_ids = text_inputs.input_ids - untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids - - if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(text_input_ids, untruncated_ids): - removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {self.tokenizer.model_max_length} tokens: {removed_text}" - ) - - text_encoder_output = self.text_encoder(text_input_ids.to(device)) - prompt_embeds = text_encoder_output.text_embeds - - prompt_embeds = prompt_embeds.repeat_interleave(num_images_per_prompt, dim=0) - # Shap-E normalizes the prompt_embeds to unit L2 norm here and rescales them again below - prompt_embeds = prompt_embeds / torch.linalg.norm(prompt_embeds, dim=-1, keepdim=True) - - if do_classifier_free_guidance: - negative_prompt_embeds = torch.zeros_like(prompt_embeds) - - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds]) - - # Rescale the features to have unit variance: after the L2-normalization above, scaling by - # sqrt(dim) brings each component back to roughly unit scale - prompt_embeds = math.sqrt(prompt_embeds.shape[1]) * prompt_embeds - - return prompt_embeds - - @torch.no_grad() - @replace_example_docstring(EXAMPLE_DOC_STRING) - def __call__( - self, - prompt: str, - num_images_per_prompt: int = 1, - num_inference_steps: int = 25, - generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, - latents: Optional[torch.FloatTensor] = None, - guidance_scale: float = 4.0, - frame_size: int = 64, - output_type: Optional[str] = "pil", # pil, np, latent, mesh - return_dict: bool = True, - ): - """ - The call function to the pipeline for generation. - - Args: - prompt (`str` or `List[str]`): - The prompt or prompts to guide the image generation. - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - num_inference_steps (`int`, *optional*, defaults to 25): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - generator (`torch.Generator` or `List[torch.Generator]`, *optional*): - A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make - generation deterministic. - latents (`torch.FloatTensor`, *optional*): - Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image - generation. Can be used to tweak the same generation with different prompts. If not provided, a latents - tensor is generated by sampling using the supplied random `generator`. - guidance_scale (`float`, *optional*, defaults to 4.0): - A higher guidance scale value encourages the model to generate images closely linked to the text - `prompt` at the expense of lower image quality.
Guidance scale is enabled when `guidance_scale > 1`. - frame_size (`int`, *optional*, defaults to 64): - The width and height of each image frame of the generated 3D output. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generated image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"` - (`np.array`), `"latent"` (`torch.Tensor`), and mesh ([`MeshDecoderOutput`]). - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.shap_e.pipeline_shap_e.ShapEPipelineOutput`] instead of a plain - tuple. - - Examples: - - Returns: - [`~pipelines.shap_e.pipeline_shap_e.ShapEPipelineOutput`] or `tuple`: - If `return_dict` is `True`, [`~pipelines.shap_e.pipeline_shap_e.ShapEPipelineOutput`] is returned, - otherwise a `tuple` is returned where the first element is a list with the generated images. - """ - - if isinstance(prompt, str): - batch_size = 1 - elif isinstance(prompt, list): - batch_size = len(prompt) - else: - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - device = self._execution_device - - batch_size = batch_size * num_images_per_prompt - - do_classifier_free_guidance = guidance_scale > 1.0 - prompt_embeds = self._encode_prompt(prompt, device, num_images_per_prompt, do_classifier_free_guidance) - - # prior - - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps = self.scheduler.timesteps - - num_embeddings = self.prior.config.num_embeddings - embedding_dim = self.prior.config.embedding_dim - - latents = self.prepare_latents( - (batch_size, num_embeddings * embedding_dim), - prompt_embeds.dtype, - device, - generator, - latents, - self.scheduler, - ) - - # YiYi notes: for testing only to match ldm, we can directly create a latents with desired shape: batch_size, num_embeddings, embedding_dim - latents = latents.reshape(latents.shape[0], num_embeddings, embedding_dim) - - for i, t in enumerate(self.progress_bar(timesteps)): - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents - scaled_model_input = self.scheduler.scale_model_input(latent_model_input, t) - - noise_pred = self.prior( - scaled_model_input, - timestep=t, - proj_embedding=prompt_embeds, - ).predicted_image_embedding - - # remove the variance - noise_pred, _ = noise_pred.split( - scaled_model_input.shape[2], dim=2 - ) # batch_size, num_embeddings, embedding_dim - - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred - noise_pred_uncond) - - latents = self.scheduler.step( - noise_pred, - timestep=t, - sample=latents, - ).prev_sample - - if output_type not in ["np", "pil", "latent", "mesh"]: - raise ValueError( - f"Only the output types `pil`, `np`, `latent` and `mesh` are supported, not output_type={output_type}" - ) - - if output_type == "latent": - return ShapEPipelineOutput(images=latents) - - images = [] - if output_type == "mesh": - for i, latent in enumerate(latents): - mesh = self.shap_e_renderer.decode_to_mesh( - latent[None, :], - device, - ) - images.append(mesh) - - else: - # np, pil - for i, latent in enumerate(latents): - image = self.shap_e_renderer.decode_to_image( - latent[None, :], - device, - size=frame_size, - ) - images.append(image) - - images = torch.stack(images) - - images = images.cpu().numpy() - - if output_type == "pil": - images
= [self.numpy_to_pil(image) for image in images] - - # Offload last model to CPU - if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None: - self.final_offload_hook.offload() - - if not return_dict: - return (images,) - - return ShapEPipelineOutput(images=images) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/spectrogram_diffusion/continous_encoder.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/spectrogram_diffusion/continous_encoder.py deleted file mode 100644 index 556136d4023df32e4df2477523463829a0722db4..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/spectrogram_diffusion/continous_encoder.py +++ /dev/null @@ -1,92 +0,0 @@ -# Copyright 2022 The Music Spectrogram Diffusion Authors. -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import torch -import torch.nn as nn -from transformers.modeling_utils import ModuleUtilsMixin -from transformers.models.t5.modeling_t5 import ( - T5Block, - T5Config, - T5LayerNorm, -) - -from ...configuration_utils import ConfigMixin, register_to_config -from ...models import ModelMixin - - -class SpectrogramContEncoder(ModelMixin, ConfigMixin, ModuleUtilsMixin): - @register_to_config - def __init__( - self, - input_dims: int, - targets_context_length: int, - d_model: int, - dropout_rate: float, - num_layers: int, - num_heads: int, - d_kv: int, - d_ff: int, - feed_forward_proj: str, - is_decoder: bool = False, - ): - super().__init__() - - self.input_proj = nn.Linear(input_dims, d_model, bias=False) - - self.position_encoding = nn.Embedding(targets_context_length, d_model) - self.position_encoding.weight.requires_grad = False - - self.dropout_pre = nn.Dropout(p=dropout_rate) - - t5config = T5Config( - d_model=d_model, - num_heads=num_heads, - d_kv=d_kv, - d_ff=d_ff, - feed_forward_proj=feed_forward_proj, - dropout_rate=dropout_rate, - is_decoder=is_decoder, - is_encoder_decoder=False, - ) - self.encoders = nn.ModuleList() - for lyr_num in range(num_layers): - lyr = T5Block(t5config) - self.encoders.append(lyr) - - self.layer_norm = T5LayerNorm(d_model) - self.dropout_post = nn.Dropout(p=dropout_rate) - - def forward(self, encoder_inputs, encoder_inputs_mask): - x = self.input_proj(encoder_inputs) - - # terminal relative positional encodings - max_positions = encoder_inputs.shape[1] - input_positions = torch.arange(max_positions, device=encoder_inputs.device) - - seq_lens = encoder_inputs_mask.sum(-1) - input_positions = torch.roll(input_positions.unsqueeze(0), tuple(seq_lens.tolist()), dims=0) - x += self.position_encoding(input_positions) - - x = self.dropout_pre(x) - - # inverted the attention mask - input_shape = encoder_inputs.size() - extended_attention_mask = self.get_extended_attention_mask(encoder_inputs_mask, input_shape) - - for lyr in self.encoders: - x = lyr(x, extended_attention_mask)[0] - x = 
self.layer_norm(x) - - return self.dropout_post(x), encoder_inputs_mask diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r101-d8_769x769_80k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r101-d8_769x769_80k_cityscapes.py deleted file mode 100644 index 3503c76935e294c881130b309999d32f13df8839..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r101-d8_769x769_80k_cityscapes.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './fcn_r50-d8_769x769_80k_cityscapes.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/AnimalEquality/chatbot/_proc/_docs/site_libs/bootstrap/bootstrap.min.js b/spaces/AnimalEquality/chatbot/_proc/_docs/site_libs/bootstrap/bootstrap.min.js deleted file mode 100644 index cc0a25561dfb616832227c9e2b2c1b1bbe69bc05..0000000000000000000000000000000000000000 --- a/spaces/AnimalEquality/chatbot/_proc/_docs/site_libs/bootstrap/bootstrap.min.js +++ /dev/null @@ -1,7 +0,0 @@ -/*! - * Bootstrap v5.1.3 (https://getbootstrap.com/) - * Copyright 2011-2021 The Bootstrap Authors (https://github.com/twbs/bootstrap/graphs/contributors) - * Licensed under MIT (https://github.com/twbs/bootstrap/blob/main/LICENSE) - */ -!function(t,e){"object"==typeof exports&&"undefined"!=typeof module?module.exports=e():"function"==typeof define&&define.amd?define(e):(t="undefined"!=typeof globalThis?globalThis:t||self).bootstrap=e()}(this,(function(){"use strict";const t="transitionend",e=t=>{let e=t.getAttribute("data-bs-target");if(!e||"#"===e){let i=t.getAttribute("href");if(!i||!i.includes("#")&&!i.startsWith("."))return null;i.includes("#")&&!i.startsWith("#")&&(i=`#${i.split("#")[1]}`),e=i&&"#"!==i?i.trim():null}return e},i=t=>{const i=e(t);return i&&document.querySelector(i)?i:null},n=t=>{const i=e(t);return i?document.querySelector(i):null},s=e=>{e.dispatchEvent(new Event(t))},o=t=>!(!t||"object"!=typeof t)&&(void 0!==t.jquery&&(t=t[0]),void 0!==t.nodeType),r=t=>o(t)?t.jquery?t[0]:t:"string"==typeof t&&t.length>0?document.querySelector(t):null,a=(t,e,i)=>{Object.keys(i).forEach((n=>{const s=i[n],r=e[n],a=r&&o(r)?"element":null==(l=r)?`${l}`:{}.toString.call(l).match(/\s([a-z]+)/i)[1].toLowerCase();var l;if(!new RegExp(s).test(a))throw new TypeError(`${t.toUpperCase()}: Option "${n}" provided type "${a}" but expected type "${s}".`)}))},l=t=>!(!o(t)||0===t.getClientRects().length)&&"visible"===getComputedStyle(t).getPropertyValue("visibility"),c=t=>!t||t.nodeType!==Node.ELEMENT_NODE||!!t.classList.contains("disabled")||(void 0!==t.disabled?t.disabled:t.hasAttribute("disabled")&&"false"!==t.getAttribute("disabled")),h=t=>{if(!document.documentElement.attachShadow)return null;if("function"==typeof t.getRootNode){const e=t.getRootNode();return e instanceof ShadowRoot?e:null}return t instanceof ShadowRoot?t:t.parentNode?h(t.parentNode):null},d=()=>{},u=t=>{t.offsetHeight},f=()=>{const{jQuery:t}=window;return t&&!document.body.hasAttribute("data-bs-no-jquery")?t:null},p=[],m=()=>"rtl"===document.documentElement.dir,g=t=>{var e;e=()=>{const e=f();if(e){const i=t.NAME,n=e.fn[i];e.fn[i]=t.jQueryInterface,e.fn[i].Constructor=t,e.fn[i].noConflict=()=>(e.fn[i]=n,t.jQueryInterface)}},"loading"===document.readyState?(p.length||document.addEventListener("DOMContentLoaded",(()=>{p.forEach((t=>t()))})),p.push(e)):e()},_=t=>{"function"==typeof t&&t()},b=(e,i,n=!0)=>{if(!n)return void _(e);const o=(t=>{if(!t)return 
0;let{transitionDuration:e,transitionDelay:i}=window.getComputedStyle(t);const n=Number.parseFloat(e),s=Number.parseFloat(i);return n||s?(e=e.split(",")[0],i=i.split(",")[0],1e3*(Number.parseFloat(e)+Number.parseFloat(i))):0})(i)+5;let r=!1;const a=({target:n})=>{n===i&&(r=!0,i.removeEventListener(t,a),_(e))};i.addEventListener(t,a),setTimeout((()=>{r||s(i)}),o)},v=(t,e,i,n)=>{let s=t.indexOf(e);if(-1===s)return t[!i&&n?t.length-1:0];const o=t.length;return s+=i?1:-1,n&&(s=(s+o)%o),t[Math.max(0,Math.min(s,o-1))]},y=/[^.]*(?=\..*)\.|.*/,w=/\..*/,E=/::\d+$/,A={};let T=1;const O={mouseenter:"mouseover",mouseleave:"mouseout"},C=/^(mouseenter|mouseleave)/i,k=new Set(["click","dblclick","mouseup","mousedown","contextmenu","mousewheel","DOMMouseScroll","mouseover","mouseout","mousemove","selectstart","selectend","keydown","keypress","keyup","orientationchange","touchstart","touchmove","touchend","touchcancel","pointerdown","pointermove","pointerup","pointerleave","pointercancel","gesturestart","gesturechange","gestureend","focus","blur","change","reset","select","submit","focusin","focusout","load","unload","beforeunload","resize","move","DOMContentLoaded","readystatechange","error","abort","scroll"]);function L(t,e){return e&&`${e}::${T++}`||t.uidEvent||T++}function x(t){const e=L(t);return t.uidEvent=e,A[e]=A[e]||{},A[e]}function D(t,e,i=null){const n=Object.keys(t);for(let s=0,o=n.length;sfunction(e){if(!e.relatedTarget||e.relatedTarget!==e.delegateTarget&&!e.delegateTarget.contains(e.relatedTarget))return t.call(this,e)};n?n=t(n):i=t(i)}const[o,r,a]=S(e,i,n),l=x(t),c=l[a]||(l[a]={}),h=D(c,r,o?i:null);if(h)return void(h.oneOff=h.oneOff&&s);const d=L(r,e.replace(y,"")),u=o?function(t,e,i){return function n(s){const o=t.querySelectorAll(e);for(let{target:r}=s;r&&r!==this;r=r.parentNode)for(let a=o.length;a--;)if(o[a]===r)return s.delegateTarget=r,n.oneOff&&j.off(t,s.type,e,i),i.apply(r,[s]);return null}}(t,i,n):function(t,e){return function i(n){return n.delegateTarget=t,i.oneOff&&j.off(t,n.type,e),e.apply(t,[n])}}(t,i);u.delegationSelector=o?i:null,u.originalHandler=r,u.oneOff=s,u.uidEvent=d,c[d]=u,t.addEventListener(a,u,o)}function I(t,e,i,n,s){const o=D(e[i],n,s);o&&(t.removeEventListener(i,o,Boolean(s)),delete e[i][o.uidEvent])}function P(t){return t=t.replace(w,""),O[t]||t}const j={on(t,e,i,n){N(t,e,i,n,!1)},one(t,e,i,n){N(t,e,i,n,!0)},off(t,e,i,n){if("string"!=typeof e||!t)return;const[s,o,r]=S(e,i,n),a=r!==e,l=x(t),c=e.startsWith(".");if(void 0!==o){if(!l||!l[r])return;return void I(t,l,r,o,s?i:null)}c&&Object.keys(l).forEach((i=>{!function(t,e,i,n){const s=e[i]||{};Object.keys(s).forEach((o=>{if(o.includes(n)){const n=s[o];I(t,e,i,n.originalHandler,n.delegationSelector)}}))}(t,l,i,e.slice(1))}));const h=l[r]||{};Object.keys(h).forEach((i=>{const n=i.replace(E,"");if(!a||e.includes(n)){const e=h[i];I(t,l,r,e.originalHandler,e.delegationSelector)}}))},trigger(t,e,i){if("string"!=typeof e||!t)return null;const n=f(),s=P(e),o=e!==s,r=k.has(s);let a,l=!0,c=!0,h=!1,d=null;return o&&n&&(a=n.Event(e,i),n(t).trigger(a),l=!a.isPropagationStopped(),c=!a.isImmediatePropagationStopped(),h=a.isDefaultPrevented()),r?(d=document.createEvent("HTMLEvents"),d.initEvent(s,l,!0)):d=new CustomEvent(e,{bubbles:l,cancelable:!0}),void 0!==i&&Object.keys(i).forEach((t=>{Object.defineProperty(d,t,{get:()=>i[t]})})),h&&d.preventDefault(),c&&t.dispatchEvent(d),d.defaultPrevented&&void 0!==a&&a.preventDefault(),d}},M=new Map,H={set(t,e,i){M.has(t)||M.set(t,new Map);const 
n=M.get(t);n.has(e)||0===n.size?n.set(e,i):console.error(`Bootstrap doesn't allow more than one instance per element. Bound instance: ${Array.from(n.keys())[0]}.`)},get:(t,e)=>M.has(t)&&M.get(t).get(e)||null,remove(t,e){if(!M.has(t))return;const i=M.get(t);i.delete(e),0===i.size&&M.delete(t)}};class B{constructor(t){(t=r(t))&&(this._element=t,H.set(this._element,this.constructor.DATA_KEY,this))}dispose(){H.remove(this._element,this.constructor.DATA_KEY),j.off(this._element,this.constructor.EVENT_KEY),Object.getOwnPropertyNames(this).forEach((t=>{this[t]=null}))}_queueCallback(t,e,i=!0){b(t,e,i)}static getInstance(t){return H.get(r(t),this.DATA_KEY)}static getOrCreateInstance(t,e={}){return this.getInstance(t)||new this(t,"object"==typeof e?e:null)}static get VERSION(){return"5.1.3"}static get NAME(){throw new Error('You have to implement the static method "NAME", for each component!')}static get DATA_KEY(){return`bs.${this.NAME}`}static get EVENT_KEY(){return`.${this.DATA_KEY}`}}const R=(t,e="hide")=>{const i=`click.dismiss${t.EVENT_KEY}`,s=t.NAME;j.on(document,i,`[data-bs-dismiss="${s}"]`,(function(i){if(["A","AREA"].includes(this.tagName)&&i.preventDefault(),c(this))return;const o=n(this)||this.closest(`.${s}`);t.getOrCreateInstance(o)[e]()}))};class W extends B{static get NAME(){return"alert"}close(){if(j.trigger(this._element,"close.bs.alert").defaultPrevented)return;this._element.classList.remove("show");const t=this._element.classList.contains("fade");this._queueCallback((()=>this._destroyElement()),this._element,t)}_destroyElement(){this._element.remove(),j.trigger(this._element,"closed.bs.alert"),this.dispose()}static jQueryInterface(t){return this.each((function(){const e=W.getOrCreateInstance(this);if("string"==typeof t){if(void 0===e[t]||t.startsWith("_")||"constructor"===t)throw new TypeError(`No method named "${t}"`);e[t](this)}}))}}R(W,"close"),g(W);const $='[data-bs-toggle="button"]';class z extends B{static get NAME(){return"button"}toggle(){this._element.setAttribute("aria-pressed",this._element.classList.toggle("active"))}static jQueryInterface(t){return this.each((function(){const e=z.getOrCreateInstance(this);"toggle"===t&&e[t]()}))}}function q(t){return"true"===t||"false"!==t&&(t===Number(t).toString()?Number(t):""===t||"null"===t?null:t)}function F(t){return t.replace(/[A-Z]/g,(t=>`-${t.toLowerCase()}`))}j.on(document,"click.bs.button.data-api",$,(t=>{t.preventDefault();const e=t.target.closest($);z.getOrCreateInstance(e).toggle()})),g(z);const U={setDataAttribute(t,e,i){t.setAttribute(`data-bs-${F(e)}`,i)},removeDataAttribute(t,e){t.removeAttribute(`data-bs-${F(e)}`)},getDataAttributes(t){if(!t)return{};const e={};return Object.keys(t.dataset).filter((t=>t.startsWith("bs"))).forEach((i=>{let n=i.replace(/^bs/,"");n=n.charAt(0).toLowerCase()+n.slice(1,n.length),e[n]=q(t.dataset[i])})),e},getDataAttribute:(t,e)=>q(t.getAttribute(`data-bs-${F(e)}`)),offset(t){const e=t.getBoundingClientRect();return{top:e.top+window.pageYOffset,left:e.left+window.pageXOffset}},position:t=>({top:t.offsetTop,left:t.offsetLeft})},V={find:(t,e=document.documentElement)=>[].concat(...Element.prototype.querySelectorAll.call(e,t)),findOne:(t,e=document.documentElement)=>Element.prototype.querySelector.call(e,t),children:(t,e)=>[].concat(...t.children).filter((t=>t.matches(e))),parents(t,e){const i=[];let n=t.parentNode;for(;n&&n.nodeType===Node.ELEMENT_NODE&&3!==n.nodeType;)n.matches(e)&&i.push(n),n=n.parentNode;return i},prev(t,e){let 
i=t.previousElementSibling;for(;i;){if(i.matches(e))return[i];i=i.previousElementSibling}return[]},next(t,e){let i=t.nextElementSibling;for(;i;){if(i.matches(e))return[i];i=i.nextElementSibling}return[]},focusableChildren(t){const e=["a","button","input","textarea","select","details","[tabindex]",'[contenteditable="true"]'].map((t=>`${t}:not([tabindex^="-"])`)).join(", ");return this.find(e,t).filter((t=>!c(t)&&l(t)))}},K="carousel",X={interval:5e3,keyboard:!0,slide:!1,pause:"hover",wrap:!0,touch:!0},Y={interval:"(number|boolean)",keyboard:"boolean",slide:"(boolean|string)",pause:"(string|boolean)",wrap:"boolean",touch:"boolean"},Q="next",G="prev",Z="left",J="right",tt={ArrowLeft:J,ArrowRight:Z},et="slid.bs.carousel",it="active",nt=".active.carousel-item";class st extends B{constructor(t,e){super(t),this._items=null,this._interval=null,this._activeElement=null,this._isPaused=!1,this._isSliding=!1,this.touchTimeout=null,this.touchStartX=0,this.touchDeltaX=0,this._config=this._getConfig(e),this._indicatorsElement=V.findOne(".carousel-indicators",this._element),this._touchSupported="ontouchstart"in document.documentElement||navigator.maxTouchPoints>0,this._pointerEvent=Boolean(window.PointerEvent),this._addEventListeners()}static get Default(){return X}static get NAME(){return K}next(){this._slide(Q)}nextWhenVisible(){!document.hidden&&l(this._element)&&this.next()}prev(){this._slide(G)}pause(t){t||(this._isPaused=!0),V.findOne(".carousel-item-next, .carousel-item-prev",this._element)&&(s(this._element),this.cycle(!0)),clearInterval(this._interval),this._interval=null}cycle(t){t||(this._isPaused=!1),this._interval&&(clearInterval(this._interval),this._interval=null),this._config&&this._config.interval&&!this._isPaused&&(this._updateInterval(),this._interval=setInterval((document.visibilityState?this.nextWhenVisible:this.next).bind(this),this._config.interval))}to(t){this._activeElement=V.findOne(nt,this._element);const e=this._getItemIndex(this._activeElement);if(t>this._items.length-1||t<0)return;if(this._isSliding)return void j.one(this._element,et,(()=>this.to(t)));if(e===t)return this.pause(),void this.cycle();const i=t>e?Q:G;this._slide(i,this._items[t])}_getConfig(t){return t={...X,...U.getDataAttributes(this._element),..."object"==typeof t?t:{}},a(K,t,Y),t}_handleSwipe(){const t=Math.abs(this.touchDeltaX);if(t<=40)return;const e=t/this.touchDeltaX;this.touchDeltaX=0,e&&this._slide(e>0?J:Z)}_addEventListeners(){this._config.keyboard&&j.on(this._element,"keydown.bs.carousel",(t=>this._keydown(t))),"hover"===this._config.pause&&(j.on(this._element,"mouseenter.bs.carousel",(t=>this.pause(t))),j.on(this._element,"mouseleave.bs.carousel",(t=>this.cycle(t)))),this._config.touch&&this._touchSupported&&this._addTouchEventListeners()}_addTouchEventListeners(){const t=t=>this._pointerEvent&&("pen"===t.pointerType||"touch"===t.pointerType),e=e=>{t(e)?this.touchStartX=e.clientX:this._pointerEvent||(this.touchStartX=e.touches[0].clientX)},i=t=>{this.touchDeltaX=t.touches&&t.touches.length>1?0:t.touches[0].clientX-this.touchStartX},n=e=>{t(e)&&(this.touchDeltaX=e.clientX-this.touchStartX),this._handleSwipe(),"hover"===this._config.pause&&(this.pause(),this.touchTimeout&&clearTimeout(this.touchTimeout),this.touchTimeout=setTimeout((t=>this.cycle(t)),500+this._config.interval))};V.find(".carousel-item 
img",this._element).forEach((t=>{j.on(t,"dragstart.bs.carousel",(t=>t.preventDefault()))})),this._pointerEvent?(j.on(this._element,"pointerdown.bs.carousel",(t=>e(t))),j.on(this._element,"pointerup.bs.carousel",(t=>n(t))),this._element.classList.add("pointer-event")):(j.on(this._element,"touchstart.bs.carousel",(t=>e(t))),j.on(this._element,"touchmove.bs.carousel",(t=>i(t))),j.on(this._element,"touchend.bs.carousel",(t=>n(t))))}_keydown(t){if(/input|textarea/i.test(t.target.tagName))return;const e=tt[t.key];e&&(t.preventDefault(),this._slide(e))}_getItemIndex(t){return this._items=t&&t.parentNode?V.find(".carousel-item",t.parentNode):[],this._items.indexOf(t)}_getItemByOrder(t,e){const i=t===Q;return v(this._items,e,i,this._config.wrap)}_triggerSlideEvent(t,e){const i=this._getItemIndex(t),n=this._getItemIndex(V.findOne(nt,this._element));return j.trigger(this._element,"slide.bs.carousel",{relatedTarget:t,direction:e,from:n,to:i})}_setActiveIndicatorElement(t){if(this._indicatorsElement){const e=V.findOne(".active",this._indicatorsElement);e.classList.remove(it),e.removeAttribute("aria-current");const i=V.find("[data-bs-target]",this._indicatorsElement);for(let e=0;e{j.trigger(this._element,et,{relatedTarget:o,direction:d,from:s,to:r})};if(this._element.classList.contains("slide")){o.classList.add(h),u(o),n.classList.add(c),o.classList.add(c);const t=()=>{o.classList.remove(c,h),o.classList.add(it),n.classList.remove(it,h,c),this._isSliding=!1,setTimeout(f,0)};this._queueCallback(t,n,!0)}else n.classList.remove(it),o.classList.add(it),this._isSliding=!1,f();a&&this.cycle()}_directionToOrder(t){return[J,Z].includes(t)?m()?t===Z?G:Q:t===Z?Q:G:t}_orderToDirection(t){return[Q,G].includes(t)?m()?t===G?Z:J:t===G?J:Z:t}static carouselInterface(t,e){const i=st.getOrCreateInstance(t,e);let{_config:n}=i;"object"==typeof e&&(n={...n,...e});const s="string"==typeof e?e:n.slide;if("number"==typeof e)i.to(e);else if("string"==typeof s){if(void 0===i[s])throw new TypeError(`No method named "${s}"`);i[s]()}else n.interval&&n.ride&&(i.pause(),i.cycle())}static jQueryInterface(t){return this.each((function(){st.carouselInterface(this,t)}))}static dataApiClickHandler(t){const e=n(this);if(!e||!e.classList.contains("carousel"))return;const i={...U.getDataAttributes(e),...U.getDataAttributes(this)},s=this.getAttribute("data-bs-slide-to");s&&(i.interval=!1),st.carouselInterface(e,i),s&&st.getInstance(e).to(s),t.preventDefault()}}j.on(document,"click.bs.carousel.data-api","[data-bs-slide], [data-bs-slide-to]",st.dataApiClickHandler),j.on(window,"load.bs.carousel.data-api",(()=>{const t=V.find('[data-bs-ride="carousel"]');for(let e=0,i=t.length;et===this._element));null!==s&&o.length&&(this._selector=s,this._triggerArray.push(e))}this._initializeChildren(),this._config.parent||this._addAriaAndCollapsedClass(this._triggerArray,this._isShown()),this._config.toggle&&this.toggle()}static get Default(){return rt}static get NAME(){return ot}toggle(){this._isShown()?this.hide():this.show()}show(){if(this._isTransitioning||this._isShown())return;let t,e=[];if(this._config.parent){const t=V.find(ut,this._config.parent);e=V.find(".collapse.show, .collapse.collapsing",this._config.parent).filter((e=>!t.includes(e)))}const i=V.findOne(this._selector);if(e.length){const n=e.find((t=>i!==t));if(t=n?pt.getInstance(n):null,t&&t._isTransitioning)return}if(j.trigger(this._element,"show.bs.collapse").defaultPrevented)return;e.forEach((e=>{i!==e&&pt.getOrCreateInstance(e,{toggle:!1}).hide(),t||H.set(e,"bs.collapse",null)}));const 
n=this._getDimension();this._element.classList.remove(ct),this._element.classList.add(ht),this._element.style[n]=0,this._addAriaAndCollapsedClass(this._triggerArray,!0),this._isTransitioning=!0;const s=`scroll${n[0].toUpperCase()+n.slice(1)}`;this._queueCallback((()=>{this._isTransitioning=!1,this._element.classList.remove(ht),this._element.classList.add(ct,lt),this._element.style[n]="",j.trigger(this._element,"shown.bs.collapse")}),this._element,!0),this._element.style[n]=`${this._element[s]}px`}hide(){if(this._isTransitioning||!this._isShown())return;if(j.trigger(this._element,"hide.bs.collapse").defaultPrevented)return;const t=this._getDimension();this._element.style[t]=`${this._element.getBoundingClientRect()[t]}px`,u(this._element),this._element.classList.add(ht),this._element.classList.remove(ct,lt);const e=this._triggerArray.length;for(let t=0;t{this._isTransitioning=!1,this._element.classList.remove(ht),this._element.classList.add(ct),j.trigger(this._element,"hidden.bs.collapse")}),this._element,!0)}_isShown(t=this._element){return t.classList.contains(lt)}_getConfig(t){return(t={...rt,...U.getDataAttributes(this._element),...t}).toggle=Boolean(t.toggle),t.parent=r(t.parent),a(ot,t,at),t}_getDimension(){return this._element.classList.contains("collapse-horizontal")?"width":"height"}_initializeChildren(){if(!this._config.parent)return;const t=V.find(ut,this._config.parent);V.find(ft,this._config.parent).filter((e=>!t.includes(e))).forEach((t=>{const e=n(t);e&&this._addAriaAndCollapsedClass([t],this._isShown(e))}))}_addAriaAndCollapsedClass(t,e){t.length&&t.forEach((t=>{e?t.classList.remove(dt):t.classList.add(dt),t.setAttribute("aria-expanded",e)}))}static jQueryInterface(t){return this.each((function(){const e={};"string"==typeof t&&/show|hide/.test(t)&&(e.toggle=!1);const i=pt.getOrCreateInstance(this,e);if("string"==typeof t){if(void 0===i[t])throw new TypeError(`No method named "${t}"`);i[t]()}}))}}j.on(document,"click.bs.collapse.data-api",ft,(function(t){("A"===t.target.tagName||t.delegateTarget&&"A"===t.delegateTarget.tagName)&&t.preventDefault();const e=i(this);V.find(e).forEach((t=>{pt.getOrCreateInstance(t,{toggle:!1}).toggle()}))})),g(pt);var mt="top",gt="bottom",_t="right",bt="left",vt="auto",yt=[mt,gt,_t,bt],wt="start",Et="end",At="clippingParents",Tt="viewport",Ot="popper",Ct="reference",kt=yt.reduce((function(t,e){return t.concat([e+"-"+wt,e+"-"+Et])}),[]),Lt=[].concat(yt,[vt]).reduce((function(t,e){return t.concat([e,e+"-"+wt,e+"-"+Et])}),[]),xt="beforeRead",Dt="read",St="afterRead",Nt="beforeMain",It="main",Pt="afterMain",jt="beforeWrite",Mt="write",Ht="afterWrite",Bt=[xt,Dt,St,Nt,It,Pt,jt,Mt,Ht];function Rt(t){return t?(t.nodeName||"").toLowerCase():null}function Wt(t){if(null==t)return window;if("[object Window]"!==t.toString()){var e=t.ownerDocument;return e&&e.defaultView||window}return t}function $t(t){return t instanceof Wt(t).Element||t instanceof Element}function zt(t){return t instanceof Wt(t).HTMLElement||t instanceof HTMLElement}function qt(t){return"undefined"!=typeof ShadowRoot&&(t instanceof Wt(t).ShadowRoot||t instanceof ShadowRoot)}const Ft={name:"applyStyles",enabled:!0,phase:"write",fn:function(t){var e=t.state;Object.keys(e.elements).forEach((function(t){var i=e.styles[t]||{},n=e.attributes[t]||{},s=e.elements[t];zt(s)&&Rt(s)&&(Object.assign(s.style,i),Object.keys(n).forEach((function(t){var e=n[t];!1===e?s.removeAttribute(t):s.setAttribute(t,!0===e?"":e)})))}))},effect:function(t){var 
e=t.state,i={popper:{position:e.options.strategy,left:"0",top:"0",margin:"0"},arrow:{position:"absolute"},reference:{}};return Object.assign(e.elements.popper.style,i.popper),e.styles=i,e.elements.arrow&&Object.assign(e.elements.arrow.style,i.arrow),function(){Object.keys(e.elements).forEach((function(t){var n=e.elements[t],s=e.attributes[t]||{},o=Object.keys(e.styles.hasOwnProperty(t)?e.styles[t]:i[t]).reduce((function(t,e){return t[e]="",t}),{});zt(n)&&Rt(n)&&(Object.assign(n.style,o),Object.keys(s).forEach((function(t){n.removeAttribute(t)})))}))}},requires:["computeStyles"]};function Ut(t){return t.split("-")[0]}function Vt(t,e){var i=t.getBoundingClientRect();return{width:i.width/1,height:i.height/1,top:i.top/1,right:i.right/1,bottom:i.bottom/1,left:i.left/1,x:i.left/1,y:i.top/1}}function Kt(t){var e=Vt(t),i=t.offsetWidth,n=t.offsetHeight;return Math.abs(e.width-i)<=1&&(i=e.width),Math.abs(e.height-n)<=1&&(n=e.height),{x:t.offsetLeft,y:t.offsetTop,width:i,height:n}}function Xt(t,e){var i=e.getRootNode&&e.getRootNode();if(t.contains(e))return!0;if(i&&qt(i)){var n=e;do{if(n&&t.isSameNode(n))return!0;n=n.parentNode||n.host}while(n)}return!1}function Yt(t){return Wt(t).getComputedStyle(t)}function Qt(t){return["table","td","th"].indexOf(Rt(t))>=0}function Gt(t){return(($t(t)?t.ownerDocument:t.document)||window.document).documentElement}function Zt(t){return"html"===Rt(t)?t:t.assignedSlot||t.parentNode||(qt(t)?t.host:null)||Gt(t)}function Jt(t){return zt(t)&&"fixed"!==Yt(t).position?t.offsetParent:null}function te(t){for(var e=Wt(t),i=Jt(t);i&&Qt(i)&&"static"===Yt(i).position;)i=Jt(i);return i&&("html"===Rt(i)||"body"===Rt(i)&&"static"===Yt(i).position)?e:i||function(t){var e=-1!==navigator.userAgent.toLowerCase().indexOf("firefox");if(-1!==navigator.userAgent.indexOf("Trident")&&zt(t)&&"fixed"===Yt(t).position)return null;for(var i=Zt(t);zt(i)&&["html","body"].indexOf(Rt(i))<0;){var n=Yt(i);if("none"!==n.transform||"none"!==n.perspective||"paint"===n.contain||-1!==["transform","perspective"].indexOf(n.willChange)||e&&"filter"===n.willChange||e&&n.filter&&"none"!==n.filter)return i;i=i.parentNode}return null}(t)||e}function ee(t){return["top","bottom"].indexOf(t)>=0?"x":"y"}var ie=Math.max,ne=Math.min,se=Math.round;function oe(t,e,i){return ie(t,ne(e,i))}function re(t){return Object.assign({},{top:0,right:0,bottom:0,left:0},t)}function ae(t,e){return e.reduce((function(e,i){return e[i]=t,e}),{})}const le={name:"arrow",enabled:!0,phase:"main",fn:function(t){var e,i=t.state,n=t.name,s=t.options,o=i.elements.arrow,r=i.modifiersData.popperOffsets,a=Ut(i.placement),l=ee(a),c=[bt,_t].indexOf(a)>=0?"height":"width";if(o&&r){var h=function(t,e){return re("number"!=typeof(t="function"==typeof t?t(Object.assign({},e.rects,{placement:e.placement})):t)?t:ae(t,yt))}(s.padding,i),d=Kt(o),u="y"===l?mt:bt,f="y"===l?gt:_t,p=i.rects.reference[c]+i.rects.reference[l]-r[l]-i.rects.popper[c],m=r[l]-i.rects.reference[l],g=te(o),_=g?"y"===l?g.clientHeight||0:g.clientWidth||0:0,b=p/2-m/2,v=h[u],y=_-d[c]-h[f],w=_/2-d[c]/2+b,E=oe(v,w,y),A=l;i.modifiersData[n]=((e={})[A]=E,e.centerOffset=E-w,e)}},effect:function(t){var e=t.state,i=t.options.element,n=void 0===i?"[data-popper-arrow]":i;null!=n&&("string"!=typeof n||(n=e.elements.popper.querySelector(n)))&&Xt(e.elements.popper,n)&&(e.elements.arrow=n)},requires:["popperOffsets"],requiresIfExists:["preventOverflow"]};function ce(t){return t.split("-")[1]}var he={top:"auto",right:"auto",bottom:"auto",left:"auto"};function de(t){var 
e,i=t.popper,n=t.popperRect,s=t.placement,o=t.variation,r=t.offsets,a=t.position,l=t.gpuAcceleration,c=t.adaptive,h=t.roundOffsets,d=!0===h?function(t){var e=t.x,i=t.y,n=window.devicePixelRatio||1;return{x:se(se(e*n)/n)||0,y:se(se(i*n)/n)||0}}(r):"function"==typeof h?h(r):r,u=d.x,f=void 0===u?0:u,p=d.y,m=void 0===p?0:p,g=r.hasOwnProperty("x"),_=r.hasOwnProperty("y"),b=bt,v=mt,y=window;if(c){var w=te(i),E="clientHeight",A="clientWidth";w===Wt(i)&&"static"!==Yt(w=Gt(i)).position&&"absolute"===a&&(E="scrollHeight",A="scrollWidth"),w=w,s!==mt&&(s!==bt&&s!==_t||o!==Et)||(v=gt,m-=w[E]-n.height,m*=l?1:-1),s!==bt&&(s!==mt&&s!==gt||o!==Et)||(b=_t,f-=w[A]-n.width,f*=l?1:-1)}var T,O=Object.assign({position:a},c&&he);return l?Object.assign({},O,((T={})[v]=_?"0":"",T[b]=g?"0":"",T.transform=(y.devicePixelRatio||1)<=1?"translate("+f+"px, "+m+"px)":"translate3d("+f+"px, "+m+"px, 0)",T)):Object.assign({},O,((e={})[v]=_?m+"px":"",e[b]=g?f+"px":"",e.transform="",e))}const ue={name:"computeStyles",enabled:!0,phase:"beforeWrite",fn:function(t){var e=t.state,i=t.options,n=i.gpuAcceleration,s=void 0===n||n,o=i.adaptive,r=void 0===o||o,a=i.roundOffsets,l=void 0===a||a,c={placement:Ut(e.placement),variation:ce(e.placement),popper:e.elements.popper,popperRect:e.rects.popper,gpuAcceleration:s};null!=e.modifiersData.popperOffsets&&(e.styles.popper=Object.assign({},e.styles.popper,de(Object.assign({},c,{offsets:e.modifiersData.popperOffsets,position:e.options.strategy,adaptive:r,roundOffsets:l})))),null!=e.modifiersData.arrow&&(e.styles.arrow=Object.assign({},e.styles.arrow,de(Object.assign({},c,{offsets:e.modifiersData.arrow,position:"absolute",adaptive:!1,roundOffsets:l})))),e.attributes.popper=Object.assign({},e.attributes.popper,{"data-popper-placement":e.placement})},data:{}};var fe={passive:!0};const pe={name:"eventListeners",enabled:!0,phase:"write",fn:function(){},effect:function(t){var e=t.state,i=t.instance,n=t.options,s=n.scroll,o=void 0===s||s,r=n.resize,a=void 0===r||r,l=Wt(e.elements.popper),c=[].concat(e.scrollParents.reference,e.scrollParents.popper);return o&&c.forEach((function(t){t.addEventListener("scroll",i.update,fe)})),a&&l.addEventListener("resize",i.update,fe),function(){o&&c.forEach((function(t){t.removeEventListener("scroll",i.update,fe)})),a&&l.removeEventListener("resize",i.update,fe)}},data:{}};var me={left:"right",right:"left",bottom:"top",top:"bottom"};function ge(t){return t.replace(/left|right|bottom|top/g,(function(t){return me[t]}))}var _e={start:"end",end:"start"};function be(t){return t.replace(/start|end/g,(function(t){return _e[t]}))}function ve(t){var e=Wt(t);return{scrollLeft:e.pageXOffset,scrollTop:e.pageYOffset}}function ye(t){return Vt(Gt(t)).left+ve(t).scrollLeft}function we(t){var e=Yt(t),i=e.overflow,n=e.overflowX,s=e.overflowY;return/auto|scroll|overlay|hidden/.test(i+s+n)}function Ee(t){return["html","body","#document"].indexOf(Rt(t))>=0?t.ownerDocument.body:zt(t)&&we(t)?t:Ee(Zt(t))}function Ae(t,e){var i;void 0===e&&(e=[]);var n=Ee(t),s=n===(null==(i=t.ownerDocument)?void 0:i.body),o=Wt(n),r=s?[o].concat(o.visualViewport||[],we(n)?n:[]):n,a=e.concat(r);return s?a:a.concat(Ae(Zt(r)))}function Te(t){return Object.assign({},t,{left:t.x,top:t.y,right:t.x+t.width,bottom:t.y+t.height})}function Oe(t,e){return e===Tt?Te(function(t){var e=Wt(t),i=Gt(t),n=e.visualViewport,s=i.clientWidth,o=i.clientHeight,r=0,a=0;return 
n&&(s=n.width,o=n.height,/^((?!chrome|android).)*safari/i.test(navigator.userAgent)||(r=n.offsetLeft,a=n.offsetTop)),{width:s,height:o,x:r+ye(t),y:a}}(t)):zt(e)?function(t){var e=Vt(t);return e.top=e.top+t.clientTop,e.left=e.left+t.clientLeft,e.bottom=e.top+t.clientHeight,e.right=e.left+t.clientWidth,e.width=t.clientWidth,e.height=t.clientHeight,e.x=e.left,e.y=e.top,e}(e):Te(function(t){var e,i=Gt(t),n=ve(t),s=null==(e=t.ownerDocument)?void 0:e.body,o=ie(i.scrollWidth,i.clientWidth,s?s.scrollWidth:0,s?s.clientWidth:0),r=ie(i.scrollHeight,i.clientHeight,s?s.scrollHeight:0,s?s.clientHeight:0),a=-n.scrollLeft+ye(t),l=-n.scrollTop;return"rtl"===Yt(s||i).direction&&(a+=ie(i.clientWidth,s?s.clientWidth:0)-o),{width:o,height:r,x:a,y:l}}(Gt(t)))}function Ce(t){var e,i=t.reference,n=t.element,s=t.placement,o=s?Ut(s):null,r=s?ce(s):null,a=i.x+i.width/2-n.width/2,l=i.y+i.height/2-n.height/2;switch(o){case mt:e={x:a,y:i.y-n.height};break;case gt:e={x:a,y:i.y+i.height};break;case _t:e={x:i.x+i.width,y:l};break;case bt:e={x:i.x-n.width,y:l};break;default:e={x:i.x,y:i.y}}var c=o?ee(o):null;if(null!=c){var h="y"===c?"height":"width";switch(r){case wt:e[c]=e[c]-(i[h]/2-n[h]/2);break;case Et:e[c]=e[c]+(i[h]/2-n[h]/2)}}return e}function ke(t,e){void 0===e&&(e={});var i=e,n=i.placement,s=void 0===n?t.placement:n,o=i.boundary,r=void 0===o?At:o,a=i.rootBoundary,l=void 0===a?Tt:a,c=i.elementContext,h=void 0===c?Ot:c,d=i.altBoundary,u=void 0!==d&&d,f=i.padding,p=void 0===f?0:f,m=re("number"!=typeof p?p:ae(p,yt)),g=h===Ot?Ct:Ot,_=t.rects.popper,b=t.elements[u?g:h],v=function(t,e,i){var n="clippingParents"===e?function(t){var e=Ae(Zt(t)),i=["absolute","fixed"].indexOf(Yt(t).position)>=0&&zt(t)?te(t):t;return $t(i)?e.filter((function(t){return $t(t)&&Xt(t,i)&&"body"!==Rt(t)})):[]}(t):[].concat(e),s=[].concat(n,[i]),o=s[0],r=s.reduce((function(e,i){var n=Oe(t,i);return e.top=ie(n.top,e.top),e.right=ne(n.right,e.right),e.bottom=ne(n.bottom,e.bottom),e.left=ie(n.left,e.left),e}),Oe(t,o));return r.width=r.right-r.left,r.height=r.bottom-r.top,r.x=r.left,r.y=r.top,r}($t(b)?b:b.contextElement||Gt(t.elements.popper),r,l),y=Vt(t.elements.reference),w=Ce({reference:y,element:_,strategy:"absolute",placement:s}),E=Te(Object.assign({},_,w)),A=h===Ot?E:y,T={top:v.top-A.top+m.top,bottom:A.bottom-v.bottom+m.bottom,left:v.left-A.left+m.left,right:A.right-v.right+m.right},O=t.modifiersData.offset;if(h===Ot&&O){var C=O[s];Object.keys(T).forEach((function(t){var e=[_t,gt].indexOf(t)>=0?1:-1,i=[mt,gt].indexOf(t)>=0?"y":"x";T[t]+=C[i]*e}))}return T}function Le(t,e){void 0===e&&(e={});var i=e,n=i.placement,s=i.boundary,o=i.rootBoundary,r=i.padding,a=i.flipVariations,l=i.allowedAutoPlacements,c=void 0===l?Lt:l,h=ce(n),d=h?a?kt:kt.filter((function(t){return ce(t)===h})):yt,u=d.filter((function(t){return c.indexOf(t)>=0}));0===u.length&&(u=d);var f=u.reduce((function(e,i){return e[i]=ke(t,{placement:i,boundary:s,rootBoundary:o,padding:r})[Ut(i)],e}),{});return Object.keys(f).sort((function(t,e){return f[t]-f[e]}))}const xe={name:"flip",enabled:!0,phase:"main",fn:function(t){var e=t.state,i=t.options,n=t.name;if(!e.modifiersData[n]._skip){for(var s=i.mainAxis,o=void 0===s||s,r=i.altAxis,a=void 0===r||r,l=i.fallbackPlacements,c=i.padding,h=i.boundary,d=i.rootBoundary,u=i.altBoundary,f=i.flipVariations,p=void 0===f||f,m=i.allowedAutoPlacements,g=e.options.placement,_=Ut(g),b=l||(_!==g&&p?function(t){if(Ut(t)===vt)return[];var e=ge(t);return[be(t),e,be(e)]}(g):[ge(g)]),v=[g].concat(b).reduce((function(t,i){return 
t.concat(Ut(i)===vt?Le(e,{placement:i,boundary:h,rootBoundary:d,padding:c,flipVariations:p,allowedAutoPlacements:m}):i)}),[]),y=e.rects.reference,w=e.rects.popper,E=new Map,A=!0,T=v[0],O=0;O=0,D=x?"width":"height",S=ke(e,{placement:C,boundary:h,rootBoundary:d,altBoundary:u,padding:c}),N=x?L?_t:bt:L?gt:mt;y[D]>w[D]&&(N=ge(N));var I=ge(N),P=[];if(o&&P.push(S[k]<=0),a&&P.push(S[N]<=0,S[I]<=0),P.every((function(t){return t}))){T=C,A=!1;break}E.set(C,P)}if(A)for(var j=function(t){var e=v.find((function(e){var i=E.get(e);if(i)return i.slice(0,t).every((function(t){return t}))}));if(e)return T=e,"break"},M=p?3:1;M>0&&"break"!==j(M);M--);e.placement!==T&&(e.modifiersData[n]._skip=!0,e.placement=T,e.reset=!0)}},requiresIfExists:["offset"],data:{_skip:!1}};function De(t,e,i){return void 0===i&&(i={x:0,y:0}),{top:t.top-e.height-i.y,right:t.right-e.width+i.x,bottom:t.bottom-e.height+i.y,left:t.left-e.width-i.x}}function Se(t){return[mt,_t,gt,bt].some((function(e){return t[e]>=0}))}const Ne={name:"hide",enabled:!0,phase:"main",requiresIfExists:["preventOverflow"],fn:function(t){var e=t.state,i=t.name,n=e.rects.reference,s=e.rects.popper,o=e.modifiersData.preventOverflow,r=ke(e,{elementContext:"reference"}),a=ke(e,{altBoundary:!0}),l=De(r,n),c=De(a,s,o),h=Se(l),d=Se(c);e.modifiersData[i]={referenceClippingOffsets:l,popperEscapeOffsets:c,isReferenceHidden:h,hasPopperEscaped:d},e.attributes.popper=Object.assign({},e.attributes.popper,{"data-popper-reference-hidden":h,"data-popper-escaped":d})}},Ie={name:"offset",enabled:!0,phase:"main",requires:["popperOffsets"],fn:function(t){var e=t.state,i=t.options,n=t.name,s=i.offset,o=void 0===s?[0,0]:s,r=Lt.reduce((function(t,i){return t[i]=function(t,e,i){var n=Ut(t),s=[bt,mt].indexOf(n)>=0?-1:1,o="function"==typeof i?i(Object.assign({},e,{placement:t})):i,r=o[0],a=o[1];return r=r||0,a=(a||0)*s,[bt,_t].indexOf(n)>=0?{x:a,y:r}:{x:r,y:a}}(i,e.rects,o),t}),{}),a=r[e.placement],l=a.x,c=a.y;null!=e.modifiersData.popperOffsets&&(e.modifiersData.popperOffsets.x+=l,e.modifiersData.popperOffsets.y+=c),e.modifiersData[n]=r}},Pe={name:"popperOffsets",enabled:!0,phase:"read",fn:function(t){var e=t.state,i=t.name;e.modifiersData[i]=Ce({reference:e.rects.reference,element:e.rects.popper,strategy:"absolute",placement:e.placement})},data:{}},je={name:"preventOverflow",enabled:!0,phase:"main",fn:function(t){var e=t.state,i=t.options,n=t.name,s=i.mainAxis,o=void 0===s||s,r=i.altAxis,a=void 0!==r&&r,l=i.boundary,c=i.rootBoundary,h=i.altBoundary,d=i.padding,u=i.tether,f=void 0===u||u,p=i.tetherOffset,m=void 0===p?0:p,g=ke(e,{boundary:l,rootBoundary:c,padding:d,altBoundary:h}),_=Ut(e.placement),b=ce(e.placement),v=!b,y=ee(_),w="x"===y?"y":"x",E=e.modifiersData.popperOffsets,A=e.rects.reference,T=e.rects.popper,O="function"==typeof m?m(Object.assign({},e.rects,{placement:e.placement})):m,C={x:0,y:0};if(E){if(o||a){var k="y"===y?mt:bt,L="y"===y?gt:_t,x="y"===y?"height":"width",D=E[y],S=E[y]+g[k],N=E[y]-g[L],I=f?-T[x]/2:0,P=b===wt?A[x]:T[x],j=b===wt?-T[x]:-A[x],M=e.elements.arrow,H=f&&M?Kt(M):{width:0,height:0},B=e.modifiersData["arrow#persistent"]?e.modifiersData["arrow#persistent"].padding:{top:0,right:0,bottom:0,left:0},R=B[k],W=B[L],$=oe(0,A[x],H[x]),z=v?A[x]/2-I-$-R-O:P-$-R-O,q=v?-A[x]/2+I+$+W+O:j+$+W+O,F=e.elements.arrow&&te(e.elements.arrow),U=F?"y"===y?F.clientTop||0:F.clientLeft||0:0,V=e.modifiersData.offset?e.modifiersData.offset[e.placement][y]:0,K=E[y]+z-V-U,X=E[y]+q-V;if(o){var Y=oe(f?ne(S,K):S,D,f?ie(N,X):N);E[y]=Y,C[y]=Y-D}if(a){var 
Q="x"===y?mt:bt,G="x"===y?gt:_t,Z=E[w],J=Z+g[Q],tt=Z-g[G],et=oe(f?ne(J,K):J,Z,f?ie(tt,X):tt);E[w]=et,C[w]=et-Z}}e.modifiersData[n]=C}},requiresIfExists:["offset"]};function Me(t,e,i){void 0===i&&(i=!1);var n=zt(e);zt(e)&&function(t){var e=t.getBoundingClientRect();e.width,t.offsetWidth,e.height,t.offsetHeight}(e);var s,o,r=Gt(e),a=Vt(t),l={scrollLeft:0,scrollTop:0},c={x:0,y:0};return(n||!n&&!i)&&(("body"!==Rt(e)||we(r))&&(l=(s=e)!==Wt(s)&&zt(s)?{scrollLeft:(o=s).scrollLeft,scrollTop:o.scrollTop}:ve(s)),zt(e)?((c=Vt(e)).x+=e.clientLeft,c.y+=e.clientTop):r&&(c.x=ye(r))),{x:a.left+l.scrollLeft-c.x,y:a.top+l.scrollTop-c.y,width:a.width,height:a.height}}function He(t){var e=new Map,i=new Set,n=[];function s(t){i.add(t.name),[].concat(t.requires||[],t.requiresIfExists||[]).forEach((function(t){if(!i.has(t)){var n=e.get(t);n&&s(n)}})),n.push(t)}return t.forEach((function(t){e.set(t.name,t)})),t.forEach((function(t){i.has(t.name)||s(t)})),n}var Be={placement:"bottom",modifiers:[],strategy:"absolute"};function Re(){for(var t=arguments.length,e=new Array(t),i=0;ij.on(t,"mouseover",d))),this._element.focus(),this._element.setAttribute("aria-expanded",!0),this._menu.classList.add(Je),this._element.classList.add(Je),j.trigger(this._element,"shown.bs.dropdown",t)}hide(){if(c(this._element)||!this._isShown(this._menu))return;const t={relatedTarget:this._element};this._completeHide(t)}dispose(){this._popper&&this._popper.destroy(),super.dispose()}update(){this._inNavbar=this._detectNavbar(),this._popper&&this._popper.update()}_completeHide(t){j.trigger(this._element,"hide.bs.dropdown",t).defaultPrevented||("ontouchstart"in document.documentElement&&[].concat(...document.body.children).forEach((t=>j.off(t,"mouseover",d))),this._popper&&this._popper.destroy(),this._menu.classList.remove(Je),this._element.classList.remove(Je),this._element.setAttribute("aria-expanded","false"),U.removeDataAttribute(this._menu,"popper"),j.trigger(this._element,"hidden.bs.dropdown",t))}_getConfig(t){if(t={...this.constructor.Default,...U.getDataAttributes(this._element),...t},a(Ue,t,this.constructor.DefaultType),"object"==typeof t.reference&&!o(t.reference)&&"function"!=typeof t.reference.getBoundingClientRect)throw new TypeError(`${Ue.toUpperCase()}: Option "reference" provided type "object" without a required "getBoundingClientRect" method.`);return t}_createPopper(t){if(void 0===Fe)throw new TypeError("Bootstrap's dropdowns require Popper (https://popper.js.org)");let e=this._element;"parent"===this._config.reference?e=t:o(this._config.reference)?e=r(this._config.reference):"object"==typeof this._config.reference&&(e=this._config.reference);const i=this._getPopperConfig(),n=i.modifiers.find((t=>"applyStyles"===t.name&&!1===t.enabled));this._popper=qe(e,this._menu,i),n&&U.setDataAttribute(this._menu,"popper","static")}_isShown(t=this._element){return t.classList.contains(Je)}_getMenuElement(){return V.next(this._element,ei)[0]}_getPlacement(){const t=this._element.parentNode;if(t.classList.contains("dropend"))return ri;if(t.classList.contains("dropstart"))return ai;const e="end"===getComputedStyle(this._menu).getPropertyValue("--bs-position").trim();return t.classList.contains("dropup")?e?ni:ii:e?oi:si}_detectNavbar(){return null!==this._element.closest(".navbar")}_getOffset(){const{offset:t}=this._config;return"string"==typeof t?t.split(",").map((t=>Number.parseInt(t,10))):"function"==typeof t?e=>t(e,this._element):t}_getPopperConfig(){const 
t={placement:this._getPlacement(),modifiers:[{name:"preventOverflow",options:{boundary:this._config.boundary}},{name:"offset",options:{offset:this._getOffset()}}]};return"static"===this._config.display&&(t.modifiers=[{name:"applyStyles",enabled:!1}]),{...t,..."function"==typeof this._config.popperConfig?this._config.popperConfig(t):this._config.popperConfig}}_selectMenuItem({key:t,target:e}){const i=V.find(".dropdown-menu .dropdown-item:not(.disabled):not(:disabled)",this._menu).filter(l);i.length&&v(i,e,t===Ye,!i.includes(e)).focus()}static jQueryInterface(t){return this.each((function(){const e=hi.getOrCreateInstance(this,t);if("string"==typeof t){if(void 0===e[t])throw new TypeError(`No method named "${t}"`);e[t]()}}))}static clearMenus(t){if(t&&(2===t.button||"keyup"===t.type&&"Tab"!==t.key))return;const e=V.find(ti);for(let i=0,n=e.length;ie+t)),this._setElementAttributes(di,"paddingRight",(e=>e+t)),this._setElementAttributes(ui,"marginRight",(e=>e-t))}_disableOverFlow(){this._saveInitialAttribute(this._element,"overflow"),this._element.style.overflow="hidden"}_setElementAttributes(t,e,i){const n=this.getWidth();this._applyManipulationCallback(t,(t=>{if(t!==this._element&&window.innerWidth>t.clientWidth+n)return;this._saveInitialAttribute(t,e);const s=window.getComputedStyle(t)[e];t.style[e]=`${i(Number.parseFloat(s))}px`}))}reset(){this._resetElementAttributes(this._element,"overflow"),this._resetElementAttributes(this._element,"paddingRight"),this._resetElementAttributes(di,"paddingRight"),this._resetElementAttributes(ui,"marginRight")}_saveInitialAttribute(t,e){const i=t.style[e];i&&U.setDataAttribute(t,e,i)}_resetElementAttributes(t,e){this._applyManipulationCallback(t,(t=>{const i=U.getDataAttribute(t,e);void 0===i?t.style.removeProperty(e):(U.removeDataAttribute(t,e),t.style[e]=i)}))}_applyManipulationCallback(t,e){o(t)?e(t):V.find(t,this._element).forEach(e)}isOverflowing(){return this.getWidth()>0}}const pi={className:"modal-backdrop",isVisible:!0,isAnimated:!1,rootElement:"body",clickCallback:null},mi={className:"string",isVisible:"boolean",isAnimated:"boolean",rootElement:"(element|string)",clickCallback:"(function|null)"},gi="show",_i="mousedown.bs.backdrop";class bi{constructor(t){this._config=this._getConfig(t),this._isAppended=!1,this._element=null}show(t){this._config.isVisible?(this._append(),this._config.isAnimated&&u(this._getElement()),this._getElement().classList.add(gi),this._emulateAnimation((()=>{_(t)}))):_(t)}hide(t){this._config.isVisible?(this._getElement().classList.remove(gi),this._emulateAnimation((()=>{this.dispose(),_(t)}))):_(t)}_getElement(){if(!this._element){const t=document.createElement("div");t.className=this._config.className,this._config.isAnimated&&t.classList.add("fade"),this._element=t}return this._element}_getConfig(t){return(t={...pi,..."object"==typeof t?t:{}}).rootElement=r(t.rootElement),a("backdrop",t,mi),t}_append(){this._isAppended||(this._config.rootElement.append(this._getElement()),j.on(this._getElement(),_i,(()=>{_(this._config.clickCallback)})),this._isAppended=!0)}dispose(){this._isAppended&&(j.off(this._element,_i),this._element.remove(),this._isAppended=!1)}_emulateAnimation(t){b(t,this._getElement(),this._config.isAnimated)}}const vi={trapElement:null,autofocus:!0},yi={trapElement:"element",autofocus:"boolean"},wi=".bs.focustrap",Ei="backward";class 
Ai{constructor(t){this._config=this._getConfig(t),this._isActive=!1,this._lastTabNavDirection=null}activate(){const{trapElement:t,autofocus:e}=this._config;this._isActive||(e&&t.focus(),j.off(document,wi),j.on(document,"focusin.bs.focustrap",(t=>this._handleFocusin(t))),j.on(document,"keydown.tab.bs.focustrap",(t=>this._handleKeydown(t))),this._isActive=!0)}deactivate(){this._isActive&&(this._isActive=!1,j.off(document,wi))}_handleFocusin(t){const{target:e}=t,{trapElement:i}=this._config;if(e===document||e===i||i.contains(e))return;const n=V.focusableChildren(i);0===n.length?i.focus():this._lastTabNavDirection===Ei?n[n.length-1].focus():n[0].focus()}_handleKeydown(t){"Tab"===t.key&&(this._lastTabNavDirection=t.shiftKey?Ei:"forward")}_getConfig(t){return t={...vi,..."object"==typeof t?t:{}},a("focustrap",t,yi),t}}const Ti="modal",Oi="Escape",Ci={backdrop:!0,keyboard:!0,focus:!0},ki={backdrop:"(boolean|string)",keyboard:"boolean",focus:"boolean"},Li="hidden.bs.modal",xi="show.bs.modal",Di="resize.bs.modal",Si="click.dismiss.bs.modal",Ni="keydown.dismiss.bs.modal",Ii="mousedown.dismiss.bs.modal",Pi="modal-open",ji="show",Mi="modal-static";class Hi extends B{constructor(t,e){super(t),this._config=this._getConfig(e),this._dialog=V.findOne(".modal-dialog",this._element),this._backdrop=this._initializeBackDrop(),this._focustrap=this._initializeFocusTrap(),this._isShown=!1,this._ignoreBackdropClick=!1,this._isTransitioning=!1,this._scrollBar=new fi}static get Default(){return Ci}static get NAME(){return Ti}toggle(t){return this._isShown?this.hide():this.show(t)}show(t){this._isShown||this._isTransitioning||j.trigger(this._element,xi,{relatedTarget:t}).defaultPrevented||(this._isShown=!0,this._isAnimated()&&(this._isTransitioning=!0),this._scrollBar.hide(),document.body.classList.add(Pi),this._adjustDialog(),this._setEscapeEvent(),this._setResizeEvent(),j.on(this._dialog,Ii,(()=>{j.one(this._element,"mouseup.dismiss.bs.modal",(t=>{t.target===this._element&&(this._ignoreBackdropClick=!0)}))})),this._showBackdrop((()=>this._showElement(t))))}hide(){if(!this._isShown||this._isTransitioning)return;if(j.trigger(this._element,"hide.bs.modal").defaultPrevented)return;this._isShown=!1;const t=this._isAnimated();t&&(this._isTransitioning=!0),this._setEscapeEvent(),this._setResizeEvent(),this._focustrap.deactivate(),this._element.classList.remove(ji),j.off(this._element,Si),j.off(this._dialog,Ii),this._queueCallback((()=>this._hideModal()),this._element,t)}dispose(){[window,this._dialog].forEach((t=>j.off(t,".bs.modal"))),this._backdrop.dispose(),this._focustrap.deactivate(),super.dispose()}handleUpdate(){this._adjustDialog()}_initializeBackDrop(){return new bi({isVisible:Boolean(this._config.backdrop),isAnimated:this._isAnimated()})}_initializeFocusTrap(){return new Ai({trapElement:this._element})}_getConfig(t){return t={...Ci,...U.getDataAttributes(this._element),..."object"==typeof t?t:{}},a(Ti,t,ki),t}_showElement(t){const 
e=this._isAnimated(),i=V.findOne(".modal-body",this._dialog);this._element.parentNode&&this._element.parentNode.nodeType===Node.ELEMENT_NODE||document.body.append(this._element),this._element.style.display="block",this._element.removeAttribute("aria-hidden"),this._element.setAttribute("aria-modal",!0),this._element.setAttribute("role","dialog"),this._element.scrollTop=0,i&&(i.scrollTop=0),e&&u(this._element),this._element.classList.add(ji),this._queueCallback((()=>{this._config.focus&&this._focustrap.activate(),this._isTransitioning=!1,j.trigger(this._element,"shown.bs.modal",{relatedTarget:t})}),this._dialog,e)}_setEscapeEvent(){this._isShown?j.on(this._element,Ni,(t=>{this._config.keyboard&&t.key===Oi?(t.preventDefault(),this.hide()):this._config.keyboard||t.key!==Oi||this._triggerBackdropTransition()})):j.off(this._element,Ni)}_setResizeEvent(){this._isShown?j.on(window,Di,(()=>this._adjustDialog())):j.off(window,Di)}_hideModal(){this._element.style.display="none",this._element.setAttribute("aria-hidden",!0),this._element.removeAttribute("aria-modal"),this._element.removeAttribute("role"),this._isTransitioning=!1,this._backdrop.hide((()=>{document.body.classList.remove(Pi),this._resetAdjustments(),this._scrollBar.reset(),j.trigger(this._element,Li)}))}_showBackdrop(t){j.on(this._element,Si,(t=>{this._ignoreBackdropClick?this._ignoreBackdropClick=!1:t.target===t.currentTarget&&(!0===this._config.backdrop?this.hide():"static"===this._config.backdrop&&this._triggerBackdropTransition())})),this._backdrop.show(t)}_isAnimated(){return this._element.classList.contains("fade")}_triggerBackdropTransition(){if(j.trigger(this._element,"hidePrevented.bs.modal").defaultPrevented)return;const{classList:t,scrollHeight:e,style:i}=this._element,n=e>document.documentElement.clientHeight;!n&&"hidden"===i.overflowY||t.contains(Mi)||(n||(i.overflowY="hidden"),t.add(Mi),this._queueCallback((()=>{t.remove(Mi),n||this._queueCallback((()=>{i.overflowY=""}),this._dialog)}),this._dialog),this._element.focus())}_adjustDialog(){const t=this._element.scrollHeight>document.documentElement.clientHeight,e=this._scrollBar.getWidth(),i=e>0;(!i&&t&&!m()||i&&!t&&m())&&(this._element.style.paddingLeft=`${e}px`),(i&&!t&&!m()||!i&&t&&m())&&(this._element.style.paddingRight=`${e}px`)}_resetAdjustments(){this._element.style.paddingLeft="",this._element.style.paddingRight=""}static jQueryInterface(t,e){return this.each((function(){const i=Hi.getOrCreateInstance(this,t);if("string"==typeof t){if(void 0===i[t])throw new TypeError(`No method named "${t}"`);i[t](e)}}))}}j.on(document,"click.bs.modal.data-api",'[data-bs-toggle="modal"]',(function(t){const e=n(this);["A","AREA"].includes(this.tagName)&&t.preventDefault(),j.one(e,xi,(t=>{t.defaultPrevented||j.one(e,Li,(()=>{l(this)&&this.focus()}))}));const i=V.findOne(".modal.show");i&&Hi.getInstance(i).hide(),Hi.getOrCreateInstance(e).toggle(this)})),R(Hi),g(Hi);const Bi="offcanvas",Ri={backdrop:!0,keyboard:!0,scroll:!1},Wi={backdrop:"boolean",keyboard:"boolean",scroll:"boolean"},$i="show",zi=".offcanvas.show",qi="hidden.bs.offcanvas";class Fi extends B{constructor(t,e){super(t),this._config=this._getConfig(e),this._isShown=!1,this._backdrop=this._initializeBackDrop(),this._focustrap=this._initializeFocusTrap(),this._addEventListeners()}static get NAME(){return Bi}static get Default(){return Ri}toggle(t){return 
this._isShown?this.hide():this.show(t)}show(t){this._isShown||j.trigger(this._element,"show.bs.offcanvas",{relatedTarget:t}).defaultPrevented||(this._isShown=!0,this._element.style.visibility="visible",this._backdrop.show(),this._config.scroll||(new fi).hide(),this._element.removeAttribute("aria-hidden"),this._element.setAttribute("aria-modal",!0),this._element.setAttribute("role","dialog"),this._element.classList.add($i),this._queueCallback((()=>{this._config.scroll||this._focustrap.activate(),j.trigger(this._element,"shown.bs.offcanvas",{relatedTarget:t})}),this._element,!0))}hide(){this._isShown&&(j.trigger(this._element,"hide.bs.offcanvas").defaultPrevented||(this._focustrap.deactivate(),this._element.blur(),this._isShown=!1,this._element.classList.remove($i),this._backdrop.hide(),this._queueCallback((()=>{this._element.setAttribute("aria-hidden",!0),this._element.removeAttribute("aria-modal"),this._element.removeAttribute("role"),this._element.style.visibility="hidden",this._config.scroll||(new fi).reset(),j.trigger(this._element,qi)}),this._element,!0)))}dispose(){this._backdrop.dispose(),this._focustrap.deactivate(),super.dispose()}_getConfig(t){return t={...Ri,...U.getDataAttributes(this._element),..."object"==typeof t?t:{}},a(Bi,t,Wi),t}_initializeBackDrop(){return new bi({className:"offcanvas-backdrop",isVisible:this._config.backdrop,isAnimated:!0,rootElement:this._element.parentNode,clickCallback:()=>this.hide()})}_initializeFocusTrap(){return new Ai({trapElement:this._element})}_addEventListeners(){j.on(this._element,"keydown.dismiss.bs.offcanvas",(t=>{this._config.keyboard&&"Escape"===t.key&&this.hide()}))}static jQueryInterface(t){return this.each((function(){const e=Fi.getOrCreateInstance(this,t);if("string"==typeof t){if(void 0===e[t]||t.startsWith("_")||"constructor"===t)throw new TypeError(`No method named "${t}"`);e[t](this)}}))}}j.on(document,"click.bs.offcanvas.data-api",'[data-bs-toggle="offcanvas"]',(function(t){const e=n(this);if(["A","AREA"].includes(this.tagName)&&t.preventDefault(),c(this))return;j.one(e,qi,(()=>{l(this)&&this.focus()}));const i=V.findOne(zi);i&&i!==e&&Fi.getInstance(i).hide(),Fi.getOrCreateInstance(e).toggle(this)})),j.on(window,"load.bs.offcanvas.data-api",(()=>V.find(zi).forEach((t=>Fi.getOrCreateInstance(t).show())))),R(Fi),g(Fi);const Ui=new Set(["background","cite","href","itemtype","longdesc","poster","src","xlink:href"]),Vi=/^(?:(?:https?|mailto|ftp|tel|file|sms):|[^#&/:?]*(?:[#/?]|$))/i,Ki=/^data:(?:image\/(?:bmp|gif|jpeg|jpg|png|tiff|webp)|video\/(?:mpeg|mp4|ogg|webm)|audio\/(?:mp3|oga|ogg|opus));base64,[\d+/a-z]+=*$/i,Xi=(t,e)=>{const i=t.nodeName.toLowerCase();if(e.includes(i))return!Ui.has(i)||Boolean(Vi.test(t.nodeValue)||Ki.test(t.nodeValue));const n=e.filter((t=>t instanceof RegExp));for(let t=0,e=n.length;t{Xi(t,r)||i.removeAttribute(t.nodeName)}))}return n.body.innerHTML}const Qi="tooltip",Gi=new Set(["sanitize","allowList","sanitizeFn"]),Zi={animation:"boolean",template:"string",title:"(string|element|function)",trigger:"string",delay:"(number|object)",html:"boolean",selector:"(string|boolean)",placement:"(string|function)",offset:"(array|string|function)",container:"(string|element|boolean)",fallbackPlacements:"array",boundary:"(string|element)",customClass:"(string|function)",sanitize:"boolean",sanitizeFn:"(null|function)",allowList:"object",popperConfig:"(null|object|function)"},Ji={AUTO:"auto",TOP:"top",RIGHT:m()?"left":"right",BOTTOM:"bottom",LEFT:m()?"right":"left"},tn={animation:!0,template:'',trigger:"hover 
focus",title:"",delay:0,html:!1,selector:!1,placement:"top",offset:[0,0],container:!1,fallbackPlacements:["top","right","bottom","left"],boundary:"clippingParents",customClass:"",sanitize:!0,sanitizeFn:null,allowList:{"*":["class","dir","id","lang","role",/^aria-[\w-]*$/i],a:["target","href","title","rel"],area:[],b:[],br:[],col:[],code:[],div:[],em:[],hr:[],h1:[],h2:[],h3:[],h4:[],h5:[],h6:[],i:[],img:["src","srcset","alt","title","width","height"],li:[],ol:[],p:[],pre:[],s:[],small:[],span:[],sub:[],sup:[],strong:[],u:[],ul:[]},popperConfig:null},en={HIDE:"hide.bs.tooltip",HIDDEN:"hidden.bs.tooltip",SHOW:"show.bs.tooltip",SHOWN:"shown.bs.tooltip",INSERTED:"inserted.bs.tooltip",CLICK:"click.bs.tooltip",FOCUSIN:"focusin.bs.tooltip",FOCUSOUT:"focusout.bs.tooltip",MOUSEENTER:"mouseenter.bs.tooltip",MOUSELEAVE:"mouseleave.bs.tooltip"},nn="fade",sn="show",on="show",rn="out",an=".tooltip-inner",ln=".modal",cn="hide.bs.modal",hn="hover",dn="focus";class un extends B{constructor(t,e){if(void 0===Fe)throw new TypeError("Bootstrap's tooltips require Popper (https://popper.js.org)");super(t),this._isEnabled=!0,this._timeout=0,this._hoverState="",this._activeTrigger={},this._popper=null,this._config=this._getConfig(e),this.tip=null,this._setListeners()}static get Default(){return tn}static get NAME(){return Qi}static get Event(){return en}static get DefaultType(){return Zi}enable(){this._isEnabled=!0}disable(){this._isEnabled=!1}toggleEnabled(){this._isEnabled=!this._isEnabled}toggle(t){if(this._isEnabled)if(t){const e=this._initializeOnDelegatedTarget(t);e._activeTrigger.click=!e._activeTrigger.click,e._isWithActiveTrigger()?e._enter(null,e):e._leave(null,e)}else{if(this.getTipElement().classList.contains(sn))return void this._leave(null,this);this._enter(null,this)}}dispose(){clearTimeout(this._timeout),j.off(this._element.closest(ln),cn,this._hideModalHandler),this.tip&&this.tip.remove(),this._disposePopper(),super.dispose()}show(){if("none"===this._element.style.display)throw new Error("Please use show on visible elements");if(!this.isWithContent()||!this._isEnabled)return;const t=j.trigger(this._element,this.constructor.Event.SHOW),e=h(this._element),i=null===e?this._element.ownerDocument.documentElement.contains(this._element):e.contains(this._element);if(t.defaultPrevented||!i)return;"tooltip"===this.constructor.NAME&&this.tip&&this.getTitle()!==this.tip.querySelector(an).innerHTML&&(this._disposePopper(),this.tip.remove(),this.tip=null);const n=this.getTipElement(),s=(t=>{do{t+=Math.floor(1e6*Math.random())}while(document.getElementById(t));return t})(this.constructor.NAME);n.setAttribute("id",s),this._element.setAttribute("aria-describedby",s),this._config.animation&&n.classList.add(nn);const o="function"==typeof this._config.placement?this._config.placement.call(this,n,this._element):this._config.placement,r=this._getAttachment(o);this._addAttachmentClass(r);const{container:a}=this._config;H.set(n,this.constructor.DATA_KEY,this),this._element.ownerDocument.documentElement.contains(this.tip)||(a.append(n),j.trigger(this._element,this.constructor.Event.INSERTED)),this._popper?this._popper.update():this._popper=qe(this._element,n,this._getPopperConfig(r)),n.classList.add(sn);const l=this._resolvePossibleFunction(this._config.customClass);l&&n.classList.add(...l.split(" ")),"ontouchstart"in document.documentElement&&[].concat(...document.body.children).forEach((t=>{j.on(t,"mouseover",d)}));const c=this.tip.classList.contains(nn);this._queueCallback((()=>{const 
t=this._hoverState;this._hoverState=null,j.trigger(this._element,this.constructor.Event.SHOWN),t===rn&&this._leave(null,this)}),this.tip,c)}hide(){if(!this._popper)return;const t=this.getTipElement();if(j.trigger(this._element,this.constructor.Event.HIDE).defaultPrevented)return;t.classList.remove(sn),"ontouchstart"in document.documentElement&&[].concat(...document.body.children).forEach((t=>j.off(t,"mouseover",d))),this._activeTrigger.click=!1,this._activeTrigger.focus=!1,this._activeTrigger.hover=!1;const e=this.tip.classList.contains(nn);this._queueCallback((()=>{this._isWithActiveTrigger()||(this._hoverState!==on&&t.remove(),this._cleanTipClass(),this._element.removeAttribute("aria-describedby"),j.trigger(this._element,this.constructor.Event.HIDDEN),this._disposePopper())}),this.tip,e),this._hoverState=""}update(){null!==this._popper&&this._popper.update()}isWithContent(){return Boolean(this.getTitle())}getTipElement(){if(this.tip)return this.tip;const t=document.createElement("div");t.innerHTML=this._config.template;const e=t.children[0];return this.setContent(e),e.classList.remove(nn,sn),this.tip=e,this.tip}setContent(t){this._sanitizeAndSetContent(t,this.getTitle(),an)}_sanitizeAndSetContent(t,e,i){const n=V.findOne(i,t);e||!n?this.setElementContent(n,e):n.remove()}setElementContent(t,e){if(null!==t)return o(e)?(e=r(e),void(this._config.html?e.parentNode!==t&&(t.innerHTML="",t.append(e)):t.textContent=e.textContent)):void(this._config.html?(this._config.sanitize&&(e=Yi(e,this._config.allowList,this._config.sanitizeFn)),t.innerHTML=e):t.textContent=e)}getTitle(){const t=this._element.getAttribute("data-bs-original-title")||this._config.title;return this._resolvePossibleFunction(t)}updateAttachment(t){return"right"===t?"end":"left"===t?"start":t}_initializeOnDelegatedTarget(t,e){return e||this.constructor.getOrCreateInstance(t.delegateTarget,this._getDelegateConfig())}_getOffset(){const{offset:t}=this._config;return"string"==typeof t?t.split(",").map((t=>Number.parseInt(t,10))):"function"==typeof t?e=>t(e,this._element):t}_resolvePossibleFunction(t){return"function"==typeof t?t.call(this._element):t}_getPopperConfig(t){const e={placement:t,modifiers:[{name:"flip",options:{fallbackPlacements:this._config.fallbackPlacements}},{name:"offset",options:{offset:this._getOffset()}},{name:"preventOverflow",options:{boundary:this._config.boundary}},{name:"arrow",options:{element:`.${this.constructor.NAME}-arrow`}},{name:"onChange",enabled:!0,phase:"afterWrite",fn:t=>this._handlePopperPlacementChange(t)}],onFirstUpdate:t=>{t.options.placement!==t.placement&&this._handlePopperPlacementChange(t)}};return{...e,..."function"==typeof this._config.popperConfig?this._config.popperConfig(e):this._config.popperConfig}}_addAttachmentClass(t){this.getTipElement().classList.add(`${this._getBasicClassPrefix()}-${this.updateAttachment(t)}`)}_getAttachment(t){return Ji[t.toUpperCase()]}_setListeners(){this._config.trigger.split(" ").forEach((t=>{if("click"===t)j.on(this._element,this.constructor.Event.CLICK,this._config.selector,(t=>this.toggle(t)));else if("manual"!==t){const 
e=t===hn?this.constructor.Event.MOUSEENTER:this.constructor.Event.FOCUSIN,i=t===hn?this.constructor.Event.MOUSELEAVE:this.constructor.Event.FOCUSOUT;j.on(this._element,e,this._config.selector,(t=>this._enter(t))),j.on(this._element,i,this._config.selector,(t=>this._leave(t)))}})),this._hideModalHandler=()=>{this._element&&this.hide()},j.on(this._element.closest(ln),cn,this._hideModalHandler),this._config.selector?this._config={...this._config,trigger:"manual",selector:""}:this._fixTitle()}_fixTitle(){const t=this._element.getAttribute("title"),e=typeof this._element.getAttribute("data-bs-original-title");(t||"string"!==e)&&(this._element.setAttribute("data-bs-original-title",t||""),!t||this._element.getAttribute("aria-label")||this._element.textContent||this._element.setAttribute("aria-label",t),this._element.setAttribute("title",""))}_enter(t,e){e=this._initializeOnDelegatedTarget(t,e),t&&(e._activeTrigger["focusin"===t.type?dn:hn]=!0),e.getTipElement().classList.contains(sn)||e._hoverState===on?e._hoverState=on:(clearTimeout(e._timeout),e._hoverState=on,e._config.delay&&e._config.delay.show?e._timeout=setTimeout((()=>{e._hoverState===on&&e.show()}),e._config.delay.show):e.show())}_leave(t,e){e=this._initializeOnDelegatedTarget(t,e),t&&(e._activeTrigger["focusout"===t.type?dn:hn]=e._element.contains(t.relatedTarget)),e._isWithActiveTrigger()||(clearTimeout(e._timeout),e._hoverState=rn,e._config.delay&&e._config.delay.hide?e._timeout=setTimeout((()=>{e._hoverState===rn&&e.hide()}),e._config.delay.hide):e.hide())}_isWithActiveTrigger(){for(const t in this._activeTrigger)if(this._activeTrigger[t])return!0;return!1}_getConfig(t){const e=U.getDataAttributes(this._element);return Object.keys(e).forEach((t=>{Gi.has(t)&&delete e[t]})),(t={...this.constructor.Default,...e,..."object"==typeof t&&t?t:{}}).container=!1===t.container?document.body:r(t.container),"number"==typeof t.delay&&(t.delay={show:t.delay,hide:t.delay}),"number"==typeof t.title&&(t.title=t.title.toString()),"number"==typeof t.content&&(t.content=t.content.toString()),a(Qi,t,this.constructor.DefaultType),t.sanitize&&(t.template=Yi(t.template,t.allowList,t.sanitizeFn)),t}_getDelegateConfig(){const t={};for(const e in this._config)this.constructor.Default[e]!==this._config[e]&&(t[e]=this._config[e]);return t}_cleanTipClass(){const t=this.getTipElement(),e=new RegExp(`(^|\\s)${this._getBasicClassPrefix()}\\S+`,"g"),i=t.getAttribute("class").match(e);null!==i&&i.length>0&&i.map((t=>t.trim())).forEach((e=>t.classList.remove(e)))}_getBasicClassPrefix(){return"bs-tooltip"}_handlePopperPlacementChange(t){const{state:e}=t;e&&(this.tip=e.elements.popper,this._cleanTipClass(),this._addAttachmentClass(this._getAttachment(e.placement)))}_disposePopper(){this._popper&&(this._popper.destroy(),this._popper=null)}static jQueryInterface(t){return this.each((function(){const e=un.getOrCreateInstance(this,t);if("string"==typeof t){if(void 0===e[t])throw new TypeError(`No method named "${t}"`);e[t]()}}))}}g(un);const fn={...un.Default,placement:"right",offset:[0,8],trigger:"click",content:"",template:''},pn={...un.DefaultType,content:"(string|element|function)"},mn={HIDE:"hide.bs.popover",HIDDEN:"hidden.bs.popover",SHOW:"show.bs.popover",SHOWN:"shown.bs.popover",INSERTED:"inserted.bs.popover",CLICK:"click.bs.popover",FOCUSIN:"focusin.bs.popover",FOCUSOUT:"focusout.bs.popover",MOUSEENTER:"mouseenter.bs.popover",MOUSELEAVE:"mouseleave.bs.popover"};class gn extends un{static get Default(){return fn}static get NAME(){return"popover"}static get 
Event(){return mn}static get DefaultType(){return pn}isWithContent(){return this.getTitle()||this._getContent()}setContent(t){this._sanitizeAndSetContent(t,this.getTitle(),".popover-header"),this._sanitizeAndSetContent(t,this._getContent(),".popover-body")}_getContent(){return this._resolvePossibleFunction(this._config.content)}_getBasicClassPrefix(){return"bs-popover"}static jQueryInterface(t){return this.each((function(){const e=gn.getOrCreateInstance(this,t);if("string"==typeof t){if(void 0===e[t])throw new TypeError(`No method named "${t}"`);e[t]()}}))}}g(gn);const _n="scrollspy",bn={offset:10,method:"auto",target:""},vn={offset:"number",method:"string",target:"(string|element)"},yn="active",wn=".nav-link, .list-group-item, .dropdown-item",En="position";class An extends B{constructor(t,e){super(t),this._scrollElement="BODY"===this._element.tagName?window:this._element,this._config=this._getConfig(e),this._offsets=[],this._targets=[],this._activeTarget=null,this._scrollHeight=0,j.on(this._scrollElement,"scroll.bs.scrollspy",(()=>this._process())),this.refresh(),this._process()}static get Default(){return bn}static get NAME(){return _n}refresh(){const t=this._scrollElement===this._scrollElement.window?"offset":En,e="auto"===this._config.method?t:this._config.method,n=e===En?this._getScrollTop():0;this._offsets=[],this._targets=[],this._scrollHeight=this._getScrollHeight(),V.find(wn,this._config.target).map((t=>{const s=i(t),o=s?V.findOne(s):null;if(o){const t=o.getBoundingClientRect();if(t.width||t.height)return[U[e](o).top+n,s]}return null})).filter((t=>t)).sort(((t,e)=>t[0]-e[0])).forEach((t=>{this._offsets.push(t[0]),this._targets.push(t[1])}))}dispose(){j.off(this._scrollElement,".bs.scrollspy"),super.dispose()}_getConfig(t){return(t={...bn,...U.getDataAttributes(this._element),..."object"==typeof t&&t?t:{}}).target=r(t.target)||document.documentElement,a(_n,t,vn),t}_getScrollTop(){return this._scrollElement===window?this._scrollElement.pageYOffset:this._scrollElement.scrollTop}_getScrollHeight(){return this._scrollElement.scrollHeight||Math.max(document.body.scrollHeight,document.documentElement.scrollHeight)}_getOffsetHeight(){return this._scrollElement===window?window.innerHeight:this._scrollElement.getBoundingClientRect().height}_process(){const t=this._getScrollTop()+this._config.offset,e=this._getScrollHeight(),i=this._config.offset+e-this._getOffsetHeight();if(this._scrollHeight!==e&&this.refresh(),t>=i){const t=this._targets[this._targets.length-1];this._activeTarget!==t&&this._activate(t)}else{if(this._activeTarget&&t0)return this._activeTarget=null,void this._clear();for(let e=this._offsets.length;e--;)this._activeTarget!==this._targets[e]&&t>=this._offsets[e]&&(void 0===this._offsets[e+1]||t`${e}[data-bs-target="${t}"],${e}[href="${t}"]`)),i=V.findOne(e.join(","),this._config.target);i.classList.add(yn),i.classList.contains("dropdown-item")?V.findOne(".dropdown-toggle",i.closest(".dropdown")).classList.add(yn):V.parents(i,".nav, .list-group").forEach((t=>{V.prev(t,".nav-link, .list-group-item").forEach((t=>t.classList.add(yn))),V.prev(t,".nav-item").forEach((t=>{V.children(t,".nav-link").forEach((t=>t.classList.add(yn)))}))})),j.trigger(this._scrollElement,"activate.bs.scrollspy",{relatedTarget:t})}_clear(){V.find(wn,this._config.target).filter((t=>t.classList.contains(yn))).forEach((t=>t.classList.remove(yn)))}static jQueryInterface(t){return this.each((function(){const e=An.getOrCreateInstance(this,t);if("string"==typeof t){if(void 0===e[t])throw new TypeError(`No method 
named "${t}"`);e[t]()}}))}}j.on(window,"load.bs.scrollspy.data-api",(()=>{V.find('[data-bs-spy="scroll"]').forEach((t=>new An(t)))})),g(An);const Tn="active",On="fade",Cn="show",kn=".active",Ln=":scope > li > .active";class xn extends B{static get NAME(){return"tab"}show(){if(this._element.parentNode&&this._element.parentNode.nodeType===Node.ELEMENT_NODE&&this._element.classList.contains(Tn))return;let t;const e=n(this._element),i=this._element.closest(".nav, .list-group");if(i){const e="UL"===i.nodeName||"OL"===i.nodeName?Ln:kn;t=V.find(e,i),t=t[t.length-1]}const s=t?j.trigger(t,"hide.bs.tab",{relatedTarget:this._element}):null;if(j.trigger(this._element,"show.bs.tab",{relatedTarget:t}).defaultPrevented||null!==s&&s.defaultPrevented)return;this._activate(this._element,i);const o=()=>{j.trigger(t,"hidden.bs.tab",{relatedTarget:this._element}),j.trigger(this._element,"shown.bs.tab",{relatedTarget:t})};e?this._activate(e,e.parentNode,o):o()}_activate(t,e,i){const n=(!e||"UL"!==e.nodeName&&"OL"!==e.nodeName?V.children(e,kn):V.find(Ln,e))[0],s=i&&n&&n.classList.contains(On),o=()=>this._transitionComplete(t,n,i);n&&s?(n.classList.remove(Cn),this._queueCallback(o,t,!0)):o()}_transitionComplete(t,e,i){if(e){e.classList.remove(Tn);const t=V.findOne(":scope > .dropdown-menu .active",e.parentNode);t&&t.classList.remove(Tn),"tab"===e.getAttribute("role")&&e.setAttribute("aria-selected",!1)}t.classList.add(Tn),"tab"===t.getAttribute("role")&&t.setAttribute("aria-selected",!0),u(t),t.classList.contains(On)&&t.classList.add(Cn);let n=t.parentNode;if(n&&"LI"===n.nodeName&&(n=n.parentNode),n&&n.classList.contains("dropdown-menu")){const e=t.closest(".dropdown");e&&V.find(".dropdown-toggle",e).forEach((t=>t.classList.add(Tn))),t.setAttribute("aria-expanded",!0)}i&&i()}static jQueryInterface(t){return this.each((function(){const e=xn.getOrCreateInstance(this);if("string"==typeof t){if(void 0===e[t])throw new TypeError(`No method named "${t}"`);e[t]()}}))}}j.on(document,"click.bs.tab.data-api",'[data-bs-toggle="tab"], [data-bs-toggle="pill"], [data-bs-toggle="list"]',(function(t){["A","AREA"].includes(this.tagName)&&t.preventDefault(),c(this)||xn.getOrCreateInstance(this).show()})),g(xn);const Dn="toast",Sn="hide",Nn="show",In="showing",Pn={animation:"boolean",autohide:"boolean",delay:"number"},jn={animation:!0,autohide:!0,delay:5e3};class Mn extends B{constructor(t,e){super(t),this._config=this._getConfig(e),this._timeout=null,this._hasMouseInteraction=!1,this._hasKeyboardInteraction=!1,this._setListeners()}static get DefaultType(){return Pn}static get Default(){return jn}static get NAME(){return Dn}show(){j.trigger(this._element,"show.bs.toast").defaultPrevented||(this._clearTimeout(),this._config.animation&&this._element.classList.add("fade"),this._element.classList.remove(Sn),u(this._element),this._element.classList.add(Nn),this._element.classList.add(In),this._queueCallback((()=>{this._element.classList.remove(In),j.trigger(this._element,"shown.bs.toast"),this._maybeScheduleHide()}),this._element,this._config.animation))}hide(){this._element.classList.contains(Nn)&&(j.trigger(this._element,"hide.bs.toast").defaultPrevented||(this._element.classList.add(In),this._queueCallback((()=>{this._element.classList.add(Sn),this._element.classList.remove(In),this._element.classList.remove(Nn),j.trigger(this._element,"hidden.bs.toast")}),this._element,this._config.animation)))}dispose(){this._clearTimeout(),this._element.classList.contains(Nn)&&this._element.classList.remove(Nn),super.dispose()}_getConfig(t){return 
t={...jn,...U.getDataAttributes(this._element),..."object"==typeof t&&t?t:{}},a(Dn,t,this.constructor.DefaultType),t}_maybeScheduleHide(){this._config.autohide&&(this._hasMouseInteraction||this._hasKeyboardInteraction||(this._timeout=setTimeout((()=>{this.hide()}),this._config.delay)))}_onInteraction(t,e){switch(t.type){case"mouseover":case"mouseout":this._hasMouseInteraction=e;break;case"focusin":case"focusout":this._hasKeyboardInteraction=e}if(e)return void this._clearTimeout();const i=t.relatedTarget;this._element===i||this._element.contains(i)||this._maybeScheduleHide()}_setListeners(){j.on(this._element,"mouseover.bs.toast",(t=>this._onInteraction(t,!0))),j.on(this._element,"mouseout.bs.toast",(t=>this._onInteraction(t,!1))),j.on(this._element,"focusin.bs.toast",(t=>this._onInteraction(t,!0))),j.on(this._element,"focusout.bs.toast",(t=>this._onInteraction(t,!1)))}_clearTimeout(){clearTimeout(this._timeout),this._timeout=null}static jQueryInterface(t){return this.each((function(){const e=Mn.getOrCreateInstance(this,t);if("string"==typeof t){if(void 0===e[t])throw new TypeError(`No method named "${t}"`);e[t](this)}}))}}return R(Mn),g(Mn),{Alert:W,Button:z,Carousel:st,Collapse:pt,Dropdown:hi,Modal:Hi,Offcanvas:Fi,Popover:gn,ScrollSpy:An,Tab:xn,Toast:Mn,Tooltip:un}})); -//# sourceMappingURL=bootstrap.bundle.min.js.map \ No newline at end of file diff --git a/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/util/visualizer.py b/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/util/visualizer.py deleted file mode 100644 index 63ce19ec5bbc01915ffae0cb550e2a3d0c4d9059..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/util/visualizer.py +++ /dev/null @@ -1,233 +0,0 @@ -import numpy as np -import os -import sys -import ntpath -import time -from . import util, html -from subprocess import Popen, PIPE - -if sys.version_info[0] == 2: - VisdomExceptionBase = Exception -else: - VisdomExceptionBase = ConnectionError - - -def save_images(webpage, visuals, image_path, aspect_ratio=1.0, width=256): - """Save examples to the disk. - - Parameters: - webpage (the HTML class) -- the HTML webpage class that stores these imaegs (see html.py for more details) - visuals (OrderedDict) -- an ordered dictionary that stores (name, examples (either tensor or numpy) ) pairs - image_path (str) -- the string is used to create image paths - aspect_ratio (float) -- the aspect ratio of saved examples - width (int) -- the examples will be resized to width x width - - This function will save examples stored in 'visuals' to the HTML file specified by 'webpage'. - """ - image_dir = webpage.get_image_dir() - short_path = ntpath.basename(image_path[0]) - name = os.path.splitext(short_path)[0] - - webpage.add_header(name) - ims, txts, links = [], [], [] - - for label, im_data in visuals.items(): - im = util.tensor2im(im_data) - image_name = '%s/%s.png' % (label, name) - os.makedirs(os.path.join(image_dir, label), exist_ok=True) - save_path = os.path.join(image_dir, image_name) - util.save_image(im, save_path) - ims.append(image_name) - txts.append(label) - links.append(image_name) - webpage.add_images(ims, txts, links, width=width) - - -class Visualizer(): - """This class includes several functions that can display/save examples and print/save logging information. - - It uses a Python library 'visdom' for display, and a Python library 'dominate' (wrapped in 'HTML') for creating HTML files with examples. 
- """ - - def __init__(self, opt): - """Initialize the Visualizer class - - Parameters: - opt -- stores all the experiment flags; needs to be a subclass of BaseOptions - Step 1: Cache the training/test options - Step 2: connect to a visdom server - Step 3: create an HTML object for saveing HTML filters - Step 4: create a logging file to store training losses - """ - self.opt = opt # cache the option - if opt.display_id is None: - self.display_id = np.random.randint(100000) * 10 # just a random display id - else: - self.display_id = opt.display_id - self.use_html = opt.isTrain and not opt.no_html - self.win_size = opt.display_winsize - self.name = opt.name - self.port = opt.display_port - self.saved = False - if self.display_id > 0: # connect to a visdom server given and - import visdom - self.plot_data = {} - self.ncols = opt.display_ncols - self.vis = visdom.Visdom(server=opt.display_server, port=opt.display_port, env=opt.display_env) - if not self.vis.check_connection(): - self.create_visdom_connections() - - if self.use_html: # create an HTML object at /web/; examples will be saved under /web/examples/ - self.web_dir = os.path.join(opt.checkpoints_dir, opt.name, 'web') - self.img_dir = os.path.join(self.web_dir, 'examples') - print('create web directory %s...' % self.web_dir) - util.mkdirs([self.web_dir, self.img_dir]) - # create a logging file to store training losses - self.log_name = os.path.join(opt.checkpoints_dir, opt.name, 'loss_log.txt') - with open(self.log_name, "a") as log_file: - now = time.strftime("%c") - log_file.write('================ Training Loss (%s) ================\n' % now) - - def reset(self): - """Reset the self.saved status""" - self.saved = False - - def create_visdom_connections(self): - """If the program could not connect to Visdom server, this function will start a new server at port < self.port > """ - cmd = sys.executable + ' -m visdom.server -p %d &>/dev/null &' % self.port - print('\n\nCould not connect to Visdom server. \n Trying to start a server....') - print('Command: %s' % cmd) - Popen(cmd, shell=True, stdout=PIPE, stderr=PIPE) - - def display_current_results(self, visuals, epoch, save_result): - """Display current results on visdom; save current results to an HTML file. - - Parameters: - visuals (OrderedDict) - - dictionary of examples to display or save - epoch (int) - - the current epoch - save_result (bool) - - if save the current results to an HTML file - """ - if self.display_id > 0: # show examples in the browser using visdom - ncols = self.ncols - if ncols > 0: # show all the examples in one visdom panel - ncols = min(ncols, len(visuals)) - h, w = next(iter(visuals.values())).shape[:2] - table_css = """""" % (w, h) # create a table css - # create a table of examples. - title = self.name - label_html = '' - label_html_row = '' - images = [] - idx = 0 - for label, image in visuals.items(): - image_numpy = util.tensor2im(image) - label_html_row += '%s' % label - images.append(image_numpy.transpose([2, 0, 1])) - idx += 1 - if idx % ncols == 0: - label_html += '%s' % label_html_row - label_html_row = '' - white_image = np.ones_like(image_numpy.transpose([2, 0, 1])) * 255 - while idx % ncols != 0: - images.append(white_image) - label_html_row += '' - idx += 1 - if label_html_row != '': - label_html += '%s' % label_html_row - try: - self.vis.images(images, nrow=ncols, win=self.display_id + 1, - padding=2, opts=dict(title=title + ' examples')) - label_html = '%s
' % label_html - self.vis.text(table_css + label_html, win=self.display_id + 2, - opts=dict(title=title + ' labels')) - except VisdomExceptionBase: - self.create_visdom_connections() - - else: # show each image in a separate visdom panel; - idx = 1 - try: - for label, image in visuals.items(): - image_numpy = util.tensor2im(image) - self.vis.image(image_numpy.transpose([2, 0, 1]), opts=dict(title=label), - win=self.display_id + idx) - idx += 1 - except VisdomExceptionBase: - self.create_visdom_connections() - - if self.use_html and (save_result or not self.saved): # save examples to an HTML file if they haven't been saved. - self.saved = True - # save examples to the disk - for label, image in visuals.items(): - image_numpy = util.tensor2im(image) - img_path = os.path.join(self.img_dir, 'epoch%.3d_%s.png' % (epoch, label)) - util.save_image(image_numpy, img_path) - - # update website - webpage = html.HTML(self.web_dir, 'Experiment name = %s' % self.name, refresh=1) - for n in range(epoch, 0, -1): - webpage.add_header('epoch [%d]' % n) - ims, txts, links = [], [], [] - - for label, image_numpy in visuals.items(): - img_path = 'epoch%.3d_%s.png' % (n, label) - ims.append(img_path) - txts.append(label) - links.append(img_path) - webpage.add_images(ims, txts, links, width=self.win_size) - webpage.save() - - def plot_current_losses(self, epoch, counter_ratio, losses): - """display the current losses on visdom display: dictionary of error labels and values - - Parameters: - epoch (int) -- current epoch - counter_ratio (float) -- progress (percentage) in the current epoch, between 0 to 1 - losses (OrderedDict) -- training losses stored in the format of (name, float) pairs - """ - if len(losses) == 0: - return - - plot_name = '_'.join(list(losses.keys())) - - if plot_name not in self.plot_data: - self.plot_data[plot_name] = {'X': [], 'Y': [], 'legend': list(losses.keys())} - - plot_data = self.plot_data[plot_name] - plot_id = list(self.plot_data.keys()).index(plot_name) - - plot_data['X'].append(epoch + counter_ratio) - plot_data['Y'].append([losses[k] for k in plot_data['legend']]) - try: - self.vis.line( - X=np.stack([np.array(plot_data['X'])] * len(plot_data['legend']), 1), - Y=np.array(plot_data['Y']), - opts={ - 'title': self.name, - 'legend': plot_data['legend'], - 'xlabel': 'epoch', - 'ylabel': 'loss'}, - win=self.display_id - plot_id) - except VisdomExceptionBase: - self.create_visdom_connections() - - # losses: same format as |losses| of plot_current_losses - def print_current_losses(self, epoch, iters, losses, t_comp, t_data): - """print current losses on console; also save the losses to the disk - - Parameters: - epoch (int) -- current epoch - iters (int) -- current training iteration during this epoch (reset to 0 at the end of every epoch) - losses (OrderedDict) -- training losses stored in the format of (name, float) pairs - t_comp (float) -- computational time per data point (normalized by batch_size) - t_data (float) -- data loading time per data point (normalized by batch_size) - """ - message = '(epoch: %d, iters: %d, time: %.3f, data: %.3f) ' % (epoch, iters, t_comp, t_data) - for k, v in losses.items(): - message += '%s: %.3f ' % (k, v) - - print(message) # print the message - with open(self.log_name, "a") as log_file: - log_file.write('%s\n' % message) # save the message diff --git a/spaces/Awiny/Image2Paragraph/models/segment_models/edit_anything_model.py b/spaces/Awiny/Image2Paragraph/models/segment_models/edit_anything_model.py deleted file mode 100644 index 
afeb3b337572cc17e442465c4de2f72e41decd4e..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/segment_models/edit_anything_model.py +++ /dev/null @@ -1,62 +0,0 @@ -import cv2 -import torch -import mmcv -import numpy as np -from PIL import Image -from utils.util import resize_long_edge -from concurrent.futures import ThreadPoolExecutor -import time - -class EditAnything: - def __init__(self, image_caption_model): - self.device = image_caption_model.device - self.data_type = image_caption_model.data_type - self.image_caption_model = image_caption_model - - def region_classify_w_blip2(self, images): - inputs = self.image_caption_model.processor(images=images, return_tensors="pt").to(self.device, self.data_type) - generated_ids = self.image_caption_model.model.generate(**inputs) - generated_texts = self.image_caption_model.processor.batch_decode(generated_ids, skip_special_tokens=True) - return [text.strip() for text in generated_texts] - - def process_ann(self, ann, image, target_size=(224, 224)): - start_time = time.time() - m = ann['segmentation'] - m_3c = m[:, :, np.newaxis] - m_3c = np.concatenate((m_3c, m_3c, m_3c), axis=2) - bbox = ann['bbox'] - region = mmcv.imcrop(image * m_3c, np.array([bbox[0], bbox[1], bbox[0] + bbox[2], bbox[1] + bbox[3]]), scale=1) - resized_region = mmcv.imresize(region, target_size) - end_time = time.time() - print("process_ann took {:.2f} seconds".format(end_time - start_time)) - return resized_region, ann - - def region_level_semantic_api(self, image, anns, topk=5): - """ - rank regions by area, and classify each region with blip2, parallel processing for speed up - Args: - image: numpy array - topk: int - Returns: - topk_region_w_class_label: list of dict with key 'class_label' - """ - start_time = time.time() - if len(anns) == 0: - return [] - sorted_anns = sorted(anns, key=(lambda x: x['area']), reverse=True) - topk_anns = sorted_anns[:min(topk, len(sorted_anns))] - with ThreadPoolExecutor() as executor: - regions_and_anns = list(executor.map(lambda ann: self.process_ann(ann, image), topk_anns)) - regions = [region for region, _ in regions_and_anns] - region_class_labels = self.region_classify_w_blip2(regions) - for (region, ann), class_label in zip(regions_and_anns, region_class_labels): - ann['class_name'] = class_label - end_time = time.time() - print("region_level_semantic_api took {:.2f} seconds".format(end_time - start_time)) - - return [ann for _, ann in regions_and_anns] - - def semantic_class_w_mask(self, img_src, anns): - image = Image.open(img_src) - image = resize_long_edge(image, 384) - return self.region_level_semantic_api(image, anns) \ No newline at end of file diff --git a/spaces/Bart92/RVC_HF/demucs/train.py b/spaces/Bart92/RVC_HF/demucs/train.py deleted file mode 100644 index 6bd221279dc986a6df1a8d7b4d4444bb822a1cb3..0000000000000000000000000000000000000000 --- a/spaces/Bart92/RVC_HF/demucs/train.py +++ /dev/null @@ -1,127 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -import sys - -import tqdm -from torch.utils.data import DataLoader -from torch.utils.data.distributed import DistributedSampler - -from .utils import apply_model, average_metric, center_trim - - -def train_model(epoch, - dataset, - model, - criterion, - optimizer, - augment, - quantizer=None, - diffq=0, - repeat=1, - device="cpu", - seed=None, - workers=4, - world_size=1, - batch_size=16): - - if world_size > 1: - sampler = DistributedSampler(dataset) - sampler_epoch = epoch * repeat - if seed is not None: - sampler_epoch += seed * 1000 - sampler.set_epoch(sampler_epoch) - batch_size //= world_size - loader = DataLoader(dataset, batch_size=batch_size, sampler=sampler, num_workers=workers) - else: - loader = DataLoader(dataset, batch_size=batch_size, num_workers=workers, shuffle=True) - current_loss = 0 - model_size = 0 - for repetition in range(repeat): - tq = tqdm.tqdm(loader, - ncols=120, - desc=f"[{epoch:03d}] train ({repetition + 1}/{repeat})", - leave=False, - file=sys.stdout, - unit=" batch") - total_loss = 0 - for idx, sources in enumerate(tq): - if len(sources) < batch_size: - # skip uncomplete batch for augment.Remix to work properly - continue - sources = sources.to(device) - sources = augment(sources) - mix = sources.sum(dim=1) - - estimates = model(mix) - sources = center_trim(sources, estimates) - loss = criterion(estimates, sources) - model_size = 0 - if quantizer is not None: - model_size = quantizer.model_size() - - train_loss = loss + diffq * model_size - train_loss.backward() - grad_norm = 0 - for p in model.parameters(): - if p.grad is not None: - grad_norm += p.grad.data.norm()**2 - grad_norm = grad_norm**0.5 - optimizer.step() - optimizer.zero_grad() - - if quantizer is not None: - model_size = model_size.item() - - total_loss += loss.item() - current_loss = total_loss / (1 + idx) - tq.set_postfix(loss=f"{current_loss:.4f}", ms=f"{model_size:.2f}", - grad=f"{grad_norm:.5f}") - - # free some space before next round - del sources, mix, estimates, loss, train_loss - - if world_size > 1: - sampler.epoch += 1 - - if world_size > 1: - current_loss = average_metric(current_loss) - return current_loss, model_size - - -def validate_model(epoch, - dataset, - model, - criterion, - device="cpu", - rank=0, - world_size=1, - shifts=0, - overlap=0.25, - split=False): - indexes = range(rank, len(dataset), world_size) - tq = tqdm.tqdm(indexes, - ncols=120, - desc=f"[{epoch:03d}] valid", - leave=False, - file=sys.stdout, - unit=" track") - current_loss = 0 - for index in tq: - streams = dataset[index] - # first five minutes to avoid OOM on --upsample models - streams = streams[..., :15_000_000] - streams = streams.to(device) - sources = streams[1:] - mix = streams[0] - estimates = apply_model(model, mix, shifts=shifts, split=split, overlap=overlap) - loss = criterion(estimates, sources) - current_loss += loss.item() / len(indexes) - del estimates, streams, sources - - if world_size > 1: - current_loss = average_metric(current_loss, len(indexes)) - return current_loss diff --git a/spaces/Benson/text-generation/Examples/Amor.ly App Descargar Apk.md b/spaces/Benson/text-generation/Examples/Amor.ly App Descargar Apk.md deleted file mode 100644 index 8b47e77f22a1178e4336bc6882a9549fd3ab417e..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Amor.ly App Descargar Apk.md +++ /dev/null @@ -1,62 +0,0 @@ - -

Love.ly App: A Creative Way to Capture and Share Your Video Story

-

Do you love making videos and sharing them with your friends and followers? Do you want to express yourself through live video streaming and connect with people from all over the world? Do you want to sell your products or services through live video chat and boost your sales? If you answered yes to any of these questions, you should check out the Love.ly app, a creative way to capture and share your video story.

-

What is the Love.ly app?

-

The Love.ly app is a free video calling app that lets you make live video calls and chat with random people, make new friends, and even sell your products or services through live video streaming. You can also show your emotions to your favorite streamers by sending them cool virtual gifts, and enjoy exclusive perks as you level up. The Love.ly app is available for iOS and Android devices, and you can download it from the App Store or Google Play Store.

-

love.ly app download apk


Download ———>>> https://bltlly.com/2v6LQL



-

Features of the Love.ly app

-

Some of the features of the Love.ly app are:

-
  • You can make free video calls anytime and anywhere with anyone.
  • You can connect with people from different countries and cultures and learn something new.
  • You can create your own video story and share it with your friends and followers.
  • You can sell your products or services through live video chat and get instant feedback from your customers.
  • You can send and receive virtual gifts and coins and use them to unlock more features and rewards.
  • You can join or create group video chats with your friends and have fun together.
-

How to download and install the Love.ly app on your device

-

To download and install the Love.ly app on your device, follow these simple steps:

-
  1. Go to the App Store or Google Play Store on your device.
  2. Search for "Love.ly" in the search bar.
  3. Tap the app icon, then tap the "Install" or "Get" button.
  4. Wait for the app to download and install on your device.
  5. Open the app and sign up with your email, phone number, or social media account.
  6. Start making video calls and enjoy the app.
-

Why should you use the Love.ly app?

-

There are many reasons why you should use the Love.ly app, such as:

-

Benefits of using the Love.ly app

-
  • You can express yourself creatively and authentically through live video streaming.
  • You can meet new people, make new friends, and expand your social network.
  • You can increase your sales, promote your brand, and grow your business through live video chat.
  • You can have fun, relax, and entertain yourself by watching or joining live video streams.
  • You can earn coins, gifts, and rewards by using the app regularly.
-

Tips and tricks for using the Love.ly app

-
  • Use a good camera, microphone, and lighting to improve your video quality.
  • Choose a catchy title, description, and thumbnail for your video stream to attract more viewers.
  • Interact with your viewers, answer their questions, thank them for their gifts, and ask them to follow you.
  • Be respectful, polite, and friendly to everyone on the app. Report any inappropriate behavior or content.
  • Explore different categories, topics, and hashtags in the app to find interesting streams or people to watch or chat with.
-

Conclusion

- -

In this article, we have discussed what the Love.ly app is, what features it has, how to download and install it on your device, why you should use it, and some tips and tricks for using it. We hope this article has helped you learn more about the Love.ly app and inspired you to give it a try. If you have any questions or comments, feel free to contact us or leave a comment below.

-

FAQs

-

Here are some frequently asked questions about the Love.ly app:

-

-

Q: Is the Love.ly app safe?

-

A: Yes, the Love.ly app is safe. It uses encryption and privacy protection to ensure that your personal information and data are not leaked or misused. You can also block or report any user who bothers you on the app.

-

Q: How can I earn coins and gifts on the Love.ly app?

-

A: You can earn coins and gifts on the Love.ly app by watching or joining live video streams, sending or receiving virtual gifts, inviting your friends to join the app, completing daily tasks, and taking part in events and activities. You can use the coins and gifts to unlock more features and rewards in the app.

-

Q: How can I become a streamer on the Love.ly app?

-

A: You can become a streamer on the Love.ly app by creating your own video story and sharing it with your friends and followers. You can also apply to become a verified streamer on the app by meeting certain requirements and criteria. As a streamer, you can earn money by receiving gifts from your viewers, selling your products or services through live video chat, and joining the partner program.

-

Q: What are the categories and topics on the Love.ly app?

-

A: Hay varias categorías y temas en la aplicación Love.ly que puede explorar, como música, danza, belleza, moda, juegos, deportes, viajes, educación, estilo de vida, comedia, arte y más. También puedes buscar hashtags o palabras clave específicas para encontrar flujos o personas que coincidan con tus intereses.

Q: How can I contact the Love.ly app's customer support?

A: You can contact the Love.ly app's customer support by emailing support@love.ly.com or by filling out the feedback form in the app. You can also visit the official Love.ly website at www.love.ly.com for more information and updates.

\ No newline at end of file diff --git a/spaces/BetterAPI/BetterChat/src/lib/stores/errors.ts b/spaces/BetterAPI/BetterChat/src/lib/stores/errors.ts deleted file mode 100644 index c7dd124ff03c1845237213b6c22ec7afefcd18e8..0000000000000000000000000000000000000000 --- a/spaces/BetterAPI/BetterChat/src/lib/stores/errors.ts +++ /dev/null @@ -1,7 +0,0 @@ -import { writable } from "svelte/store"; - -export const ERROR_MESSAGES = { - default: "Oops, something went wrong.", -}; - -export const error = writable(null); diff --git a/spaces/BetterAPI/BetterChat_new/src/lib/stores/pendingMessage.ts b/spaces/BetterAPI/BetterChat_new/src/lib/stores/pendingMessage.ts deleted file mode 100644 index f28d7aaf9995f9848f6c7988503c20a08d81d97c..0000000000000000000000000000000000000000 --- a/spaces/BetterAPI/BetterChat_new/src/lib/stores/pendingMessage.ts +++ /dev/null @@ -1,3 +0,0 @@ -import { writable } from "svelte/store"; - -export const pendingMessage = writable(""); diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/metadata/pkg_resources.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/metadata/pkg_resources.py deleted file mode 100644 index f330ef12a2c5ea0a4adbecbeea389741479d5eb4..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/metadata/pkg_resources.py +++ /dev/null @@ -1,270 +0,0 @@ -import email.message -import email.parser -import logging -import os -import zipfile -from typing import Collection, Iterable, Iterator, List, Mapping, NamedTuple, Optional - -from pip._vendor import pkg_resources -from pip._vendor.packaging.requirements import Requirement -from pip._vendor.packaging.utils import NormalizedName, canonicalize_name -from pip._vendor.packaging.version import parse as parse_version - -from pip._internal.exceptions import InvalidWheel, NoneMetadataError, UnsupportedWheel -from pip._internal.utils.egg_link import egg_link_path_from_location -from pip._internal.utils.misc import display_path, normalize_path -from pip._internal.utils.wheel import parse_wheel, read_wheel_metadata_file - -from .base import ( - BaseDistribution, - BaseEntryPoint, - BaseEnvironment, - DistributionVersion, - InfoPath, - Wheel, -) - -logger = logging.getLogger(__name__) - - -class EntryPoint(NamedTuple): - name: str - value: str - group: str - - -class InMemoryMetadata: - """IMetadataProvider that reads metadata files from a dictionary. - - This also maps metadata decoding exceptions to our internal exception type. - """ - - def __init__(self, metadata: Mapping[str, bytes], wheel_name: str) -> None: - self._metadata = metadata - self._wheel_name = wheel_name - - def has_metadata(self, name: str) -> bool: - return name in self._metadata - - def get_metadata(self, name: str) -> str: - try: - return self._metadata[name].decode() - except UnicodeDecodeError as e: - # Augment the default error with the origin of the file. 
- raise UnsupportedWheel( - f"Error decoding metadata for {self._wheel_name}: {e} in {name} file" - ) - - def get_metadata_lines(self, name: str) -> Iterable[str]: - return pkg_resources.yield_lines(self.get_metadata(name)) - - def metadata_isdir(self, name: str) -> bool: - return False - - def metadata_listdir(self, name: str) -> List[str]: - return [] - - def run_script(self, script_name: str, namespace: str) -> None: - pass - - -class Distribution(BaseDistribution): - def __init__(self, dist: pkg_resources.Distribution) -> None: - self._dist = dist - - @classmethod - def from_directory(cls, directory: str) -> BaseDistribution: - dist_dir = directory.rstrip(os.sep) - - # Build a PathMetadata object, from path to metadata. :wink: - base_dir, dist_dir_name = os.path.split(dist_dir) - metadata = pkg_resources.PathMetadata(base_dir, dist_dir) - - # Determine the correct Distribution object type. - if dist_dir.endswith(".egg-info"): - dist_cls = pkg_resources.Distribution - dist_name = os.path.splitext(dist_dir_name)[0] - else: - assert dist_dir.endswith(".dist-info") - dist_cls = pkg_resources.DistInfoDistribution - dist_name = os.path.splitext(dist_dir_name)[0].split("-")[0] - - dist = dist_cls(base_dir, project_name=dist_name, metadata=metadata) - return cls(dist) - - @classmethod - def from_metadata_file_contents( - cls, - metadata_contents: bytes, - filename: str, - project_name: str, - ) -> BaseDistribution: - metadata_dict = { - "METADATA": metadata_contents, - } - dist = pkg_resources.DistInfoDistribution( - location=filename, - metadata=InMemoryMetadata(metadata_dict, filename), - project_name=project_name, - ) - return cls(dist) - - @classmethod - def from_wheel(cls, wheel: Wheel, name: str) -> BaseDistribution: - try: - with wheel.as_zipfile() as zf: - info_dir, _ = parse_wheel(zf, name) - metadata_dict = { - path.split("/", 1)[-1]: read_wheel_metadata_file(zf, path) - for path in zf.namelist() - if path.startswith(f"{info_dir}/") - } - except zipfile.BadZipFile as e: - raise InvalidWheel(wheel.location, name) from e - except UnsupportedWheel as e: - raise UnsupportedWheel(f"{name} has an invalid wheel, {e}") - dist = pkg_resources.DistInfoDistribution( - location=wheel.location, - metadata=InMemoryMetadata(metadata_dict, wheel.location), - project_name=name, - ) - return cls(dist) - - @property - def location(self) -> Optional[str]: - return self._dist.location - - @property - def installed_location(self) -> Optional[str]: - egg_link = egg_link_path_from_location(self.raw_name) - if egg_link: - location = egg_link - elif self.location: - location = self.location - else: - return None - return normalize_path(location) - - @property - def info_location(self) -> Optional[str]: - return self._dist.egg_info - - @property - def installed_by_distutils(self) -> bool: - # A distutils-installed distribution is provided by FileMetadata. This - # provider has a "path" attribute not present anywhere else. Not the - # best introspection logic, but pip has been doing this for a long time. 
- try: - return bool(self._dist._provider.path) - except AttributeError: - return False - - @property - def canonical_name(self) -> NormalizedName: - return canonicalize_name(self._dist.project_name) - - @property - def version(self) -> DistributionVersion: - return parse_version(self._dist.version) - - def is_file(self, path: InfoPath) -> bool: - return self._dist.has_metadata(str(path)) - - def iter_distutils_script_names(self) -> Iterator[str]: - yield from self._dist.metadata_listdir("scripts") - - def read_text(self, path: InfoPath) -> str: - name = str(path) - if not self._dist.has_metadata(name): - raise FileNotFoundError(name) - content = self._dist.get_metadata(name) - if content is None: - raise NoneMetadataError(self, name) - return content - - def iter_entry_points(self) -> Iterable[BaseEntryPoint]: - for group, entries in self._dist.get_entry_map().items(): - for name, entry_point in entries.items(): - name, _, value = str(entry_point).partition("=") - yield EntryPoint(name=name.strip(), value=value.strip(), group=group) - - def _metadata_impl(self) -> email.message.Message: - """ - :raises NoneMetadataError: if the distribution reports `has_metadata()` - True but `get_metadata()` returns None. - """ - if isinstance(self._dist, pkg_resources.DistInfoDistribution): - metadata_name = "METADATA" - else: - metadata_name = "PKG-INFO" - try: - metadata = self.read_text(metadata_name) - except FileNotFoundError: - if self.location: - displaying_path = display_path(self.location) - else: - displaying_path = repr(self.location) - logger.warning("No metadata found in %s", displaying_path) - metadata = "" - feed_parser = email.parser.FeedParser() - feed_parser.feed(metadata) - return feed_parser.close() - - def iter_dependencies(self, extras: Collection[str] = ()) -> Iterable[Requirement]: - if extras: # pkg_resources raises on invalid extras, so we sanitize. - extras = frozenset(extras).intersection(self._dist.extras) - return self._dist.requires(extras) - - def iter_provided_extras(self) -> Iterable[str]: - return self._dist.extras - - -class Environment(BaseEnvironment): - def __init__(self, ws: pkg_resources.WorkingSet) -> None: - self._ws = ws - - @classmethod - def default(cls) -> BaseEnvironment: - return cls(pkg_resources.working_set) - - @classmethod - def from_paths(cls, paths: Optional[List[str]]) -> BaseEnvironment: - return cls(pkg_resources.WorkingSet(paths)) - - def _iter_distributions(self) -> Iterator[BaseDistribution]: - for dist in self._ws: - yield Distribution(dist) - - def _search_distribution(self, name: str) -> Optional[BaseDistribution]: - """Find a distribution matching the ``name`` in the environment. - - This searches from *all* distributions available in the environment, to - match the behavior of ``pkg_resources.get_distribution()``. - """ - canonical_name = canonicalize_name(name) - for dist in self.iter_all_distributions(): - if dist.canonical_name == canonical_name: - return dist - return None - - def get_distribution(self, name: str) -> Optional[BaseDistribution]: - # Search the distribution by looking through the working set. - dist = self._search_distribution(name) - if dist: - return dist - - # If distribution could not be found, call working_set.require to - # update the working set, and try to find the distribution again. - # This might happen for e.g. when you install a package twice, once - # using setup.py develop and again using setup.py install. 
Now when - # running pip uninstall twice, the package gets removed from the - # working set in the first uninstall, so we have to populate the - # working set again so that pip knows about it and the packages gets - # picked up and is successfully uninstalled the second time too. - try: - # We didn't pass in any version specifiers, so this can never - # raise pkg_resources.VersionConflict. - self._ws.require(name) - except pkg_resources.DistributionNotFound: - return None - return self._search_distribution(name) diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/command/build_clib.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/command/build_clib.py deleted file mode 100644 index 67ce2444ea69a0bbdfab0bda8c2aa14951187096..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/command/build_clib.py +++ /dev/null @@ -1,101 +0,0 @@ -import distutils.command.build_clib as orig -from distutils.errors import DistutilsSetupError -from distutils import log -from setuptools.dep_util import newer_pairwise_group - - -class build_clib(orig.build_clib): - """ - Override the default build_clib behaviour to do the following: - - 1. Implement a rudimentary timestamp-based dependency system - so 'compile()' doesn't run every time. - 2. Add more keys to the 'build_info' dictionary: - * obj_deps - specify dependencies for each object compiled. - this should be a dictionary mapping a key - with the source filename to a list of - dependencies. Use an empty string for global - dependencies. - * cflags - specify a list of additional flags to pass to - the compiler. - """ - - def build_libraries(self, libraries): - for (lib_name, build_info) in libraries: - sources = build_info.get('sources') - if sources is None or not isinstance(sources, (list, tuple)): - raise DistutilsSetupError( - "in 'libraries' option (library '%s'), " - "'sources' must be present and must be " - "a list of source filenames" % lib_name) - sources = list(sources) - - log.info("building '%s' library", lib_name) - - # Make sure everything is the correct type. - # obj_deps should be a dictionary of keys as sources - # and a list/tuple of files that are its dependencies. - obj_deps = build_info.get('obj_deps', dict()) - if not isinstance(obj_deps, dict): - raise DistutilsSetupError( - "in 'libraries' option (library '%s'), " - "'obj_deps' must be a dictionary of " - "type 'source: list'" % lib_name) - dependencies = [] - - # Get the global dependencies that are specified by the '' key. - # These will go into every source's dependency list. - global_deps = obj_deps.get('', list()) - if not isinstance(global_deps, (list, tuple)): - raise DistutilsSetupError( - "in 'libraries' option (library '%s'), " - "'obj_deps' must be a dictionary of " - "type 'source: list'" % lib_name) - - # Build the list to be used by newer_pairwise_group - # each source will be auto-added to its dependencies. 
- for source in sources: - src_deps = [source] - src_deps.extend(global_deps) - extra_deps = obj_deps.get(source, list()) - if not isinstance(extra_deps, (list, tuple)): - raise DistutilsSetupError( - "in 'libraries' option (library '%s'), " - "'obj_deps' must be a dictionary of " - "type 'source: list'" % lib_name) - src_deps.extend(extra_deps) - dependencies.append(src_deps) - - expected_objects = self.compiler.object_filenames( - sources, - output_dir=self.build_temp, - ) - - if ( - newer_pairwise_group(dependencies, expected_objects) - != ([], []) - ): - # First, compile the source code to object files in the library - # directory. (This should probably change to putting object - # files in a temporary build directory.) - macros = build_info.get('macros') - include_dirs = build_info.get('include_dirs') - cflags = build_info.get('cflags') - self.compiler.compile( - sources, - output_dir=self.build_temp, - macros=macros, - include_dirs=include_dirs, - extra_postargs=cflags, - debug=self.debug - ) - - # Now "link" the object files together into a static library. - # (On Unix at least, this isn't really linking -- it just - # builds an archive. Whatever.) - self.compiler.create_static_lib( - expected_objects, - lib_name, - output_dir=self.build_clib, - debug=self.debug - ) diff --git a/spaces/CVPR/LIVE/pybind11/include/pybind11/detail/descr.h b/spaces/CVPR/LIVE/pybind11/include/pybind11/detail/descr.h deleted file mode 100644 index 92720cd56277e73a27da3bac85c3c2ae6a3589ac..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/pybind11/include/pybind11/detail/descr.h +++ /dev/null @@ -1,100 +0,0 @@ -/* - pybind11/detail/descr.h: Helper type for concatenating type signatures at compile time - - Copyright (c) 2016 Wenzel Jakob - - All rights reserved. Use of this source code is governed by a - BSD-style license that can be found in the LICENSE file. -*/ - -#pragma once - -#include "common.h" - -PYBIND11_NAMESPACE_BEGIN(PYBIND11_NAMESPACE) -PYBIND11_NAMESPACE_BEGIN(detail) - -#if !defined(_MSC_VER) -# define PYBIND11_DESCR_CONSTEXPR static constexpr -#else -# define PYBIND11_DESCR_CONSTEXPR const -#endif - -/* Concatenate type signatures at compile time */ -template -struct descr { - char text[N + 1]; - - constexpr descr() : text{'\0'} { } - constexpr descr(char const (&s)[N+1]) : descr(s, make_index_sequence()) { } - - template - constexpr descr(char const (&s)[N+1], index_sequence) : text{s[Is]..., '\0'} { } - - template - constexpr descr(char c, Chars... 
cs) : text{c, static_cast(cs)..., '\0'} { } - - static constexpr std::array types() { - return {{&typeid(Ts)..., nullptr}}; - } -}; - -template -constexpr descr plus_impl(const descr &a, const descr &b, - index_sequence, index_sequence) { - return {a.text[Is1]..., b.text[Is2]...}; -} - -template -constexpr descr operator+(const descr &a, const descr &b) { - return plus_impl(a, b, make_index_sequence(), make_index_sequence()); -} - -template -constexpr descr _(char const(&text)[N]) { return descr(text); } -constexpr descr<0> _(char const(&)[1]) { return {}; } - -template struct int_to_str : int_to_str { }; -template struct int_to_str<0, Digits...> { - static constexpr auto digits = descr(('0' + Digits)...); -}; - -// Ternary description (like std::conditional) -template -constexpr enable_if_t> _(char const(&text1)[N1], char const(&)[N2]) { - return _(text1); -} -template -constexpr enable_if_t> _(char const(&)[N1], char const(&text2)[N2]) { - return _(text2); -} - -template -constexpr enable_if_t _(const T1 &d, const T2 &) { return d; } -template -constexpr enable_if_t _(const T1 &, const T2 &d) { return d; } - -template auto constexpr _() -> decltype(int_to_str::digits) { - return int_to_str::digits; -} - -template constexpr descr<1, Type> _() { return {'%'}; } - -constexpr descr<0> concat() { return {}; } - -template -constexpr descr concat(const descr &descr) { return descr; } - -template -constexpr auto concat(const descr &d, const Args &...args) - -> decltype(std::declval>() + concat(args...)) { - return d + _(", ") + concat(args...); -} - -template -constexpr descr type_descr(const descr &descr) { - return _("{") + descr + _("}"); -} - -PYBIND11_NAMESPACE_END(detail) -PYBIND11_NAMESPACE_END(PYBIND11_NAMESPACE) diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/iter_swap.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/iter_swap.h deleted file mode 100644 index d9da52a6274c151e8602f41b72a0a9dafed13c26..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/iter_swap.h +++ /dev/null @@ -1,44 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -// the purpose of this header is to #include the iter_swap.h header -// of the sequential, host, and device systems. It should be #included in any -// code which uses adl to dispatch iter_swap - -#include - -// SCons can't see through the #defines below to figure out what this header -// includes, so we fake it out by specifying all possible files we might end up -// including inside an #if 0. 
-#if 0 -#include -#include -#include -#include -#endif - -#define __THRUST_HOST_SYSTEM_ITER_SWAP_HEADER <__THRUST_HOST_SYSTEM_ROOT/detail/iter_swap.h> -#include __THRUST_HOST_SYSTEM_ITER_SWAP_HEADER -#undef __THRUST_HOST_SYSTEM_ITER_SWAP_HEADER - -#define __THRUST_DEVICE_SYSTEM_ITER_SWAP_HEADER <__THRUST_DEVICE_SYSTEM_ROOT/detail/iter_swap.h> -#include __THRUST_DEVICE_SYSTEM_ITER_SWAP_HEADER -#undef __THRUST_DEVICE_SYSTEM_ITER_SWAP_HEADER - diff --git a/spaces/CVPR/SPOTER_Sign_Language_Recognition/spoter_mod/normalization/main.py b/spaces/CVPR/SPOTER_Sign_Language_Recognition/spoter_mod/normalization/main.py deleted file mode 100644 index f56becc972000202a0e794e2b427072f21d15052..0000000000000000000000000000000000000000 --- a/spaces/CVPR/SPOTER_Sign_Language_Recognition/spoter_mod/normalization/main.py +++ /dev/null @@ -1,43 +0,0 @@ - -import ast -import pandas as pd - -from normalization.hand_normalization import normalize_hands_full -from normalization.body_normalization import normalize_body_full - - -# Load the dataset -df = pd.read_csv("/Users/matyasbohacek/Documents/WLASL_test_15fps.csv", encoding="utf-8") - -# Retrieve metadata -video_size_heights = df["video_size_height"].to_list() -video_size_widths = df["video_size_width"].to_list() - -# Delete redundant (non-related) properties -del df["video_size_height"] -del df["video_size_width"] - -# Temporarily remove other relevant metadata -labels = df["labels"].to_list() -video_fps = df["video_fps"].to_list() -del df["labels"] -del df["video_fps"] - -# Convert the strings into lists -convert = lambda x: ast.literal_eval(str(x)) -for column in df.columns: - df[column] = df[column].apply(convert) - -# Perform the normalizations -df = normalize_hands_full(df) -df, invalid_row_indexes = normalize_body_full(df) - -# Clear lists of items from deleted rows -# labels = [t for i, t in enumerate(labels) if i not in invalid_row_indexes] -# video_fps = [t for i, t in enumerate(video_fps) if i not in invalid_row_indexes] - -# Return the metadata back to the dataset -df["labels"] = labels -df["video_fps"] = video_fps - -df.to_csv("/Users/matyasbohacek/Desktop/WLASL_test_15fps_normalized.csv", encoding="utf-8", index=False) diff --git a/spaces/CVPR/lama-example/models/ade20k/segm_lib/utils/th.py b/spaces/CVPR/lama-example/models/ade20k/segm_lib/utils/th.py deleted file mode 100644 index ca6ef9385e3b5c0a439579d3fd7aa73b5dc62758..0000000000000000000000000000000000000000 --- a/spaces/CVPR/lama-example/models/ade20k/segm_lib/utils/th.py +++ /dev/null @@ -1,41 +0,0 @@ -import torch -from torch.autograd import Variable -import numpy as np -import collections - -__all__ = ['as_variable', 'as_numpy', 'mark_volatile'] - -def as_variable(obj): - if isinstance(obj, Variable): - return obj - if isinstance(obj, collections.Sequence): - return [as_variable(v) for v in obj] - elif isinstance(obj, collections.Mapping): - return {k: as_variable(v) for k, v in obj.items()} - else: - return Variable(obj) - -def as_numpy(obj): - if isinstance(obj, collections.Sequence): - return [as_numpy(v) for v in obj] - elif isinstance(obj, collections.Mapping): - return {k: as_numpy(v) for k, v in obj.items()} - elif isinstance(obj, Variable): - return obj.data.cpu().numpy() - elif torch.is_tensor(obj): - return obj.cpu().numpy() - else: - return np.array(obj) - -def mark_volatile(obj): - if torch.is_tensor(obj): - obj = Variable(obj) - if isinstance(obj, Variable): - obj.no_grad = True - return obj - elif isinstance(obj, collections.Mapping): - return {k: mark_volatile(o) for k, o 
in obj.items()} - elif isinstance(obj, collections.Sequence): - return [mark_volatile(o) for o in obj] - else: - return obj diff --git "a/spaces/CikeyQI/Yunzai/Yunzai/plugins/example/\350\277\233\347\276\244\351\200\200\347\276\244\351\200\232\347\237\245.js" "b/spaces/CikeyQI/Yunzai/Yunzai/plugins/example/\350\277\233\347\276\244\351\200\200\347\276\244\351\200\232\347\237\245.js" deleted file mode 100644 index e80ec3041a27cb990620efc792d804dfa175df48..0000000000000000000000000000000000000000 --- "a/spaces/CikeyQI/Yunzai/Yunzai/plugins/example/\350\277\233\347\276\244\351\200\200\347\276\244\351\200\232\347\237\245.js" +++ /dev/null @@ -1,64 +0,0 @@ -import plugin from '../../lib/plugins/plugin.js' -export class newcomer extends plugin { - constructor() { - super({ - name: '欢迎新人', - dsc: '新人入群欢迎', - /** https://oicqjs.github.io/oicq/#events */ - event: 'notice.group.increase', - priority: 5000 - }) - } - - /** 接受到消息都会执行一次 */ - async accept() { - if (this.e.user_id == this.e.self_id) return - - /** 定义入群欢迎内容 */ - let msg = '欢迎新人!' - /** 冷却cd 30s */ - let cd = 30 - - /** cd */ - let key = `Yz:newcomers:${this.e.group_id}` - if (await redis.get(key)) return - redis.set(key, '1', { EX: cd }) - - /** 回复 */ - await this.reply([ - segment.at(this.e.user_id), - // segment.image(), - msg - ]) - } -} - -export class outNotice extends plugin { - constructor() { - super({ - name: '退群通知', - dsc: 'xx退群了', - event: 'notice.group.decrease' - }) - - /** 退群提示词 */ - this.tips = '退群了' - } - - async accept() { - if (this.e.user_id == this.e.self_id) return - - let name, msg - if (this.e.member) { - name = this.e.member.card || this.e.member.nickname - } - - if (name) { - msg = `${name}(${this.e.user_id}) ${this.tips}` - } else { - msg = `${this.e.user_id} ${this.tips}` - } - logger.mark(`[退出通知]${this.e.logText} ${msg}`) - await this.reply(msg) - } -} \ No newline at end of file diff --git a/spaces/Cippppy/RegressionVisualization/app.py b/spaces/Cippppy/RegressionVisualization/app.py deleted file mode 100644 index 346449baa8c8a9a07b20a3e079eed134b1cbb276..0000000000000000000000000000000000000000 --- a/spaces/Cippppy/RegressionVisualization/app.py +++ /dev/null @@ -1,271 +0,0 @@ -## CHOOSE BETWEEN ALTAIR & MATPLOTLIB - -import gradio as gr -import altair as alt -import numpy as np -import pandas as pd -import matplotlib.pyplot as plt -import time - -def make_plot(plot_type, a, epoch, progress=gr.Progress()): - if plot_type == "log": - return logReg(a=a, epoch=epoch, progress=progress) - elif plot_type == "lin": - return linReg(a=a,epoch=epoch, progress=progress) - - -# a = learning rate -# epoch = number of training iterations -def logReg(a, epoch, progress): - #### generate random data-set #### - progress(0.2, desc="Generating Data") - time.sleep(1) - #np.random.seed(0) # set random seed (optional) - - ## set mean and covariance of our datasets - mean1 = [20,35] - cov1 = [[100,100],[-100,100]] - mean2 = [60,70] - cov2 = [[100,100],[100,-100]] - - ## concatenate values to set x values for datasets - x1, x2 = np.random.multivariate_normal(mean1, cov1, 100).T - x_1, x_2 = np.random.multivariate_normal(mean2, cov2, 100).T - x1 = (np.concatenate((x1, x_1), axis=0))/10 - x2 = (np.concatenate((x2, x_2), axis=0))/10 - - ## set y values of datasets - y1 = np.zeros(100) # y[0:100] is zero dataset (dataset we want our decision boundary to be above) - y2 = np.ones(100) # y[101:200] is one dataset (dataset we want our decision boundary to be below) - y = np.concatenate((y1, y2), axis=0) # combine datasets into one term - - 
w = np.matrix([(np.random.rand())/100,(np.random.rand())+0.0001/100]) # begin weights at random starting point - b = np.matrix([np.random.rand()]) # begin bias term at random starting point - wb = np.concatenate((b, w), axis=1) # combine w and b into one weight term - print('f = b + x1*w1 + x2*w2') - print('Starting weights:', 'f = ', wb[0,0],'+ x1', wb[0,1], '+ x2' , wb[0,2]) - - loss = np.empty([epoch]) # term to store all loss terms for plotting - iterat = np.empty([epoch]) # term to store all epoch numbers to be plotted vs loss - for n in range (epoch): - iterat[n] = n - - progress(0.5, desc="Finding Loss & Regression") - time.sleep(1.5) - - for p in range (epoch): - L, J = np.matrix([[0.0, 0.0, 0.0]]), 0.0 # reset gradient (∂J(w)/∂w) and loss for each epoch - #### Code the equations to solve for the loss and to update - #### the weights and biases for each epoch below. - - #### Hint: you will need to use the for loop below to create a summation to solve - #### for wb and J (loss) for each epoch. xj has been given as a starting point. - for i in range(len(x1)): - xj = np.matrix([1,x1[i],x2[i]]) - - # y_hat = (y_hat or h_w(x) expression) - y_hat = 1 / (1 + np.exp(-(wb * xj.T))) - # J = (cost function, also referred to as L) - J = -((y[i] * np.log(y_hat)) + ((1 - y[i])*np.log(1 - y_hat))) - # d_J = (∂J(w)/∂w function, equation can be solved with information on slide 27) - d_J = ((y_hat) - y[i]) * xj - # wb = (weight updating equation) - wb = wb - a * (d_J) - - loss[p] = J - if ((p % 100) == 0): - print('loss:', J,' Gradient (∂J(w)/∂w) [[b, w1, w2]]:',L[0]) - print('Updated weights:', 'f = ', wb[0,0],'+ x1', wb[0,1], '+ x2' , wb[0,2]) - equation = "f = {w1} + {w2}x1 + {w3}x2".format(w1 = wb[0,0], w2 = wb[0,1], w3 = wb[0,2]) - -## Plot decision boundary and data - - progress(0.8, desc="Plotting Data") - time.sleep(1.5) - - scatterData1 = pd.DataFrame({'x': x1[1:100], - 'y': x2[1:100]}) - scatterFig1 = alt.Chart(scatterData1).mark_point().encode( - x='x:Q', - y='y:Q' - ).properties( - title="Decision Boundary" - ) - scatterData2 = pd.DataFrame({'x': x1[101:200], - 'y': x2[101:200]}) - scatterFig2 = alt.Chart(scatterData2).mark_point(color='green').encode( - x='x:Q', - y='y:Q', - ).properties( - title="Decision Boundary" - ) - - y2 = np.array(np.array(-(x1*wb[0,1] + wb[0,0])/wb[0,2],dtype=float)) - - trendLine = pd.DataFrame({'x': x1.flatten(), - 'y': y2.flatten() }) - trendLineFig = alt.Chart(trendLine).mark_line().encode( - x='x:Q', - y='y:Q' - ).properties( - title="Decision Boundary" - ) - - finalFig = scatterFig1 + scatterFig2 + trendLineFig - - lossData = pd.DataFrame({'Number of Iterations': iterat[100:], - 'Loss Value': loss[100:] }) - lossFig = alt.Chart(lossData).mark_line().encode( - x='Number of Iterations:Q', - y='Loss Value:Q' - ).properties( - title='Plot of loss values over number of iterations' - ) - - plt.figure() - plt.plot(x1[1:100],x2[1:100],'x', x1[101:200], x2[101:200],'x') # plot random data points - plt.plot(x1, -(x1*wb[0,1] + wb[0,0])/wb[0,2] , linestyle = 'solid') # plot decision boundary - plt.axis('equal') - plt.xlabel('x1') - plt.ylabel('x2') - plt.title('Decision Boundary') - plt.savefig("plt1.png") - - ## Plot training loss v epoch - plt.figure() - plt.plot(iterat[100:],loss[100:],'x') - plt.xlabel('Epoch') - plt.ylabel('Loss') - plt.title('Training Loss v Epoch') - plt.savefig("plt2.png") - - return [finalFig.interactive(),lossFig.interactive(),"plt1.png","plt2.png",str(loss[len(loss)-1]),str(equation)] - -# a = learning rate step size -# epoch = number of 
training iterations -def linReg(a, epoch, progress): - # generate random data-set - progress(0.2, desc="Generating Data") - time.sleep(1) - # np.random.seed(0) # choose random seed (optional) - x = np.random.rand(100, 1) - y = 2 + 3 * x + np.random.rand(100, 1) - - # J = 0 # initialize J, this can be deleted once J is defined in the loop - w = np.matrix([np.random.rand(),np.random.rand()]) # slope and y-intercept - ite = epoch # number of training iterations - - jList = [] - numIte = [] - - # Write Linear Regression Code to Solve for w (slope and y-intercept) Here ## - progress(0.5, desc="Finding Loss & Regression") - time.sleep(1.5) - - for p in range (ite): - for i in range(len(x)): - # Calculate w and J here - x_vec = np.matrix([x[i][0],1]) # Option 1 | Setting up a vector for x (x_vec[j] corresponds to w[j]) - h = w * x_vec.T ## Hint: you may need to transpose x or w by adding .T to the end of the variable - w = w - a * (h - y[i]) * x_vec - J = (1/2) * (((h - y[i])) ** 2) - J = J.item() - - jList.append(J) - numIte.append(p) - print('Loss:', J) - - ## if done correctly the line should be in line with the data points ## - - print('f = ', w[0,0],'x + ', w[0,1]) - equation = "f = {w1}x + {w2}".format(w1 = w[0,0], w2 = w[0,1]) - - progress(0.8, desc="Plotting Data") - time.sleep(1.5) - y2 = np.array(np.array((w[0,1]+(w[0,0] * x)),dtype=float)).T - - scatterData = pd.DataFrame({'x': x.flatten(), - 'y': y.flatten()}) - scatterFig = alt.Chart(scatterData).mark_point().encode( - x='x:Q', - y='y:Q' - ).properties( - title='Plot of random data values with linear regression line' - ) - - trendLine = pd.DataFrame({'x': x.flatten(), - 'y': y2.flatten() }) - trendLineFig = alt.Chart(trendLine).mark_line().encode( - x='x:Q', - y='y:Q' - ) - - finalFig = scatterFig + trendLineFig - - lossData = pd.DataFrame({'Number of Iterations': range(1,len(jList)+1), - 'Loss Value': jList }) - lossFig = alt.Chart(lossData).mark_line().encode( - x='Number of Iterations:Q', - y='Loss Value:Q' - ).properties( - title='Plot of loss values over number of iterations' - ) - - # plot - plt.figure(1) - plt.scatter(x,y,s=ite) - plt.plot(x, w[0,1] + (w[0,0] * x), linestyle='solid') - plt.xlabel('x') - plt.ylabel('y') - plt.title('Plot of random data values with linear regression line') - plt.savefig("plt1.png") - - plt.figure(2) - plt.plot(jList) - plt.xlabel('Number of Iterations') - plt.ylabel('Loss Value') - plt.title('Plot of loss values over number of iterations') - plt.savefig("plt2.png") - - return [finalFig.interactive(),lossFig.interactive(),"plt1.png","plt2.png",str(jList[len(jList)-1]),str(equation)] - -with gr.Blocks(title="Regression Visualization") as demo: - gr.Markdown( - """ - # Regression Visualization for Machine Learning - Choose your variables below to create a linear or logistic regression model! 
- """) - with gr.Row(): - pack = gr.Radio(label="Plot Package",info="Choose 'MatPlot' for MatPlotLib, Choose 'Altair' for Altair", - choices=['MatPlot','Altair'], value='Altair') - bType = gr.Radio(label="Regression Type",info="Choose 'log' for logistic, Choose 'lin' for linear", - choices=['log','lin'], value='log') - l_rate = gr.Number(value=0.01,label="Learning Rate",info="Enter a value in the range 0.0 - 1.0") - epochs = gr.Number(value=100,label="Number of Epochs (Number of Training Iterations)",info="Enter an integer larger than 0",precision=0) - bStart = gr.Button(label="Start") - with gr.Row() as alt_row: - altPlot1 = gr.Plot() - altPlot2 = gr.Plot() - with gr.Row(visible=False) as mat_row: - matPlot1 = gr.Image(type='filepath',label="Regression Graph",height=600,width=600) - matPlot2 = gr.Image(type='filepath',label="Regression Graph",height=600,width=600) - loss = gr.Textbox(label="Final Loss Value") - equ = gr.Textbox(label="Equation for Plotted Line") - def changeComp(package): - if package == "Altair": - return { - alt_row: gr.Row.update(visible=True), - mat_row: gr.Row.update(visible=False) - } - else: - return { - alt_row: gr.Row.update(visible=False), - mat_row: gr.Row.update(visible=True) - } - - pack.input(changeComp, show_progress=True, inputs=[pack], outputs=[alt_row, mat_row]) - bStart.click(make_plot, show_progress=True, inputs=[bType,l_rate,epochs], outputs=[altPlot1,altPlot2, matPlot1, matPlot2, loss, equ]) - demo.load() - -if __name__== "__main__" : - demo.queue().launch() \ No newline at end of file diff --git a/spaces/ClearLove443/Robby-chatbot/README.md b/spaces/ClearLove443/Robby-chatbot/README.md deleted file mode 100644 index 956055a94f999f81f26356e764d8a25e1bf4e38d..0000000000000000000000000000000000000000 --- a/spaces/ClearLove443/Robby-chatbot/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Robby Chatbot -emoji: 🌍 -colorFrom: pink -colorTo: blue -sdk: streamlit -sdk_version: 1.25.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Covert1107/sd-diffusers-webui/README.md b/spaces/Covert1107/sd-diffusers-webui/README.md deleted file mode 100644 index aba04433b7d51c3e29848dcf22487da1fe814c1d..0000000000000000000000000000000000000000 --- a/spaces/Covert1107/sd-diffusers-webui/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Sd Diffusers Webui -emoji: 🐳 -colorFrom: purple -colorTo: gray -sdk: docker -sdk_version: 3.9 -pinned: false -license: openrail -app_port: 7860 -duplicated_from: nyanko7/sd-diffusers-webui ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/FitsImagePlugin.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/FitsImagePlugin.py deleted file mode 100644 index 1359aeb1282ee78e38f40fc25b4a50b621db4043..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/FitsImagePlugin.py +++ /dev/null @@ -1,73 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# FITS file handling -# -# Copyright (c) 1998-2003 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - -import math - -from . 
import Image, ImageFile - - -def _accept(prefix): - return prefix[:6] == b"SIMPLE" - - -class FitsImageFile(ImageFile.ImageFile): - format = "FITS" - format_description = "FITS" - - def _open(self): - headers = {} - while True: - header = self.fp.read(80) - if not header: - msg = "Truncated FITS file" - raise OSError(msg) - keyword = header[:8].strip() - if keyword == b"END": - break - value = header[8:].split(b"/")[0].strip() - if value.startswith(b"="): - value = value[1:].strip() - if not headers and (not _accept(keyword) or value != b"T"): - msg = "Not a FITS file" - raise SyntaxError(msg) - headers[keyword] = value - - naxis = int(headers[b"NAXIS"]) - if naxis == 0: - msg = "No image data" - raise ValueError(msg) - elif naxis == 1: - self._size = 1, int(headers[b"NAXIS1"]) - else: - self._size = int(headers[b"NAXIS1"]), int(headers[b"NAXIS2"]) - - number_of_bits = int(headers[b"BITPIX"]) - if number_of_bits == 8: - self.mode = "L" - elif number_of_bits == 16: - self.mode = "I" - # rawmode = "I;16S" - elif number_of_bits == 32: - self.mode = "I" - elif number_of_bits in (-32, -64): - self.mode = "F" - # rawmode = "F" if number_of_bits == -32 else "F;64F" - - offset = math.ceil(self.fp.tell() / 2880) * 2880 - self.tile = [("raw", (0, 0) + self.size, offset, (self.mode, 0, -1))] - - -# -------------------------------------------------------------------- -# Registry - -Image.register_open(FitsImageFile.format, FitsImageFile, _accept) - -Image.register_extensions(FitsImageFile.format, [".fit", ".fits"]) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/arrayTools.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/arrayTools.py deleted file mode 100644 index 5fb01a838ae8769809b4f8ab28cb69ea5e84a3dc..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/arrayTools.py +++ /dev/null @@ -1,422 +0,0 @@ -"""Routines for calculating bounding boxes, point in rectangle calculations and -so on. -""" - -from fontTools.misc.roundTools import otRound -from fontTools.misc.vector import Vector as _Vector -import math -import warnings - - -def calcBounds(array): - """Calculate the bounding rectangle of a 2D points array. - - Args: - array: A sequence of 2D tuples. - - Returns: - A four-item tuple representing the bounding rectangle ``(xMin, yMin, xMax, yMax)``. - """ - if not array: - return 0, 0, 0, 0 - xs = [x for x, y in array] - ys = [y for x, y in array] - return min(xs), min(ys), max(xs), max(ys) - - -def calcIntBounds(array, round=otRound): - """Calculate the integer bounding rectangle of a 2D points array. - - Values are rounded to closest integer towards ``+Infinity`` using the - :func:`fontTools.misc.fixedTools.otRound` function by default, unless - an optional ``round`` function is passed. - - Args: - array: A sequence of 2D tuples. - round: A rounding function of type ``f(x: float) -> int``. - - Returns: - A four-item tuple of integers representing the bounding rectangle: - ``(xMin, yMin, xMax, yMax)``. - """ - return tuple(round(v) for v in calcBounds(array)) - - -def updateBounds(bounds, p, min=min, max=max): - """Add a point to a bounding rectangle. - - Args: - bounds: A bounding rectangle expressed as a tuple - ``(xMin, yMin, xMax, yMax)``. - p: A 2D tuple representing a point. - min,max: functions to compute the minimum and maximum. - - Returns: - The updated bounding rectangle ``(xMin, yMin, xMax, yMax)``. 
- """ - (x, y) = p - xMin, yMin, xMax, yMax = bounds - return min(xMin, x), min(yMin, y), max(xMax, x), max(yMax, y) - - -def pointInRect(p, rect): - """Test if a point is inside a bounding rectangle. - - Args: - p: A 2D tuple representing a point. - rect: A bounding rectangle expressed as a tuple - ``(xMin, yMin, xMax, yMax)``. - - Returns: - ``True`` if the point is inside the rectangle, ``False`` otherwise. - """ - (x, y) = p - xMin, yMin, xMax, yMax = rect - return (xMin <= x <= xMax) and (yMin <= y <= yMax) - - -def pointsInRect(array, rect): - """Determine which points are inside a bounding rectangle. - - Args: - array: A sequence of 2D tuples. - rect: A bounding rectangle expressed as a tuple - ``(xMin, yMin, xMax, yMax)``. - - Returns: - A list containing the points inside the rectangle. - """ - if len(array) < 1: - return [] - xMin, yMin, xMax, yMax = rect - return [(xMin <= x <= xMax) and (yMin <= y <= yMax) for x, y in array] - - -def vectorLength(vector): - """Calculate the length of the given vector. - - Args: - vector: A 2D tuple. - - Returns: - The Euclidean length of the vector. - """ - x, y = vector - return math.sqrt(x**2 + y**2) - - -def asInt16(array): - """Round a list of floats to 16-bit signed integers. - - Args: - array: List of float values. - - Returns: - A list of rounded integers. - """ - return [int(math.floor(i + 0.5)) for i in array] - - -def normRect(rect): - """Normalize a bounding box rectangle. - - This function "turns the rectangle the right way up", so that the following - holds:: - - xMin <= xMax and yMin <= yMax - - Args: - rect: A bounding rectangle expressed as a tuple - ``(xMin, yMin, xMax, yMax)``. - - Returns: - A normalized bounding rectangle. - """ - (xMin, yMin, xMax, yMax) = rect - return min(xMin, xMax), min(yMin, yMax), max(xMin, xMax), max(yMin, yMax) - - -def scaleRect(rect, x, y): - """Scale a bounding box rectangle. - - Args: - rect: A bounding rectangle expressed as a tuple - ``(xMin, yMin, xMax, yMax)``. - x: Factor to scale the rectangle along the X axis. - Y: Factor to scale the rectangle along the Y axis. - - Returns: - A scaled bounding rectangle. - """ - (xMin, yMin, xMax, yMax) = rect - return xMin * x, yMin * y, xMax * x, yMax * y - - -def offsetRect(rect, dx, dy): - """Offset a bounding box rectangle. - - Args: - rect: A bounding rectangle expressed as a tuple - ``(xMin, yMin, xMax, yMax)``. - dx: Amount to offset the rectangle along the X axis. - dY: Amount to offset the rectangle along the Y axis. - - Returns: - An offset bounding rectangle. - """ - (xMin, yMin, xMax, yMax) = rect - return xMin + dx, yMin + dy, xMax + dx, yMax + dy - - -def insetRect(rect, dx, dy): - """Inset a bounding box rectangle on all sides. - - Args: - rect: A bounding rectangle expressed as a tuple - ``(xMin, yMin, xMax, yMax)``. - dx: Amount to inset the rectangle along the X axis. - dY: Amount to inset the rectangle along the Y axis. - - Returns: - An inset bounding rectangle. - """ - (xMin, yMin, xMax, yMax) = rect - return xMin + dx, yMin + dy, xMax - dx, yMax - dy - - -def sectRect(rect1, rect2): - """Test for rectangle-rectangle intersection. - - Args: - rect1: First bounding rectangle, expressed as tuples - ``(xMin, yMin, xMax, yMax)``. - rect2: Second bounding rectangle. - - Returns: - A boolean and a rectangle. - If the input rectangles intersect, returns ``True`` and the intersecting - rectangle. Returns ``False`` and ``(0, 0, 0, 0)`` if the input - rectangles don't intersect. 
- """ - (xMin1, yMin1, xMax1, yMax1) = rect1 - (xMin2, yMin2, xMax2, yMax2) = rect2 - xMin, yMin, xMax, yMax = ( - max(xMin1, xMin2), - max(yMin1, yMin2), - min(xMax1, xMax2), - min(yMax1, yMax2), - ) - if xMin >= xMax or yMin >= yMax: - return False, (0, 0, 0, 0) - return True, (xMin, yMin, xMax, yMax) - - -def unionRect(rect1, rect2): - """Determine union of bounding rectangles. - - Args: - rect1: First bounding rectangle, expressed as tuples - ``(xMin, yMin, xMax, yMax)``. - rect2: Second bounding rectangle. - - Returns: - The smallest rectangle in which both input rectangles are fully - enclosed. - """ - (xMin1, yMin1, xMax1, yMax1) = rect1 - (xMin2, yMin2, xMax2, yMax2) = rect2 - xMin, yMin, xMax, yMax = ( - min(xMin1, xMin2), - min(yMin1, yMin2), - max(xMax1, xMax2), - max(yMax1, yMax2), - ) - return (xMin, yMin, xMax, yMax) - - -def rectCenter(rect): - """Determine rectangle center. - - Args: - rect: Bounding rectangle, expressed as tuples - ``(xMin, yMin, xMax, yMax)``. - - Returns: - A 2D tuple representing the point at the center of the rectangle. - """ - (xMin, yMin, xMax, yMax) = rect - return (xMin + xMax) / 2, (yMin + yMax) / 2 - - -def rectArea(rect): - """Determine rectangle area. - - Args: - rect: Bounding rectangle, expressed as tuples - ``(xMin, yMin, xMax, yMax)``. - - Returns: - The area of the rectangle. - """ - (xMin, yMin, xMax, yMax) = rect - return (yMax - yMin) * (xMax - xMin) - - -def intRect(rect): - """Round a rectangle to integer values. - - Guarantees that the resulting rectangle is NOT smaller than the original. - - Args: - rect: Bounding rectangle, expressed as tuples - ``(xMin, yMin, xMax, yMax)``. - - Returns: - A rounded bounding rectangle. - """ - (xMin, yMin, xMax, yMax) = rect - xMin = int(math.floor(xMin)) - yMin = int(math.floor(yMin)) - xMax = int(math.ceil(xMax)) - yMax = int(math.ceil(yMax)) - return (xMin, yMin, xMax, yMax) - - -def quantizeRect(rect, factor=1): - """ - >>> bounds = (72.3, -218.4, 1201.3, 919.1) - >>> quantizeRect(bounds) - (72, -219, 1202, 920) - >>> quantizeRect(bounds, factor=10) - (70, -220, 1210, 920) - >>> quantizeRect(bounds, factor=100) - (0, -300, 1300, 1000) - """ - if factor < 1: - raise ValueError(f"Expected quantization factor >= 1, found: {factor!r}") - xMin, yMin, xMax, yMax = normRect(rect) - return ( - int(math.floor(xMin / factor) * factor), - int(math.floor(yMin / factor) * factor), - int(math.ceil(xMax / factor) * factor), - int(math.ceil(yMax / factor) * factor), - ) - - -class Vector(_Vector): - def __init__(self, *args, **kwargs): - warnings.warn( - "fontTools.misc.arrayTools.Vector has been deprecated, please use " - "fontTools.misc.vector.Vector instead.", - DeprecationWarning, - ) - - -def pairwise(iterable, reverse=False): - """Iterate over current and next items in iterable. - - Args: - iterable: An iterable - reverse: If true, iterate in reverse order. - - Returns: - A iterable yielding two elements per iteration. 
- - Example: - - >>> tuple(pairwise([])) - () - >>> tuple(pairwise([], reverse=True)) - () - >>> tuple(pairwise([0])) - ((0, 0),) - >>> tuple(pairwise([0], reverse=True)) - ((0, 0),) - >>> tuple(pairwise([0, 1])) - ((0, 1), (1, 0)) - >>> tuple(pairwise([0, 1], reverse=True)) - ((1, 0), (0, 1)) - >>> tuple(pairwise([0, 1, 2])) - ((0, 1), (1, 2), (2, 0)) - >>> tuple(pairwise([0, 1, 2], reverse=True)) - ((2, 1), (1, 0), (0, 2)) - >>> tuple(pairwise(['a', 'b', 'c', 'd'])) - (('a', 'b'), ('b', 'c'), ('c', 'd'), ('d', 'a')) - >>> tuple(pairwise(['a', 'b', 'c', 'd'], reverse=True)) - (('d', 'c'), ('c', 'b'), ('b', 'a'), ('a', 'd')) - """ - if not iterable: - return - if reverse: - it = reversed(iterable) - else: - it = iter(iterable) - first = next(it, None) - a = first - for b in it: - yield (a, b) - a = b - yield (a, first) - - -def _test(): - """ - >>> import math - >>> calcBounds([]) - (0, 0, 0, 0) - >>> calcBounds([(0, 40), (0, 100), (50, 50), (80, 10)]) - (0, 10, 80, 100) - >>> updateBounds((0, 0, 0, 0), (100, 100)) - (0, 0, 100, 100) - >>> pointInRect((50, 50), (0, 0, 100, 100)) - True - >>> pointInRect((0, 0), (0, 0, 100, 100)) - True - >>> pointInRect((100, 100), (0, 0, 100, 100)) - True - >>> not pointInRect((101, 100), (0, 0, 100, 100)) - True - >>> list(pointsInRect([(50, 50), (0, 0), (100, 100), (101, 100)], (0, 0, 100, 100))) - [True, True, True, False] - >>> vectorLength((3, 4)) - 5.0 - >>> vectorLength((1, 1)) == math.sqrt(2) - True - >>> list(asInt16([0, 0.1, 0.5, 0.9])) - [0, 0, 1, 1] - >>> normRect((0, 10, 100, 200)) - (0, 10, 100, 200) - >>> normRect((100, 200, 0, 10)) - (0, 10, 100, 200) - >>> scaleRect((10, 20, 50, 150), 1.5, 2) - (15.0, 40, 75.0, 300) - >>> offsetRect((10, 20, 30, 40), 5, 6) - (15, 26, 35, 46) - >>> insetRect((10, 20, 50, 60), 5, 10) - (15, 30, 45, 50) - >>> insetRect((10, 20, 50, 60), -5, -10) - (5, 10, 55, 70) - >>> intersects, rect = sectRect((0, 10, 20, 30), (0, 40, 20, 50)) - >>> not intersects - True - >>> intersects, rect = sectRect((0, 10, 20, 30), (5, 20, 35, 50)) - >>> intersects - 1 - >>> rect - (5, 20, 20, 30) - >>> unionRect((0, 10, 20, 30), (0, 40, 20, 50)) - (0, 10, 20, 50) - >>> rectCenter((0, 0, 100, 200)) - (50.0, 100.0) - >>> rectCenter((0, 0, 100, 199.0)) - (50.0, 99.5) - >>> intRect((0.9, 2.9, 3.1, 4.1)) - (0, 2, 4, 5) - """ - - -if __name__ == "__main__": - import sys - import doctest - - sys.exit(doctest.testmod().failed) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/qu2cu/qu2cu.c b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/qu2cu/qu2cu.c deleted file mode 100644 index 16f262624e082735e6d8c2c6dcb008a785f9ebb6..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/qu2cu/qu2cu.c +++ /dev/null @@ -1,13186 +0,0 @@ -/* Generated by Cython 0.29.36 */ - -/* BEGIN: Cython Metadata -{ - "distutils": { - "define_macros": [ - [ - "CYTHON_TRACE_NOGIL", - "1" - ] - ], - "name": "fontTools.qu2cu.qu2cu", - "sources": [ - "Lib/fontTools/qu2cu/qu2cu.py" - ] - }, - "module_name": "fontTools.qu2cu.qu2cu" -} -END: Cython Metadata */ - -#ifndef PY_SSIZE_T_CLEAN -#define PY_SSIZE_T_CLEAN -#endif /* PY_SSIZE_T_CLEAN */ -#include "Python.h" -#ifndef Py_PYTHON_H - #error Python headers needed to compile C extensions, please install development version of Python. -#elif PY_VERSION_HEX < 0x02060000 || (0x03000000 <= PY_VERSION_HEX && PY_VERSION_HEX < 0x03030000) - #error Cython requires Python 2.6+ or Python 3.3+. 
-#else -#define CYTHON_ABI "0_29_36" -#define CYTHON_HEX_VERSION 0x001D24F0 -#define CYTHON_FUTURE_DIVISION 1 -#include -#ifndef offsetof - #define offsetof(type, member) ( (size_t) & ((type*)0) -> member ) -#endif -#if !defined(WIN32) && !defined(MS_WINDOWS) - #ifndef __stdcall - #define __stdcall - #endif - #ifndef __cdecl - #define __cdecl - #endif - #ifndef __fastcall - #define __fastcall - #endif -#endif -#ifndef DL_IMPORT - #define DL_IMPORT(t) t -#endif -#ifndef DL_EXPORT - #define DL_EXPORT(t) t -#endif -#define __PYX_COMMA , -#ifndef HAVE_LONG_LONG - #if PY_VERSION_HEX >= 0x02070000 - #define HAVE_LONG_LONG - #endif -#endif -#ifndef PY_LONG_LONG - #define PY_LONG_LONG LONG_LONG -#endif -#ifndef Py_HUGE_VAL - #define Py_HUGE_VAL HUGE_VAL -#endif -#ifdef PYPY_VERSION - #define CYTHON_COMPILING_IN_PYPY 1 - #define CYTHON_COMPILING_IN_PYSTON 0 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #define CYTHON_COMPILING_IN_NOGIL 0 - #undef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 0 - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #if PY_VERSION_HEX < 0x03050000 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #elif !defined(CYTHON_USE_ASYNC_SLOTS) - #define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #undef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 0 - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #undef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 1 - #undef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 0 - #undef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 0 - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #if PY_VERSION_HEX < 0x03090000 - #undef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 0 - #elif !defined(CYTHON_PEP489_MULTI_PHASE_INIT) - #define CYTHON_PEP489_MULTI_PHASE_INIT 1 - #endif - #undef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE (PY_VERSION_HEX >= 0x030400a1 && PYPY_VERSION_NUM >= 0x07030C00) - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 - #ifndef CYTHON_UPDATE_DESCRIPTOR_DOC - #define CYTHON_UPDATE_DESCRIPTOR_DOC 0 - #endif -#elif defined(PYSTON_VERSION) - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_PYSTON 1 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #define CYTHON_COMPILING_IN_NOGIL 0 - #ifndef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 1 - #endif - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #ifndef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 1 - #endif - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #ifndef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 0 - #endif - #ifndef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 1 - #endif - #ifndef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 1 - #endif - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_PYCALL - 
#define CYTHON_FAST_PYCALL 0 - #undef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 0 - #undef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 0 - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 - #ifndef CYTHON_UPDATE_DESCRIPTOR_DOC - #define CYTHON_UPDATE_DESCRIPTOR_DOC 0 - #endif -#elif defined(PY_NOGIL) - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_PYSTON 0 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #define CYTHON_COMPILING_IN_NOGIL 1 - #ifndef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 1 - #endif - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #ifndef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #ifndef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 1 - #endif - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #ifndef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 0 - #endif - #ifndef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 1 - #endif - #ifndef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 1 - #endif - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #ifndef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 1 - #endif - #ifndef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 1 - #endif - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 -#else - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_PYSTON 0 - #define CYTHON_COMPILING_IN_CPYTHON 1 - #define CYTHON_COMPILING_IN_NOGIL 0 - #ifndef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 1 - #endif - #if PY_VERSION_HEX < 0x02070000 - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #elif !defined(CYTHON_USE_PYTYPE_LOOKUP) - #define CYTHON_USE_PYTYPE_LOOKUP 1 - #endif - #if PY_MAJOR_VERSION < 3 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #elif !defined(CYTHON_USE_ASYNC_SLOTS) - #define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #if PY_VERSION_HEX < 0x02070000 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #elif !defined(CYTHON_USE_PYLONG_INTERNALS) - #define CYTHON_USE_PYLONG_INTERNALS (PY_VERSION_HEX < 0x030C00A5) - #endif - #ifndef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 1 - #endif - #ifndef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 1 - #endif - #if PY_VERSION_HEX < 0x030300F0 || PY_VERSION_HEX >= 0x030B00A2 - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #elif !defined(CYTHON_USE_UNICODE_WRITER) - #define CYTHON_USE_UNICODE_WRITER 1 - #endif - #ifndef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 0 - #endif - #ifndef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 1 - #endif - #ifndef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 1 - #endif - #if PY_VERSION_HEX >= 0x030B00A4 - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #elif !defined(CYTHON_FAST_THREAD_STATE) - #define CYTHON_FAST_THREAD_STATE 1 - #endif - #ifndef CYTHON_FAST_PYCALL - #define 
CYTHON_FAST_PYCALL (PY_VERSION_HEX < 0x030A0000) - #endif - #ifndef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT (PY_VERSION_HEX >= 0x03050000) - #endif - #ifndef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE (PY_VERSION_HEX >= 0x030400a1) - #endif - #ifndef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS ((PY_VERSION_HEX >= 0x030600B1) && (PY_VERSION_HEX < 0x030C00A5)) - #endif - #if PY_VERSION_HEX >= 0x030B00A4 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 - #elif !defined(CYTHON_USE_EXC_INFO_STACK) - #define CYTHON_USE_EXC_INFO_STACK (PY_VERSION_HEX >= 0x030700A3) - #endif - #ifndef CYTHON_UPDATE_DESCRIPTOR_DOC - #define CYTHON_UPDATE_DESCRIPTOR_DOC 1 - #endif -#endif -#if !defined(CYTHON_FAST_PYCCALL) -#define CYTHON_FAST_PYCCALL (CYTHON_FAST_PYCALL && PY_VERSION_HEX >= 0x030600B1) -#endif -#if CYTHON_USE_PYLONG_INTERNALS - #if PY_MAJOR_VERSION < 3 - #include "longintrepr.h" - #endif - #undef SHIFT - #undef BASE - #undef MASK - #ifdef SIZEOF_VOID_P - enum { __pyx_check_sizeof_voidp = 1 / (int)(SIZEOF_VOID_P == sizeof(void*)) }; - #endif -#endif -#ifndef __has_attribute - #define __has_attribute(x) 0 -#endif -#ifndef __has_cpp_attribute - #define __has_cpp_attribute(x) 0 -#endif -#ifndef CYTHON_RESTRICT - #if defined(__GNUC__) - #define CYTHON_RESTRICT __restrict__ - #elif defined(_MSC_VER) && _MSC_VER >= 1400 - #define CYTHON_RESTRICT __restrict - #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define CYTHON_RESTRICT restrict - #else - #define CYTHON_RESTRICT - #endif -#endif -#ifndef CYTHON_UNUSED -# if defined(__GNUC__) -# if !(defined(__cplusplus)) || (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4)) -# define CYTHON_UNUSED __attribute__ ((__unused__)) -# else -# define CYTHON_UNUSED -# endif -# elif defined(__ICC) || (defined(__INTEL_COMPILER) && !defined(_MSC_VER)) -# define CYTHON_UNUSED __attribute__ ((__unused__)) -# else -# define CYTHON_UNUSED -# endif -#endif -#ifndef CYTHON_MAYBE_UNUSED_VAR -# if defined(__cplusplus) - template void CYTHON_MAYBE_UNUSED_VAR( const T& ) { } -# else -# define CYTHON_MAYBE_UNUSED_VAR(x) (void)(x) -# endif -#endif -#ifndef CYTHON_NCP_UNUSED -# if CYTHON_COMPILING_IN_CPYTHON -# define CYTHON_NCP_UNUSED -# else -# define CYTHON_NCP_UNUSED CYTHON_UNUSED -# endif -#endif -#define __Pyx_void_to_None(void_result) ((void)(void_result), Py_INCREF(Py_None), Py_None) -#ifdef _MSC_VER - #ifndef _MSC_STDINT_H_ - #if _MSC_VER < 1300 - typedef unsigned char uint8_t; - typedef unsigned int uint32_t; - #else - typedef unsigned __int8 uint8_t; - typedef unsigned __int32 uint32_t; - #endif - #endif -#else - #include -#endif -#ifndef CYTHON_FALLTHROUGH - #if defined(__cplusplus) && __cplusplus >= 201103L - #if __has_cpp_attribute(fallthrough) - #define CYTHON_FALLTHROUGH [[fallthrough]] - #elif __has_cpp_attribute(clang::fallthrough) - #define CYTHON_FALLTHROUGH [[clang::fallthrough]] - #elif __has_cpp_attribute(gnu::fallthrough) - #define CYTHON_FALLTHROUGH [[gnu::fallthrough]] - #endif - #endif - #ifndef CYTHON_FALLTHROUGH - #if __has_attribute(fallthrough) - #define CYTHON_FALLTHROUGH __attribute__((fallthrough)) - #else - #define CYTHON_FALLTHROUGH - #endif - #endif - #if defined(__clang__ ) && defined(__apple_build_version__) - #if __apple_build_version__ < 7000000 - #undef CYTHON_FALLTHROUGH - #define CYTHON_FALLTHROUGH - #endif - #endif -#endif - -#ifndef CYTHON_INLINE - #if defined(__clang__) - #define CYTHON_INLINE __inline__ __attribute__ ((__unused__)) 
- #elif defined(__GNUC__) - #define CYTHON_INLINE __inline__ - #elif defined(_MSC_VER) - #define CYTHON_INLINE __inline - #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define CYTHON_INLINE inline - #else - #define CYTHON_INLINE - #endif -#endif - -#define __PYX_BUILD_PY_SSIZE_T "n" -#define CYTHON_FORMAT_SSIZE_T "z" -#if PY_MAJOR_VERSION < 3 - #define __Pyx_BUILTIN_MODULE_NAME "__builtin__" - #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ - PyCode_New(a+k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) - #define __Pyx_DefaultClassType PyClass_Type -#else - #define __Pyx_BUILTIN_MODULE_NAME "builtins" - #define __Pyx_DefaultClassType PyType_Type -#if PY_VERSION_HEX >= 0x030B00A1 - static CYTHON_INLINE PyCodeObject* __Pyx_PyCode_New(int a, int k, int l, int s, int f, - PyObject *code, PyObject *c, PyObject* n, PyObject *v, - PyObject *fv, PyObject *cell, PyObject* fn, - PyObject *name, int fline, PyObject *lnos) { - PyObject *kwds=NULL, *argcount=NULL, *posonlyargcount=NULL, *kwonlyargcount=NULL; - PyObject *nlocals=NULL, *stacksize=NULL, *flags=NULL, *replace=NULL, *call_result=NULL, *empty=NULL; - const char *fn_cstr=NULL; - const char *name_cstr=NULL; - PyCodeObject* co=NULL; - PyObject *type, *value, *traceback; - PyErr_Fetch(&type, &value, &traceback); - if (!(kwds=PyDict_New())) goto end; - if (!(argcount=PyLong_FromLong(a))) goto end; - if (PyDict_SetItemString(kwds, "co_argcount", argcount) != 0) goto end; - if (!(posonlyargcount=PyLong_FromLong(0))) goto end; - if (PyDict_SetItemString(kwds, "co_posonlyargcount", posonlyargcount) != 0) goto end; - if (!(kwonlyargcount=PyLong_FromLong(k))) goto end; - if (PyDict_SetItemString(kwds, "co_kwonlyargcount", kwonlyargcount) != 0) goto end; - if (!(nlocals=PyLong_FromLong(l))) goto end; - if (PyDict_SetItemString(kwds, "co_nlocals", nlocals) != 0) goto end; - if (!(stacksize=PyLong_FromLong(s))) goto end; - if (PyDict_SetItemString(kwds, "co_stacksize", stacksize) != 0) goto end; - if (!(flags=PyLong_FromLong(f))) goto end; - if (PyDict_SetItemString(kwds, "co_flags", flags) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_code", code) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_consts", c) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_names", n) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_varnames", v) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_freevars", fv) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_cellvars", cell) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_linetable", lnos) != 0) goto end; - if (!(fn_cstr=PyUnicode_AsUTF8AndSize(fn, NULL))) goto end; - if (!(name_cstr=PyUnicode_AsUTF8AndSize(name, NULL))) goto end; - if (!(co = PyCode_NewEmpty(fn_cstr, name_cstr, fline))) goto end; - if (!(replace = PyObject_GetAttrString((PyObject*)co, "replace"))) goto cleanup_code_too; - if (!(empty = PyTuple_New(0))) goto cleanup_code_too; // unfortunately __pyx_empty_tuple isn't available here - if (!(call_result = PyObject_Call(replace, empty, kwds))) goto cleanup_code_too; - Py_XDECREF((PyObject*)co); - co = (PyCodeObject*)call_result; - call_result = NULL; - if (0) { - cleanup_code_too: - Py_XDECREF((PyObject*)co); - co = NULL; - } - end: - Py_XDECREF(kwds); - Py_XDECREF(argcount); - Py_XDECREF(posonlyargcount); - Py_XDECREF(kwonlyargcount); - Py_XDECREF(nlocals); - Py_XDECREF(stacksize); - Py_XDECREF(replace); - Py_XDECREF(call_result); - Py_XDECREF(empty); - if (type) { - PyErr_Restore(type, value, traceback); - 
} - return co; - } -#else - #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ - PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) -#endif - #define __Pyx_DefaultClassType PyType_Type -#endif -#if PY_VERSION_HEX >= 0x030900F0 && !CYTHON_COMPILING_IN_PYPY - #define __Pyx_PyObject_GC_IsFinalized(o) PyObject_GC_IsFinalized(o) -#else - #define __Pyx_PyObject_GC_IsFinalized(o) _PyGC_FINALIZED(o) -#endif -#ifndef Py_TPFLAGS_CHECKTYPES - #define Py_TPFLAGS_CHECKTYPES 0 -#endif -#ifndef Py_TPFLAGS_HAVE_INDEX - #define Py_TPFLAGS_HAVE_INDEX 0 -#endif -#ifndef Py_TPFLAGS_HAVE_NEWBUFFER - #define Py_TPFLAGS_HAVE_NEWBUFFER 0 -#endif -#ifndef Py_TPFLAGS_HAVE_FINALIZE - #define Py_TPFLAGS_HAVE_FINALIZE 0 -#endif -#ifndef METH_STACKLESS - #define METH_STACKLESS 0 -#endif -#if PY_VERSION_HEX <= 0x030700A3 || !defined(METH_FASTCALL) - #ifndef METH_FASTCALL - #define METH_FASTCALL 0x80 - #endif - typedef PyObject *(*__Pyx_PyCFunctionFast) (PyObject *self, PyObject *const *args, Py_ssize_t nargs); - typedef PyObject *(*__Pyx_PyCFunctionFastWithKeywords) (PyObject *self, PyObject *const *args, - Py_ssize_t nargs, PyObject *kwnames); -#else - #define __Pyx_PyCFunctionFast _PyCFunctionFast - #define __Pyx_PyCFunctionFastWithKeywords _PyCFunctionFastWithKeywords -#endif -#if CYTHON_FAST_PYCCALL -#define __Pyx_PyFastCFunction_Check(func)\ - ((PyCFunction_Check(func) && (METH_FASTCALL == (PyCFunction_GET_FLAGS(func) & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_KEYWORDS | METH_STACKLESS))))) -#else -#define __Pyx_PyFastCFunction_Check(func) 0 -#endif -#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Malloc) - #define PyObject_Malloc(s) PyMem_Malloc(s) - #define PyObject_Free(p) PyMem_Free(p) - #define PyObject_Realloc(p) PyMem_Realloc(p) -#endif -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030400A1 - #define PyMem_RawMalloc(n) PyMem_Malloc(n) - #define PyMem_RawRealloc(p, n) PyMem_Realloc(p, n) - #define PyMem_RawFree(p) PyMem_Free(p) -#endif -#if CYTHON_COMPILING_IN_PYSTON - #define __Pyx_PyCode_HasFreeVars(co) PyCode_HasFreeVars(co) - #define __Pyx_PyFrame_SetLineNumber(frame, lineno) PyFrame_SetLineNumber(frame, lineno) -#else - #define __Pyx_PyCode_HasFreeVars(co) (PyCode_GetNumFree(co) > 0) - #define __Pyx_PyFrame_SetLineNumber(frame, lineno) (frame)->f_lineno = (lineno) -#endif -#if !CYTHON_FAST_THREAD_STATE || PY_VERSION_HEX < 0x02070000 - #define __Pyx_PyThreadState_Current PyThreadState_GET() -#elif PY_VERSION_HEX >= 0x03060000 - #define __Pyx_PyThreadState_Current _PyThreadState_UncheckedGet() -#elif PY_VERSION_HEX >= 0x03000000 - #define __Pyx_PyThreadState_Current PyThreadState_GET() -#else - #define __Pyx_PyThreadState_Current _PyThreadState_Current -#endif -#if PY_VERSION_HEX < 0x030700A2 && !defined(PyThread_tss_create) && !defined(Py_tss_NEEDS_INIT) -#include "pythread.h" -#define Py_tss_NEEDS_INIT 0 -typedef int Py_tss_t; -static CYTHON_INLINE int PyThread_tss_create(Py_tss_t *key) { - *key = PyThread_create_key(); - return 0; -} -static CYTHON_INLINE Py_tss_t * PyThread_tss_alloc(void) { - Py_tss_t *key = (Py_tss_t *)PyObject_Malloc(sizeof(Py_tss_t)); - *key = Py_tss_NEEDS_INIT; - return key; -} -static CYTHON_INLINE void PyThread_tss_free(Py_tss_t *key) { - PyObject_Free(key); -} -static CYTHON_INLINE int PyThread_tss_is_created(Py_tss_t *key) { - return *key != Py_tss_NEEDS_INIT; -} -static CYTHON_INLINE void PyThread_tss_delete(Py_tss_t *key) { - PyThread_delete_key(*key); - *key = Py_tss_NEEDS_INIT; -} -static 
CYTHON_INLINE int PyThread_tss_set(Py_tss_t *key, void *value) { - return PyThread_set_key_value(*key, value); -} -static CYTHON_INLINE void * PyThread_tss_get(Py_tss_t *key) { - return PyThread_get_key_value(*key); -} -#endif -#if CYTHON_COMPILING_IN_CPYTHON || defined(_PyDict_NewPresized) -#define __Pyx_PyDict_NewPresized(n) ((n <= 8) ? PyDict_New() : _PyDict_NewPresized(n)) -#else -#define __Pyx_PyDict_NewPresized(n) PyDict_New() -#endif -#if PY_MAJOR_VERSION >= 3 || CYTHON_FUTURE_DIVISION - #define __Pyx_PyNumber_Divide(x,y) PyNumber_TrueDivide(x,y) - #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceTrueDivide(x,y) -#else - #define __Pyx_PyNumber_Divide(x,y) PyNumber_Divide(x,y) - #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceDivide(x,y) -#endif -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1 && CYTHON_USE_UNICODE_INTERNALS -#define __Pyx_PyDict_GetItemStr(dict, name) _PyDict_GetItem_KnownHash(dict, name, ((PyASCIIObject *) name)->hash) -#else -#define __Pyx_PyDict_GetItemStr(dict, name) PyDict_GetItem(dict, name) -#endif -#if PY_VERSION_HEX > 0x03030000 && defined(PyUnicode_KIND) - #define CYTHON_PEP393_ENABLED 1 - #if PY_VERSION_HEX >= 0x030C0000 - #define __Pyx_PyUnicode_READY(op) (0) - #else - #define __Pyx_PyUnicode_READY(op) (likely(PyUnicode_IS_READY(op)) ?\ - 0 : _PyUnicode_Ready((PyObject *)(op))) - #endif - #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_LENGTH(u) - #define __Pyx_PyUnicode_READ_CHAR(u, i) PyUnicode_READ_CHAR(u, i) - #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) PyUnicode_MAX_CHAR_VALUE(u) - #define __Pyx_PyUnicode_KIND(u) PyUnicode_KIND(u) - #define __Pyx_PyUnicode_DATA(u) PyUnicode_DATA(u) - #define __Pyx_PyUnicode_READ(k, d, i) PyUnicode_READ(k, d, i) - #define __Pyx_PyUnicode_WRITE(k, d, i, ch) PyUnicode_WRITE(k, d, i, ch) - #if PY_VERSION_HEX >= 0x030C0000 - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_LENGTH(u)) - #else - #if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x03090000 - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : ((PyCompactUnicodeObject *)(u))->wstr_length)) - #else - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : PyUnicode_GET_SIZE(u))) - #endif - #endif -#else - #define CYTHON_PEP393_ENABLED 0 - #define PyUnicode_1BYTE_KIND 1 - #define PyUnicode_2BYTE_KIND 2 - #define PyUnicode_4BYTE_KIND 4 - #define __Pyx_PyUnicode_READY(op) (0) - #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_SIZE(u) - #define __Pyx_PyUnicode_READ_CHAR(u, i) ((Py_UCS4)(PyUnicode_AS_UNICODE(u)[i])) - #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) ((sizeof(Py_UNICODE) == 2) ? 
65535 : 1114111) - #define __Pyx_PyUnicode_KIND(u) (sizeof(Py_UNICODE)) - #define __Pyx_PyUnicode_DATA(u) ((void*)PyUnicode_AS_UNICODE(u)) - #define __Pyx_PyUnicode_READ(k, d, i) ((void)(k), (Py_UCS4)(((Py_UNICODE*)d)[i])) - #define __Pyx_PyUnicode_WRITE(k, d, i, ch) (((void)(k)), ((Py_UNICODE*)d)[i] = ch) - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_SIZE(u)) -#endif -#if CYTHON_COMPILING_IN_PYPY - #define __Pyx_PyUnicode_Concat(a, b) PyNumber_Add(a, b) - #define __Pyx_PyUnicode_ConcatSafe(a, b) PyNumber_Add(a, b) -#else - #define __Pyx_PyUnicode_Concat(a, b) PyUnicode_Concat(a, b) - #define __Pyx_PyUnicode_ConcatSafe(a, b) ((unlikely((a) == Py_None) || unlikely((b) == Py_None)) ?\ - PyNumber_Add(a, b) : __Pyx_PyUnicode_Concat(a, b)) -#endif -#if CYTHON_COMPILING_IN_PYPY && !defined(PyUnicode_Contains) - #define PyUnicode_Contains(u, s) PySequence_Contains(u, s) -#endif -#if CYTHON_COMPILING_IN_PYPY && !defined(PyByteArray_Check) - #define PyByteArray_Check(obj) PyObject_TypeCheck(obj, &PyByteArray_Type) -#endif -#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Format) - #define PyObject_Format(obj, fmt) PyObject_CallMethod(obj, "__format__", "O", fmt) -#endif -#define __Pyx_PyString_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyString_Check(b) && !PyString_CheckExact(b)))) ? PyNumber_Remainder(a, b) : __Pyx_PyString_Format(a, b)) -#define __Pyx_PyUnicode_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyUnicode_Check(b) && !PyUnicode_CheckExact(b)))) ? PyNumber_Remainder(a, b) : PyUnicode_Format(a, b)) -#if PY_MAJOR_VERSION >= 3 - #define __Pyx_PyString_Format(a, b) PyUnicode_Format(a, b) -#else - #define __Pyx_PyString_Format(a, b) PyString_Format(a, b) -#endif -#if PY_MAJOR_VERSION < 3 && !defined(PyObject_ASCII) - #define PyObject_ASCII(o) PyObject_Repr(o) -#endif -#if PY_MAJOR_VERSION >= 3 - #define PyBaseString_Type PyUnicode_Type - #define PyStringObject PyUnicodeObject - #define PyString_Type PyUnicode_Type - #define PyString_Check PyUnicode_Check - #define PyString_CheckExact PyUnicode_CheckExact -#ifndef PyObject_Unicode - #define PyObject_Unicode PyObject_Str -#endif -#endif -#if PY_MAJOR_VERSION >= 3 - #define __Pyx_PyBaseString_Check(obj) PyUnicode_Check(obj) - #define __Pyx_PyBaseString_CheckExact(obj) PyUnicode_CheckExact(obj) -#else - #define __Pyx_PyBaseString_Check(obj) (PyString_Check(obj) || PyUnicode_Check(obj)) - #define __Pyx_PyBaseString_CheckExact(obj) (PyString_CheckExact(obj) || PyUnicode_CheckExact(obj)) -#endif -#ifndef PySet_CheckExact - #define PySet_CheckExact(obj) (Py_TYPE(obj) == &PySet_Type) -#endif -#if PY_VERSION_HEX >= 0x030900A4 - #define __Pyx_SET_REFCNT(obj, refcnt) Py_SET_REFCNT(obj, refcnt) - #define __Pyx_SET_SIZE(obj, size) Py_SET_SIZE(obj, size) -#else - #define __Pyx_SET_REFCNT(obj, refcnt) Py_REFCNT(obj) = (refcnt) - #define __Pyx_SET_SIZE(obj, size) Py_SIZE(obj) = (size) -#endif -#if CYTHON_ASSUME_SAFE_MACROS - #define __Pyx_PySequence_SIZE(seq) Py_SIZE(seq) -#else - #define __Pyx_PySequence_SIZE(seq) PySequence_Size(seq) -#endif -#if PY_MAJOR_VERSION >= 3 - #define PyIntObject PyLongObject - #define PyInt_Type PyLong_Type - #define PyInt_Check(op) PyLong_Check(op) - #define PyInt_CheckExact(op) PyLong_CheckExact(op) - #define PyInt_FromString PyLong_FromString - #define PyInt_FromUnicode PyLong_FromUnicode - #define PyInt_FromLong PyLong_FromLong - #define PyInt_FromSize_t PyLong_FromSize_t - #define PyInt_FromSsize_t PyLong_FromSsize_t - #define PyInt_AsLong PyLong_AsLong - #define PyInt_AS_LONG PyLong_AS_LONG - #define 
PyInt_AsSsize_t PyLong_AsSsize_t
- #define PyInt_AsUnsignedLongMask PyLong_AsUnsignedLongMask
- #define PyInt_AsUnsignedLongLongMask PyLong_AsUnsignedLongLongMask
- #define PyNumber_Int PyNumber_Long
-#endif
-#if PY_MAJOR_VERSION >= 3
- #define PyBoolObject PyLongObject
-#endif
-#if PY_MAJOR_VERSION >= 3 && CYTHON_COMPILING_IN_PYPY
- #ifndef PyUnicode_InternFromString
- #define PyUnicode_InternFromString(s) PyUnicode_FromString(s)
- #endif
-#endif
-#if PY_VERSION_HEX < 0x030200A4
- typedef long Py_hash_t;
- #define __Pyx_PyInt_FromHash_t PyInt_FromLong
- #define __Pyx_PyInt_AsHash_t __Pyx_PyIndex_AsHash_t
-#else
- #define __Pyx_PyInt_FromHash_t PyInt_FromSsize_t
- #define __Pyx_PyInt_AsHash_t __Pyx_PyIndex_AsSsize_t
-#endif
-#if PY_MAJOR_VERSION >= 3
- #define __Pyx_PyMethod_New(func, self, klass) ((self) ? ((void)(klass), PyMethod_New(func, self)) : __Pyx_NewRef(func))
-#else
- #define __Pyx_PyMethod_New(func, self, klass) PyMethod_New(func, self, klass)
-#endif
-#if CYTHON_USE_ASYNC_SLOTS
- #if PY_VERSION_HEX >= 0x030500B1
- #define __Pyx_PyAsyncMethodsStruct PyAsyncMethods
- #define __Pyx_PyType_AsAsync(obj) (Py_TYPE(obj)->tp_as_async)
- #else
- #define __Pyx_PyType_AsAsync(obj) ((__Pyx_PyAsyncMethodsStruct*) (Py_TYPE(obj)->tp_reserved))
- #endif
-#else
- #define __Pyx_PyType_AsAsync(obj) NULL
-#endif
-#ifndef __Pyx_PyAsyncMethodsStruct
- typedef struct {
- unaryfunc am_await;
- unaryfunc am_aiter;
- unaryfunc am_anext;
- } __Pyx_PyAsyncMethodsStruct;
-#endif
-
-#if defined(_WIN32) || defined(WIN32) || defined(MS_WINDOWS)
- #if !defined(_USE_MATH_DEFINES)
- #define _USE_MATH_DEFINES
- #endif
-#endif
-#include <math.h>
-#ifdef NAN
-#define __PYX_NAN() ((float) NAN)
-#else
-static CYTHON_INLINE float __PYX_NAN() {
- float value;
- memset(&value, 0xFF, sizeof(value));
- return value;
-}
-#endif
-#if defined(__CYGWIN__) && defined(_LDBL_EQ_DBL)
-#define __Pyx_truncl trunc
-#else
-#define __Pyx_truncl truncl
-#endif
-
-#define __PYX_MARK_ERR_POS(f_index, lineno) \
- { __pyx_filename = __pyx_f[f_index]; (void)__pyx_filename; __pyx_lineno = lineno; (void)__pyx_lineno; __pyx_clineno = __LINE__; (void)__pyx_clineno; }
-#define __PYX_ERR(f_index, lineno, Ln_error) \
- { __PYX_MARK_ERR_POS(f_index, lineno) goto Ln_error; }
-
-#ifndef __PYX_EXTERN_C
- #ifdef __cplusplus
- #define __PYX_EXTERN_C extern "C"
- #else
- #define __PYX_EXTERN_C extern
- #endif
-#endif
-
-#define __PYX_HAVE__fontTools__qu2cu__qu2cu
-#define __PYX_HAVE_API__fontTools__qu2cu__qu2cu
-/* Early includes */
-#ifdef _OPENMP
-#include <omp.h>
-#endif /* _OPENMP */
-
-#if defined(PYREX_WITHOUT_ASSERTIONS) && !defined(CYTHON_WITHOUT_ASSERTIONS)
-#define CYTHON_WITHOUT_ASSERTIONS
-#endif
-
-typedef struct {PyObject **p; const char *s; const Py_ssize_t n; const char* encoding;
- const char is_unicode; const char is_str; const char intern; } __Pyx_StringTabEntry;
-
-#define __PYX_DEFAULT_STRING_ENCODING_IS_ASCII 0
-#define __PYX_DEFAULT_STRING_ENCODING_IS_UTF8 0
-#define __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT (PY_MAJOR_VERSION >= 3 && __PYX_DEFAULT_STRING_ENCODING_IS_UTF8)
-#define __PYX_DEFAULT_STRING_ENCODING ""
-#define __Pyx_PyObject_FromString __Pyx_PyBytes_FromString
-#define __Pyx_PyObject_FromStringAndSize __Pyx_PyBytes_FromStringAndSize
-#define __Pyx_uchar_cast(c) ((unsigned char)c)
-#define __Pyx_long_cast(x) ((long)x)
-#define __Pyx_fits_Py_ssize_t(v, type, is_signed) (\
- (sizeof(type) < sizeof(Py_ssize_t)) ||\
- (sizeof(type) > sizeof(Py_ssize_t) &&\
- likely(v < (type)PY_SSIZE_T_MAX ||\
- v == (type)PY_SSIZE_T_MAX) &&\
- (!is_signed || likely(v > (type)PY_SSIZE_T_MIN ||\
- v == (type)PY_SSIZE_T_MIN))) ||\
- (sizeof(type) == sizeof(Py_ssize_t) &&\
- (is_signed || likely(v < (type)PY_SSIZE_T_MAX ||\
- v == (type)PY_SSIZE_T_MAX))) )
-static CYTHON_INLINE int __Pyx_is_valid_index(Py_ssize_t i, Py_ssize_t limit) {
- return (size_t) i < (size_t) limit;
-}
-#if defined (__cplusplus) && __cplusplus >= 201103L
- #include <cstdlib>
- #define __Pyx_sst_abs(value) std::abs(value)
-#elif SIZEOF_INT >= SIZEOF_SIZE_T
- #define __Pyx_sst_abs(value) abs(value)
-#elif SIZEOF_LONG >= SIZEOF_SIZE_T
- #define __Pyx_sst_abs(value) labs(value)
-#elif defined (_MSC_VER)
- #define __Pyx_sst_abs(value) ((Py_ssize_t)_abs64(value))
-#elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
- #define __Pyx_sst_abs(value) llabs(value)
-#elif defined (__GNUC__)
- #define __Pyx_sst_abs(value) __builtin_llabs(value)
-#else
- #define __Pyx_sst_abs(value) ((value<0) ? -value : value)
-#endif
-static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject*);
-static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject*, Py_ssize_t* length);
-#define __Pyx_PyByteArray_FromString(s) PyByteArray_FromStringAndSize((const char*)s, strlen((const char*)s))
-#define __Pyx_PyByteArray_FromStringAndSize(s, l) PyByteArray_FromStringAndSize((const char*)s, l)
-#define __Pyx_PyBytes_FromString PyBytes_FromString
-#define __Pyx_PyBytes_FromStringAndSize PyBytes_FromStringAndSize
-static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char*);
-#if PY_MAJOR_VERSION < 3
- #define __Pyx_PyStr_FromString __Pyx_PyBytes_FromString
- #define __Pyx_PyStr_FromStringAndSize __Pyx_PyBytes_FromStringAndSize
-#else
- #define __Pyx_PyStr_FromString __Pyx_PyUnicode_FromString
- #define __Pyx_PyStr_FromStringAndSize __Pyx_PyUnicode_FromStringAndSize
-#endif
-#define __Pyx_PyBytes_AsWritableString(s) ((char*) PyBytes_AS_STRING(s))
-#define __Pyx_PyBytes_AsWritableSString(s) ((signed char*) PyBytes_AS_STRING(s))
-#define __Pyx_PyBytes_AsWritableUString(s) ((unsigned char*) PyBytes_AS_STRING(s))
-#define __Pyx_PyBytes_AsString(s) ((const char*) PyBytes_AS_STRING(s))
-#define __Pyx_PyBytes_AsSString(s) ((const signed char*) PyBytes_AS_STRING(s))
-#define __Pyx_PyBytes_AsUString(s) ((const unsigned char*) PyBytes_AS_STRING(s))
-#define __Pyx_PyObject_AsWritableString(s) ((char*) __Pyx_PyObject_AsString(s))
-#define __Pyx_PyObject_AsWritableSString(s) ((signed char*) __Pyx_PyObject_AsString(s))
-#define __Pyx_PyObject_AsWritableUString(s) ((unsigned char*) __Pyx_PyObject_AsString(s))
-#define __Pyx_PyObject_AsSString(s) ((const signed char*) __Pyx_PyObject_AsString(s))
-#define __Pyx_PyObject_AsUString(s) ((const unsigned char*) __Pyx_PyObject_AsString(s))
-#define __Pyx_PyObject_FromCString(s) __Pyx_PyObject_FromString((const char*)s)
-#define __Pyx_PyBytes_FromCString(s) __Pyx_PyBytes_FromString((const char*)s)
-#define __Pyx_PyByteArray_FromCString(s) __Pyx_PyByteArray_FromString((const char*)s)
-#define __Pyx_PyStr_FromCString(s) __Pyx_PyStr_FromString((const char*)s)
-#define __Pyx_PyUnicode_FromCString(s) __Pyx_PyUnicode_FromString((const char*)s)
-static CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const Py_UNICODE *u) {
- const Py_UNICODE *u_end = u;
- while (*u_end++) ;
- return (size_t)(u_end - u - 1);
-}
-#define __Pyx_PyUnicode_FromUnicode(u) PyUnicode_FromUnicode(u, __Pyx_Py_UNICODE_strlen(u))
-#define __Pyx_PyUnicode_FromUnicodeAndLength PyUnicode_FromUnicode
-#define __Pyx_PyUnicode_AsUnicode PyUnicode_AsUnicode
-#define __Pyx_NewRef(obj) (Py_INCREF(obj),
obj) -#define __Pyx_Owned_Py_None(b) __Pyx_NewRef(Py_None) -static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b); -static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject*); -static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject*); -static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x); -#define __Pyx_PySequence_Tuple(obj)\ - (likely(PyTuple_CheckExact(obj)) ? __Pyx_NewRef(obj) : PySequence_Tuple(obj)) -static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject*); -static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t); -static CYTHON_INLINE Py_hash_t __Pyx_PyIndex_AsHash_t(PyObject*); -#if CYTHON_ASSUME_SAFE_MACROS -#define __pyx_PyFloat_AsDouble(x) (PyFloat_CheckExact(x) ? PyFloat_AS_DOUBLE(x) : PyFloat_AsDouble(x)) -#else -#define __pyx_PyFloat_AsDouble(x) PyFloat_AsDouble(x) -#endif -#define __pyx_PyFloat_AsFloat(x) ((float) __pyx_PyFloat_AsDouble(x)) -#if PY_MAJOR_VERSION >= 3 -#define __Pyx_PyNumber_Int(x) (PyLong_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Long(x)) -#else -#define __Pyx_PyNumber_Int(x) (PyInt_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Int(x)) -#endif -#define __Pyx_PyNumber_Float(x) (PyFloat_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Float(x)) -#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII -static int __Pyx_sys_getdefaultencoding_not_ascii; -static int __Pyx_init_sys_getdefaultencoding_params(void) { - PyObject* sys; - PyObject* default_encoding = NULL; - PyObject* ascii_chars_u = NULL; - PyObject* ascii_chars_b = NULL; - const char* default_encoding_c; - sys = PyImport_ImportModule("sys"); - if (!sys) goto bad; - default_encoding = PyObject_CallMethod(sys, (char*) "getdefaultencoding", NULL); - Py_DECREF(sys); - if (!default_encoding) goto bad; - default_encoding_c = PyBytes_AsString(default_encoding); - if (!default_encoding_c) goto bad; - if (strcmp(default_encoding_c, "ascii") == 0) { - __Pyx_sys_getdefaultencoding_not_ascii = 0; - } else { - char ascii_chars[128]; - int c; - for (c = 0; c < 128; c++) { - ascii_chars[c] = c; - } - __Pyx_sys_getdefaultencoding_not_ascii = 1; - ascii_chars_u = PyUnicode_DecodeASCII(ascii_chars, 128, NULL); - if (!ascii_chars_u) goto bad; - ascii_chars_b = PyUnicode_AsEncodedString(ascii_chars_u, default_encoding_c, NULL); - if (!ascii_chars_b || !PyBytes_Check(ascii_chars_b) || memcmp(ascii_chars, PyBytes_AS_STRING(ascii_chars_b), 128) != 0) { - PyErr_Format( - PyExc_ValueError, - "This module compiled with c_string_encoding=ascii, but default encoding '%.200s' is not a superset of ascii.", - default_encoding_c); - goto bad; - } - Py_DECREF(ascii_chars_u); - Py_DECREF(ascii_chars_b); - } - Py_DECREF(default_encoding); - return 0; -bad: - Py_XDECREF(default_encoding); - Py_XDECREF(ascii_chars_u); - Py_XDECREF(ascii_chars_b); - return -1; -} -#endif -#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT && PY_MAJOR_VERSION >= 3 -#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_DecodeUTF8(c_str, size, NULL) -#else -#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_Decode(c_str, size, __PYX_DEFAULT_STRING_ENCODING, NULL) -#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT -static char* __PYX_DEFAULT_STRING_ENCODING; -static int __Pyx_init_sys_getdefaultencoding_params(void) { - PyObject* sys; - PyObject* default_encoding = NULL; - char* default_encoding_c; - sys = PyImport_ImportModule("sys"); - if (!sys) goto bad; - default_encoding = PyObject_CallMethod(sys, (char*) (const char*) "getdefaultencoding", NULL); - Py_DECREF(sys); - if 
(!default_encoding) goto bad;
- default_encoding_c = PyBytes_AsString(default_encoding);
- if (!default_encoding_c) goto bad;
- __PYX_DEFAULT_STRING_ENCODING = (char*) malloc(strlen(default_encoding_c) + 1);
- if (!__PYX_DEFAULT_STRING_ENCODING) goto bad;
- strcpy(__PYX_DEFAULT_STRING_ENCODING, default_encoding_c);
- Py_DECREF(default_encoding);
- return 0;
-bad:
- Py_XDECREF(default_encoding);
- return -1;
-}
-#endif
-#endif
-
-
-/* Test for GCC > 2.95 */
-#if defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95)))
- #define likely(x) __builtin_expect(!!(x), 1)
- #define unlikely(x) __builtin_expect(!!(x), 0)
-#else /* !__GNUC__ or GCC < 2.95 */
- #define likely(x) (x)
- #define unlikely(x) (x)
-#endif /* __GNUC__ */
-static CYTHON_INLINE void __Pyx_pretend_to_initialize(void* ptr) { (void)ptr; }
-
-static PyObject *__pyx_m = NULL;
-static PyObject *__pyx_d;
-static PyObject *__pyx_b;
-static PyObject *__pyx_cython_runtime = NULL;
-static PyObject *__pyx_empty_tuple;
-static PyObject *__pyx_empty_bytes;
-static PyObject *__pyx_empty_unicode;
-static int __pyx_lineno;
-static int __pyx_clineno = 0;
-static const char * __pyx_cfilenm= __FILE__;
-static const char *__pyx_filename;
-
-/* Header.proto */
-#if !defined(CYTHON_CCOMPLEX)
- #if defined(__cplusplus)
- #define CYTHON_CCOMPLEX 1
- #elif defined(_Complex_I)
- #define CYTHON_CCOMPLEX 1
- #else
- #define CYTHON_CCOMPLEX 0
- #endif
-#endif
-#if CYTHON_CCOMPLEX
- #ifdef __cplusplus
- #include <complex>
- #else
- #include <complex.h>
- #endif
-#endif
-#if CYTHON_CCOMPLEX && !defined(__cplusplus) && defined(__sun__) && defined(__GNUC__)
- #undef _Complex_I
- #define _Complex_I 1.0fj
-#endif
-
-
-static const char *__pyx_f[] = {
- "Lib/fontTools/qu2cu/qu2cu.py",
-};
-/* Declarations.proto */
-#if CYTHON_CCOMPLEX
- #ifdef __cplusplus
- typedef ::std::complex< double > __pyx_t_double_complex;
- #else
- typedef double _Complex __pyx_t_double_complex;
- #endif
-#else
- typedef struct { double real, imag; } __pyx_t_double_complex;
-#endif
-static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double, double);
-
-
-/*--- Type declarations ---*/
-struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves;
-struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_1_genexpr;
-struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves;
-struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_3_genexpr;
-
-/* "fontTools/qu2cu/qu2cu.py":185
- * is_complex=cython.int,
- * )
- * def quadratic_to_curves( # <<<<<<<<<<<<<<
- * quads: List[List[Point]],
- * max_err: float = 0.5,
- */
-struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves {
- PyObject_HEAD
- PyObject *__pyx_8genexpr3__pyx_v_curve;
-};
-
-
-/* "fontTools/qu2cu/qu2cu.py":238
- *
- * if not is_complex:
- * curves = [tuple((c.real, c.imag) for c in curve) for curve in curves] # <<<<<<<<<<<<<<
- * return curves
- *
- */
-struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_1_genexpr {
- PyObject_HEAD
- struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves *__pyx_outer_scope;
- PyObject *__pyx_v_c;
- int __pyx_v_cost;
- int __pyx_v_is_complex;
- PyObject *__pyx_t_0;
- Py_ssize_t __pyx_t_1;
- PyObject *(*__pyx_t_2)(PyObject *);
-};
-
-
-/* "fontTools/qu2cu/qu2cu.py":268
- * u=cython.complex,
- * )
- * def spline_to_curves(q, costs, tolerance=0.5, all_cubic=False): # <<<<<<<<<<<<<<
- * """
- * q: quadratic spline with alternating on-curve / off-curve points.
- */ -struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves { - PyObject_HEAD - PyObject *__pyx_v_orig; - PyObject *__pyx_v_reconst; -}; - - -/* "fontTools/qu2cu/qu2cu.py":343 - * for k, reconst in enumerate(reconstructed): - * orig = elevated_quadratics[j + k] - * p0, p1, p2, p3 = tuple(v - u for v, u in zip(reconst, orig)) # <<<<<<<<<<<<<< - * - * if not cubic_farthest_fit_inside(p0, p1, p2, p3, tolerance): - */ -struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_3_genexpr { - PyObject_HEAD - struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves *__pyx_outer_scope; - int __pyx_v_all_cubic; - int __pyx_v_count; - double __pyx_v_err; - double __pyx_v_error; - int __pyx_v_i; - int __pyx_v_i_sol_count; - double __pyx_v_i_sol_error; - int __pyx_v_is_cubic; - int __pyx_v_j; - int __pyx_v_j_sol_count; - double __pyx_v_j_sol_error; - int __pyx_v_k; - __pyx_t_double_complex __pyx_v_p0; - __pyx_t_double_complex __pyx_v_p1; - __pyx_t_double_complex __pyx_v_p2; - __pyx_t_double_complex __pyx_v_p3; - int __pyx_v_start; - int __pyx_v_this_sol_count; - double __pyx_v_tolerance; - __pyx_t_double_complex __pyx_v_u; - __pyx_t_double_complex __pyx_v_v; - PyObject *__pyx_t_0; - Py_ssize_t __pyx_t_1; - PyObject *(*__pyx_t_2)(PyObject *); -}; - - -/* --- Runtime support code (head) --- */ -/* Refnanny.proto */ -#ifndef CYTHON_REFNANNY - #define CYTHON_REFNANNY 0 -#endif -#if CYTHON_REFNANNY - typedef struct { - void (*INCREF)(void*, PyObject*, int); - void (*DECREF)(void*, PyObject*, int); - void (*GOTREF)(void*, PyObject*, int); - void (*GIVEREF)(void*, PyObject*, int); - void* (*SetupContext)(const char*, int, const char*); - void (*FinishContext)(void**); - } __Pyx_RefNannyAPIStruct; - static __Pyx_RefNannyAPIStruct *__Pyx_RefNanny = NULL; - static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname); - #define __Pyx_RefNannyDeclarations void *__pyx_refnanny = NULL; -#ifdef WITH_THREAD - #define __Pyx_RefNannySetupContext(name, acquire_gil)\ - if (acquire_gil) {\ - PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\ - PyGILState_Release(__pyx_gilstate_save);\ - } else {\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\ - } -#else - #define __Pyx_RefNannySetupContext(name, acquire_gil)\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__) -#endif - #define __Pyx_RefNannyFinishContext()\ - __Pyx_RefNanny->FinishContext(&__pyx_refnanny) - #define __Pyx_INCREF(r) __Pyx_RefNanny->INCREF(__pyx_refnanny, (PyObject *)(r), __LINE__) - #define __Pyx_DECREF(r) __Pyx_RefNanny->DECREF(__pyx_refnanny, (PyObject *)(r), __LINE__) - #define __Pyx_GOTREF(r) __Pyx_RefNanny->GOTREF(__pyx_refnanny, (PyObject *)(r), __LINE__) - #define __Pyx_GIVEREF(r) __Pyx_RefNanny->GIVEREF(__pyx_refnanny, (PyObject *)(r), __LINE__) - #define __Pyx_XINCREF(r) do { if((r) != NULL) {__Pyx_INCREF(r); }} while(0) - #define __Pyx_XDECREF(r) do { if((r) != NULL) {__Pyx_DECREF(r); }} while(0) - #define __Pyx_XGOTREF(r) do { if((r) != NULL) {__Pyx_GOTREF(r); }} while(0) - #define __Pyx_XGIVEREF(r) do { if((r) != NULL) {__Pyx_GIVEREF(r);}} while(0) -#else - #define __Pyx_RefNannyDeclarations - #define __Pyx_RefNannySetupContext(name, acquire_gil) - #define __Pyx_RefNannyFinishContext() - #define __Pyx_INCREF(r) Py_INCREF(r) - #define __Pyx_DECREF(r) Py_DECREF(r) - #define __Pyx_GOTREF(r) - #define __Pyx_GIVEREF(r) - #define 
__Pyx_XINCREF(r) Py_XINCREF(r) - #define __Pyx_XDECREF(r) Py_XDECREF(r) - #define __Pyx_XGOTREF(r) - #define __Pyx_XGIVEREF(r) -#endif -#define __Pyx_XDECREF_SET(r, v) do {\ - PyObject *tmp = (PyObject *) r;\ - r = v; __Pyx_XDECREF(tmp);\ - } while (0) -#define __Pyx_DECREF_SET(r, v) do {\ - PyObject *tmp = (PyObject *) r;\ - r = v; __Pyx_DECREF(tmp);\ - } while (0) -#define __Pyx_CLEAR(r) do { PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);} while(0) -#define __Pyx_XCLEAR(r) do { if((r) != NULL) {PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);}} while(0) - -/* PyObjectGetAttrStr.proto */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name); -#else -#define __Pyx_PyObject_GetAttrStr(o,n) PyObject_GetAttr(o,n) -#endif - -/* GetBuiltinName.proto */ -static PyObject *__Pyx_GetBuiltinName(PyObject *name); - -/* RaiseArgTupleInvalid.proto */ -static void __Pyx_RaiseArgtupleInvalid(const char* func_name, int exact, - Py_ssize_t num_min, Py_ssize_t num_max, Py_ssize_t num_found); - -/* RaiseDoubleKeywords.proto */ -static void __Pyx_RaiseDoubleKeywordsError(const char* func_name, PyObject* kw_name); - -/* ParseKeywords.proto */ -static int __Pyx_ParseOptionalKeywords(PyObject *kwds, PyObject **argnames[],\ - PyObject *kwds2, PyObject *values[], Py_ssize_t num_pos_args,\ - const char* function_name); - -/* GetItemInt.proto */ -#define __Pyx_GetItemInt(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_GetItemInt_Fast(o, (Py_ssize_t)i, is_list, wraparound, boundscheck) :\ - (is_list ? (PyErr_SetString(PyExc_IndexError, "list index out of range"), (PyObject*)NULL) :\ - __Pyx_GetItemInt_Generic(o, to_py_func(i)))) -#define __Pyx_GetItemInt_List(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_GetItemInt_List_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\ - (PyErr_SetString(PyExc_IndexError, "list index out of range"), (PyObject*)NULL)) -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i, - int wraparound, int boundscheck); -#define __Pyx_GetItemInt_Tuple(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_GetItemInt_Tuple_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\ - (PyErr_SetString(PyExc_IndexError, "tuple index out of range"), (PyObject*)NULL)) -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i, - int wraparound, int boundscheck); -static PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j); -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i, - int is_list, int wraparound, int boundscheck); - -/* AssertionsEnabled.proto */ -#define __Pyx_init_assertions_enabled() -#if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX < 0x02070600 && !defined(Py_OptimizeFlag) - #define __pyx_assertions_enabled() (1) -#elif PY_VERSION_HEX < 0x03080000 || CYTHON_COMPILING_IN_PYPY || defined(Py_LIMITED_API) - #define __pyx_assertions_enabled() (!Py_OptimizeFlag) -#elif CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030900A6 - static int __pyx_assertions_enabled_flag; - #define __pyx_assertions_enabled() (__pyx_assertions_enabled_flag) - #undef __Pyx_init_assertions_enabled - static void __Pyx_init_assertions_enabled(void) { - __pyx_assertions_enabled_flag = ! 
_PyInterpreterState_GetConfig(__Pyx_PyThreadState_Current->interp)->optimization_level; - } -#else - #define __pyx_assertions_enabled() (!Py_OptimizeFlag) -#endif - -/* py_abs.proto */ -#if CYTHON_USE_PYLONG_INTERNALS -static PyObject *__Pyx_PyLong_AbsNeg(PyObject *num); -#define __Pyx_PyNumber_Absolute(x)\ - ((likely(PyLong_CheckExact(x))) ?\ - (likely(Py_SIZE(x) >= 0) ? (Py_INCREF(x), (x)) : __Pyx_PyLong_AbsNeg(x)) :\ - PyNumber_Absolute(x)) -#else -#define __Pyx_PyNumber_Absolute(x) PyNumber_Absolute(x) -#endif - -/* ListAppend.proto */ -#if CYTHON_USE_PYLIST_INTERNALS && CYTHON_ASSUME_SAFE_MACROS -static CYTHON_INLINE int __Pyx_PyList_Append(PyObject* list, PyObject* x) { - PyListObject* L = (PyListObject*) list; - Py_ssize_t len = Py_SIZE(list); - if (likely(L->allocated > len) & likely(len > (L->allocated >> 1))) { - Py_INCREF(x); - PyList_SET_ITEM(list, len, x); - __Pyx_SET_SIZE(list, len + 1); - return 0; - } - return PyList_Append(list, x); -} -#else -#define __Pyx_PyList_Append(L,x) PyList_Append(L,x) -#endif - -/* SliceTupleAndList.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyList_GetSlice(PyObject* src, Py_ssize_t start, Py_ssize_t stop); -static CYTHON_INLINE PyObject* __Pyx_PyTuple_GetSlice(PyObject* src, Py_ssize_t start, Py_ssize_t stop); -#else -#define __Pyx_PyList_GetSlice(seq, start, stop) PySequence_GetSlice(seq, start, stop) -#define __Pyx_PyTuple_GetSlice(seq, start, stop) PySequence_GetSlice(seq, start, stop) -#endif - -/* ListCompAppend.proto */ -#if CYTHON_USE_PYLIST_INTERNALS && CYTHON_ASSUME_SAFE_MACROS -static CYTHON_INLINE int __Pyx_ListComp_Append(PyObject* list, PyObject* x) { - PyListObject* L = (PyListObject*) list; - Py_ssize_t len = Py_SIZE(list); - if (likely(L->allocated > len)) { - Py_INCREF(x); - PyList_SET_ITEM(list, len, x); - __Pyx_SET_SIZE(list, len + 1); - return 0; - } - return PyList_Append(list, x); -} -#else -#define __Pyx_ListComp_Append(L,x) PyList_Append(L,x) -#endif - -/* PyIntBinop.proto */ -#if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_PyInt_SubtractCObj(PyObject *op1, PyObject *op2, long intval, int inplace, int zerodivision_check); -#else -#define __Pyx_PyInt_SubtractCObj(op1, op2, intval, inplace, zerodivision_check)\ - (inplace ? 
PyNumber_InPlaceSubtract(op1, op2) : PyNumber_Subtract(op1, op2)) -#endif - -/* None.proto */ -static CYTHON_INLINE void __Pyx_RaiseClosureNameError(const char *varname); - -/* RaiseTooManyValuesToUnpack.proto */ -static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected); - -/* RaiseNeedMoreValuesToUnpack.proto */ -static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index); - -/* IterFinish.proto */ -static CYTHON_INLINE int __Pyx_IterFinish(void); - -/* UnpackItemEndCheck.proto */ -static int __Pyx_IternextUnpackEndCheck(PyObject *retval, Py_ssize_t expected); - -/* PyObjectCall.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw); -#else -#define __Pyx_PyObject_Call(func, arg, kw) PyObject_Call(func, arg, kw) -#endif - -/* PyDictVersioning.proto */ -#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS -#define __PYX_DICT_VERSION_INIT ((PY_UINT64_T) -1) -#define __PYX_GET_DICT_VERSION(dict) (((PyDictObject*)(dict))->ma_version_tag) -#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var)\ - (version_var) = __PYX_GET_DICT_VERSION(dict);\ - (cache_var) = (value); -#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) {\ - static PY_UINT64_T __pyx_dict_version = 0;\ - static PyObject *__pyx_dict_cached_value = NULL;\ - if (likely(__PYX_GET_DICT_VERSION(DICT) == __pyx_dict_version)) {\ - (VAR) = __pyx_dict_cached_value;\ - } else {\ - (VAR) = __pyx_dict_cached_value = (LOOKUP);\ - __pyx_dict_version = __PYX_GET_DICT_VERSION(DICT);\ - }\ -} -static CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj); -static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj); -static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version); -#else -#define __PYX_GET_DICT_VERSION(dict) (0) -#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var) -#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) (VAR) = (LOOKUP); -#endif - -/* GetModuleGlobalName.proto */ -#if CYTHON_USE_DICT_VERSIONS -#define __Pyx_GetModuleGlobalName(var, name) do {\ - static PY_UINT64_T __pyx_dict_version = 0;\ - static PyObject *__pyx_dict_cached_value = NULL;\ - (var) = (likely(__pyx_dict_version == __PYX_GET_DICT_VERSION(__pyx_d))) ?\ - (likely(__pyx_dict_cached_value) ? 
__Pyx_NewRef(__pyx_dict_cached_value) : __Pyx_GetBuiltinName(name)) :\ - __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\ -} while(0) -#define __Pyx_GetModuleGlobalNameUncached(var, name) do {\ - PY_UINT64_T __pyx_dict_version;\ - PyObject *__pyx_dict_cached_value;\ - (var) = __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\ -} while(0) -static PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value); -#else -#define __Pyx_GetModuleGlobalName(var, name) (var) = __Pyx__GetModuleGlobalName(name) -#define __Pyx_GetModuleGlobalNameUncached(var, name) (var) = __Pyx__GetModuleGlobalName(name) -static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name); -#endif - -/* PyCFunctionFastCall.proto */ -#if CYTHON_FAST_PYCCALL -static CYTHON_INLINE PyObject *__Pyx_PyCFunction_FastCall(PyObject *func, PyObject **args, Py_ssize_t nargs); -#else -#define __Pyx_PyCFunction_FastCall(func, args, nargs) (assert(0), NULL) -#endif - -/* PyFunctionFastCall.proto */ -#if CYTHON_FAST_PYCALL -#define __Pyx_PyFunction_FastCall(func, args, nargs)\ - __Pyx_PyFunction_FastCallDict((func), (args), (nargs), NULL) -#if 1 || PY_VERSION_HEX < 0x030600B1 -static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, Py_ssize_t nargs, PyObject *kwargs); -#else -#define __Pyx_PyFunction_FastCallDict(func, args, nargs, kwargs) _PyFunction_FastCallDict(func, args, nargs, kwargs) -#endif -#define __Pyx_BUILD_ASSERT_EXPR(cond)\ - (sizeof(char [1 - 2*!(cond)]) - 1) -#ifndef Py_MEMBER_SIZE -#define Py_MEMBER_SIZE(type, member) sizeof(((type *)0)->member) -#endif -#if CYTHON_FAST_PYCALL - static size_t __pyx_pyframe_localsplus_offset = 0; - #include "frameobject.h" -#if PY_VERSION_HEX >= 0x030b00a6 - #ifndef Py_BUILD_CORE - #define Py_BUILD_CORE 1 - #endif - #include "internal/pycore_frame.h" -#endif - #define __Pxy_PyFrame_Initialize_Offsets()\ - ((void)__Pyx_BUILD_ASSERT_EXPR(sizeof(PyFrameObject) == offsetof(PyFrameObject, f_localsplus) + Py_MEMBER_SIZE(PyFrameObject, f_localsplus)),\ - (void)(__pyx_pyframe_localsplus_offset = ((size_t)PyFrame_Type.tp_basicsize) - Py_MEMBER_SIZE(PyFrameObject, f_localsplus))) - #define __Pyx_PyFrame_GetLocalsplus(frame)\ - (assert(__pyx_pyframe_localsplus_offset), (PyObject **)(((char *)(frame)) + __pyx_pyframe_localsplus_offset)) -#endif // CYTHON_FAST_PYCALL -#endif - -/* PyObjectCall2Args.proto */ -static CYTHON_UNUSED PyObject* __Pyx_PyObject_Call2Args(PyObject* function, PyObject* arg1, PyObject* arg2); - -/* PyObjectCallMethO.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg); -#endif - -/* PyObjectCallOneArg.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg); - -/* SliceObject.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetSlice( - PyObject* obj, Py_ssize_t cstart, Py_ssize_t cstop, - PyObject** py_start, PyObject** py_stop, PyObject** py_slice, - int has_cstart, int has_cstop, int wraparound); - -/* PyObjectCallNoArg.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallNoArg(PyObject *func); -#else -#define __Pyx_PyObject_CallNoArg(func) __Pyx_PyObject_Call(func, __pyx_empty_tuple, NULL) -#endif - -/* PyObjectGetMethod.proto */ -static int __Pyx_PyObject_GetMethod(PyObject *obj, PyObject *name, PyObject **method); - -/* PyObjectCallMethod0.proto */ -static PyObject* 
__Pyx_PyObject_CallMethod0(PyObject* obj, PyObject* method_name); - -/* pop.proto */ -static CYTHON_INLINE PyObject* __Pyx__PyObject_Pop(PyObject* L); -#if CYTHON_USE_PYLIST_INTERNALS && CYTHON_ASSUME_SAFE_MACROS -static CYTHON_INLINE PyObject* __Pyx_PyList_Pop(PyObject* L); -#define __Pyx_PyObject_Pop(L) (likely(PyList_CheckExact(L)) ?\ - __Pyx_PyList_Pop(L) : __Pyx__PyObject_Pop(L)) -#else -#define __Pyx_PyList_Pop(L) __Pyx__PyObject_Pop(L) -#define __Pyx_PyObject_Pop(L) __Pyx__PyObject_Pop(L) -#endif - -/* UnpackUnboundCMethod.proto */ -typedef struct { - PyObject *type; - PyObject **method_name; - PyCFunction func; - PyObject *method; - int flag; -} __Pyx_CachedCFunction; - -/* CallUnboundCMethod0.proto */ -static PyObject* __Pyx__CallUnboundCMethod0(__Pyx_CachedCFunction* cfunc, PyObject* self); -#if CYTHON_COMPILING_IN_CPYTHON -#define __Pyx_CallUnboundCMethod0(cfunc, self)\ - (likely((cfunc)->func) ?\ - (likely((cfunc)->flag == METH_NOARGS) ? (*((cfunc)->func))(self, NULL) :\ - (PY_VERSION_HEX >= 0x030600B1 && likely((cfunc)->flag == METH_FASTCALL) ?\ - (PY_VERSION_HEX >= 0x030700A0 ?\ - (*(__Pyx_PyCFunctionFast)(void*)(PyCFunction)(cfunc)->func)(self, &__pyx_empty_tuple, 0) :\ - (*(__Pyx_PyCFunctionFastWithKeywords)(void*)(PyCFunction)(cfunc)->func)(self, &__pyx_empty_tuple, 0, NULL)) :\ - (PY_VERSION_HEX >= 0x030700A0 && (cfunc)->flag == (METH_FASTCALL | METH_KEYWORDS) ?\ - (*(__Pyx_PyCFunctionFastWithKeywords)(void*)(PyCFunction)(cfunc)->func)(self, &__pyx_empty_tuple, 0, NULL) :\ - (likely((cfunc)->flag == (METH_VARARGS | METH_KEYWORDS)) ? ((*(PyCFunctionWithKeywords)(void*)(PyCFunction)(cfunc)->func)(self, __pyx_empty_tuple, NULL)) :\ - ((cfunc)->flag == METH_VARARGS ? (*((cfunc)->func))(self, __pyx_empty_tuple) :\ - __Pyx__CallUnboundCMethod0(cfunc, self)))))) :\ - __Pyx__CallUnboundCMethod0(cfunc, self)) -#else -#define __Pyx_CallUnboundCMethod0(cfunc, self) __Pyx__CallUnboundCMethod0(cfunc, self) -#endif - -/* ListExtend.proto */ -static CYTHON_INLINE int __Pyx_PyList_Extend(PyObject* L, PyObject* v) { -#if CYTHON_COMPILING_IN_CPYTHON - PyObject* none = _PyList_Extend((PyListObject*)L, v); - if (unlikely(!none)) - return -1; - Py_DECREF(none); - return 0; -#else - return PyList_SetSlice(L, PY_SSIZE_T_MAX, PY_SSIZE_T_MAX, v); -#endif -} - -/* PyIntBinop.proto */ -#if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_PyInt_AddObjC(PyObject *op1, PyObject *op2, long intval, int inplace, int zerodivision_check); -#else -#define __Pyx_PyInt_AddObjC(op1, op2, intval, inplace, zerodivision_check)\ - (inplace ? 
PyNumber_InPlaceAdd(op1, op2) : PyNumber_Add(op1, op2))
-#endif
-
-/* GetTopmostException.proto */
-#if CYTHON_USE_EXC_INFO_STACK
-static _PyErr_StackItem * __Pyx_PyErr_GetTopmostException(PyThreadState *tstate);
-#endif
-
-/* PyThreadStateGet.proto */
-#if CYTHON_FAST_THREAD_STATE
-#define __Pyx_PyThreadState_declare PyThreadState *__pyx_tstate;
-#define __Pyx_PyThreadState_assign __pyx_tstate = __Pyx_PyThreadState_Current;
-#define __Pyx_PyErr_Occurred() __pyx_tstate->curexc_type
-#else
-#define __Pyx_PyThreadState_declare
-#define __Pyx_PyThreadState_assign
-#define __Pyx_PyErr_Occurred() PyErr_Occurred()
-#endif
-
-/* SaveResetException.proto */
-#if CYTHON_FAST_THREAD_STATE
-#define __Pyx_ExceptionSave(type, value, tb) __Pyx__ExceptionSave(__pyx_tstate, type, value, tb)
-static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb);
-#define __Pyx_ExceptionReset(type, value, tb) __Pyx__ExceptionReset(__pyx_tstate, type, value, tb)
-static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb);
-#else
-#define __Pyx_ExceptionSave(type, value, tb) PyErr_GetExcInfo(type, value, tb)
-#define __Pyx_ExceptionReset(type, value, tb) PyErr_SetExcInfo(type, value, tb)
-#endif
-
-/* PyErrExceptionMatches.proto */
-#if CYTHON_FAST_THREAD_STATE
-#define __Pyx_PyErr_ExceptionMatches(err) __Pyx_PyErr_ExceptionMatchesInState(__pyx_tstate, err)
-static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err);
-#else
-#define __Pyx_PyErr_ExceptionMatches(err) PyErr_ExceptionMatches(err)
-#endif
-
-/* GetException.proto */
-#if CYTHON_FAST_THREAD_STATE
-#define __Pyx_GetException(type, value, tb) __Pyx__GetException(__pyx_tstate, type, value, tb)
-static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb);
-#else
-static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb);
-#endif
-
-/* pyfrozenset_new.proto */
-static CYTHON_INLINE PyObject* __Pyx_PyFrozenSet_New(PyObject* it);
-
-/* PySetContains.proto */
-static CYTHON_INLINE int __Pyx_PySet_ContainsTF(PyObject* key, PyObject* set, int eq);
-
-/* Import.proto */
-static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level);
-
-/* ImportFrom.proto */
-static PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name);
-
-/* IncludeStringH.proto */
-#include <string.h>
-
-/* PyObject_GenericGetAttrNoDict.proto */
-#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000
-static CYTHON_INLINE PyObject* __Pyx_PyObject_GenericGetAttrNoDict(PyObject* obj, PyObject* attr_name);
-#else
-#define __Pyx_PyObject_GenericGetAttrNoDict PyObject_GenericGetAttr
-#endif
-
-/* FetchCommonType.proto */
-static PyTypeObject* __Pyx_FetchCommonType(PyTypeObject* type);
-
-/* CythonFunctionShared.proto */
-#define __Pyx_CyFunction_USED 1
-#define __Pyx_CYFUNCTION_STATICMETHOD 0x01
-#define __Pyx_CYFUNCTION_CLASSMETHOD 0x02
-#define __Pyx_CYFUNCTION_CCLASS 0x04
-#define __Pyx_CyFunction_GetClosure(f)\
- (((__pyx_CyFunctionObject *) (f))->func_closure)
-#define __Pyx_CyFunction_GetClassObj(f)\
- (((__pyx_CyFunctionObject *) (f))->func_classobj)
-#define __Pyx_CyFunction_Defaults(type, f)\
- ((type *)(((__pyx_CyFunctionObject *) (f))->defaults))
-#define __Pyx_CyFunction_SetDefaultsGetter(f, g)\
- ((__pyx_CyFunctionObject *) (f))->defaults_getter = (g)
-typedef struct {
- PyCFunctionObject func;
-#if PY_VERSION_HEX < 0x030500A0
-
PyObject *func_weakreflist; -#endif - PyObject *func_dict; - PyObject *func_name; - PyObject *func_qualname; - PyObject *func_doc; - PyObject *func_globals; - PyObject *func_code; - PyObject *func_closure; - PyObject *func_classobj; - void *defaults; - int defaults_pyobjects; - size_t defaults_size; // used by FusedFunction for copying defaults - int flags; - PyObject *defaults_tuple; - PyObject *defaults_kwdict; - PyObject *(*defaults_getter)(PyObject *); - PyObject *func_annotations; -} __pyx_CyFunctionObject; -static PyTypeObject *__pyx_CyFunctionType = 0; -#define __Pyx_CyFunction_Check(obj) (__Pyx_TypeCheck(obj, __pyx_CyFunctionType)) -static PyObject *__Pyx_CyFunction_Init(__pyx_CyFunctionObject* op, PyMethodDef *ml, - int flags, PyObject* qualname, - PyObject *self, - PyObject *module, PyObject *globals, - PyObject* code); -static CYTHON_INLINE void *__Pyx_CyFunction_InitDefaults(PyObject *m, - size_t size, - int pyobjects); -static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsTuple(PyObject *m, - PyObject *tuple); -static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsKwDict(PyObject *m, - PyObject *dict); -static CYTHON_INLINE void __Pyx_CyFunction_SetAnnotationsDict(PyObject *m, - PyObject *dict); -static int __pyx_CyFunction_init(void); - -/* CythonFunction.proto */ -static PyObject *__Pyx_CyFunction_New(PyMethodDef *ml, - int flags, PyObject* qualname, - PyObject *closure, - PyObject *module, PyObject *globals, - PyObject* code); - -/* ObjectGetItem.proto */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject *__Pyx_PyObject_GetItem(PyObject *obj, PyObject* key); -#else -#define __Pyx_PyObject_GetItem(obj, key) PyObject_GetItem(obj, key) -#endif - -/* BytesEquals.proto */ -static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals); - -/* UnicodeEquals.proto */ -static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals); - -/* PyErrFetchRestore.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyErr_Clear() __Pyx_ErrRestore(NULL, NULL, NULL) -#define __Pyx_ErrRestoreWithState(type, value, tb) __Pyx_ErrRestoreInState(PyThreadState_GET(), type, value, tb) -#define __Pyx_ErrFetchWithState(type, value, tb) __Pyx_ErrFetchInState(PyThreadState_GET(), type, value, tb) -#define __Pyx_ErrRestore(type, value, tb) __Pyx_ErrRestoreInState(__pyx_tstate, type, value, tb) -#define __Pyx_ErrFetch(type, value, tb) __Pyx_ErrFetchInState(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb); -static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#if CYTHON_COMPILING_IN_CPYTHON -#define __Pyx_PyErr_SetNone(exc) (Py_INCREF(exc), __Pyx_ErrRestore((exc), NULL, NULL)) -#else -#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc) -#endif -#else -#define __Pyx_PyErr_Clear() PyErr_Clear() -#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc) -#define __Pyx_ErrRestoreWithState(type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetchWithState(type, value, tb) PyErr_Fetch(type, value, tb) -#define __Pyx_ErrRestoreInState(tstate, type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetchInState(tstate, type, value, tb) PyErr_Fetch(type, value, tb) -#define __Pyx_ErrRestore(type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetch(type, value, tb) PyErr_Fetch(type, value, tb) -#endif - -/* CLineInTraceback.proto */ -#ifdef 
CYTHON_CLINE_IN_TRACEBACK -#define __Pyx_CLineForTraceback(tstate, c_line) (((CYTHON_CLINE_IN_TRACEBACK)) ? c_line : 0) -#else -static int __Pyx_CLineForTraceback(PyThreadState *tstate, int c_line); -#endif - -/* CodeObjectCache.proto */ -typedef struct { - PyCodeObject* code_object; - int code_line; -} __Pyx_CodeObjectCacheEntry; -struct __Pyx_CodeObjectCache { - int count; - int max_count; - __Pyx_CodeObjectCacheEntry* entries; -}; -static struct __Pyx_CodeObjectCache __pyx_code_cache = {0,0,NULL}; -static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line); -static PyCodeObject *__pyx_find_code_object(int code_line); -static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object); - -/* AddTraceback.proto */ -static void __Pyx_AddTraceback(const char *funcname, int c_line, - int py_line, const char *filename); - -/* FromPy.proto */ -static __pyx_t_double_complex __Pyx_PyComplex_As___pyx_t_double_complex(PyObject*); - -/* GCCDiagnostics.proto */ -#if defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6)) -#define __Pyx_HAS_GCC_DIAGNOSTIC -#endif - -/* RealImag.proto */ -#if CYTHON_CCOMPLEX - #ifdef __cplusplus - #define __Pyx_CREAL(z) ((z).real()) - #define __Pyx_CIMAG(z) ((z).imag()) - #else - #define __Pyx_CREAL(z) (__real__(z)) - #define __Pyx_CIMAG(z) (__imag__(z)) - #endif -#else - #define __Pyx_CREAL(z) ((z).real) - #define __Pyx_CIMAG(z) ((z).imag) -#endif -#if defined(__cplusplus) && CYTHON_CCOMPLEX\ - && (defined(_WIN32) || defined(__clang__) || (defined(__GNUC__) && (__GNUC__ >= 5 || __GNUC__ == 4 && __GNUC_MINOR__ >= 4 )) || __cplusplus >= 201103) - #define __Pyx_SET_CREAL(z,x) ((z).real(x)) - #define __Pyx_SET_CIMAG(z,y) ((z).imag(y)) -#else - #define __Pyx_SET_CREAL(z,x) __Pyx_CREAL(z) = (x) - #define __Pyx_SET_CIMAG(z,y) __Pyx_CIMAG(z) = (y) -#endif - -/* Arithmetic.proto */ -#if CYTHON_CCOMPLEX - #define __Pyx_c_eq_double(a, b) ((a)==(b)) - #define __Pyx_c_sum_double(a, b) ((a)+(b)) - #define __Pyx_c_diff_double(a, b) ((a)-(b)) - #define __Pyx_c_prod_double(a, b) ((a)*(b)) - #define __Pyx_c_quot_double(a, b) ((a)/(b)) - #define __Pyx_c_neg_double(a) (-(a)) - #ifdef __cplusplus - #define __Pyx_c_is_zero_double(z) ((z)==(double)0) - #define __Pyx_c_conj_double(z) (::std::conj(z)) - #if 1 - #define __Pyx_c_abs_double(z) (::std::abs(z)) - #define __Pyx_c_pow_double(a, b) (::std::pow(a, b)) - #endif - #else - #define __Pyx_c_is_zero_double(z) ((z)==0) - #define __Pyx_c_conj_double(z) (conj(z)) - #if 1 - #define __Pyx_c_abs_double(z) (cabs(z)) - #define __Pyx_c_pow_double(a, b) (cpow(a, b)) - #endif - #endif -#else - static CYTHON_INLINE int __Pyx_c_eq_double(__pyx_t_double_complex, __pyx_t_double_complex); - static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_sum_double(__pyx_t_double_complex, __pyx_t_double_complex); - static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_diff_double(__pyx_t_double_complex, __pyx_t_double_complex); - static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_prod_double(__pyx_t_double_complex, __pyx_t_double_complex); - static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot_double(__pyx_t_double_complex, __pyx_t_double_complex); - static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_neg_double(__pyx_t_double_complex); - static CYTHON_INLINE int __Pyx_c_is_zero_double(__pyx_t_double_complex); - static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_conj_double(__pyx_t_double_complex); - #if 1 - static CYTHON_INLINE double __Pyx_c_abs_double(__pyx_t_double_complex); - static 
CYTHON_INLINE __pyx_t_double_complex __Pyx_c_pow_double(__pyx_t_double_complex, __pyx_t_double_complex); - #endif -#endif - -/* ToPy.proto */ -#define __pyx_PyComplex_FromComplex(z)\ - PyComplex_FromDoubles((double)__Pyx_CREAL(z),\ - (double)__Pyx_CIMAG(z)) - -/* CIntFromPy.proto */ -static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *); - -/* CIntToPy.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value); - -/* CIntToPy.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value); - -/* CIntFromPy.proto */ -static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *); - -/* FastTypeChecks.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -#define __Pyx_TypeCheck(obj, type) __Pyx_IsSubtype(Py_TYPE(obj), (PyTypeObject *)type) -static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b); -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject *type); -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *type1, PyObject *type2); -#else -#define __Pyx_TypeCheck(obj, type) PyObject_TypeCheck(obj, (PyTypeObject *)type) -#define __Pyx_PyErr_GivenExceptionMatches(err, type) PyErr_GivenExceptionMatches(err, type) -#define __Pyx_PyErr_GivenExceptionMatches2(err, type1, type2) (PyErr_GivenExceptionMatches(err, type1) || PyErr_GivenExceptionMatches(err, type2)) -#endif -#define __Pyx_PyException_Check(obj) __Pyx_TypeCheck(obj, PyExc_Exception) - -/* RaiseException.proto */ -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause); - -/* SwapException.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_ExceptionSwap(type, value, tb) __Pyx__ExceptionSwap(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx__ExceptionSwap(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#else -static CYTHON_INLINE void __Pyx_ExceptionSwap(PyObject **type, PyObject **value, PyObject **tb); -#endif - -/* PyObjectCallMethod1.proto */ -static PyObject* __Pyx_PyObject_CallMethod1(PyObject* obj, PyObject* method_name, PyObject* arg); - -/* CoroutineBase.proto */ -typedef PyObject *(*__pyx_coroutine_body_t)(PyObject *, PyThreadState *, PyObject *); -#if CYTHON_USE_EXC_INFO_STACK -#define __Pyx_ExcInfoStruct _PyErr_StackItem -#else -typedef struct { - PyObject *exc_type; - PyObject *exc_value; - PyObject *exc_traceback; -} __Pyx_ExcInfoStruct; -#endif -typedef struct { - PyObject_HEAD - __pyx_coroutine_body_t body; - PyObject *closure; - __Pyx_ExcInfoStruct gi_exc_state; - PyObject *gi_weakreflist; - PyObject *classobj; - PyObject *yieldfrom; - PyObject *gi_name; - PyObject *gi_qualname; - PyObject *gi_modulename; - PyObject *gi_code; - PyObject *gi_frame; - int resume_label; - char is_running; -} __pyx_CoroutineObject; -static __pyx_CoroutineObject *__Pyx__Coroutine_New( - PyTypeObject *type, __pyx_coroutine_body_t body, PyObject *code, PyObject *closure, - PyObject *name, PyObject *qualname, PyObject *module_name); -static __pyx_CoroutineObject *__Pyx__Coroutine_NewInit( - __pyx_CoroutineObject *gen, __pyx_coroutine_body_t body, PyObject *code, PyObject *closure, - PyObject *name, PyObject *qualname, PyObject *module_name); -static CYTHON_INLINE void __Pyx_Coroutine_ExceptionClear(__Pyx_ExcInfoStruct *self); -static int __Pyx_Coroutine_clear(PyObject *self); -static PyObject *__Pyx_Coroutine_Send(PyObject *self, PyObject *value); -static PyObject *__Pyx_Coroutine_Close(PyObject *self); -static PyObject *__Pyx_Coroutine_Throw(PyObject *gen, PyObject *args); -#if 
CYTHON_USE_EXC_INFO_STACK -#define __Pyx_Coroutine_SwapException(self) -#define __Pyx_Coroutine_ResetAndClearException(self) __Pyx_Coroutine_ExceptionClear(&(self)->gi_exc_state) -#else -#define __Pyx_Coroutine_SwapException(self) {\ - __Pyx_ExceptionSwap(&(self)->gi_exc_state.exc_type, &(self)->gi_exc_state.exc_value, &(self)->gi_exc_state.exc_traceback);\ - __Pyx_Coroutine_ResetFrameBackpointer(&(self)->gi_exc_state);\ - } -#define __Pyx_Coroutine_ResetAndClearException(self) {\ - __Pyx_ExceptionReset((self)->gi_exc_state.exc_type, (self)->gi_exc_state.exc_value, (self)->gi_exc_state.exc_traceback);\ - (self)->gi_exc_state.exc_type = (self)->gi_exc_state.exc_value = (self)->gi_exc_state.exc_traceback = NULL;\ - } -#endif -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyGen_FetchStopIterationValue(pvalue)\ - __Pyx_PyGen__FetchStopIterationValue(__pyx_tstate, pvalue) -#else -#define __Pyx_PyGen_FetchStopIterationValue(pvalue)\ - __Pyx_PyGen__FetchStopIterationValue(__Pyx_PyThreadState_Current, pvalue) -#endif -static int __Pyx_PyGen__FetchStopIterationValue(PyThreadState *tstate, PyObject **pvalue); -static CYTHON_INLINE void __Pyx_Coroutine_ResetFrameBackpointer(__Pyx_ExcInfoStruct *exc_state); - -/* PatchModuleWithCoroutine.proto */ -static PyObject* __Pyx_Coroutine_patch_module(PyObject* module, const char* py_code); - -/* PatchGeneratorABC.proto */ -static int __Pyx_patch_abc(void); - -/* Generator.proto */ -#define __Pyx_Generator_USED -static PyTypeObject *__pyx_GeneratorType = 0; -#define __Pyx_Generator_CheckExact(obj) (Py_TYPE(obj) == __pyx_GeneratorType) -#define __Pyx_Generator_New(body, code, closure, name, qualname, module_name)\ - __Pyx__Coroutine_New(__pyx_GeneratorType, body, code, closure, name, qualname, module_name) -static PyObject *__Pyx_Generator_Next(PyObject *self); -static int __pyx_Generator_init(void); - -/* CheckBinaryVersion.proto */ -static int __Pyx_check_binary_version(void); - -/* InitStrings.proto */ -static int __Pyx_InitStrings(__Pyx_StringTabEntry *t); - - -/* Module declarations from 'cython' */ - -/* Module declarations from 'fontTools.qu2cu.qu2cu' */ -static PyTypeObject *__pyx_ptype_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves = 0; -static PyTypeObject *__pyx_ptype_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_1_genexpr = 0; -static PyTypeObject *__pyx_ptype_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves = 0; -static PyTypeObject *__pyx_ptype_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_3_genexpr = 0; -static int __pyx_f_9fontTools_5qu2cu_5qu2cu_cubic_farthest_fit_inside(__pyx_t_double_complex, __pyx_t_double_complex, __pyx_t_double_complex, __pyx_t_double_complex, double); /*proto*/ -static PyObject *__pyx_f_9fontTools_5qu2cu_5qu2cu_merge_curves(PyObject *, int, int); /*proto*/ -#define __Pyx_MODULE_NAME "fontTools.qu2cu.qu2cu" -extern int __pyx_module_is_main_fontTools__qu2cu__qu2cu; -int __pyx_module_is_main_fontTools__qu2cu__qu2cu = 0; - -/* Implementation of 'fontTools.qu2cu.qu2cu' */ -static PyObject *__pyx_builtin_AttributeError; -static PyObject *__pyx_builtin_ImportError; -static PyObject *__pyx_builtin_range; -static PyObject *__pyx_builtin_ZeroDivisionError; -static PyObject *__pyx_builtin_enumerate; -static PyObject *__pyx_builtin_reversed; -static PyObject *__pyx_builtin_zip; -static PyObject *__pyx_builtin_print; -static const char __pyx_k_i[] = "i"; -static const char __pyx_k_j[] = "j"; -static const char __pyx_k_k[] = "k"; -static const char __pyx_k_p[] = "p"; -static const char __pyx_k_q[] = "q"; 
-static const char __pyx_k_u[] = "u"; -static const char __pyx_k_v[] = "v"; -static const char __pyx_k_x[] = "x"; -static const char __pyx_k_y[] = "y"; -static const char __pyx_k_on[] = "on"; -static const char __pyx_k_p0[] = "p0"; -static const char __pyx_k_p1[] = "p1"; -static const char __pyx_k_p2[] = "p2"; -static const char __pyx_k_p3[] = "p3"; -static const char __pyx_k_qq[] = "qq"; -static const char __pyx_k_ts[] = "ts"; -static const char __pyx_k_all[] = "__all__"; -static const char __pyx_k_err[] = "err"; -static const char __pyx_k_pop[] = "pop"; -static const char __pyx_k_zip[] = "zip"; -static const char __pyx_k_List[] = "List"; -static const char __pyx_k_args[] = "args"; -static const char __pyx_k_cost[] = "cost"; -static const char __pyx_k_imag[] = "imag"; -static const char __pyx_k_main[] = "__main__"; -static const char __pyx_k_math[] = "math"; -static const char __pyx_k_name[] = "__name__"; -static const char __pyx_k_off1[] = "off1"; -static const char __pyx_k_off2[] = "off2"; -static const char __pyx_k_orig[] = "orig"; -static const char __pyx_k_real[] = "real"; -static const char __pyx_k_send[] = "send"; -static const char __pyx_k_sols[] = "sols"; -static const char __pyx_k_test[] = "__test__"; -static const char __pyx_k_Point[] = "Point"; -static const char __pyx_k_Tuple[] = "Tuple"; -static const char __pyx_k_Union[] = "Union"; -static const char __pyx_k_close[] = "close"; -static const char __pyx_k_costs[] = "costs"; -static const char __pyx_k_count[] = "count"; -static const char __pyx_k_cubic[] = "cubic"; -static const char __pyx_k_curve[] = "curve"; -static const char __pyx_k_error[] = "error"; -static const char __pyx_k_float[] = "float"; -static const char __pyx_k_i_sol[] = "i_sol"; -static const char __pyx_k_print[] = "print"; -static const char __pyx_k_quads[] = "quads"; -static const char __pyx_k_range[] = "range"; -static const char __pyx_k_start[] = "start"; -static const char __pyx_k_throw[] = "throw"; -static const char __pyx_k_curves[] = "curves"; -static const char __pyx_k_cython[] = "cython"; -static const char __pyx_k_forced[] = "forced"; -static const char __pyx_k_import[] = "__import__"; -static const char __pyx_k_main_2[] = "main"; -static const char __pyx_k_p1_2_3[] = "p1_2_3"; -static const char __pyx_k_return[] = "return"; -static const char __pyx_k_splits[] = "splits"; -static const char __pyx_k_typing[] = "typing"; -static const char __pyx_k_genexpr[] = "genexpr"; -static const char __pyx_k_max_err[] = "max_err"; -static const char __pyx_k_reconst[] = "reconst"; -static const char __pyx_k_COMPILED[] = "COMPILED"; -static const char __pyx_k_Solution[] = "Solution"; -static const char __pyx_k_best_sol[] = "best_sol"; -static const char __pyx_k_is_cubic[] = "is_cubic"; -static const char __pyx_k_reversed[] = "reversed"; -static const char __pyx_k_all_cubic[] = "all_cubic"; -static const char __pyx_k_enumerate[] = "enumerate"; -static const char __pyx_k_tolerance[] = "tolerance"; -static const char __pyx_k_impossible[] = "impossible"; -static const char __pyx_k_is_complex[] = "is_complex"; -static const char __pyx_k_namedtuple[] = "namedtuple"; -static const char __pyx_k_num_points[] = "num_points"; -static const char __pyx_k_quadratics[] = "quadratics"; -static const char __pyx_k_this_count[] = "this_count"; -static const char __pyx_k_ImportError[] = "ImportError"; -static const char __pyx_k_collections[] = "collections"; -static const char __pyx_k_i_sol_count[] = "i_sol_count"; -static const char __pyx_k_i_sol_error[] = "i_sol_error"; -static 
const char __pyx_k_j_sol_count[] = "j_sol_count"; -static const char __pyx_k_j_sol_error[] = "j_sol_error"; -static const char __pyx_k_start_index[] = "start_index"; -static const char __pyx_k_num_offcurves[] = "num_offcurves"; -static const char __pyx_k_reconstructed[] = "reconstructed"; -static const char __pyx_k_AttributeError[] = "AttributeError"; -static const char __pyx_k_Original_curve[] = "Original curve:"; -static const char __pyx_k_fontTools_misc[] = "fontTools.misc"; -static const char __pyx_k_generate_curve[] = "generate_curve"; -static const char __pyx_k_splitCubicAtTC[] = "splitCubicAtTC"; -static const char __pyx_k_this_sol_count[] = "this_sol_count"; -static const char __pyx_k_fontTools_cu2qu[] = "fontTools.cu2qu"; -static const char __pyx_k_spline_to_curves[] = "spline_to_curves"; -static const char __pyx_k_ZeroDivisionError[] = "ZeroDivisionError"; -static const char __pyx_k_elevate_quadratic[] = "elevate_quadratic"; -static const char __pyx_k_cline_in_traceback[] = "cline_in_traceback"; -static const char __pyx_k_curve_to_quadratic[] = "curve_to_quadratic"; -static const char __pyx_k_reconstructed_iter[] = "reconstructed_iter"; -static const char __pyx_k_elevated_quadratics[] = "elevated_quadratics"; -static const char __pyx_k_quadratic_to_curves[] = "quadratic_to_curves"; -static const char __pyx_k_Reconstructed_curve_s[] = "Reconstructed curve(s):"; -static const char __pyx_k_fontTools_qu2cu_qu2cu[] = "fontTools.qu2cu.qu2cu"; -static const char __pyx_k_reconstruct_tolerance[] = "reconstruct_tolerance"; -static const char __pyx_k_add_implicit_on_curves[] = "add_implicit_on_curves"; -static const char __pyx_k_fontTools_cu2qu_benchmark[] = "fontTools.cu2qu.benchmark"; -static const char __pyx_k_fontTools_misc_bezierTools[] = "fontTools.misc.bezierTools"; -static const char __pyx_k_Lib_fontTools_qu2cu_qu2cu_py[] = "Lib/fontTools/qu2cu/qu2cu.py"; -static const char __pyx_k_spline_to_curves_locals_genexpr[] = "spline_to_curves..genexpr"; -static const char __pyx_k_One_random_cubic_turned_into_d_q[] = "One random cubic turned into %d quadratics."; -static const char __pyx_k_Those_quadratics_turned_back_int[] = "Those quadratics turned back into %d cubics. "; -static const char __pyx_k_cu2qu_tolerance_g_qu2cu_toleranc[] = "cu2qu tolerance %g. 
qu2cu tolerance %g."; -static const char __pyx_k_quadratic_spline_requires_at_lea[] = "quadratic spline requires at least 3 points"; -static const char __pyx_k_quadratic_to_curves_locals_genex[] = "quadratic_to_curves..genexpr"; -static PyObject *__pyx_n_s_AttributeError; -static PyObject *__pyx_n_s_COMPILED; -static PyObject *__pyx_n_s_ImportError; -static PyObject *__pyx_kp_s_Lib_fontTools_qu2cu_qu2cu_py; -static PyObject *__pyx_n_s_List; -static PyObject *__pyx_kp_u_One_random_cubic_turned_into_d_q; -static PyObject *__pyx_kp_u_Original_curve; -static PyObject *__pyx_n_s_Point; -static PyObject *__pyx_kp_u_Reconstructed_curve_s; -static PyObject *__pyx_n_s_Solution; -static PyObject *__pyx_n_u_Solution; -static PyObject *__pyx_kp_u_Those_quadratics_turned_back_int; -static PyObject *__pyx_n_s_Tuple; -static PyObject *__pyx_n_s_Union; -static PyObject *__pyx_n_s_ZeroDivisionError; -static PyObject *__pyx_n_s_add_implicit_on_curves; -static PyObject *__pyx_n_s_all; -static PyObject *__pyx_n_s_all_cubic; -static PyObject *__pyx_n_s_args; -static PyObject *__pyx_n_s_best_sol; -static PyObject *__pyx_n_s_cline_in_traceback; -static PyObject *__pyx_n_s_close; -static PyObject *__pyx_n_s_collections; -static PyObject *__pyx_n_s_cost; -static PyObject *__pyx_n_s_costs; -static PyObject *__pyx_n_s_count; -static PyObject *__pyx_kp_u_cu2qu_tolerance_g_qu2cu_toleranc; -static PyObject *__pyx_n_s_cubic; -static PyObject *__pyx_n_s_curve; -static PyObject *__pyx_n_s_curve_to_quadratic; -static PyObject *__pyx_n_s_curves; -static PyObject *__pyx_n_s_cython; -static PyObject *__pyx_n_s_elevate_quadratic; -static PyObject *__pyx_n_s_elevated_quadratics; -static PyObject *__pyx_n_s_enumerate; -static PyObject *__pyx_n_s_err; -static PyObject *__pyx_n_s_error; -static PyObject *__pyx_n_u_error; -static PyObject *__pyx_n_u_float; -static PyObject *__pyx_n_s_fontTools_cu2qu; -static PyObject *__pyx_n_s_fontTools_cu2qu_benchmark; -static PyObject *__pyx_n_s_fontTools_misc; -static PyObject *__pyx_n_s_fontTools_misc_bezierTools; -static PyObject *__pyx_n_s_fontTools_qu2cu_qu2cu; -static PyObject *__pyx_n_s_forced; -static PyObject *__pyx_n_s_generate_curve; -static PyObject *__pyx_n_s_genexpr; -static PyObject *__pyx_n_s_i; -static PyObject *__pyx_n_s_i_sol; -static PyObject *__pyx_n_s_i_sol_count; -static PyObject *__pyx_n_s_i_sol_error; -static PyObject *__pyx_n_s_imag; -static PyObject *__pyx_n_s_import; -static PyObject *__pyx_n_s_impossible; -static PyObject *__pyx_n_s_is_complex; -static PyObject *__pyx_n_s_is_cubic; -static PyObject *__pyx_n_u_is_cubic; -static PyObject *__pyx_n_s_j; -static PyObject *__pyx_n_s_j_sol_count; -static PyObject *__pyx_n_s_j_sol_error; -static PyObject *__pyx_n_s_k; -static PyObject *__pyx_n_s_main; -static PyObject *__pyx_n_u_main; -static PyObject *__pyx_n_s_main_2; -static PyObject *__pyx_n_s_math; -static PyObject *__pyx_n_s_max_err; -static PyObject *__pyx_n_s_name; -static PyObject *__pyx_n_s_namedtuple; -static PyObject *__pyx_n_s_num_offcurves; -static PyObject *__pyx_n_s_num_points; -static PyObject *__pyx_n_u_num_points; -static PyObject *__pyx_n_s_off1; -static PyObject *__pyx_n_s_off2; -static PyObject *__pyx_n_s_on; -static PyObject *__pyx_n_s_orig; -static PyObject *__pyx_n_s_p; -static PyObject *__pyx_n_s_p0; -static PyObject *__pyx_n_s_p1; -static PyObject *__pyx_n_s_p1_2_3; -static PyObject *__pyx_n_s_p2; -static PyObject *__pyx_n_s_p3; -static PyObject *__pyx_n_s_pop; -static PyObject *__pyx_n_s_print; -static PyObject *__pyx_n_s_q; -static PyObject 
*__pyx_n_s_qq; -static PyObject *__pyx_kp_u_quadratic_spline_requires_at_lea; -static PyObject *__pyx_n_s_quadratic_to_curves; -static PyObject *__pyx_n_u_quadratic_to_curves; -static PyObject *__pyx_n_s_quadratic_to_curves_locals_genex; -static PyObject *__pyx_n_s_quadratics; -static PyObject *__pyx_n_s_quads; -static PyObject *__pyx_n_s_range; -static PyObject *__pyx_n_s_real; -static PyObject *__pyx_n_s_reconst; -static PyObject *__pyx_n_s_reconstruct_tolerance; -static PyObject *__pyx_n_s_reconstructed; -static PyObject *__pyx_n_s_reconstructed_iter; -static PyObject *__pyx_n_s_return; -static PyObject *__pyx_n_s_reversed; -static PyObject *__pyx_n_s_send; -static PyObject *__pyx_n_s_sols; -static PyObject *__pyx_n_s_spline_to_curves; -static PyObject *__pyx_n_s_spline_to_curves_locals_genexpr; -static PyObject *__pyx_n_s_splitCubicAtTC; -static PyObject *__pyx_n_s_splits; -static PyObject *__pyx_n_s_start; -static PyObject *__pyx_n_s_start_index; -static PyObject *__pyx_n_u_start_index; -static PyObject *__pyx_n_s_test; -static PyObject *__pyx_n_s_this_count; -static PyObject *__pyx_n_s_this_sol_count; -static PyObject *__pyx_n_s_throw; -static PyObject *__pyx_n_s_tolerance; -static PyObject *__pyx_n_s_ts; -static PyObject *__pyx_n_s_typing; -static PyObject *__pyx_n_s_u; -static PyObject *__pyx_n_s_v; -static PyObject *__pyx_n_s_x; -static PyObject *__pyx_n_s_y; -static PyObject *__pyx_n_s_zip; -static PyObject *__pyx_pf_9fontTools_5qu2cu_5qu2cu_elevate_quadratic(CYTHON_UNUSED PyObject *__pyx_self, __pyx_t_double_complex __pyx_v_p0, __pyx_t_double_complex __pyx_v_p1, __pyx_t_double_complex __pyx_v_p2); /* proto */ -static PyObject *__pyx_pf_9fontTools_5qu2cu_5qu2cu_2add_implicit_on_curves(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_p); /* proto */ -static PyObject *__pyx_pf_9fontTools_5qu2cu_5qu2cu_19quadratic_to_curves_8genexpr3_genexpr(PyObject *__pyx_self); /* proto */ -static PyObject *__pyx_pf_9fontTools_5qu2cu_5qu2cu_4quadratic_to_curves(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_quads, double __pyx_v_max_err, PyObject *__pyx_v_all_cubic); /* proto */ -static PyObject *__pyx_pf_9fontTools_5qu2cu_5qu2cu_16spline_to_curves_genexpr(PyObject *__pyx_self); /* proto */ -static PyObject *__pyx_pf_9fontTools_5qu2cu_5qu2cu_6spline_to_curves(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_q, PyObject *__pyx_v_costs, double __pyx_v_tolerance, int __pyx_v_all_cubic); /* proto */ -static PyObject *__pyx_pf_9fontTools_5qu2cu_5qu2cu_8main(CYTHON_UNUSED PyObject *__pyx_self); /* proto */ -static PyObject *__pyx_tp_new_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_1_genexpr(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_3_genexpr(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static __Pyx_CachedCFunction __pyx_umethod_PyList_Type_pop = {0, &__pyx_n_s_pop, 0, 0, 0}; -static PyObject *__pyx_int_0; -static PyObject *__pyx_int_1; -static PyObject *__pyx_int_3; -static PyObject *__pyx_slice_; -static PyObject *__pyx_tuple__2; -static PyObject *__pyx_tuple__3; -static PyObject *__pyx_tuple__5; -static PyObject *__pyx_tuple__7; -static PyObject *__pyx_tuple__9; -static PyObject *__pyx_tuple__11; 
-static PyObject *__pyx_codeobj__4; -static PyObject *__pyx_codeobj__6; -static PyObject *__pyx_codeobj__8; -static PyObject *__pyx_codeobj__10; -static PyObject *__pyx_codeobj__12; -/* Late includes */ - -/* "fontTools/qu2cu/qu2cu.py":53 - * ) - * @cython.locals(mid=cython.complex, deriv3=cython.complex) - * def cubic_farthest_fit_inside(p0, p1, p2, p3, tolerance): # <<<<<<<<<<<<<< - * """Check if a cubic Bezier lies within a given distance of the origin. - * - */ - -static int __pyx_f_9fontTools_5qu2cu_5qu2cu_cubic_farthest_fit_inside(__pyx_t_double_complex __pyx_v_p0, __pyx_t_double_complex __pyx_v_p1, __pyx_t_double_complex __pyx_v_p2, __pyx_t_double_complex __pyx_v_p3, double __pyx_v_tolerance) { - __pyx_t_double_complex __pyx_v_mid; - __pyx_t_double_complex __pyx_v_deriv3; - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - __Pyx_RefNannySetupContext("cubic_farthest_fit_inside", 0); - - /* "fontTools/qu2cu/qu2cu.py":72 - * """ - * # First check p2 then p1, as p2 has higher error early on. - * if abs(p2) <= tolerance and abs(p1) <= tolerance: # <<<<<<<<<<<<<< - * return True - * - */ - __pyx_t_2 = ((__Pyx_c_abs_double(__pyx_v_p2) <= __pyx_v_tolerance) != 0); - if (__pyx_t_2) { - } else { - __pyx_t_1 = __pyx_t_2; - goto __pyx_L4_bool_binop_done; - } - __pyx_t_2 = ((__Pyx_c_abs_double(__pyx_v_p1) <= __pyx_v_tolerance) != 0); - __pyx_t_1 = __pyx_t_2; - __pyx_L4_bool_binop_done:; - if (__pyx_t_1) { - - /* "fontTools/qu2cu/qu2cu.py":73 - * # First check p2 then p1, as p2 has higher error early on. - * if abs(p2) <= tolerance and abs(p1) <= tolerance: - * return True # <<<<<<<<<<<<<< - * - * # Split. - */ - __pyx_r = 1; - goto __pyx_L0; - - /* "fontTools/qu2cu/qu2cu.py":72 - * """ - * # First check p2 then p1, as p2 has higher error early on. - * if abs(p2) <= tolerance and abs(p1) <= tolerance: # <<<<<<<<<<<<<< - * return True - * - */ - } - - /* "fontTools/qu2cu/qu2cu.py":76 - * - * # Split. - * mid = (p0 + 3 * (p1 + p2) + p3) * 0.125 # <<<<<<<<<<<<<< - * if abs(mid) > tolerance: - * return False - */ - __pyx_v_mid = __Pyx_c_prod_double(__Pyx_c_sum_double(__Pyx_c_sum_double(__pyx_v_p0, __Pyx_c_prod_double(__pyx_t_double_complex_from_parts(3, 0), __Pyx_c_sum_double(__pyx_v_p1, __pyx_v_p2))), __pyx_v_p3), __pyx_t_double_complex_from_parts(0.125, 0)); - - /* "fontTools/qu2cu/qu2cu.py":77 - * # Split. - * mid = (p0 + 3 * (p1 + p2) + p3) * 0.125 - * if abs(mid) > tolerance: # <<<<<<<<<<<<<< - * return False - * deriv3 = (p3 + p2 - p1 - p0) * 0.125 - */ - __pyx_t_1 = ((__Pyx_c_abs_double(__pyx_v_mid) > __pyx_v_tolerance) != 0); - if (__pyx_t_1) { - - /* "fontTools/qu2cu/qu2cu.py":78 - * mid = (p0 + 3 * (p1 + p2) + p3) * 0.125 - * if abs(mid) > tolerance: - * return False # <<<<<<<<<<<<<< - * deriv3 = (p3 + p2 - p1 - p0) * 0.125 - * return cubic_farthest_fit_inside( - */ - __pyx_r = 0; - goto __pyx_L0; - - /* "fontTools/qu2cu/qu2cu.py":77 - * # Split. 
- * mid = (p0 + 3 * (p1 + p2) + p3) * 0.125 - * if abs(mid) > tolerance: # <<<<<<<<<<<<<< - * return False - * deriv3 = (p3 + p2 - p1 - p0) * 0.125 - */ - } - - /* "fontTools/qu2cu/qu2cu.py":79 - * if abs(mid) > tolerance: - * return False - * deriv3 = (p3 + p2 - p1 - p0) * 0.125 # <<<<<<<<<<<<<< - * return cubic_farthest_fit_inside( - * p0, (p0 + p1) * 0.5, mid - deriv3, mid, tolerance - */ - __pyx_v_deriv3 = __Pyx_c_prod_double(__Pyx_c_diff_double(__Pyx_c_diff_double(__Pyx_c_sum_double(__pyx_v_p3, __pyx_v_p2), __pyx_v_p1), __pyx_v_p0), __pyx_t_double_complex_from_parts(0.125, 0)); - - /* "fontTools/qu2cu/qu2cu.py":80 - * return False - * deriv3 = (p3 + p2 - p1 - p0) * 0.125 - * return cubic_farthest_fit_inside( # <<<<<<<<<<<<<< - * p0, (p0 + p1) * 0.5, mid - deriv3, mid, tolerance - * ) and cubic_farthest_fit_inside(mid, mid + deriv3, (p2 + p3) * 0.5, p3, tolerance) - */ - __pyx_t_4 = __pyx_f_9fontTools_5qu2cu_5qu2cu_cubic_farthest_fit_inside(__pyx_v_p0, __Pyx_c_prod_double(__Pyx_c_sum_double(__pyx_v_p0, __pyx_v_p1), __pyx_t_double_complex_from_parts(0.5, 0)), __Pyx_c_diff_double(__pyx_v_mid, __pyx_v_deriv3), __pyx_v_mid, __pyx_v_tolerance); - if (__pyx_t_4) { - } else { - __pyx_t_3 = __pyx_t_4; - goto __pyx_L7_bool_binop_done; - } - - /* "fontTools/qu2cu/qu2cu.py":82 - * return cubic_farthest_fit_inside( - * p0, (p0 + p1) * 0.5, mid - deriv3, mid, tolerance - * ) and cubic_farthest_fit_inside(mid, mid + deriv3, (p2 + p3) * 0.5, p3, tolerance) # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_4 = __pyx_f_9fontTools_5qu2cu_5qu2cu_cubic_farthest_fit_inside(__pyx_v_mid, __Pyx_c_sum_double(__pyx_v_mid, __pyx_v_deriv3), __Pyx_c_prod_double(__Pyx_c_sum_double(__pyx_v_p2, __pyx_v_p3), __pyx_t_double_complex_from_parts(0.5, 0)), __pyx_v_p3, __pyx_v_tolerance); - __pyx_t_3 = __pyx_t_4; - __pyx_L7_bool_binop_done:; - __pyx_r = __pyx_t_3; - goto __pyx_L0; - - /* "fontTools/qu2cu/qu2cu.py":53 - * ) - * @cython.locals(mid=cython.complex, deriv3=cython.complex) - * def cubic_farthest_fit_inside(p0, p1, p2, p3, tolerance): # <<<<<<<<<<<<<< - * """Check if a cubic Bezier lies within a given distance of the origin. 
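[Reading aid] The block above compiles qu2cu.py lines 53-82 statement by statement. Reassembled from the source lines quoted in the comments, the same recursive tolerance check reads as plain Python (points are complex numbers, as in the quoted source); this is a sketch for orientation, not the compiled artifact:

def cubic_farthest_fit_inside(p0, p1, p2, p3, tolerance):
    """Check if a cubic Bezier lies within a given distance of the origin."""
    # First check p2 then p1, as p2 has higher error early on.
    if abs(p2) <= tolerance and abs(p1) <= tolerance:
        return True
    # Split at t = 0.5: the midpoint must itself be within tolerance ...
    mid = (p0 + 3 * (p1 + p2) + p3) * 0.125
    if abs(mid) > tolerance:
        return False
    # ... and both halves of the de Casteljau subdivision must fit.
    deriv3 = (p3 + p2 - p1 - p0) * 0.125
    return cubic_farthest_fit_inside(
        p0, (p0 + p1) * 0.5, mid - deriv3, mid, tolerance
    ) and cubic_farthest_fit_inside(mid, mid + deriv3, (p2 + p3) * 0.5, p3, tolerance)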
- * - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/qu2cu/qu2cu.py":91 - * p1_2_3=cython.complex, - * ) - * def elevate_quadratic(p0, p1, p2): # <<<<<<<<<<<<<< - * """Given a quadratic bezier curve, return its degree-elevated cubic.""" - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_5qu2cu_5qu2cu_1elevate_quadratic(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_9fontTools_5qu2cu_5qu2cu_elevate_quadratic[] = "elevate_quadratic(double complex p0, double complex p1, double complex p2)\nGiven a quadratic bezier curve, return its degree-elevated cubic."; -static PyMethodDef __pyx_mdef_9fontTools_5qu2cu_5qu2cu_1elevate_quadratic = {"elevate_quadratic", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_9fontTools_5qu2cu_5qu2cu_1elevate_quadratic, METH_VARARGS|METH_KEYWORDS, __pyx_doc_9fontTools_5qu2cu_5qu2cu_elevate_quadratic}; -static PyObject *__pyx_pw_9fontTools_5qu2cu_5qu2cu_1elevate_quadratic(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - __pyx_t_double_complex __pyx_v_p0; - __pyx_t_double_complex __pyx_v_p1; - __pyx_t_double_complex __pyx_v_p2; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("elevate_quadratic (wrapper)", 0); - { - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_p0,&__pyx_n_s_p1,&__pyx_n_s_p2,0}; - PyObject* values[3] = {0,0,0}; - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args; - const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); - switch (pos_args) { - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = PyDict_Size(__pyx_kwds); - switch (pos_args) { - case 0: - if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_p0)) != 0)) kw_args--; - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_p1)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("elevate_quadratic", 1, 3, 3, 1); __PYX_ERR(0, 91, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_p2)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("elevate_quadratic", 1, 3, 3, 2); __PYX_ERR(0, 91, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "elevate_quadratic") < 0)) __PYX_ERR(0, 91, __pyx_L3_error) - } - } else if (PyTuple_GET_SIZE(__pyx_args) != 3) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - } - __pyx_v_p0 = __Pyx_PyComplex_As___pyx_t_double_complex(values[0]); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 91, __pyx_L3_error) - __pyx_v_p1 = __Pyx_PyComplex_As___pyx_t_double_complex(values[1]); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 91, __pyx_L3_error) - __pyx_v_p2 = __Pyx_PyComplex_As___pyx_t_double_complex(values[2]); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 91, __pyx_L3_error) - } - goto __pyx_L4_argument_unpacking_done; - 
__pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("elevate_quadratic", 1, 3, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 91, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("fontTools.qu2cu.qu2cu.elevate_quadratic", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_5qu2cu_5qu2cu_elevate_quadratic(__pyx_self, __pyx_v_p0, __pyx_v_p1, __pyx_v_p2); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_5qu2cu_5qu2cu_elevate_quadratic(CYTHON_UNUSED PyObject *__pyx_self, __pyx_t_double_complex __pyx_v_p0, __pyx_t_double_complex __pyx_v_p1, __pyx_t_double_complex __pyx_v_p2) { - __pyx_t_double_complex __pyx_v_p1_2_3; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - __pyx_t_double_complex __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("elevate_quadratic", 0); - - /* "fontTools/qu2cu/qu2cu.py":95 - * - * # https://pomax.github.io/bezierinfo/#reordering - * p1_2_3 = p1 * (2 / 3) # <<<<<<<<<<<<<< - * return ( - * p0, - */ - __pyx_v_p1_2_3 = __Pyx_c_prod_double(__pyx_v_p1, __pyx_t_double_complex_from_parts((2.0 / 3.0), 0)); - - /* "fontTools/qu2cu/qu2cu.py":96 - * # https://pomax.github.io/bezierinfo/#reordering - * p1_2_3 = p1 * (2 / 3) - * return ( # <<<<<<<<<<<<<< - * p0, - * (p0 * (1 / 3) + p1_2_3), - */ - __Pyx_XDECREF(__pyx_r); - - /* "fontTools/qu2cu/qu2cu.py":97 - * p1_2_3 = p1 * (2 / 3) - * return ( - * p0, # <<<<<<<<<<<<<< - * (p0 * (1 / 3) + p1_2_3), - * (p2 * (1 / 3) + p1_2_3), - */ - __pyx_t_1 = __pyx_PyComplex_FromComplex(__pyx_v_p0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 97, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - - /* "fontTools/qu2cu/qu2cu.py":98 - * return ( - * p0, - * (p0 * (1 / 3) + p1_2_3), # <<<<<<<<<<<<<< - * (p2 * (1 / 3) + p1_2_3), - * p2, - */ - __pyx_t_2 = __Pyx_c_sum_double(__Pyx_c_prod_double(__pyx_v_p0, __pyx_t_double_complex_from_parts((1.0 / 3.0), 0)), __pyx_v_p1_2_3); - __pyx_t_3 = __pyx_PyComplex_FromComplex(__pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 98, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - - /* "fontTools/qu2cu/qu2cu.py":99 - * p0, - * (p0 * (1 / 3) + p1_2_3), - * (p2 * (1 / 3) + p1_2_3), # <<<<<<<<<<<<<< - * p2, - * ) - */ - __pyx_t_2 = __Pyx_c_sum_double(__Pyx_c_prod_double(__pyx_v_p2, __pyx_t_double_complex_from_parts((1.0 / 3.0), 0)), __pyx_v_p1_2_3); - __pyx_t_4 = __pyx_PyComplex_FromComplex(__pyx_t_2); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 99, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - - /* "fontTools/qu2cu/qu2cu.py":100 - * (p0 * (1 / 3) + p1_2_3), - * (p2 * (1 / 3) + p1_2_3), - * p2, # <<<<<<<<<<<<<< - * ) - * - */ - __pyx_t_5 = __pyx_PyComplex_FromComplex(__pyx_v_p2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 100, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - - /* "fontTools/qu2cu/qu2cu.py":97 - * p1_2_3 = p1 * (2 / 3) - * return ( - * p0, # <<<<<<<<<<<<<< - * (p0 * (1 / 3) + p1_2_3), - * (p2 * (1 / 3) + p1_2_3), - */ - __pyx_t_6 = PyTuple_New(4); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 97, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_6, 0, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_6, 1, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_4); - 
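[Reading aid] The complex arithmetic being boxed into Python objects here implements the degree-elevation rule quoted from qu2cu.py lines 91-100; mirroring the quoted source, the whole function in plain Python is just:

def elevate_quadratic(p0, p1, p2):
    """Given a quadratic bezier curve, return its degree-elevated cubic."""
    # https://pomax.github.io/bezierinfo/#reordering
    p1_2_3 = p1 * (2 / 3)
    return (
        p0,
        p0 * (1 / 3) + p1_2_3,
        p2 * (1 / 3) + p1_2_3,
        p2,
    )

Degree elevation leaves the traced path unchanged; the cubic's two inner control points are weighted averages of the quadratic's single off-curve point and its endpoints.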
PyTuple_SET_ITEM(__pyx_t_6, 2, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_6, 3, __pyx_t_5); - __pyx_t_1 = 0; - __pyx_t_3 = 0; - __pyx_t_4 = 0; - __pyx_t_5 = 0; - __pyx_r = __pyx_t_6; - __pyx_t_6 = 0; - goto __pyx_L0; - - /* "fontTools/qu2cu/qu2cu.py":91 - * p1_2_3=cython.complex, - * ) - * def elevate_quadratic(p0, p1, p2): # <<<<<<<<<<<<<< - * """Given a quadratic bezier curve, return its degree-elevated cubic.""" - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_AddTraceback("fontTools.qu2cu.qu2cu.elevate_quadratic", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/qu2cu/qu2cu.py":118 - * p3=cython.complex, - * ) - * def merge_curves(curves, start, n): # <<<<<<<<<<<<<< - * """Given a cubic-Bezier spline, reconstruct one cubic-Bezier - * that has the same endpoints and tangents and approximates - */ - -static PyObject *__pyx_f_9fontTools_5qu2cu_5qu2cu_merge_curves(PyObject *__pyx_v_curves, int __pyx_v_start, int __pyx_v_n) { - int __pyx_v_k; - double __pyx_v_prod_ratio; - double __pyx_v_sum_ratio; - double __pyx_v_ratio; - __pyx_t_double_complex __pyx_v_p0; - __pyx_t_double_complex __pyx_v_p1; - __pyx_t_double_complex __pyx_v_p2; - __pyx_t_double_complex __pyx_v_p3; - PyObject *__pyx_v_ts = NULL; - PyObject *__pyx_v_ck = NULL; - PyObject *__pyx_v_c_before = NULL; - PyObject *__pyx_v_curve = NULL; - double __pyx_7genexpr__pyx_v_t; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - int __pyx_t_5; - long __pyx_t_6; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - int __pyx_t_9; - PyObject *__pyx_t_10 = NULL; - double __pyx_t_11; - int __pyx_t_12; - Py_ssize_t __pyx_t_13; - __pyx_t_double_complex __pyx_t_14; - PyObject *__pyx_t_15 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("merge_curves", 0); - - /* "fontTools/qu2cu/qu2cu.py":124 - * - * # Reconstruct the t values of the cut segments - * prod_ratio = 1.0 # <<<<<<<<<<<<<< - * sum_ratio = 1.0 - * ts = [1] - */ - __pyx_v_prod_ratio = 1.0; - - /* "fontTools/qu2cu/qu2cu.py":125 - * # Reconstruct the t values of the cut segments - * prod_ratio = 1.0 - * sum_ratio = 1.0 # <<<<<<<<<<<<<< - * ts = [1] - * for k in range(1, n): - */ - __pyx_v_sum_ratio = 1.0; - - /* "fontTools/qu2cu/qu2cu.py":126 - * prod_ratio = 1.0 - * sum_ratio = 1.0 - * ts = [1] # <<<<<<<<<<<<<< - * for k in range(1, n): - * ck = curves[start + k] - */ - __pyx_t_1 = PyList_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 126, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_int_1); - __Pyx_GIVEREF(__pyx_int_1); - PyList_SET_ITEM(__pyx_t_1, 0, __pyx_int_1); - __pyx_v_ts = ((PyObject*)__pyx_t_1); - __pyx_t_1 = 0; - - /* "fontTools/qu2cu/qu2cu.py":127 - * sum_ratio = 1.0 - * ts = [1] - * for k in range(1, n): # <<<<<<<<<<<<<< - * ck = curves[start + k] - * c_before = curves[start + k - 1] - */ - __pyx_t_2 = __pyx_v_n; - __pyx_t_3 = __pyx_t_2; - for (__pyx_t_4 = 1; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_k = __pyx_t_4; - - /* "fontTools/qu2cu/qu2cu.py":128 - * ts = [1] - * for k in range(1, n): - * ck = curves[start + k] # <<<<<<<<<<<<<< - * c_before = curves[start + k - 1] - * - */ - 
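[Reading aid] The generated loop below walks qu2cu.py lines 124-154 one statement at a time. As orientation before reading it, here is the whole of merge_curves reassembled from the source comments quoted below (the docstring is abridged to the quoted lines); a sketch, not the compiled artifact:

def merge_curves(curves, start, n):
    """Given a cubic-Bezier spline, reconstruct one cubic-Bezier
    that has the same endpoints and tangents."""
    # Reconstruct the t values of the cut segments
    prod_ratio = 1.0
    sum_ratio = 1.0
    ts = [1]
    for k in range(1, n):
        ck = curves[start + k]
        c_before = curves[start + k - 1]
        # |t_(k+1) - t_k| / |t_k - t_(k - 1)| = ratio
        assert ck[0] == c_before[3]
        ratio = abs(ck[1] - ck[0]) / abs(c_before[3] - c_before[2])
        prod_ratio *= ratio
        sum_ratio += prod_ratio
        ts.append(sum_ratio)
    # (t(n) - t(n - 1)) / (t_(1) - t(0)) = prod_ratio
    ts = [t / sum_ratio for t in ts[:-1]]
    p0 = curves[start][0]
    p1 = curves[start][1]
    p2 = curves[start + n - 1][2]
    p3 = curves[start + n - 1][3]
    # Build the curve by scaling the control-points.
    p1 = p0 + (p1 - p0) / (ts[0] if ts else 1)
    p2 = p3 + (p2 - p3) / ((1 - ts[-1]) if ts else 1)
    curve = (p0, p1, p2, p3)
    return curve, ts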
__pyx_t_5 = (__pyx_v_start + __pyx_v_k); - __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_curves, __pyx_t_5, int, 1, __Pyx_PyInt_From_int, 0, 1, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 128, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_XDECREF_SET(__pyx_v_ck, __pyx_t_1); - __pyx_t_1 = 0; - - /* "fontTools/qu2cu/qu2cu.py":129 - * for k in range(1, n): - * ck = curves[start + k] - * c_before = curves[start + k - 1] # <<<<<<<<<<<<<< - * - * # |t_(k+1) - t_k| / |t_k - t_(k - 1)| = ratio - */ - __pyx_t_6 = ((__pyx_v_start + __pyx_v_k) - 1); - __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_curves, __pyx_t_6, long, 1, __Pyx_PyInt_From_long, 0, 1, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 129, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_XDECREF_SET(__pyx_v_c_before, __pyx_t_1); - __pyx_t_1 = 0; - - /* "fontTools/qu2cu/qu2cu.py":132 - * - * # |t_(k+1) - t_k| / |t_k - t_(k - 1)| = ratio - * assert ck[0] == c_before[3] # <<<<<<<<<<<<<< - * ratio = abs(ck[1] - ck[0]) / abs(c_before[3] - c_before[2]) - * - */ - #ifndef CYTHON_WITHOUT_ASSERTIONS - if (unlikely(__pyx_assertions_enabled())) { - __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_ck, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 132, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_7 = __Pyx_GetItemInt(__pyx_v_c_before, 3, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 132, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_8 = PyObject_RichCompare(__pyx_t_1, __pyx_t_7, Py_EQ); __Pyx_XGOTREF(__pyx_t_8); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 132, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_9 = __Pyx_PyObject_IsTrue(__pyx_t_8); if (unlikely(__pyx_t_9 < 0)) __PYX_ERR(0, 132, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - if (unlikely(!__pyx_t_9)) { - PyErr_SetNone(PyExc_AssertionError); - __PYX_ERR(0, 132, __pyx_L1_error) - } - } - #endif - - /* "fontTools/qu2cu/qu2cu.py":133 - * # |t_(k+1) - t_k| / |t_k - t_(k - 1)| = ratio - * assert ck[0] == c_before[3] - * ratio = abs(ck[1] - ck[0]) / abs(c_before[3] - c_before[2]) # <<<<<<<<<<<<<< - * - * prod_ratio *= ratio - */ - __pyx_t_8 = __Pyx_GetItemInt(__pyx_v_ck, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 133, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_7 = __Pyx_GetItemInt(__pyx_v_ck, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 133, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_1 = PyNumber_Subtract(__pyx_t_8, __pyx_t_7); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 133, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_7 = __Pyx_PyNumber_Absolute(__pyx_t_1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 133, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_c_before, 3, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 133, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_8 = __Pyx_GetItemInt(__pyx_v_c_before, 2, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 133, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_10 = PyNumber_Subtract(__pyx_t_1, __pyx_t_8); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 133, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_8 = 
__Pyx_PyNumber_Absolute(__pyx_t_10); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 133, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_t_10 = __Pyx_PyNumber_Divide(__pyx_t_7, __pyx_t_8); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 133, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_11 = __pyx_PyFloat_AsDouble(__pyx_t_10); if (unlikely((__pyx_t_11 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 133, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_v_ratio = __pyx_t_11; - - /* "fontTools/qu2cu/qu2cu.py":135 - * ratio = abs(ck[1] - ck[0]) / abs(c_before[3] - c_before[2]) - * - * prod_ratio *= ratio # <<<<<<<<<<<<<< - * sum_ratio += prod_ratio - * ts.append(sum_ratio) - */ - __pyx_v_prod_ratio = (__pyx_v_prod_ratio * __pyx_v_ratio); - - /* "fontTools/qu2cu/qu2cu.py":136 - * - * prod_ratio *= ratio - * sum_ratio += prod_ratio # <<<<<<<<<<<<<< - * ts.append(sum_ratio) - * - */ - __pyx_v_sum_ratio = (__pyx_v_sum_ratio + __pyx_v_prod_ratio); - - /* "fontTools/qu2cu/qu2cu.py":137 - * prod_ratio *= ratio - * sum_ratio += prod_ratio - * ts.append(sum_ratio) # <<<<<<<<<<<<<< - * - * # (t(n) - t(n - 1)) / (t_(1) - t(0)) = prod_ratio - */ - __pyx_t_10 = PyFloat_FromDouble(__pyx_v_sum_ratio); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 137, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __pyx_t_12 = __Pyx_PyList_Append(__pyx_v_ts, __pyx_t_10); if (unlikely(__pyx_t_12 == ((int)-1))) __PYX_ERR(0, 137, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - } - - /* "fontTools/qu2cu/qu2cu.py":141 - * # (t(n) - t(n - 1)) / (t_(1) - t(0)) = prod_ratio - * - * ts = [t / sum_ratio for t in ts[:-1]] # <<<<<<<<<<<<<< - * - * p0 = curves[start][0] - */ - { /* enter inner scope */ - __pyx_t_10 = PyList_New(0); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 141, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __pyx_t_8 = __Pyx_PyList_GetSlice(__pyx_v_ts, 0, -1L); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 141, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_7 = __pyx_t_8; __Pyx_INCREF(__pyx_t_7); __pyx_t_13 = 0; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - for (;;) { - if (__pyx_t_13 >= PyList_GET_SIZE(__pyx_t_7)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_8 = PyList_GET_ITEM(__pyx_t_7, __pyx_t_13); __Pyx_INCREF(__pyx_t_8); __pyx_t_13++; if (unlikely(0 < 0)) __PYX_ERR(0, 141, __pyx_L1_error) - #else - __pyx_t_8 = PySequence_ITEM(__pyx_t_7, __pyx_t_13); __pyx_t_13++; if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 141, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - #endif - __pyx_t_11 = __pyx_PyFloat_AsDouble(__pyx_t_8); if (unlikely((__pyx_t_11 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 141, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_7genexpr__pyx_v_t = __pyx_t_11; - if (unlikely(__pyx_v_sum_ratio == 0)) { - PyErr_SetString(PyExc_ZeroDivisionError, "float division"); - __PYX_ERR(0, 141, __pyx_L1_error) - } - __pyx_t_8 = PyFloat_FromDouble((__pyx_7genexpr__pyx_v_t / __pyx_v_sum_ratio)); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 141, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - if (unlikely(__Pyx_ListComp_Append(__pyx_t_10, (PyObject*)__pyx_t_8))) __PYX_ERR(0, 141, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - } - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - } /* exit inner scope */ - __Pyx_DECREF_SET(__pyx_v_ts, ((PyObject*)__pyx_t_10)); - __pyx_t_10 = 0; - - /* "fontTools/qu2cu/qu2cu.py":143 - * ts = [t / sum_ratio 
for t in ts[:-1]] - * - * p0 = curves[start][0] # <<<<<<<<<<<<<< - * p1 = curves[start][1] - * p2 = curves[start + n - 1][2] - */ - __pyx_t_10 = __Pyx_GetItemInt(__pyx_v_curves, __pyx_v_start, int, 1, __Pyx_PyInt_From_int, 0, 1, 1); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 143, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __pyx_t_7 = __Pyx_GetItemInt(__pyx_t_10, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 143, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_t_14 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_7); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 143, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_v_p0 = __pyx_t_14; - - /* "fontTools/qu2cu/qu2cu.py":144 - * - * p0 = curves[start][0] - * p1 = curves[start][1] # <<<<<<<<<<<<<< - * p2 = curves[start + n - 1][2] - * p3 = curves[start + n - 1][3] - */ - __pyx_t_7 = __Pyx_GetItemInt(__pyx_v_curves, __pyx_v_start, int, 1, __Pyx_PyInt_From_int, 0, 1, 1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 144, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_10 = __Pyx_GetItemInt(__pyx_t_7, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 144, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_14 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_10); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 144, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_v_p1 = __pyx_t_14; - - /* "fontTools/qu2cu/qu2cu.py":145 - * p0 = curves[start][0] - * p1 = curves[start][1] - * p2 = curves[start + n - 1][2] # <<<<<<<<<<<<<< - * p3 = curves[start + n - 1][3] - * - */ - __pyx_t_6 = ((__pyx_v_start + __pyx_v_n) - 1); - __pyx_t_10 = __Pyx_GetItemInt(__pyx_v_curves, __pyx_t_6, long, 1, __Pyx_PyInt_From_long, 0, 1, 1); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 145, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __pyx_t_7 = __Pyx_GetItemInt(__pyx_t_10, 2, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 145, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_t_14 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_7); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 145, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_v_p2 = __pyx_t_14; - - /* "fontTools/qu2cu/qu2cu.py":146 - * p1 = curves[start][1] - * p2 = curves[start + n - 1][2] - * p3 = curves[start + n - 1][3] # <<<<<<<<<<<<<< - * - * # Build the curve by scaling the control-points. - */ - __pyx_t_6 = ((__pyx_v_start + __pyx_v_n) - 1); - __pyx_t_7 = __Pyx_GetItemInt(__pyx_v_curves, __pyx_t_6, long, 1, __Pyx_PyInt_From_long, 0, 1, 1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 146, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_10 = __Pyx_GetItemInt(__pyx_t_7, 3, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 146, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_14 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_10); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 146, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_v_p3 = __pyx_t_14; - - /* "fontTools/qu2cu/qu2cu.py":149 - * - * # Build the curve by scaling the control-points. 
- * p1 = p0 + (p1 - p0) / (ts[0] if ts else 1) # <<<<<<<<<<<<<< - * p2 = p3 + (p2 - p3) / ((1 - ts[-1]) if ts else 1) - * - */ - __pyx_t_10 = __pyx_PyComplex_FromComplex(__pyx_v_p0); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 149, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __pyx_t_14 = __Pyx_c_diff_double(__pyx_v_p1, __pyx_v_p0); - __pyx_t_7 = __pyx_PyComplex_FromComplex(__pyx_t_14); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 149, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_9 = (PyList_GET_SIZE(__pyx_v_ts) != 0); - if (__pyx_t_9) { - __pyx_t_1 = __Pyx_GetItemInt_List(__pyx_v_ts, 0, long, 1, __Pyx_PyInt_From_long, 1, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 149, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_8 = __pyx_t_1; - __pyx_t_1 = 0; - } else { - __Pyx_INCREF(__pyx_int_1); - __pyx_t_8 = __pyx_int_1; - } - __pyx_t_1 = __Pyx_PyNumber_Divide(__pyx_t_7, __pyx_t_8); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 149, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_8 = PyNumber_Add(__pyx_t_10, __pyx_t_1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 149, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_14 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_8); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 149, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_v_p1 = __pyx_t_14; - - /* "fontTools/qu2cu/qu2cu.py":150 - * # Build the curve by scaling the control-points. - * p1 = p0 + (p1 - p0) / (ts[0] if ts else 1) - * p2 = p3 + (p2 - p3) / ((1 - ts[-1]) if ts else 1) # <<<<<<<<<<<<<< - * - * curve = (p0, p1, p2, p3) - */ - __pyx_t_8 = __pyx_PyComplex_FromComplex(__pyx_v_p3); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 150, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_14 = __Pyx_c_diff_double(__pyx_v_p2, __pyx_v_p3); - __pyx_t_1 = __pyx_PyComplex_FromComplex(__pyx_t_14); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 150, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_9 = (PyList_GET_SIZE(__pyx_v_ts) != 0); - if (__pyx_t_9) { - __pyx_t_7 = __Pyx_GetItemInt_List(__pyx_v_ts, -1L, long, 1, __Pyx_PyInt_From_long, 1, 1, 1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 150, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_15 = __Pyx_PyInt_SubtractCObj(__pyx_int_1, __pyx_t_7, 1, 0, 0); if (unlikely(!__pyx_t_15)) __PYX_ERR(0, 150, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_15); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_10 = __pyx_t_15; - __pyx_t_15 = 0; - } else { - __Pyx_INCREF(__pyx_int_1); - __pyx_t_10 = __pyx_int_1; - } - __pyx_t_15 = __Pyx_PyNumber_Divide(__pyx_t_1, __pyx_t_10); if (unlikely(!__pyx_t_15)) __PYX_ERR(0, 150, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_15); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_t_10 = PyNumber_Add(__pyx_t_8, __pyx_t_15); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 150, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF(__pyx_t_15); __pyx_t_15 = 0; - __pyx_t_14 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_10); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 150, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_v_p2 = __pyx_t_14; - - /* "fontTools/qu2cu/qu2cu.py":152 - * p2 = p3 + (p2 - p3) / ((1 - ts[-1]) if ts else 1) - * - * curve = (p0, p1, p2, p3) # <<<<<<<<<<<<<< - * - * return curve, ts - */ - __pyx_t_10 = 
__pyx_PyComplex_FromComplex(__pyx_v_p0); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 152, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __pyx_t_15 = __pyx_PyComplex_FromComplex(__pyx_v_p1); if (unlikely(!__pyx_t_15)) __PYX_ERR(0, 152, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_15); - __pyx_t_8 = __pyx_PyComplex_FromComplex(__pyx_v_p2); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 152, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_1 = __pyx_PyComplex_FromComplex(__pyx_v_p3); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 152, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_7 = PyTuple_New(4); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 152, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_GIVEREF(__pyx_t_10); - PyTuple_SET_ITEM(__pyx_t_7, 0, __pyx_t_10); - __Pyx_GIVEREF(__pyx_t_15); - PyTuple_SET_ITEM(__pyx_t_7, 1, __pyx_t_15); - __Pyx_GIVEREF(__pyx_t_8); - PyTuple_SET_ITEM(__pyx_t_7, 2, __pyx_t_8); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_7, 3, __pyx_t_1); - __pyx_t_10 = 0; - __pyx_t_15 = 0; - __pyx_t_8 = 0; - __pyx_t_1 = 0; - __pyx_v_curve = __pyx_t_7; - __pyx_t_7 = 0; - - /* "fontTools/qu2cu/qu2cu.py":154 - * curve = (p0, p1, p2, p3) - * - * return curve, ts # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_7 = PyTuple_New(2); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 154, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_INCREF(__pyx_v_curve); - __Pyx_GIVEREF(__pyx_v_curve); - PyTuple_SET_ITEM(__pyx_t_7, 0, __pyx_v_curve); - __Pyx_INCREF(__pyx_v_ts); - __Pyx_GIVEREF(__pyx_v_ts); - PyTuple_SET_ITEM(__pyx_t_7, 1, __pyx_v_ts); - __pyx_r = __pyx_t_7; - __pyx_t_7 = 0; - goto __pyx_L0; - - /* "fontTools/qu2cu/qu2cu.py":118 - * p3=cython.complex, - * ) - * def merge_curves(curves, start, n): # <<<<<<<<<<<<<< - * """Given a cubic-Bezier spline, reconstruct one cubic-Bezier - * that has the same endpoints and tangents and approximates - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_XDECREF(__pyx_t_10); - __Pyx_XDECREF(__pyx_t_15); - __Pyx_AddTraceback("fontTools.qu2cu.qu2cu.merge_curves", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_ts); - __Pyx_XDECREF(__pyx_v_ck); - __Pyx_XDECREF(__pyx_v_c_before); - __Pyx_XDECREF(__pyx_v_curve); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/qu2cu/qu2cu.py":165 - * on=cython.complex, - * ) - * def add_implicit_on_curves(p): # <<<<<<<<<<<<<< - * q = list(p) - * count = 0 - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_5qu2cu_5qu2cu_3add_implicit_on_curves(PyObject *__pyx_self, PyObject *__pyx_v_p); /*proto*/ -static char __pyx_doc_9fontTools_5qu2cu_5qu2cu_2add_implicit_on_curves[] = "add_implicit_on_curves(p)"; -static PyMethodDef __pyx_mdef_9fontTools_5qu2cu_5qu2cu_3add_implicit_on_curves = {"add_implicit_on_curves", (PyCFunction)__pyx_pw_9fontTools_5qu2cu_5qu2cu_3add_implicit_on_curves, METH_O, __pyx_doc_9fontTools_5qu2cu_5qu2cu_2add_implicit_on_curves}; -static PyObject *__pyx_pw_9fontTools_5qu2cu_5qu2cu_3add_implicit_on_curves(PyObject *__pyx_self, PyObject *__pyx_v_p) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("add_implicit_on_curves (wrapper)", 0); - __pyx_r = __pyx_pf_9fontTools_5qu2cu_5qu2cu_2add_implicit_on_curves(__pyx_self, ((PyObject *)__pyx_v_p)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject 
*__pyx_pf_9fontTools_5qu2cu_5qu2cu_2add_implicit_on_curves(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_p) { - int __pyx_v_count; - int __pyx_v_num_offcurves; - int __pyx_v_i; - __pyx_t_double_complex __pyx_v_off1; - __pyx_t_double_complex __pyx_v_off2; - __pyx_t_double_complex __pyx_v_on; - PyObject *__pyx_v_q = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - Py_ssize_t __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - int __pyx_t_5; - __pyx_t_double_complex __pyx_t_6; - long __pyx_t_7; - int __pyx_t_8; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("add_implicit_on_curves", 0); - - /* "fontTools/qu2cu/qu2cu.py":166 - * ) - * def add_implicit_on_curves(p): - * q = list(p) # <<<<<<<<<<<<<< - * count = 0 - * num_offcurves = len(p) - 2 - */ - __pyx_t_1 = PySequence_List(__pyx_v_p); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 166, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_q = ((PyObject*)__pyx_t_1); - __pyx_t_1 = 0; - - /* "fontTools/qu2cu/qu2cu.py":167 - * def add_implicit_on_curves(p): - * q = list(p) - * count = 0 # <<<<<<<<<<<<<< - * num_offcurves = len(p) - 2 - * for i in range(1, num_offcurves): - */ - __pyx_v_count = 0; - - /* "fontTools/qu2cu/qu2cu.py":168 - * q = list(p) - * count = 0 - * num_offcurves = len(p) - 2 # <<<<<<<<<<<<<< - * for i in range(1, num_offcurves): - * off1 = p[i] - */ - __pyx_t_2 = PyObject_Length(__pyx_v_p); if (unlikely(__pyx_t_2 == ((Py_ssize_t)-1))) __PYX_ERR(0, 168, __pyx_L1_error) - __pyx_v_num_offcurves = (__pyx_t_2 - 2); - - /* "fontTools/qu2cu/qu2cu.py":169 - * count = 0 - * num_offcurves = len(p) - 2 - * for i in range(1, num_offcurves): # <<<<<<<<<<<<<< - * off1 = p[i] - * off2 = p[i + 1] - */ - __pyx_t_3 = __pyx_v_num_offcurves; - __pyx_t_4 = __pyx_t_3; - for (__pyx_t_5 = 1; __pyx_t_5 < __pyx_t_4; __pyx_t_5+=1) { - __pyx_v_i = __pyx_t_5; - - /* "fontTools/qu2cu/qu2cu.py":170 - * num_offcurves = len(p) - 2 - * for i in range(1, num_offcurves): - * off1 = p[i] # <<<<<<<<<<<<<< - * off2 = p[i + 1] - * on = off1 + (off2 - off1) * 0.5 - */ - __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_p, __pyx_v_i, int, 1, __Pyx_PyInt_From_int, 0, 1, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 170, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_6 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_1); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 170, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_v_off1 = __pyx_t_6; - - /* "fontTools/qu2cu/qu2cu.py":171 - * for i in range(1, num_offcurves): - * off1 = p[i] - * off2 = p[i + 1] # <<<<<<<<<<<<<< - * on = off1 + (off2 - off1) * 0.5 - * q.insert(i + 1 + count, on) - */ - __pyx_t_7 = (__pyx_v_i + 1); - __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_p, __pyx_t_7, long, 1, __Pyx_PyInt_From_long, 0, 1, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 171, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_6 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_1); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 171, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_v_off2 = __pyx_t_6; - - /* "fontTools/qu2cu/qu2cu.py":172 - * off1 = p[i] - * off2 = p[i + 1] - * on = off1 + (off2 - off1) * 0.5 # <<<<<<<<<<<<<< - * q.insert(i + 1 + count, on) - * count += 1 - */ - __pyx_v_on = __Pyx_c_sum_double(__pyx_v_off1, __Pyx_c_prod_double(__Pyx_c_diff_double(__pyx_v_off2, __pyx_v_off1), __pyx_t_double_complex_from_parts(0.5, 0))); - - /* "fontTools/qu2cu/qu2cu.py":173 - * off2 = p[i + 1] - * on = off1 + 
(off2 - off1) * 0.5 - * q.insert(i + 1 + count, on) # <<<<<<<<<<<<<< - * count += 1 - * return q - */ - __pyx_t_1 = __pyx_PyComplex_FromComplex(__pyx_v_on); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 173, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_8 = PyList_Insert(__pyx_v_q, ((__pyx_v_i + 1) + __pyx_v_count), __pyx_t_1); if (unlikely(__pyx_t_8 == ((int)-1))) __PYX_ERR(0, 173, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "fontTools/qu2cu/qu2cu.py":174 - * on = off1 + (off2 - off1) * 0.5 - * q.insert(i + 1 + count, on) - * count += 1 # <<<<<<<<<<<<<< - * return q - * - */ - __pyx_v_count = (__pyx_v_count + 1); - } - - /* "fontTools/qu2cu/qu2cu.py":175 - * q.insert(i + 1 + count, on) - * count += 1 - * return q # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_q); - __pyx_r = __pyx_v_q; - goto __pyx_L0; - - /* "fontTools/qu2cu/qu2cu.py":165 - * on=cython.complex, - * ) - * def add_implicit_on_curves(p): # <<<<<<<<<<<<<< - * q = list(p) - * count = 0 - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("fontTools.qu2cu.qu2cu.add_implicit_on_curves", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_q); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/qu2cu/qu2cu.py":185 - * is_complex=cython.int, - * ) - * def quadratic_to_curves( # <<<<<<<<<<<<<< - * quads: List[List[Point]], - * max_err: float = 0.5, - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_5qu2cu_5qu2cu_5quadratic_to_curves(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_9fontTools_5qu2cu_5qu2cu_4quadratic_to_curves[] = "quadratic_to_curves(quads: List[List[Point]], double max_err: float = 0.5, all_cubic: bool = False) -> List[Tuple[Point, ...]]\nConverts a connecting list of quadratic splines to a list of quadratic\n and cubic curves.\n\n A quadratic spline is specified as a list of points. Either each point is\n a 2-tuple of X,Y coordinates, or each point is a complex number with\n real/imaginary components representing X,Y coordinates.\n\n The first and last points are on-curve points and the rest are off-curve\n points, with an implied on-curve point in the middle between every two\n consecutive off-curve points.\n\n Returns:\n The output is a list of tuples of points. Points are represented\n in the same format as the input, either as 2-tuples or complex numbers.\n\n Each tuple is either of length three, for a quadratic curve, or four,\n for a cubic curve. 
Each curve's last point is the same as the next\n curve's first point.\n\n Args:\n quads: quadratic splines\n\n max_err: absolute error tolerance; defaults to 0.5\n\n all_cubic: if True, only cubic curves are generated; defaults to False\n "; -static PyMethodDef __pyx_mdef_9fontTools_5qu2cu_5qu2cu_5quadratic_to_curves = {"quadratic_to_curves", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_9fontTools_5qu2cu_5qu2cu_5quadratic_to_curves, METH_VARARGS|METH_KEYWORDS, __pyx_doc_9fontTools_5qu2cu_5qu2cu_4quadratic_to_curves}; -static PyObject *__pyx_pw_9fontTools_5qu2cu_5qu2cu_5quadratic_to_curves(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_quads = 0; - double __pyx_v_max_err; - PyObject *__pyx_v_all_cubic = 0; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("quadratic_to_curves (wrapper)", 0); - { - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_quads,&__pyx_n_s_max_err,&__pyx_n_s_all_cubic,0}; - PyObject* values[3] = {0,0,0}; - - /* "fontTools/qu2cu/qu2cu.py":188 - * quads: List[List[Point]], - * max_err: float = 0.5, - * all_cubic: bool = False, # <<<<<<<<<<<<<< - * ) -> List[Tuple[Point, ...]]: - * """Converts a connecting list of quadratic splines to a list of quadratic - */ - values[2] = ((PyObject *)((PyObject *)Py_False)); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args; - const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); - switch (pos_args) { - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = PyDict_Size(__pyx_kwds); - switch (pos_args) { - case 0: - if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_quads)) != 0)) kw_args--; - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (kw_args > 0) { - PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_max_err); - if (value) { values[1] = value; kw_args--; } - } - CYTHON_FALLTHROUGH; - case 2: - if (kw_args > 0) { - PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_all_cubic); - if (value) { values[2] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "quadratic_to_curves") < 0)) __PYX_ERR(0, 185, __pyx_L3_error) - } - } else { - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - __pyx_v_quads = values[0]; - if (values[1]) { - __pyx_v_max_err = __pyx_PyFloat_AsDouble(values[1]); if (unlikely((__pyx_v_max_err == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 187, __pyx_L3_error) - } else { - __pyx_v_max_err = ((double)((double)0.5)); - } - __pyx_v_all_cubic = values[2]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("quadratic_to_curves", 0, 1, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 185, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("fontTools.qu2cu.qu2cu.quadratic_to_curves", __pyx_clineno, __pyx_lineno, 
__pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_5qu2cu_5qu2cu_4quadratic_to_curves(__pyx_self, __pyx_v_quads, __pyx_v_max_err, __pyx_v_all_cubic); - - /* "fontTools/qu2cu/qu2cu.py":185 - * is_complex=cython.int, - * ) - * def quadratic_to_curves( # <<<<<<<<<<<<<< - * quads: List[List[Point]], - * max_err: float = 0.5, - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} -static PyObject *__pyx_gb_9fontTools_5qu2cu_5qu2cu_19quadratic_to_curves_8genexpr3_2generator(__pyx_CoroutineObject *__pyx_generator, CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject *__pyx_sent_value); /* proto */ - -/* "fontTools/qu2cu/qu2cu.py":238 - * - * if not is_complex: - * curves = [tuple((c.real, c.imag) for c in curve) for curve in curves] # <<<<<<<<<<<<<< - * return curves - * - */ - -static PyObject *__pyx_pf_9fontTools_5qu2cu_5qu2cu_19quadratic_to_curves_8genexpr3_genexpr(PyObject *__pyx_self) { - struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_1_genexpr *__pyx_cur_scope; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("genexpr", 0); - __pyx_cur_scope = (struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_1_genexpr *)__pyx_tp_new_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_1_genexpr(__pyx_ptype_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_1_genexpr, __pyx_empty_tuple, NULL); - if (unlikely(!__pyx_cur_scope)) { - __pyx_cur_scope = ((struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_1_genexpr *)Py_None); - __Pyx_INCREF(Py_None); - __PYX_ERR(0, 238, __pyx_L1_error) - } else { - __Pyx_GOTREF(__pyx_cur_scope); - } - __pyx_cur_scope->__pyx_outer_scope = (struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves *) __pyx_self; - __Pyx_INCREF(((PyObject *)__pyx_cur_scope->__pyx_outer_scope)); - __Pyx_GIVEREF(__pyx_cur_scope->__pyx_outer_scope); - { - __pyx_CoroutineObject *gen = __Pyx_Generator_New((__pyx_coroutine_body_t) __pyx_gb_9fontTools_5qu2cu_5qu2cu_19quadratic_to_curves_8genexpr3_2generator, NULL, (PyObject *) __pyx_cur_scope, __pyx_n_s_genexpr, __pyx_n_s_quadratic_to_curves_locals_genex, __pyx_n_s_fontTools_qu2cu_qu2cu); if (unlikely(!gen)) __PYX_ERR(0, 238, __pyx_L1_error) - __Pyx_DECREF(__pyx_cur_scope); - __Pyx_RefNannyFinishContext(); - return (PyObject *) gen; - } - - /* function exit code */ - __pyx_L1_error:; - __Pyx_AddTraceback("fontTools.qu2cu.qu2cu.quadratic_to_curves.genexpr", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_DECREF(((PyObject *)__pyx_cur_scope)); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_gb_9fontTools_5qu2cu_5qu2cu_19quadratic_to_curves_8genexpr3_2generator(__pyx_CoroutineObject *__pyx_generator, CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject *__pyx_sent_value) /* generator body */ -{ - struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_1_genexpr *__pyx_cur_scope = ((struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_1_genexpr *)__pyx_generator->closure); - PyObject *__pyx_r = NULL; - PyObject *__pyx_t_1 = NULL; - Py_ssize_t __pyx_t_2; - PyObject *(*__pyx_t_3)(PyObject *); - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - 
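-/* A sketch of calling the Python-level entry point whose docstring appears
- * above. It is based only on that docstring: the input spline is invented
- * and the exact output coordinates are not asserted here.
- *
- *     from fontTools.qu2cu import quadratic_to_curves
- *
- *     # One quadratic spline: on-curve, off-curve, on-curve.
- *     spline = [(0, 0), (50, 100), (100, 0)]
- *     curves = quadratic_to_curves([spline], max_err=0.5)
- *     # Each returned tuple has three points (a quadratic curve) or four
- *     # (a cubic curve); each curve's last point is the next one's first.
- */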
__Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("genexpr", 0); - switch (__pyx_generator->resume_label) { - case 0: goto __pyx_L3_first_run; - case 1: goto __pyx_L6_resume_from_yield; - default: /* CPython raises the right error here */ - __Pyx_RefNannyFinishContext(); - return NULL; - } - __pyx_L3_first_run:; - if (unlikely(!__pyx_sent_value)) __PYX_ERR(0, 238, __pyx_L1_error) - if (unlikely(!__pyx_cur_scope->__pyx_outer_scope->__pyx_8genexpr3__pyx_v_curve)) { __Pyx_RaiseClosureNameError("curve"); __PYX_ERR(0, 238, __pyx_L1_error) } - if (likely(PyList_CheckExact(__pyx_cur_scope->__pyx_outer_scope->__pyx_8genexpr3__pyx_v_curve)) || PyTuple_CheckExact(__pyx_cur_scope->__pyx_outer_scope->__pyx_8genexpr3__pyx_v_curve)) { - __pyx_t_1 = __pyx_cur_scope->__pyx_outer_scope->__pyx_8genexpr3__pyx_v_curve; __Pyx_INCREF(__pyx_t_1); __pyx_t_2 = 0; - __pyx_t_3 = NULL; - } else { - __pyx_t_2 = -1; __pyx_t_1 = PyObject_GetIter(__pyx_cur_scope->__pyx_outer_scope->__pyx_8genexpr3__pyx_v_curve); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 238, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = Py_TYPE(__pyx_t_1)->tp_iternext; if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 238, __pyx_L1_error) - } - for (;;) { - if (likely(!__pyx_t_3)) { - if (likely(PyList_CheckExact(__pyx_t_1))) { - if (__pyx_t_2 >= PyList_GET_SIZE(__pyx_t_1)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_4 = PyList_GET_ITEM(__pyx_t_1, __pyx_t_2); __Pyx_INCREF(__pyx_t_4); __pyx_t_2++; if (unlikely(0 < 0)) __PYX_ERR(0, 238, __pyx_L1_error) - #else - __pyx_t_4 = PySequence_ITEM(__pyx_t_1, __pyx_t_2); __pyx_t_2++; if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 238, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - #endif - } else { - if (__pyx_t_2 >= PyTuple_GET_SIZE(__pyx_t_1)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_4 = PyTuple_GET_ITEM(__pyx_t_1, __pyx_t_2); __Pyx_INCREF(__pyx_t_4); __pyx_t_2++; if (unlikely(0 < 0)) __PYX_ERR(0, 238, __pyx_L1_error) - #else - __pyx_t_4 = PySequence_ITEM(__pyx_t_1, __pyx_t_2); __pyx_t_2++; if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 238, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - #endif - } - } else { - __pyx_t_4 = __pyx_t_3(__pyx_t_1); - if (unlikely(!__pyx_t_4)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 238, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_4); - } - __Pyx_XGOTREF(__pyx_cur_scope->__pyx_v_c); - __Pyx_XDECREF_SET(__pyx_cur_scope->__pyx_v_c, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_4); - __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_cur_scope->__pyx_v_c, __pyx_n_s_real); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 238, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_cur_scope->__pyx_v_c, __pyx_n_s_imag); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 238, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = PyTuple_New(2); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 238, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_6, 0, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_6, 1, __pyx_t_5); - __pyx_t_4 = 0; - __pyx_t_5 = 0; - __pyx_r = __pyx_t_6; - __pyx_t_6 = 0; - __Pyx_XGIVEREF(__pyx_t_1); - __pyx_cur_scope->__pyx_t_0 = __pyx_t_1; - __pyx_cur_scope->__pyx_t_1 = __pyx_t_2; - __pyx_cur_scope->__pyx_t_2 = __pyx_t_3; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - 
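-/* A small worked instance of the conversion quoted above from source line
- * 238, under the docstring's convention that complex numbers encode X,Y
- * coordinates (the value 50+100j is an invented example): this generator
- * turns complex(50, 100) back into the 2-tuple (50.0, 100.0).
- */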
__Pyx_Coroutine_ResetAndClearException(__pyx_generator); - /* return from generator, yielding value */ - __pyx_generator->resume_label = 1; - return __pyx_r; - __pyx_L6_resume_from_yield:; - __pyx_t_1 = __pyx_cur_scope->__pyx_t_0; - __pyx_cur_scope->__pyx_t_0 = 0; - __Pyx_XGOTREF(__pyx_t_1); - __pyx_t_2 = __pyx_cur_scope->__pyx_t_1; - __pyx_t_3 = __pyx_cur_scope->__pyx_t_2; - if (unlikely(!__pyx_sent_value)) __PYX_ERR(0, 238, __pyx_L1_error) - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - CYTHON_MAYBE_UNUSED_VAR(__pyx_cur_scope); - - /* function exit code */ - PyErr_SetNone(PyExc_StopIteration); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_AddTraceback("genexpr", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_L0:; - __Pyx_XDECREF(__pyx_r); __pyx_r = 0; - #if !CYTHON_USE_EXC_INFO_STACK - __Pyx_Coroutine_ResetAndClearException(__pyx_generator); - #endif - __pyx_generator->resume_label = -1; - __Pyx_Coroutine_clear((PyObject*)__pyx_generator); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/qu2cu/qu2cu.py":185 - * is_complex=cython.int, - * ) - * def quadratic_to_curves( # <<<<<<<<<<<<<< - * quads: List[List[Point]], - * max_err: float = 0.5, - */ - -static PyObject *__pyx_pf_9fontTools_5qu2cu_5qu2cu_4quadratic_to_curves(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_quads, double __pyx_v_max_err, PyObject *__pyx_v_all_cubic) { - struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves *__pyx_cur_scope; - int __pyx_v_cost; - int __pyx_v_is_complex; - PyObject *__pyx_v_q = NULL; - PyObject *__pyx_v_costs = NULL; - PyObject *__pyx_v_p = NULL; - CYTHON_UNUSED Py_ssize_t __pyx_v_i; - PyObject *__pyx_v_qq = NULL; - PyObject *__pyx_v_curves = NULL; - PyObject *__pyx_8genexpr1__pyx_v_p = NULL; - PyObject *__pyx_8genexpr2__pyx_v_x = NULL; - PyObject *__pyx_8genexpr2__pyx_v_y = NULL; - PyObject *__pyx_8genexpr3__pyx_v_0 = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - int __pyx_t_3; - Py_ssize_t __pyx_t_4; - PyObject *(*__pyx_t_5)(PyObject *); - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - Py_ssize_t __pyx_t_8; - PyObject *(*__pyx_t_9)(PyObject *); - PyObject *__pyx_t_10 = NULL; - PyObject *__pyx_t_11 = NULL; - PyObject *__pyx_t_12 = NULL; - PyObject *__pyx_t_13 = NULL; - PyObject *(*__pyx_t_14)(PyObject *); - Py_ssize_t __pyx_t_15; - Py_ssize_t __pyx_t_16; - int __pyx_t_17; - int __pyx_t_18; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("quadratic_to_curves", 0); - __pyx_cur_scope = (struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves *)__pyx_tp_new_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves(__pyx_ptype_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves, __pyx_empty_tuple, NULL); - if (unlikely(!__pyx_cur_scope)) { - __pyx_cur_scope = ((struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves *)Py_None); - __Pyx_INCREF(Py_None); - __PYX_ERR(0, 185, __pyx_L1_error) - } else { - __Pyx_GOTREF(__pyx_cur_scope); - } - __Pyx_INCREF(__pyx_v_quads); - - /* "fontTools/qu2cu/qu2cu.py":216 - * all_cubic: if True, only cubic curves are generated; defaults to False - * """ - * is_complex = type(quads[0][0]) is complex # <<<<<<<<<<<<<< - * if not is_complex: - * quads = [[complex(x, y) for (x, y) 
in p] for p in quads] - */ - __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_quads, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 216, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_GetItemInt(__pyx_t_1, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 216, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_3 = (((PyObject *)Py_TYPE(__pyx_t_2)) == ((PyObject *)(&PyComplex_Type))); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_v_is_complex = __pyx_t_3; - - /* "fontTools/qu2cu/qu2cu.py":217 - * """ - * is_complex = type(quads[0][0]) is complex - * if not is_complex: # <<<<<<<<<<<<<< - * quads = [[complex(x, y) for (x, y) in p] for p in quads] - * - */ - __pyx_t_3 = ((!(__pyx_v_is_complex != 0)) != 0); - if (__pyx_t_3) { - - /* "fontTools/qu2cu/qu2cu.py":218 - * is_complex = type(quads[0][0]) is complex - * if not is_complex: - * quads = [[complex(x, y) for (x, y) in p] for p in quads] # <<<<<<<<<<<<<< - * - * q = [quads[0][0]] - */ - { /* enter inner scope */ - __pyx_t_2 = PyList_New(0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 218, __pyx_L6_error) - __Pyx_GOTREF(__pyx_t_2); - if (likely(PyList_CheckExact(__pyx_v_quads)) || PyTuple_CheckExact(__pyx_v_quads)) { - __pyx_t_1 = __pyx_v_quads; __Pyx_INCREF(__pyx_t_1); __pyx_t_4 = 0; - __pyx_t_5 = NULL; - } else { - __pyx_t_4 = -1; __pyx_t_1 = PyObject_GetIter(__pyx_v_quads); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 218, __pyx_L6_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_5 = Py_TYPE(__pyx_t_1)->tp_iternext; if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 218, __pyx_L6_error) - } - for (;;) { - if (likely(!__pyx_t_5)) { - if (likely(PyList_CheckExact(__pyx_t_1))) { - if (__pyx_t_4 >= PyList_GET_SIZE(__pyx_t_1)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_6 = PyList_GET_ITEM(__pyx_t_1, __pyx_t_4); __Pyx_INCREF(__pyx_t_6); __pyx_t_4++; if (unlikely(0 < 0)) __PYX_ERR(0, 218, __pyx_L6_error) - #else - __pyx_t_6 = PySequence_ITEM(__pyx_t_1, __pyx_t_4); __pyx_t_4++; if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 218, __pyx_L6_error) - __Pyx_GOTREF(__pyx_t_6); - #endif - } else { - if (__pyx_t_4 >= PyTuple_GET_SIZE(__pyx_t_1)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_6 = PyTuple_GET_ITEM(__pyx_t_1, __pyx_t_4); __Pyx_INCREF(__pyx_t_6); __pyx_t_4++; if (unlikely(0 < 0)) __PYX_ERR(0, 218, __pyx_L6_error) - #else - __pyx_t_6 = PySequence_ITEM(__pyx_t_1, __pyx_t_4); __pyx_t_4++; if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 218, __pyx_L6_error) - __Pyx_GOTREF(__pyx_t_6); - #endif - } - } else { - __pyx_t_6 = __pyx_t_5(__pyx_t_1); - if (unlikely(!__pyx_t_6)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 218, __pyx_L6_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_6); - } - __Pyx_XDECREF_SET(__pyx_8genexpr1__pyx_v_p, __pyx_t_6); - __pyx_t_6 = 0; - { /* enter inner scope */ - __pyx_t_6 = PyList_New(0); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 218, __pyx_L11_error) - __Pyx_GOTREF(__pyx_t_6); - if (likely(PyList_CheckExact(__pyx_8genexpr1__pyx_v_p)) || PyTuple_CheckExact(__pyx_8genexpr1__pyx_v_p)) { - __pyx_t_7 = __pyx_8genexpr1__pyx_v_p; __Pyx_INCREF(__pyx_t_7); __pyx_t_8 = 0; - __pyx_t_9 = NULL; - } else { - __pyx_t_8 = -1; __pyx_t_7 = PyObject_GetIter(__pyx_8genexpr1__pyx_v_p); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 218, __pyx_L11_error) - __Pyx_GOTREF(__pyx_t_7); 
- __pyx_t_9 = Py_TYPE(__pyx_t_7)->tp_iternext; if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 218, __pyx_L11_error) - } - for (;;) { - if (likely(!__pyx_t_9)) { - if (likely(PyList_CheckExact(__pyx_t_7))) { - if (__pyx_t_8 >= PyList_GET_SIZE(__pyx_t_7)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_10 = PyList_GET_ITEM(__pyx_t_7, __pyx_t_8); __Pyx_INCREF(__pyx_t_10); __pyx_t_8++; if (unlikely(0 < 0)) __PYX_ERR(0, 218, __pyx_L11_error) - #else - __pyx_t_10 = PySequence_ITEM(__pyx_t_7, __pyx_t_8); __pyx_t_8++; if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 218, __pyx_L11_error) - __Pyx_GOTREF(__pyx_t_10); - #endif - } else { - if (__pyx_t_8 >= PyTuple_GET_SIZE(__pyx_t_7)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_10 = PyTuple_GET_ITEM(__pyx_t_7, __pyx_t_8); __Pyx_INCREF(__pyx_t_10); __pyx_t_8++; if (unlikely(0 < 0)) __PYX_ERR(0, 218, __pyx_L11_error) - #else - __pyx_t_10 = PySequence_ITEM(__pyx_t_7, __pyx_t_8); __pyx_t_8++; if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 218, __pyx_L11_error) - __Pyx_GOTREF(__pyx_t_10); - #endif - } - } else { - __pyx_t_10 = __pyx_t_9(__pyx_t_7); - if (unlikely(!__pyx_t_10)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 218, __pyx_L11_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_10); - } - if ((likely(PyTuple_CheckExact(__pyx_t_10))) || (PyList_CheckExact(__pyx_t_10))) { - PyObject* sequence = __pyx_t_10; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 218, __pyx_L11_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_11 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_12 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_11 = PyList_GET_ITEM(sequence, 0); - __pyx_t_12 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_11); - __Pyx_INCREF(__pyx_t_12); - #else - __pyx_t_11 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 218, __pyx_L11_error) - __Pyx_GOTREF(__pyx_t_11); - __pyx_t_12 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 218, __pyx_L11_error) - __Pyx_GOTREF(__pyx_t_12); - #endif - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_13 = PyObject_GetIter(__pyx_t_10); if (unlikely(!__pyx_t_13)) __PYX_ERR(0, 218, __pyx_L11_error) - __Pyx_GOTREF(__pyx_t_13); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_t_14 = Py_TYPE(__pyx_t_13)->tp_iternext; - index = 0; __pyx_t_11 = __pyx_t_14(__pyx_t_13); if (unlikely(!__pyx_t_11)) goto __pyx_L14_unpacking_failed; - __Pyx_GOTREF(__pyx_t_11); - index = 1; __pyx_t_12 = __pyx_t_14(__pyx_t_13); if (unlikely(!__pyx_t_12)) goto __pyx_L14_unpacking_failed; - __Pyx_GOTREF(__pyx_t_12); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_14(__pyx_t_13), 2) < 0) __PYX_ERR(0, 218, __pyx_L11_error) - __pyx_t_14 = NULL; - __Pyx_DECREF(__pyx_t_13); __pyx_t_13 = 0; - goto __pyx_L15_unpacking_done; - __pyx_L14_unpacking_failed:; - __Pyx_DECREF(__pyx_t_13); __pyx_t_13 = 0; - __pyx_t_14 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 218, __pyx_L11_error) - __pyx_L15_unpacking_done:; - } - __Pyx_XDECREF_SET(__pyx_8genexpr2__pyx_v_x, __pyx_t_11); - __pyx_t_11 = 0; - 
__Pyx_XDECREF_SET(__pyx_8genexpr2__pyx_v_y, __pyx_t_12); - __pyx_t_12 = 0; - __pyx_t_10 = PyTuple_New(2); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 218, __pyx_L11_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_INCREF(__pyx_8genexpr2__pyx_v_x); - __Pyx_GIVEREF(__pyx_8genexpr2__pyx_v_x); - PyTuple_SET_ITEM(__pyx_t_10, 0, __pyx_8genexpr2__pyx_v_x); - __Pyx_INCREF(__pyx_8genexpr2__pyx_v_y); - __Pyx_GIVEREF(__pyx_8genexpr2__pyx_v_y); - PyTuple_SET_ITEM(__pyx_t_10, 1, __pyx_8genexpr2__pyx_v_y); - __pyx_t_12 = __Pyx_PyObject_Call(((PyObject *)(&PyComplex_Type)), __pyx_t_10, NULL); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 218, __pyx_L11_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - if (unlikely(__Pyx_ListComp_Append(__pyx_t_6, (PyObject*)__pyx_t_12))) __PYX_ERR(0, 218, __pyx_L11_error) - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - } - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_XDECREF(__pyx_8genexpr2__pyx_v_x); __pyx_8genexpr2__pyx_v_x = 0; - __Pyx_XDECREF(__pyx_8genexpr2__pyx_v_y); __pyx_8genexpr2__pyx_v_y = 0; - goto __pyx_L16_exit_scope; - __pyx_L11_error:; - __Pyx_XDECREF(__pyx_8genexpr2__pyx_v_x); __pyx_8genexpr2__pyx_v_x = 0; - __Pyx_XDECREF(__pyx_8genexpr2__pyx_v_y); __pyx_8genexpr2__pyx_v_y = 0; - goto __pyx_L6_error; - __pyx_L16_exit_scope:; - } /* exit inner scope */ - if (unlikely(__Pyx_ListComp_Append(__pyx_t_2, (PyObject*)__pyx_t_6))) __PYX_ERR(0, 218, __pyx_L6_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_XDECREF(__pyx_8genexpr1__pyx_v_p); __pyx_8genexpr1__pyx_v_p = 0; - goto __pyx_L17_exit_scope; - __pyx_L6_error:; - __Pyx_XDECREF(__pyx_8genexpr1__pyx_v_p); __pyx_8genexpr1__pyx_v_p = 0; - goto __pyx_L1_error; - __pyx_L17_exit_scope:; - } /* exit inner scope */ - __Pyx_DECREF_SET(__pyx_v_quads, __pyx_t_2); - __pyx_t_2 = 0; - - /* "fontTools/qu2cu/qu2cu.py":217 - * """ - * is_complex = type(quads[0][0]) is complex - * if not is_complex: # <<<<<<<<<<<<<< - * quads = [[complex(x, y) for (x, y) in p] for p in quads] - * - */ - } - - /* "fontTools/qu2cu/qu2cu.py":220 - * quads = [[complex(x, y) for (x, y) in p] for p in quads] - * - * q = [quads[0][0]] # <<<<<<<<<<<<<< - * costs = [1] - * cost = 1 - */ - __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_quads, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 220, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = __Pyx_GetItemInt(__pyx_t_2, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 220, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyList_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 220, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GIVEREF(__pyx_t_1); - PyList_SET_ITEM(__pyx_t_2, 0, __pyx_t_1); - __pyx_t_1 = 0; - __pyx_v_q = ((PyObject*)__pyx_t_2); - __pyx_t_2 = 0; - - /* "fontTools/qu2cu/qu2cu.py":221 - * - * q = [quads[0][0]] - * costs = [1] # <<<<<<<<<<<<<< - * cost = 1 - * for p in quads: - */ - __pyx_t_2 = PyList_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 221, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_int_1); - __Pyx_GIVEREF(__pyx_int_1); - PyList_SET_ITEM(__pyx_t_2, 0, __pyx_int_1); - __pyx_v_costs = ((PyObject*)__pyx_t_2); - __pyx_t_2 = 0; - - /* "fontTools/qu2cu/qu2cu.py":222 - * q = [quads[0][0]] - * costs = [1] - * cost = 1 # <<<<<<<<<<<<<< - * for p in quads: - * assert q[-1] == p[0] - */ - __pyx_v_cost = 1; - - /* "fontTools/qu2cu/qu2cu.py":223 - * costs = [1] - * cost = 1 - * for p 
in quads: # <<<<<<<<<<<<<< - * assert q[-1] == p[0] - * for i in range(len(p) - 2): - */ - if (likely(PyList_CheckExact(__pyx_v_quads)) || PyTuple_CheckExact(__pyx_v_quads)) { - __pyx_t_2 = __pyx_v_quads; __Pyx_INCREF(__pyx_t_2); __pyx_t_4 = 0; - __pyx_t_5 = NULL; - } else { - __pyx_t_4 = -1; __pyx_t_2 = PyObject_GetIter(__pyx_v_quads); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 223, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = Py_TYPE(__pyx_t_2)->tp_iternext; if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 223, __pyx_L1_error) - } - for (;;) { - if (likely(!__pyx_t_5)) { - if (likely(PyList_CheckExact(__pyx_t_2))) { - if (__pyx_t_4 >= PyList_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_1 = PyList_GET_ITEM(__pyx_t_2, __pyx_t_4); __Pyx_INCREF(__pyx_t_1); __pyx_t_4++; if (unlikely(0 < 0)) __PYX_ERR(0, 223, __pyx_L1_error) - #else - __pyx_t_1 = PySequence_ITEM(__pyx_t_2, __pyx_t_4); __pyx_t_4++; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 223, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } else { - if (__pyx_t_4 >= PyTuple_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_1 = PyTuple_GET_ITEM(__pyx_t_2, __pyx_t_4); __Pyx_INCREF(__pyx_t_1); __pyx_t_4++; if (unlikely(0 < 0)) __PYX_ERR(0, 223, __pyx_L1_error) - #else - __pyx_t_1 = PySequence_ITEM(__pyx_t_2, __pyx_t_4); __pyx_t_4++; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 223, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } - } else { - __pyx_t_1 = __pyx_t_5(__pyx_t_2); - if (unlikely(!__pyx_t_1)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 223, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_1); - } - __Pyx_XDECREF_SET(__pyx_v_p, __pyx_t_1); - __pyx_t_1 = 0; - - /* "fontTools/qu2cu/qu2cu.py":224 - * cost = 1 - * for p in quads: - * assert q[-1] == p[0] # <<<<<<<<<<<<<< - * for i in range(len(p) - 2): - * cost += 1 - */ - #ifndef CYTHON_WITHOUT_ASSERTIONS - if (unlikely(__pyx_assertions_enabled())) { - __pyx_t_1 = __Pyx_GetItemInt_List(__pyx_v_q, -1L, long, 1, __Pyx_PyInt_From_long, 1, 1, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 224, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_6 = __Pyx_GetItemInt(__pyx_v_p, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 224, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_7 = PyObject_RichCompare(__pyx_t_1, __pyx_t_6, Py_EQ); __Pyx_XGOTREF(__pyx_t_7); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 224, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_3 = __Pyx_PyObject_IsTrue(__pyx_t_7); if (unlikely(__pyx_t_3 < 0)) __PYX_ERR(0, 224, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - if (unlikely(!__pyx_t_3)) { - PyErr_SetNone(PyExc_AssertionError); - __PYX_ERR(0, 224, __pyx_L1_error) - } - } - #endif - - /* "fontTools/qu2cu/qu2cu.py":225 - * for p in quads: - * assert q[-1] == p[0] - * for i in range(len(p) - 2): # <<<<<<<<<<<<<< - * cost += 1 - * costs.append(cost) - */ - __pyx_t_8 = PyObject_Length(__pyx_v_p); if (unlikely(__pyx_t_8 == ((Py_ssize_t)-1))) __PYX_ERR(0, 225, __pyx_L1_error) - __pyx_t_15 = (__pyx_t_8 - 2); - __pyx_t_8 = __pyx_t_15; - for (__pyx_t_16 = 0; __pyx_t_16 < __pyx_t_8; __pyx_t_16+=1) { - __pyx_v_i = __pyx_t_16; - - /* "fontTools/qu2cu/qu2cu.py":226 - * assert q[-1] == p[0] - * for i in range(len(p) - 2): - * cost += 1 # 
<<<<<<<<<<<<<< - * costs.append(cost) - * costs.append(cost) - */ - __pyx_v_cost = (__pyx_v_cost + 1); - - /* "fontTools/qu2cu/qu2cu.py":227 - * for i in range(len(p) - 2): - * cost += 1 - * costs.append(cost) # <<<<<<<<<<<<<< - * costs.append(cost) - * qq = add_implicit_on_curves(p)[1:] - */ - __pyx_t_7 = __Pyx_PyInt_From_int(__pyx_v_cost); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 227, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_17 = __Pyx_PyList_Append(__pyx_v_costs, __pyx_t_7); if (unlikely(__pyx_t_17 == ((int)-1))) __PYX_ERR(0, 227, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - - /* "fontTools/qu2cu/qu2cu.py":228 - * cost += 1 - * costs.append(cost) - * costs.append(cost) # <<<<<<<<<<<<<< - * qq = add_implicit_on_curves(p)[1:] - * costs.pop() - */ - __pyx_t_7 = __Pyx_PyInt_From_int(__pyx_v_cost); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 228, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_17 = __Pyx_PyList_Append(__pyx_v_costs, __pyx_t_7); if (unlikely(__pyx_t_17 == ((int)-1))) __PYX_ERR(0, 228, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - } - - /* "fontTools/qu2cu/qu2cu.py":229 - * costs.append(cost) - * costs.append(cost) - * qq = add_implicit_on_curves(p)[1:] # <<<<<<<<<<<<<< - * costs.pop() - * q.extend(qq) - */ - __Pyx_GetModuleGlobalName(__pyx_t_6, __pyx_n_s_add_implicit_on_curves); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 229, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_1 = NULL; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_6))) { - __pyx_t_1 = PyMethod_GET_SELF(__pyx_t_6); - if (likely(__pyx_t_1)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_6); - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_6, function); - } - } - __pyx_t_7 = (__pyx_t_1) ? 
__Pyx_PyObject_Call2Args(__pyx_t_6, __pyx_t_1, __pyx_v_p) : __Pyx_PyObject_CallOneArg(__pyx_t_6, __pyx_v_p); - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 229, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_6 = __Pyx_PyObject_GetSlice(__pyx_t_7, 1, 0, NULL, NULL, &__pyx_slice_, 1, 0, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 229, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_XDECREF_SET(__pyx_v_qq, __pyx_t_6); - __pyx_t_6 = 0; - - /* "fontTools/qu2cu/qu2cu.py":230 - * costs.append(cost) - * qq = add_implicit_on_curves(p)[1:] - * costs.pop() # <<<<<<<<<<<<<< - * q.extend(qq) - * cost += 1 - */ - __pyx_t_6 = __Pyx_PyList_Pop(__pyx_v_costs); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 230, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - - /* "fontTools/qu2cu/qu2cu.py":231 - * qq = add_implicit_on_curves(p)[1:] - * costs.pop() - * q.extend(qq) # <<<<<<<<<<<<<< - * cost += 1 - * costs.append(cost) - */ - __pyx_t_17 = __Pyx_PyList_Extend(__pyx_v_q, __pyx_v_qq); if (unlikely(__pyx_t_17 == ((int)-1))) __PYX_ERR(0, 231, __pyx_L1_error) - - /* "fontTools/qu2cu/qu2cu.py":232 - * costs.pop() - * q.extend(qq) - * cost += 1 # <<<<<<<<<<<<<< - * costs.append(cost) - * - */ - __pyx_v_cost = (__pyx_v_cost + 1); - - /* "fontTools/qu2cu/qu2cu.py":233 - * q.extend(qq) - * cost += 1 - * costs.append(cost) # <<<<<<<<<<<<<< - * - * curves = spline_to_curves(q, costs, max_err, all_cubic) - */ - __pyx_t_6 = __Pyx_PyInt_From_int(__pyx_v_cost); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 233, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_17 = __Pyx_PyList_Append(__pyx_v_costs, __pyx_t_6); if (unlikely(__pyx_t_17 == ((int)-1))) __PYX_ERR(0, 233, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - - /* "fontTools/qu2cu/qu2cu.py":223 - * costs = [1] - * cost = 1 - * for p in quads: # <<<<<<<<<<<<<< - * assert q[-1] == p[0] - * for i in range(len(p) - 2): - */ - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/qu2cu/qu2cu.py":235 - * costs.append(cost) - * - * curves = spline_to_curves(q, costs, max_err, all_cubic) # <<<<<<<<<<<<<< - * - * if not is_complex: - */ - __Pyx_GetModuleGlobalName(__pyx_t_6, __pyx_n_s_spline_to_curves); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 235, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_7 = PyFloat_FromDouble(__pyx_v_max_err); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 235, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_1 = NULL; - __pyx_t_18 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_6))) { - __pyx_t_1 = PyMethod_GET_SELF(__pyx_t_6); - if (likely(__pyx_t_1)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_6); - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_6, function); - __pyx_t_18 = 1; - } - } - #if CYTHON_FAST_PYCALL - if (PyFunction_Check(__pyx_t_6)) { - PyObject *__pyx_temp[5] = {__pyx_t_1, __pyx_v_q, __pyx_v_costs, __pyx_t_7, __pyx_v_all_cubic}; - __pyx_t_2 = __Pyx_PyFunction_FastCall(__pyx_t_6, __pyx_temp+1-__pyx_t_18, 4+__pyx_t_18); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 235, __pyx_L1_error) - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - } else - #endif - #if CYTHON_FAST_PYCCALL - if (__Pyx_PyFastCFunction_Check(__pyx_t_6)) { - PyObject *__pyx_temp[5] = {__pyx_t_1, __pyx_v_q, __pyx_v_costs, __pyx_t_7, __pyx_v_all_cubic}; - __pyx_t_2 = 
__Pyx_PyCFunction_FastCall(__pyx_t_6, __pyx_temp+1-__pyx_t_18, 4+__pyx_t_18); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 235, __pyx_L1_error) - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - } else - #endif - { - __pyx_t_12 = PyTuple_New(4+__pyx_t_18); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 235, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - if (__pyx_t_1) { - __Pyx_GIVEREF(__pyx_t_1); PyTuple_SET_ITEM(__pyx_t_12, 0, __pyx_t_1); __pyx_t_1 = NULL; - } - __Pyx_INCREF(__pyx_v_q); - __Pyx_GIVEREF(__pyx_v_q); - PyTuple_SET_ITEM(__pyx_t_12, 0+__pyx_t_18, __pyx_v_q); - __Pyx_INCREF(__pyx_v_costs); - __Pyx_GIVEREF(__pyx_v_costs); - PyTuple_SET_ITEM(__pyx_t_12, 1+__pyx_t_18, __pyx_v_costs); - __Pyx_GIVEREF(__pyx_t_7); - PyTuple_SET_ITEM(__pyx_t_12, 2+__pyx_t_18, __pyx_t_7); - __Pyx_INCREF(__pyx_v_all_cubic); - __Pyx_GIVEREF(__pyx_v_all_cubic); - PyTuple_SET_ITEM(__pyx_t_12, 3+__pyx_t_18, __pyx_v_all_cubic); - __pyx_t_7 = 0; - __pyx_t_2 = __Pyx_PyObject_Call(__pyx_t_6, __pyx_t_12, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 235, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - } - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_v_curves = __pyx_t_2; - __pyx_t_2 = 0; - - /* "fontTools/qu2cu/qu2cu.py":237 - * curves = spline_to_curves(q, costs, max_err, all_cubic) - * - * if not is_complex: # <<<<<<<<<<<<<< - * curves = [tuple((c.real, c.imag) for c in curve) for curve in curves] - * return curves - */ - __pyx_t_3 = ((!(__pyx_v_is_complex != 0)) != 0); - if (__pyx_t_3) { - - /* "fontTools/qu2cu/qu2cu.py":238 - * - * if not is_complex: - * curves = [tuple((c.real, c.imag) for c in curve) for curve in curves] # <<<<<<<<<<<<<< - * return curves - * - */ - { /* enter inner scope */ - __pyx_t_2 = PyList_New(0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 238, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (likely(PyList_CheckExact(__pyx_v_curves)) || PyTuple_CheckExact(__pyx_v_curves)) { - __pyx_t_6 = __pyx_v_curves; __Pyx_INCREF(__pyx_t_6); __pyx_t_4 = 0; - __pyx_t_5 = NULL; - } else { - __pyx_t_4 = -1; __pyx_t_6 = PyObject_GetIter(__pyx_v_curves); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 238, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_5 = Py_TYPE(__pyx_t_6)->tp_iternext; if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 238, __pyx_L1_error) - } - for (;;) { - if (likely(!__pyx_t_5)) { - if (likely(PyList_CheckExact(__pyx_t_6))) { - if (__pyx_t_4 >= PyList_GET_SIZE(__pyx_t_6)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_12 = PyList_GET_ITEM(__pyx_t_6, __pyx_t_4); __Pyx_INCREF(__pyx_t_12); __pyx_t_4++; if (unlikely(0 < 0)) __PYX_ERR(0, 238, __pyx_L1_error) - #else - __pyx_t_12 = PySequence_ITEM(__pyx_t_6, __pyx_t_4); __pyx_t_4++; if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 238, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - #endif - } else { - if (__pyx_t_4 >= PyTuple_GET_SIZE(__pyx_t_6)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_12 = PyTuple_GET_ITEM(__pyx_t_6, __pyx_t_4); __Pyx_INCREF(__pyx_t_12); __pyx_t_4++; if (unlikely(0 < 0)) __PYX_ERR(0, 238, __pyx_L1_error) - #else - __pyx_t_12 = PySequence_ITEM(__pyx_t_6, __pyx_t_4); __pyx_t_4++; if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 238, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - #endif - } - } else { - __pyx_t_12 = __pyx_t_5(__pyx_t_6); - if (unlikely(!__pyx_t_12)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, 
PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 238, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_12); - } - __Pyx_XGOTREF(__pyx_cur_scope->__pyx_8genexpr3__pyx_v_curve); - __Pyx_XDECREF_SET(__pyx_cur_scope->__pyx_8genexpr3__pyx_v_curve, __pyx_t_12); - __Pyx_GIVEREF(__pyx_t_12); - __pyx_t_12 = 0; - __pyx_t_12 = __pyx_pf_9fontTools_5qu2cu_5qu2cu_19quadratic_to_curves_8genexpr3_genexpr(((PyObject*)__pyx_cur_scope)); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 238, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_7 = __Pyx_PySequence_Tuple(__pyx_t_12); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 238, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - if (unlikely(__Pyx_ListComp_Append(__pyx_t_2, (PyObject*)__pyx_t_7))) __PYX_ERR(0, 238, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - } - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } /* exit inner scope */ - __Pyx_DECREF_SET(__pyx_v_curves, __pyx_t_2); - __pyx_t_2 = 0; - - /* "fontTools/qu2cu/qu2cu.py":237 - * curves = spline_to_curves(q, costs, max_err, all_cubic) - * - * if not is_complex: # <<<<<<<<<<<<<< - * curves = [tuple((c.real, c.imag) for c in curve) for curve in curves] - * return curves - */ - } - - /* "fontTools/qu2cu/qu2cu.py":239 - * if not is_complex: - * curves = [tuple((c.real, c.imag) for c in curve) for curve in curves] - * return curves # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_curves); - __pyx_r = __pyx_v_curves; - goto __pyx_L0; - - /* "fontTools/qu2cu/qu2cu.py":185 - * is_complex=cython.int, - * ) - * def quadratic_to_curves( # <<<<<<<<<<<<<< - * quads: List[List[Point]], - * max_err: float = 0.5, - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_10); - __Pyx_XDECREF(__pyx_t_11); - __Pyx_XDECREF(__pyx_t_12); - __Pyx_XDECREF(__pyx_t_13); - __Pyx_AddTraceback("fontTools.qu2cu.qu2cu.quadratic_to_curves", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_q); - __Pyx_XDECREF(__pyx_v_costs); - __Pyx_XDECREF(__pyx_v_p); - __Pyx_XDECREF(__pyx_v_qq); - __Pyx_XDECREF(__pyx_v_curves); - __Pyx_XDECREF(__pyx_8genexpr1__pyx_v_p); - __Pyx_XDECREF(__pyx_8genexpr2__pyx_v_x); - __Pyx_XDECREF(__pyx_8genexpr2__pyx_v_y); - __Pyx_XDECREF(__pyx_8genexpr3__pyx_v_0); - __Pyx_XDECREF(__pyx_v_quads); - __Pyx_DECREF(((PyObject *)__pyx_cur_scope)); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/qu2cu/qu2cu.py":268 - * u=cython.complex, - * ) - * def spline_to_curves(q, costs, tolerance=0.5, all_cubic=False): # <<<<<<<<<<<<<< - * """ - * q: quadratic spline with alternating on-curve / off-curve points. - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_5qu2cu_5qu2cu_7spline_to_curves(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_9fontTools_5qu2cu_5qu2cu_6spline_to_curves[] = "spline_to_curves(q, costs, double tolerance=0.5, int all_cubic=False)\n\n q: quadratic spline with alternating on-curve / off-curve points.\n\n costs: cumulative list of encoding cost of q in terms of number of\n points that need to be encoded. Implied on-curve points do not\n contribute to the cost. 
If all points need to be encoded, then\n costs will be range(1, len(q)+1).\n "; -static PyMethodDef __pyx_mdef_9fontTools_5qu2cu_5qu2cu_7spline_to_curves = {"spline_to_curves", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_9fontTools_5qu2cu_5qu2cu_7spline_to_curves, METH_VARARGS|METH_KEYWORDS, __pyx_doc_9fontTools_5qu2cu_5qu2cu_6spline_to_curves}; -static PyObject *__pyx_pw_9fontTools_5qu2cu_5qu2cu_7spline_to_curves(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_q = 0; - PyObject *__pyx_v_costs = 0; - double __pyx_v_tolerance; - int __pyx_v_all_cubic; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("spline_to_curves (wrapper)", 0); - { - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_q,&__pyx_n_s_costs,&__pyx_n_s_tolerance,&__pyx_n_s_all_cubic,0}; - PyObject* values[4] = {0,0,0,0}; - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args; - const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); - switch (pos_args) { - case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = PyDict_Size(__pyx_kwds); - switch (pos_args) { - case 0: - if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_q)) != 0)) kw_args--; - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_costs)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("spline_to_curves", 0, 2, 4, 1); __PYX_ERR(0, 268, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (kw_args > 0) { - PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_tolerance); - if (value) { values[2] = value; kw_args--; } - } - CYTHON_FALLTHROUGH; - case 3: - if (kw_args > 0) { - PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_all_cubic); - if (value) { values[3] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "spline_to_curves") < 0)) __PYX_ERR(0, 268, __pyx_L3_error) - } - } else { - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - __pyx_v_q = values[0]; - __pyx_v_costs = values[1]; - if (values[2]) { - __pyx_v_tolerance = __pyx_PyFloat_AsDouble(values[2]); if (unlikely((__pyx_v_tolerance == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 268, __pyx_L3_error) - } else { - __pyx_v_tolerance = ((double)((double)0.5)); - } - if (values[3]) { - __pyx_v_all_cubic = __Pyx_PyInt_As_int(values[3]); if (unlikely((__pyx_v_all_cubic == (int)-1) && PyErr_Occurred())) __PYX_ERR(0, 268, __pyx_L3_error) - } else { - __pyx_v_all_cubic = ((int)((int)0)); - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("spline_to_curves", 0, 2, 4, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 268, __pyx_L3_error) - 
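-/* A worked example of the `costs` argument, derived from the docstring
- * above (the concrete spline is invented for illustration): for a 5-point
- * spline q in which every point must be encoded, costs == [1, 2, 3, 4, 5],
- * i.e. range(1, len(q)+1). If instead the middle on-curve point q[2] were
- * implied (the midpoint of its two neighbouring off-curve points), it would
- * contribute nothing to the running total, giving [1, 2, 2, 3, 4].
- */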
__pyx_L3_error:; - __Pyx_AddTraceback("fontTools.qu2cu.qu2cu.spline_to_curves", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_5qu2cu_5qu2cu_6spline_to_curves(__pyx_self, __pyx_v_q, __pyx_v_costs, __pyx_v_tolerance, __pyx_v_all_cubic); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} -static PyObject *__pyx_gb_9fontTools_5qu2cu_5qu2cu_16spline_to_curves_2generator1(__pyx_CoroutineObject *__pyx_generator, CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject *__pyx_sent_value); /* proto */ - -/* "fontTools/qu2cu/qu2cu.py":343 - * for k, reconst in enumerate(reconstructed): - * orig = elevated_quadratics[j + k] - * p0, p1, p2, p3 = tuple(v - u for v, u in zip(reconst, orig)) # <<<<<<<<<<<<<< - * - * if not cubic_farthest_fit_inside(p0, p1, p2, p3, tolerance): - */ - -static PyObject *__pyx_pf_9fontTools_5qu2cu_5qu2cu_16spline_to_curves_genexpr(PyObject *__pyx_self) { - struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_3_genexpr *__pyx_cur_scope; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("genexpr", 0); - __pyx_cur_scope = (struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_3_genexpr *)__pyx_tp_new_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_3_genexpr(__pyx_ptype_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_3_genexpr, __pyx_empty_tuple, NULL); - if (unlikely(!__pyx_cur_scope)) { - __pyx_cur_scope = ((struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_3_genexpr *)Py_None); - __Pyx_INCREF(Py_None); - __PYX_ERR(0, 343, __pyx_L1_error) - } else { - __Pyx_GOTREF(__pyx_cur_scope); - } - __pyx_cur_scope->__pyx_outer_scope = (struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves *) __pyx_self; - __Pyx_INCREF(((PyObject *)__pyx_cur_scope->__pyx_outer_scope)); - __Pyx_GIVEREF(__pyx_cur_scope->__pyx_outer_scope); - { - __pyx_CoroutineObject *gen = __Pyx_Generator_New((__pyx_coroutine_body_t) __pyx_gb_9fontTools_5qu2cu_5qu2cu_16spline_to_curves_2generator1, NULL, (PyObject *) __pyx_cur_scope, __pyx_n_s_genexpr, __pyx_n_s_spline_to_curves_locals_genexpr, __pyx_n_s_fontTools_qu2cu_qu2cu); if (unlikely(!gen)) __PYX_ERR(0, 343, __pyx_L1_error) - __Pyx_DECREF(__pyx_cur_scope); - __Pyx_RefNannyFinishContext(); - return (PyObject *) gen; - } - - /* function exit code */ - __pyx_L1_error:; - __Pyx_AddTraceback("fontTools.qu2cu.qu2cu.spline_to_curves.genexpr", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_DECREF(((PyObject *)__pyx_cur_scope)); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_gb_9fontTools_5qu2cu_5qu2cu_16spline_to_curves_2generator1(__pyx_CoroutineObject *__pyx_generator, CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject *__pyx_sent_value) /* generator body */ -{ - struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_3_genexpr *__pyx_cur_scope = ((struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_3_genexpr *)__pyx_generator->closure); - PyObject *__pyx_r = NULL; - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - Py_ssize_t __pyx_t_3; - PyObject *(*__pyx_t_4)(PyObject *); - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - PyObject *(*__pyx_t_8)(PyObject *); - __pyx_t_double_complex __pyx_t_9; - __pyx_t_double_complex 
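-/* What the quoted source line 343 computes: this generator yields the four
- * component-wise differences v - u between a reconstructed cubic's control
- * points and those of the original elevated quadratic. Per the quoted
- * source, those difference points are then passed to
- * cubic_farthest_fit_inside with `tolerance`, so the candidate is accepted
- * only when the resulting error curve fits within the tolerance.
- */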
__pyx_t_10; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("genexpr", 0); - switch (__pyx_generator->resume_label) { - case 0: goto __pyx_L3_first_run; - case 1: goto __pyx_L8_resume_from_yield; - default: /* CPython raises the right error here */ - __Pyx_RefNannyFinishContext(); - return NULL; - } - __pyx_L3_first_run:; - if (unlikely(!__pyx_sent_value)) __PYX_ERR(0, 343, __pyx_L1_error) - if (unlikely(!__pyx_cur_scope->__pyx_outer_scope->__pyx_v_reconst)) { __Pyx_RaiseClosureNameError("reconst"); __PYX_ERR(0, 343, __pyx_L1_error) } - if (unlikely(!__pyx_cur_scope->__pyx_outer_scope->__pyx_v_orig)) { __Pyx_RaiseClosureNameError("orig"); __PYX_ERR(0, 343, __pyx_L1_error) } - __pyx_t_1 = PyTuple_New(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 343, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_cur_scope->__pyx_outer_scope->__pyx_v_reconst); - __Pyx_GIVEREF(__pyx_cur_scope->__pyx_outer_scope->__pyx_v_reconst); - PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_cur_scope->__pyx_outer_scope->__pyx_v_reconst); - __Pyx_INCREF(__pyx_cur_scope->__pyx_outer_scope->__pyx_v_orig); - __Pyx_GIVEREF(__pyx_cur_scope->__pyx_outer_scope->__pyx_v_orig); - PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_cur_scope->__pyx_outer_scope->__pyx_v_orig); - __pyx_t_2 = __Pyx_PyObject_Call(__pyx_builtin_zip, __pyx_t_1, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 343, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (likely(PyList_CheckExact(__pyx_t_2)) || PyTuple_CheckExact(__pyx_t_2)) { - __pyx_t_1 = __pyx_t_2; __Pyx_INCREF(__pyx_t_1); __pyx_t_3 = 0; - __pyx_t_4 = NULL; - } else { - __pyx_t_3 = -1; __pyx_t_1 = PyObject_GetIter(__pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 343, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_4 = Py_TYPE(__pyx_t_1)->tp_iternext; if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 343, __pyx_L1_error) - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - for (;;) { - if (likely(!__pyx_t_4)) { - if (likely(PyList_CheckExact(__pyx_t_1))) { - if (__pyx_t_3 >= PyList_GET_SIZE(__pyx_t_1)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_2 = PyList_GET_ITEM(__pyx_t_1, __pyx_t_3); __Pyx_INCREF(__pyx_t_2); __pyx_t_3++; if (unlikely(0 < 0)) __PYX_ERR(0, 343, __pyx_L1_error) - #else - __pyx_t_2 = PySequence_ITEM(__pyx_t_1, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 343, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - #endif - } else { - if (__pyx_t_3 >= PyTuple_GET_SIZE(__pyx_t_1)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_2 = PyTuple_GET_ITEM(__pyx_t_1, __pyx_t_3); __Pyx_INCREF(__pyx_t_2); __pyx_t_3++; if (unlikely(0 < 0)) __PYX_ERR(0, 343, __pyx_L1_error) - #else - __pyx_t_2 = PySequence_ITEM(__pyx_t_1, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 343, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - #endif - } - } else { - __pyx_t_2 = __pyx_t_4(__pyx_t_1); - if (unlikely(!__pyx_t_2)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 343, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_2); - } - if ((likely(PyTuple_CheckExact(__pyx_t_2))) || (PyList_CheckExact(__pyx_t_2))) { - PyObject* sequence = __pyx_t_2; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - 
else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 343, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_5 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_6 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_5 = PyList_GET_ITEM(sequence, 0); - __pyx_t_6 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(__pyx_t_6); - #else - __pyx_t_5 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 343, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 343, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - #endif - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_7 = PyObject_GetIter(__pyx_t_2); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 343, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_8 = Py_TYPE(__pyx_t_7)->tp_iternext; - index = 0; __pyx_t_5 = __pyx_t_8(__pyx_t_7); if (unlikely(!__pyx_t_5)) goto __pyx_L6_unpacking_failed; - __Pyx_GOTREF(__pyx_t_5); - index = 1; __pyx_t_6 = __pyx_t_8(__pyx_t_7); if (unlikely(!__pyx_t_6)) goto __pyx_L6_unpacking_failed; - __Pyx_GOTREF(__pyx_t_6); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_8(__pyx_t_7), 2) < 0) __PYX_ERR(0, 343, __pyx_L1_error) - __pyx_t_8 = NULL; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - goto __pyx_L7_unpacking_done; - __pyx_L6_unpacking_failed:; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_8 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 343, __pyx_L1_error) - __pyx_L7_unpacking_done:; - } - __pyx_t_9 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_5); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 343, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_10 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_6); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 343, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_cur_scope->__pyx_v_v = __pyx_t_9; - __pyx_cur_scope->__pyx_v_u = __pyx_t_10; - __pyx_t_10 = __Pyx_c_diff_double(__pyx_cur_scope->__pyx_v_v, __pyx_cur_scope->__pyx_v_u); - __pyx_t_2 = __pyx_PyComplex_FromComplex(__pyx_t_10); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 343, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - __Pyx_XGIVEREF(__pyx_t_1); - __pyx_cur_scope->__pyx_t_0 = __pyx_t_1; - __pyx_cur_scope->__pyx_t_1 = __pyx_t_3; - __pyx_cur_scope->__pyx_t_2 = __pyx_t_4; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - __Pyx_Coroutine_ResetAndClearException(__pyx_generator); - /* return from generator, yielding value */ - __pyx_generator->resume_label = 1; - return __pyx_r; - __pyx_L8_resume_from_yield:; - __pyx_t_1 = __pyx_cur_scope->__pyx_t_0; - __pyx_cur_scope->__pyx_t_0 = 0; - __Pyx_XGOTREF(__pyx_t_1); - __pyx_t_3 = __pyx_cur_scope->__pyx_t_1; - __pyx_t_4 = __pyx_cur_scope->__pyx_t_2; - if (unlikely(!__pyx_sent_value)) __PYX_ERR(0, 343, __pyx_L1_error) - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - CYTHON_MAYBE_UNUSED_VAR(__pyx_cur_scope); - - /* function exit code */ - PyErr_SetNone(PyExc_StopIteration); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_AddTraceback("genexpr", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_L0:; - __Pyx_XDECREF(__pyx_r); __pyx_r = 0; 
- #if !CYTHON_USE_EXC_INFO_STACK - __Pyx_Coroutine_ResetAndClearException(__pyx_generator); - #endif - __pyx_generator->resume_label = -1; - __Pyx_Coroutine_clear((PyObject*)__pyx_generator); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/qu2cu/qu2cu.py":268 - * u=cython.complex, - * ) - * def spline_to_curves(q, costs, tolerance=0.5, all_cubic=False): # <<<<<<<<<<<<<< - * """ - * q: quadratic spline with alternating on-curve / off-curve points. - */ - -static PyObject *__pyx_pf_9fontTools_5qu2cu_5qu2cu_6spline_to_curves(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_q, PyObject *__pyx_v_costs, double __pyx_v_tolerance, int __pyx_v_all_cubic) { - struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves *__pyx_cur_scope; - int __pyx_v_i; - int __pyx_v_j; - int __pyx_v_k; - int __pyx_v_start; - int __pyx_v_i_sol_count; - int __pyx_v_j_sol_count; - double __pyx_v_err; - double __pyx_v_error; - double __pyx_v_i_sol_error; - double __pyx_v_j_sol_error; - int __pyx_v_is_cubic; - int __pyx_v_count; - __pyx_t_double_complex __pyx_v_p0; - __pyx_t_double_complex __pyx_v_p1; - __pyx_t_double_complex __pyx_v_p2; - __pyx_t_double_complex __pyx_v_p3; - PyObject *__pyx_v_elevated_quadratics = NULL; - PyObject *__pyx_v_forced = NULL; - PyObject *__pyx_v_sols = NULL; - PyObject *__pyx_v_impossible = NULL; - PyObject *__pyx_v_best_sol = NULL; - PyObject *__pyx_v_this_count = NULL; - PyObject *__pyx_v_i_sol = NULL; - PyObject *__pyx_v_curve = NULL; - PyObject *__pyx_v_ts = NULL; - PyObject *__pyx_v_reconstructed_iter = NULL; - PyObject *__pyx_v_reconstructed = NULL; - PyObject *__pyx_v_splits = NULL; - PyObject *__pyx_v_cubic = NULL; - PyObject *__pyx_v_curves = NULL; - int __pyx_8genexpr5__pyx_v_i; - PyObject *__pyx_gb_9fontTools_5qu2cu_5qu2cu_16spline_to_curves_2generator1 = 0; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - Py_ssize_t __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - Py_ssize_t __pyx_t_3; - int __pyx_t_4; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - long __pyx_t_8; - __pyx_t_double_complex __pyx_t_9; - int __pyx_t_10; - int __pyx_t_11; - PyObject *__pyx_t_12 = NULL; - int __pyx_t_13; - int __pyx_t_14; - int __pyx_t_15; - int __pyx_t_16; - double __pyx_t_17; - PyObject *__pyx_t_18 = NULL; - PyObject *__pyx_t_19 = NULL; - PyObject *__pyx_t_20 = NULL; - PyObject *__pyx_t_21 = NULL; - PyObject *__pyx_t_22 = NULL; - PyObject *(*__pyx_t_23)(PyObject *); - Py_ssize_t __pyx_t_24; - PyObject *(*__pyx_t_25)(PyObject *); - int __pyx_t_26; - double __pyx_t_27; - double __pyx_t_28; - __pyx_t_double_complex __pyx_t_29; - __pyx_t_double_complex __pyx_t_30; - __pyx_t_double_complex __pyx_t_31; - int __pyx_t_32; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("spline_to_curves", 0); - __pyx_cur_scope = (struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves *)__pyx_tp_new_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves(__pyx_ptype_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves, __pyx_empty_tuple, NULL); - if (unlikely(!__pyx_cur_scope)) { - __pyx_cur_scope = ((struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves *)Py_None); - __Pyx_INCREF(Py_None); - __PYX_ERR(0, 268, __pyx_L1_error) - } else { - __Pyx_GOTREF(__pyx_cur_scope); - } - - /* "fontTools/qu2cu/qu2cu.py":278 - * """ - * - * assert len(q) >= 3, "quadratic spline requires at least 3 points" # 
<<<<<<<<<<<<<< - * - * # Elevate quadratic segments to cubic - */ - #ifndef CYTHON_WITHOUT_ASSERTIONS - if (unlikely(__pyx_assertions_enabled())) { - __pyx_t_1 = PyObject_Length(__pyx_v_q); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(0, 278, __pyx_L1_error) - if (unlikely(!((__pyx_t_1 >= 3) != 0))) { - PyErr_SetObject(PyExc_AssertionError, __pyx_kp_u_quadratic_spline_requires_at_lea); - __PYX_ERR(0, 278, __pyx_L1_error) - } - } - #endif - - /* "fontTools/qu2cu/qu2cu.py":281 - * - * # Elevate quadratic segments to cubic - * elevated_quadratics = [ # <<<<<<<<<<<<<< - * elevate_quadratic(*q[i : i + 3]) for i in range(0, len(q) - 2, 2) - * ] - */ - { /* enter inner scope */ - __pyx_t_2 = PyList_New(0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 281, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - - /* "fontTools/qu2cu/qu2cu.py":282 - * # Elevate quadratic segments to cubic - * elevated_quadratics = [ - * elevate_quadratic(*q[i : i + 3]) for i in range(0, len(q) - 2, 2) # <<<<<<<<<<<<<< - * ] - * - */ - __pyx_t_1 = PyObject_Length(__pyx_v_q); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(0, 282, __pyx_L1_error) - __pyx_t_3 = (__pyx_t_1 - 2); - __pyx_t_1 = __pyx_t_3; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_1; __pyx_t_4+=2) { - __pyx_8genexpr5__pyx_v_i = __pyx_t_4; - __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_elevate_quadratic); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 282, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = __Pyx_PyObject_GetSlice(__pyx_v_q, __pyx_8genexpr5__pyx_v_i, (__pyx_8genexpr5__pyx_v_i + 3), NULL, NULL, NULL, 1, 1, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 282, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_7 = __Pyx_PySequence_Tuple(__pyx_t_6); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 282, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_6 = __Pyx_PyObject_Call(__pyx_t_5, __pyx_t_7, NULL); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 282, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - if (unlikely(__Pyx_ListComp_Append(__pyx_t_2, (PyObject*)__pyx_t_6))) __PYX_ERR(0, 281, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } - } /* exit inner scope */ - __pyx_v_elevated_quadratics = ((PyObject*)__pyx_t_2); - __pyx_t_2 = 0; - - /* "fontTools/qu2cu/qu2cu.py":286 - * - * # Find sharp corners; they have to be oncurves for sure. - * forced = set() # <<<<<<<<<<<<<< - * for i in range(1, len(elevated_quadratics)): - * p0 = elevated_quadratics[i - 1][2] - */ - __pyx_t_2 = PySet_New(0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 286, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_v_forced = ((PyObject*)__pyx_t_2); - __pyx_t_2 = 0; - - /* "fontTools/qu2cu/qu2cu.py":287 - * # Find sharp corners; they have to be oncurves for sure. 
- * forced = set() - * for i in range(1, len(elevated_quadratics)): # <<<<<<<<<<<<<< - * p0 = elevated_quadratics[i - 1][2] - * p1 = elevated_quadratics[i][0] - */ - __pyx_t_3 = PyList_GET_SIZE(__pyx_v_elevated_quadratics); if (unlikely(__pyx_t_3 == ((Py_ssize_t)-1))) __PYX_ERR(0, 287, __pyx_L1_error) - __pyx_t_1 = __pyx_t_3; - for (__pyx_t_4 = 1; __pyx_t_4 < __pyx_t_1; __pyx_t_4+=1) { - __pyx_v_i = __pyx_t_4; - - /* "fontTools/qu2cu/qu2cu.py":288 - * forced = set() - * for i in range(1, len(elevated_quadratics)): - * p0 = elevated_quadratics[i - 1][2] # <<<<<<<<<<<<<< - * p1 = elevated_quadratics[i][0] - * p2 = elevated_quadratics[i][1] - */ - __pyx_t_8 = (__pyx_v_i - 1); - __pyx_t_2 = __Pyx_GetItemInt_List(__pyx_v_elevated_quadratics, __pyx_t_8, long, 1, __Pyx_PyInt_From_long, 1, 1, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 288, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_6 = __Pyx_GetItemInt(__pyx_t_2, 2, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 288, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_9 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_6); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 288, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_v_p0 = __pyx_t_9; - - /* "fontTools/qu2cu/qu2cu.py":289 - * for i in range(1, len(elevated_quadratics)): - * p0 = elevated_quadratics[i - 1][2] - * p1 = elevated_quadratics[i][0] # <<<<<<<<<<<<<< - * p2 = elevated_quadratics[i][1] - * if abs(p1 - p0) + abs(p2 - p1) > tolerance + abs(p2 - p0): - */ - __pyx_t_6 = __Pyx_GetItemInt_List(__pyx_v_elevated_quadratics, __pyx_v_i, int, 1, __Pyx_PyInt_From_int, 1, 1, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 289, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_2 = __Pyx_GetItemInt(__pyx_t_6, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 289, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_9 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_2); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 289, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_v_p1 = __pyx_t_9; - - /* "fontTools/qu2cu/qu2cu.py":290 - * p0 = elevated_quadratics[i - 1][2] - * p1 = elevated_quadratics[i][0] - * p2 = elevated_quadratics[i][1] # <<<<<<<<<<<<<< - * if abs(p1 - p0) + abs(p2 - p1) > tolerance + abs(p2 - p0): - * forced.add(i) - */ - __pyx_t_2 = __Pyx_GetItemInt_List(__pyx_v_elevated_quadratics, __pyx_v_i, int, 1, __Pyx_PyInt_From_int, 1, 1, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 290, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_6 = __Pyx_GetItemInt(__pyx_t_2, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 290, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_9 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_6); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 290, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_v_p2 = __pyx_t_9; - - /* "fontTools/qu2cu/qu2cu.py":291 - * p1 = elevated_quadratics[i][0] - * p2 = elevated_quadratics[i][1] - * if abs(p1 - p0) + abs(p2 - p1) > tolerance + abs(p2 - p0): # <<<<<<<<<<<<<< - * forced.add(i) - * - */ - __pyx_t_10 = (((__Pyx_c_abs_double(__Pyx_c_diff_double(__pyx_v_p1, __pyx_v_p0)) + __Pyx_c_abs_double(__Pyx_c_diff_double(__pyx_v_p2, __pyx_v_p1))) > (__pyx_v_tolerance + __Pyx_c_abs_double(__Pyx_c_diff_double(__pyx_v_p2, __pyx_v_p0)))) != 0); - if 
(__pyx_t_10) { - - /* "fontTools/qu2cu/qu2cu.py":292 - * p2 = elevated_quadratics[i][1] - * if abs(p1 - p0) + abs(p2 - p1) > tolerance + abs(p2 - p0): - * forced.add(i) # <<<<<<<<<<<<<< - * - * # Dynamic-Programming to find the solution with fewest number of - */ - __pyx_t_6 = __Pyx_PyInt_From_int(__pyx_v_i); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 292, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_11 = PySet_Add(__pyx_v_forced, __pyx_t_6); if (unlikely(__pyx_t_11 == ((int)-1))) __PYX_ERR(0, 292, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - - /* "fontTools/qu2cu/qu2cu.py":291 - * p1 = elevated_quadratics[i][0] - * p2 = elevated_quadratics[i][1] - * if abs(p1 - p0) + abs(p2 - p1) > tolerance + abs(p2 - p0): # <<<<<<<<<<<<<< - * forced.add(i) - * - */ - } - } - - /* "fontTools/qu2cu/qu2cu.py":296 - * # Dynamic-Programming to find the solution with fewest number of - * # cubic curves, and within those the one with smallest error. - * sols = [Solution(0, 0, 0, False)] # <<<<<<<<<<<<<< - * impossible = Solution(len(elevated_quadratics) * 3 + 1, 0, 1, False) - * start = 0 - */ - __Pyx_GetModuleGlobalName(__pyx_t_6, __pyx_n_s_Solution); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 296, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_2 = __Pyx_PyObject_Call(__pyx_t_6, __pyx_tuple__2, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 296, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_6 = PyList_New(1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 296, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_GIVEREF(__pyx_t_2); - PyList_SET_ITEM(__pyx_t_6, 0, __pyx_t_2); - __pyx_t_2 = 0; - __pyx_v_sols = ((PyObject*)__pyx_t_6); - __pyx_t_6 = 0; - - /* "fontTools/qu2cu/qu2cu.py":297 - * # cubic curves, and within those the one with smallest error. 
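- *
- * A hedged sketch of the DP state used below. In the Python source,
- * Solution is a 4-field tuple compared with "<", so candidates order
- * lexicographically: fewest points first, then smallest error; the two
- * trailing fields only break exact ties. The "impossible" sentinel is
- * built with more points than any real solution could spend. In C terms:
- *
- *   #include <stdbool.h>
- *
- *   typedef struct {
- *       int    num_points;   // points the solution spends so far
- *       double error;        // worst error accumulated so far
- *       int    start_index;  // quadratics consumed by the last piece
- *       bool   is_cubic;     // whether the last piece is one cubic
- *   } Solution;
- *
- *   static bool solution_less(Solution a, Solution b)
- *   {
- *       if (a.num_points != b.num_points) return a.num_points < b.num_points;
- *       return a.error < b.error;  // remaining fields matter only on exact ties
- *   }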
- * sols = [Solution(0, 0, 0, False)] - * impossible = Solution(len(elevated_quadratics) * 3 + 1, 0, 1, False) # <<<<<<<<<<<<<< - * start = 0 - * for i in range(1, len(elevated_quadratics) + 1): - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_Solution); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 297, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyList_GET_SIZE(__pyx_v_elevated_quadratics); if (unlikely(__pyx_t_3 == ((Py_ssize_t)-1))) __PYX_ERR(0, 297, __pyx_L1_error) - __pyx_t_7 = PyInt_FromSsize_t(((__pyx_t_3 * 3) + 1)); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 297, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_5 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_5)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_4 = 1; - } - } - #if CYTHON_FAST_PYCALL - if (PyFunction_Check(__pyx_t_2)) { - PyObject *__pyx_temp[5] = {__pyx_t_5, __pyx_t_7, __pyx_int_0, __pyx_int_1, Py_False}; - __pyx_t_6 = __Pyx_PyFunction_FastCall(__pyx_t_2, __pyx_temp+1-__pyx_t_4, 4+__pyx_t_4); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 297, __pyx_L1_error) - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - } else - #endif - #if CYTHON_FAST_PYCCALL - if (__Pyx_PyFastCFunction_Check(__pyx_t_2)) { - PyObject *__pyx_temp[5] = {__pyx_t_5, __pyx_t_7, __pyx_int_0, __pyx_int_1, Py_False}; - __pyx_t_6 = __Pyx_PyCFunction_FastCall(__pyx_t_2, __pyx_temp+1-__pyx_t_4, 4+__pyx_t_4); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 297, __pyx_L1_error) - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - } else - #endif - { - __pyx_t_12 = PyTuple_New(4+__pyx_t_4); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 297, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - if (__pyx_t_5) { - __Pyx_GIVEREF(__pyx_t_5); PyTuple_SET_ITEM(__pyx_t_12, 0, __pyx_t_5); __pyx_t_5 = NULL; - } - __Pyx_GIVEREF(__pyx_t_7); - PyTuple_SET_ITEM(__pyx_t_12, 0+__pyx_t_4, __pyx_t_7); - __Pyx_INCREF(__pyx_int_0); - __Pyx_GIVEREF(__pyx_int_0); - PyTuple_SET_ITEM(__pyx_t_12, 1+__pyx_t_4, __pyx_int_0); - __Pyx_INCREF(__pyx_int_1); - __Pyx_GIVEREF(__pyx_int_1); - PyTuple_SET_ITEM(__pyx_t_12, 2+__pyx_t_4, __pyx_int_1); - __Pyx_INCREF(Py_False); - __Pyx_GIVEREF(Py_False); - PyTuple_SET_ITEM(__pyx_t_12, 3+__pyx_t_4, Py_False); - __pyx_t_7 = 0; - __pyx_t_6 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_12, NULL); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 297, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_v_impossible = __pyx_t_6; - __pyx_t_6 = 0; - - /* "fontTools/qu2cu/qu2cu.py":298 - * sols = [Solution(0, 0, 0, False)] - * impossible = Solution(len(elevated_quadratics) * 3 + 1, 0, 1, False) - * start = 0 # <<<<<<<<<<<<<< - * for i in range(1, len(elevated_quadratics) + 1): - * best_sol = impossible - */ - __pyx_v_start = 0; - - /* "fontTools/qu2cu/qu2cu.py":299 - * impossible = Solution(len(elevated_quadratics) * 3 + 1, 0, 1, False) - * start = 0 - * for i in range(1, len(elevated_quadratics) + 1): # <<<<<<<<<<<<<< - * best_sol = impossible - * for j in range(start, i): - */ - __pyx_t_3 = PyList_GET_SIZE(__pyx_v_elevated_quadratics); if (unlikely(__pyx_t_3 == ((Py_ssize_t)-1))) __PYX_ERR(0, 299, __pyx_L1_error) - __pyx_t_1 = (__pyx_t_3 + 1); - 
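- /* A hedged sketch of the pricing used in the inner loop below
-  * (qu2cu.py:306): `costs` is indexed at 2*i - 1 and 2*j and differenced,
-  * which suggests it is a running total of per-point costs along the
-  * spline (it is built elsewhere in this module), so keeping segments
-  * j..i-1 as quadratics costs:
-  */
- #if 0
- static int quadratic_run_points(const int *costs, int j, int i)
- {
-     /* difference of running totals, plus the shared on-curve point */
-     return costs[2 * i - 1] - costs[2 * j] + 1;
- }
- #endif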
__pyx_t_3 = __pyx_t_1; - for (__pyx_t_4 = 1; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_i = __pyx_t_4; - - /* "fontTools/qu2cu/qu2cu.py":300 - * start = 0 - * for i in range(1, len(elevated_quadratics) + 1): - * best_sol = impossible # <<<<<<<<<<<<<< - * for j in range(start, i): - * j_sol_count, j_sol_error = sols[j].num_points, sols[j].error - */ - __Pyx_INCREF(__pyx_v_impossible); - __Pyx_XDECREF_SET(__pyx_v_best_sol, __pyx_v_impossible); - - /* "fontTools/qu2cu/qu2cu.py":301 - * for i in range(1, len(elevated_quadratics) + 1): - * best_sol = impossible - * for j in range(start, i): # <<<<<<<<<<<<<< - * j_sol_count, j_sol_error = sols[j].num_points, sols[j].error - * - */ - __pyx_t_13 = __pyx_v_i; - __pyx_t_14 = __pyx_t_13; - for (__pyx_t_15 = __pyx_v_start; __pyx_t_15 < __pyx_t_14; __pyx_t_15+=1) { - __pyx_v_j = __pyx_t_15; - - /* "fontTools/qu2cu/qu2cu.py":302 - * best_sol = impossible - * for j in range(start, i): - * j_sol_count, j_sol_error = sols[j].num_points, sols[j].error # <<<<<<<<<<<<<< - * - * if not all_cubic: - */ - __pyx_t_6 = __Pyx_GetItemInt_List(__pyx_v_sols, __pyx_v_j, int, 1, __Pyx_PyInt_From_int, 1, 1, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 302, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_6, __pyx_n_s_num_points); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 302, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_16 = __Pyx_PyInt_As_int(__pyx_t_2); if (unlikely((__pyx_t_16 == (int)-1) && PyErr_Occurred())) __PYX_ERR(0, 302, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetItemInt_List(__pyx_v_sols, __pyx_v_j, int, 1, __Pyx_PyInt_From_int, 1, 1, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 302, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_error); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 302, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_17 = __pyx_PyFloat_AsDouble(__pyx_t_6); if (unlikely((__pyx_t_17 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 302, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_v_j_sol_count = __pyx_t_16; - __pyx_v_j_sol_error = __pyx_t_17; - - /* "fontTools/qu2cu/qu2cu.py":304 - * j_sol_count, j_sol_error = sols[j].num_points, sols[j].error - * - * if not all_cubic: # <<<<<<<<<<<<<< - * # Solution with quadratics between j:i - * this_count = costs[2 * i - 1] - costs[2 * j] + 1 - */ - __pyx_t_10 = ((!(__pyx_v_all_cubic != 0)) != 0); - if (__pyx_t_10) { - - /* "fontTools/qu2cu/qu2cu.py":306 - * if not all_cubic: - * # Solution with quadratics between j:i - * this_count = costs[2 * i - 1] - costs[2 * j] + 1 # <<<<<<<<<<<<<< - * i_sol_count = j_sol_count + this_count - * i_sol_error = j_sol_error - */ - __pyx_t_8 = ((2 * __pyx_v_i) - 1); - __pyx_t_6 = __Pyx_GetItemInt(__pyx_v_costs, __pyx_t_8, long, 1, __Pyx_PyInt_From_long, 0, 1, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 306, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_8 = (2 * __pyx_v_j); - __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_costs, __pyx_t_8, long, 1, __Pyx_PyInt_From_long, 0, 1, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 306, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_12 = PyNumber_Subtract(__pyx_t_6, __pyx_t_2); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 306, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyInt_AddObjC(__pyx_t_12, 
__pyx_int_1, 1, 0, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 306, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __Pyx_XDECREF_SET(__pyx_v_this_count, __pyx_t_2); - __pyx_t_2 = 0; - - /* "fontTools/qu2cu/qu2cu.py":307 - * # Solution with quadratics between j:i - * this_count = costs[2 * i - 1] - costs[2 * j] + 1 - * i_sol_count = j_sol_count + this_count # <<<<<<<<<<<<<< - * i_sol_error = j_sol_error - * i_sol = Solution(i_sol_count, i_sol_error, i - j, False) - */ - __pyx_t_2 = __Pyx_PyInt_From_int(__pyx_v_j_sol_count); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 307, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_12 = PyNumber_Add(__pyx_t_2, __pyx_v_this_count); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 307, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_16 = __Pyx_PyInt_As_int(__pyx_t_12); if (unlikely((__pyx_t_16 == (int)-1) && PyErr_Occurred())) __PYX_ERR(0, 307, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __pyx_v_i_sol_count = __pyx_t_16; - - /* "fontTools/qu2cu/qu2cu.py":308 - * this_count = costs[2 * i - 1] - costs[2 * j] + 1 - * i_sol_count = j_sol_count + this_count - * i_sol_error = j_sol_error # <<<<<<<<<<<<<< - * i_sol = Solution(i_sol_count, i_sol_error, i - j, False) - * if i_sol < best_sol: - */ - __pyx_v_i_sol_error = __pyx_v_j_sol_error; - - /* "fontTools/qu2cu/qu2cu.py":309 - * i_sol_count = j_sol_count + this_count - * i_sol_error = j_sol_error - * i_sol = Solution(i_sol_count, i_sol_error, i - j, False) # <<<<<<<<<<<<<< - * if i_sol < best_sol: - * best_sol = i_sol - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_Solution); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 309, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_6 = __Pyx_PyInt_From_int(__pyx_v_i_sol_count); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 309, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_7 = PyFloat_FromDouble(__pyx_v_i_sol_error); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 309, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_5 = __Pyx_PyInt_From_int((__pyx_v_i - __pyx_v_j)); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 309, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_18 = NULL; - __pyx_t_16 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_18 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_18)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_18); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_16 = 1; - } - } - #if CYTHON_FAST_PYCALL - if (PyFunction_Check(__pyx_t_2)) { - PyObject *__pyx_temp[5] = {__pyx_t_18, __pyx_t_6, __pyx_t_7, __pyx_t_5, Py_False}; - __pyx_t_12 = __Pyx_PyFunction_FastCall(__pyx_t_2, __pyx_temp+1-__pyx_t_16, 4+__pyx_t_16); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 309, __pyx_L1_error) - __Pyx_XDECREF(__pyx_t_18); __pyx_t_18 = 0; - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } else - #endif - #if CYTHON_FAST_PYCCALL - if (__Pyx_PyFastCFunction_Check(__pyx_t_2)) { - PyObject *__pyx_temp[5] = {__pyx_t_18, __pyx_t_6, __pyx_t_7, __pyx_t_5, Py_False}; - __pyx_t_12 = __Pyx_PyCFunction_FastCall(__pyx_t_2, __pyx_temp+1-__pyx_t_16, 4+__pyx_t_16); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 309, __pyx_L1_error) - __Pyx_XDECREF(__pyx_t_18); __pyx_t_18 = 0; - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - 
__Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } else - #endif - { - __pyx_t_19 = PyTuple_New(4+__pyx_t_16); if (unlikely(!__pyx_t_19)) __PYX_ERR(0, 309, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_19); - if (__pyx_t_18) { - __Pyx_GIVEREF(__pyx_t_18); PyTuple_SET_ITEM(__pyx_t_19, 0, __pyx_t_18); __pyx_t_18 = NULL; - } - __Pyx_GIVEREF(__pyx_t_6); - PyTuple_SET_ITEM(__pyx_t_19, 0+__pyx_t_16, __pyx_t_6); - __Pyx_GIVEREF(__pyx_t_7); - PyTuple_SET_ITEM(__pyx_t_19, 1+__pyx_t_16, __pyx_t_7); - __Pyx_GIVEREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_19, 2+__pyx_t_16, __pyx_t_5); - __Pyx_INCREF(Py_False); - __Pyx_GIVEREF(Py_False); - PyTuple_SET_ITEM(__pyx_t_19, 3+__pyx_t_16, Py_False); - __pyx_t_6 = 0; - __pyx_t_7 = 0; - __pyx_t_5 = 0; - __pyx_t_12 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_19, NULL); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 309, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_19); __pyx_t_19 = 0; - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_XDECREF_SET(__pyx_v_i_sol, __pyx_t_12); - __pyx_t_12 = 0; - - /* "fontTools/qu2cu/qu2cu.py":310 - * i_sol_error = j_sol_error - * i_sol = Solution(i_sol_count, i_sol_error, i - j, False) - * if i_sol < best_sol: # <<<<<<<<<<<<<< - * best_sol = i_sol - * - */ - __pyx_t_12 = PyObject_RichCompare(__pyx_v_i_sol, __pyx_v_best_sol, Py_LT); __Pyx_XGOTREF(__pyx_t_12); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 310, __pyx_L1_error) - __pyx_t_10 = __Pyx_PyObject_IsTrue(__pyx_t_12); if (unlikely(__pyx_t_10 < 0)) __PYX_ERR(0, 310, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - if (__pyx_t_10) { - - /* "fontTools/qu2cu/qu2cu.py":311 - * i_sol = Solution(i_sol_count, i_sol_error, i - j, False) - * if i_sol < best_sol: - * best_sol = i_sol # <<<<<<<<<<<<<< - * - * if this_count <= 3: - */ - __Pyx_INCREF(__pyx_v_i_sol); - __Pyx_DECREF_SET(__pyx_v_best_sol, __pyx_v_i_sol); - - /* "fontTools/qu2cu/qu2cu.py":310 - * i_sol_error = j_sol_error - * i_sol = Solution(i_sol_count, i_sol_error, i - j, False) - * if i_sol < best_sol: # <<<<<<<<<<<<<< - * best_sol = i_sol - * - */ - } - - /* "fontTools/qu2cu/qu2cu.py":313 - * best_sol = i_sol - * - * if this_count <= 3: # <<<<<<<<<<<<<< - * # Can't get any better than this in the path below - * continue - */ - __pyx_t_12 = PyObject_RichCompare(__pyx_v_this_count, __pyx_int_3, Py_LE); __Pyx_XGOTREF(__pyx_t_12); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 313, __pyx_L1_error) - __pyx_t_10 = __Pyx_PyObject_IsTrue(__pyx_t_12); if (unlikely(__pyx_t_10 < 0)) __PYX_ERR(0, 313, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - if (__pyx_t_10) { - - /* "fontTools/qu2cu/qu2cu.py":315 - * if this_count <= 3: - * # Can't get any better than this in the path below - * continue # <<<<<<<<<<<<<< - * - * # Fit elevated_quadratics[j:i] into one cubic - */ - goto __pyx_L10_continue; - - /* "fontTools/qu2cu/qu2cu.py":313 - * best_sol = i_sol - * - * if this_count <= 3: # <<<<<<<<<<<<<< - * # Can't get any better than this in the path below - * continue - */ - } - - /* "fontTools/qu2cu/qu2cu.py":304 - * j_sol_count, j_sol_error = sols[j].num_points, sols[j].error - * - * if not all_cubic: # <<<<<<<<<<<<<< - * # Solution with quadratics between j:i - * this_count = costs[2 * i - 1] - costs[2 * j] + 1 - */ - } - - /* "fontTools/qu2cu/qu2cu.py":318 - * - * # Fit elevated_quadratics[j:i] into one cubic - * try: # <<<<<<<<<<<<<< - * curve, ts = merge_curves(elevated_quadratics, j, i - j) - * except ZeroDivisionError: - */ - { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - 
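- /* This try-block (qu2cu.py:318-338) fits elevated_quadratics[j:i] into a
-  * single cubic with merge_curves, re-splits that cubic at the merge
-  * parameters, and first gates on the "knot" errors: each reconstructed
-  * segment's end point must stay within tolerance of the original.
-  * A standalone sketch of that gate:
-  */
- #if 0
- #include <complex.h>
- static double max_knot_error(const double complex (*reconstructed)[4],
-                              const double complex (*orig)[4], int n)
- {
-     double error = 0.0;
-     for (int k = 0; k < n; k++) {
-         double err = cabs(reconstructed[k][3] - orig[k][3]);
-         if (err > error) error = err;  /* error = max(error, err) */
-     }
-     return error;
- }
- #endif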
__Pyx_ExceptionSave(&__pyx_t_20, &__pyx_t_21, &__pyx_t_22); - __Pyx_XGOTREF(__pyx_t_20); - __Pyx_XGOTREF(__pyx_t_21); - __Pyx_XGOTREF(__pyx_t_22); - /*try:*/ { - - /* "fontTools/qu2cu/qu2cu.py":319 - * # Fit elevated_quadratics[j:i] into one cubic - * try: - * curve, ts = merge_curves(elevated_quadratics, j, i - j) # <<<<<<<<<<<<<< - * except ZeroDivisionError: - * continue - */ - __pyx_t_12 = __pyx_f_9fontTools_5qu2cu_5qu2cu_merge_curves(__pyx_v_elevated_quadratics, __pyx_v_j, (__pyx_v_i - __pyx_v_j)); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 319, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_12); - if ((likely(PyTuple_CheckExact(__pyx_t_12))) || (PyList_CheckExact(__pyx_t_12))) { - PyObject* sequence = __pyx_t_12; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 319, __pyx_L15_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_19 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_2 = PyList_GET_ITEM(sequence, 0); - __pyx_t_19 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_19); - #else - __pyx_t_2 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 319, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_19 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_19)) __PYX_ERR(0, 319, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_19); - #endif - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_5 = PyObject_GetIter(__pyx_t_12); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 319, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __pyx_t_23 = Py_TYPE(__pyx_t_5)->tp_iternext; - index = 0; __pyx_t_2 = __pyx_t_23(__pyx_t_5); if (unlikely(!__pyx_t_2)) goto __pyx_L23_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - index = 1; __pyx_t_19 = __pyx_t_23(__pyx_t_5); if (unlikely(!__pyx_t_19)) goto __pyx_L23_unpacking_failed; - __Pyx_GOTREF(__pyx_t_19); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_23(__pyx_t_5), 2) < 0) __PYX_ERR(0, 319, __pyx_L15_error) - __pyx_t_23 = NULL; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - goto __pyx_L24_unpacking_done; - __pyx_L23_unpacking_failed:; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_23 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 319, __pyx_L15_error) - __pyx_L24_unpacking_done:; - } - __Pyx_XDECREF_SET(__pyx_v_curve, __pyx_t_2); - __pyx_t_2 = 0; - __Pyx_XDECREF_SET(__pyx_v_ts, __pyx_t_19); - __pyx_t_19 = 0; - - /* "fontTools/qu2cu/qu2cu.py":318 - * - * # Fit elevated_quadratics[j:i] into one cubic - * try: # <<<<<<<<<<<<<< - * curve, ts = merge_curves(elevated_quadratics, j, i - j) - * except ZeroDivisionError: - */ - } - __Pyx_XDECREF(__pyx_t_20); __pyx_t_20 = 0; - __Pyx_XDECREF(__pyx_t_21); __pyx_t_21 = 0; - __Pyx_XDECREF(__pyx_t_22); __pyx_t_22 = 0; - goto __pyx_L22_try_end; - __pyx_L15_error:; - __Pyx_XDECREF(__pyx_t_12); __pyx_t_12 = 0; - __Pyx_XDECREF(__pyx_t_18); __pyx_t_18 = 0; - __Pyx_XDECREF(__pyx_t_19); __pyx_t_19 = 0; - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - - /* "fontTools/qu2cu/qu2cu.py":320 - * try: - * curve, ts = merge_curves(elevated_quadratics, j, i - j) - * except 
ZeroDivisionError: # <<<<<<<<<<<<<< - * continue - * - */ - __pyx_t_16 = __Pyx_PyErr_ExceptionMatches(__pyx_builtin_ZeroDivisionError); - if (__pyx_t_16) { - __Pyx_AddTraceback("fontTools.qu2cu.qu2cu.spline_to_curves", __pyx_clineno, __pyx_lineno, __pyx_filename); - if (__Pyx_GetException(&__pyx_t_12, &__pyx_t_19, &__pyx_t_2) < 0) __PYX_ERR(0, 320, __pyx_L17_except_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_GOTREF(__pyx_t_19); - __Pyx_GOTREF(__pyx_t_2); - - /* "fontTools/qu2cu/qu2cu.py":321 - * curve, ts = merge_curves(elevated_quadratics, j, i - j) - * except ZeroDivisionError: - * continue # <<<<<<<<<<<<<< - * - * # Now reconstruct the segments from the fitted curve - */ - goto __pyx_L26_except_continue; - __pyx_L26_except_continue:; - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __Pyx_DECREF(__pyx_t_19); __pyx_t_19 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - goto __pyx_L21_try_continue; - } - goto __pyx_L17_except_error; - __pyx_L17_except_error:; - - /* "fontTools/qu2cu/qu2cu.py":318 - * - * # Fit elevated_quadratics[j:i] into one cubic - * try: # <<<<<<<<<<<<<< - * curve, ts = merge_curves(elevated_quadratics, j, i - j) - * except ZeroDivisionError: - */ - __Pyx_XGIVEREF(__pyx_t_20); - __Pyx_XGIVEREF(__pyx_t_21); - __Pyx_XGIVEREF(__pyx_t_22); - __Pyx_ExceptionReset(__pyx_t_20, __pyx_t_21, __pyx_t_22); - goto __pyx_L1_error; - __pyx_L21_try_continue:; - __Pyx_XGIVEREF(__pyx_t_20); - __Pyx_XGIVEREF(__pyx_t_21); - __Pyx_XGIVEREF(__pyx_t_22); - __Pyx_ExceptionReset(__pyx_t_20, __pyx_t_21, __pyx_t_22); - goto __pyx_L10_continue; - __pyx_L22_try_end:; - } - - /* "fontTools/qu2cu/qu2cu.py":324 - * - * # Now reconstruct the segments from the fitted curve - * reconstructed_iter = splitCubicAtTC(*curve, *ts) # <<<<<<<<<<<<<< - * reconstructed = [] - * - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_splitCubicAtTC); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 324, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_19 = __Pyx_PySequence_Tuple(__pyx_v_curve); if (unlikely(!__pyx_t_19)) __PYX_ERR(0, 324, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_19); - __pyx_t_12 = __Pyx_PySequence_Tuple(__pyx_v_ts); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 324, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_5 = PyNumber_Add(__pyx_t_19, __pyx_t_12); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 324, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_19); __pyx_t_19 = 0; - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __pyx_t_12 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_5, NULL); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 324, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_XDECREF_SET(__pyx_v_reconstructed_iter, __pyx_t_12); - __pyx_t_12 = 0; - - /* "fontTools/qu2cu/qu2cu.py":325 - * # Now reconstruct the segments from the fitted curve - * reconstructed_iter = splitCubicAtTC(*curve, *ts) - * reconstructed = [] # <<<<<<<<<<<<<< - * - * # Knot errors - */ - __pyx_t_12 = PyList_New(0); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 325, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_XDECREF_SET(__pyx_v_reconstructed, ((PyObject*)__pyx_t_12)); - __pyx_t_12 = 0; - - /* "fontTools/qu2cu/qu2cu.py":328 - * - * # Knot errors - * error = 0 # <<<<<<<<<<<<<< - * for k, reconst in enumerate(reconstructed_iter): - * orig = elevated_quadratics[j + k] - */ - __pyx_v_error = 0.0; - - /* "fontTools/qu2cu/qu2cu.py":329 - * # Knot errors - * error = 0 - * for k, reconst in enumerate(reconstructed_iter): # <<<<<<<<<<<<<< - * orig = 
elevated_quadratics[j + k] - * err = abs(reconst[3] - orig[3]) - */ - __pyx_t_16 = 0; - if (likely(PyList_CheckExact(__pyx_v_reconstructed_iter)) || PyTuple_CheckExact(__pyx_v_reconstructed_iter)) { - __pyx_t_12 = __pyx_v_reconstructed_iter; __Pyx_INCREF(__pyx_t_12); __pyx_t_24 = 0; - __pyx_t_25 = NULL; - } else { - __pyx_t_24 = -1; __pyx_t_12 = PyObject_GetIter(__pyx_v_reconstructed_iter); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 329, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_25 = Py_TYPE(__pyx_t_12)->tp_iternext; if (unlikely(!__pyx_t_25)) __PYX_ERR(0, 329, __pyx_L1_error) - } - for (;;) { - if (likely(!__pyx_t_25)) { - if (likely(PyList_CheckExact(__pyx_t_12))) { - if (__pyx_t_24 >= PyList_GET_SIZE(__pyx_t_12)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_5 = PyList_GET_ITEM(__pyx_t_12, __pyx_t_24); __Pyx_INCREF(__pyx_t_5); __pyx_t_24++; if (unlikely(0 < 0)) __PYX_ERR(0, 329, __pyx_L1_error) - #else - __pyx_t_5 = PySequence_ITEM(__pyx_t_12, __pyx_t_24); __pyx_t_24++; if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 329, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - #endif - } else { - if (__pyx_t_24 >= PyTuple_GET_SIZE(__pyx_t_12)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_5 = PyTuple_GET_ITEM(__pyx_t_12, __pyx_t_24); __Pyx_INCREF(__pyx_t_5); __pyx_t_24++; if (unlikely(0 < 0)) __PYX_ERR(0, 329, __pyx_L1_error) - #else - __pyx_t_5 = PySequence_ITEM(__pyx_t_12, __pyx_t_24); __pyx_t_24++; if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 329, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - #endif - } - } else { - __pyx_t_5 = __pyx_t_25(__pyx_t_12); - if (unlikely(!__pyx_t_5)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 329, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_5); - } - __Pyx_XGOTREF(__pyx_cur_scope->__pyx_v_reconst); - __Pyx_XDECREF_SET(__pyx_cur_scope->__pyx_v_reconst, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_5); - __pyx_t_5 = 0; - __pyx_v_k = __pyx_t_16; - __pyx_t_16 = (__pyx_t_16 + 1); - - /* "fontTools/qu2cu/qu2cu.py":330 - * error = 0 - * for k, reconst in enumerate(reconstructed_iter): - * orig = elevated_quadratics[j + k] # <<<<<<<<<<<<<< - * err = abs(reconst[3] - orig[3]) - * error = max(error, err) - */ - __pyx_t_26 = (__pyx_v_j + __pyx_v_k); - __pyx_t_5 = __Pyx_GetItemInt_List(__pyx_v_elevated_quadratics, __pyx_t_26, int, 1, __Pyx_PyInt_From_int, 1, 1, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 330, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_XGOTREF(__pyx_cur_scope->__pyx_v_orig); - __Pyx_XDECREF_SET(__pyx_cur_scope->__pyx_v_orig, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_5); - __pyx_t_5 = 0; - - /* "fontTools/qu2cu/qu2cu.py":331 - * for k, reconst in enumerate(reconstructed_iter): - * orig = elevated_quadratics[j + k] - * err = abs(reconst[3] - orig[3]) # <<<<<<<<<<<<<< - * error = max(error, err) - * if error > tolerance: - */ - __pyx_t_5 = __Pyx_GetItemInt(__pyx_cur_scope->__pyx_v_reconst, 3, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 331, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_2 = __Pyx_GetItemInt(__pyx_cur_scope->__pyx_v_orig, 3, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 331, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_19 = PyNumber_Subtract(__pyx_t_5, __pyx_t_2); if (unlikely(!__pyx_t_19)) __PYX_ERR(0, 331, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_19); - 
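- /* The knot check in progress here only compares end points; the second
-  * gate further below (qu2cu.py:341-347) checks whole segments. It
-  * subtracts the original control points from the reconstructed ones and
-  * asks whether that "difference" cubic stays within tolerance of the
-  * origin. A sketch, assuming the cubic_farthest_fit_inside signature as
-  * quoted in the source comments:
-  */
- #if 0
- static int interior_ok(const double complex r[4], const double complex o[4],
-                        double tolerance)
- {
-     return cubic_farthest_fit_inside(r[0] - o[0], r[1] - o[1],
-                                      r[2] - o[2], r[3] - o[3], tolerance);
- }
- #endif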
__Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyNumber_Absolute(__pyx_t_19); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 331, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_19); __pyx_t_19 = 0; - __pyx_t_17 = __pyx_PyFloat_AsDouble(__pyx_t_2); if (unlikely((__pyx_t_17 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 331, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_v_err = __pyx_t_17; - - /* "fontTools/qu2cu/qu2cu.py":332 - * orig = elevated_quadratics[j + k] - * err = abs(reconst[3] - orig[3]) - * error = max(error, err) # <<<<<<<<<<<<<< - * if error > tolerance: - * break - */ - __pyx_t_17 = __pyx_v_err; - __pyx_t_27 = __pyx_v_error; - if (((__pyx_t_17 > __pyx_t_27) != 0)) { - __pyx_t_28 = __pyx_t_17; - } else { - __pyx_t_28 = __pyx_t_27; - } - __pyx_v_error = __pyx_t_28; - - /* "fontTools/qu2cu/qu2cu.py":333 - * err = abs(reconst[3] - orig[3]) - * error = max(error, err) - * if error > tolerance: # <<<<<<<<<<<<<< - * break - * reconstructed.append(reconst) - */ - __pyx_t_10 = ((__pyx_v_error > __pyx_v_tolerance) != 0); - if (__pyx_t_10) { - - /* "fontTools/qu2cu/qu2cu.py":334 - * error = max(error, err) - * if error > tolerance: - * break # <<<<<<<<<<<<<< - * reconstructed.append(reconst) - * if error > tolerance: - */ - goto __pyx_L28_break; - - /* "fontTools/qu2cu/qu2cu.py":333 - * err = abs(reconst[3] - orig[3]) - * error = max(error, err) - * if error > tolerance: # <<<<<<<<<<<<<< - * break - * reconstructed.append(reconst) - */ - } - - /* "fontTools/qu2cu/qu2cu.py":335 - * if error > tolerance: - * break - * reconstructed.append(reconst) # <<<<<<<<<<<<<< - * if error > tolerance: - * # Not feasible - */ - __pyx_t_2 = __pyx_cur_scope->__pyx_v_reconst; - __Pyx_INCREF(__pyx_t_2); - __pyx_t_11 = __Pyx_PyList_Append(__pyx_v_reconstructed, __pyx_t_2); if (unlikely(__pyx_t_11 == ((int)-1))) __PYX_ERR(0, 335, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/qu2cu/qu2cu.py":329 - * # Knot errors - * error = 0 - * for k, reconst in enumerate(reconstructed_iter): # <<<<<<<<<<<<<< - * orig = elevated_quadratics[j + k] - * err = abs(reconst[3] - orig[3]) - */ - } - __pyx_L28_break:; - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - - /* "fontTools/qu2cu/qu2cu.py":336 - * break - * reconstructed.append(reconst) - * if error > tolerance: # <<<<<<<<<<<<<< - * # Not feasible - * continue - */ - __pyx_t_10 = ((__pyx_v_error > __pyx_v_tolerance) != 0); - if (__pyx_t_10) { - - /* "fontTools/qu2cu/qu2cu.py":338 - * if error > tolerance: - * # Not feasible - * continue # <<<<<<<<<<<<<< - * - * # Interior errors - */ - goto __pyx_L10_continue; - - /* "fontTools/qu2cu/qu2cu.py":336 - * break - * reconstructed.append(reconst) - * if error > tolerance: # <<<<<<<<<<<<<< - * # Not feasible - * continue - */ - } - - /* "fontTools/qu2cu/qu2cu.py":341 - * - * # Interior errors - * for k, reconst in enumerate(reconstructed): # <<<<<<<<<<<<<< - * orig = elevated_quadratics[j + k] - * p0, p1, p2, p3 = tuple(v - u for v, u in zip(reconst, orig)) - */ - __pyx_t_16 = 0; - __pyx_t_12 = __pyx_v_reconstructed; __Pyx_INCREF(__pyx_t_12); __pyx_t_24 = 0; - for (;;) { - if (__pyx_t_24 >= PyList_GET_SIZE(__pyx_t_12)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_2 = PyList_GET_ITEM(__pyx_t_12, __pyx_t_24); __Pyx_INCREF(__pyx_t_2); __pyx_t_24++; if (unlikely(0 < 0)) __PYX_ERR(0, 341, __pyx_L1_error) - #else - __pyx_t_2 = PySequence_ITEM(__pyx_t_12, __pyx_t_24); __pyx_t_24++; if 
(unlikely(!__pyx_t_2)) __PYX_ERR(0, 341, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - #endif - __Pyx_XGOTREF(__pyx_cur_scope->__pyx_v_reconst); - __Pyx_XDECREF_SET(__pyx_cur_scope->__pyx_v_reconst, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - __pyx_t_2 = 0; - __pyx_v_k = __pyx_t_16; - __pyx_t_16 = (__pyx_t_16 + 1); - - /* "fontTools/qu2cu/qu2cu.py":342 - * # Interior errors - * for k, reconst in enumerate(reconstructed): - * orig = elevated_quadratics[j + k] # <<<<<<<<<<<<<< - * p0, p1, p2, p3 = tuple(v - u for v, u in zip(reconst, orig)) - * - */ - __pyx_t_26 = (__pyx_v_j + __pyx_v_k); - __pyx_t_2 = __Pyx_GetItemInt_List(__pyx_v_elevated_quadratics, __pyx_t_26, int, 1, __Pyx_PyInt_From_int, 1, 1, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 342, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_XGOTREF(__pyx_cur_scope->__pyx_v_orig); - __Pyx_XDECREF_SET(__pyx_cur_scope->__pyx_v_orig, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - __pyx_t_2 = 0; - - /* "fontTools/qu2cu/qu2cu.py":343 - * for k, reconst in enumerate(reconstructed): - * orig = elevated_quadratics[j + k] - * p0, p1, p2, p3 = tuple(v - u for v, u in zip(reconst, orig)) # <<<<<<<<<<<<<< - * - * if not cubic_farthest_fit_inside(p0, p1, p2, p3, tolerance): - */ - __pyx_t_2 = __pyx_pf_9fontTools_5qu2cu_5qu2cu_16spline_to_curves_genexpr(((PyObject*)__pyx_cur_scope)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 343, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_19 = __Pyx_PySequence_Tuple(__pyx_t_2); if (unlikely(!__pyx_t_19)) __PYX_ERR(0, 343, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_19); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (1) { - PyObject* sequence = __pyx_t_19; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 4)) { - if (size > 4) __Pyx_RaiseTooManyValuesError(4); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 343, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_5 = PyTuple_GET_ITEM(sequence, 1); - __pyx_t_7 = PyTuple_GET_ITEM(sequence, 2); - __pyx_t_6 = PyTuple_GET_ITEM(sequence, 3); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(__pyx_t_7); - __Pyx_INCREF(__pyx_t_6); - #else - { - Py_ssize_t i; - PyObject** temps[4] = {&__pyx_t_2,&__pyx_t_5,&__pyx_t_7,&__pyx_t_6}; - for (i=0; i < 4; i++) { - PyObject* item = PySequence_ITEM(sequence, i); if (unlikely(!item)) __PYX_ERR(0, 343, __pyx_L1_error) - __Pyx_GOTREF(item); - *(temps[i]) = item; - } - } - #endif - __Pyx_DECREF(__pyx_t_19); __pyx_t_19 = 0; - } - __pyx_t_9 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_2); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 343, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_29 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_5); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 343, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_30 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_7); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 343, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_31 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_6); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 343, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_v_p0 = __pyx_t_9; - __pyx_v_p1 = __pyx_t_29; - __pyx_v_p2 = __pyx_t_30; - __pyx_v_p3 = __pyx_t_31; - - /* "fontTools/qu2cu/qu2cu.py":345 - * p0, p1, p2, p3 = tuple(v - u for v, u in zip(reconst, orig)) - * - * if not cubic_farthest_fit_inside(p0, p1, p2, p3, 
tolerance): # <<<<<<<<<<<<<< - * error = tolerance + 1 - * break - */ - __pyx_t_10 = ((!(__pyx_f_9fontTools_5qu2cu_5qu2cu_cubic_farthest_fit_inside(__pyx_v_p0, __pyx_v_p1, __pyx_v_p2, __pyx_v_p3, __pyx_v_tolerance) != 0)) != 0); - if (__pyx_t_10) { - - /* "fontTools/qu2cu/qu2cu.py":346 - * - * if not cubic_farthest_fit_inside(p0, p1, p2, p3, tolerance): - * error = tolerance + 1 # <<<<<<<<<<<<<< - * break - * if error > tolerance: - */ - __pyx_v_error = (__pyx_v_tolerance + 1.0); - - /* "fontTools/qu2cu/qu2cu.py":347 - * if not cubic_farthest_fit_inside(p0, p1, p2, p3, tolerance): - * error = tolerance + 1 - * break # <<<<<<<<<<<<<< - * if error > tolerance: - * # Not feasible - */ - goto __pyx_L32_break; - - /* "fontTools/qu2cu/qu2cu.py":345 - * p0, p1, p2, p3 = tuple(v - u for v, u in zip(reconst, orig)) - * - * if not cubic_farthest_fit_inside(p0, p1, p2, p3, tolerance): # <<<<<<<<<<<<<< - * error = tolerance + 1 - * break - */ - } - - /* "fontTools/qu2cu/qu2cu.py":341 - * - * # Interior errors - * for k, reconst in enumerate(reconstructed): # <<<<<<<<<<<<<< - * orig = elevated_quadratics[j + k] - * p0, p1, p2, p3 = tuple(v - u for v, u in zip(reconst, orig)) - */ - } - __pyx_L32_break:; - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - - /* "fontTools/qu2cu/qu2cu.py":348 - * error = tolerance + 1 - * break - * if error > tolerance: # <<<<<<<<<<<<<< - * # Not feasible - * continue - */ - __pyx_t_10 = ((__pyx_v_error > __pyx_v_tolerance) != 0); - if (__pyx_t_10) { - - /* "fontTools/qu2cu/qu2cu.py":350 - * if error > tolerance: - * # Not feasible - * continue # <<<<<<<<<<<<<< - * - * # Save best solution - */ - goto __pyx_L10_continue; - - /* "fontTools/qu2cu/qu2cu.py":348 - * error = tolerance + 1 - * break - * if error > tolerance: # <<<<<<<<<<<<<< - * # Not feasible - * continue - */ - } - - /* "fontTools/qu2cu/qu2cu.py":353 - * - * # Save best solution - * i_sol_count = j_sol_count + 3 # <<<<<<<<<<<<<< - * i_sol_error = max(j_sol_error, error) - * i_sol = Solution(i_sol_count, i_sol_error, i - j, True) - */ - __pyx_v_i_sol_count = (__pyx_v_j_sol_count + 3); - - /* "fontTools/qu2cu/qu2cu.py":354 - * # Save best solution - * i_sol_count = j_sol_count + 3 - * i_sol_error = max(j_sol_error, error) # <<<<<<<<<<<<<< - * i_sol = Solution(i_sol_count, i_sol_error, i - j, True) - * if i_sol < best_sol: - */ - __pyx_t_28 = __pyx_v_error; - __pyx_t_17 = __pyx_v_j_sol_error; - if (((__pyx_t_28 > __pyx_t_17) != 0)) { - __pyx_t_27 = __pyx_t_28; - } else { - __pyx_t_27 = __pyx_t_17; - } - __pyx_v_i_sol_error = __pyx_t_27; - - /* "fontTools/qu2cu/qu2cu.py":355 - * i_sol_count = j_sol_count + 3 - * i_sol_error = max(j_sol_error, error) - * i_sol = Solution(i_sol_count, i_sol_error, i - j, True) # <<<<<<<<<<<<<< - * if i_sol < best_sol: - * best_sol = i_sol - */ - __Pyx_GetModuleGlobalName(__pyx_t_19, __pyx_n_s_Solution); if (unlikely(!__pyx_t_19)) __PYX_ERR(0, 355, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_19); - __pyx_t_6 = __Pyx_PyInt_From_int(__pyx_v_i_sol_count); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 355, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_7 = PyFloat_FromDouble(__pyx_v_i_sol_error); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 355, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_5 = __Pyx_PyInt_From_int((__pyx_v_i - __pyx_v_j)); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 355, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_2 = NULL; - __pyx_t_16 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_19))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_19); - if 
(likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_19); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_19, function); - __pyx_t_16 = 1; - } - } - #if CYTHON_FAST_PYCALL - if (PyFunction_Check(__pyx_t_19)) { - PyObject *__pyx_temp[5] = {__pyx_t_2, __pyx_t_6, __pyx_t_7, __pyx_t_5, Py_True}; - __pyx_t_12 = __Pyx_PyFunction_FastCall(__pyx_t_19, __pyx_temp+1-__pyx_t_16, 4+__pyx_t_16); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 355, __pyx_L1_error) - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } else - #endif - #if CYTHON_FAST_PYCCALL - if (__Pyx_PyFastCFunction_Check(__pyx_t_19)) { - PyObject *__pyx_temp[5] = {__pyx_t_2, __pyx_t_6, __pyx_t_7, __pyx_t_5, Py_True}; - __pyx_t_12 = __Pyx_PyCFunction_FastCall(__pyx_t_19, __pyx_temp+1-__pyx_t_16, 4+__pyx_t_16); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 355, __pyx_L1_error) - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } else - #endif - { - __pyx_t_18 = PyTuple_New(4+__pyx_t_16); if (unlikely(!__pyx_t_18)) __PYX_ERR(0, 355, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_18); - if (__pyx_t_2) { - __Pyx_GIVEREF(__pyx_t_2); PyTuple_SET_ITEM(__pyx_t_18, 0, __pyx_t_2); __pyx_t_2 = NULL; - } - __Pyx_GIVEREF(__pyx_t_6); - PyTuple_SET_ITEM(__pyx_t_18, 0+__pyx_t_16, __pyx_t_6); - __Pyx_GIVEREF(__pyx_t_7); - PyTuple_SET_ITEM(__pyx_t_18, 1+__pyx_t_16, __pyx_t_7); - __Pyx_GIVEREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_18, 2+__pyx_t_16, __pyx_t_5); - __Pyx_INCREF(Py_True); - __Pyx_GIVEREF(Py_True); - PyTuple_SET_ITEM(__pyx_t_18, 3+__pyx_t_16, Py_True); - __pyx_t_6 = 0; - __pyx_t_7 = 0; - __pyx_t_5 = 0; - __pyx_t_12 = __Pyx_PyObject_Call(__pyx_t_19, __pyx_t_18, NULL); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 355, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_18); __pyx_t_18 = 0; - } - __Pyx_DECREF(__pyx_t_19); __pyx_t_19 = 0; - __Pyx_XDECREF_SET(__pyx_v_i_sol, __pyx_t_12); - __pyx_t_12 = 0; - - /* "fontTools/qu2cu/qu2cu.py":356 - * i_sol_error = max(j_sol_error, error) - * i_sol = Solution(i_sol_count, i_sol_error, i - j, True) - * if i_sol < best_sol: # <<<<<<<<<<<<<< - * best_sol = i_sol - * - */ - __pyx_t_12 = PyObject_RichCompare(__pyx_v_i_sol, __pyx_v_best_sol, Py_LT); __Pyx_XGOTREF(__pyx_t_12); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 356, __pyx_L1_error) - __pyx_t_10 = __Pyx_PyObject_IsTrue(__pyx_t_12); if (unlikely(__pyx_t_10 < 0)) __PYX_ERR(0, 356, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - if (__pyx_t_10) { - - /* "fontTools/qu2cu/qu2cu.py":357 - * i_sol = Solution(i_sol_count, i_sol_error, i - j, True) - * if i_sol < best_sol: - * best_sol = i_sol # <<<<<<<<<<<<<< - * - * if i_sol_count == 3: - */ - __Pyx_INCREF(__pyx_v_i_sol); - __Pyx_DECREF_SET(__pyx_v_best_sol, __pyx_v_i_sol); - - /* "fontTools/qu2cu/qu2cu.py":356 - * i_sol_error = max(j_sol_error, error) - * i_sol = Solution(i_sol_count, i_sol_error, i - j, True) - * if i_sol < best_sol: # <<<<<<<<<<<<<< - * best_sol = i_sol - * - */ - } - - /* "fontTools/qu2cu/qu2cu.py":359 - * best_sol = i_sol - * - * if i_sol_count == 3: # <<<<<<<<<<<<<< - * # Can't get any better than this - * break - */ - __pyx_t_10 = ((__pyx_v_i_sol_count == 3) != 0); - if (__pyx_t_10) { - - /* "fontTools/qu2cu/qu2cu.py":361 - * if i_sol_count == 
3: - * # Can't get any better than this - * break # <<<<<<<<<<<<<< - * - * sols.append(best_sol) - */ - goto __pyx_L11_break; - - /* "fontTools/qu2cu/qu2cu.py":359 - * best_sol = i_sol - * - * if i_sol_count == 3: # <<<<<<<<<<<<<< - * # Can't get any better than this - * break - */ - } - __pyx_L10_continue:; - } - __pyx_L11_break:; - - /* "fontTools/qu2cu/qu2cu.py":363 - * break - * - * sols.append(best_sol) # <<<<<<<<<<<<<< - * if i in forced: - * start = i - */ - __pyx_t_11 = __Pyx_PyList_Append(__pyx_v_sols, __pyx_v_best_sol); if (unlikely(__pyx_t_11 == ((int)-1))) __PYX_ERR(0, 363, __pyx_L1_error) - - /* "fontTools/qu2cu/qu2cu.py":364 - * - * sols.append(best_sol) - * if i in forced: # <<<<<<<<<<<<<< - * start = i - * - */ - __pyx_t_12 = __Pyx_PyInt_From_int(__pyx_v_i); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 364, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_10 = (__Pyx_PySet_ContainsTF(__pyx_t_12, __pyx_v_forced, Py_EQ)); if (unlikely(__pyx_t_10 < 0)) __PYX_ERR(0, 364, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __pyx_t_32 = (__pyx_t_10 != 0); - if (__pyx_t_32) { - - /* "fontTools/qu2cu/qu2cu.py":365 - * sols.append(best_sol) - * if i in forced: - * start = i # <<<<<<<<<<<<<< - * - * # Reconstruct solution - */ - __pyx_v_start = __pyx_v_i; - - /* "fontTools/qu2cu/qu2cu.py":364 - * - * sols.append(best_sol) - * if i in forced: # <<<<<<<<<<<<<< - * start = i - * - */ - } - } - - /* "fontTools/qu2cu/qu2cu.py":368 - * - * # Reconstruct solution - * splits = [] # <<<<<<<<<<<<<< - * cubic = [] - * i = len(sols) - 1 - */ - __pyx_t_12 = PyList_New(0); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 368, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_v_splits = ((PyObject*)__pyx_t_12); - __pyx_t_12 = 0; - - /* "fontTools/qu2cu/qu2cu.py":369 - * # Reconstruct solution - * splits = [] - * cubic = [] # <<<<<<<<<<<<<< - * i = len(sols) - 1 - * while i: - */ - __pyx_t_12 = PyList_New(0); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 369, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_v_cubic = ((PyObject*)__pyx_t_12); - __pyx_t_12 = 0; - - /* "fontTools/qu2cu/qu2cu.py":370 - * splits = [] - * cubic = [] - * i = len(sols) - 1 # <<<<<<<<<<<<<< - * while i: - * count, is_cubic = sols[i].start_index, sols[i].is_cubic - */ - __pyx_t_1 = PyList_GET_SIZE(__pyx_v_sols); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(0, 370, __pyx_L1_error) - __pyx_v_i = (__pyx_t_1 - 1); - - /* "fontTools/qu2cu/qu2cu.py":371 - * cubic = [] - * i = len(sols) - 1 - * while i: # <<<<<<<<<<<<<< - * count, is_cubic = sols[i].start_index, sols[i].is_cubic - * splits.append(i) - */ - while (1) { - __pyx_t_32 = (__pyx_v_i != 0); - if (!__pyx_t_32) break; - - /* "fontTools/qu2cu/qu2cu.py":372 - * i = len(sols) - 1 - * while i: - * count, is_cubic = sols[i].start_index, sols[i].is_cubic # <<<<<<<<<<<<<< - * splits.append(i) - * cubic.append(is_cubic) - */ - __pyx_t_12 = __Pyx_GetItemInt_List(__pyx_v_sols, __pyx_v_i, int, 1, __Pyx_PyInt_From_int, 1, 1, 1); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 372, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_19 = __Pyx_PyObject_GetAttrStr(__pyx_t_12, __pyx_n_s_start_index); if (unlikely(!__pyx_t_19)) __PYX_ERR(0, 372, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_19); - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __pyx_t_4 = __Pyx_PyInt_As_int(__pyx_t_19); if (unlikely((__pyx_t_4 == (int)-1) && PyErr_Occurred())) __PYX_ERR(0, 372, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_19); __pyx_t_19 = 0; - __pyx_t_19 = __Pyx_GetItemInt_List(__pyx_v_sols, __pyx_v_i, int, 1, 
__Pyx_PyInt_From_int, 1, 1, 1); if (unlikely(!__pyx_t_19)) __PYX_ERR(0, 372, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_19); - __pyx_t_12 = __Pyx_PyObject_GetAttrStr(__pyx_t_19, __pyx_n_s_is_cubic); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 372, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_19); __pyx_t_19 = 0; - __pyx_t_13 = __Pyx_PyInt_As_int(__pyx_t_12); if (unlikely((__pyx_t_13 == (int)-1) && PyErr_Occurred())) __PYX_ERR(0, 372, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __pyx_v_count = __pyx_t_4; - __pyx_v_is_cubic = __pyx_t_13; - - /* "fontTools/qu2cu/qu2cu.py":373 - * while i: - * count, is_cubic = sols[i].start_index, sols[i].is_cubic - * splits.append(i) # <<<<<<<<<<<<<< - * cubic.append(is_cubic) - * i -= count - */ - __pyx_t_12 = __Pyx_PyInt_From_int(__pyx_v_i); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 373, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_11 = __Pyx_PyList_Append(__pyx_v_splits, __pyx_t_12); if (unlikely(__pyx_t_11 == ((int)-1))) __PYX_ERR(0, 373, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - - /* "fontTools/qu2cu/qu2cu.py":374 - * count, is_cubic = sols[i].start_index, sols[i].is_cubic - * splits.append(i) - * cubic.append(is_cubic) # <<<<<<<<<<<<<< - * i -= count - * curves = [] - */ - __pyx_t_12 = __Pyx_PyInt_From_int(__pyx_v_is_cubic); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 374, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_11 = __Pyx_PyList_Append(__pyx_v_cubic, __pyx_t_12); if (unlikely(__pyx_t_11 == ((int)-1))) __PYX_ERR(0, 374, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - - /* "fontTools/qu2cu/qu2cu.py":375 - * splits.append(i) - * cubic.append(is_cubic) - * i -= count # <<<<<<<<<<<<<< - * curves = [] - * j = 0 - */ - __pyx_v_i = (__pyx_v_i - __pyx_v_count); - } - - /* "fontTools/qu2cu/qu2cu.py":376 - * cubic.append(is_cubic) - * i -= count - * curves = [] # <<<<<<<<<<<<<< - * j = 0 - * for i, is_cubic in reversed(list(zip(splits, cubic))): - */ - __pyx_t_12 = PyList_New(0); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 376, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_v_curves = ((PyObject*)__pyx_t_12); - __pyx_t_12 = 0; - - /* "fontTools/qu2cu/qu2cu.py":377 - * i -= count - * curves = [] - * j = 0 # <<<<<<<<<<<<<< - * for i, is_cubic in reversed(list(zip(splits, cubic))): - * if is_cubic: - */ - __pyx_v_j = 0; - - /* "fontTools/qu2cu/qu2cu.py":378 - * curves = [] - * j = 0 - * for i, is_cubic in reversed(list(zip(splits, cubic))): # <<<<<<<<<<<<<< - * if is_cubic: - * curves.append(merge_curves(elevated_quadratics, j, i - j)[0]) - */ - __pyx_t_12 = PyTuple_New(2); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 378, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_INCREF(__pyx_v_splits); - __Pyx_GIVEREF(__pyx_v_splits); - PyTuple_SET_ITEM(__pyx_t_12, 0, __pyx_v_splits); - __Pyx_INCREF(__pyx_v_cubic); - __Pyx_GIVEREF(__pyx_v_cubic); - PyTuple_SET_ITEM(__pyx_t_12, 1, __pyx_v_cubic); - __pyx_t_19 = __Pyx_PyObject_Call(__pyx_builtin_zip, __pyx_t_12, NULL); if (unlikely(!__pyx_t_19)) __PYX_ERR(0, 378, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_19); - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __pyx_t_12 = PySequence_List(__pyx_t_19); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 378, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_19); __pyx_t_19 = 0; - __pyx_t_19 = __pyx_t_12; __Pyx_INCREF(__pyx_t_19); __pyx_t_1 = PyList_GET_SIZE(__pyx_t_19) - 1; - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - for (;;) { - if (__pyx_t_1 < 0) break; - if (__pyx_t_1 >= 
PyList_GET_SIZE(__pyx_t_19)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_12 = PyList_GET_ITEM(__pyx_t_19, __pyx_t_1); __Pyx_INCREF(__pyx_t_12); __pyx_t_1--; if (unlikely(0 < 0)) __PYX_ERR(0, 378, __pyx_L1_error) - #else - __pyx_t_12 = PySequence_ITEM(__pyx_t_19, __pyx_t_1); __pyx_t_1--; if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 378, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - #endif - if ((likely(PyTuple_CheckExact(__pyx_t_12))) || (PyList_CheckExact(__pyx_t_12))) { - PyObject* sequence = __pyx_t_12; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 378, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_18 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_5 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_18 = PyList_GET_ITEM(sequence, 0); - __pyx_t_5 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_18); - __Pyx_INCREF(__pyx_t_5); - #else - __pyx_t_18 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_18)) __PYX_ERR(0, 378, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_18); - __pyx_t_5 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 378, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - #endif - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_7 = PyObject_GetIter(__pyx_t_12); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 378, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __pyx_t_23 = Py_TYPE(__pyx_t_7)->tp_iternext; - index = 0; __pyx_t_18 = __pyx_t_23(__pyx_t_7); if (unlikely(!__pyx_t_18)) goto __pyx_L42_unpacking_failed; - __Pyx_GOTREF(__pyx_t_18); - index = 1; __pyx_t_5 = __pyx_t_23(__pyx_t_7); if (unlikely(!__pyx_t_5)) goto __pyx_L42_unpacking_failed; - __Pyx_GOTREF(__pyx_t_5); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_23(__pyx_t_7), 2) < 0) __PYX_ERR(0, 378, __pyx_L1_error) - __pyx_t_23 = NULL; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - goto __pyx_L43_unpacking_done; - __pyx_L42_unpacking_failed:; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_23 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 378, __pyx_L1_error) - __pyx_L43_unpacking_done:; - } - __pyx_t_13 = __Pyx_PyInt_As_int(__pyx_t_18); if (unlikely((__pyx_t_13 == (int)-1) && PyErr_Occurred())) __PYX_ERR(0, 378, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_18); __pyx_t_18 = 0; - __pyx_t_4 = __Pyx_PyInt_As_int(__pyx_t_5); if (unlikely((__pyx_t_4 == (int)-1) && PyErr_Occurred())) __PYX_ERR(0, 378, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_v_i = __pyx_t_13; - __pyx_v_is_cubic = __pyx_t_4; - - /* "fontTools/qu2cu/qu2cu.py":379 - * j = 0 - * for i, is_cubic in reversed(list(zip(splits, cubic))): - * if is_cubic: # <<<<<<<<<<<<<< - * curves.append(merge_curves(elevated_quadratics, j, i - j)[0]) - * else: - */ - __pyx_t_32 = (__pyx_v_is_cubic != 0); - if (__pyx_t_32) { - - /* "fontTools/qu2cu/qu2cu.py":380 - * for i, is_cubic in reversed(list(zip(splits, cubic))): - * if is_cubic: - * curves.append(merge_curves(elevated_quadratics, j, i - j)[0]) # <<<<<<<<<<<<<< - * else: - * for k in range(j, i): - */ - __pyx_t_12 = __pyx_f_9fontTools_5qu2cu_5qu2cu_merge_curves(__pyx_v_elevated_quadratics, __pyx_v_j, (__pyx_v_i - __pyx_v_j)); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 380, __pyx_L1_error) - 
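- /* For reference, the reconstruction around this point (qu2cu.py:378-384)
-  * walks the recovered (split, is_cubic) pairs in chronological order: a
-  * cubic piece re-merges elevated quadratics j..i-1, anything else copies
-  * the original quadratic points q[2k : 2k+3]. A sketch with hypothetical
-  * emit helpers standing in for curves.append():
-  */
- #if 0
- for (int p = n_pieces - 1, j = 0; p >= 0; p--) {  /* pairs were pushed back-to-front */
-     int i = splits[p];
-     if (cubic[p]) {
-         /* merge_curves returns a (curve, ts) pair; only the curve is kept */
-         emit_cubic(merge_curves(elevated_quadratics, j, i - j));  /* hypothetical helper */
-     } else {
-         for (int k = j; k < i; k++)
-             emit_quadratic(&q[2 * k]);  /* hypothetical helper: q[2k : 2k+3] */
-     }
-     j = i;
- }
- #endif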
__Pyx_GOTREF(__pyx_t_12); - __pyx_t_5 = __Pyx_GetItemInt(__pyx_t_12, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 380, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __pyx_t_11 = __Pyx_PyList_Append(__pyx_v_curves, __pyx_t_5); if (unlikely(__pyx_t_11 == ((int)-1))) __PYX_ERR(0, 380, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - - /* "fontTools/qu2cu/qu2cu.py":379 - * j = 0 - * for i, is_cubic in reversed(list(zip(splits, cubic))): - * if is_cubic: # <<<<<<<<<<<<<< - * curves.append(merge_curves(elevated_quadratics, j, i - j)[0]) - * else: - */ - goto __pyx_L44; - } - - /* "fontTools/qu2cu/qu2cu.py":382 - * curves.append(merge_curves(elevated_quadratics, j, i - j)[0]) - * else: - * for k in range(j, i): # <<<<<<<<<<<<<< - * curves.append(q[k * 2 : k * 2 + 3]) - * j = i - */ - /*else*/ { - __pyx_t_4 = __pyx_v_i; - __pyx_t_13 = __pyx_t_4; - for (__pyx_t_14 = __pyx_v_j; __pyx_t_14 < __pyx_t_13; __pyx_t_14+=1) { - __pyx_v_k = __pyx_t_14; - - /* "fontTools/qu2cu/qu2cu.py":383 - * else: - * for k in range(j, i): - * curves.append(q[k * 2 : k * 2 + 3]) # <<<<<<<<<<<<<< - * j = i - * - */ - __pyx_t_5 = __Pyx_PyObject_GetSlice(__pyx_v_q, (__pyx_v_k * 2), ((__pyx_v_k * 2) + 3), NULL, NULL, NULL, 1, 1, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 383, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_11 = __Pyx_PyList_Append(__pyx_v_curves, __pyx_t_5); if (unlikely(__pyx_t_11 == ((int)-1))) __PYX_ERR(0, 383, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } - } - __pyx_L44:; - - /* "fontTools/qu2cu/qu2cu.py":384 - * for k in range(j, i): - * curves.append(q[k * 2 : k * 2 + 3]) - * j = i # <<<<<<<<<<<<<< - * - * return curves - */ - __pyx_v_j = __pyx_v_i; - - /* "fontTools/qu2cu/qu2cu.py":378 - * curves = [] - * j = 0 - * for i, is_cubic in reversed(list(zip(splits, cubic))): # <<<<<<<<<<<<<< - * if is_cubic: - * curves.append(merge_curves(elevated_quadratics, j, i - j)[0]) - */ - } - __Pyx_DECREF(__pyx_t_19); __pyx_t_19 = 0; - - /* "fontTools/qu2cu/qu2cu.py":386 - * j = i - * - * return curves # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_curves); - __pyx_r = __pyx_v_curves; - goto __pyx_L0; - - /* "fontTools/qu2cu/qu2cu.py":268 - * u=cython.complex, - * ) - * def spline_to_curves(q, costs, tolerance=0.5, all_cubic=False): # <<<<<<<<<<<<<< - * """ - * q: quadratic spline with alternating on-curve / off-curve points. 
- */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_12); - __Pyx_XDECREF(__pyx_t_18); - __Pyx_XDECREF(__pyx_t_19); - __Pyx_AddTraceback("fontTools.qu2cu.qu2cu.spline_to_curves", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_elevated_quadratics); - __Pyx_XDECREF(__pyx_v_forced); - __Pyx_XDECREF(__pyx_v_sols); - __Pyx_XDECREF(__pyx_v_impossible); - __Pyx_XDECREF(__pyx_v_best_sol); - __Pyx_XDECREF(__pyx_v_this_count); - __Pyx_XDECREF(__pyx_v_i_sol); - __Pyx_XDECREF(__pyx_v_curve); - __Pyx_XDECREF(__pyx_v_ts); - __Pyx_XDECREF(__pyx_v_reconstructed_iter); - __Pyx_XDECREF(__pyx_v_reconstructed); - __Pyx_XDECREF(__pyx_v_splits); - __Pyx_XDECREF(__pyx_v_cubic); - __Pyx_XDECREF(__pyx_v_curves); - __Pyx_XDECREF(__pyx_gb_9fontTools_5qu2cu_5qu2cu_16spline_to_curves_2generator1); - __Pyx_DECREF(((PyObject *)__pyx_cur_scope)); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/qu2cu/qu2cu.py":389 - * - * - * def main(): # <<<<<<<<<<<<<< - * from fontTools.cu2qu.benchmark import generate_curve - * from fontTools.cu2qu import curve_to_quadratic - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_5qu2cu_5qu2cu_9main(PyObject *__pyx_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static char __pyx_doc_9fontTools_5qu2cu_5qu2cu_8main[] = "main()"; -static PyMethodDef __pyx_mdef_9fontTools_5qu2cu_5qu2cu_9main = {"main", (PyCFunction)__pyx_pw_9fontTools_5qu2cu_5qu2cu_9main, METH_NOARGS, __pyx_doc_9fontTools_5qu2cu_5qu2cu_8main}; -static PyObject *__pyx_pw_9fontTools_5qu2cu_5qu2cu_9main(PyObject *__pyx_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("main (wrapper)", 0); - __pyx_r = __pyx_pf_9fontTools_5qu2cu_5qu2cu_8main(__pyx_self); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_5qu2cu_5qu2cu_8main(CYTHON_UNUSED PyObject *__pyx_self) { - PyObject *__pyx_v_generate_curve = NULL; - PyObject *__pyx_v_curve_to_quadratic = NULL; - double __pyx_v_tolerance; - double __pyx_v_reconstruct_tolerance; - PyObject *__pyx_v_curve = NULL; - PyObject *__pyx_v_quadratics = NULL; - PyObject *__pyx_v_curves = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_t_5; - PyObject *__pyx_t_6 = NULL; - Py_ssize_t __pyx_t_7; - PyObject *__pyx_t_8 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("main", 0); - - /* "fontTools/qu2cu/qu2cu.py":390 - * - * def main(): - * from fontTools.cu2qu.benchmark import generate_curve # <<<<<<<<<<<<<< - * from fontTools.cu2qu import curve_to_quadratic - * - */ - __pyx_t_1 = PyList_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 390, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_n_s_generate_curve); - __Pyx_GIVEREF(__pyx_n_s_generate_curve); - PyList_SET_ITEM(__pyx_t_1, 0, __pyx_n_s_generate_curve); - __pyx_t_2 = __Pyx_Import(__pyx_n_s_fontTools_cu2qu_benchmark, __pyx_t_1, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 390, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_ImportFrom(__pyx_t_2, __pyx_n_s_generate_curve); if 
(unlikely(!__pyx_t_1)) __PYX_ERR(0, 390, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_t_1); - __pyx_v_generate_curve = __pyx_t_1; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/qu2cu/qu2cu.py":391 - * def main(): - * from fontTools.cu2qu.benchmark import generate_curve - * from fontTools.cu2qu import curve_to_quadratic # <<<<<<<<<<<<<< - * - * tolerance = 0.05 - */ - __pyx_t_2 = PyList_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 391, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_n_s_curve_to_quadratic); - __Pyx_GIVEREF(__pyx_n_s_curve_to_quadratic); - PyList_SET_ITEM(__pyx_t_2, 0, __pyx_n_s_curve_to_quadratic); - __pyx_t_1 = __Pyx_Import(__pyx_n_s_fontTools_cu2qu, __pyx_t_2, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 391, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_ImportFrom(__pyx_t_1, __pyx_n_s_curve_to_quadratic); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 391, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_2); - __pyx_v_curve_to_quadratic = __pyx_t_2; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "fontTools/qu2cu/qu2cu.py":393 - * from fontTools.cu2qu import curve_to_quadratic - * - * tolerance = 0.05 # <<<<<<<<<<<<<< - * reconstruct_tolerance = tolerance * 1 - * curve = generate_curve() - */ - __pyx_v_tolerance = 0.05; - - /* "fontTools/qu2cu/qu2cu.py":394 - * - * tolerance = 0.05 - * reconstruct_tolerance = tolerance * 1 # <<<<<<<<<<<<<< - * curve = generate_curve() - * quadratics = curve_to_quadratic(curve, tolerance) - */ - __pyx_v_reconstruct_tolerance = (__pyx_v_tolerance * 1.0); - - /* "fontTools/qu2cu/qu2cu.py":395 - * tolerance = 0.05 - * reconstruct_tolerance = tolerance * 1 - * curve = generate_curve() # <<<<<<<<<<<<<< - * quadratics = curve_to_quadratic(curve, tolerance) - * print( - */ - __Pyx_INCREF(__pyx_v_generate_curve); - __pyx_t_2 = __pyx_v_generate_curve; __pyx_t_3 = NULL; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - } - } - __pyx_t_1 = (__pyx_t_3) ? __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_3) : __Pyx_PyObject_CallNoArg(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 395, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_v_curve = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/qu2cu/qu2cu.py":396 - * reconstruct_tolerance = tolerance * 1 - * curve = generate_curve() - * quadratics = curve_to_quadratic(curve, tolerance) # <<<<<<<<<<<<<< - * print( - * "cu2qu tolerance %g. qu2cu tolerance %g." 
% (tolerance, reconstruct_tolerance) - */ - __pyx_t_2 = PyFloat_FromDouble(__pyx_v_tolerance); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 396, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_curve_to_quadratic); - __pyx_t_3 = __pyx_v_curve_to_quadratic; __pyx_t_4 = NULL; - __pyx_t_5 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_4)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_4); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_5 = 1; - } - } - #if CYTHON_FAST_PYCALL - if (PyFunction_Check(__pyx_t_3)) { - PyObject *__pyx_temp[3] = {__pyx_t_4, __pyx_v_curve, __pyx_t_2}; - __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_3, __pyx_temp+1-__pyx_t_5, 2+__pyx_t_5); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 396, __pyx_L1_error) - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } else - #endif - #if CYTHON_FAST_PYCCALL - if (__Pyx_PyFastCFunction_Check(__pyx_t_3)) { - PyObject *__pyx_temp[3] = {__pyx_t_4, __pyx_v_curve, __pyx_t_2}; - __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_3, __pyx_temp+1-__pyx_t_5, 2+__pyx_t_5); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 396, __pyx_L1_error) - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } else - #endif - { - __pyx_t_6 = PyTuple_New(2+__pyx_t_5); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 396, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - if (__pyx_t_4) { - __Pyx_GIVEREF(__pyx_t_4); PyTuple_SET_ITEM(__pyx_t_6, 0, __pyx_t_4); __pyx_t_4 = NULL; - } - __Pyx_INCREF(__pyx_v_curve); - __Pyx_GIVEREF(__pyx_v_curve); - PyTuple_SET_ITEM(__pyx_t_6, 0+__pyx_t_5, __pyx_v_curve); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_6, 1+__pyx_t_5, __pyx_t_2); - __pyx_t_2 = 0; - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_t_6, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 396, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_quadratics = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/qu2cu/qu2cu.py":398 - * quadratics = curve_to_quadratic(curve, tolerance) - * print( - * "cu2qu tolerance %g. qu2cu tolerance %g." % (tolerance, reconstruct_tolerance) # <<<<<<<<<<<<<< - * ) - * print("One random cubic turned into %d quadratics." % len(quadratics)) - */ - __pyx_t_1 = PyFloat_FromDouble(__pyx_v_tolerance); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 398, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = PyFloat_FromDouble(__pyx_v_reconstruct_tolerance); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 398, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_6 = PyTuple_New(2); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 398, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_6, 0, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_6, 1, __pyx_t_3); - __pyx_t_1 = 0; - __pyx_t_3 = 0; - __pyx_t_3 = PyUnicode_Format(__pyx_kp_u_cu2qu_tolerance_g_qu2cu_toleranc, __pyx_t_6); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 398, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - - /* "fontTools/qu2cu/qu2cu.py":397 - * curve = generate_curve() - * quadratics = curve_to_quadratic(curve, tolerance) - * print( # <<<<<<<<<<<<<< - * "cu2qu tolerance %g. qu2cu tolerance %g." 
% (tolerance, reconstruct_tolerance) - * ) - */ - __pyx_t_6 = __Pyx_PyObject_CallOneArg(__pyx_builtin_print, __pyx_t_3); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 397, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - - /* "fontTools/qu2cu/qu2cu.py":400 - * "cu2qu tolerance %g. qu2cu tolerance %g." % (tolerance, reconstruct_tolerance) - * ) - * print("One random cubic turned into %d quadratics." % len(quadratics)) # <<<<<<<<<<<<<< - * curves = quadratic_to_curves([quadratics], reconstruct_tolerance) - * print("Those quadratics turned back into %d cubics. " % len(curves)) - */ - __pyx_t_7 = PyObject_Length(__pyx_v_quadratics); if (unlikely(__pyx_t_7 == ((Py_ssize_t)-1))) __PYX_ERR(0, 400, __pyx_L1_error) - __pyx_t_6 = PyInt_FromSsize_t(__pyx_t_7); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 400, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_3 = PyUnicode_Format(__pyx_kp_u_One_random_cubic_turned_into_d_q, __pyx_t_6); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 400, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_6 = __Pyx_PyObject_CallOneArg(__pyx_builtin_print, __pyx_t_3); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 400, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - - /* "fontTools/qu2cu/qu2cu.py":401 - * ) - * print("One random cubic turned into %d quadratics." % len(quadratics)) - * curves = quadratic_to_curves([quadratics], reconstruct_tolerance) # <<<<<<<<<<<<<< - * print("Those quadratics turned back into %d cubics. " % len(curves)) - * print("Original curve:", curve) - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_quadratic_to_curves); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 401, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = PyList_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 401, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v_quadratics); - __Pyx_GIVEREF(__pyx_v_quadratics); - PyList_SET_ITEM(__pyx_t_1, 0, __pyx_v_quadratics); - __pyx_t_2 = PyFloat_FromDouble(__pyx_v_reconstruct_tolerance); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 401, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = NULL; - __pyx_t_5 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_4)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_4); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_5 = 1; - } - } - #if CYTHON_FAST_PYCALL - if (PyFunction_Check(__pyx_t_3)) { - PyObject *__pyx_temp[3] = {__pyx_t_4, __pyx_t_1, __pyx_t_2}; - __pyx_t_6 = __Pyx_PyFunction_FastCall(__pyx_t_3, __pyx_temp+1-__pyx_t_5, 2+__pyx_t_5); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 401, __pyx_L1_error) - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } else - #endif - #if CYTHON_FAST_PYCCALL - if (__Pyx_PyFastCFunction_Check(__pyx_t_3)) { - PyObject *__pyx_temp[3] = {__pyx_t_4, __pyx_t_1, __pyx_t_2}; - __pyx_t_6 = __Pyx_PyCFunction_FastCall(__pyx_t_3, __pyx_temp+1-__pyx_t_5, 2+__pyx_t_5); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 401, __pyx_L1_error) - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } else - #endif - { - __pyx_t_8 = 
PyTuple_New(2+__pyx_t_5); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 401, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - if (__pyx_t_4) { - __Pyx_GIVEREF(__pyx_t_4); PyTuple_SET_ITEM(__pyx_t_8, 0, __pyx_t_4); __pyx_t_4 = NULL; - } - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_8, 0+__pyx_t_5, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_8, 1+__pyx_t_5, __pyx_t_2); - __pyx_t_1 = 0; - __pyx_t_2 = 0; - __pyx_t_6 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_t_8, NULL); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 401, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - } - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_curves = __pyx_t_6; - __pyx_t_6 = 0; - - /* "fontTools/qu2cu/qu2cu.py":402 - * print("One random cubic turned into %d quadratics." % len(quadratics)) - * curves = quadratic_to_curves([quadratics], reconstruct_tolerance) - * print("Those quadratics turned back into %d cubics. " % len(curves)) # <<<<<<<<<<<<<< - * print("Original curve:", curve) - * print("Reconstructed curve(s):", curves) - */ - __pyx_t_7 = PyObject_Length(__pyx_v_curves); if (unlikely(__pyx_t_7 == ((Py_ssize_t)-1))) __PYX_ERR(0, 402, __pyx_L1_error) - __pyx_t_6 = PyInt_FromSsize_t(__pyx_t_7); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 402, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_3 = PyUnicode_Format(__pyx_kp_u_Those_quadratics_turned_back_int, __pyx_t_6); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 402, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_6 = __Pyx_PyObject_CallOneArg(__pyx_builtin_print, __pyx_t_3); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 402, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - - /* "fontTools/qu2cu/qu2cu.py":403 - * curves = quadratic_to_curves([quadratics], reconstruct_tolerance) - * print("Those quadratics turned back into %d cubics. " % len(curves)) - * print("Original curve:", curve) # <<<<<<<<<<<<<< - * print("Reconstructed curve(s):", curves) - * - */ - __pyx_t_6 = PyTuple_New(2); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 403, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_INCREF(__pyx_kp_u_Original_curve); - __Pyx_GIVEREF(__pyx_kp_u_Original_curve); - PyTuple_SET_ITEM(__pyx_t_6, 0, __pyx_kp_u_Original_curve); - __Pyx_INCREF(__pyx_v_curve); - __Pyx_GIVEREF(__pyx_v_curve); - PyTuple_SET_ITEM(__pyx_t_6, 1, __pyx_v_curve); - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_print, __pyx_t_6, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 403, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/qu2cu/qu2cu.py":404 - * print("Those quadratics turned back into %d cubics. 
" % len(curves)) - * print("Original curve:", curve) - * print("Reconstructed curve(s):", curves) # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 404, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_kp_u_Reconstructed_curve_s); - __Pyx_GIVEREF(__pyx_kp_u_Reconstructed_curve_s); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_kp_u_Reconstructed_curve_s); - __Pyx_INCREF(__pyx_v_curves); - __Pyx_GIVEREF(__pyx_v_curves); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_v_curves); - __pyx_t_6 = __Pyx_PyObject_Call(__pyx_builtin_print, __pyx_t_3, NULL); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 404, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - - /* "fontTools/qu2cu/qu2cu.py":389 - * - * - * def main(): # <<<<<<<<<<<<<< - * from fontTools.cu2qu.benchmark import generate_curve - * from fontTools.cu2qu import curve_to_quadratic - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_AddTraceback("fontTools.qu2cu.qu2cu.main", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_generate_curve); - __Pyx_XDECREF(__pyx_v_curve_to_quadratic); - __Pyx_XDECREF(__pyx_v_curve); - __Pyx_XDECREF(__pyx_v_quadratics); - __Pyx_XDECREF(__pyx_v_curves); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves *__pyx_freelist_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves[8]; -static int __pyx_freecount_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves = 0; - -static PyObject *__pyx_tp_new_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves(PyTypeObject *t, CYTHON_UNUSED PyObject *a, CYTHON_UNUSED PyObject *k) { - PyObject *o; - if (CYTHON_COMPILING_IN_CPYTHON && likely((__pyx_freecount_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves > 0) & (t->tp_basicsize == sizeof(struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves)))) { - o = (PyObject*)__pyx_freelist_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves[--__pyx_freecount_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves]; - memset(o, 0, sizeof(struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves)); - (void) PyObject_INIT(o, t); - PyObject_GC_Track(o); - } else { - o = (*t->tp_alloc)(t, 0); - if (unlikely(!o)) return 0; - } - return o; -} - -static void __pyx_tp_dealloc_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves(PyObject *o) { - struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves *p = (struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves *)o; - PyObject_GC_UnTrack(o); - Py_CLEAR(p->__pyx_8genexpr3__pyx_v_curve); - if (CYTHON_COMPILING_IN_CPYTHON && ((__pyx_freecount_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves < 8) & (Py_TYPE(o)->tp_basicsize == sizeof(struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves)))) { - 
__pyx_freelist_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves[__pyx_freecount_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves++] = ((struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves *)o); - } else { - (*Py_TYPE(o)->tp_free)(o); - } -} - -static int __pyx_tp_traverse_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves *p = (struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves *)o; - if (p->__pyx_8genexpr3__pyx_v_curve) { - e = (*v)(p->__pyx_8genexpr3__pyx_v_curve, a); if (e) return e; - } - return 0; -} - -static int __pyx_tp_clear_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves(PyObject *o) { - PyObject* tmp; - struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves *p = (struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves *)o; - tmp = ((PyObject*)p->__pyx_8genexpr3__pyx_v_curve); - p->__pyx_8genexpr3__pyx_v_curve = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - return 0; -} - -static PyTypeObject __pyx_type_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves = { - PyVarObject_HEAD_INIT(0, 0) - "fontTools.qu2cu.qu2cu.__pyx_scope_struct__quadratic_to_curves", /*tp_name*/ - sizeof(struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves, /*tp_traverse*/ - __pyx_tp_clear_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - 0, /*tp_methods*/ - 0, /*tp_members*/ - 0, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - 0, /*tp_dictoffset*/ - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - 0, /*tp_finalize*/ - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 && PY_VERSION_HEX < 0x030a0000 - 0, /*tp_pypy_flags*/ - #endif -}; - -static struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_1_genexpr 
*__pyx_freelist_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_1_genexpr[8]; -static int __pyx_freecount_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_1_genexpr = 0; - -static PyObject *__pyx_tp_new_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_1_genexpr(PyTypeObject *t, CYTHON_UNUSED PyObject *a, CYTHON_UNUSED PyObject *k) { - PyObject *o; - if (CYTHON_COMPILING_IN_CPYTHON && likely((__pyx_freecount_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_1_genexpr > 0) & (t->tp_basicsize == sizeof(struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_1_genexpr)))) { - o = (PyObject*)__pyx_freelist_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_1_genexpr[--__pyx_freecount_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_1_genexpr]; - memset(o, 0, sizeof(struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_1_genexpr)); - (void) PyObject_INIT(o, t); - PyObject_GC_Track(o); - } else { - o = (*t->tp_alloc)(t, 0); - if (unlikely(!o)) return 0; - } - return o; -} - -static void __pyx_tp_dealloc_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_1_genexpr(PyObject *o) { - struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_1_genexpr *p = (struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_1_genexpr *)o; - PyObject_GC_UnTrack(o); - Py_CLEAR(p->__pyx_outer_scope); - Py_CLEAR(p->__pyx_v_c); - Py_CLEAR(p->__pyx_t_0); - if (CYTHON_COMPILING_IN_CPYTHON && ((__pyx_freecount_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_1_genexpr < 8) & (Py_TYPE(o)->tp_basicsize == sizeof(struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_1_genexpr)))) { - __pyx_freelist_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_1_genexpr[__pyx_freecount_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_1_genexpr++] = ((struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_1_genexpr *)o); - } else { - (*Py_TYPE(o)->tp_free)(o); - } -} - -static int __pyx_tp_traverse_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_1_genexpr(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_1_genexpr *p = (struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_1_genexpr *)o; - if (p->__pyx_outer_scope) { - e = (*v)(((PyObject *)p->__pyx_outer_scope), a); if (e) return e; - } - if (p->__pyx_v_c) { - e = (*v)(p->__pyx_v_c, a); if (e) return e; - } - if (p->__pyx_t_0) { - e = (*v)(p->__pyx_t_0, a); if (e) return e; - } - return 0; -} - -static PyTypeObject __pyx_type_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_1_genexpr = { - PyVarObject_HEAD_INIT(0, 0) - "fontTools.qu2cu.qu2cu.__pyx_scope_struct_1_genexpr", /*tp_name*/ - sizeof(struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_1_genexpr), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_1_genexpr, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_1_genexpr, /*tp_traverse*/ - 0, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, 
/*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - 0, /*tp_methods*/ - 0, /*tp_members*/ - 0, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - 0, /*tp_dictoffset*/ - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_1_genexpr, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - 0, /*tp_finalize*/ - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 && PY_VERSION_HEX < 0x030a0000 - 0, /*tp_pypy_flags*/ - #endif -}; - -static struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves *__pyx_freelist_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves[8]; -static int __pyx_freecount_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves = 0; - -static PyObject *__pyx_tp_new_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves(PyTypeObject *t, CYTHON_UNUSED PyObject *a, CYTHON_UNUSED PyObject *k) { - PyObject *o; - if (CYTHON_COMPILING_IN_CPYTHON && likely((__pyx_freecount_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves > 0) & (t->tp_basicsize == sizeof(struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves)))) { - o = (PyObject*)__pyx_freelist_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves[--__pyx_freecount_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves]; - memset(o, 0, sizeof(struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves)); - (void) PyObject_INIT(o, t); - PyObject_GC_Track(o); - } else { - o = (*t->tp_alloc)(t, 0); - if (unlikely(!o)) return 0; - } - return o; -} - -static void __pyx_tp_dealloc_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves(PyObject *o) { - struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves *p = (struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves *)o; - PyObject_GC_UnTrack(o); - Py_CLEAR(p->__pyx_v_orig); - Py_CLEAR(p->__pyx_v_reconst); - if (CYTHON_COMPILING_IN_CPYTHON && ((__pyx_freecount_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves < 8) & (Py_TYPE(o)->tp_basicsize == sizeof(struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves)))) { - __pyx_freelist_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves[__pyx_freecount_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves++] = ((struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves *)o); - } else { - (*Py_TYPE(o)->tp_free)(o); - } -} - -static int __pyx_tp_traverse_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves *p = (struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves *)o; - if (p->__pyx_v_orig) { - e = (*v)(p->__pyx_v_orig, a); if (e) return e; - } - if (p->__pyx_v_reconst) { - e = (*v)(p->__pyx_v_reconst, a); if (e) return e; - } - return 0; -} - -static int 
__pyx_tp_clear_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves(PyObject *o) { - PyObject* tmp; - struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves *p = (struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves *)o; - tmp = ((PyObject*)p->__pyx_v_orig); - p->__pyx_v_orig = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - tmp = ((PyObject*)p->__pyx_v_reconst); - p->__pyx_v_reconst = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - return 0; -} - -static PyTypeObject __pyx_type_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves = { - PyVarObject_HEAD_INIT(0, 0) - "fontTools.qu2cu.qu2cu.__pyx_scope_struct_2_spline_to_curves", /*tp_name*/ - sizeof(struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves, /*tp_traverse*/ - __pyx_tp_clear_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - 0, /*tp_methods*/ - 0, /*tp_members*/ - 0, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - 0, /*tp_dictoffset*/ - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - 0, /*tp_finalize*/ - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 && PY_VERSION_HEX < 0x030a0000 - 0, /*tp_pypy_flags*/ - #endif -}; - -static struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_3_genexpr *__pyx_freelist_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_3_genexpr[8]; -static int __pyx_freecount_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_3_genexpr = 0; - -static PyObject *__pyx_tp_new_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_3_genexpr(PyTypeObject *t, CYTHON_UNUSED PyObject *a, CYTHON_UNUSED PyObject *k) { - PyObject *o; - if (CYTHON_COMPILING_IN_CPYTHON && likely((__pyx_freecount_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_3_genexpr > 0) & (t->tp_basicsize == sizeof(struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_3_genexpr)))) { - o = (PyObject*)__pyx_freelist_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_3_genexpr[--__pyx_freecount_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_3_genexpr]; - 
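/* These __pyx_scope_struct_* types are Cython's closure records: each
 * generator expression in quadratic_to_curves / spline_to_curves gets a
 * heap object holding its captured variables (the "curve", "c", "orig",
 * and "reconst" members seen above), and a freelist of up to 8 recycled
 * instances avoids round-trips through the allocator. A hypothetical shape
 * of Python that produces such a scope (illustrative only; the name
 * "some_curves" is invented, not from the module source):
 *
 *     reconstructed = (tuple(c) for c in some_curves)  # "c" is stored in a scope struct
 */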
memset(o, 0, sizeof(struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_3_genexpr)); - (void) PyObject_INIT(o, t); - PyObject_GC_Track(o); - } else { - o = (*t->tp_alloc)(t, 0); - if (unlikely(!o)) return 0; - } - return o; -} - -static void __pyx_tp_dealloc_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_3_genexpr(PyObject *o) { - struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_3_genexpr *p = (struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_3_genexpr *)o; - PyObject_GC_UnTrack(o); - Py_CLEAR(p->__pyx_outer_scope); - Py_CLEAR(p->__pyx_t_0); - if (CYTHON_COMPILING_IN_CPYTHON && ((__pyx_freecount_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_3_genexpr < 8) & (Py_TYPE(o)->tp_basicsize == sizeof(struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_3_genexpr)))) { - __pyx_freelist_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_3_genexpr[__pyx_freecount_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_3_genexpr++] = ((struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_3_genexpr *)o); - } else { - (*Py_TYPE(o)->tp_free)(o); - } -} - -static int __pyx_tp_traverse_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_3_genexpr(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_3_genexpr *p = (struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_3_genexpr *)o; - if (p->__pyx_outer_scope) { - e = (*v)(((PyObject *)p->__pyx_outer_scope), a); if (e) return e; - } - if (p->__pyx_t_0) { - e = (*v)(p->__pyx_t_0, a); if (e) return e; - } - return 0; -} - -static PyTypeObject __pyx_type_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_3_genexpr = { - PyVarObject_HEAD_INIT(0, 0) - "fontTools.qu2cu.qu2cu.__pyx_scope_struct_3_genexpr", /*tp_name*/ - sizeof(struct __pyx_obj_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_3_genexpr), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_3_genexpr, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_3_genexpr, /*tp_traverse*/ - 0, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - 0, /*tp_methods*/ - 0, /*tp_members*/ - 0, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - 0, /*tp_dictoffset*/ - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_3_genexpr, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - 0, /*tp_finalize*/ - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif - #if CYTHON_COMPILING_IN_PYPY 
&& PY_VERSION_HEX >= 0x03090000 && PY_VERSION_HEX < 0x030a0000 - 0, /*tp_pypy_flags*/ - #endif -}; - -static PyMethodDef __pyx_methods[] = { - {0, 0, 0, 0} -}; - -#if PY_MAJOR_VERSION >= 3 -#if CYTHON_PEP489_MULTI_PHASE_INIT -static PyObject* __pyx_pymod_create(PyObject *spec, PyModuleDef *def); /*proto*/ -static int __pyx_pymod_exec_qu2cu(PyObject* module); /*proto*/ -static PyModuleDef_Slot __pyx_moduledef_slots[] = { - {Py_mod_create, (void*)__pyx_pymod_create}, - {Py_mod_exec, (void*)__pyx_pymod_exec_qu2cu}, - {0, NULL} -}; -#endif - -static struct PyModuleDef __pyx_moduledef = { - PyModuleDef_HEAD_INIT, - "qu2cu", - 0, /* m_doc */ - #if CYTHON_PEP489_MULTI_PHASE_INIT - 0, /* m_size */ - #else - -1, /* m_size */ - #endif - __pyx_methods /* m_methods */, - #if CYTHON_PEP489_MULTI_PHASE_INIT - __pyx_moduledef_slots, /* m_slots */ - #else - NULL, /* m_reload */ - #endif - NULL, /* m_traverse */ - NULL, /* m_clear */ - NULL /* m_free */ -}; -#endif -#ifndef CYTHON_SMALL_CODE -#if defined(__clang__) - #define CYTHON_SMALL_CODE -#elif defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 3)) - #define CYTHON_SMALL_CODE __attribute__((cold)) -#else - #define CYTHON_SMALL_CODE -#endif -#endif - -static __Pyx_StringTabEntry __pyx_string_tab[] = { - {&__pyx_n_s_AttributeError, __pyx_k_AttributeError, sizeof(__pyx_k_AttributeError), 0, 0, 1, 1}, - {&__pyx_n_s_COMPILED, __pyx_k_COMPILED, sizeof(__pyx_k_COMPILED), 0, 0, 1, 1}, - {&__pyx_n_s_ImportError, __pyx_k_ImportError, sizeof(__pyx_k_ImportError), 0, 0, 1, 1}, - {&__pyx_kp_s_Lib_fontTools_qu2cu_qu2cu_py, __pyx_k_Lib_fontTools_qu2cu_qu2cu_py, sizeof(__pyx_k_Lib_fontTools_qu2cu_qu2cu_py), 0, 0, 1, 0}, - {&__pyx_n_s_List, __pyx_k_List, sizeof(__pyx_k_List), 0, 0, 1, 1}, - {&__pyx_kp_u_One_random_cubic_turned_into_d_q, __pyx_k_One_random_cubic_turned_into_d_q, sizeof(__pyx_k_One_random_cubic_turned_into_d_q), 0, 1, 0, 0}, - {&__pyx_kp_u_Original_curve, __pyx_k_Original_curve, sizeof(__pyx_k_Original_curve), 0, 1, 0, 0}, - {&__pyx_n_s_Point, __pyx_k_Point, sizeof(__pyx_k_Point), 0, 0, 1, 1}, - {&__pyx_kp_u_Reconstructed_curve_s, __pyx_k_Reconstructed_curve_s, sizeof(__pyx_k_Reconstructed_curve_s), 0, 1, 0, 0}, - {&__pyx_n_s_Solution, __pyx_k_Solution, sizeof(__pyx_k_Solution), 0, 0, 1, 1}, - {&__pyx_n_u_Solution, __pyx_k_Solution, sizeof(__pyx_k_Solution), 0, 1, 0, 1}, - {&__pyx_kp_u_Those_quadratics_turned_back_int, __pyx_k_Those_quadratics_turned_back_int, sizeof(__pyx_k_Those_quadratics_turned_back_int), 0, 1, 0, 0}, - {&__pyx_n_s_Tuple, __pyx_k_Tuple, sizeof(__pyx_k_Tuple), 0, 0, 1, 1}, - {&__pyx_n_s_Union, __pyx_k_Union, sizeof(__pyx_k_Union), 0, 0, 1, 1}, - {&__pyx_n_s_ZeroDivisionError, __pyx_k_ZeroDivisionError, sizeof(__pyx_k_ZeroDivisionError), 0, 0, 1, 1}, - {&__pyx_n_s_add_implicit_on_curves, __pyx_k_add_implicit_on_curves, sizeof(__pyx_k_add_implicit_on_curves), 0, 0, 1, 1}, - {&__pyx_n_s_all, __pyx_k_all, sizeof(__pyx_k_all), 0, 0, 1, 1}, - {&__pyx_n_s_all_cubic, __pyx_k_all_cubic, sizeof(__pyx_k_all_cubic), 0, 0, 1, 1}, - {&__pyx_n_s_args, __pyx_k_args, sizeof(__pyx_k_args), 0, 0, 1, 1}, - {&__pyx_n_s_best_sol, __pyx_k_best_sol, sizeof(__pyx_k_best_sol), 0, 0, 1, 1}, - {&__pyx_n_s_cline_in_traceback, __pyx_k_cline_in_traceback, sizeof(__pyx_k_cline_in_traceback), 0, 0, 1, 1}, - {&__pyx_n_s_close, __pyx_k_close, sizeof(__pyx_k_close), 0, 0, 1, 1}, - {&__pyx_n_s_collections, __pyx_k_collections, sizeof(__pyx_k_collections), 0, 0, 1, 1}, - {&__pyx_n_s_cost, __pyx_k_cost, sizeof(__pyx_k_cost), 0, 0, 1, 1}, - 
{&__pyx_n_s_costs, __pyx_k_costs, sizeof(__pyx_k_costs), 0, 0, 1, 1}, - {&__pyx_n_s_count, __pyx_k_count, sizeof(__pyx_k_count), 0, 0, 1, 1}, - {&__pyx_kp_u_cu2qu_tolerance_g_qu2cu_toleranc, __pyx_k_cu2qu_tolerance_g_qu2cu_toleranc, sizeof(__pyx_k_cu2qu_tolerance_g_qu2cu_toleranc), 0, 1, 0, 0}, - {&__pyx_n_s_cubic, __pyx_k_cubic, sizeof(__pyx_k_cubic), 0, 0, 1, 1}, - {&__pyx_n_s_curve, __pyx_k_curve, sizeof(__pyx_k_curve), 0, 0, 1, 1}, - {&__pyx_n_s_curve_to_quadratic, __pyx_k_curve_to_quadratic, sizeof(__pyx_k_curve_to_quadratic), 0, 0, 1, 1}, - {&__pyx_n_s_curves, __pyx_k_curves, sizeof(__pyx_k_curves), 0, 0, 1, 1}, - {&__pyx_n_s_cython, __pyx_k_cython, sizeof(__pyx_k_cython), 0, 0, 1, 1}, - {&__pyx_n_s_elevate_quadratic, __pyx_k_elevate_quadratic, sizeof(__pyx_k_elevate_quadratic), 0, 0, 1, 1}, - {&__pyx_n_s_elevated_quadratics, __pyx_k_elevated_quadratics, sizeof(__pyx_k_elevated_quadratics), 0, 0, 1, 1}, - {&__pyx_n_s_enumerate, __pyx_k_enumerate, sizeof(__pyx_k_enumerate), 0, 0, 1, 1}, - {&__pyx_n_s_err, __pyx_k_err, sizeof(__pyx_k_err), 0, 0, 1, 1}, - {&__pyx_n_s_error, __pyx_k_error, sizeof(__pyx_k_error), 0, 0, 1, 1}, - {&__pyx_n_u_error, __pyx_k_error, sizeof(__pyx_k_error), 0, 1, 0, 1}, - {&__pyx_n_u_float, __pyx_k_float, sizeof(__pyx_k_float), 0, 1, 0, 1}, - {&__pyx_n_s_fontTools_cu2qu, __pyx_k_fontTools_cu2qu, sizeof(__pyx_k_fontTools_cu2qu), 0, 0, 1, 1}, - {&__pyx_n_s_fontTools_cu2qu_benchmark, __pyx_k_fontTools_cu2qu_benchmark, sizeof(__pyx_k_fontTools_cu2qu_benchmark), 0, 0, 1, 1}, - {&__pyx_n_s_fontTools_misc, __pyx_k_fontTools_misc, sizeof(__pyx_k_fontTools_misc), 0, 0, 1, 1}, - {&__pyx_n_s_fontTools_misc_bezierTools, __pyx_k_fontTools_misc_bezierTools, sizeof(__pyx_k_fontTools_misc_bezierTools), 0, 0, 1, 1}, - {&__pyx_n_s_fontTools_qu2cu_qu2cu, __pyx_k_fontTools_qu2cu_qu2cu, sizeof(__pyx_k_fontTools_qu2cu_qu2cu), 0, 0, 1, 1}, - {&__pyx_n_s_forced, __pyx_k_forced, sizeof(__pyx_k_forced), 0, 0, 1, 1}, - {&__pyx_n_s_generate_curve, __pyx_k_generate_curve, sizeof(__pyx_k_generate_curve), 0, 0, 1, 1}, - {&__pyx_n_s_genexpr, __pyx_k_genexpr, sizeof(__pyx_k_genexpr), 0, 0, 1, 1}, - {&__pyx_n_s_i, __pyx_k_i, sizeof(__pyx_k_i), 0, 0, 1, 1}, - {&__pyx_n_s_i_sol, __pyx_k_i_sol, sizeof(__pyx_k_i_sol), 0, 0, 1, 1}, - {&__pyx_n_s_i_sol_count, __pyx_k_i_sol_count, sizeof(__pyx_k_i_sol_count), 0, 0, 1, 1}, - {&__pyx_n_s_i_sol_error, __pyx_k_i_sol_error, sizeof(__pyx_k_i_sol_error), 0, 0, 1, 1}, - {&__pyx_n_s_imag, __pyx_k_imag, sizeof(__pyx_k_imag), 0, 0, 1, 1}, - {&__pyx_n_s_import, __pyx_k_import, sizeof(__pyx_k_import), 0, 0, 1, 1}, - {&__pyx_n_s_impossible, __pyx_k_impossible, sizeof(__pyx_k_impossible), 0, 0, 1, 1}, - {&__pyx_n_s_is_complex, __pyx_k_is_complex, sizeof(__pyx_k_is_complex), 0, 0, 1, 1}, - {&__pyx_n_s_is_cubic, __pyx_k_is_cubic, sizeof(__pyx_k_is_cubic), 0, 0, 1, 1}, - {&__pyx_n_u_is_cubic, __pyx_k_is_cubic, sizeof(__pyx_k_is_cubic), 0, 1, 0, 1}, - {&__pyx_n_s_j, __pyx_k_j, sizeof(__pyx_k_j), 0, 0, 1, 1}, - {&__pyx_n_s_j_sol_count, __pyx_k_j_sol_count, sizeof(__pyx_k_j_sol_count), 0, 0, 1, 1}, - {&__pyx_n_s_j_sol_error, __pyx_k_j_sol_error, sizeof(__pyx_k_j_sol_error), 0, 0, 1, 1}, - {&__pyx_n_s_k, __pyx_k_k, sizeof(__pyx_k_k), 0, 0, 1, 1}, - {&__pyx_n_s_main, __pyx_k_main, sizeof(__pyx_k_main), 0, 0, 1, 1}, - {&__pyx_n_u_main, __pyx_k_main, sizeof(__pyx_k_main), 0, 1, 0, 1}, - {&__pyx_n_s_main_2, __pyx_k_main_2, sizeof(__pyx_k_main_2), 0, 0, 1, 1}, - {&__pyx_n_s_math, __pyx_k_math, sizeof(__pyx_k_math), 0, 0, 1, 1}, - {&__pyx_n_s_max_err, __pyx_k_max_err, 
sizeof(__pyx_k_max_err), 0, 0, 1, 1}, - {&__pyx_n_s_name, __pyx_k_name, sizeof(__pyx_k_name), 0, 0, 1, 1}, - {&__pyx_n_s_namedtuple, __pyx_k_namedtuple, sizeof(__pyx_k_namedtuple), 0, 0, 1, 1}, - {&__pyx_n_s_num_offcurves, __pyx_k_num_offcurves, sizeof(__pyx_k_num_offcurves), 0, 0, 1, 1}, - {&__pyx_n_s_num_points, __pyx_k_num_points, sizeof(__pyx_k_num_points), 0, 0, 1, 1}, - {&__pyx_n_u_num_points, __pyx_k_num_points, sizeof(__pyx_k_num_points), 0, 1, 0, 1}, - {&__pyx_n_s_off1, __pyx_k_off1, sizeof(__pyx_k_off1), 0, 0, 1, 1}, - {&__pyx_n_s_off2, __pyx_k_off2, sizeof(__pyx_k_off2), 0, 0, 1, 1}, - {&__pyx_n_s_on, __pyx_k_on, sizeof(__pyx_k_on), 0, 0, 1, 1}, - {&__pyx_n_s_orig, __pyx_k_orig, sizeof(__pyx_k_orig), 0, 0, 1, 1}, - {&__pyx_n_s_p, __pyx_k_p, sizeof(__pyx_k_p), 0, 0, 1, 1}, - {&__pyx_n_s_p0, __pyx_k_p0, sizeof(__pyx_k_p0), 0, 0, 1, 1}, - {&__pyx_n_s_p1, __pyx_k_p1, sizeof(__pyx_k_p1), 0, 0, 1, 1}, - {&__pyx_n_s_p1_2_3, __pyx_k_p1_2_3, sizeof(__pyx_k_p1_2_3), 0, 0, 1, 1}, - {&__pyx_n_s_p2, __pyx_k_p2, sizeof(__pyx_k_p2), 0, 0, 1, 1}, - {&__pyx_n_s_p3, __pyx_k_p3, sizeof(__pyx_k_p3), 0, 0, 1, 1}, - {&__pyx_n_s_pop, __pyx_k_pop, sizeof(__pyx_k_pop), 0, 0, 1, 1}, - {&__pyx_n_s_print, __pyx_k_print, sizeof(__pyx_k_print), 0, 0, 1, 1}, - {&__pyx_n_s_q, __pyx_k_q, sizeof(__pyx_k_q), 0, 0, 1, 1}, - {&__pyx_n_s_qq, __pyx_k_qq, sizeof(__pyx_k_qq), 0, 0, 1, 1}, - {&__pyx_kp_u_quadratic_spline_requires_at_lea, __pyx_k_quadratic_spline_requires_at_lea, sizeof(__pyx_k_quadratic_spline_requires_at_lea), 0, 1, 0, 0}, - {&__pyx_n_s_quadratic_to_curves, __pyx_k_quadratic_to_curves, sizeof(__pyx_k_quadratic_to_curves), 0, 0, 1, 1}, - {&__pyx_n_u_quadratic_to_curves, __pyx_k_quadratic_to_curves, sizeof(__pyx_k_quadratic_to_curves), 0, 1, 0, 1}, - {&__pyx_n_s_quadratic_to_curves_locals_genex, __pyx_k_quadratic_to_curves_locals_genex, sizeof(__pyx_k_quadratic_to_curves_locals_genex), 0, 0, 1, 1}, - {&__pyx_n_s_quadratics, __pyx_k_quadratics, sizeof(__pyx_k_quadratics), 0, 0, 1, 1}, - {&__pyx_n_s_quads, __pyx_k_quads, sizeof(__pyx_k_quads), 0, 0, 1, 1}, - {&__pyx_n_s_range, __pyx_k_range, sizeof(__pyx_k_range), 0, 0, 1, 1}, - {&__pyx_n_s_real, __pyx_k_real, sizeof(__pyx_k_real), 0, 0, 1, 1}, - {&__pyx_n_s_reconst, __pyx_k_reconst, sizeof(__pyx_k_reconst), 0, 0, 1, 1}, - {&__pyx_n_s_reconstruct_tolerance, __pyx_k_reconstruct_tolerance, sizeof(__pyx_k_reconstruct_tolerance), 0, 0, 1, 1}, - {&__pyx_n_s_reconstructed, __pyx_k_reconstructed, sizeof(__pyx_k_reconstructed), 0, 0, 1, 1}, - {&__pyx_n_s_reconstructed_iter, __pyx_k_reconstructed_iter, sizeof(__pyx_k_reconstructed_iter), 0, 0, 1, 1}, - {&__pyx_n_s_return, __pyx_k_return, sizeof(__pyx_k_return), 0, 0, 1, 1}, - {&__pyx_n_s_reversed, __pyx_k_reversed, sizeof(__pyx_k_reversed), 0, 0, 1, 1}, - {&__pyx_n_s_send, __pyx_k_send, sizeof(__pyx_k_send), 0, 0, 1, 1}, - {&__pyx_n_s_sols, __pyx_k_sols, sizeof(__pyx_k_sols), 0, 0, 1, 1}, - {&__pyx_n_s_spline_to_curves, __pyx_k_spline_to_curves, sizeof(__pyx_k_spline_to_curves), 0, 0, 1, 1}, - {&__pyx_n_s_spline_to_curves_locals_genexpr, __pyx_k_spline_to_curves_locals_genexpr, sizeof(__pyx_k_spline_to_curves_locals_genexpr), 0, 0, 1, 1}, - {&__pyx_n_s_splitCubicAtTC, __pyx_k_splitCubicAtTC, sizeof(__pyx_k_splitCubicAtTC), 0, 0, 1, 1}, - {&__pyx_n_s_splits, __pyx_k_splits, sizeof(__pyx_k_splits), 0, 0, 1, 1}, - {&__pyx_n_s_start, __pyx_k_start, sizeof(__pyx_k_start), 0, 0, 1, 1}, - {&__pyx_n_s_start_index, __pyx_k_start_index, sizeof(__pyx_k_start_index), 0, 0, 1, 1}, - {&__pyx_n_u_start_index, __pyx_k_start_index, 
sizeof(__pyx_k_start_index), 0, 1, 0, 1}, - {&__pyx_n_s_test, __pyx_k_test, sizeof(__pyx_k_test), 0, 0, 1, 1}, - {&__pyx_n_s_this_count, __pyx_k_this_count, sizeof(__pyx_k_this_count), 0, 0, 1, 1}, - {&__pyx_n_s_this_sol_count, __pyx_k_this_sol_count, sizeof(__pyx_k_this_sol_count), 0, 0, 1, 1}, - {&__pyx_n_s_throw, __pyx_k_throw, sizeof(__pyx_k_throw), 0, 0, 1, 1}, - {&__pyx_n_s_tolerance, __pyx_k_tolerance, sizeof(__pyx_k_tolerance), 0, 0, 1, 1}, - {&__pyx_n_s_ts, __pyx_k_ts, sizeof(__pyx_k_ts), 0, 0, 1, 1}, - {&__pyx_n_s_typing, __pyx_k_typing, sizeof(__pyx_k_typing), 0, 0, 1, 1}, - {&__pyx_n_s_u, __pyx_k_u, sizeof(__pyx_k_u), 0, 0, 1, 1}, - {&__pyx_n_s_v, __pyx_k_v, sizeof(__pyx_k_v), 0, 0, 1, 1}, - {&__pyx_n_s_x, __pyx_k_x, sizeof(__pyx_k_x), 0, 0, 1, 1}, - {&__pyx_n_s_y, __pyx_k_y, sizeof(__pyx_k_y), 0, 0, 1, 1}, - {&__pyx_n_s_zip, __pyx_k_zip, sizeof(__pyx_k_zip), 0, 0, 1, 1}, - {0, 0, 0, 0, 0, 0, 0} -}; -static CYTHON_SMALL_CODE int __Pyx_InitCachedBuiltins(void) { - __pyx_builtin_AttributeError = __Pyx_GetBuiltinName(__pyx_n_s_AttributeError); if (!__pyx_builtin_AttributeError) __PYX_ERR(0, 23, __pyx_L1_error) - __pyx_builtin_ImportError = __Pyx_GetBuiltinName(__pyx_n_s_ImportError); if (!__pyx_builtin_ImportError) __PYX_ERR(0, 23, __pyx_L1_error) - __pyx_builtin_range = __Pyx_GetBuiltinName(__pyx_n_s_range); if (!__pyx_builtin_range) __PYX_ERR(0, 127, __pyx_L1_error) - __pyx_builtin_ZeroDivisionError = __Pyx_GetBuiltinName(__pyx_n_s_ZeroDivisionError); if (!__pyx_builtin_ZeroDivisionError) __PYX_ERR(0, 320, __pyx_L1_error) - __pyx_builtin_enumerate = __Pyx_GetBuiltinName(__pyx_n_s_enumerate); if (!__pyx_builtin_enumerate) __PYX_ERR(0, 329, __pyx_L1_error) - __pyx_builtin_reversed = __Pyx_GetBuiltinName(__pyx_n_s_reversed); if (!__pyx_builtin_reversed) __PYX_ERR(0, 378, __pyx_L1_error) - __pyx_builtin_zip = __Pyx_GetBuiltinName(__pyx_n_s_zip); if (!__pyx_builtin_zip) __PYX_ERR(0, 378, __pyx_L1_error) - __pyx_builtin_print = __Pyx_GetBuiltinName(__pyx_n_s_print); if (!__pyx_builtin_print) __PYX_ERR(0, 397, __pyx_L1_error) - return 0; - __pyx_L1_error:; - return -1; -} - -static CYTHON_SMALL_CODE int __Pyx_InitCachedConstants(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_InitCachedConstants", 0); - - /* "fontTools/qu2cu/qu2cu.py":229 - * costs.append(cost) - * costs.append(cost) - * qq = add_implicit_on_curves(p)[1:] # <<<<<<<<<<<<<< - * costs.pop() - * q.extend(qq) - */ - __pyx_slice_ = PySlice_New(__pyx_int_1, Py_None, Py_None); if (unlikely(!__pyx_slice_)) __PYX_ERR(0, 229, __pyx_L1_error) - __Pyx_GOTREF(__pyx_slice_); - __Pyx_GIVEREF(__pyx_slice_); - - /* "fontTools/qu2cu/qu2cu.py":296 - * # Dynamic-Programming to find the solution with fewest number of - * # cubic curves, and within those the one with smallest error. 
- * sols = [Solution(0, 0, 0, False)] # <<<<<<<<<<<<<< - * impossible = Solution(len(elevated_quadratics) * 3 + 1, 0, 1, False) - * start = 0 - */ - __pyx_tuple__2 = PyTuple_Pack(4, __pyx_int_0, __pyx_int_0, __pyx_int_0, Py_False); if (unlikely(!__pyx_tuple__2)) __PYX_ERR(0, 296, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__2); - __Pyx_GIVEREF(__pyx_tuple__2); - - /* "fontTools/qu2cu/qu2cu.py":91 - * p1_2_3=cython.complex, - * ) - * def elevate_quadratic(p0, p1, p2): # <<<<<<<<<<<<<< - * """Given a quadratic bezier curve, return its degree-elevated cubic.""" - * - */ - __pyx_tuple__3 = PyTuple_Pack(4, __pyx_n_s_p0, __pyx_n_s_p1, __pyx_n_s_p2, __pyx_n_s_p1_2_3); if (unlikely(!__pyx_tuple__3)) __PYX_ERR(0, 91, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__3); - __Pyx_GIVEREF(__pyx_tuple__3); - __pyx_codeobj__4 = (PyObject*)__Pyx_PyCode_New(3, 0, 4, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__3, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_qu2cu_qu2cu_py, __pyx_n_s_elevate_quadratic, 91, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__4)) __PYX_ERR(0, 91, __pyx_L1_error) - - /* "fontTools/qu2cu/qu2cu.py":165 - * on=cython.complex, - * ) - * def add_implicit_on_curves(p): # <<<<<<<<<<<<<< - * q = list(p) - * count = 0 - */ - __pyx_tuple__5 = PyTuple_Pack(8, __pyx_n_s_p, __pyx_n_s_count, __pyx_n_s_num_offcurves, __pyx_n_s_i, __pyx_n_s_off1, __pyx_n_s_off2, __pyx_n_s_on, __pyx_n_s_q); if (unlikely(!__pyx_tuple__5)) __PYX_ERR(0, 165, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__5); - __Pyx_GIVEREF(__pyx_tuple__5); - __pyx_codeobj__6 = (PyObject*)__Pyx_PyCode_New(1, 0, 8, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__5, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_qu2cu_qu2cu_py, __pyx_n_s_add_implicit_on_curves, 165, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__6)) __PYX_ERR(0, 165, __pyx_L1_error) - - /* "fontTools/qu2cu/qu2cu.py":185 - * is_complex=cython.int, - * ) - * def quadratic_to_curves( # <<<<<<<<<<<<<< - * quads: List[List[Point]], - * max_err: float = 0.5, - */ - __pyx_tuple__7 = PyTuple_Pack(17, __pyx_n_s_quads, __pyx_n_s_max_err, __pyx_n_s_all_cubic, __pyx_n_s_cost, __pyx_n_s_is_complex, __pyx_n_s_q, __pyx_n_s_costs, __pyx_n_s_p, __pyx_n_s_i, __pyx_n_s_qq, __pyx_n_s_curves, __pyx_n_s_p, __pyx_n_s_x, __pyx_n_s_y, __pyx_n_s_curve, __pyx_n_s_genexpr, __pyx_n_s_genexpr); if (unlikely(!__pyx_tuple__7)) __PYX_ERR(0, 185, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__7); - __Pyx_GIVEREF(__pyx_tuple__7); - __pyx_codeobj__8 = (PyObject*)__Pyx_PyCode_New(3, 0, 17, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__7, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_qu2cu_qu2cu_py, __pyx_n_s_quadratic_to_curves, 185, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__8)) __PYX_ERR(0, 185, __pyx_L1_error) - - /* "fontTools/qu2cu/qu2cu.py":268 - * u=cython.complex, - * ) - * def spline_to_curves(q, costs, tolerance=0.5, all_cubic=False): # <<<<<<<<<<<<<< - * """ - * q: quadratic spline with alternating on-curve / off-curve points. 
- */ - __pyx_tuple__9 = PyTuple_Pack(42, __pyx_n_s_q, __pyx_n_s_costs, __pyx_n_s_tolerance, __pyx_n_s_all_cubic, __pyx_n_s_i, __pyx_n_s_j, __pyx_n_s_k, __pyx_n_s_start, __pyx_n_s_i_sol_count, __pyx_n_s_j_sol_count, __pyx_n_s_this_sol_count, __pyx_n_s_err, __pyx_n_s_error, __pyx_n_s_i_sol_error, __pyx_n_s_j_sol_error, __pyx_n_s_is_cubic, __pyx_n_s_count, __pyx_n_s_p0, __pyx_n_s_p1, __pyx_n_s_p2, __pyx_n_s_p3, __pyx_n_s_v, __pyx_n_s_u, __pyx_n_s_elevated_quadratics, __pyx_n_s_forced, __pyx_n_s_sols, __pyx_n_s_impossible, __pyx_n_s_best_sol, __pyx_n_s_this_count, __pyx_n_s_i_sol, __pyx_n_s_curve, __pyx_n_s_ts, __pyx_n_s_reconstructed_iter, __pyx_n_s_reconstructed, __pyx_n_s_reconst, __pyx_n_s_orig, __pyx_n_s_splits, __pyx_n_s_cubic, __pyx_n_s_curves, __pyx_n_s_i, __pyx_n_s_genexpr, __pyx_n_s_genexpr); if (unlikely(!__pyx_tuple__9)) __PYX_ERR(0, 268, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__9); - __Pyx_GIVEREF(__pyx_tuple__9); - __pyx_codeobj__10 = (PyObject*)__Pyx_PyCode_New(4, 0, 42, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__9, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_qu2cu_qu2cu_py, __pyx_n_s_spline_to_curves, 268, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__10)) __PYX_ERR(0, 268, __pyx_L1_error) - - /* "fontTools/qu2cu/qu2cu.py":389 - * - * - * def main(): # <<<<<<<<<<<<<< - * from fontTools.cu2qu.benchmark import generate_curve - * from fontTools.cu2qu import curve_to_quadratic - */ - __pyx_tuple__11 = PyTuple_Pack(7, __pyx_n_s_generate_curve, __pyx_n_s_curve_to_quadratic, __pyx_n_s_tolerance, __pyx_n_s_reconstruct_tolerance, __pyx_n_s_curve, __pyx_n_s_quadratics, __pyx_n_s_curves); if (unlikely(!__pyx_tuple__11)) __PYX_ERR(0, 389, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__11); - __Pyx_GIVEREF(__pyx_tuple__11); - __pyx_codeobj__12 = (PyObject*)__Pyx_PyCode_New(0, 0, 7, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__11, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_qu2cu_qu2cu_py, __pyx_n_s_main_2, 389, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__12)) __PYX_ERR(0, 389, __pyx_L1_error) - __Pyx_RefNannyFinishContext(); - return 0; - __pyx_L1_error:; - __Pyx_RefNannyFinishContext(); - return -1; -} - -static CYTHON_SMALL_CODE int __Pyx_InitGlobals(void) { - /* AssertionsEnabled.init */ - __Pyx_init_assertions_enabled(); - -if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1, __pyx_L1_error) - - __pyx_umethod_PyList_Type_pop.type = (PyObject*)&PyList_Type; - if (__Pyx_InitStrings(__pyx_string_tab) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_0 = PyInt_FromLong(0); if (unlikely(!__pyx_int_0)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_1 = PyInt_FromLong(1); if (unlikely(!__pyx_int_1)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_3 = PyInt_FromLong(3); if (unlikely(!__pyx_int_3)) __PYX_ERR(0, 1, __pyx_L1_error) - return 0; - __pyx_L1_error:; - return -1; -} - -static CYTHON_SMALL_CODE int __Pyx_modinit_global_init_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_variable_export_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_function_export_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_type_init_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_type_import_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_variable_import_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_function_import_code(void); /*proto*/ - -static int 
__Pyx_modinit_global_init_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_global_init_code", 0); - /*--- Global init code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_variable_export_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_variable_export_code", 0); - /*--- Variable export code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_function_export_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_function_export_code", 0); - /*--- Function export code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_type_init_code(void) { - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__Pyx_modinit_type_init_code", 0); - /*--- Type init code ---*/ - if (PyType_Ready(&__pyx_type_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves) < 0) __PYX_ERR(0, 185, __pyx_L1_error) - #if PY_VERSION_HEX < 0x030800B1 - __pyx_type_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves.tp_print = 0; - #endif - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_type_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves.tp_dictoffset && __pyx_type_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves.tp_getattro == PyObject_GenericGetAttr)) { - __pyx_type_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves.tp_getattro = __Pyx_PyObject_GenericGetAttrNoDict; - } - __pyx_ptype_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves = &__pyx_type_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct__quadratic_to_curves; - if (PyType_Ready(&__pyx_type_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_1_genexpr) < 0) __PYX_ERR(0, 238, __pyx_L1_error) - #if PY_VERSION_HEX < 0x030800B1 - __pyx_type_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_1_genexpr.tp_print = 0; - #endif - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_type_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_1_genexpr.tp_dictoffset && __pyx_type_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_1_genexpr.tp_getattro == PyObject_GenericGetAttr)) { - __pyx_type_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_1_genexpr.tp_getattro = __Pyx_PyObject_GenericGetAttrNoDict; - } - __pyx_ptype_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_1_genexpr = &__pyx_type_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_1_genexpr; - if (PyType_Ready(&__pyx_type_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves) < 0) __PYX_ERR(0, 268, __pyx_L1_error) - #if PY_VERSION_HEX < 0x030800B1 - __pyx_type_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves.tp_print = 0; - #endif - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_type_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves.tp_dictoffset && __pyx_type_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves.tp_getattro == PyObject_GenericGetAttr)) { - __pyx_type_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves.tp_getattro = __Pyx_PyObject_GenericGetAttrNoDict; - } - __pyx_ptype_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves = &__pyx_type_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_2_spline_to_curves; - if (PyType_Ready(&__pyx_type_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_3_genexpr) < 0) __PYX_ERR(0, 343, __pyx_L1_error) - #if PY_VERSION_HEX < 0x030800B1 
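- /* (The __pyx_scope_struct_*_genexpr types readied here back closures:
- * generator expressions inside quadratic_to_curves() and spline_to_curves()
- * capture enclosing locals, so Cython moves those locals into a
- * heap-allocated scope object.  A hypothetical Python shape that forces
- * such a scope, not taken from the original source:
- *
- *     def spline_to_curves(q, costs, tolerance=0.5, all_cubic=False):
- *         ...
- *         total = sum(e for e in errors)  # "errors" escapes into the genexpr
- * )
- */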
- __pyx_type_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_3_genexpr.tp_print = 0; - #endif - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_type_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_3_genexpr.tp_dictoffset && __pyx_type_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_3_genexpr.tp_getattro == PyObject_GenericGetAttr)) { - __pyx_type_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_3_genexpr.tp_getattro = __Pyx_PyObject_GenericGetAttrNoDict; - } - __pyx_ptype_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_3_genexpr = &__pyx_type_9fontTools_5qu2cu_5qu2cu___pyx_scope_struct_3_genexpr; - __Pyx_RefNannyFinishContext(); - return 0; - __pyx_L1_error:; - __Pyx_RefNannyFinishContext(); - return -1; -} - -static int __Pyx_modinit_type_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_type_import_code", 0); - /*--- Type import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_variable_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_variable_import_code", 0); - /*--- Variable import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_function_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_function_import_code", 0); - /*--- Function import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - - -#ifndef CYTHON_NO_PYINIT_EXPORT -#define __Pyx_PyMODINIT_FUNC PyMODINIT_FUNC -#elif PY_MAJOR_VERSION < 3 -#ifdef __cplusplus -#define __Pyx_PyMODINIT_FUNC extern "C" void -#else -#define __Pyx_PyMODINIT_FUNC void -#endif -#else -#ifdef __cplusplus -#define __Pyx_PyMODINIT_FUNC extern "C" PyObject * -#else -#define __Pyx_PyMODINIT_FUNC PyObject * -#endif -#endif - - -#if PY_MAJOR_VERSION < 3 -__Pyx_PyMODINIT_FUNC initqu2cu(void) CYTHON_SMALL_CODE; /*proto*/ -__Pyx_PyMODINIT_FUNC initqu2cu(void) -#else -__Pyx_PyMODINIT_FUNC PyInit_qu2cu(void) CYTHON_SMALL_CODE; /*proto*/ -__Pyx_PyMODINIT_FUNC PyInit_qu2cu(void) -#if CYTHON_PEP489_MULTI_PHASE_INIT -{ - return PyModuleDef_Init(&__pyx_moduledef); -} -static CYTHON_SMALL_CODE int __Pyx_check_single_interpreter(void) { - #if PY_VERSION_HEX >= 0x030700A1 - static PY_INT64_T main_interpreter_id = -1; - PY_INT64_T current_id = PyInterpreterState_GetID(PyThreadState_Get()->interp); - if (main_interpreter_id == -1) { - main_interpreter_id = current_id; - return (unlikely(current_id == -1)) ? 
-1 : 0; - } else if (unlikely(main_interpreter_id != current_id)) - #else - static PyInterpreterState *main_interpreter = NULL; - PyInterpreterState *current_interpreter = PyThreadState_Get()->interp; - if (!main_interpreter) { - main_interpreter = current_interpreter; - } else if (unlikely(main_interpreter != current_interpreter)) - #endif - { - PyErr_SetString( - PyExc_ImportError, - "Interpreter change detected - this module can only be loaded into one interpreter per process."); - return -1; - } - return 0; -} -static CYTHON_SMALL_CODE int __Pyx_copy_spec_to_module(PyObject *spec, PyObject *moddict, const char* from_name, const char* to_name, int allow_none) { - PyObject *value = PyObject_GetAttrString(spec, from_name); - int result = 0; - if (likely(value)) { - if (allow_none || value != Py_None) { - result = PyDict_SetItemString(moddict, to_name, value); - } - Py_DECREF(value); - } else if (PyErr_ExceptionMatches(PyExc_AttributeError)) { - PyErr_Clear(); - } else { - result = -1; - } - return result; -} -static CYTHON_SMALL_CODE PyObject* __pyx_pymod_create(PyObject *spec, CYTHON_UNUSED PyModuleDef *def) { - PyObject *module = NULL, *moddict, *modname; - if (__Pyx_check_single_interpreter()) - return NULL; - if (__pyx_m) - return __Pyx_NewRef(__pyx_m); - modname = PyObject_GetAttrString(spec, "name"); - if (unlikely(!modname)) goto bad; - module = PyModule_NewObject(modname); - Py_DECREF(modname); - if (unlikely(!module)) goto bad; - moddict = PyModule_GetDict(module); - if (unlikely(!moddict)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "loader", "__loader__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "origin", "__file__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "parent", "__package__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "submodule_search_locations", "__path__", 0) < 0)) goto bad; - return module; -bad: - Py_XDECREF(module); - return NULL; -} - - -static CYTHON_SMALL_CODE int __pyx_pymod_exec_qu2cu(PyObject *__pyx_pyinit_module) -#endif -#endif -{ - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - PyObject *__pyx_t_9 = NULL; - PyObject *__pyx_t_10 = NULL; - int __pyx_t_11; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannyDeclarations - #if CYTHON_PEP489_MULTI_PHASE_INIT - if (__pyx_m) { - if (__pyx_m == __pyx_pyinit_module) return 0; - PyErr_SetString(PyExc_RuntimeError, "Module 'qu2cu' has already been imported. 
Re-initialisation is not supported."); - return -1; - } - #elif PY_MAJOR_VERSION >= 3 - if (__pyx_m) return __Pyx_NewRef(__pyx_m); - #endif - #if CYTHON_REFNANNY -__Pyx_RefNanny = __Pyx_RefNannyImportAPI("refnanny"); -if (!__Pyx_RefNanny) { - PyErr_Clear(); - __Pyx_RefNanny = __Pyx_RefNannyImportAPI("Cython.Runtime.refnanny"); - if (!__Pyx_RefNanny) - Py_FatalError("failed to import 'refnanny' module"); -} -#endif - __Pyx_RefNannySetupContext("__Pyx_PyMODINIT_FUNC PyInit_qu2cu(void)", 0); - if (__Pyx_check_binary_version() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #ifdef __Pxy_PyFrame_Initialize_Offsets - __Pxy_PyFrame_Initialize_Offsets(); - #endif - __pyx_empty_tuple = PyTuple_New(0); if (unlikely(!__pyx_empty_tuple)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_empty_bytes = PyBytes_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_bytes)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_empty_unicode = PyUnicode_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_unicode)) __PYX_ERR(0, 1, __pyx_L1_error) - #ifdef __Pyx_CyFunction_USED - if (__pyx_CyFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_FusedFunction_USED - if (__pyx_FusedFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_Coroutine_USED - if (__pyx_Coroutine_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_Generator_USED - if (__pyx_Generator_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_AsyncGen_USED - if (__pyx_AsyncGen_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_StopAsyncIteration_USED - if (__pyx_StopAsyncIteration_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - /*--- Library function declarations ---*/ - /*--- Threads initialization code ---*/ - #if defined(WITH_THREAD) && PY_VERSION_HEX < 0x030700F0 && defined(__PYX_FORCE_INIT_THREADS) && __PYX_FORCE_INIT_THREADS - PyEval_InitThreads(); - #endif - /*--- Module creation code ---*/ - #if CYTHON_PEP489_MULTI_PHASE_INIT - __pyx_m = __pyx_pyinit_module; - Py_INCREF(__pyx_m); - #else - #if PY_MAJOR_VERSION < 3 - __pyx_m = Py_InitModule4("qu2cu", __pyx_methods, 0, 0, PYTHON_API_VERSION); Py_XINCREF(__pyx_m); - #else - __pyx_m = PyModule_Create(&__pyx_moduledef); - #endif - if (unlikely(!__pyx_m)) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - __pyx_d = PyModule_GetDict(__pyx_m); if (unlikely(!__pyx_d)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_d); - __pyx_b = PyImport_AddModule(__Pyx_BUILTIN_MODULE_NAME); if (unlikely(!__pyx_b)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_b); - __pyx_cython_runtime = PyImport_AddModule((char *) "cython_runtime"); if (unlikely(!__pyx_cython_runtime)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_cython_runtime); - if (PyObject_SetAttrString(__pyx_m, "__builtins__", __pyx_b) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - /*--- Initialize various global constants etc. 
---*/ - if (__Pyx_InitGlobals() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #if PY_MAJOR_VERSION < 3 && (__PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT) - if (__Pyx_init_sys_getdefaultencoding_params() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - if (__pyx_module_is_main_fontTools__qu2cu__qu2cu) { - if (PyObject_SetAttr(__pyx_m, __pyx_n_s_name, __pyx_n_s_main) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - } - #if PY_MAJOR_VERSION >= 3 - { - PyObject *modules = PyImport_GetModuleDict(); if (unlikely(!modules)) __PYX_ERR(0, 1, __pyx_L1_error) - if (!PyDict_GetItemString(modules, "fontTools.qu2cu.qu2cu")) { - if (unlikely(PyDict_SetItemString(modules, "fontTools.qu2cu.qu2cu", __pyx_m) < 0)) __PYX_ERR(0, 1, __pyx_L1_error) - } - } - #endif - /*--- Builtin init code ---*/ - if (__Pyx_InitCachedBuiltins() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - /*--- Constants init code ---*/ - if (__Pyx_InitCachedConstants() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - /*--- Global type/function init code ---*/ - (void)__Pyx_modinit_global_init_code(); - (void)__Pyx_modinit_variable_export_code(); - (void)__Pyx_modinit_function_export_code(); - if (unlikely(__Pyx_modinit_type_init_code() < 0)) __PYX_ERR(0, 1, __pyx_L1_error) - (void)__Pyx_modinit_type_import_code(); - (void)__Pyx_modinit_variable_import_code(); - (void)__Pyx_modinit_function_import_code(); - /*--- Execution code ---*/ - #if defined(__Pyx_Generator_USED) || defined(__Pyx_Coroutine_USED) - if (__Pyx_patch_abc() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - - /* "fontTools/qu2cu/qu2cu.py":19 - * # limitations under the License. - * - * try: # <<<<<<<<<<<<<< - * import cython - * - */ - { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ExceptionSave(&__pyx_t_1, &__pyx_t_2, &__pyx_t_3); - __Pyx_XGOTREF(__pyx_t_1); - __Pyx_XGOTREF(__pyx_t_2); - __Pyx_XGOTREF(__pyx_t_3); - /*try:*/ { - - /* "fontTools/qu2cu/qu2cu.py":22 - * import cython - * - * COMPILED = cython.compiled # <<<<<<<<<<<<<< - * except (AttributeError, ImportError): - * # if cython not installed, use mock module with no-op decorators and types - */ - if (PyDict_SetItem(__pyx_d, __pyx_n_s_COMPILED, Py_True) < 0) __PYX_ERR(0, 22, __pyx_L2_error) - - /* "fontTools/qu2cu/qu2cu.py":19 - * # limitations under the License. 
- * - * try: # <<<<<<<<<<<<<< - * import cython - * - */ - } - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - goto __pyx_L7_try_end; - __pyx_L2_error:; - - /* "fontTools/qu2cu/qu2cu.py":23 - * - * COMPILED = cython.compiled - * except (AttributeError, ImportError): # <<<<<<<<<<<<<< - * # if cython not installed, use mock module with no-op decorators and types - * from fontTools.misc import cython - */ - __pyx_t_4 = __Pyx_PyErr_ExceptionMatches(__pyx_builtin_AttributeError) || __Pyx_PyErr_ExceptionMatches(__pyx_builtin_ImportError); - if (__pyx_t_4) { - __Pyx_AddTraceback("fontTools.qu2cu.qu2cu", __pyx_clineno, __pyx_lineno, __pyx_filename); - if (__Pyx_GetException(&__pyx_t_5, &__pyx_t_6, &__pyx_t_7) < 0) __PYX_ERR(0, 23, __pyx_L4_except_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_GOTREF(__pyx_t_6); - __Pyx_GOTREF(__pyx_t_7); - - /* "fontTools/qu2cu/qu2cu.py":25 - * except (AttributeError, ImportError): - * # if cython not installed, use mock module with no-op decorators and types - * from fontTools.misc import cython # <<<<<<<<<<<<<< - * - * COMPILED = False - */ - __pyx_t_8 = PyList_New(1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 25, __pyx_L4_except_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_INCREF(__pyx_n_s_cython); - __Pyx_GIVEREF(__pyx_n_s_cython); - PyList_SET_ITEM(__pyx_t_8, 0, __pyx_n_s_cython); - __pyx_t_9 = __Pyx_Import(__pyx_n_s_fontTools_misc, __pyx_t_8, 0); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 25, __pyx_L4_except_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_8 = __Pyx_ImportFrom(__pyx_t_9, __pyx_n_s_cython); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 25, __pyx_L4_except_error) - __Pyx_GOTREF(__pyx_t_8); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_cython, __pyx_t_8) < 0) __PYX_ERR(0, 25, __pyx_L4_except_error) - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - - /* "fontTools/qu2cu/qu2cu.py":27 - * from fontTools.misc import cython - * - * COMPILED = False # <<<<<<<<<<<<<< - * - * from fontTools.misc.bezierTools import splitCubicAtTC - */ - if (PyDict_SetItem(__pyx_d, __pyx_n_s_COMPILED, Py_False) < 0) __PYX_ERR(0, 27, __pyx_L4_except_error) - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - goto __pyx_L3_exception_handled; - } - goto __pyx_L4_except_error; - __pyx_L4_except_error:; - - /* "fontTools/qu2cu/qu2cu.py":19 - * # limitations under the License. 
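- *
- *        (The complete guarded import compiled below, assembled from the
- *        source lines quoted at qu2cu.py:19-27:
- *
- *            try:
- *                import cython
- *
- *                COMPILED = cython.compiled
- *            except (AttributeError, ImportError):
- *                # if cython not installed, use mock module with no-op decorators and types
- *                from fontTools.misc import cython
- *
- *                COMPILED = False
- *        )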
- * - * try: # <<<<<<<<<<<<<< - * import cython - * - */ - __Pyx_XGIVEREF(__pyx_t_1); - __Pyx_XGIVEREF(__pyx_t_2); - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_ExceptionReset(__pyx_t_1, __pyx_t_2, __pyx_t_3); - goto __pyx_L1_error; - __pyx_L3_exception_handled:; - __Pyx_XGIVEREF(__pyx_t_1); - __Pyx_XGIVEREF(__pyx_t_2); - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_ExceptionReset(__pyx_t_1, __pyx_t_2, __pyx_t_3); - __pyx_L7_try_end:; - } - - /* "fontTools/qu2cu/qu2cu.py":29 - * COMPILED = False - * - * from fontTools.misc.bezierTools import splitCubicAtTC # <<<<<<<<<<<<<< - * from collections import namedtuple - * import math - */ - __pyx_t_7 = PyList_New(1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 29, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_INCREF(__pyx_n_s_splitCubicAtTC); - __Pyx_GIVEREF(__pyx_n_s_splitCubicAtTC); - PyList_SET_ITEM(__pyx_t_7, 0, __pyx_n_s_splitCubicAtTC); - __pyx_t_6 = __Pyx_Import(__pyx_n_s_fontTools_misc_bezierTools, __pyx_t_7, 0); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 29, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_7 = __Pyx_ImportFrom(__pyx_t_6, __pyx_n_s_splitCubicAtTC); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 29, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_splitCubicAtTC, __pyx_t_7) < 0) __PYX_ERR(0, 29, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - - /* "fontTools/qu2cu/qu2cu.py":30 - * - * from fontTools.misc.bezierTools import splitCubicAtTC - * from collections import namedtuple # <<<<<<<<<<<<<< - * import math - * from typing import ( - */ - __pyx_t_6 = PyList_New(1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 30, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_INCREF(__pyx_n_s_namedtuple); - __Pyx_GIVEREF(__pyx_n_s_namedtuple); - PyList_SET_ITEM(__pyx_t_6, 0, __pyx_n_s_namedtuple); - __pyx_t_7 = __Pyx_Import(__pyx_n_s_collections, __pyx_t_6, 0); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 30, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_6 = __Pyx_ImportFrom(__pyx_t_7, __pyx_n_s_namedtuple); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 30, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_namedtuple, __pyx_t_6) < 0) __PYX_ERR(0, 30, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - - /* "fontTools/qu2cu/qu2cu.py":31 - * from fontTools.misc.bezierTools import splitCubicAtTC - * from collections import namedtuple - * import math # <<<<<<<<<<<<<< - * from typing import ( - * List, - */ - __pyx_t_7 = __Pyx_Import(__pyx_n_s_math, 0, 0); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 31, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_math, __pyx_t_7) < 0) __PYX_ERR(0, 31, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - - /* "fontTools/qu2cu/qu2cu.py":33 - * import math - * from typing import ( - * List, # <<<<<<<<<<<<<< - * Tuple, - * Union, - */ - __pyx_t_7 = PyList_New(3); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 33, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_INCREF(__pyx_n_s_List); - __Pyx_GIVEREF(__pyx_n_s_List); - PyList_SET_ITEM(__pyx_t_7, 0, __pyx_n_s_List); - __Pyx_INCREF(__pyx_n_s_Tuple); - __Pyx_GIVEREF(__pyx_n_s_Tuple); - PyList_SET_ITEM(__pyx_t_7, 1, __pyx_n_s_Tuple); - __Pyx_INCREF(__pyx_n_s_Union); - __Pyx_GIVEREF(__pyx_n_s_Union); - PyList_SET_ITEM(__pyx_t_7, 2, __pyx_n_s_Union); - - /* "fontTools/qu2cu/qu2cu.py":32 - * from collections import 
namedtuple - * import math - * from typing import ( # <<<<<<<<<<<<<< - * List, - * Tuple, - */ - __pyx_t_6 = __Pyx_Import(__pyx_n_s_typing, __pyx_t_7, 0); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 32, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_7 = __Pyx_ImportFrom(__pyx_t_6, __pyx_n_s_List); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 32, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_List, __pyx_t_7) < 0) __PYX_ERR(0, 33, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_7 = __Pyx_ImportFrom(__pyx_t_6, __pyx_n_s_Tuple); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 32, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_Tuple, __pyx_t_7) < 0) __PYX_ERR(0, 34, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_7 = __Pyx_ImportFrom(__pyx_t_6, __pyx_n_s_Union); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 32, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_Union, __pyx_t_7) < 0) __PYX_ERR(0, 35, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - - /* "fontTools/qu2cu/qu2cu.py":39 - * - * - * __all__ = ["quadratic_to_curves"] # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_6 = PyList_New(1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 39, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_INCREF(__pyx_n_u_quadratic_to_curves); - __Pyx_GIVEREF(__pyx_n_u_quadratic_to_curves); - PyList_SET_ITEM(__pyx_t_6, 0, __pyx_n_u_quadratic_to_curves); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_all, __pyx_t_6) < 0) __PYX_ERR(0, 39, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - - /* "fontTools/qu2cu/qu2cu.py":91 - * p1_2_3=cython.complex, - * ) - * def elevate_quadratic(p0, p1, p2): # <<<<<<<<<<<<<< - * """Given a quadratic bezier curve, return its degree-elevated cubic.""" - * - */ - __pyx_t_6 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_5qu2cu_5qu2cu_1elevate_quadratic, 0, __pyx_n_s_elevate_quadratic, NULL, __pyx_n_s_fontTools_qu2cu_qu2cu, __pyx_d, ((PyObject *)__pyx_codeobj__4)); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 91, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_elevate_quadratic, __pyx_t_6) < 0) __PYX_ERR(0, 91, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - - /* "fontTools/qu2cu/qu2cu.py":165 - * on=cython.complex, - * ) - * def add_implicit_on_curves(p): # <<<<<<<<<<<<<< - * q = list(p) - * count = 0 - */ - __pyx_t_6 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_5qu2cu_5qu2cu_3add_implicit_on_curves, 0, __pyx_n_s_add_implicit_on_curves, NULL, __pyx_n_s_fontTools_qu2cu_qu2cu, __pyx_d, ((PyObject *)__pyx_codeobj__6)); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 165, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_add_implicit_on_curves, __pyx_t_6) < 0) __PYX_ERR(0, 165, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - - /* "fontTools/qu2cu/qu2cu.py":178 - * - * - * Point = Union[Tuple[float, float], complex] # <<<<<<<<<<<<<< - * - * - */ - __Pyx_GetModuleGlobalName(__pyx_t_6, __pyx_n_s_Union); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 178, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_GetModuleGlobalName(__pyx_t_7, __pyx_n_s_Tuple); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 178, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_5 = PyTuple_New(2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 178, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)(&PyFloat_Type))); - 
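- /* (Point accepts either form; e.g., with made-up values:
- *
- *     p1 = (13.0, 37.0)   # Tuple[float, float] form
- *     p2 = 13.0 + 37.0j   # equivalent complex form
- *
- * internally the math runs on cython.complex values throughout.)
- */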
__Pyx_GIVEREF(((PyObject *)(&PyFloat_Type))); - PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)(&PyFloat_Type))); - __Pyx_INCREF(((PyObject *)(&PyFloat_Type))); - __Pyx_GIVEREF(((PyObject *)(&PyFloat_Type))); - PyTuple_SET_ITEM(__pyx_t_5, 1, ((PyObject *)(&PyFloat_Type))); - __pyx_t_9 = __Pyx_PyObject_GetItem(__pyx_t_7, __pyx_t_5); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 178, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = PyTuple_New(2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 178, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_GIVEREF(__pyx_t_9); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_9); - __Pyx_INCREF(((PyObject *)(&PyComplex_Type))); - __Pyx_GIVEREF(((PyObject *)(&PyComplex_Type))); - PyTuple_SET_ITEM(__pyx_t_5, 1, ((PyObject *)(&PyComplex_Type))); - __pyx_t_9 = 0; - __pyx_t_9 = __Pyx_PyObject_GetItem(__pyx_t_6, __pyx_t_5); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 178, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (PyDict_SetItem(__pyx_d, __pyx_n_s_Point, __pyx_t_9) < 0) __PYX_ERR(0, 178, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - - /* "fontTools/qu2cu/qu2cu.py":187 - * def quadratic_to_curves( - * quads: List[List[Point]], - * max_err: float = 0.5, # <<<<<<<<<<<<<< - * all_cubic: bool = False, - * ) -> List[Tuple[Point, ...]]: - */ - __pyx_t_9 = PyFloat_FromDouble(((double)0.5)); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 187, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - - /* "fontTools/qu2cu/qu2cu.py":185 - * is_complex=cython.int, - * ) - * def quadratic_to_curves( # <<<<<<<<<<<<<< - * quads: List[List[Point]], - * max_err: float = 0.5, - */ - __pyx_t_5 = PyTuple_New(2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 185, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_GIVEREF(__pyx_t_9); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_9); - __Pyx_INCREF(((PyObject *)Py_False)); - __Pyx_GIVEREF(((PyObject *)Py_False)); - PyTuple_SET_ITEM(__pyx_t_5, 1, ((PyObject *)Py_False)); - __pyx_t_9 = 0; - __pyx_t_9 = __Pyx_PyDict_NewPresized(4); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 185, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - - /* "fontTools/qu2cu/qu2cu.py":186 - * ) - * def quadratic_to_curves( - * quads: List[List[Point]], # <<<<<<<<<<<<<< - * max_err: float = 0.5, - * all_cubic: bool = False, - */ - __Pyx_GetModuleGlobalName(__pyx_t_6, __pyx_n_s_List); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 186, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_GetModuleGlobalName(__pyx_t_7, __pyx_n_s_List); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 186, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_GetModuleGlobalName(__pyx_t_8, __pyx_n_s_Point); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 186, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_10 = __Pyx_PyObject_GetItem(__pyx_t_7, __pyx_t_8); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 186, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_8 = __Pyx_PyObject_GetItem(__pyx_t_6, __pyx_t_10); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 186, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - if (PyDict_SetItem(__pyx_t_9, __pyx_n_s_quads, __pyx_t_8) < 0) __PYX_ERR(0, 185, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - if (PyDict_SetItem(__pyx_t_9, __pyx_n_s_max_err, __pyx_n_u_float) < 0) __PYX_ERR(0, 185, 
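- /* (A typical call into the public entry point whose annotations are built
- * here; coordinates are made up and the defaults are spelled out:
- *
- *     from fontTools.qu2cu import quadratic_to_curves
- *
- *     curves = quadratic_to_curves([[(0, 0), (50, 100), (100, 0)]],
- *                                  max_err=0.5, all_cubic=False)
- * )
- */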
__pyx_L1_error) - - /* "fontTools/qu2cu/qu2cu.py":188 - * quads: List[List[Point]], - * max_err: float = 0.5, - * all_cubic: bool = False, # <<<<<<<<<<<<<< - * ) -> List[Tuple[Point, ...]]: - * """Converts a connecting list of quadratic splines to a list of quadratic - */ - if (PyDict_SetItem(__pyx_t_9, __pyx_n_s_all_cubic, ((PyObject*)&PyBool_Type)) < 0) __PYX_ERR(0, 185, __pyx_L1_error) - - /* "fontTools/qu2cu/qu2cu.py":189 - * max_err: float = 0.5, - * all_cubic: bool = False, - * ) -> List[Tuple[Point, ...]]: # <<<<<<<<<<<<<< - * """Converts a connecting list of quadratic splines to a list of quadratic - * and cubic curves. - */ - __Pyx_GetModuleGlobalName(__pyx_t_8, __pyx_n_s_List); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 189, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_GetModuleGlobalName(__pyx_t_10, __pyx_n_s_Tuple); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 189, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_GetModuleGlobalName(__pyx_t_6, __pyx_n_s_Point); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 189, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_7 = PyTuple_New(2); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 189, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_GIVEREF(__pyx_t_6); - PyTuple_SET_ITEM(__pyx_t_7, 0, __pyx_t_6); - __Pyx_INCREF(Py_Ellipsis); - __Pyx_GIVEREF(Py_Ellipsis); - PyTuple_SET_ITEM(__pyx_t_7, 1, Py_Ellipsis); - __pyx_t_6 = 0; - __pyx_t_6 = __Pyx_PyObject_GetItem(__pyx_t_10, __pyx_t_7); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 189, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_7 = __Pyx_PyObject_GetItem(__pyx_t_8, __pyx_t_6); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 189, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (PyDict_SetItem(__pyx_t_9, __pyx_n_s_return, __pyx_t_7) < 0) __PYX_ERR(0, 185, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - - /* "fontTools/qu2cu/qu2cu.py":185 - * is_complex=cython.int, - * ) - * def quadratic_to_curves( # <<<<<<<<<<<<<< - * quads: List[List[Point]], - * max_err: float = 0.5, - */ - __pyx_t_7 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_5qu2cu_5qu2cu_5quadratic_to_curves, 0, __pyx_n_s_quadratic_to_curves, NULL, __pyx_n_s_fontTools_qu2cu_qu2cu, __pyx_d, ((PyObject *)__pyx_codeobj__8)); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 185, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_CyFunction_SetDefaultsTuple(__pyx_t_7, __pyx_t_5); - __Pyx_CyFunction_SetAnnotationsDict(__pyx_t_7, __pyx_t_9); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - if (PyDict_SetItem(__pyx_d, __pyx_n_s_quadratic_to_curves, __pyx_t_7) < 0) __PYX_ERR(0, 185, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - - /* "fontTools/qu2cu/qu2cu.py":242 - * - * - * Solution = namedtuple("Solution", ["num_points", "error", "start_index", "is_cubic"]) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_GetModuleGlobalName(__pyx_t_7, __pyx_n_s_namedtuple); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 242, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_9 = PyList_New(4); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 242, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_INCREF(__pyx_n_u_num_points); - __Pyx_GIVEREF(__pyx_n_u_num_points); - PyList_SET_ITEM(__pyx_t_9, 0, __pyx_n_u_num_points); - __Pyx_INCREF(__pyx_n_u_error); - __Pyx_GIVEREF(__pyx_n_u_error); - PyList_SET_ITEM(__pyx_t_9, 1, __pyx_n_u_error); - __Pyx_INCREF(__pyx_n_u_start_index); - 
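- /* (Solution is the dynamic-programming record used by spline_to_curves;
- * e.g. the initialiser sols = [Solution(0, 0, 0, False)] quoted earlier
- * unpacks as
- *
- *     Solution(num_points=0, error=0, start_index=0, is_cubic=False)
- * )
- */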
__Pyx_GIVEREF(__pyx_n_u_start_index); - PyList_SET_ITEM(__pyx_t_9, 2, __pyx_n_u_start_index); - __Pyx_INCREF(__pyx_n_u_is_cubic); - __Pyx_GIVEREF(__pyx_n_u_is_cubic); - PyList_SET_ITEM(__pyx_t_9, 3, __pyx_n_u_is_cubic); - __pyx_t_5 = PyTuple_New(2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 242, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(__pyx_n_u_Solution); - __Pyx_GIVEREF(__pyx_n_u_Solution); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_n_u_Solution); - __Pyx_GIVEREF(__pyx_t_9); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_9); - __pyx_t_9 = 0; - __pyx_t_9 = __Pyx_PyObject_Call(__pyx_t_7, __pyx_t_5, NULL); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 242, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (PyDict_SetItem(__pyx_d, __pyx_n_s_Solution, __pyx_t_9) < 0) __PYX_ERR(0, 242, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - - /* "fontTools/qu2cu/qu2cu.py":268 - * u=cython.complex, - * ) - * def spline_to_curves(q, costs, tolerance=0.5, all_cubic=False): # <<<<<<<<<<<<<< - * """ - * q: quadratic spline with alternating on-curve / off-curve points. - */ - __pyx_t_9 = PyFloat_FromDouble(((double)0.5)); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 268, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_5 = __Pyx_PyBool_FromLong(((int)0)); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 268, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_7 = PyTuple_New(2); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 268, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_GIVEREF(__pyx_t_9); - PyTuple_SET_ITEM(__pyx_t_7, 0, __pyx_t_9); - __Pyx_GIVEREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_7, 1, __pyx_t_5); - __pyx_t_9 = 0; - __pyx_t_5 = 0; - __pyx_t_5 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_5qu2cu_5qu2cu_7spline_to_curves, 0, __pyx_n_s_spline_to_curves, NULL, __pyx_n_s_fontTools_qu2cu_qu2cu, __pyx_d, ((PyObject *)__pyx_codeobj__10)); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 268, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_CyFunction_SetDefaultsTuple(__pyx_t_5, __pyx_t_7); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - if (PyDict_SetItem(__pyx_d, __pyx_n_s_spline_to_curves, __pyx_t_5) < 0) __PYX_ERR(0, 268, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - - /* "fontTools/qu2cu/qu2cu.py":389 - * - * - * def main(): # <<<<<<<<<<<<<< - * from fontTools.cu2qu.benchmark import generate_curve - * from fontTools.cu2qu import curve_to_quadratic - */ - __pyx_t_5 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_5qu2cu_5qu2cu_9main, 0, __pyx_n_s_main_2, NULL, __pyx_n_s_fontTools_qu2cu_qu2cu, __pyx_d, ((PyObject *)__pyx_codeobj__12)); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 389, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_main_2, __pyx_t_5) < 0) __PYX_ERR(0, 389, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - - /* "fontTools/qu2cu/qu2cu.py":407 - * - * - * if __name__ == "__main__": # <<<<<<<<<<<<<< - * main() - */ - __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_name); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 407, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_11 = (__Pyx_PyUnicode_Equals(__pyx_t_5, __pyx_n_u_main, Py_EQ)); if (unlikely(__pyx_t_11 < 0)) __PYX_ERR(0, 407, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (__pyx_t_11) { - - /* "fontTools/qu2cu/qu2cu.py":408 - * - * if __name__ == "__main__": - * main() # <<<<<<<<<<<<<< - */ - __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_main_2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 408, __pyx_L1_error) - 
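- /* (A sketch of the round trip main() performs, per the imports quoted at
- * qu2cu.py:389; the tolerance values here are assumed, not taken from the
- * original:
- *
- *     from fontTools.cu2qu.benchmark import generate_curve
- *     from fontTools.cu2qu import curve_to_quadratic
- *
- *     curve = generate_curve()
- *     quadratics = curve_to_quadratic(curve, 0.05)      # cubic -> quadratic
- *     curves = quadratic_to_curves([quadratics], 0.05)  # quadratic -> cubic
- * )
- */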
__Pyx_GOTREF(__pyx_t_5); - __pyx_t_7 = __Pyx_PyObject_CallNoArg(__pyx_t_5); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 408, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - - /* "fontTools/qu2cu/qu2cu.py":407 - * - * - * if __name__ == "__main__": # <<<<<<<<<<<<<< - * main() - */ - } - - /* "fontTools/qu2cu/qu2cu.py":1 - * # cython: language_level=3 # <<<<<<<<<<<<<< - * # distutils: define_macros=CYTHON_TRACE_NOGIL=1 - * - */ - __pyx_t_7 = __Pyx_PyDict_NewPresized(0); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_test, __pyx_t_7) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - - /*--- Wrapped vars code ---*/ - - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_XDECREF(__pyx_t_9); - __Pyx_XDECREF(__pyx_t_10); - if (__pyx_m) { - if (__pyx_d) { - __Pyx_AddTraceback("init fontTools.qu2cu.qu2cu", __pyx_clineno, __pyx_lineno, __pyx_filename); - } - Py_CLEAR(__pyx_m); - } else if (!PyErr_Occurred()) { - PyErr_SetString(PyExc_ImportError, "init fontTools.qu2cu.qu2cu"); - } - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - #if CYTHON_PEP489_MULTI_PHASE_INIT - return (__pyx_m != NULL) ? 0 : -1; - #elif PY_MAJOR_VERSION >= 3 - return __pyx_m; - #else - return; - #endif -} - -/* --- Runtime support code --- */ -/* Refnanny */ -#if CYTHON_REFNANNY -static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname) { - PyObject *m = NULL, *p = NULL; - void *r = NULL; - m = PyImport_ImportModule(modname); - if (!m) goto end; - p = PyObject_GetAttrString(m, "RefNannyAPI"); - if (!p) goto end; - r = PyLong_AsVoidPtr(p); -end: - Py_XDECREF(p); - Py_XDECREF(m); - return (__Pyx_RefNannyAPIStruct *)r; -} -#endif - -/* PyObjectGetAttrStr */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name) { - PyTypeObject* tp = Py_TYPE(obj); - if (likely(tp->tp_getattro)) - return tp->tp_getattro(obj, attr_name); -#if PY_MAJOR_VERSION < 3 - if (likely(tp->tp_getattr)) - return tp->tp_getattr(obj, PyString_AS_STRING(attr_name)); -#endif - return PyObject_GetAttr(obj, attr_name); -} -#endif - -/* GetBuiltinName */ -static PyObject *__Pyx_GetBuiltinName(PyObject *name) { - PyObject* result = __Pyx_PyObject_GetAttrStr(__pyx_b, name); - if (unlikely(!result)) { - PyErr_Format(PyExc_NameError, -#if PY_MAJOR_VERSION >= 3 - "name '%U' is not defined", name); -#else - "name '%.200s' is not defined", PyString_AS_STRING(name)); -#endif - } - return result; -} - -/* RaiseArgTupleInvalid */ -static void __Pyx_RaiseArgtupleInvalid( - const char* func_name, - int exact, - Py_ssize_t num_min, - Py_ssize_t num_max, - Py_ssize_t num_found) -{ - Py_ssize_t num_expected; - const char *more_or_less; - if (num_found < num_min) { - num_expected = num_min; - more_or_less = "at least"; - } else { - num_expected = num_max; - more_or_less = "at most"; - } - if (exact) { - more_or_less = "exactly"; - } - PyErr_Format(PyExc_TypeError, - "%.200s() takes %.8s %" CYTHON_FORMAT_SSIZE_T "d positional argument%.1s (%" CYTHON_FORMAT_SSIZE_T "d given)", - func_name, more_or_less, num_expected, - (num_expected == 1) ? 
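- /* (Yields CPython-style messages; e.g. calling the compiled three-argument
- * elevate_quadratic() with only two arguments would raise roughly:
- *
- *     TypeError: elevate_quadratic() takes exactly 3 positional arguments (2 given)
- * )
- */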
"" : "s", num_found); -} - -/* RaiseDoubleKeywords */ -static void __Pyx_RaiseDoubleKeywordsError( - const char* func_name, - PyObject* kw_name) -{ - PyErr_Format(PyExc_TypeError, - #if PY_MAJOR_VERSION >= 3 - "%s() got multiple values for keyword argument '%U'", func_name, kw_name); - #else - "%s() got multiple values for keyword argument '%s'", func_name, - PyString_AsString(kw_name)); - #endif -} - -/* ParseKeywords */ -static int __Pyx_ParseOptionalKeywords( - PyObject *kwds, - PyObject **argnames[], - PyObject *kwds2, - PyObject *values[], - Py_ssize_t num_pos_args, - const char* function_name) -{ - PyObject *key = 0, *value = 0; - Py_ssize_t pos = 0; - PyObject*** name; - PyObject*** first_kw_arg = argnames + num_pos_args; - while (PyDict_Next(kwds, &pos, &key, &value)) { - name = first_kw_arg; - while (*name && (**name != key)) name++; - if (*name) { - values[name-argnames] = value; - continue; - } - name = first_kw_arg; - #if PY_MAJOR_VERSION < 3 - if (likely(PyString_Check(key))) { - while (*name) { - if ((CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**name) == PyString_GET_SIZE(key)) - && _PyString_Eq(**name, key)) { - values[name-argnames] = value; - break; - } - name++; - } - if (*name) continue; - else { - PyObject*** argname = argnames; - while (argname != first_kw_arg) { - if ((**argname == key) || ( - (CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**argname) == PyString_GET_SIZE(key)) - && _PyString_Eq(**argname, key))) { - goto arg_passed_twice; - } - argname++; - } - } - } else - #endif - if (likely(PyUnicode_Check(key))) { - while (*name) { - int cmp = (**name == key) ? 0 : - #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3 - (__Pyx_PyUnicode_GET_LENGTH(**name) != __Pyx_PyUnicode_GET_LENGTH(key)) ? 1 : - #endif - PyUnicode_Compare(**name, key); - if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad; - if (cmp == 0) { - values[name-argnames] = value; - break; - } - name++; - } - if (*name) continue; - else { - PyObject*** argname = argnames; - while (argname != first_kw_arg) { - int cmp = (**argname == key) ? 0 : - #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3 - (__Pyx_PyUnicode_GET_LENGTH(**argname) != __Pyx_PyUnicode_GET_LENGTH(key)) ? 
1 : - #endif - PyUnicode_Compare(**argname, key); - if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad; - if (cmp == 0) goto arg_passed_twice; - argname++; - } - } - } else - goto invalid_keyword_type; - if (kwds2) { - if (unlikely(PyDict_SetItem(kwds2, key, value))) goto bad; - } else { - goto invalid_keyword; - } - } - return 0; -arg_passed_twice: - __Pyx_RaiseDoubleKeywordsError(function_name, key); - goto bad; -invalid_keyword_type: - PyErr_Format(PyExc_TypeError, - "%.200s() keywords must be strings", function_name); - goto bad; -invalid_keyword: - PyErr_Format(PyExc_TypeError, - #if PY_MAJOR_VERSION < 3 - "%.200s() got an unexpected keyword argument '%.200s'", - function_name, PyString_AsString(key)); - #else - "%s() got an unexpected keyword argument '%U'", - function_name, key); - #endif -bad: - return -1; -} - -/* GetItemInt */ -static PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j) { - PyObject *r; - if (!j) return NULL; - r = PyObject_GetItem(o, j); - Py_DECREF(j); - return r; -} -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i, - CYTHON_NCP_UNUSED int wraparound, - CYTHON_NCP_UNUSED int boundscheck) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - Py_ssize_t wrapped_i = i; - if (wraparound & unlikely(i < 0)) { - wrapped_i += PyList_GET_SIZE(o); - } - if ((!boundscheck) || likely(__Pyx_is_valid_index(wrapped_i, PyList_GET_SIZE(o)))) { - PyObject *r = PyList_GET_ITEM(o, wrapped_i); - Py_INCREF(r); - return r; - } - return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); -#else - return PySequence_GetItem(o, i); -#endif -} -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i, - CYTHON_NCP_UNUSED int wraparound, - CYTHON_NCP_UNUSED int boundscheck) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - Py_ssize_t wrapped_i = i; - if (wraparound & unlikely(i < 0)) { - wrapped_i += PyTuple_GET_SIZE(o); - } - if ((!boundscheck) || likely(__Pyx_is_valid_index(wrapped_i, PyTuple_GET_SIZE(o)))) { - PyObject *r = PyTuple_GET_ITEM(o, wrapped_i); - Py_INCREF(r); - return r; - } - return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); -#else - return PySequence_GetItem(o, i); -#endif -} -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i, int is_list, - CYTHON_NCP_UNUSED int wraparound, - CYTHON_NCP_UNUSED int boundscheck) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS && CYTHON_USE_TYPE_SLOTS - if (is_list || PyList_CheckExact(o)) { - Py_ssize_t n = ((!wraparound) | likely(i >= 0)) ? i : i + PyList_GET_SIZE(o); - if ((!boundscheck) || (likely(__Pyx_is_valid_index(n, PyList_GET_SIZE(o))))) { - PyObject *r = PyList_GET_ITEM(o, n); - Py_INCREF(r); - return r; - } - } - else if (PyTuple_CheckExact(o)) { - Py_ssize_t n = ((!wraparound) | likely(i >= 0)) ? 
i : i + PyTuple_GET_SIZE(o); - if ((!boundscheck) || likely(__Pyx_is_valid_index(n, PyTuple_GET_SIZE(o)))) { - PyObject *r = PyTuple_GET_ITEM(o, n); - Py_INCREF(r); - return r; - } - } else { - PySequenceMethods *m = Py_TYPE(o)->tp_as_sequence; - if (likely(m && m->sq_item)) { - if (wraparound && unlikely(i < 0) && likely(m->sq_length)) { - Py_ssize_t l = m->sq_length(o); - if (likely(l >= 0)) { - i += l; - } else { - if (!PyErr_ExceptionMatches(PyExc_OverflowError)) - return NULL; - PyErr_Clear(); - } - } - return m->sq_item(o, i); - } - } -#else - if (is_list || PySequence_Check(o)) { - return PySequence_GetItem(o, i); - } -#endif - return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); -} - -/* py_abs */ -#if CYTHON_USE_PYLONG_INTERNALS -static PyObject *__Pyx_PyLong_AbsNeg(PyObject *n) { - if (likely(Py_SIZE(n) == -1)) { - return PyLong_FromLong(((PyLongObject*)n)->ob_digit[0]); - } -#if CYTHON_COMPILING_IN_CPYTHON - { - PyObject *copy = _PyLong_Copy((PyLongObject*)n); - if (likely(copy)) { - __Pyx_SET_SIZE(copy, -Py_SIZE(copy)); - } - return copy; - } -#else - return PyNumber_Negative(n); -#endif -} -#endif - -/* SliceTupleAndList */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE void __Pyx_crop_slice(Py_ssize_t* _start, Py_ssize_t* _stop, Py_ssize_t* _length) { - Py_ssize_t start = *_start, stop = *_stop, length = *_length; - if (start < 0) { - start += length; - if (start < 0) - start = 0; - } - if (stop < 0) - stop += length; - else if (stop > length) - stop = length; - *_length = stop - start; - *_start = start; - *_stop = stop; -} -static CYTHON_INLINE void __Pyx_copy_object_array(PyObject** CYTHON_RESTRICT src, PyObject** CYTHON_RESTRICT dest, Py_ssize_t length) { - PyObject *v; - Py_ssize_t i; - for (i = 0; i < length; i++) { - v = dest[i] = src[i]; - Py_INCREF(v); - } -} -static CYTHON_INLINE PyObject* __Pyx_PyList_GetSlice( - PyObject* src, Py_ssize_t start, Py_ssize_t stop) { - PyObject* dest; - Py_ssize_t length = PyList_GET_SIZE(src); - __Pyx_crop_slice(&start, &stop, &length); - if (unlikely(length <= 0)) - return PyList_New(0); - dest = PyList_New(length); - if (unlikely(!dest)) - return NULL; - __Pyx_copy_object_array( - ((PyListObject*)src)->ob_item + start, - ((PyListObject*)dest)->ob_item, - length); - return dest; -} -static CYTHON_INLINE PyObject* __Pyx_PyTuple_GetSlice( - PyObject* src, Py_ssize_t start, Py_ssize_t stop) { - PyObject* dest; - Py_ssize_t length = PyTuple_GET_SIZE(src); - __Pyx_crop_slice(&start, &stop, &length); - if (unlikely(length <= 0)) - return PyTuple_New(0); - dest = PyTuple_New(length); - if (unlikely(!dest)) - return NULL; - __Pyx_copy_object_array( - ((PyTupleObject*)src)->ob_item + start, - ((PyTupleObject*)dest)->ob_item, - length); - return dest; -} -#endif - -/* PyIntBinop */ -#if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_PyInt_SubtractCObj(PyObject *op1, PyObject *op2, CYTHON_UNUSED long intval, int inplace, int zerodivision_check) { - (void)inplace; - (void)zerodivision_check; - #if PY_MAJOR_VERSION < 3 - if (likely(PyInt_CheckExact(op2))) { - const long a = intval; - long x; - long b = PyInt_AS_LONG(op2); - x = (long)((unsigned long)a - b); - if (likely((x^a) >= 0 || (x^~b) >= 0)) - return PyInt_FromLong(x); - return PyLong_Type.tp_as_number->nb_subtract(op1, op2); - } - #endif - #if CYTHON_USE_PYLONG_INTERNALS - if (likely(PyLong_CheckExact(op2))) { - const long a = intval; - long b, x; -#ifdef HAVE_LONG_LONG - const PY_LONG_LONG lla = intval; - PY_LONG_LONG llb, llx; -#endif - const digit* digits = 
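- /* (CPython stores ints as arrays of 15- or 30-bit "digit" limbs; the
- * switch below unrolls the two-, three- and four-digit cases after the
- * single-digit fast path.  The limb width is visible from Python:
- *
- *     >>> import sys
- *     >>> sys.int_info.bits_per_digit
- *     30
- * )
- */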
((PyLongObject*)op2)->ob_digit; - const Py_ssize_t size = Py_SIZE(op2); - if (likely(__Pyx_sst_abs(size) <= 1)) { - b = likely(size) ? digits[0] : 0; - if (size == -1) b = -b; - } else { - switch (size) { - case -2: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - b = -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT) { - llb = -(PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - case 2: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - b = (long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT) { - llb = (PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - case -3: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - b = -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT) { - llb = -(PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - case 3: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - b = (long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT) { - llb = (PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - case -4: - if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - b = -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT) { - llb = -(PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - case 4: - if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - b = (long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT) { - llb = (PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - default: return PyLong_Type.tp_as_number->nb_subtract(op1, op2); - } - } - x = a - b; - return PyLong_FromLong(x); -#ifdef HAVE_LONG_LONG - long_long: - llx = lla - llb; - return PyLong_FromLongLong(llx); -#endif - - - } - #endif - if 
(PyFloat_CheckExact(op2)) { - const long a = intval; - double b = PyFloat_AS_DOUBLE(op2); - double result; - PyFPE_START_PROTECT("subtract", return NULL) - result = ((double)a) - (double)b; - PyFPE_END_PROTECT(result) - return PyFloat_FromDouble(result); - } - return (inplace ? PyNumber_InPlaceSubtract : PyNumber_Subtract)(op1, op2); -} -#endif - -/* None */ -static CYTHON_INLINE void __Pyx_RaiseClosureNameError(const char *varname) { - PyErr_Format(PyExc_NameError, "free variable '%s' referenced before assignment in enclosing scope", varname); -} - -/* RaiseTooManyValuesToUnpack */ -static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected) { - PyErr_Format(PyExc_ValueError, - "too many values to unpack (expected %" CYTHON_FORMAT_SSIZE_T "d)", expected); -} - -/* RaiseNeedMoreValuesToUnpack */ -static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index) { - PyErr_Format(PyExc_ValueError, - "need more than %" CYTHON_FORMAT_SSIZE_T "d value%.1s to unpack", - index, (index == 1) ? "" : "s"); -} - -/* IterFinish */ -static CYTHON_INLINE int __Pyx_IterFinish(void) { -#if CYTHON_FAST_THREAD_STATE - PyThreadState *tstate = __Pyx_PyThreadState_Current; - PyObject* exc_type = tstate->curexc_type; - if (unlikely(exc_type)) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) { - PyObject *exc_value, *exc_tb; - exc_value = tstate->curexc_value; - exc_tb = tstate->curexc_traceback; - tstate->curexc_type = 0; - tstate->curexc_value = 0; - tstate->curexc_traceback = 0; - Py_DECREF(exc_type); - Py_XDECREF(exc_value); - Py_XDECREF(exc_tb); - return 0; - } else { - return -1; - } - } - return 0; -#else - if (unlikely(PyErr_Occurred())) { - if (likely(PyErr_ExceptionMatches(PyExc_StopIteration))) { - PyErr_Clear(); - return 0; - } else { - return -1; - } - } - return 0; -#endif -} - -/* UnpackItemEndCheck */ -static int __Pyx_IternextUnpackEndCheck(PyObject *retval, Py_ssize_t expected) { - if (unlikely(retval)) { - Py_DECREF(retval); - __Pyx_RaiseTooManyValuesError(expected); - return -1; - } - return __Pyx_IterFinish(); -} - -/* PyObjectCall */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw) { - PyObject *result; - ternaryfunc call = Py_TYPE(func)->tp_call; - if (unlikely(!call)) - return PyObject_Call(func, arg, kw); - if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) - return NULL; - result = (*call)(func, arg, kw); - Py_LeaveRecursiveCall(); - if (unlikely(!result) && unlikely(!PyErr_Occurred())) { - PyErr_SetString( - PyExc_SystemError, - "NULL result without error in PyObject_Call"); - } - return result; -} -#endif - -/* PyDictVersioning */ -#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj) { - PyObject *dict = Py_TYPE(obj)->tp_dict; - return likely(dict) ? __PYX_GET_DICT_VERSION(dict) : 0; -} -static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj) { - PyObject **dictptr = NULL; - Py_ssize_t offset = Py_TYPE(obj)->tp_dictoffset; - if (offset) { -#if CYTHON_COMPILING_IN_CPYTHON - dictptr = (likely(offset > 0)) ? (PyObject **) ((char *)obj + offset) : _PyObject_GetDictPtr(obj); -#else - dictptr = _PyObject_GetDictPtr(obj); -#endif - } - return (dictptr && *dictptr) ? 
__PYX_GET_DICT_VERSION(*dictptr) : 0; -} -static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version) { - PyObject *dict = Py_TYPE(obj)->tp_dict; - if (unlikely(!dict) || unlikely(tp_dict_version != __PYX_GET_DICT_VERSION(dict))) - return 0; - return obj_dict_version == __Pyx_get_object_dict_version(obj); -} -#endif - -/* GetModuleGlobalName */ -#if CYTHON_USE_DICT_VERSIONS -static PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value) -#else -static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name) -#endif -{ - PyObject *result; -#if !CYTHON_AVOID_BORROWED_REFS -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1 - result = _PyDict_GetItem_KnownHash(__pyx_d, name, ((PyASCIIObject *) name)->hash); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } else if (unlikely(PyErr_Occurred())) { - return NULL; - } -#else - result = PyDict_GetItem(__pyx_d, name); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } -#endif -#else - result = PyObject_GetItem(__pyx_d, name); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } - PyErr_Clear(); -#endif - return __Pyx_GetBuiltinName(name); -} - -/* PyCFunctionFastCall */ -#if CYTHON_FAST_PYCCALL -static CYTHON_INLINE PyObject * __Pyx_PyCFunction_FastCall(PyObject *func_obj, PyObject **args, Py_ssize_t nargs) { - PyCFunctionObject *func = (PyCFunctionObject*)func_obj; - PyCFunction meth = PyCFunction_GET_FUNCTION(func); - PyObject *self = PyCFunction_GET_SELF(func); - int flags = PyCFunction_GET_FLAGS(func); - assert(PyCFunction_Check(func)); - assert(METH_FASTCALL == (flags & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_KEYWORDS | METH_STACKLESS))); - assert(nargs >= 0); - assert(nargs == 0 || args != NULL); - /* _PyCFunction_FastCallDict() must not be called with an exception set, - because it may clear it (directly or indirectly) and so the - caller loses its exception */ - assert(!PyErr_Occurred()); - if ((PY_VERSION_HEX < 0x030700A0) || unlikely(flags & METH_KEYWORDS)) { - return (*((__Pyx_PyCFunctionFastWithKeywords)(void*)meth)) (self, args, nargs, NULL); - } else { - return (*((__Pyx_PyCFunctionFast)(void*)meth)) (self, args, nargs); - } -} -#endif - -/* PyFunctionFastCall */ -#if CYTHON_FAST_PYCALL -static PyObject* __Pyx_PyFunction_FastCallNoKw(PyCodeObject *co, PyObject **args, Py_ssize_t na, - PyObject *globals) { - PyFrameObject *f; - PyThreadState *tstate = __Pyx_PyThreadState_Current; - PyObject **fastlocals; - Py_ssize_t i; - PyObject *result; - assert(globals != NULL); - /* XXX Perhaps we should create a specialized - PyFrame_New() that doesn't take locals, but does - take builtins without sanity checking them. 
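- *
- *    (The no-keyword fast path below fires only for plain functions whose
- *    code flags are exactly CO_OPTIMIZED | CO_NEWLOCALS | CO_NOFREE and
- *    which take no keyword-only arguments; e.g.
- *
- *        def f(a, b): return a + b   # eligible
- *        def g(a, *rest): return a   # not eligible: CO_VARARGS is set
- *    )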
- */ - assert(tstate != NULL); - f = PyFrame_New(tstate, co, globals, NULL); - if (f == NULL) { - return NULL; - } - fastlocals = __Pyx_PyFrame_GetLocalsplus(f); - for (i = 0; i < na; i++) { - Py_INCREF(*args); - fastlocals[i] = *args++; - } - result = PyEval_EvalFrameEx(f,0); - ++tstate->recursion_depth; - Py_DECREF(f); - --tstate->recursion_depth; - return result; -} -#if 1 || PY_VERSION_HEX < 0x030600B1 -static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, Py_ssize_t nargs, PyObject *kwargs) { - PyCodeObject *co = (PyCodeObject *)PyFunction_GET_CODE(func); - PyObject *globals = PyFunction_GET_GLOBALS(func); - PyObject *argdefs = PyFunction_GET_DEFAULTS(func); - PyObject *closure; -#if PY_MAJOR_VERSION >= 3 - PyObject *kwdefs; -#endif - PyObject *kwtuple, **k; - PyObject **d; - Py_ssize_t nd; - Py_ssize_t nk; - PyObject *result; - assert(kwargs == NULL || PyDict_Check(kwargs)); - nk = kwargs ? PyDict_Size(kwargs) : 0; - if (Py_EnterRecursiveCall((char*)" while calling a Python object")) { - return NULL; - } - if ( -#if PY_MAJOR_VERSION >= 3 - co->co_kwonlyargcount == 0 && -#endif - likely(kwargs == NULL || nk == 0) && - co->co_flags == (CO_OPTIMIZED | CO_NEWLOCALS | CO_NOFREE)) { - if (argdefs == NULL && co->co_argcount == nargs) { - result = __Pyx_PyFunction_FastCallNoKw(co, args, nargs, globals); - goto done; - } - else if (nargs == 0 && argdefs != NULL - && co->co_argcount == Py_SIZE(argdefs)) { - /* function called with no arguments, but all parameters have - a default value: use default values as arguments .*/ - args = &PyTuple_GET_ITEM(argdefs, 0); - result =__Pyx_PyFunction_FastCallNoKw(co, args, Py_SIZE(argdefs), globals); - goto done; - } - } - if (kwargs != NULL) { - Py_ssize_t pos, i; - kwtuple = PyTuple_New(2 * nk); - if (kwtuple == NULL) { - result = NULL; - goto done; - } - k = &PyTuple_GET_ITEM(kwtuple, 0); - pos = i = 0; - while (PyDict_Next(kwargs, &pos, &k[i], &k[i+1])) { - Py_INCREF(k[i]); - Py_INCREF(k[i+1]); - i += 2; - } - nk = i / 2; - } - else { - kwtuple = NULL; - k = NULL; - } - closure = PyFunction_GET_CLOSURE(func); -#if PY_MAJOR_VERSION >= 3 - kwdefs = PyFunction_GET_KW_DEFAULTS(func); -#endif - if (argdefs != NULL) { - d = &PyTuple_GET_ITEM(argdefs, 0); - nd = Py_SIZE(argdefs); - } - else { - d = NULL; - nd = 0; - } -#if PY_MAJOR_VERSION >= 3 - result = PyEval_EvalCodeEx((PyObject*)co, globals, (PyObject *)NULL, - args, (int)nargs, - k, (int)nk, - d, (int)nd, kwdefs, closure); -#else - result = PyEval_EvalCodeEx(co, globals, (PyObject *)NULL, - args, (int)nargs, - k, (int)nk, - d, (int)nd, closure); -#endif - Py_XDECREF(kwtuple); -done: - Py_LeaveRecursiveCall(); - return result; -} -#endif -#endif - -/* PyObjectCall2Args */ -static CYTHON_UNUSED PyObject* __Pyx_PyObject_Call2Args(PyObject* function, PyObject* arg1, PyObject* arg2) { - PyObject *args, *result = NULL; - #if CYTHON_FAST_PYCALL - if (PyFunction_Check(function)) { - PyObject *args[2] = {arg1, arg2}; - return __Pyx_PyFunction_FastCall(function, args, 2); - } - #endif - #if CYTHON_FAST_PYCCALL - if (__Pyx_PyFastCFunction_Check(function)) { - PyObject *args[2] = {arg1, arg2}; - return __Pyx_PyCFunction_FastCall(function, args, 2); - } - #endif - args = PyTuple_New(2); - if (unlikely(!args)) goto done; - Py_INCREF(arg1); - PyTuple_SET_ITEM(args, 0, arg1); - Py_INCREF(arg2); - PyTuple_SET_ITEM(args, 1, arg2); - Py_INCREF(function); - result = __Pyx_PyObject_Call(function, args, NULL); - Py_DECREF(args); - Py_DECREF(function); -done: - return result; -} - -/* 
PyObjectCallMethO */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg) { - PyObject *self, *result; - PyCFunction cfunc; - cfunc = PyCFunction_GET_FUNCTION(func); - self = PyCFunction_GET_SELF(func); - if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) - return NULL; - result = cfunc(self, arg); - Py_LeaveRecursiveCall(); - if (unlikely(!result) && unlikely(!PyErr_Occurred())) { - PyErr_SetString( - PyExc_SystemError, - "NULL result without error in PyObject_Call"); - } - return result; -} -#endif - -/* PyObjectCallOneArg */ -#if CYTHON_COMPILING_IN_CPYTHON -static PyObject* __Pyx__PyObject_CallOneArg(PyObject *func, PyObject *arg) { - PyObject *result; - PyObject *args = PyTuple_New(1); - if (unlikely(!args)) return NULL; - Py_INCREF(arg); - PyTuple_SET_ITEM(args, 0, arg); - result = __Pyx_PyObject_Call(func, args, NULL); - Py_DECREF(args); - return result; -} -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg) { -#if CYTHON_FAST_PYCALL - if (PyFunction_Check(func)) { - return __Pyx_PyFunction_FastCall(func, &arg, 1); - } -#endif - if (likely(PyCFunction_Check(func))) { - if (likely(PyCFunction_GET_FLAGS(func) & METH_O)) { - return __Pyx_PyObject_CallMethO(func, arg); -#if CYTHON_FAST_PYCCALL - } else if (__Pyx_PyFastCFunction_Check(func)) { - return __Pyx_PyCFunction_FastCall(func, &arg, 1); -#endif - } - } - return __Pyx__PyObject_CallOneArg(func, arg); -} -#else -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg) { - PyObject *result; - PyObject *args = PyTuple_Pack(1, arg); - if (unlikely(!args)) return NULL; - result = __Pyx_PyObject_Call(func, args, NULL); - Py_DECREF(args); - return result; -} -#endif - -/* SliceObject */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetSlice(PyObject* obj, - Py_ssize_t cstart, Py_ssize_t cstop, - PyObject** _py_start, PyObject** _py_stop, PyObject** _py_slice, - int has_cstart, int has_cstop, CYTHON_UNUSED int wraparound) { -#if CYTHON_USE_TYPE_SLOTS - PyMappingMethods* mp; -#if PY_MAJOR_VERSION < 3 - PySequenceMethods* ms = Py_TYPE(obj)->tp_as_sequence; - if (likely(ms && ms->sq_slice)) { - if (!has_cstart) { - if (_py_start && (*_py_start != Py_None)) { - cstart = __Pyx_PyIndex_AsSsize_t(*_py_start); - if ((cstart == (Py_ssize_t)-1) && PyErr_Occurred()) goto bad; - } else - cstart = 0; - } - if (!has_cstop) { - if (_py_stop && (*_py_stop != Py_None)) { - cstop = __Pyx_PyIndex_AsSsize_t(*_py_stop); - if ((cstop == (Py_ssize_t)-1) && PyErr_Occurred()) goto bad; - } else - cstop = PY_SSIZE_T_MAX; - } - if (wraparound && unlikely((cstart < 0) | (cstop < 0)) && likely(ms->sq_length)) { - Py_ssize_t l = ms->sq_length(obj); - if (likely(l >= 0)) { - if (cstop < 0) { - cstop += l; - if (cstop < 0) cstop = 0; - } - if (cstart < 0) { - cstart += l; - if (cstart < 0) cstart = 0; - } - } else { - if (!PyErr_ExceptionMatches(PyExc_OverflowError)) - goto bad; - PyErr_Clear(); - } - } - return ms->sq_slice(obj, cstart, cstop); - } -#endif - mp = Py_TYPE(obj)->tp_as_mapping; - if (likely(mp && mp->mp_subscript)) -#endif - { - PyObject* result; - PyObject *py_slice, *py_start, *py_stop; - if (_py_slice) { - py_slice = *_py_slice; - } else { - PyObject* owned_start = NULL; - PyObject* owned_stop = NULL; - if (_py_start) { - py_start = *_py_start; - } else { - if (has_cstart) { - owned_start = py_start = PyInt_FromSsize_t(cstart); - if (unlikely(!py_start)) goto bad; - } else - py_start = 
Py_None; - } - if (_py_stop) { - py_stop = *_py_stop; - } else { - if (has_cstop) { - owned_stop = py_stop = PyInt_FromSsize_t(cstop); - if (unlikely(!py_stop)) { - Py_XDECREF(owned_start); - goto bad; - } - } else - py_stop = Py_None; - } - py_slice = PySlice_New(py_start, py_stop, Py_None); - Py_XDECREF(owned_start); - Py_XDECREF(owned_stop); - if (unlikely(!py_slice)) goto bad; - } -#if CYTHON_USE_TYPE_SLOTS - result = mp->mp_subscript(obj, py_slice); -#else - result = PyObject_GetItem(obj, py_slice); -#endif - if (!_py_slice) { - Py_DECREF(py_slice); - } - return result; - } - PyErr_Format(PyExc_TypeError, - "'%.200s' object is unsliceable", Py_TYPE(obj)->tp_name); -bad: - return NULL; -} - -/* PyObjectCallNoArg */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallNoArg(PyObject *func) { -#if CYTHON_FAST_PYCALL - if (PyFunction_Check(func)) { - return __Pyx_PyFunction_FastCall(func, NULL, 0); - } -#endif -#if defined(__Pyx_CyFunction_USED) && defined(NDEBUG) - if (likely(PyCFunction_Check(func) || __Pyx_CyFunction_Check(func))) -#else - if (likely(PyCFunction_Check(func))) -#endif - { - if (likely(PyCFunction_GET_FLAGS(func) & METH_NOARGS)) { - return __Pyx_PyObject_CallMethO(func, NULL); - } - } - return __Pyx_PyObject_Call(func, __pyx_empty_tuple, NULL); -} -#endif - -/* PyObjectGetMethod */ -static int __Pyx_PyObject_GetMethod(PyObject *obj, PyObject *name, PyObject **method) { - PyObject *attr; -#if CYTHON_UNPACK_METHODS && CYTHON_COMPILING_IN_CPYTHON && CYTHON_USE_PYTYPE_LOOKUP - PyTypeObject *tp = Py_TYPE(obj); - PyObject *descr; - descrgetfunc f = NULL; - PyObject **dictptr, *dict; - int meth_found = 0; - assert (*method == NULL); - if (unlikely(tp->tp_getattro != PyObject_GenericGetAttr)) { - attr = __Pyx_PyObject_GetAttrStr(obj, name); - goto try_unpack; - } - if (unlikely(tp->tp_dict == NULL) && unlikely(PyType_Ready(tp) < 0)) { - return 0; - } - descr = _PyType_Lookup(tp, name); - if (likely(descr != NULL)) { - Py_INCREF(descr); -#if PY_MAJOR_VERSION >= 3 - #ifdef __Pyx_CyFunction_USED - if (likely(PyFunction_Check(descr) || (Py_TYPE(descr) == &PyMethodDescr_Type) || __Pyx_CyFunction_Check(descr))) - #else - if (likely(PyFunction_Check(descr) || (Py_TYPE(descr) == &PyMethodDescr_Type))) - #endif -#else - #ifdef __Pyx_CyFunction_USED - if (likely(PyFunction_Check(descr) || __Pyx_CyFunction_Check(descr))) - #else - if (likely(PyFunction_Check(descr))) - #endif -#endif - { - meth_found = 1; - } else { - f = Py_TYPE(descr)->tp_descr_get; - if (f != NULL && PyDescr_IsData(descr)) { - attr = f(descr, obj, (PyObject *)Py_TYPE(obj)); - Py_DECREF(descr); - goto try_unpack; - } - } - } - dictptr = _PyObject_GetDictPtr(obj); - if (dictptr != NULL && (dict = *dictptr) != NULL) { - Py_INCREF(dict); - attr = __Pyx_PyDict_GetItemStr(dict, name); - if (attr != NULL) { - Py_INCREF(attr); - Py_DECREF(dict); - Py_XDECREF(descr); - goto try_unpack; - } - Py_DECREF(dict); - } - if (meth_found) { - *method = descr; - return 1; - } - if (f != NULL) { - attr = f(descr, obj, (PyObject *)Py_TYPE(obj)); - Py_DECREF(descr); - goto try_unpack; - } - if (descr != NULL) { - *method = descr; - return 0; - } - PyErr_Format(PyExc_AttributeError, -#if PY_MAJOR_VERSION >= 3 - "'%.50s' object has no attribute '%U'", - tp->tp_name, name); -#else - "'%.50s' object has no attribute '%.400s'", - tp->tp_name, PyString_AS_STRING(name)); -#endif - return 0; -#else - attr = __Pyx_PyObject_GetAttrStr(obj, name); - goto try_unpack; -#endif -try_unpack: -#if CYTHON_UNPACK_METHODS - if 
(likely(attr) && PyMethod_Check(attr) && likely(PyMethod_GET_SELF(attr) == obj)) { - PyObject *function = PyMethod_GET_FUNCTION(attr); - Py_INCREF(function); - Py_DECREF(attr); - *method = function; - return 1; - } -#endif - *method = attr; - return 0; -} - -/* PyObjectCallMethod0 */ -static PyObject* __Pyx_PyObject_CallMethod0(PyObject* obj, PyObject* method_name) { - PyObject *method = NULL, *result = NULL; - int is_method = __Pyx_PyObject_GetMethod(obj, method_name, &method); - if (likely(is_method)) { - result = __Pyx_PyObject_CallOneArg(method, obj); - Py_DECREF(method); - return result; - } - if (unlikely(!method)) goto bad; - result = __Pyx_PyObject_CallNoArg(method); - Py_DECREF(method); -bad: - return result; -} - -/* UnpackUnboundCMethod */ -static int __Pyx_TryUnpackUnboundCMethod(__Pyx_CachedCFunction* target) { - PyObject *method; - method = __Pyx_PyObject_GetAttrStr(target->type, *target->method_name); - if (unlikely(!method)) - return -1; - target->method = method; -#if CYTHON_COMPILING_IN_CPYTHON - #if PY_MAJOR_VERSION >= 3 - if (likely(__Pyx_TypeCheck(method, &PyMethodDescr_Type))) - #endif - { - PyMethodDescrObject *descr = (PyMethodDescrObject*) method; - target->func = descr->d_method->ml_meth; - target->flag = descr->d_method->ml_flags & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_STACKLESS); - } -#endif - return 0; -} - -/* CallUnboundCMethod0 */ -static PyObject* __Pyx__CallUnboundCMethod0(__Pyx_CachedCFunction* cfunc, PyObject* self) { - PyObject *args, *result = NULL; - if (unlikely(!cfunc->method) && unlikely(__Pyx_TryUnpackUnboundCMethod(cfunc) < 0)) return NULL; -#if CYTHON_ASSUME_SAFE_MACROS - args = PyTuple_New(1); - if (unlikely(!args)) goto bad; - Py_INCREF(self); - PyTuple_SET_ITEM(args, 0, self); -#else - args = PyTuple_Pack(1, self); - if (unlikely(!args)) goto bad; -#endif - result = __Pyx_PyObject_Call(cfunc->method, args, NULL); - Py_DECREF(args); -bad: - return result; -} - -/* pop */ -static CYTHON_INLINE PyObject* __Pyx__PyObject_Pop(PyObject* L) { - if (Py_TYPE(L) == &PySet_Type) { - return PySet_Pop(L); - } - return __Pyx_PyObject_CallMethod0(L, __pyx_n_s_pop); -} -#if CYTHON_USE_PYLIST_INTERNALS && CYTHON_ASSUME_SAFE_MACROS -static CYTHON_INLINE PyObject* __Pyx_PyList_Pop(PyObject* L) { - if (likely(PyList_GET_SIZE(L) > (((PyListObject*)L)->allocated >> 1))) { - __Pyx_SET_SIZE(L, Py_SIZE(L) - 1); - return PyList_GET_ITEM(L, PyList_GET_SIZE(L)); - } - return __Pyx_CallUnboundCMethod0(&__pyx_umethod_PyList_Type_pop, L); -} -#endif - -/* PyIntBinop */ -#if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_PyInt_AddObjC(PyObject *op1, PyObject *op2, CYTHON_UNUSED long intval, int inplace, int zerodivision_check) { - (void)inplace; - (void)zerodivision_check; - #if PY_MAJOR_VERSION < 3 - if (likely(PyInt_CheckExact(op1))) { - const long b = intval; - long x; - long a = PyInt_AS_LONG(op1); - x = (long)((unsigned long)a + b); - if (likely((x^a) >= 0 || (x^b) >= 0)) - return PyInt_FromLong(x); - return PyLong_Type.tp_as_number->nb_add(op1, op2); - } - #endif - #if CYTHON_USE_PYLONG_INTERNALS - if (likely(PyLong_CheckExact(op1))) { - const long b = intval; - long a, x; -#ifdef HAVE_LONG_LONG - const PY_LONG_LONG llb = intval; - PY_LONG_LONG lla, llx; -#endif - const digit* digits = ((PyLongObject*)op1)->ob_digit; - const Py_ssize_t size = Py_SIZE(op1); - if (likely(__Pyx_sst_abs(size) <= 1)) { - a = likely(size) ? 
digits[0] : 0; - if (size == -1) a = -a; - } else { - switch (size) { - case -2: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - a = -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT) { - lla = -(PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - case 2: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - a = (long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT) { - lla = (PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - case -3: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - a = -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT) { - lla = -(PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - case 3: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - a = (long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT) { - lla = (PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - case -4: - if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - a = -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT) { - lla = -(PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - case 4: - if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - a = (long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT) { - lla = (PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - default: return PyLong_Type.tp_as_number->nb_add(op1, op2); - } - } - x = a + b; - return PyLong_FromLong(x); -#ifdef HAVE_LONG_LONG - long_long: - llx = lla + llb; - return PyLong_FromLongLong(llx); -#endif - - - } - #endif - if (PyFloat_CheckExact(op1)) { - const long b = intval; - double a = PyFloat_AS_DOUBLE(op1); - double result; - PyFPE_START_PROTECT("add", return NULL) - 
result = ((double)a) + (double)b; - PyFPE_END_PROTECT(result) - return PyFloat_FromDouble(result); - } - return (inplace ? PyNumber_InPlaceAdd : PyNumber_Add)(op1, op2); -} -#endif - -/* GetTopmostException */ -#if CYTHON_USE_EXC_INFO_STACK -static _PyErr_StackItem * -__Pyx_PyErr_GetTopmostException(PyThreadState *tstate) -{ - _PyErr_StackItem *exc_info = tstate->exc_info; - while ((exc_info->exc_type == NULL || exc_info->exc_type == Py_None) && - exc_info->previous_item != NULL) - { - exc_info = exc_info->previous_item; - } - return exc_info; -} -#endif - -/* SaveResetException */ -#if CYTHON_FAST_THREAD_STATE -static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { - #if CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info = __Pyx_PyErr_GetTopmostException(tstate); - *type = exc_info->exc_type; - *value = exc_info->exc_value; - *tb = exc_info->exc_traceback; - #else - *type = tstate->exc_type; - *value = tstate->exc_value; - *tb = tstate->exc_traceback; - #endif - Py_XINCREF(*type); - Py_XINCREF(*value); - Py_XINCREF(*tb); -} -static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - #if CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info = tstate->exc_info; - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = exc_info->exc_traceback; - exc_info->exc_type = type; - exc_info->exc_value = value; - exc_info->exc_traceback = tb; - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = type; - tstate->exc_value = value; - tstate->exc_traceback = tb; - #endif - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); -} -#endif - -/* PyErrExceptionMatches */ -#if CYTHON_FAST_THREAD_STATE -static int __Pyx_PyErr_ExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) { - Py_ssize_t i, n; - n = PyTuple_GET_SIZE(tuple); -#if PY_MAJOR_VERSION >= 3 - for (i=0; i<n; i++) { - if (exc_type == PyTuple_GET_ITEM(tuple, i)) return 1; - } -#endif - for (i=0; i<n; i++) { - if (__Pyx_PyErr_GivenExceptionMatches(exc_type, PyTuple_GET_ITEM(tuple, i))) return 1; - } - return 0; -} -static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err) { - PyObject *exc_type = tstate->curexc_type; - if (exc_type == err) return 1; - if (unlikely(!exc_type)) return 0; - if (unlikely(PyTuple_Check(err))) - return __Pyx_PyErr_ExceptionMatchesTuple(exc_type, err); - return __Pyx_PyErr_GivenExceptionMatches(exc_type, err); -} -#endif - -/* GetException */ -#if CYTHON_FAST_THREAD_STATE -static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) -#else -static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb) -#endif -{ - PyObject *local_type, *local_value, *local_tb; -#if CYTHON_FAST_THREAD_STATE - PyObject *tmp_type, *tmp_value, *tmp_tb; - local_type = tstate->curexc_type; - local_value = tstate->curexc_value; - local_tb = tstate->curexc_traceback; - tstate->curexc_type = 0; - tstate->curexc_value = 0; - tstate->curexc_traceback = 0; -#else - PyErr_Fetch(&local_type, &local_value, &local_tb); -#endif - PyErr_NormalizeException(&local_type, &local_value, &local_tb); -#if CYTHON_FAST_THREAD_STATE - if (unlikely(tstate->curexc_type)) -#else - if (unlikely(PyErr_Occurred())) -#endif - goto bad; - #if PY_MAJOR_VERSION >= 3 - if (local_tb) { - if (unlikely(PyException_SetTraceback(local_value, local_tb) < 0)) - goto bad; - } - #endif - Py_XINCREF(local_tb); - Py_XINCREF(local_type); - Py_XINCREF(local_value); - *type = local_type; - *value = local_value; - *tb = local_tb; -#if CYTHON_FAST_THREAD_STATE - #if CYTHON_USE_EXC_INFO_STACK - { - _PyErr_StackItem *exc_info = 
tstate->exc_info; - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = exc_info->exc_traceback; - exc_info->exc_type = local_type; - exc_info->exc_value = local_value; - exc_info->exc_traceback = local_tb; - } - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = local_type; - tstate->exc_value = local_value; - tstate->exc_traceback = local_tb; - #endif - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); -#else - PyErr_SetExcInfo(local_type, local_value, local_tb); -#endif - return 0; -bad: - *type = 0; - *value = 0; - *tb = 0; - Py_XDECREF(local_type); - Py_XDECREF(local_value); - Py_XDECREF(local_tb); - return -1; -} - -/* pyfrozenset_new */ -static CYTHON_INLINE PyObject* __Pyx_PyFrozenSet_New(PyObject* it) { - if (it) { - PyObject* result; -#if CYTHON_COMPILING_IN_PYPY - PyObject* args; - args = PyTuple_Pack(1, it); - if (unlikely(!args)) - return NULL; - result = PyObject_Call((PyObject*)&PyFrozenSet_Type, args, NULL); - Py_DECREF(args); - return result; -#else - if (PyFrozenSet_CheckExact(it)) { - Py_INCREF(it); - return it; - } - result = PyFrozenSet_New(it); - if (unlikely(!result)) - return NULL; - if ((PY_VERSION_HEX >= 0x031000A1) || likely(PySet_GET_SIZE(result))) - return result; - Py_DECREF(result); -#endif - } -#if CYTHON_USE_TYPE_SLOTS - return PyFrozenSet_Type.tp_new(&PyFrozenSet_Type, __pyx_empty_tuple, NULL); -#else - return PyObject_Call((PyObject*)&PyFrozenSet_Type, __pyx_empty_tuple, NULL); -#endif -} - -/* PySetContains */ -static int __Pyx_PySet_ContainsUnhashable(PyObject *set, PyObject *key) { - int result = -1; - if (PySet_Check(key) && PyErr_ExceptionMatches(PyExc_TypeError)) { - PyObject *tmpkey; - PyErr_Clear(); - tmpkey = __Pyx_PyFrozenSet_New(key); - if (tmpkey != NULL) { - result = PySet_Contains(set, tmpkey); - Py_DECREF(tmpkey); - } - } - return result; -} -static CYTHON_INLINE int __Pyx_PySet_ContainsTF(PyObject* key, PyObject* set, int eq) { - int result = PySet_Contains(set, key); - if (unlikely(result < 0)) { - result = __Pyx_PySet_ContainsUnhashable(set, key); - } - return unlikely(result < 0) ? 
result : (result == (eq == Py_EQ)); -} - -/* Import */ -static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level) { - PyObject *empty_list = 0; - PyObject *module = 0; - PyObject *global_dict = 0; - PyObject *empty_dict = 0; - PyObject *list; - #if PY_MAJOR_VERSION < 3 - PyObject *py_import; - py_import = __Pyx_PyObject_GetAttrStr(__pyx_b, __pyx_n_s_import); - if (!py_import) - goto bad; - #endif - if (from_list) - list = from_list; - else { - empty_list = PyList_New(0); - if (!empty_list) - goto bad; - list = empty_list; - } - global_dict = PyModule_GetDict(__pyx_m); - if (!global_dict) - goto bad; - empty_dict = PyDict_New(); - if (!empty_dict) - goto bad; - { - #if PY_MAJOR_VERSION >= 3 - if (level == -1) { - if ((1) && (strchr(__Pyx_MODULE_NAME, '.'))) { - module = PyImport_ImportModuleLevelObject( - name, global_dict, empty_dict, list, 1); - if (!module) { - if (!PyErr_ExceptionMatches(PyExc_ImportError)) - goto bad; - PyErr_Clear(); - } - } - level = 0; - } - #endif - if (!module) { - #if PY_MAJOR_VERSION < 3 - PyObject *py_level = PyInt_FromLong(level); - if (!py_level) - goto bad; - module = PyObject_CallFunctionObjArgs(py_import, - name, global_dict, empty_dict, list, py_level, (PyObject *)NULL); - Py_DECREF(py_level); - #else - module = PyImport_ImportModuleLevelObject( - name, global_dict, empty_dict, list, level); - #endif - } - } -bad: - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(py_import); - #endif - Py_XDECREF(empty_list); - Py_XDECREF(empty_dict); - return module; -} - -/* ImportFrom */ -static PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name) { - PyObject* value = __Pyx_PyObject_GetAttrStr(module, name); - if (unlikely(!value) && PyErr_ExceptionMatches(PyExc_AttributeError)) { - PyErr_Format(PyExc_ImportError, - #if PY_MAJOR_VERSION < 3 - "cannot import name %.230s", PyString_AS_STRING(name)); - #else - "cannot import name %S", name); - #endif - } - return value; -} - -/* PyObject_GenericGetAttrNoDict */ -#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000 -static PyObject *__Pyx_RaiseGenericGetAttributeError(PyTypeObject *tp, PyObject *attr_name) { - PyErr_Format(PyExc_AttributeError, -#if PY_MAJOR_VERSION >= 3 - "'%.50s' object has no attribute '%U'", - tp->tp_name, attr_name); -#else - "'%.50s' object has no attribute '%.400s'", - tp->tp_name, PyString_AS_STRING(attr_name)); -#endif - return NULL; -} -static CYTHON_INLINE PyObject* __Pyx_PyObject_GenericGetAttrNoDict(PyObject* obj, PyObject* attr_name) { - PyObject *descr; - PyTypeObject *tp = Py_TYPE(obj); - if (unlikely(!PyString_Check(attr_name))) { - return PyObject_GenericGetAttr(obj, attr_name); - } - assert(!tp->tp_dictoffset); - descr = _PyType_Lookup(tp, attr_name); - if (unlikely(!descr)) { - return __Pyx_RaiseGenericGetAttributeError(tp, attr_name); - } - Py_INCREF(descr); - #if PY_MAJOR_VERSION < 3 - if (likely(PyType_HasFeature(Py_TYPE(descr), Py_TPFLAGS_HAVE_CLASS))) - #endif - { - descrgetfunc f = Py_TYPE(descr)->tp_descr_get; - if (unlikely(f)) { - PyObject *res = f(descr, obj, (PyObject *)tp); - Py_DECREF(descr); - return res; - } - } - return descr; -} -#endif - -/* FetchCommonType */ -static PyTypeObject* __Pyx_FetchCommonType(PyTypeObject* type) { - PyObject* fake_module; - PyTypeObject* cached_type = NULL; - fake_module = PyImport_AddModule((char*) "_cython_" CYTHON_ABI); - if (!fake_module) return NULL; - Py_INCREF(fake_module); - cached_type = (PyTypeObject*) PyObject_GetAttrString(fake_module, type->tp_name); - if (cached_type) { - if 
(!PyType_Check((PyObject*)cached_type)) { - PyErr_Format(PyExc_TypeError, - "Shared Cython type %.200s is not a type object", - type->tp_name); - goto bad; - } - if (cached_type->tp_basicsize != type->tp_basicsize) { - PyErr_Format(PyExc_TypeError, - "Shared Cython type %.200s has the wrong size, try recompiling", - type->tp_name); - goto bad; - } - } else { - if (!PyErr_ExceptionMatches(PyExc_AttributeError)) goto bad; - PyErr_Clear(); - if (PyType_Ready(type) < 0) goto bad; - if (PyObject_SetAttrString(fake_module, type->tp_name, (PyObject*) type) < 0) - goto bad; - Py_INCREF(type); - cached_type = type; - } -done: - Py_DECREF(fake_module); - return cached_type; -bad: - Py_XDECREF(cached_type); - cached_type = NULL; - goto done; -} - -/* CythonFunctionShared */ -#include <structmember.h> -static PyObject * -__Pyx_CyFunction_get_doc(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *closure) -{ - if (unlikely(op->func_doc == NULL)) { - if (op->func.m_ml->ml_doc) { -#if PY_MAJOR_VERSION >= 3 - op->func_doc = PyUnicode_FromString(op->func.m_ml->ml_doc); -#else - op->func_doc = PyString_FromString(op->func.m_ml->ml_doc); -#endif - if (unlikely(op->func_doc == NULL)) - return NULL; - } else { - Py_INCREF(Py_None); - return Py_None; - } - } - Py_INCREF(op->func_doc); - return op->func_doc; -} -static int -__Pyx_CyFunction_set_doc(__pyx_CyFunctionObject *op, PyObject *value, CYTHON_UNUSED void *context) -{ - PyObject *tmp = op->func_doc; - if (value == NULL) { - value = Py_None; - } - Py_INCREF(value); - op->func_doc = value; - Py_XDECREF(tmp); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_name(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context) -{ - if (unlikely(op->func_name == NULL)) { -#if PY_MAJOR_VERSION >= 3 - op->func_name = PyUnicode_InternFromString(op->func.m_ml->ml_name); -#else - op->func_name = PyString_InternFromString(op->func.m_ml->ml_name); -#endif - if (unlikely(op->func_name == NULL)) - return NULL; - } - Py_INCREF(op->func_name); - return op->func_name; -} -static int -__Pyx_CyFunction_set_name(__pyx_CyFunctionObject *op, PyObject *value, CYTHON_UNUSED void *context) -{ - PyObject *tmp; -#if PY_MAJOR_VERSION >= 3 - if (unlikely(value == NULL || !PyUnicode_Check(value))) -#else - if (unlikely(value == NULL || !PyString_Check(value))) -#endif - { - PyErr_SetString(PyExc_TypeError, - "__name__ must be set to a string object"); - return -1; - } - tmp = op->func_name; - Py_INCREF(value); - op->func_name = value; - Py_XDECREF(tmp); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_qualname(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context) -{ - Py_INCREF(op->func_qualname); - return op->func_qualname; -} -static int -__Pyx_CyFunction_set_qualname(__pyx_CyFunctionObject *op, PyObject *value, CYTHON_UNUSED void *context) -{ - PyObject *tmp; -#if PY_MAJOR_VERSION >= 3 - if (unlikely(value == NULL || !PyUnicode_Check(value))) -#else - if (unlikely(value == NULL || !PyString_Check(value))) -#endif - { - PyErr_SetString(PyExc_TypeError, - "__qualname__ must be set to a string object"); - return -1; - } - tmp = op->func_qualname; - Py_INCREF(value); - op->func_qualname = value; - Py_XDECREF(tmp); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_self(__pyx_CyFunctionObject *m, CYTHON_UNUSED void *closure) -{ - PyObject *self; - self = m->func_closure; - if (self == NULL) - self = Py_None; - Py_INCREF(self); - return self; -} -static PyObject * -__Pyx_CyFunction_get_dict(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context) -{ - if (unlikely(op->func_dict == NULL)) { - 
op->func_dict = PyDict_New(); - if (unlikely(op->func_dict == NULL)) - return NULL; - } - Py_INCREF(op->func_dict); - return op->func_dict; -} -static int -__Pyx_CyFunction_set_dict(__pyx_CyFunctionObject *op, PyObject *value, CYTHON_UNUSED void *context) -{ - PyObject *tmp; - if (unlikely(value == NULL)) { - PyErr_SetString(PyExc_TypeError, - "function's dictionary may not be deleted"); - return -1; - } - if (unlikely(!PyDict_Check(value))) { - PyErr_SetString(PyExc_TypeError, - "setting function's dictionary to a non-dict"); - return -1; - } - tmp = op->func_dict; - Py_INCREF(value); - op->func_dict = value; - Py_XDECREF(tmp); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_globals(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context) -{ - Py_INCREF(op->func_globals); - return op->func_globals; -} -static PyObject * -__Pyx_CyFunction_get_closure(CYTHON_UNUSED __pyx_CyFunctionObject *op, CYTHON_UNUSED void *context) -{ - Py_INCREF(Py_None); - return Py_None; -} -static PyObject * -__Pyx_CyFunction_get_code(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context) -{ - PyObject* result = (op->func_code) ? op->func_code : Py_None; - Py_INCREF(result); - return result; -} -static int -__Pyx_CyFunction_init_defaults(__pyx_CyFunctionObject *op) { - int result = 0; - PyObject *res = op->defaults_getter((PyObject *) op); - if (unlikely(!res)) - return -1; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - op->defaults_tuple = PyTuple_GET_ITEM(res, 0); - Py_INCREF(op->defaults_tuple); - op->defaults_kwdict = PyTuple_GET_ITEM(res, 1); - Py_INCREF(op->defaults_kwdict); - #else - op->defaults_tuple = PySequence_ITEM(res, 0); - if (unlikely(!op->defaults_tuple)) result = -1; - else { - op->defaults_kwdict = PySequence_ITEM(res, 1); - if (unlikely(!op->defaults_kwdict)) result = -1; - } - #endif - Py_DECREF(res); - return result; -} -static int -__Pyx_CyFunction_set_defaults(__pyx_CyFunctionObject *op, PyObject* value, CYTHON_UNUSED void *context) { - PyObject* tmp; - if (!value) { - value = Py_None; - } else if (value != Py_None && !PyTuple_Check(value)) { - PyErr_SetString(PyExc_TypeError, - "__defaults__ must be set to a tuple object"); - return -1; - } - Py_INCREF(value); - tmp = op->defaults_tuple; - op->defaults_tuple = value; - Py_XDECREF(tmp); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_defaults(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context) { - PyObject* result = op->defaults_tuple; - if (unlikely(!result)) { - if (op->defaults_getter) { - if (__Pyx_CyFunction_init_defaults(op) < 0) return NULL; - result = op->defaults_tuple; - } else { - result = Py_None; - } - } - Py_INCREF(result); - return result; -} -static int -__Pyx_CyFunction_set_kwdefaults(__pyx_CyFunctionObject *op, PyObject* value, CYTHON_UNUSED void *context) { - PyObject* tmp; - if (!value) { - value = Py_None; - } else if (value != Py_None && !PyDict_Check(value)) { - PyErr_SetString(PyExc_TypeError, - "__kwdefaults__ must be set to a dict object"); - return -1; - } - Py_INCREF(value); - tmp = op->defaults_kwdict; - op->defaults_kwdict = value; - Py_XDECREF(tmp); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_kwdefaults(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context) { - PyObject* result = op->defaults_kwdict; - if (unlikely(!result)) { - if (op->defaults_getter) { - if (__Pyx_CyFunction_init_defaults(op) < 0) return NULL; - result = op->defaults_kwdict; - } else { - result = Py_None; - } - } - Py_INCREF(result); - return result; -} -static int 
-__Pyx_CyFunction_set_annotations(__pyx_CyFunctionObject *op, PyObject* value, CYTHON_UNUSED void *context) { - PyObject* tmp; - if (!value || value == Py_None) { - value = NULL; - } else if (!PyDict_Check(value)) { - PyErr_SetString(PyExc_TypeError, - "__annotations__ must be set to a dict object"); - return -1; - } - Py_XINCREF(value); - tmp = op->func_annotations; - op->func_annotations = value; - Py_XDECREF(tmp); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_annotations(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context) { - PyObject* result = op->func_annotations; - if (unlikely(!result)) { - result = PyDict_New(); - if (unlikely(!result)) return NULL; - op->func_annotations = result; - } - Py_INCREF(result); - return result; -} -static PyGetSetDef __pyx_CyFunction_getsets[] = { - {(char *) "func_doc", (getter)__Pyx_CyFunction_get_doc, (setter)__Pyx_CyFunction_set_doc, 0, 0}, - {(char *) "__doc__", (getter)__Pyx_CyFunction_get_doc, (setter)__Pyx_CyFunction_set_doc, 0, 0}, - {(char *) "func_name", (getter)__Pyx_CyFunction_get_name, (setter)__Pyx_CyFunction_set_name, 0, 0}, - {(char *) "__name__", (getter)__Pyx_CyFunction_get_name, (setter)__Pyx_CyFunction_set_name, 0, 0}, - {(char *) "__qualname__", (getter)__Pyx_CyFunction_get_qualname, (setter)__Pyx_CyFunction_set_qualname, 0, 0}, - {(char *) "__self__", (getter)__Pyx_CyFunction_get_self, 0, 0, 0}, - {(char *) "func_dict", (getter)__Pyx_CyFunction_get_dict, (setter)__Pyx_CyFunction_set_dict, 0, 0}, - {(char *) "__dict__", (getter)__Pyx_CyFunction_get_dict, (setter)__Pyx_CyFunction_set_dict, 0, 0}, - {(char *) "func_globals", (getter)__Pyx_CyFunction_get_globals, 0, 0, 0}, - {(char *) "__globals__", (getter)__Pyx_CyFunction_get_globals, 0, 0, 0}, - {(char *) "func_closure", (getter)__Pyx_CyFunction_get_closure, 0, 0, 0}, - {(char *) "__closure__", (getter)__Pyx_CyFunction_get_closure, 0, 0, 0}, - {(char *) "func_code", (getter)__Pyx_CyFunction_get_code, 0, 0, 0}, - {(char *) "__code__", (getter)__Pyx_CyFunction_get_code, 0, 0, 0}, - {(char *) "func_defaults", (getter)__Pyx_CyFunction_get_defaults, (setter)__Pyx_CyFunction_set_defaults, 0, 0}, - {(char *) "__defaults__", (getter)__Pyx_CyFunction_get_defaults, (setter)__Pyx_CyFunction_set_defaults, 0, 0}, - {(char *) "__kwdefaults__", (getter)__Pyx_CyFunction_get_kwdefaults, (setter)__Pyx_CyFunction_set_kwdefaults, 0, 0}, - {(char *) "__annotations__", (getter)__Pyx_CyFunction_get_annotations, (setter)__Pyx_CyFunction_set_annotations, 0, 0}, - {0, 0, 0, 0, 0} -}; -static PyMemberDef __pyx_CyFunction_members[] = { - {(char *) "__module__", T_OBJECT, offsetof(PyCFunctionObject, m_module), PY_WRITE_RESTRICTED, 0}, - {0, 0, 0, 0, 0} -}; -static PyObject * -__Pyx_CyFunction_reduce(__pyx_CyFunctionObject *m, CYTHON_UNUSED PyObject *args) -{ -#if PY_MAJOR_VERSION >= 3 - Py_INCREF(m->func_qualname); - return m->func_qualname; -#else - return PyString_FromString(m->func.m_ml->ml_name); -#endif -} -static PyMethodDef __pyx_CyFunction_methods[] = { - {"__reduce__", (PyCFunction)__Pyx_CyFunction_reduce, METH_VARARGS, 0}, - {0, 0, 0, 0} -}; -#if PY_VERSION_HEX < 0x030500A0 -#define __Pyx_CyFunction_weakreflist(cyfunc) ((cyfunc)->func_weakreflist) -#else -#define __Pyx_CyFunction_weakreflist(cyfunc) ((cyfunc)->func.m_weakreflist) -#endif -static PyObject *__Pyx_CyFunction_Init(__pyx_CyFunctionObject *op, PyMethodDef *ml, int flags, PyObject* qualname, - PyObject *closure, PyObject *module, PyObject* globals, PyObject* code) { - if (unlikely(op == NULL)) - return NULL; - op->flags = 
flags; - __Pyx_CyFunction_weakreflist(op) = NULL; - op->func.m_ml = ml; - op->func.m_self = (PyObject *) op; - Py_XINCREF(closure); - op->func_closure = closure; - Py_XINCREF(module); - op->func.m_module = module; - op->func_dict = NULL; - op->func_name = NULL; - Py_INCREF(qualname); - op->func_qualname = qualname; - op->func_doc = NULL; - op->func_classobj = NULL; - op->func_globals = globals; - Py_INCREF(op->func_globals); - Py_XINCREF(code); - op->func_code = code; - op->defaults_pyobjects = 0; - op->defaults_size = 0; - op->defaults = NULL; - op->defaults_tuple = NULL; - op->defaults_kwdict = NULL; - op->defaults_getter = NULL; - op->func_annotations = NULL; - return (PyObject *) op; -} -static int -__Pyx_CyFunction_clear(__pyx_CyFunctionObject *m) -{ - Py_CLEAR(m->func_closure); - Py_CLEAR(m->func.m_module); - Py_CLEAR(m->func_dict); - Py_CLEAR(m->func_name); - Py_CLEAR(m->func_qualname); - Py_CLEAR(m->func_doc); - Py_CLEAR(m->func_globals); - Py_CLEAR(m->func_code); - Py_CLEAR(m->func_classobj); - Py_CLEAR(m->defaults_tuple); - Py_CLEAR(m->defaults_kwdict); - Py_CLEAR(m->func_annotations); - if (m->defaults) { - PyObject **pydefaults = __Pyx_CyFunction_Defaults(PyObject *, m); - int i; - for (i = 0; i < m->defaults_pyobjects; i++) - Py_XDECREF(pydefaults[i]); - PyObject_Free(m->defaults); - m->defaults = NULL; - } - return 0; -} -static void __Pyx__CyFunction_dealloc(__pyx_CyFunctionObject *m) -{ - if (__Pyx_CyFunction_weakreflist(m) != NULL) - PyObject_ClearWeakRefs((PyObject *) m); - __Pyx_CyFunction_clear(m); - PyObject_GC_Del(m); -} -static void __Pyx_CyFunction_dealloc(__pyx_CyFunctionObject *m) -{ - PyObject_GC_UnTrack(m); - __Pyx__CyFunction_dealloc(m); -} -static int __Pyx_CyFunction_traverse(__pyx_CyFunctionObject *m, visitproc visit, void *arg) -{ - Py_VISIT(m->func_closure); - Py_VISIT(m->func.m_module); - Py_VISIT(m->func_dict); - Py_VISIT(m->func_name); - Py_VISIT(m->func_qualname); - Py_VISIT(m->func_doc); - Py_VISIT(m->func_globals); - Py_VISIT(m->func_code); - Py_VISIT(m->func_classobj); - Py_VISIT(m->defaults_tuple); - Py_VISIT(m->defaults_kwdict); - if (m->defaults) { - PyObject **pydefaults = __Pyx_CyFunction_Defaults(PyObject *, m); - int i; - for (i = 0; i < m->defaults_pyobjects; i++) - Py_VISIT(pydefaults[i]); - } - return 0; -} -static PyObject *__Pyx_CyFunction_descr_get(PyObject *func, PyObject *obj, PyObject *type) -{ -#if PY_MAJOR_VERSION < 3 - __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func; - if (m->flags & __Pyx_CYFUNCTION_STATICMETHOD) { - Py_INCREF(func); - return func; - } - if (m->flags & __Pyx_CYFUNCTION_CLASSMETHOD) { - if (type == NULL) - type = (PyObject *)(Py_TYPE(obj)); - return __Pyx_PyMethod_New(func, type, (PyObject *)(Py_TYPE(type))); - } - if (obj == Py_None) - obj = NULL; -#endif - return __Pyx_PyMethod_New(func, obj, type); -} -static PyObject* -__Pyx_CyFunction_repr(__pyx_CyFunctionObject *op) -{ -#if PY_MAJOR_VERSION >= 3 - return PyUnicode_FromFormat("<cyfunction %U at %p>", - op->func_qualname, (void *)op); -#else - return PyString_FromFormat("<cyfunction %s at %p>", - PyString_AsString(op->func_qualname), (void *)op); -#endif -} -static PyObject * __Pyx_CyFunction_CallMethod(PyObject *func, PyObject *self, PyObject *arg, PyObject *kw) { - PyCFunctionObject* f = (PyCFunctionObject*)func; - PyCFunction meth = f->m_ml->ml_meth; - Py_ssize_t size; - switch (f->m_ml->ml_flags & (METH_VARARGS | METH_KEYWORDS | METH_NOARGS | METH_O)) { - case METH_VARARGS: - if (likely(kw == NULL || PyDict_Size(kw) == 0)) - return (*meth)(self, arg); - break; - case METH_VARARGS | 
METH_KEYWORDS: - return (*(PyCFunctionWithKeywords)(void*)meth)(self, arg, kw); - case METH_NOARGS: - if (likely(kw == NULL || PyDict_Size(kw) == 0)) { - size = PyTuple_GET_SIZE(arg); - if (likely(size == 0)) - return (*meth)(self, NULL); - PyErr_Format(PyExc_TypeError, - "%.200s() takes no arguments (%" CYTHON_FORMAT_SSIZE_T "d given)", - f->m_ml->ml_name, size); - return NULL; - } - break; - case METH_O: - if (likely(kw == NULL || PyDict_Size(kw) == 0)) { - size = PyTuple_GET_SIZE(arg); - if (likely(size == 1)) { - PyObject *result, *arg0; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - arg0 = PyTuple_GET_ITEM(arg, 0); - #else - arg0 = PySequence_ITEM(arg, 0); if (unlikely(!arg0)) return NULL; - #endif - result = (*meth)(self, arg0); - #if !(CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS) - Py_DECREF(arg0); - #endif - return result; - } - PyErr_Format(PyExc_TypeError, - "%.200s() takes exactly one argument (%" CYTHON_FORMAT_SSIZE_T "d given)", - f->m_ml->ml_name, size); - return NULL; - } - break; - default: - PyErr_SetString(PyExc_SystemError, "Bad call flags in " - "__Pyx_CyFunction_Call. METH_OLDARGS is no " - "longer supported!"); - return NULL; - } - PyErr_Format(PyExc_TypeError, "%.200s() takes no keyword arguments", - f->m_ml->ml_name); - return NULL; -} -static CYTHON_INLINE PyObject *__Pyx_CyFunction_Call(PyObject *func, PyObject *arg, PyObject *kw) { - return __Pyx_CyFunction_CallMethod(func, ((PyCFunctionObject*)func)->m_self, arg, kw); -} -static PyObject *__Pyx_CyFunction_CallAsMethod(PyObject *func, PyObject *args, PyObject *kw) { - PyObject *result; - __pyx_CyFunctionObject *cyfunc = (__pyx_CyFunctionObject *) func; - if ((cyfunc->flags & __Pyx_CYFUNCTION_CCLASS) && !(cyfunc->flags & __Pyx_CYFUNCTION_STATICMETHOD)) { - Py_ssize_t argc; - PyObject *new_args; - PyObject *self; - argc = PyTuple_GET_SIZE(args); - new_args = PyTuple_GetSlice(args, 1, argc); - if (unlikely(!new_args)) - return NULL; - self = PyTuple_GetItem(args, 0); - if (unlikely(!self)) { - Py_DECREF(new_args); -#if PY_MAJOR_VERSION > 2 - PyErr_Format(PyExc_TypeError, - "unbound method %.200S() needs an argument", - cyfunc->func_qualname); -#else - PyErr_SetString(PyExc_TypeError, - "unbound method needs an argument"); -#endif - return NULL; - } - result = __Pyx_CyFunction_CallMethod(func, self, new_args, kw); - Py_DECREF(new_args); - } else { - result = __Pyx_CyFunction_Call(func, args, kw); - } - return result; -} -static PyTypeObject __pyx_CyFunctionType_type = { - PyVarObject_HEAD_INIT(0, 0) - "cython_function_or_method", - sizeof(__pyx_CyFunctionObject), - 0, - (destructor) __Pyx_CyFunction_dealloc, - 0, - 0, - 0, -#if PY_MAJOR_VERSION < 3 - 0, -#else - 0, -#endif - (reprfunc) __Pyx_CyFunction_repr, - 0, - 0, - 0, - 0, - __Pyx_CyFunction_CallAsMethod, - 0, - 0, - 0, - 0, - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC, - 0, - (traverseproc) __Pyx_CyFunction_traverse, - (inquiry) __Pyx_CyFunction_clear, - 0, -#if PY_VERSION_HEX < 0x030500A0 - offsetof(__pyx_CyFunctionObject, func_weakreflist), -#else - offsetof(PyCFunctionObject, m_weakreflist), -#endif - 0, - 0, - __pyx_CyFunction_methods, - __pyx_CyFunction_members, - __pyx_CyFunction_getsets, - 0, - 0, - __Pyx_CyFunction_descr_get, - 0, - offsetof(__pyx_CyFunctionObject, func_dict), - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, -#if PY_VERSION_HEX >= 0x030400a1 - 0, -#endif -#if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, -#endif -#if PY_VERSION_HEX >= 0x030800b4 
&& PY_VERSION_HEX < 0x03090000 - 0, -#endif -#if PY_VERSION_HEX >= 0x030C0000 - 0, -#endif -#if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 && PY_VERSION_HEX < 0x030a0000 - 0, -#endif -}; -static int __pyx_CyFunction_init(void) { - __pyx_CyFunctionType = __Pyx_FetchCommonType(&__pyx_CyFunctionType_type); - if (unlikely(__pyx_CyFunctionType == NULL)) { - return -1; - } - return 0; -} -static CYTHON_INLINE void *__Pyx_CyFunction_InitDefaults(PyObject *func, size_t size, int pyobjects) { - __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func; - m->defaults = PyObject_Malloc(size); - if (unlikely(!m->defaults)) - return PyErr_NoMemory(); - memset(m->defaults, 0, size); - m->defaults_pyobjects = pyobjects; - m->defaults_size = size; - return m->defaults; -} -static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsTuple(PyObject *func, PyObject *tuple) { - __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func; - m->defaults_tuple = tuple; - Py_INCREF(tuple); -} -static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsKwDict(PyObject *func, PyObject *dict) { - __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func; - m->defaults_kwdict = dict; - Py_INCREF(dict); -} -static CYTHON_INLINE void __Pyx_CyFunction_SetAnnotationsDict(PyObject *func, PyObject *dict) { - __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func; - m->func_annotations = dict; - Py_INCREF(dict); -} - -/* CythonFunction */ -static PyObject *__Pyx_CyFunction_New(PyMethodDef *ml, int flags, PyObject* qualname, - PyObject *closure, PyObject *module, PyObject* globals, PyObject* code) { - PyObject *op = __Pyx_CyFunction_Init( - PyObject_GC_New(__pyx_CyFunctionObject, __pyx_CyFunctionType), - ml, flags, qualname, closure, module, globals, code - ); - if (likely(op)) { - PyObject_GC_Track(op); - } - return op; -} - -/* ObjectGetItem */ -#if CYTHON_USE_TYPE_SLOTS -static PyObject *__Pyx_PyObject_GetIndex(PyObject *obj, PyObject* index) { - PyObject *runerr = NULL; - Py_ssize_t key_value; - PySequenceMethods *m = Py_TYPE(obj)->tp_as_sequence; - if (unlikely(!(m && m->sq_item))) { - PyErr_Format(PyExc_TypeError, "'%.200s' object is not subscriptable", Py_TYPE(obj)->tp_name); - return NULL; - } - key_value = __Pyx_PyIndex_AsSsize_t(index); - if (likely(key_value != -1 || !(runerr = PyErr_Occurred()))) { - return __Pyx_GetItemInt_Fast(obj, key_value, 0, 1, 1); - } - if (PyErr_GivenExceptionMatches(runerr, PyExc_OverflowError)) { - PyErr_Clear(); - PyErr_Format(PyExc_IndexError, "cannot fit '%.200s' into an index-sized integer", Py_TYPE(index)->tp_name); - } - return NULL; -} -static PyObject *__Pyx_PyObject_GetItem(PyObject *obj, PyObject* key) { - PyMappingMethods *m = Py_TYPE(obj)->tp_as_mapping; - if (likely(m && m->mp_subscript)) { - return m->mp_subscript(obj, key); - } - return __Pyx_PyObject_GetIndex(obj, key); -} -#endif - -/* BytesEquals */ -static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals) { -#if CYTHON_COMPILING_IN_PYPY - return PyObject_RichCompareBool(s1, s2, equals); -#else - if (s1 == s2) { - return (equals == Py_EQ); - } else if (PyBytes_CheckExact(s1) & PyBytes_CheckExact(s2)) { - const char *ps1, *ps2; - Py_ssize_t length = PyBytes_GET_SIZE(s1); - if (length != PyBytes_GET_SIZE(s2)) - return (equals == Py_NE); - ps1 = PyBytes_AS_STRING(s1); - ps2 = PyBytes_AS_STRING(s2); - if (ps1[0] != ps2[0]) { - return (equals == Py_NE); - } else if (length == 1) { - return (equals == Py_EQ); - } else { - int result; -#if CYTHON_USE_UNICODE_INTERNALS && 
(PY_VERSION_HEX < 0x030B0000) - Py_hash_t hash1, hash2; - hash1 = ((PyBytesObject*)s1)->ob_shash; - hash2 = ((PyBytesObject*)s2)->ob_shash; - if (hash1 != hash2 && hash1 != -1 && hash2 != -1) { - return (equals == Py_NE); - } -#endif - result = memcmp(ps1, ps2, (size_t)length); - return (equals == Py_EQ) ? (result == 0) : (result != 0); - } - } else if ((s1 == Py_None) & PyBytes_CheckExact(s2)) { - return (equals == Py_NE); - } else if ((s2 == Py_None) & PyBytes_CheckExact(s1)) { - return (equals == Py_NE); - } else { - int result; - PyObject* py_result = PyObject_RichCompare(s1, s2, equals); - if (!py_result) - return -1; - result = __Pyx_PyObject_IsTrue(py_result); - Py_DECREF(py_result); - return result; - } -#endif -} - -/* UnicodeEquals */ -static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals) { -#if CYTHON_COMPILING_IN_PYPY - return PyObject_RichCompareBool(s1, s2, equals); -#else -#if PY_MAJOR_VERSION < 3 - PyObject* owned_ref = NULL; -#endif - int s1_is_unicode, s2_is_unicode; - if (s1 == s2) { - goto return_eq; - } - s1_is_unicode = PyUnicode_CheckExact(s1); - s2_is_unicode = PyUnicode_CheckExact(s2); -#if PY_MAJOR_VERSION < 3 - if ((s1_is_unicode & (!s2_is_unicode)) && PyString_CheckExact(s2)) { - owned_ref = PyUnicode_FromObject(s2); - if (unlikely(!owned_ref)) - return -1; - s2 = owned_ref; - s2_is_unicode = 1; - } else if ((s2_is_unicode & (!s1_is_unicode)) && PyString_CheckExact(s1)) { - owned_ref = PyUnicode_FromObject(s1); - if (unlikely(!owned_ref)) - return -1; - s1 = owned_ref; - s1_is_unicode = 1; - } else if (((!s2_is_unicode) & (!s1_is_unicode))) { - return __Pyx_PyBytes_Equals(s1, s2, equals); - } -#endif - if (s1_is_unicode & s2_is_unicode) { - Py_ssize_t length; - int kind; - void *data1, *data2; - if (unlikely(__Pyx_PyUnicode_READY(s1) < 0) || unlikely(__Pyx_PyUnicode_READY(s2) < 0)) - return -1; - length = __Pyx_PyUnicode_GET_LENGTH(s1); - if (length != __Pyx_PyUnicode_GET_LENGTH(s2)) { - goto return_ne; - } -#if CYTHON_USE_UNICODE_INTERNALS - { - Py_hash_t hash1, hash2; - #if CYTHON_PEP393_ENABLED - hash1 = ((PyASCIIObject*)s1)->hash; - hash2 = ((PyASCIIObject*)s2)->hash; - #else - hash1 = ((PyUnicodeObject*)s1)->hash; - hash2 = ((PyUnicodeObject*)s2)->hash; - #endif - if (hash1 != hash2 && hash1 != -1 && hash2 != -1) { - goto return_ne; - } - } -#endif - kind = __Pyx_PyUnicode_KIND(s1); - if (kind != __Pyx_PyUnicode_KIND(s2)) { - goto return_ne; - } - data1 = __Pyx_PyUnicode_DATA(s1); - data2 = __Pyx_PyUnicode_DATA(s2); - if (__Pyx_PyUnicode_READ(kind, data1, 0) != __Pyx_PyUnicode_READ(kind, data2, 0)) { - goto return_ne; - } else if (length == 1) { - goto return_eq; - } else { - int result = memcmp(data1, data2, (size_t)(length * kind)); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_EQ) ? 
(result == 0) : (result != 0); - } - } else if ((s1 == Py_None) & s2_is_unicode) { - goto return_ne; - } else if ((s2 == Py_None) & s1_is_unicode) { - goto return_ne; - } else { - int result; - PyObject* py_result = PyObject_RichCompare(s1, s2, equals); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - if (!py_result) - return -1; - result = __Pyx_PyObject_IsTrue(py_result); - Py_DECREF(py_result); - return result; - } -return_eq: - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_EQ); -return_ne: - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_NE); -#endif -} - -/* PyErrFetchRestore */ -#if CYTHON_FAST_THREAD_STATE -static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - tmp_type = tstate->curexc_type; - tmp_value = tstate->curexc_value; - tmp_tb = tstate->curexc_traceback; - tstate->curexc_type = type; - tstate->curexc_value = value; - tstate->curexc_traceback = tb; - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); -} -static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { - *type = tstate->curexc_type; - *value = tstate->curexc_value; - *tb = tstate->curexc_traceback; - tstate->curexc_type = 0; - tstate->curexc_value = 0; - tstate->curexc_traceback = 0; -} -#endif - -/* CLineInTraceback */ -#ifndef CYTHON_CLINE_IN_TRACEBACK -static int __Pyx_CLineForTraceback(CYTHON_UNUSED PyThreadState *tstate, int c_line) { - PyObject *use_cline; - PyObject *ptype, *pvalue, *ptraceback; -#if CYTHON_COMPILING_IN_CPYTHON - PyObject **cython_runtime_dict; -#endif - if (unlikely(!__pyx_cython_runtime)) { - return c_line; - } - __Pyx_ErrFetchInState(tstate, &ptype, &pvalue, &ptraceback); -#if CYTHON_COMPILING_IN_CPYTHON - cython_runtime_dict = _PyObject_GetDictPtr(__pyx_cython_runtime); - if (likely(cython_runtime_dict)) { - __PYX_PY_DICT_LOOKUP_IF_MODIFIED( - use_cline, *cython_runtime_dict, - __Pyx_PyDict_GetItemStr(*cython_runtime_dict, __pyx_n_s_cline_in_traceback)) - } else -#endif - { - PyObject *use_cline_obj = __Pyx_PyObject_GetAttrStr(__pyx_cython_runtime, __pyx_n_s_cline_in_traceback); - if (use_cline_obj) { - use_cline = PyObject_Not(use_cline_obj) ? 
Py_False : Py_True; - Py_DECREF(use_cline_obj); - } else { - PyErr_Clear(); - use_cline = NULL; - } - } - if (!use_cline) { - c_line = 0; - (void) PyObject_SetAttr(__pyx_cython_runtime, __pyx_n_s_cline_in_traceback, Py_False); - } - else if (use_cline == Py_False || (use_cline != Py_True && PyObject_Not(use_cline) != 0)) { - c_line = 0; - } - __Pyx_ErrRestoreInState(tstate, ptype, pvalue, ptraceback); - return c_line; -} -#endif - -/* CodeObjectCache */ -static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line) { - int start = 0, mid = 0, end = count - 1; - if (end >= 0 && code_line > entries[end].code_line) { - return count; - } - while (start < end) { - mid = start + (end - start) / 2; - if (code_line < entries[mid].code_line) { - end = mid; - } else if (code_line > entries[mid].code_line) { - start = mid + 1; - } else { - return mid; - } - } - if (code_line <= entries[mid].code_line) { - return mid; - } else { - return mid + 1; - } -} -static PyCodeObject *__pyx_find_code_object(int code_line) { - PyCodeObject* code_object; - int pos; - if (unlikely(!code_line) || unlikely(!__pyx_code_cache.entries)) { - return NULL; - } - pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line); - if (unlikely(pos >= __pyx_code_cache.count) || unlikely(__pyx_code_cache.entries[pos].code_line != code_line)) { - return NULL; - } - code_object = __pyx_code_cache.entries[pos].code_object; - Py_INCREF(code_object); - return code_object; -} -static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object) { - int pos, i; - __Pyx_CodeObjectCacheEntry* entries = __pyx_code_cache.entries; - if (unlikely(!code_line)) { - return; - } - if (unlikely(!entries)) { - entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Malloc(64*sizeof(__Pyx_CodeObjectCacheEntry)); - if (likely(entries)) { - __pyx_code_cache.entries = entries; - __pyx_code_cache.max_count = 64; - __pyx_code_cache.count = 1; - entries[0].code_line = code_line; - entries[0].code_object = code_object; - Py_INCREF(code_object); - } - return; - } - pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line); - if ((pos < __pyx_code_cache.count) && unlikely(__pyx_code_cache.entries[pos].code_line == code_line)) { - PyCodeObject* tmp = entries[pos].code_object; - entries[pos].code_object = code_object; - Py_DECREF(tmp); - return; - } - if (__pyx_code_cache.count == __pyx_code_cache.max_count) { - int new_max = __pyx_code_cache.max_count + 64; - entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Realloc( - __pyx_code_cache.entries, ((size_t)new_max) * sizeof(__Pyx_CodeObjectCacheEntry)); - if (unlikely(!entries)) { - return; - } - __pyx_code_cache.entries = entries; - __pyx_code_cache.max_count = new_max; - } - for (i=__pyx_code_cache.count; i>pos; i--) { - entries[i] = entries[i-1]; - } - entries[pos].code_line = code_line; - entries[pos].code_object = code_object; - __pyx_code_cache.count++; - Py_INCREF(code_object); -} - -/* AddTraceback */ -#include "compile.h" -#include "frameobject.h" -#include "traceback.h" -#if PY_VERSION_HEX >= 0x030b00a6 - #ifndef Py_BUILD_CORE - #define Py_BUILD_CORE 1 - #endif - #include "internal/pycore_frame.h" -#endif -static PyCodeObject* __Pyx_CreateCodeObjectForTraceback( - const char *funcname, int c_line, - int py_line, const char *filename) { - PyCodeObject *py_code = NULL; - PyObject *py_funcname = NULL; - #if PY_MAJOR_VERSION < 3 - PyObject *py_srcfile = NULL; - py_srcfile = PyString_FromString(filename); - if 
(!py_srcfile) goto bad; - #endif - if (c_line) { - #if PY_MAJOR_VERSION < 3 - py_funcname = PyString_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line); - if (!py_funcname) goto bad; - #else - py_funcname = PyUnicode_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line); - if (!py_funcname) goto bad; - funcname = PyUnicode_AsUTF8(py_funcname); - if (!funcname) goto bad; - #endif - } - else { - #if PY_MAJOR_VERSION < 3 - py_funcname = PyString_FromString(funcname); - if (!py_funcname) goto bad; - #endif - } - #if PY_MAJOR_VERSION < 3 - py_code = __Pyx_PyCode_New( - 0, - 0, - 0, - 0, - 0, - __pyx_empty_bytes, /*PyObject *code,*/ - __pyx_empty_tuple, /*PyObject *consts,*/ - __pyx_empty_tuple, /*PyObject *names,*/ - __pyx_empty_tuple, /*PyObject *varnames,*/ - __pyx_empty_tuple, /*PyObject *freevars,*/ - __pyx_empty_tuple, /*PyObject *cellvars,*/ - py_srcfile, /*PyObject *filename,*/ - py_funcname, /*PyObject *name,*/ - py_line, - __pyx_empty_bytes /*PyObject *lnotab*/ - ); - Py_DECREF(py_srcfile); - #else - py_code = PyCode_NewEmpty(filename, funcname, py_line); - #endif - Py_XDECREF(py_funcname); // XDECREF since it's only set on Py3 if cline - return py_code; -bad: - Py_XDECREF(py_funcname); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(py_srcfile); - #endif - return NULL; -} -static void __Pyx_AddTraceback(const char *funcname, int c_line, - int py_line, const char *filename) { - PyCodeObject *py_code = 0; - PyFrameObject *py_frame = 0; - PyThreadState *tstate = __Pyx_PyThreadState_Current; - PyObject *ptype, *pvalue, *ptraceback; - if (c_line) { - c_line = __Pyx_CLineForTraceback(tstate, c_line); - } - py_code = __pyx_find_code_object(c_line ? -c_line : py_line); - if (!py_code) { - __Pyx_ErrFetchInState(tstate, &ptype, &pvalue, &ptraceback); - py_code = __Pyx_CreateCodeObjectForTraceback( - funcname, c_line, py_line, filename); - if (!py_code) { - /* If the code object creation fails, then we should clear the - fetched exception references and propagate the new exception */ - Py_XDECREF(ptype); - Py_XDECREF(pvalue); - Py_XDECREF(ptraceback); - goto bad; - } - __Pyx_ErrRestoreInState(tstate, ptype, pvalue, ptraceback); - __pyx_insert_code_object(c_line ? 
-c_line : py_line, py_code); - } - py_frame = PyFrame_New( - tstate, /*PyThreadState *tstate,*/ - py_code, /*PyCodeObject *code,*/ - __pyx_d, /*PyObject *globals,*/ - 0 /*PyObject *locals*/ - ); - if (!py_frame) goto bad; - __Pyx_PyFrame_SetLineNumber(py_frame, py_line); - PyTraceBack_Here(py_frame); -bad: - Py_XDECREF(py_code); - Py_XDECREF(py_frame); -} - -/* FromPy */ -static __pyx_t_double_complex __Pyx_PyComplex_As___pyx_t_double_complex(PyObject* o) { - Py_complex cval; -#if !CYTHON_COMPILING_IN_PYPY - if (PyComplex_CheckExact(o)) - cval = ((PyComplexObject *)o)->cval; - else -#endif - cval = PyComplex_AsCComplex(o); - return __pyx_t_double_complex_from_parts( - (double)cval.real, - (double)cval.imag); -} - -/* CIntFromPyVerify */ -#define __PYX_VERIFY_RETURN_INT(target_type, func_type, func_value)\ - __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 0) -#define __PYX_VERIFY_RETURN_INT_EXC(target_type, func_type, func_value)\ - __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 1) -#define __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, exc)\ - {\ - func_type value = func_value;\ - if (sizeof(target_type) < sizeof(func_type)) {\ - if (unlikely(value != (func_type) (target_type) value)) {\ - func_type zero = 0;\ - if (exc && unlikely(value == (func_type)-1 && PyErr_Occurred()))\ - return (target_type) -1;\ - if (is_unsigned && unlikely(value < zero))\ - goto raise_neg_overflow;\ - else\ - goto raise_overflow;\ - }\ - }\ - return (target_type) value;\ - } - -/* Declarations */ -#if CYTHON_CCOMPLEX - #ifdef __cplusplus - static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double x, double y) { - return ::std::complex< double >(x, y); - } - #else - static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double x, double y) { - return x + y*(__pyx_t_double_complex)_Complex_I; - } - #endif -#else - static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double x, double y) { - __pyx_t_double_complex z; - z.real = x; - z.imag = y; - return z; - } -#endif - -/* Arithmetic */ -#if CYTHON_CCOMPLEX -#else - static CYTHON_INLINE int __Pyx_c_eq_double(__pyx_t_double_complex a, __pyx_t_double_complex b) { - return (a.real == b.real) && (a.imag == b.imag); - } - static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_sum_double(__pyx_t_double_complex a, __pyx_t_double_complex b) { - __pyx_t_double_complex z; - z.real = a.real + b.real; - z.imag = a.imag + b.imag; - return z; - } - static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_diff_double(__pyx_t_double_complex a, __pyx_t_double_complex b) { - __pyx_t_double_complex z; - z.real = a.real - b.real; - z.imag = a.imag - b.imag; - return z; - } - static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_prod_double(__pyx_t_double_complex a, __pyx_t_double_complex b) { - __pyx_t_double_complex z; - z.real = a.real * b.real - a.imag * b.imag; - z.imag = a.real * b.imag + a.imag * b.real; - return z; - } - #if 1 - static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot_double(__pyx_t_double_complex a, __pyx_t_double_complex b) { - if (b.imag == 0) { - return __pyx_t_double_complex_from_parts(a.real / b.real, a.imag / b.real); - } else if (fabs(b.real) >= fabs(b.imag)) { - if (b.real == 0 && b.imag == 0) { - return __pyx_t_double_complex_from_parts(a.real / b.real, a.imag / b.imag); - } else { - double r = b.imag / b.real; - double s = (double)(1.0) / (b.real + b.imag * r); - return __pyx_t_double_complex_from_parts( - (a.real + a.imag * r) * s, (a.imag - 
a.real * r) * s); - } - } else { - double r = b.real / b.imag; - double s = (double)(1.0) / (b.imag + b.real * r); - return __pyx_t_double_complex_from_parts( - (a.real * r + a.imag) * s, (a.imag * r - a.real) * s); - } - } - #else - static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot_double(__pyx_t_double_complex a, __pyx_t_double_complex b) { - if (b.imag == 0) { - return __pyx_t_double_complex_from_parts(a.real / b.real, a.imag / b.real); - } else { - double denom = b.real * b.real + b.imag * b.imag; - return __pyx_t_double_complex_from_parts( - (a.real * b.real + a.imag * b.imag) / denom, - (a.imag * b.real - a.real * b.imag) / denom); - } - } - #endif - static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_neg_double(__pyx_t_double_complex a) { - __pyx_t_double_complex z; - z.real = -a.real; - z.imag = -a.imag; - return z; - } - static CYTHON_INLINE int __Pyx_c_is_zero_double(__pyx_t_double_complex a) { - return (a.real == 0) && (a.imag == 0); - } - static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_conj_double(__pyx_t_double_complex a) { - __pyx_t_double_complex z; - z.real = a.real; - z.imag = -a.imag; - return z; - } - #if 1 - static CYTHON_INLINE double __Pyx_c_abs_double(__pyx_t_double_complex z) { - #if !defined(HAVE_HYPOT) || defined(_MSC_VER) - return sqrt(z.real*z.real + z.imag*z.imag); - #else - return hypot(z.real, z.imag); - #endif - } - static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_pow_double(__pyx_t_double_complex a, __pyx_t_double_complex b) { - __pyx_t_double_complex z; - double r, lnr, theta, z_r, z_theta; - if (b.imag == 0 && b.real == (int)b.real) { - if (b.real < 0) { - double denom = a.real * a.real + a.imag * a.imag; - a.real = a.real / denom; - a.imag = -a.imag / denom; - b.real = -b.real; - } - switch ((int)b.real) { - case 0: - z.real = 1; - z.imag = 0; - return z; - case 1: - return a; - case 2: - return __Pyx_c_prod_double(a, a); - case 3: - z = __Pyx_c_prod_double(a, a); - return __Pyx_c_prod_double(z, a); - case 4: - z = __Pyx_c_prod_double(a, a); - return __Pyx_c_prod_double(z, z); - } - } - if (a.imag == 0) { - if (a.real == 0) { - return a; - } else if ((b.imag == 0) && (a.real >= 0)) { - z.real = pow(a.real, b.real); - z.imag = 0; - return z; - } else if (a.real > 0) { - r = a.real; - theta = 0; - } else { - r = -a.real; - theta = atan2(0.0, -1.0); - } - } else { - r = __Pyx_c_abs_double(a); - theta = atan2(a.imag, a.real); - } - lnr = log(r); - z_r = exp(lnr * b.real - theta * b.imag); - z_theta = theta * b.real + lnr * b.imag; - z.real = z_r * cos(z_theta); - z.imag = z_r * sin(z_theta); - return z; - } - #endif -#endif - -/* CIntFromPy */ -static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *x) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const int neg_one = (int) -1, const_zero = (int) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x))) { - if (sizeof(int) < sizeof(long)) { - __PYX_VERIFY_RETURN_INT(int, long, PyInt_AS_LONG(x)) - } else { - long val = PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - goto raise_neg_overflow; - } - return (int) val; - } - } else -#endif - if (likely(PyLong_Check(x))) { - if (is_unsigned) { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (int) 0; - case 1: __PYX_VERIFY_RETURN_INT(int, digit, digits[0]) - case 2: - if (8 
* sizeof(int) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) >= 2 * PyLong_SHIFT) { - return (int) (((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - case 3: - if (8 * sizeof(int) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) >= 3 * PyLong_SHIFT) { - return (int) (((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - case 4: - if (8 * sizeof(int) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) >= 4 * PyLong_SHIFT) { - return (int) (((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - } -#endif -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030C00A7 - if (unlikely(Py_SIZE(x) < 0)) { - goto raise_neg_overflow; - } -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (int) -1; - if (unlikely(result == 1)) - goto raise_neg_overflow; - } -#endif - if (sizeof(int) <= sizeof(unsigned long)) { - __PYX_VERIFY_RETURN_INT_EXC(int, unsigned long, PyLong_AsUnsignedLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(int) <= sizeof(unsigned PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(int, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) -#endif - } - } else { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (int) 0; - case -1: __PYX_VERIFY_RETURN_INT(int, sdigit, (sdigit) (-(sdigit)digits[0])) - case 1: __PYX_VERIFY_RETURN_INT(int, digit, +digits[0]) - case -2: - if (8 * sizeof(int) - 1 > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) { - return (int) (((int)-1)*(((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 2: - if (8 * sizeof(int) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) { - return (int) ((((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case -3: - if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { - return (int) (((int)-1)*(((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 3: - if (8 * sizeof(int) > 2 * 
PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { - return (int) ((((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case -4: - if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 4 * PyLong_SHIFT) { - return (int) (((int)-1)*(((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 4: - if (8 * sizeof(int) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 4 * PyLong_SHIFT) { - return (int) ((((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - } -#endif - if (sizeof(int) <= sizeof(long)) { - __PYX_VERIFY_RETURN_INT_EXC(int, long, PyLong_AsLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(int) <= sizeof(PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(int, PY_LONG_LONG, PyLong_AsLongLong(x)) -#endif - } - } - { -#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray) - PyErr_SetString(PyExc_RuntimeError, - "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers"); -#else - int val; - PyObject *v = __Pyx_PyNumber_IntOrLong(x); - #if PY_MAJOR_VERSION < 3 - if (likely(v) && !PyLong_Check(v)) { - PyObject *tmp = v; - v = PyNumber_Long(tmp); - Py_DECREF(tmp); - } - #endif - if (likely(v)) { - int one = 1; int is_little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&val; - int ret = _PyLong_AsByteArray((PyLongObject *)v, - bytes, sizeof(val), - is_little, !is_unsigned); - Py_DECREF(v); - if (likely(!ret)) - return val; - } -#endif - return (int) -1; - } - } else { - int val; - PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); - if (!tmp) return (int) -1; - val = __Pyx_PyInt_As_int(tmp); - Py_DECREF(tmp); - return val; - } -raise_overflow: - PyErr_SetString(PyExc_OverflowError, - "value too large to convert to int"); - return (int) -1; -raise_neg_overflow: - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to int"); - return (int) -1; -} - -/* CIntToPy */ -static CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const int neg_one = (int) -1, const_zero = (int) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; - if (is_unsigned) { - if (sizeof(int) < sizeof(long)) { - return PyInt_FromLong((long) value); - } else if (sizeof(int) <= sizeof(unsigned long)) { - return PyLong_FromUnsignedLong((unsigned long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(int) <= sizeof(unsigned 
PY_LONG_LONG)) { - return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); -#endif - } - } else { - if (sizeof(int) <= sizeof(long)) { - return PyInt_FromLong((long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(int) <= sizeof(PY_LONG_LONG)) { - return PyLong_FromLongLong((PY_LONG_LONG) value); -#endif - } - } - { - int one = 1; int little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&value; - return _PyLong_FromByteArray(bytes, sizeof(int), - little, !is_unsigned); - } -} - -/* CIntToPy */ -static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const long neg_one = (long) -1, const_zero = (long) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; - if (is_unsigned) { - if (sizeof(long) < sizeof(long)) { - return PyInt_FromLong((long) value); - } else if (sizeof(long) <= sizeof(unsigned long)) { - return PyLong_FromUnsignedLong((unsigned long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) { - return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); -#endif - } - } else { - if (sizeof(long) <= sizeof(long)) { - return PyInt_FromLong((long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) { - return PyLong_FromLongLong((PY_LONG_LONG) value); -#endif - } - } - { - int one = 1; int little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&value; - return _PyLong_FromByteArray(bytes, sizeof(long), - little, !is_unsigned); - } -} - -/* CIntFromPy */ -static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *x) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const long neg_one = (long) -1, const_zero = (long) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x))) { - if (sizeof(long) < sizeof(long)) { - __PYX_VERIFY_RETURN_INT(long, long, PyInt_AS_LONG(x)) - } else { - long val = PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - goto raise_neg_overflow; - } - return (long) val; - } - } else -#endif - if (likely(PyLong_Check(x))) { - if (is_unsigned) { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (long) 0; - case 1: __PYX_VERIFY_RETURN_INT(long, digit, digits[0]) - case 2: - if (8 * sizeof(long) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) >= 2 * PyLong_SHIFT) { - return (long) (((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - case 3: - if (8 * sizeof(long) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) >= 3 * PyLong_SHIFT) { - return (long) (((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - case 4: - if (8 * sizeof(long) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned 
long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) >= 4 * PyLong_SHIFT) { - return (long) (((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - } -#endif -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030C00A7 - if (unlikely(Py_SIZE(x) < 0)) { - goto raise_neg_overflow; - } -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (long) -1; - if (unlikely(result == 1)) - goto raise_neg_overflow; - } -#endif - if (sizeof(long) <= sizeof(unsigned long)) { - __PYX_VERIFY_RETURN_INT_EXC(long, unsigned long, PyLong_AsUnsignedLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(long, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) -#endif - } - } else { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (long) 0; - case -1: __PYX_VERIFY_RETURN_INT(long, sdigit, (sdigit) (-(sdigit)digits[0])) - case 1: __PYX_VERIFY_RETURN_INT(long, digit, +digits[0]) - case -2: - if (8 * sizeof(long) - 1 > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - return (long) (((long)-1)*(((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 2: - if (8 * sizeof(long) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - return (long) ((((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case -3: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - return (long) (((long)-1)*(((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 3: - if (8 * sizeof(long) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - return (long) ((((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case -4: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - return (long) 
(((long)-1)*(((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 4: - if (8 * sizeof(long) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - return (long) ((((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - } -#endif - if (sizeof(long) <= sizeof(long)) { - __PYX_VERIFY_RETURN_INT_EXC(long, long, PyLong_AsLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(long, PY_LONG_LONG, PyLong_AsLongLong(x)) -#endif - } - } - { -#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray) - PyErr_SetString(PyExc_RuntimeError, - "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers"); -#else - long val; - PyObject *v = __Pyx_PyNumber_IntOrLong(x); - #if PY_MAJOR_VERSION < 3 - if (likely(v) && !PyLong_Check(v)) { - PyObject *tmp = v; - v = PyNumber_Long(tmp); - Py_DECREF(tmp); - } - #endif - if (likely(v)) { - int one = 1; int is_little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&val; - int ret = _PyLong_AsByteArray((PyLongObject *)v, - bytes, sizeof(val), - is_little, !is_unsigned); - Py_DECREF(v); - if (likely(!ret)) - return val; - } -#endif - return (long) -1; - } - } else { - long val; - PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); - if (!tmp) return (long) -1; - val = __Pyx_PyInt_As_long(tmp); - Py_DECREF(tmp); - return val; - } -raise_overflow: - PyErr_SetString(PyExc_OverflowError, - "value too large to convert to long"); - return (long) -1; -raise_neg_overflow: - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to long"); - return (long) -1; -} - -/* FastTypeChecks */ -#if CYTHON_COMPILING_IN_CPYTHON -static int __Pyx_InBases(PyTypeObject *a, PyTypeObject *b) { - while (a) { - a = a->tp_base; - if (a == b) - return 1; - } - return b == &PyBaseObject_Type; -} -static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b) { - PyObject *mro; - if (a == b) return 1; - mro = a->tp_mro; - if (likely(mro)) { - Py_ssize_t i, n; - n = PyTuple_GET_SIZE(mro); - for (i = 0; i < n; i++) { - if (PyTuple_GET_ITEM(mro, i) == (PyObject *)b) - return 1; - } - return 0; - } - return __Pyx_InBases(a, b); -} -#if PY_MAJOR_VERSION == 2 -static int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject* exc_type2) { - PyObject *exception, *value, *tb; - int res; - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ErrFetch(&exception, &value, &tb); - res = exc_type1 ? PyObject_IsSubclass(err, exc_type1) : 0; - if (unlikely(res == -1)) { - PyErr_WriteUnraisable(err); - res = 0; - } - if (!res) { - res = PyObject_IsSubclass(err, exc_type2); - if (unlikely(res == -1)) { - PyErr_WriteUnraisable(err); - res = 0; - } - } - __Pyx_ErrRestore(exception, value, tb); - return res; -} -#else -static CYTHON_INLINE int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject *exc_type2) { - int res = exc_type1 ? 
__Pyx_IsSubtype((PyTypeObject*)err, (PyTypeObject*)exc_type1) : 0; - if (!res) { - res = __Pyx_IsSubtype((PyTypeObject*)err, (PyTypeObject*)exc_type2); - } - return res; -} -#endif -static int __Pyx_PyErr_GivenExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) { - Py_ssize_t i, n; - assert(PyExceptionClass_Check(exc_type)); - n = PyTuple_GET_SIZE(tuple); -#if PY_MAJOR_VERSION >= 3 - for (i=0; icurexc_traceback; - if (tb != tmp_tb) { - Py_INCREF(tb); - tstate->curexc_traceback = tb; - Py_XDECREF(tmp_tb); - } -#else - PyObject *tmp_type, *tmp_value, *tmp_tb; - PyErr_Fetch(&tmp_type, &tmp_value, &tmp_tb); - Py_INCREF(tb); - PyErr_Restore(tmp_type, tmp_value, tb); - Py_XDECREF(tmp_tb); -#endif - } -bad: - Py_XDECREF(owned_instance); - return; -} -#endif - -/* SwapException */ -#if CYTHON_FAST_THREAD_STATE -static CYTHON_INLINE void __Pyx__ExceptionSwap(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - #if CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info = tstate->exc_info; - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = exc_info->exc_traceback; - exc_info->exc_type = *type; - exc_info->exc_value = *value; - exc_info->exc_traceback = *tb; - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = *type; - tstate->exc_value = *value; - tstate->exc_traceback = *tb; - #endif - *type = tmp_type; - *value = tmp_value; - *tb = tmp_tb; -} -#else -static CYTHON_INLINE void __Pyx_ExceptionSwap(PyObject **type, PyObject **value, PyObject **tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - PyErr_GetExcInfo(&tmp_type, &tmp_value, &tmp_tb); - PyErr_SetExcInfo(*type, *value, *tb); - *type = tmp_type; - *value = tmp_value; - *tb = tmp_tb; -} -#endif - -/* PyObjectCallMethod1 */ -static PyObject* __Pyx__PyObject_CallMethod1(PyObject* method, PyObject* arg) { - PyObject *result = __Pyx_PyObject_CallOneArg(method, arg); - Py_DECREF(method); - return result; -} -static PyObject* __Pyx_PyObject_CallMethod1(PyObject* obj, PyObject* method_name, PyObject* arg) { - PyObject *method = NULL, *result; - int is_method = __Pyx_PyObject_GetMethod(obj, method_name, &method); - if (likely(is_method)) { - result = __Pyx_PyObject_Call2Args(method, obj, arg); - Py_DECREF(method); - return result; - } - if (unlikely(!method)) return NULL; - return __Pyx__PyObject_CallMethod1(method, arg); -} - -/* CoroutineBase */ -#include -#include -#if PY_VERSION_HEX >= 0x030b00a6 - #ifndef Py_BUILD_CORE - #define Py_BUILD_CORE 1 - #endif - #include "internal/pycore_frame.h" -#endif -#define __Pyx_Coroutine_Undelegate(gen) Py_CLEAR((gen)->yieldfrom) -static int __Pyx_PyGen__FetchStopIterationValue(CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject **pvalue) { - PyObject *et, *ev, *tb; - PyObject *value = NULL; - __Pyx_ErrFetch(&et, &ev, &tb); - if (!et) { - Py_XDECREF(tb); - Py_XDECREF(ev); - Py_INCREF(Py_None); - *pvalue = Py_None; - return 0; - } - if (likely(et == PyExc_StopIteration)) { - if (!ev) { - Py_INCREF(Py_None); - value = Py_None; - } -#if PY_VERSION_HEX >= 0x030300A0 - else if (Py_TYPE(ev) == (PyTypeObject*)PyExc_StopIteration) { - value = ((PyStopIterationObject *)ev)->value; - Py_INCREF(value); - Py_DECREF(ev); - } -#endif - else if (unlikely(PyTuple_Check(ev))) { - if (PyTuple_GET_SIZE(ev) >= 1) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - value = PyTuple_GET_ITEM(ev, 0); - Py_INCREF(value); -#else - value = 
PySequence_ITEM(ev, 0); -#endif - } else { - Py_INCREF(Py_None); - value = Py_None; - } - Py_DECREF(ev); - } - else if (!__Pyx_TypeCheck(ev, (PyTypeObject*)PyExc_StopIteration)) { - value = ev; - } - if (likely(value)) { - Py_XDECREF(tb); - Py_DECREF(et); - *pvalue = value; - return 0; - } - } else if (!__Pyx_PyErr_GivenExceptionMatches(et, PyExc_StopIteration)) { - __Pyx_ErrRestore(et, ev, tb); - return -1; - } - PyErr_NormalizeException(&et, &ev, &tb); - if (unlikely(!PyObject_TypeCheck(ev, (PyTypeObject*)PyExc_StopIteration))) { - __Pyx_ErrRestore(et, ev, tb); - return -1; - } - Py_XDECREF(tb); - Py_DECREF(et); -#if PY_VERSION_HEX >= 0x030300A0 - value = ((PyStopIterationObject *)ev)->value; - Py_INCREF(value); - Py_DECREF(ev); -#else - { - PyObject* args = __Pyx_PyObject_GetAttrStr(ev, __pyx_n_s_args); - Py_DECREF(ev); - if (likely(args)) { - value = PySequence_GetItem(args, 0); - Py_DECREF(args); - } - if (unlikely(!value)) { - __Pyx_ErrRestore(NULL, NULL, NULL); - Py_INCREF(Py_None); - value = Py_None; - } - } -#endif - *pvalue = value; - return 0; -} -static CYTHON_INLINE -void __Pyx_Coroutine_ExceptionClear(__Pyx_ExcInfoStruct *exc_state) { - PyObject *t, *v, *tb; - t = exc_state->exc_type; - v = exc_state->exc_value; - tb = exc_state->exc_traceback; - exc_state->exc_type = NULL; - exc_state->exc_value = NULL; - exc_state->exc_traceback = NULL; - Py_XDECREF(t); - Py_XDECREF(v); - Py_XDECREF(tb); -} -#define __Pyx_Coroutine_AlreadyRunningError(gen) (__Pyx__Coroutine_AlreadyRunningError(gen), (PyObject*)NULL) -static void __Pyx__Coroutine_AlreadyRunningError(CYTHON_UNUSED __pyx_CoroutineObject *gen) { - const char *msg; - if ((0)) { - #ifdef __Pyx_Coroutine_USED - } else if (__Pyx_Coroutine_Check((PyObject*)gen)) { - msg = "coroutine already executing"; - #endif - #ifdef __Pyx_AsyncGen_USED - } else if (__Pyx_AsyncGen_CheckExact((PyObject*)gen)) { - msg = "async generator already executing"; - #endif - } else { - msg = "generator already executing"; - } - PyErr_SetString(PyExc_ValueError, msg); -} -#define __Pyx_Coroutine_NotStartedError(gen) (__Pyx__Coroutine_NotStartedError(gen), (PyObject*)NULL) -static void __Pyx__Coroutine_NotStartedError(CYTHON_UNUSED PyObject *gen) { - const char *msg; - if ((0)) { - #ifdef __Pyx_Coroutine_USED - } else if (__Pyx_Coroutine_Check(gen)) { - msg = "can't send non-None value to a just-started coroutine"; - #endif - #ifdef __Pyx_AsyncGen_USED - } else if (__Pyx_AsyncGen_CheckExact(gen)) { - msg = "can't send non-None value to a just-started async generator"; - #endif - } else { - msg = "can't send non-None value to a just-started generator"; - } - PyErr_SetString(PyExc_TypeError, msg); -} -#define __Pyx_Coroutine_AlreadyTerminatedError(gen, value, closing) (__Pyx__Coroutine_AlreadyTerminatedError(gen, value, closing), (PyObject*)NULL) -static void __Pyx__Coroutine_AlreadyTerminatedError(CYTHON_UNUSED PyObject *gen, PyObject *value, CYTHON_UNUSED int closing) { - #ifdef __Pyx_Coroutine_USED - if (!closing && __Pyx_Coroutine_Check(gen)) { - PyErr_SetString(PyExc_RuntimeError, "cannot reuse already awaited coroutine"); - } else - #endif - if (value) { - #ifdef __Pyx_AsyncGen_USED - if (__Pyx_AsyncGen_CheckExact(gen)) - PyErr_SetNone(__Pyx_PyExc_StopAsyncIteration); - else - #endif - PyErr_SetNone(PyExc_StopIteration); - } -} -static -PyObject *__Pyx_Coroutine_SendEx(__pyx_CoroutineObject *self, PyObject *value, int closing) { - __Pyx_PyThreadState_declare - PyThreadState *tstate; - __Pyx_ExcInfoStruct *exc_state; - PyObject *retval; - 
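/* Editor-added orientation comment: __Pyx_Coroutine_SendEx is the core resume
   path for Cython generators/coroutines. The checks below reject a non-None
   send() into a not-yet-started coroutine and any send() into an already
   finished one; the coroutine's saved exception state is then spliced into
   the thread state before the compiled body is invoked, and restored after. */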
assert(!self->is_running); - if (unlikely(self->resume_label == 0)) { - if (unlikely(value && value != Py_None)) { - return __Pyx_Coroutine_NotStartedError((PyObject*)self); - } - } - if (unlikely(self->resume_label == -1)) { - return __Pyx_Coroutine_AlreadyTerminatedError((PyObject*)self, value, closing); - } -#if CYTHON_FAST_THREAD_STATE - __Pyx_PyThreadState_assign - tstate = __pyx_tstate; -#else - tstate = __Pyx_PyThreadState_Current; -#endif - exc_state = &self->gi_exc_state; - if (exc_state->exc_type) { - #if CYTHON_COMPILING_IN_PYPY || CYTHON_COMPILING_IN_PYSTON - #else - if (exc_state->exc_traceback) { - PyTracebackObject *tb = (PyTracebackObject *) exc_state->exc_traceback; - PyFrameObject *f = tb->tb_frame; - assert(f->f_back == NULL); - #if PY_VERSION_HEX >= 0x030B00A1 - f->f_back = PyThreadState_GetFrame(tstate); - #else - Py_XINCREF(tstate->frame); - f->f_back = tstate->frame; - #endif - } - #endif - } -#if CYTHON_USE_EXC_INFO_STACK - exc_state->previous_item = tstate->exc_info; - tstate->exc_info = exc_state; -#else - if (exc_state->exc_type) { - __Pyx_ExceptionSwap(&exc_state->exc_type, &exc_state->exc_value, &exc_state->exc_traceback); - } else { - __Pyx_Coroutine_ExceptionClear(exc_state); - __Pyx_ExceptionSave(&exc_state->exc_type, &exc_state->exc_value, &exc_state->exc_traceback); - } -#endif - self->is_running = 1; - retval = self->body((PyObject *) self, tstate, value); - self->is_running = 0; -#if CYTHON_USE_EXC_INFO_STACK - exc_state = &self->gi_exc_state; - tstate->exc_info = exc_state->previous_item; - exc_state->previous_item = NULL; - __Pyx_Coroutine_ResetFrameBackpointer(exc_state); -#endif - return retval; -} -static CYTHON_INLINE void __Pyx_Coroutine_ResetFrameBackpointer(__Pyx_ExcInfoStruct *exc_state) { - PyObject *exc_tb = exc_state->exc_traceback; - if (likely(exc_tb)) { -#if CYTHON_COMPILING_IN_PYPY || CYTHON_COMPILING_IN_PYSTON -#else - PyTracebackObject *tb = (PyTracebackObject *) exc_tb; - PyFrameObject *f = tb->tb_frame; - Py_CLEAR(f->f_back); -#endif - } -} -static CYTHON_INLINE -PyObject *__Pyx_Coroutine_MethodReturn(CYTHON_UNUSED PyObject* gen, PyObject *retval) { - if (unlikely(!retval)) { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - if (!__Pyx_PyErr_Occurred()) { - PyObject *exc = PyExc_StopIteration; - #ifdef __Pyx_AsyncGen_USED - if (__Pyx_AsyncGen_CheckExact(gen)) - exc = __Pyx_PyExc_StopAsyncIteration; - #endif - __Pyx_PyErr_SetNone(exc); - } - } - return retval; -} -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x03030000 && (defined(__linux__) || PY_VERSION_HEX >= 0x030600B3) -static CYTHON_INLINE -PyObject *__Pyx_PyGen_Send(PyGenObject *gen, PyObject *arg) { -#if PY_VERSION_HEX <= 0x030A00A1 - return _PyGen_Send(gen, arg); -#else - PyObject *result; - if (PyIter_Send((PyObject*)gen, arg ? 
arg : Py_None, &result) == PYGEN_RETURN) { - if (PyAsyncGen_CheckExact(gen)) { - assert(result == Py_None); - PyErr_SetNone(PyExc_StopAsyncIteration); - } - else if (result == Py_None) { - PyErr_SetNone(PyExc_StopIteration); - } - else { - _PyGen_SetStopIterationValue(result); - } - Py_CLEAR(result); - } - return result; -#endif -} -#endif -static CYTHON_INLINE -PyObject *__Pyx_Coroutine_FinishDelegation(__pyx_CoroutineObject *gen) { - PyObject *ret; - PyObject *val = NULL; - __Pyx_Coroutine_Undelegate(gen); - __Pyx_PyGen__FetchStopIterationValue(__Pyx_PyThreadState_Current, &val); - ret = __Pyx_Coroutine_SendEx(gen, val, 0); - Py_XDECREF(val); - return ret; -} -static PyObject *__Pyx_Coroutine_Send(PyObject *self, PyObject *value) { - PyObject *retval; - __pyx_CoroutineObject *gen = (__pyx_CoroutineObject*) self; - PyObject *yf = gen->yieldfrom; - if (unlikely(gen->is_running)) - return __Pyx_Coroutine_AlreadyRunningError(gen); - if (yf) { - PyObject *ret; - gen->is_running = 1; - #ifdef __Pyx_Generator_USED - if (__Pyx_Generator_CheckExact(yf)) { - ret = __Pyx_Coroutine_Send(yf, value); - } else - #endif - #ifdef __Pyx_Coroutine_USED - if (__Pyx_Coroutine_Check(yf)) { - ret = __Pyx_Coroutine_Send(yf, value); - } else - #endif - #ifdef __Pyx_AsyncGen_USED - if (__pyx_PyAsyncGenASend_CheckExact(yf)) { - ret = __Pyx_async_gen_asend_send(yf, value); - } else - #endif - #if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x03030000 && (defined(__linux__) || PY_VERSION_HEX >= 0x030600B3) - if (PyGen_CheckExact(yf)) { - ret = __Pyx_PyGen_Send((PyGenObject*)yf, value == Py_None ? NULL : value); - } else - #endif - #if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x03050000 && defined(PyCoro_CheckExact) && (defined(__linux__) || PY_VERSION_HEX >= 0x030600B3) - if (PyCoro_CheckExact(yf)) { - ret = __Pyx_PyGen_Send((PyGenObject*)yf, value == Py_None ? 
NULL : value); - } else - #endif - { - if (value == Py_None) - ret = Py_TYPE(yf)->tp_iternext(yf); - else - ret = __Pyx_PyObject_CallMethod1(yf, __pyx_n_s_send, value); - } - gen->is_running = 0; - if (likely(ret)) { - return ret; - } - retval = __Pyx_Coroutine_FinishDelegation(gen); - } else { - retval = __Pyx_Coroutine_SendEx(gen, value, 0); - } - return __Pyx_Coroutine_MethodReturn(self, retval); -} -static int __Pyx_Coroutine_CloseIter(__pyx_CoroutineObject *gen, PyObject *yf) { - PyObject *retval = NULL; - int err = 0; - #ifdef __Pyx_Generator_USED - if (__Pyx_Generator_CheckExact(yf)) { - retval = __Pyx_Coroutine_Close(yf); - if (!retval) - return -1; - } else - #endif - #ifdef __Pyx_Coroutine_USED - if (__Pyx_Coroutine_Check(yf)) { - retval = __Pyx_Coroutine_Close(yf); - if (!retval) - return -1; - } else - if (__Pyx_CoroutineAwait_CheckExact(yf)) { - retval = __Pyx_CoroutineAwait_Close((__pyx_CoroutineAwaitObject*)yf, NULL); - if (!retval) - return -1; - } else - #endif - #ifdef __Pyx_AsyncGen_USED - if (__pyx_PyAsyncGenASend_CheckExact(yf)) { - retval = __Pyx_async_gen_asend_close(yf, NULL); - } else - if (__pyx_PyAsyncGenAThrow_CheckExact(yf)) { - retval = __Pyx_async_gen_athrow_close(yf, NULL); - } else - #endif - { - PyObject *meth; - gen->is_running = 1; - meth = __Pyx_PyObject_GetAttrStr(yf, __pyx_n_s_close); - if (unlikely(!meth)) { - if (!PyErr_ExceptionMatches(PyExc_AttributeError)) { - PyErr_WriteUnraisable(yf); - } - PyErr_Clear(); - } else { - retval = PyObject_CallFunction(meth, NULL); - Py_DECREF(meth); - if (!retval) - err = -1; - } - gen->is_running = 0; - } - Py_XDECREF(retval); - return err; -} -static PyObject *__Pyx_Generator_Next(PyObject *self) { - __pyx_CoroutineObject *gen = (__pyx_CoroutineObject*) self; - PyObject *yf = gen->yieldfrom; - if (unlikely(gen->is_running)) - return __Pyx_Coroutine_AlreadyRunningError(gen); - if (yf) { - PyObject *ret; - gen->is_running = 1; - #ifdef __Pyx_Generator_USED - if (__Pyx_Generator_CheckExact(yf)) { - ret = __Pyx_Generator_Next(yf); - } else - #endif - #if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x03030000 && (defined(__linux__) || PY_VERSION_HEX >= 0x030600B3) - if (PyGen_CheckExact(yf)) { - ret = __Pyx_PyGen_Send((PyGenObject*)yf, NULL); - } else - #endif - #ifdef __Pyx_Coroutine_USED - if (__Pyx_Coroutine_Check(yf)) { - ret = __Pyx_Coroutine_Send(yf, Py_None); - } else - #endif - ret = Py_TYPE(yf)->tp_iternext(yf); - gen->is_running = 0; - if (likely(ret)) { - return ret; - } - return __Pyx_Coroutine_FinishDelegation(gen); - } - return __Pyx_Coroutine_SendEx(gen, Py_None, 0); -} -static PyObject *__Pyx_Coroutine_Close_Method(PyObject *self, CYTHON_UNUSED PyObject *arg) { - return __Pyx_Coroutine_Close(self); -} -static PyObject *__Pyx_Coroutine_Close(PyObject *self) { - __pyx_CoroutineObject *gen = (__pyx_CoroutineObject *) self; - PyObject *retval, *raised_exception; - PyObject *yf = gen->yieldfrom; - int err = 0; - if (unlikely(gen->is_running)) - return __Pyx_Coroutine_AlreadyRunningError(gen); - if (yf) { - Py_INCREF(yf); - err = __Pyx_Coroutine_CloseIter(gen, yf); - __Pyx_Coroutine_Undelegate(gen); - Py_DECREF(yf); - } - if (err == 0) - PyErr_SetNone(PyExc_GeneratorExit); - retval = __Pyx_Coroutine_SendEx(gen, NULL, 1); - if (unlikely(retval)) { - const char *msg; - Py_DECREF(retval); - if ((0)) { - #ifdef __Pyx_Coroutine_USED - } else if (__Pyx_Coroutine_Check(self)) { - msg = "coroutine ignored GeneratorExit"; - #endif - #ifdef __Pyx_AsyncGen_USED - } else if (__Pyx_AsyncGen_CheckExact(self)) { -#if 
PY_VERSION_HEX < 0x03060000 - msg = "async generator ignored GeneratorExit - might require Python 3.6+ finalisation (PEP 525)"; -#else - msg = "async generator ignored GeneratorExit"; -#endif - #endif - } else { - msg = "generator ignored GeneratorExit"; - } - PyErr_SetString(PyExc_RuntimeError, msg); - return NULL; - } - raised_exception = PyErr_Occurred(); - if (likely(!raised_exception || __Pyx_PyErr_GivenExceptionMatches2(raised_exception, PyExc_GeneratorExit, PyExc_StopIteration))) { - if (raised_exception) PyErr_Clear(); - Py_INCREF(Py_None); - return Py_None; - } - return NULL; -} -static PyObject *__Pyx__Coroutine_Throw(PyObject *self, PyObject *typ, PyObject *val, PyObject *tb, - PyObject *args, int close_on_genexit) { - __pyx_CoroutineObject *gen = (__pyx_CoroutineObject *) self; - PyObject *yf = gen->yieldfrom; - if (unlikely(gen->is_running)) - return __Pyx_Coroutine_AlreadyRunningError(gen); - if (yf) { - PyObject *ret; - Py_INCREF(yf); - if (__Pyx_PyErr_GivenExceptionMatches(typ, PyExc_GeneratorExit) && close_on_genexit) { - int err = __Pyx_Coroutine_CloseIter(gen, yf); - Py_DECREF(yf); - __Pyx_Coroutine_Undelegate(gen); - if (err < 0) - return __Pyx_Coroutine_MethodReturn(self, __Pyx_Coroutine_SendEx(gen, NULL, 0)); - goto throw_here; - } - gen->is_running = 1; - if (0 - #ifdef __Pyx_Generator_USED - || __Pyx_Generator_CheckExact(yf) - #endif - #ifdef __Pyx_Coroutine_USED - || __Pyx_Coroutine_Check(yf) - #endif - ) { - ret = __Pyx__Coroutine_Throw(yf, typ, val, tb, args, close_on_genexit); - #ifdef __Pyx_Coroutine_USED - } else if (__Pyx_CoroutineAwait_CheckExact(yf)) { - ret = __Pyx__Coroutine_Throw(((__pyx_CoroutineAwaitObject*)yf)->coroutine, typ, val, tb, args, close_on_genexit); - #endif - } else { - PyObject *meth = __Pyx_PyObject_GetAttrStr(yf, __pyx_n_s_throw); - if (unlikely(!meth)) { - Py_DECREF(yf); - if (!PyErr_ExceptionMatches(PyExc_AttributeError)) { - gen->is_running = 0; - return NULL; - } - PyErr_Clear(); - __Pyx_Coroutine_Undelegate(gen); - gen->is_running = 0; - goto throw_here; - } - if (likely(args)) { - ret = PyObject_CallObject(meth, args); - } else { - ret = PyObject_CallFunctionObjArgs(meth, typ, val, tb, NULL); - } - Py_DECREF(meth); - } - gen->is_running = 0; - Py_DECREF(yf); - if (!ret) { - ret = __Pyx_Coroutine_FinishDelegation(gen); - } - return __Pyx_Coroutine_MethodReturn(self, ret); - } -throw_here: - __Pyx_Raise(typ, val, tb, NULL); - return __Pyx_Coroutine_MethodReturn(self, __Pyx_Coroutine_SendEx(gen, NULL, 0)); -} -static PyObject *__Pyx_Coroutine_Throw(PyObject *self, PyObject *args) { - PyObject *typ; - PyObject *val = NULL; - PyObject *tb = NULL; - if (!PyArg_UnpackTuple(args, (char *)"throw", 1, 3, &typ, &val, &tb)) - return NULL; - return __Pyx__Coroutine_Throw(self, typ, val, tb, args, 1); -} -static CYTHON_INLINE int __Pyx_Coroutine_traverse_excstate(__Pyx_ExcInfoStruct *exc_state, visitproc visit, void *arg) { - Py_VISIT(exc_state->exc_type); - Py_VISIT(exc_state->exc_value); - Py_VISIT(exc_state->exc_traceback); - return 0; -} -static int __Pyx_Coroutine_traverse(__pyx_CoroutineObject *gen, visitproc visit, void *arg) { - Py_VISIT(gen->closure); - Py_VISIT(gen->classobj); - Py_VISIT(gen->yieldfrom); - return __Pyx_Coroutine_traverse_excstate(&gen->gi_exc_state, visit, arg); -} -static int __Pyx_Coroutine_clear(PyObject *self) { - __pyx_CoroutineObject *gen = (__pyx_CoroutineObject *) self; - Py_CLEAR(gen->closure); - Py_CLEAR(gen->classobj); - Py_CLEAR(gen->yieldfrom); - __Pyx_Coroutine_ExceptionClear(&gen->gi_exc_state); -#ifdef 
__Pyx_AsyncGen_USED - if (__Pyx_AsyncGen_CheckExact(self)) { - Py_CLEAR(((__pyx_PyAsyncGenObject*)gen)->ag_finalizer); - } -#endif - Py_CLEAR(gen->gi_code); - Py_CLEAR(gen->gi_frame); - Py_CLEAR(gen->gi_name); - Py_CLEAR(gen->gi_qualname); - Py_CLEAR(gen->gi_modulename); - return 0; -} -static void __Pyx_Coroutine_dealloc(PyObject *self) { - __pyx_CoroutineObject *gen = (__pyx_CoroutineObject *) self; - PyObject_GC_UnTrack(gen); - if (gen->gi_weakreflist != NULL) - PyObject_ClearWeakRefs(self); - if (gen->resume_label >= 0) { - PyObject_GC_Track(self); -#if PY_VERSION_HEX >= 0x030400a1 && CYTHON_USE_TP_FINALIZE - if (PyObject_CallFinalizerFromDealloc(self)) -#else - Py_TYPE(gen)->tp_del(self); - if (Py_REFCNT(self) > 0) -#endif - { - return; - } - PyObject_GC_UnTrack(self); - } -#ifdef __Pyx_AsyncGen_USED - if (__Pyx_AsyncGen_CheckExact(self)) { - /* We have to handle this case for asynchronous generators - right here, because this code has to be between UNTRACK - and GC_Del. */ - Py_CLEAR(((__pyx_PyAsyncGenObject*)self)->ag_finalizer); - } -#endif - __Pyx_Coroutine_clear(self); - PyObject_GC_Del(gen); -} -static void __Pyx_Coroutine_del(PyObject *self) { - PyObject *error_type, *error_value, *error_traceback; - __pyx_CoroutineObject *gen = (__pyx_CoroutineObject *) self; - __Pyx_PyThreadState_declare - if (gen->resume_label < 0) { - return; - } -#if !CYTHON_USE_TP_FINALIZE - assert(self->ob_refcnt == 0); - __Pyx_SET_REFCNT(self, 1); -#endif - __Pyx_PyThreadState_assign - __Pyx_ErrFetch(&error_type, &error_value, &error_traceback); -#ifdef __Pyx_AsyncGen_USED - if (__Pyx_AsyncGen_CheckExact(self)) { - __pyx_PyAsyncGenObject *agen = (__pyx_PyAsyncGenObject*)self; - PyObject *finalizer = agen->ag_finalizer; - if (finalizer && !agen->ag_closed) { - PyObject *res = __Pyx_PyObject_CallOneArg(finalizer, self); - if (unlikely(!res)) { - PyErr_WriteUnraisable(self); - } else { - Py_DECREF(res); - } - __Pyx_ErrRestore(error_type, error_value, error_traceback); - return; - } - } -#endif - if (unlikely(gen->resume_label == 0 && !error_value)) { -#ifdef __Pyx_Coroutine_USED -#ifdef __Pyx_Generator_USED - if (!__Pyx_Generator_CheckExact(self)) -#endif - { - PyObject_GC_UnTrack(self); -#if PY_MAJOR_VERSION >= 3 || defined(PyErr_WarnFormat) - if (unlikely(PyErr_WarnFormat(PyExc_RuntimeWarning, 1, "coroutine '%.50S' was never awaited", gen->gi_qualname) < 0)) - PyErr_WriteUnraisable(self); -#else - {PyObject *msg; - char *cmsg; - #if CYTHON_COMPILING_IN_PYPY - msg = NULL; - cmsg = (char*) "coroutine was never awaited"; - #else - char *cname; - PyObject *qualname; - qualname = gen->gi_qualname; - cname = PyString_AS_STRING(qualname); - msg = PyString_FromFormat("coroutine '%.50s' was never awaited", cname); - if (unlikely(!msg)) { - PyErr_Clear(); - cmsg = (char*) "coroutine was never awaited"; - } else { - cmsg = PyString_AS_STRING(msg); - } - #endif - if (unlikely(PyErr_WarnEx(PyExc_RuntimeWarning, cmsg, 1) < 0)) - PyErr_WriteUnraisable(self); - Py_XDECREF(msg);} -#endif - PyObject_GC_Track(self); - } -#endif - } else { - PyObject *res = __Pyx_Coroutine_Close(self); - if (unlikely(!res)) { - if (PyErr_Occurred()) - PyErr_WriteUnraisable(self); - } else { - Py_DECREF(res); - } - } - __Pyx_ErrRestore(error_type, error_value, error_traceback); -#if !CYTHON_USE_TP_FINALIZE - assert(Py_REFCNT(self) > 0); - if (--self->ob_refcnt == 0) { - return; - } - { - Py_ssize_t refcnt = Py_REFCNT(self); - _Py_NewReference(self); - __Pyx_SET_REFCNT(self, refcnt); - } -#if CYTHON_COMPILING_IN_CPYTHON - 
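/* Editor-added orientation comment: this CPython-only block runs on the
   non-tp_finalize path of __Pyx_Coroutine_del, after self was resurrected
   above via _Py_NewReference; it asserts the object is still GC-tracked and
   unwinds the debug reference-total and allocation counters that the
   resurrection incremented. */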
assert(PyType_IS_GC(Py_TYPE(self)) && - _Py_AS_GC(self)->gc.gc_refs != _PyGC_REFS_UNTRACKED); - _Py_DEC_REFTOTAL; -#endif -#ifdef COUNT_ALLOCS - --Py_TYPE(self)->tp_frees; - --Py_TYPE(self)->tp_allocs; -#endif -#endif -} -static PyObject * -__Pyx_Coroutine_get_name(__pyx_CoroutineObject *self, CYTHON_UNUSED void *context) -{ - PyObject *name = self->gi_name; - if (unlikely(!name)) name = Py_None; - Py_INCREF(name); - return name; -} -static int -__Pyx_Coroutine_set_name(__pyx_CoroutineObject *self, PyObject *value, CYTHON_UNUSED void *context) -{ - PyObject *tmp; -#if PY_MAJOR_VERSION >= 3 - if (unlikely(value == NULL || !PyUnicode_Check(value))) -#else - if (unlikely(value == NULL || !PyString_Check(value))) -#endif - { - PyErr_SetString(PyExc_TypeError, - "__name__ must be set to a string object"); - return -1; - } - tmp = self->gi_name; - Py_INCREF(value); - self->gi_name = value; - Py_XDECREF(tmp); - return 0; -} -static PyObject * -__Pyx_Coroutine_get_qualname(__pyx_CoroutineObject *self, CYTHON_UNUSED void *context) -{ - PyObject *name = self->gi_qualname; - if (unlikely(!name)) name = Py_None; - Py_INCREF(name); - return name; -} -static int -__Pyx_Coroutine_set_qualname(__pyx_CoroutineObject *self, PyObject *value, CYTHON_UNUSED void *context) -{ - PyObject *tmp; -#if PY_MAJOR_VERSION >= 3 - if (unlikely(value == NULL || !PyUnicode_Check(value))) -#else - if (unlikely(value == NULL || !PyString_Check(value))) -#endif - { - PyErr_SetString(PyExc_TypeError, - "__qualname__ must be set to a string object"); - return -1; - } - tmp = self->gi_qualname; - Py_INCREF(value); - self->gi_qualname = value; - Py_XDECREF(tmp); - return 0; -} -static PyObject * -__Pyx_Coroutine_get_frame(__pyx_CoroutineObject *self, CYTHON_UNUSED void *context) -{ - PyObject *frame = self->gi_frame; - if (!frame) { - if (unlikely(!self->gi_code)) { - Py_RETURN_NONE; - } - frame = (PyObject *) PyFrame_New( - PyThreadState_Get(), /*PyThreadState *tstate,*/ - (PyCodeObject*) self->gi_code, /*PyCodeObject *code,*/ - __pyx_d, /*PyObject *globals,*/ - 0 /*PyObject *locals*/ - ); - if (unlikely(!frame)) - return NULL; - self->gi_frame = frame; - } - Py_INCREF(frame); - return frame; -} -static __pyx_CoroutineObject *__Pyx__Coroutine_New( - PyTypeObject* type, __pyx_coroutine_body_t body, PyObject *code, PyObject *closure, - PyObject *name, PyObject *qualname, PyObject *module_name) { - __pyx_CoroutineObject *gen = PyObject_GC_New(__pyx_CoroutineObject, type); - if (unlikely(!gen)) - return NULL; - return __Pyx__Coroutine_NewInit(gen, body, code, closure, name, qualname, module_name); -} -static __pyx_CoroutineObject *__Pyx__Coroutine_NewInit( - __pyx_CoroutineObject *gen, __pyx_coroutine_body_t body, PyObject *code, PyObject *closure, - PyObject *name, PyObject *qualname, PyObject *module_name) { - gen->body = body; - gen->closure = closure; - Py_XINCREF(closure); - gen->is_running = 0; - gen->resume_label = 0; - gen->classobj = NULL; - gen->yieldfrom = NULL; - gen->gi_exc_state.exc_type = NULL; - gen->gi_exc_state.exc_value = NULL; - gen->gi_exc_state.exc_traceback = NULL; -#if CYTHON_USE_EXC_INFO_STACK - gen->gi_exc_state.previous_item = NULL; -#endif - gen->gi_weakreflist = NULL; - Py_XINCREF(qualname); - gen->gi_qualname = qualname; - Py_XINCREF(name); - gen->gi_name = name; - Py_XINCREF(module_name); - gen->gi_modulename = module_name; - Py_XINCREF(code); - gen->gi_code = code; - gen->gi_frame = NULL; - PyObject_GC_Track(gen); - return gen; -} - -/* PatchModuleWithCoroutine */ -static PyObject* 
__Pyx_Coroutine_patch_module(PyObject* module, const char* py_code) { -#if defined(__Pyx_Generator_USED) || defined(__Pyx_Coroutine_USED) - int result; - PyObject *globals, *result_obj; - globals = PyDict_New(); if (unlikely(!globals)) goto ignore; - result = PyDict_SetItemString(globals, "_cython_coroutine_type", - #ifdef __Pyx_Coroutine_USED - (PyObject*)__pyx_CoroutineType); - #else - Py_None); - #endif - if (unlikely(result < 0)) goto ignore; - result = PyDict_SetItemString(globals, "_cython_generator_type", - #ifdef __Pyx_Generator_USED - (PyObject*)__pyx_GeneratorType); - #else - Py_None); - #endif - if (unlikely(result < 0)) goto ignore; - if (unlikely(PyDict_SetItemString(globals, "_module", module) < 0)) goto ignore; - if (unlikely(PyDict_SetItemString(globals, "__builtins__", __pyx_b) < 0)) goto ignore; - result_obj = PyRun_String(py_code, Py_file_input, globals, globals); - if (unlikely(!result_obj)) goto ignore; - Py_DECREF(result_obj); - Py_DECREF(globals); - return module; -ignore: - Py_XDECREF(globals); - PyErr_WriteUnraisable(module); - if (unlikely(PyErr_WarnEx(PyExc_RuntimeWarning, "Cython module failed to patch module with custom type", 1) < 0)) { - Py_DECREF(module); - module = NULL; - } -#else - py_code++; -#endif - return module; -} - -/* PatchGeneratorABC */ -#ifndef CYTHON_REGISTER_ABCS -#define CYTHON_REGISTER_ABCS 1 -#endif -#if defined(__Pyx_Generator_USED) || defined(__Pyx_Coroutine_USED) -static PyObject* __Pyx_patch_abc_module(PyObject *module); -static PyObject* __Pyx_patch_abc_module(PyObject *module) { - module = __Pyx_Coroutine_patch_module( - module, "" -"if _cython_generator_type is not None:\n" -" try: Generator = _module.Generator\n" -" except AttributeError: pass\n" -" else: Generator.register(_cython_generator_type)\n" -"if _cython_coroutine_type is not None:\n" -" try: Coroutine = _module.Coroutine\n" -" except AttributeError: pass\n" -" else: Coroutine.register(_cython_coroutine_type)\n" - ); - return module; -} -#endif -static int __Pyx_patch_abc(void) { -#if defined(__Pyx_Generator_USED) || defined(__Pyx_Coroutine_USED) - static int abc_patched = 0; - if (CYTHON_REGISTER_ABCS && !abc_patched) { - PyObject *module; - module = PyImport_ImportModule((PY_MAJOR_VERSION >= 3) ? "collections.abc" : "collections"); - if (!module) { - PyErr_WriteUnraisable(NULL); - if (unlikely(PyErr_WarnEx(PyExc_RuntimeWarning, - ((PY_MAJOR_VERSION >= 3) ? 
- "Cython module failed to register with collections.abc module" : - "Cython module failed to register with collections module"), 1) < 0)) { - return -1; - } - } else { - module = __Pyx_patch_abc_module(module); - abc_patched = 1; - if (unlikely(!module)) - return -1; - Py_DECREF(module); - } - module = PyImport_ImportModule("backports_abc"); - if (module) { - module = __Pyx_patch_abc_module(module); - Py_XDECREF(module); - } - if (!module) { - PyErr_Clear(); - } - } -#else - if ((0)) __Pyx_Coroutine_patch_module(NULL, NULL); -#endif - return 0; -} - -/* Generator */ -static PyMethodDef __pyx_Generator_methods[] = { - {"send", (PyCFunction) __Pyx_Coroutine_Send, METH_O, - (char*) PyDoc_STR("send(arg) -> send 'arg' into generator,\nreturn next yielded value or raise StopIteration.")}, - {"throw", (PyCFunction) __Pyx_Coroutine_Throw, METH_VARARGS, - (char*) PyDoc_STR("throw(typ[,val[,tb]]) -> raise exception in generator,\nreturn next yielded value or raise StopIteration.")}, - {"close", (PyCFunction) __Pyx_Coroutine_Close_Method, METH_NOARGS, - (char*) PyDoc_STR("close() -> raise GeneratorExit inside generator.")}, - {0, 0, 0, 0} -}; -static PyMemberDef __pyx_Generator_memberlist[] = { - {(char *) "gi_running", T_BOOL, offsetof(__pyx_CoroutineObject, is_running), READONLY, NULL}, - {(char*) "gi_yieldfrom", T_OBJECT, offsetof(__pyx_CoroutineObject, yieldfrom), READONLY, - (char*) PyDoc_STR("object being iterated by 'yield from', or None")}, - {(char*) "gi_code", T_OBJECT, offsetof(__pyx_CoroutineObject, gi_code), READONLY, NULL}, - {0, 0, 0, 0, 0} -}; -static PyGetSetDef __pyx_Generator_getsets[] = { - {(char *) "__name__", (getter)__Pyx_Coroutine_get_name, (setter)__Pyx_Coroutine_set_name, - (char*) PyDoc_STR("name of the generator"), 0}, - {(char *) "__qualname__", (getter)__Pyx_Coroutine_get_qualname, (setter)__Pyx_Coroutine_set_qualname, - (char*) PyDoc_STR("qualified name of the generator"), 0}, - {(char *) "gi_frame", (getter)__Pyx_Coroutine_get_frame, NULL, - (char*) PyDoc_STR("Frame of the generator"), 0}, - {0, 0, 0, 0, 0} -}; -static PyTypeObject __pyx_GeneratorType_type = { - PyVarObject_HEAD_INIT(0, 0) - "generator", - sizeof(__pyx_CoroutineObject), - 0, - (destructor) __Pyx_Coroutine_dealloc, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_HAVE_FINALIZE, - 0, - (traverseproc) __Pyx_Coroutine_traverse, - 0, - 0, - offsetof(__pyx_CoroutineObject, gi_weakreflist), - 0, - (iternextfunc) __Pyx_Generator_Next, - __pyx_Generator_methods, - __pyx_Generator_memberlist, - __pyx_Generator_getsets, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, -#if CYTHON_USE_TP_FINALIZE - 0, -#else - __Pyx_Coroutine_del, -#endif - 0, -#if CYTHON_USE_TP_FINALIZE - __Pyx_Coroutine_del, -#elif PY_VERSION_HEX >= 0x030400a1 - 0, -#endif -#if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, -#endif -#if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, -#endif -#if PY_VERSION_HEX >= 0x030C0000 - 0, -#endif -#if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 && PY_VERSION_HEX < 0x030a0000 - 0, -#endif -}; -static int __pyx_Generator_init(void) { - __pyx_GeneratorType_type.tp_getattro = __Pyx_PyObject_GenericGetAttrNoDict; - __pyx_GeneratorType_type.tp_iter = PyObject_SelfIter; - __pyx_GeneratorType = __Pyx_FetchCommonType(&__pyx_GeneratorType_type); - if (unlikely(!__pyx_GeneratorType)) { - return -1; - } - return 0; -} - -/* 
CheckBinaryVersion */ -static int __Pyx_check_binary_version(void) { - char ctversion[5]; - int same=1, i, found_dot; - const char* rt_from_call = Py_GetVersion(); - PyOS_snprintf(ctversion, 5, "%d.%d", PY_MAJOR_VERSION, PY_MINOR_VERSION); - found_dot = 0; - for (i = 0; i < 4; i++) { - if (!ctversion[i]) { - same = (rt_from_call[i] < '0' || rt_from_call[i] > '9'); - break; - } - if (rt_from_call[i] != ctversion[i]) { - same = 0; - break; - } - } - if (!same) { - char rtversion[5] = {'\0'}; - char message[200]; - for (i=0; i<4; ++i) { - if (rt_from_call[i] == '.') { - if (found_dot) break; - found_dot = 1; - } else if (rt_from_call[i] < '0' || rt_from_call[i] > '9') { - break; - } - rtversion[i] = rt_from_call[i]; - } - PyOS_snprintf(message, sizeof(message), - "compiletime version %s of module '%.100s' " - "does not match runtime version %s", - ctversion, __Pyx_MODULE_NAME, rtversion); - return PyErr_WarnEx(NULL, message, 1); - } - return 0; -} - -/* InitStrings */ -static int __Pyx_InitStrings(__Pyx_StringTabEntry *t) { - while (t->p) { - #if PY_MAJOR_VERSION < 3 - if (t->is_unicode) { - *t->p = PyUnicode_DecodeUTF8(t->s, t->n - 1, NULL); - } else if (t->intern) { - *t->p = PyString_InternFromString(t->s); - } else { - *t->p = PyString_FromStringAndSize(t->s, t->n - 1); - } - #else - if (t->is_unicode | t->is_str) { - if (t->intern) { - *t->p = PyUnicode_InternFromString(t->s); - } else if (t->encoding) { - *t->p = PyUnicode_Decode(t->s, t->n - 1, t->encoding, NULL); - } else { - *t->p = PyUnicode_FromStringAndSize(t->s, t->n - 1); - } - } else { - *t->p = PyBytes_FromStringAndSize(t->s, t->n - 1); - } - #endif - if (!*t->p) - return -1; - if (PyObject_Hash(*t->p) == -1) - return -1; - ++t; - } - return 0; -} - -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char* c_str) { - return __Pyx_PyUnicode_FromStringAndSize(c_str, (Py_ssize_t)strlen(c_str)); -} -static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject* o) { - Py_ssize_t ignore; - return __Pyx_PyObject_AsStringAndSize(o, &ignore); -} -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT -#if !CYTHON_PEP393_ENABLED -static const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) { - char* defenc_c; - PyObject* defenc = _PyUnicode_AsDefaultEncodedString(o, NULL); - if (!defenc) return NULL; - defenc_c = PyBytes_AS_STRING(defenc); -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - { - char* end = defenc_c + PyBytes_GET_SIZE(defenc); - char* c; - for (c = defenc_c; c < end; c++) { - if ((unsigned char) (*c) >= 128) { - PyUnicode_AsASCIIString(o); - return NULL; - } - } - } -#endif - *length = PyBytes_GET_SIZE(defenc); - return defenc_c; -} -#else -static CYTHON_INLINE const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) { - if (unlikely(__Pyx_PyUnicode_READY(o) == -1)) return NULL; -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - if (likely(PyUnicode_IS_ASCII(o))) { - *length = PyUnicode_GET_LENGTH(o); - return PyUnicode_AsUTF8(o); - } else { - PyUnicode_AsASCIIString(o); - return NULL; - } -#else - return PyUnicode_AsUTF8AndSize(o, length); -#endif -} -#endif -#endif -static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject* o, Py_ssize_t *length) { -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT - if ( -#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - __Pyx_sys_getdefaultencoding_not_ascii && -#endif - PyUnicode_Check(o)) { - return 
__Pyx_PyUnicode_AsStringAndSize(o, length); - } else -#endif -#if (!CYTHON_COMPILING_IN_PYPY) || (defined(PyByteArray_AS_STRING) && defined(PyByteArray_GET_SIZE)) - if (PyByteArray_Check(o)) { - *length = PyByteArray_GET_SIZE(o); - return PyByteArray_AS_STRING(o); - } else -#endif - { - char* result; - int r = PyBytes_AsStringAndSize(o, &result, length); - if (unlikely(r < 0)) { - return NULL; - } else { - return result; - } - } -} -static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject* x) { - int is_true = x == Py_True; - if (is_true | (x == Py_False) | (x == Py_None)) return is_true; - else return PyObject_IsTrue(x); -} -static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject* x) { - int retval; - if (unlikely(!x)) return -1; - retval = __Pyx_PyObject_IsTrue(x); - Py_DECREF(x); - return retval; -} -static PyObject* __Pyx_PyNumber_IntOrLongWrongResultType(PyObject* result, const char* type_name) { -#if PY_MAJOR_VERSION >= 3 - if (PyLong_Check(result)) { - if (PyErr_WarnFormat(PyExc_DeprecationWarning, 1, - "__int__ returned non-int (type %.200s). " - "The ability to return an instance of a strict subclass of int " - "is deprecated, and may be removed in a future version of Python.", - Py_TYPE(result)->tp_name)) { - Py_DECREF(result); - return NULL; - } - return result; - } -#endif - PyErr_Format(PyExc_TypeError, - "__%.4s__ returned non-%.4s (type %.200s)", - type_name, type_name, Py_TYPE(result)->tp_name); - Py_DECREF(result); - return NULL; -} -static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x) { -#if CYTHON_USE_TYPE_SLOTS - PyNumberMethods *m; -#endif - const char *name = NULL; - PyObject *res = NULL; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x) || PyLong_Check(x))) -#else - if (likely(PyLong_Check(x))) -#endif - return __Pyx_NewRef(x); -#if CYTHON_USE_TYPE_SLOTS - m = Py_TYPE(x)->tp_as_number; - #if PY_MAJOR_VERSION < 3 - if (m && m->nb_int) { - name = "int"; - res = m->nb_int(x); - } - else if (m && m->nb_long) { - name = "long"; - res = m->nb_long(x); - } - #else - if (likely(m && m->nb_int)) { - name = "int"; - res = m->nb_int(x); - } - #endif -#else - if (!PyBytes_CheckExact(x) && !PyUnicode_CheckExact(x)) { - res = PyNumber_Int(x); - } -#endif - if (likely(res)) { -#if PY_MAJOR_VERSION < 3 - if (unlikely(!PyInt_Check(res) && !PyLong_Check(res))) { -#else - if (unlikely(!PyLong_CheckExact(res))) { -#endif - return __Pyx_PyNumber_IntOrLongWrongResultType(res, name); - } - } - else if (!PyErr_Occurred()) { - PyErr_SetString(PyExc_TypeError, - "an integer is required"); - } - return res; -} -static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject* b) { - Py_ssize_t ival; - PyObject *x; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_CheckExact(b))) { - if (sizeof(Py_ssize_t) >= sizeof(long)) - return PyInt_AS_LONG(b); - else - return PyInt_AsSsize_t(b); - } -#endif - if (likely(PyLong_CheckExact(b))) { - #if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)b)->ob_digit; - const Py_ssize_t size = Py_SIZE(b); - if (likely(__Pyx_sst_abs(size) <= 1)) { - ival = likely(size) ? 
digits[0] : 0; - if (size == -1) ival = -ival; - return ival; - } else { - switch (size) { - case 2: - if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) { - return (Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -2: - if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case 3: - if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) { - return (Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -3: - if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case 4: - if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) { - return (Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -4: - if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - } - } - #endif - return PyLong_AsSsize_t(b); - } - x = PyNumber_Index(b); - if (!x) return -1; - ival = PyInt_AsSsize_t(x); - Py_DECREF(x); - return ival; -} -static CYTHON_INLINE Py_hash_t __Pyx_PyIndex_AsHash_t(PyObject* o) { - if (sizeof(Py_hash_t) == sizeof(Py_ssize_t)) { - return (Py_hash_t) __Pyx_PyIndex_AsSsize_t(o); -#if PY_MAJOR_VERSION < 3 - } else if (likely(PyInt_CheckExact(o))) { - return PyInt_AS_LONG(o); -#endif - } else { - Py_ssize_t ival; - PyObject *x; - x = PyNumber_Index(o); - if (!x) return -1; - ival = PyInt_AsLong(x); - Py_DECREF(x); - return ival; - } -} -static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b) { - return b ? 
__Pyx_NewRef(Py_True) : __Pyx_NewRef(Py_False); -} -static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t ival) { - return PyInt_FromSize_t(ival); -} - - -#endif /* Py_PYTHON_H */ diff --git a/spaces/DakMak/gradio-start/index.html b/spaces/DakMak/gradio-start/index.html deleted file mode 100644 index e519b94973ea2a63a6d0219b7f193bc6decf158f..0000000000000000000000000000000000000000 --- a/spaces/DakMak/gradio-start/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -qr-1234.png \ No newline at end of file diff --git a/spaces/Daniele/forma-locutionis/README.md b/spaces/Daniele/forma-locutionis/README.md deleted file mode 100644 index f00251f2ca5f5b4c6857446e7ecf909ea727efce..0000000000000000000000000000000000000000 --- a/spaces/Daniele/forma-locutionis/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Dante Thoughts -emoji: 🌌 ⌛ 📔 -colorFrom: green -colorTo: yellow -sdk: gradio -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/Detomo/ai-comic-generation/src/lib/utils.ts b/spaces/Detomo/ai-comic-generation/src/lib/utils.ts deleted file mode 100644 index ec79801fe9cdd7711f6dbef26678a134c634a8be..0000000000000000000000000000000000000000 --- a/spaces/Detomo/ai-comic-generation/src/lib/utils.ts +++ /dev/null @@ -1,6 +0,0 @@ -import { type ClassValue, clsx } from "clsx" -import { twMerge } from "tailwind-merge" - -export function cn(...inputs: ClassValue[]) { - return twMerge(clsx(inputs)) -} diff --git a/spaces/Devap001/top-5_movies_recommendation/README.md b/spaces/Devap001/top-5_movies_recommendation/README.md deleted file mode 100644 index 7dc344f56518619dad2de9117775de459d1d5da3..0000000000000000000000000000000000000000 --- a/spaces/Devap001/top-5_movies_recommendation/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Top-5 Movies Recommendation -emoji: 🏢 -colorFrom: purple -colorTo: yellow -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Dragneel/Recon/app.py b/spaces/Dragneel/Recon/app.py deleted file mode 100644 index b09688da251eb5c3b781bb9683807d7bf59c5d8d..0000000000000000000000000000000000000000 --- a/spaces/Dragneel/Recon/app.py +++ /dev/null @@ -1,79 +0,0 @@ -import streamlit as st -import random -import pickle -from sentiment import get_sentiment - -# Load the data -novel_list = pickle.load(open('data/novel_list.pkl', 'rb')) -novel_list['english_publisher'] = novel_list['english_publisher'].fillna('unknown') -name_list = novel_list['name'].values - -def recommend(novel, slider_start): - try: - similarity = pickle.load(open('data/similarity.pkl', 'rb')) - novel_index = novel_list[novel_list['name'] == novel].index[0] - distances = similarity[novel_index] - new_novel_list = sorted(list(enumerate(distances)), reverse=True, key=lambda x: x[1])[slider_start:slider_start+9] - except IndexError: - return None - - recommend_novel = [{'name': novel_list.iloc[i[0]]['name'], 'image_url': novel_list.iloc[i[0]]['image_url'], 'english_publisher': novel_list.iloc[i[0]]['english_publisher']} for i in new_novel_list] - return recommend_novel - -def main(): - st.title("📚 Novel Recommender System") - - # Input fields and buttons - selected_novel_name = st.text_input("🔎 Choose a Novel to get Recommendations", "Mother of Learning") - slider_value = st.slider("Slider", 1, 100, 1) - - col1, col2, col3 = st.columns(3) # Create three columns to place buttons 
side by side
-    with col1:
-        btn_recommend = st.button("💡 Recommend")
-    with col2:
-        btn_random = st.button("🎲 Random")
-    with col3:
-        btn_analysis = st.button("Analysis")
-
-    if btn_recommend:
-        recommendations = recommend(selected_novel_name, slider_value)
-        if recommendations:
-            for i in range(0, len(recommendations), 3):  # Process 3 recommendations at a time
-                cols = st.columns(3)
-                for j in range(3):
-                    if i + j < len(recommendations):
-                        novel = recommendations[i + j]
-                        with cols[j]:
-                            st.image(novel["image_url"], use_column_width=True)
-                            st.write(novel["name"])
-        else:
-            st.warning("Novel not found in our database. Please try another one.")
-
-    if btn_random:
-        random_novels = random.sample(list(name_list), 9)
-        for i in range(0, len(random_novels), 3):
-            cols = st.columns(3)
-            for j in range(3):
-                if i + j < len(random_novels):
-                    novel_name = random_novels[i + j]
-                    novel_img = novel_list[novel_list['name'] == novel_name]['image_url'].values[0]
-                    with cols[j]:
-                        st.image(novel_img, use_column_width=True)
-                        st.write(novel_name)
-
-    if btn_analysis:
-        try:
-            positive, negative, wordcloud = get_sentiment(selected_novel_name)
-            st.write(f"😊 {positive}% Positive")
-            st.write(f"☹️ {negative}% Negative")
-            st.image(wordcloud)
-        except Exception as e:
-            st.error(f"An error occurred during sentiment analysis: {e}")
-
-if __name__ == "__main__":
-    main()
diff --git a/spaces/EnD-Diffusers/Photography-Test/app.py b/spaces/EnD-Diffusers/Photography-Test/app.py
deleted file mode 100644
index 52a948655132770976dfa9789d3370b15666f5d5..0000000000000000000000000000000000000000
--- a/spaces/EnD-Diffusers/Photography-Test/app.py
+++ /dev/null
@@ -1,15 +0,0 @@
-import os
-import gradio as gr
-
-API_KEY=os.environ.get('HUGGING_FACE_HUB_TOKEN', None)
-
-article = """---
-This space was created using [SD Space Creator](https://huggingface.co/spaces/anzorq/sd-space-creator)."""
-
-gr.Interface.load(
-    name="models/Duskfallcrew/photography-and-landscapes",
-    title="""Photography And Landscapes""",
-    description="""Demo for Photography And Landscapes Stable Diffusion model.""",
-    article=article,
-    api_key=API_KEY,
-    ).queue(concurrency_count=20).launch()
diff --git a/spaces/EuroPython2022/clickbaitonator/fudge/evaluate_topic.py b/spaces/EuroPython2022/clickbaitonator/fudge/evaluate_topic.py
deleted file mode 100644
index ac927bf94d63aa9e0a1ccba3300c8c087d35b34e..0000000000000000000000000000000000000000
--- a/spaces/EuroPython2022/clickbaitonator/fudge/evaluate_topic.py
+++ /dev/null
@@ -1,143 +0,0 @@
-import os
-import random
-import time
-import pickle
-import math
-from argparse import ArgumentParser
-from collections import defaultdict
-import string
-import csv
-
-from tqdm import tqdm
-import numpy as np
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline, set_seed, GPT2Tokenizer, GPT2Model
-
-from data import Dataset
-from model import Model
-from util import save_checkpoint, ProgressMeter, AverageMeter, num_params, pad_mask
-from predict_topic import predict
-from constants import *
-
-
-def main(args):
-    with open(args.dataset_info, 'rb') as rf:
-        dataset_info = pickle.load(rf)
-    gpt_tokenizer = AutoTokenizer.from_pretrained(args.model_string)
-    gpt_tokenizer.add_special_tokens({'pad_token': PAD_TOKEN})
-    gpt_pad_id = gpt_tokenizer.encode(PAD_TOKEN)[0]
-    gpt_model = AutoModelWithLMHead.from_pretrained(args.model_string).to(args.device)
-    gpt_model.eval()
-
-    checkpoint = torch.load(args.ckpt, 
map_location=args.device) - model_args = checkpoint['args'] - conditioning_model = Model(model_args, gpt_pad_id, len(dataset_info.index2word)) # no need to get the glove embeddings when reloading since they're saved in model ckpt anyway - conditioning_model.load_state_dict(checkpoint['state_dict']) - conditioning_model = conditioning_model.to(args.device) - conditioning_model.eval() - if args.verbose: - print("=> loaded checkpoint '{}' (epoch {})" - .format(args.ckpt, checkpoint['epoch'])) - print('num params', num_params(conditioning_model)) - - input_texts, conditions, categories = [], [], [] - - if args.condition_file is not None: - with open(args.condition_file, 'r') as rf: - for line in rf: - input_texts.append(line.strip().split('\t')[0]) - conditions.append(line.strip().split('\t')[1]) - categories.append(None) - for cw in conditions[-1].split(): - assert cw in dataset_info.word2index - else: - prefixes = [] - with open(args.prefix_file, 'r') as rf: - for line in rf: - prefixes.append(line.strip()) - condition_wordlists = [] - for root, _, files in os.walk(args.wordlist_dir): - for fname in files: - words = [] - with open(os.path.join(root, fname), 'r') as rf: - for line in rf: - word = line.strip() - if word in dataset_info.word2index: - words.append(word) - else: - if args.verbose: - print('word not found:', word) - condition_wordlists.append((' '.join(words), fname.split('.')[0])) - for p in prefixes: - for c, category in condition_wordlists: - input_texts.append(p) - conditions.append(c) - categories.append(category) - - all_cr = [] - pair_num = 0 - for input_text, condition_words, category in tqdm(zip(input_texts, conditions, categories), total=len(conditions)): - predict_function = predict - condition_results = [] - for i in range(0, args.sample_size, args.max_sample_batch): - num_samples = min(args.max_sample_batch, args.sample_size - i) - condition_results += predict_function(gpt_model, - gpt_tokenizer, - conditioning_model, - [input_text for _ in range(num_samples)], - condition_words, - dataset_info, - args.precondition_topk, - args.topk, - args.length_cutoff, - condition_lambda=args.condition_lambda, - device=args.device) - all_cr.append((input_text, category, condition_results)) - pair_num += 1 - if args.max_pairs > 0 and pair_num >= args.max_pairs: - break - with open(args.log_file, 'w') as wf: - writer = csv.DictWriter(wf, fieldnames=['category', 'input_text', 'generation']) - writer.writeheader() - for cr_group in all_cr: - for cr in cr_group[2]: - writer.writerow({'category': cr_group[1], 'input_text': cr_group[0], 'generation': cr}) - - -if __name__=='__main__': - parser = ArgumentParser() - - # DATA - parser.add_argument('--ckpt', type=str, required=True) - parser.add_argument('--log_file', type=str, required=True, help='file to write outputs to (csv format)') - parser.add_argument('--dataset_info', type=str, required=True, help='saved dataset info') - parser.add_argument('--model_string', type=str, default='gpt2-medium') - - parser.add_argument('--condition_file', type=str, default=None, help='file of inputs and conditions') - parser.add_argument('--prefix_file', type=str, default=None, help='prefix set') - parser.add_argument('--wordlist_dir', type=str, default=None, help='dir of bow wordlists for categories') - parser.add_argument('--sample_size', type=int, default=3, help='samples per input text-condition pair') - parser.add_argument('--max_sample_batch', type=int, default=3, help='max samples at a time') - parser.add_argument('--max_pairs', type=int, 
default=-1, help='max input-condition pairs, for debugging quickly') - - parser.add_argument('--precondition_topk', type=int, default=200, help='consider top k outputs from gpt at each step before conditioning and re-pruning') - parser.add_argument('--topk', type=int, default=10, help='consider top k outputs from gpt at each step') - parser.add_argument('--condition_lambda', type=float, default=1.0, help='lambda weight on conditioning model') - parser.add_argument('--length_cutoff', type=int, default=80, help='max length') - - parser.add_argument('--seed', type=int, default=1, help='random seed') - parser.add_argument('--device', type=str, default='cuda', choices=['cpu', 'cuda']) - parser.add_argument('--debug', action='store_true', default=False) - parser.add_argument('--verbose', action='store_true', default=False) - - args = parser.parse_args() - - assert (args.condition_file is not None) != (args.prefix_file is not None and args.wordlist_dir is not None) # one of two interfaces for specifying - - random.seed(args.seed) - np.random.seed(args.seed) - torch.manual_seed(args.seed) - - main(args) \ No newline at end of file diff --git a/spaces/EyanAn/vits-uma-genshin-honkai/commons.py b/spaces/EyanAn/vits-uma-genshin-honkai/commons.py deleted file mode 100644 index 40fcc05364d4815971f5c6f9dbb8dcef8e3ec1e9..0000000000000000000000000000000000000000 --- a/spaces/EyanAn/vits-uma-genshin-honkai/commons.py +++ /dev/null @@ -1,172 +0,0 @@ -import math -import torch -from torch.nn import functional as F -import torch.jit - - -def script_method(fn, _rcb=None): - return fn - - -def script(obj, optimize=True, _frames_up=0, _rcb=None): - return obj - - -torch.jit.script_method = script_method -torch.jit.script = script - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. 
* logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): 
- parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/FebryanS/Wakaranai/app.py b/spaces/FebryanS/Wakaranai/app.py deleted file mode 100644 index 708d50e2572cde2fd35ad82a7552883e3f8b1695..0000000000000000000000000000000000000000 --- a/spaces/FebryanS/Wakaranai/app.py +++ /dev/null @@ -1,33 +0,0 @@ -import gradio as gr -import requests -import random - -def yandere(popular: bool, tags, limit=1): - if popular is True: - params = { - 'period': tags, - 'api_version': 2, - 'include_tags': 1 - } - pop = requests.get("https://yande.re/post/popular_recent.json?", params=params).json() - return pop - elif popular is False: - params = { - 'api_version': 2, - 'tags': tags, - 'limit': limit, - 'include_tags': 1 - } - res = requests.get("https://yande.re/post.json", params=params).json() - return res - -iface = gr.Interface( - fn=yandere, - inputs= [ - "checkbox", - "text", - "number" - ], - outputs= "list" -) -iface.launch() \ No newline at end of file diff --git a/spaces/Felladrin/MiniSearch/src/components/ResponseView.tsx b/spaces/Felladrin/MiniSearch/src/components/ResponseView.tsx deleted file mode 100644 index 98545d8760d76edb0bf1457782871aa6e8ed9dc9..0000000000000000000000000000000000000000 --- a/spaces/Felladrin/MiniSearch/src/components/ResponseView.tsx +++ /dev/null @@ -1,58 +0,0 @@ -import Markdown from "markdown-to-jsx"; -import { SearchResults, searchQueryKey } from "../modules/search"; -import { getDisableAiResponseSetting } from "../modules/pubSub"; -import { SearchResultsList } from "./SearchResultsList"; -import { Tooltip } from "react-tooltip"; - -export function ResponseView({ - prompt, - response, - searchResults, - urlsDescriptions, -}: { - prompt: string; - response: string; - searchResults: SearchResults; - urlsDescriptions: Record; -}) { - return ( - <> - -
{ - localStorage.setItem(searchQueryKey, prompt); - window.location.href = window.location.origin; - }} - data-tooltip-id="query-tooltip" - data-tooltip-content="Click to edit the query" - > - {prompt} -
- {!getDisableAiResponseSetting() && ( - <> -
- {response} -
-
- - )} -
- {searchResults.length > 0 ? ( - - ) : ( - "Searching the Web..." - )} -
- - ); -} diff --git a/spaces/FrancisLi/advance_autotrain/Dockerfile b/spaces/FrancisLi/advance_autotrain/Dockerfile deleted file mode 100644 index 94ee76a4f45af463ab7f945633c9258172f9cc80..0000000000000000000000000000000000000000 --- a/spaces/FrancisLi/advance_autotrain/Dockerfile +++ /dev/null @@ -1,2 +0,0 @@ -FROM huggingface/autotrain-advanced:latest -CMD autotrain app --port 7860 diff --git a/spaces/FrankZxShen/vits-fast-finetuning-pcr/text/cleaners.py b/spaces/FrankZxShen/vits-fast-finetuning-pcr/text/cleaners.py deleted file mode 100644 index 263df9c0f7c185290600454abfff464e7f774576..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/vits-fast-finetuning-pcr/text/cleaners.py +++ /dev/null @@ -1,134 +0,0 @@ -import re -from text.japanese import japanese_to_romaji_with_accent, japanese_to_ipa, japanese_to_ipa2, japanese_to_ipa3 -from text.korean import latin_to_hangul, number_to_hangul, divide_hangul, korean_to_lazy_ipa, korean_to_ipa -from text.mandarin import number_to_chinese, chinese_to_bopomofo, latin_to_bopomofo, chinese_to_romaji, chinese_to_lazy_ipa, chinese_to_ipa, chinese_to_ipa2 -from text.sanskrit import devanagari_to_ipa -from text.english import english_to_lazy_ipa, english_to_ipa2, english_to_lazy_ipa2 -from text.thai import num_to_thai, latin_to_thai -# from text.shanghainese import shanghainese_to_ipa -# from text.cantonese import cantonese_to_ipa -# from text.ngu_dialect import ngu_dialect_to_ipa - - -def japanese_cleaners(text): - text = japanese_to_romaji_with_accent(text) - text = re.sub(r'([A-Za-z])$', r'\1.', text) - return text - - -def japanese_cleaners2(text): - return japanese_cleaners(text).replace('ts', 'ʦ').replace('...', '…') - - -def korean_cleaners(text): - '''Pipeline for Korean text''' - text = latin_to_hangul(text) - text = number_to_hangul(text) - text = divide_hangul(text) - text = re.sub(r'([\u3131-\u3163])$', r'\1.', text) - return text - - -# def chinese_cleaners(text): -# '''Pipeline for Chinese text''' -# text = number_to_chinese(text) -# text = chinese_to_bopomofo(text) -# text = latin_to_bopomofo(text) -# text = re.sub(r'([ˉˊˇˋ˙])$', r'\1。', text) -# return text - -def chinese_cleaners(text): - from pypinyin import Style, pinyin - text = text.replace("[ZH]", "") - phones = [phone[0] for phone in pinyin(text, style=Style.TONE3)] - return ' '.join(phones) - - -def zh_ja_mixture_cleaners(text): - text = re.sub(r'\[ZH\](.*?)\[ZH\]', - lambda x: chinese_to_romaji(x.group(1))+' ', text) - text = re.sub(r'\[JA\](.*?)\[JA\]', lambda x: japanese_to_romaji_with_accent( - x.group(1)).replace('ts', 'ʦ').replace('u', 'ɯ').replace('...', '…')+' ', text) - text = re.sub(r'\s+$', '', text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text - - -def sanskrit_cleaners(text): - text = text.replace('॥', '।').replace('ॐ', 'ओम्') - text = re.sub(r'([^।])$', r'\1।', text) - return text - - -def cjks_cleaners(text): - text = re.sub(r'\[ZH\](.*?)\[ZH\]', - lambda x: chinese_to_lazy_ipa(x.group(1))+' ', text) - text = re.sub(r'\[JA\](.*?)\[JA\]', - lambda x: japanese_to_ipa(x.group(1))+' ', text) - text = re.sub(r'\[KO\](.*?)\[KO\]', - lambda x: korean_to_lazy_ipa(x.group(1))+' ', text) - text = re.sub(r'\[SA\](.*?)\[SA\]', - lambda x: devanagari_to_ipa(x.group(1))+' ', text) - text = re.sub(r'\[EN\](.*?)\[EN\]', - lambda x: english_to_lazy_ipa(x.group(1))+' ', text) - text = re.sub(r'\s+$', '', text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text - - -def cjke_cleaners(text): - text = re.sub(r'\[ZH\](.*?)\[ZH\]', 
lambda x: chinese_to_lazy_ipa(x.group(1)).replace( - 'ʧ', 'tʃ').replace('ʦ', 'ts').replace('ɥan', 'ɥæn')+' ', text) - text = re.sub(r'\[JA\](.*?)\[JA\]', lambda x: japanese_to_ipa(x.group(1)).replace('ʧ', 'tʃ').replace( - 'ʦ', 'ts').replace('ɥan', 'ɥæn').replace('ʥ', 'dz')+' ', text) - text = re.sub(r'\[KO\](.*?)\[KO\]', - lambda x: korean_to_ipa(x.group(1))+' ', text) - text = re.sub(r'\[EN\](.*?)\[EN\]', lambda x: english_to_ipa2(x.group(1)).replace('ɑ', 'a').replace( - 'ɔ', 'o').replace('ɛ', 'e').replace('ɪ', 'i').replace('ʊ', 'u')+' ', text) - text = re.sub(r'\s+$', '', text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text - - -def cjke_cleaners2(text): - text = re.sub(r'\[ZH\](.*?)\[ZH\]', - lambda x: chinese_to_ipa(x.group(1))+' ', text) - text = re.sub(r'\[JA\](.*?)\[JA\]', - lambda x: japanese_to_ipa2(x.group(1))+' ', text) - text = re.sub(r'\[KO\](.*?)\[KO\]', - lambda x: korean_to_ipa(x.group(1))+' ', text) - text = re.sub(r'\[EN\](.*?)\[EN\]', - lambda x: english_to_ipa2(x.group(1))+' ', text) - text = re.sub(r'\s+$', '', text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text - - -def thai_cleaners(text): - text = num_to_thai(text) - text = latin_to_thai(text) - return text - - -# def shanghainese_cleaners(text): -# text = shanghainese_to_ipa(text) -# text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) -# return text - - -# def chinese_dialect_cleaners(text): -# text = re.sub(r'\[ZH\](.*?)\[ZH\]', -# lambda x: chinese_to_ipa2(x.group(1))+' ', text) -# text = re.sub(r'\[JA\](.*?)\[JA\]', -# lambda x: japanese_to_ipa3(x.group(1)).replace('Q', 'ʔ')+' ', text) -# text = re.sub(r'\[SH\](.*?)\[SH\]', lambda x: shanghainese_to_ipa(x.group(1)).replace('1', '˥˧').replace('5', -# '˧˧˦').replace('6', '˩˩˧').replace('7', '˥').replace('8', '˩˨').replace('ᴀ', 'ɐ').replace('ᴇ', 'e')+' ', text) -# text = re.sub(r'\[GD\](.*?)\[GD\]', -# lambda x: cantonese_to_ipa(x.group(1))+' ', text) -# text = re.sub(r'\[EN\](.*?)\[EN\]', -# lambda x: english_to_lazy_ipa2(x.group(1))+' ', text) -# text = re.sub(r'\[([A-Z]{2})\](.*?)\[\1\]', lambda x: ngu_dialect_to_ipa(x.group(2), x.group( -# 1)).replace('ʣ', 'dz').replace('ʥ', 'dʑ').replace('ʦ', 'ts').replace('ʨ', 'tɕ')+' ', text) -# text = re.sub(r'\s+$', '', text) -# text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) -# return text diff --git a/spaces/Frorozcol/financIA/app.py b/spaces/Frorozcol/financIA/app.py deleted file mode 100644 index 96d7d32558efae1bbc8608a0e44c3317726718ca..0000000000000000000000000000000000000000 --- a/spaces/Frorozcol/financIA/app.py +++ /dev/null @@ -1,24 +0,0 @@ -import streamlit as st -from src import get_predict -def main(): - st.title("FinacIA") - st.write("Este es un DEMO del proyecto FinacIA, una aplicación que permite analizar noticias financieras y predecir el sentimiento del mercado") - st.write("En las páginas posteriores se encuentra la documentación del proyecto, el entrenamiento del modelo, el análisis exploratorio de los datos y tambien la arquitectura del proyecto") - - st.write("Ingresa el titulo de una noticia y te diremos si es positiva, negativa o neutral") - - texto = st.text_input("Ingresa un texto") - if not texto: - texto = "Una de cada diez empresas cerrará y la mitad entrará en pérdidas" - resultado = get_predict(texto) - st.write("Texto:", texto) - st.write("Resultado:", resultado) - - st.subheader("Creadores:") - st.write("👽 Alejandro Bedoya Taborda") - st.write("🕵 Alejandro Noriega Soto") - st.write("👨‍🏭 Cristian Camilo Henao Rojas") - st.write("💂 Fredy Alberto Orozco 
Loaiza") - st.write("👨‍⚖ Ronald Gabriel Palencia") -if __name__ == '__main__': - main() diff --git a/spaces/GaenKoki/voicevox/voicevox_engine/metas/__init__.py b/spaces/GaenKoki/voicevox/voicevox_engine/metas/__init__.py deleted file mode 100644 index 4907fdf38d604dc7949dd361812938afd9db0abb..0000000000000000000000000000000000000000 --- a/spaces/GaenKoki/voicevox/voicevox_engine/metas/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -from . import Metas, MetasStore - -__all__ = [ - "Metas", - "MetasStore", -] diff --git a/spaces/Gaurav261/medical_image_classification/README.md b/spaces/Gaurav261/medical_image_classification/README.md deleted file mode 100644 index 6132eeb7a955e3979a7a083048826c4eea365f14..0000000000000000000000000000000000000000 --- a/spaces/Gaurav261/medical_image_classification/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Medical Image Classification -emoji: 📊 -colorFrom: indigo -colorTo: green -sdk: gradio -sdk_version: 3.4.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/GeekedReals/jonatasgrosman-wav2vec2-large-xlsr-53-english/README.md b/spaces/GeekedReals/jonatasgrosman-wav2vec2-large-xlsr-53-english/README.md deleted file mode 100644 index 7edc261253e49a5a37af9bb4ce1e87f8f9b899b8..0000000000000000000000000000000000000000 --- a/spaces/GeekedReals/jonatasgrosman-wav2vec2-large-xlsr-53-english/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Jonatasgrosman Wav2vec2 Large Xlsr 53 English -emoji: 🐢 -colorFrom: purple -colorTo: gray -sdk: gradio -sdk_version: 3.37.0 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/models/streams/one_stream_attention_lang_fusion.py b/spaces/Gen-Sim/Gen-Sim/cliport/models/streams/one_stream_attention_lang_fusion.py deleted file mode 100644 index 0e433a6d60d933ae259ee84ef2d0371bdfb3a20e..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/cliport/models/streams/one_stream_attention_lang_fusion.py +++ /dev/null @@ -1,23 +0,0 @@ -"""Attention module.""" - -import cliport.models as models -from cliport.models.streams.two_stream_attention_lang_fusion import TwoStreamAttentionLangFusion - - -class OneStreamAttentionLangFusion(TwoStreamAttentionLangFusion): - """Attention (a.k.a Pick) module with language features fused at the bottleneck.""" - - def __init__(self, stream_fcn, in_shape, n_rotations, preprocess, cfg, device): - self.fusion_type = cfg['train']['attn_stream_fusion_type'] - super().__init__(stream_fcn, in_shape, n_rotations, preprocess, cfg, device) - - def _build_nets(self): - stream_one_fcn, _ = self.stream_fcn - stream_one_model = models.names[stream_one_fcn] - - self.attn_stream_one = stream_one_model(self.in_shape, 1, self.cfg, self.device, self.preprocess) - print(f"Attn FCN: {stream_one_fcn}") - - def attend(self, x, l): - x = self.attn_stream_one(x, l) - return x \ No newline at end of file diff --git a/spaces/Gen-Sim/Gen-Sim/misc/prepare_finetune_gpt.py b/spaces/Gen-Sim/Gen-Sim/misc/prepare_finetune_gpt.py deleted file mode 100644 index 784923bf4380983904d1db86d8a4c4ff363e86ad..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/misc/prepare_finetune_gpt.py +++ /dev/null @@ -1,115 +0,0 @@ -import cv2 -import numpy as np -import IPython -import os - -import openai -import pandas as pd -import json -import 
subprocess
-
-
-# create dataset by loading the python file
-def format_prompt(task_name):
-    instruction_text = open('misc/finetune_instructions_prompt.txt').read()
-    instruction_text = instruction_text.replace("TASK_NAME_TEMPLATE", task_name)
-    prompt_text = "\n Instructions: " + instruction_text + "\n\n###\n\n"
-    return prompt_text
-
-def format_completion(task_name, descriptions, code):
-    completion_text = f" \nDescriptions: \n ```{task_name}: {descriptions} \n\n###\n\n"
-    completion_text += "Implementation: \n ```python\n" + code + "<|endoftext|>"
-    return completion_text
-
-# test if using the finetuned model can generate better task code than the base model
-# https://platform.openai.com/docs/guides/fine-tuning
-data_path = 'prompts/data'
-def load_offline_memory():
-    """get the current task descriptions, assets, and code"""
-    base_task_path = os.path.join(data_path, "base_tasks.json")
-    base_asset_path = os.path.join(data_path, "base_assets.json")
-    base_task_code_path = os.path.join(data_path, "base_task_codes.json")
-
-    base_tasks = json.load(open(base_task_path))
-    base_assets = json.load(open(base_asset_path))
-    base_task_codes = json.load(open(base_task_code_path))
-
-    generated_task_path = os.path.join(data_path, "generated_tasks.json")
-    generated_asset_path = os.path.join(data_path, "generated_assets.json")
-    generated_task_code_path = os.path.join(data_path, "generated_task_codes.json")
-
-    # print("original base task num:", len(base_tasks))
-    base_tasks.update(json.load(open(generated_task_path)))
-    # base_assets.update(json.load(open(generated_asset_path)))
-
-    for task in json.load(open(generated_task_code_path)):
-        if task not in base_task_codes:
-            base_task_codes.append(task)
-
-    # print("current base task num:", len(base_tasks))
-    return base_tasks, base_assets, base_task_codes
-
-
-code_buffer = {}
-base_tasks, base_assets, base_task_codes = load_offline_memory()
-TOTAL_DATASET_TOKENS = 0
-
-added_tasks = []
-df = pd.DataFrame()
-for task_file in base_task_codes:
-    ## TODO(lirui): consider adding more structure here. 
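-    # Filenames such as "align_rope.py" are normalized below to task names such
-    # as "align-rope". Each resulting (prompt, completion) row follows the
-    # legacy OpenAI fine-tunes conventions set up by format_prompt and
-    # format_completion above: the prompt ends with the "\n\n###\n\n" separator
-    # and the completion ends with the "<|endoftext|>" stop sequence.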
-    task_name = task_file[:-3].replace("_", "-")
-    if task_name in added_tasks:
-        continue
-
-    if task_name not in base_tasks:
-        print(f"{task_name} missing")
-        continue
-
-    added_tasks.append(task_name)
-    task_description = base_tasks[task_name]
-
-    if os.path.exists("cliport/tasks/" + task_file):
-        task_code = open("cliport/tasks/" + task_file).read()
-
-    # the generated cliport task path
-    elif os.path.exists("cliport/generated_tasks/" + task_file):
-        task_code = open("cliport/generated_tasks/" + task_file).read()
-
-    prompt = format_prompt(task_name)
-    completion = format_completion(task_name, task_description, task_code)
-
-    # rough estimates
-    TOTAL_DATASET_TOKENS += len(prompt) / 4
-    TOTAL_DATASET_TOKENS += len(completion) / 4
-    new_row = { 'prompt': prompt,
-                'completion': completion}
-    new_row = pd.DataFrame([new_row])
-    df = pd.concat([df, new_row], axis=0, ignore_index=True)
-
-df.to_csv("misc/finetune_data.csv", index=False)
-print("======================================")
-print("estimate number of tokens:", TOTAL_DATASET_TOKENS)
-print("estimate price for davinci:", TOTAL_DATASET_TOKENS / 1000 * 0.03)
-print("total number of instructions:", len(df))
-print("======================================")
-# actual finetuning
-
-## finetune_data.csv --> finetune_data_prepared.jsonl
-subprocess.run('openai tools fine_tunes.prepare_data --file misc/finetune_data.csv --quiet'.split())

-print("now you can run \n openai api fine_tunes.create --training_file misc/finetune_data_prepared.jsonl --model davinci --suffix 'GenSim'")
-# Model Training Usage
-# Ada     $0.0004 / 1K tokens    $0.0016 / 1K tokens
-# Curie   $0.0030 / 1K tokens    $0.0120 / 1K tokens
-# Davinci $0.0300 / 1K tokens    $0.1200 / 1K tokens
-
-# ## Start fine-tuning
-# openai api fine_tunes.create --training_file misc/finetune_data_prepared.jsonl --model davinci --suffix "GenSim"
-# subprocess.run('openai api fine_tunes.create --training_file misc/finetune_data_prepared.jsonl --model davinci --suffix "GenSim"'.split())
-
-
-# Tracking Finetune Status
-# openai api fine_tunes.follow -i 
-# openai api fine_tunes.get -i 
-# openai wandb sync
\ No newline at end of file
diff --git a/spaces/Geonmo/nllb-translation-demo/app.py b/spaces/Geonmo/nllb-translation-demo/app.py
deleted file mode 100644
index d0e56d0ea7c003b0eda096331214e4c4cf2a208a..0000000000000000000000000000000000000000
--- a/spaces/Geonmo/nllb-translation-demo/app.py
+++ /dev/null
@@ -1,85 +0,0 @@
-import os
-import torch
-import gradio as gr
-import time
-from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
-from flores200_codes import flores_codes
-
-
-def load_models():
-    # build model and tokenizer
-    model_name_dict = {'nllb-distilled-600M': 'facebook/nllb-200-distilled-600M',
-                  #'nllb-1.3B': 'facebook/nllb-200-1.3B',
-                  #'nllb-distilled-1.3B': 'facebook/nllb-200-distilled-1.3B',
-                  #'nllb-3.3B': 'facebook/nllb-200-3.3B',
-                  }
-
-    model_dict = {}
-
-    for call_name, real_name in model_name_dict.items():
-        print('\tLoading model: %s' % call_name)
-        model = AutoModelForSeq2SeqLM.from_pretrained(real_name)
-        tokenizer = AutoTokenizer.from_pretrained(real_name)
-        model_dict[call_name+'_model'] = model
-        model_dict[call_name+'_tokenizer'] = tokenizer
-
-    return model_dict
-
-
-def translation(source, target, text):
-    if len(model_dict) == 2:
-        model_name = 'nllb-distilled-600M'
-
-    start_time = time.time()
-    source = flores_codes[source]
-    target = flores_codes[target]
-
-    model = model_dict[model_name + '_model']
-    tokenizer = model_dict[model_name + 
'_tokenizer'] - - translator = pipeline('translation', model=model, tokenizer=tokenizer, src_lang=source, tgt_lang=target) - output = translator(text, max_length=400) - - end_time = time.time() - - output = output[0]['translation_text'] - result = {'inference_time': end_time - start_time, - 'source': source, - 'target': target, - 'result': output} - return result - - -if __name__ == '__main__': - print('\tinit models') - - global model_dict - - model_dict = load_models() - - # define gradio demo - lang_codes = list(flores_codes.keys()) - #inputs = [gr.inputs.Radio(['nllb-distilled-600M', 'nllb-1.3B', 'nllb-distilled-1.3B'], label='NLLB Model'), - inputs = [gr.inputs.Dropdown(lang_codes, default='English', label='Source'), - gr.inputs.Dropdown(lang_codes, default='Korean', label='Target'), - gr.inputs.Textbox(lines=5, label="Input text"), - ] - - outputs = gr.outputs.JSON() - - title = "NLLB distilled 600M demo" - - demo_status = "Demo is running on CPU" - description = f"Details: https://github.com/facebookresearch/fairseq/tree/nllb. {demo_status}" - examples = [ - ['English', 'Korean', 'Hi. nice to meet you'] - ] - - gr.Interface(translation, - inputs, - outputs, - title=title, - description=description, - ).launch() - - diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/cascade_rpn/README.md b/spaces/Gradio-Blocks/uniformer_image_detection/configs/cascade_rpn/README.md deleted file mode 100644 index 2b0c6de8f2508ca3f1bdd3d4a203a71dfcffce3a..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/cascade_rpn/README.md +++ /dev/null @@ -1,29 +0,0 @@ -# Cascade RPN - -[ALGORITHM] - -We provide the code for reproducing experiment results of [Cascade RPN](https://arxiv.org/abs/1909.06720). 
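-
-As a quick smoke test after downloading a config/checkpoint pair from the tables below, the models can be run through the standard MMDetection inference API. The sketch below is illustrative only: the config and checkpoint filenames are placeholders, so substitute the actual files you downloaded.
-
-```python
-from mmdet.apis import init_detector, inference_detector
-
-# Placeholder paths: point these at the config from this folder and the
-# checkpoint downloaded from the links in the tables below.
-config_file = 'configs/cascade_rpn/crpn_faster_rcnn_r50_caffe_fpn_1x_coco.py'
-checkpoint_file = 'crpn_faster_rcnn_r50_caffe_fpn_1x_coco.pth'
-
-model = init_detector(config_file, checkpoint_file, device='cuda:0')
-result = inference_detector(model, 'demo.jpg')
-```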
- -``` -@inproceedings{vu2019cascade, - title={Cascade RPN: Delving into High-Quality Region Proposal Network with Adaptive Convolution}, - author={Vu, Thang and Jang, Hyunjun and Pham, Trung X and Yoo, Chang D}, - booktitle={Conference on Neural Information Processing Systems (NeurIPS)}, - year={2019} -} -``` - -## Benchmark - -### Region proposal performance - -| Method | Backbone | Style | Mem (GB) | Train time (s/iter) | Inf time (fps) | AR 1000 | Download | -|:------:|:--------:|:-----:|:--------:|:-------------------:|:--------------:|:-------:|:--------------------------------------:| -| CRPN | R-50-FPN | caffe | - | - | - | 72.0 | [model](https://drive.google.com/file/d/1qxVdOnCgK-ee7_z0x6mvAir_glMu2Ihi/view?usp=sharing) | - -### Detection performance - -| Method | Proposal | Backbone | Style | Schedule | Mem (GB) | Train time (s/iter) | Inf time (fps) | box AP | Download | -|:-------------:|:-----------:|:--------:|:-------:|:--------:|:--------:|:-------------------:|:--------------:|:------:|:--------------------------------------------:| -| Fast R-CNN | Cascade RPN | R-50-FPN | caffe | 1x | - | - | - | 39.9 | [model](https://drive.google.com/file/d/1NmbnuY5VHi8I9FE8xnp5uNvh2i-t-6_L/view?usp=sharing) | -| Faster R-CNN | Cascade RPN | R-50-FPN | caffe | 1x | - | - | - | 40.4 | [model](https://drive.google.com/file/d/1dS3Q66qXMJpcuuQgDNkLp669E5w1UMuZ/view?usp=sharing) | diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_r101_fpn_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_r101_fpn_1x_coco.py deleted file mode 100644 index d2edab113649c38cac3c7dc3ff425462f7c40ffd..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_r101_fpn_1x_coco.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './faster_rcnn_r50_fpn_1x_coco.py' -model = dict(pretrained='torchvision://resnet101', backbone=dict(depth=101)) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/mask_rcnn/mask_rcnn_r50_caffe_fpn_mstrain-poly_2x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/mask_rcnn/mask_rcnn_r50_caffe_fpn_mstrain-poly_2x_coco.py deleted file mode 100644 index 4f7150ca718e2ead46eb63e74b6be06f50aa0fce..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/mask_rcnn/mask_rcnn_r50_caffe_fpn_mstrain-poly_2x_coco.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = './mask_rcnn_r50_caffe_fpn_mstrain-poly_1x_coco.py' -# learning policy -lr_config = dict(step=[16, 23]) -runner = dict(type='EpochBasedRunner', max_epochs=24) diff --git a/spaces/GrandaddyShmax/MusicGen_Plus/audiocraft/models/__init__.py b/spaces/GrandaddyShmax/MusicGen_Plus/audiocraft/models/__init__.py deleted file mode 100644 index 92c7a48a200eba455044cd66e0d2c1efe6494f5c..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/MusicGen_Plus/audiocraft/models/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -# flake8: noqa -from .musicgen import MusicGen -from .lm import LMModel -from .encodec import CompressionModel, EncodecModel diff --git a/spaces/GroveStreet/GTA_SOVITS/vencoder/hubert/__init__.py b/spaces/GroveStreet/GTA_SOVITS/vencoder/hubert/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Hallucinate/demo/ldm/models/diffusion/__init__.py b/spaces/Hallucinate/demo/ldm/models/diffusion/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/HaloMaster/chinesesummary/fengshen/models/transformer_utils.py b/spaces/HaloMaster/chinesesummary/fengshen/models/transformer_utils.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/multilingual/data_scripts/download_iwslt_and_extract.sh b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/multilingual/data_scripts/download_iwslt_and_extract.sh deleted file mode 100644 index ca3591b3db1715f136773d62e4b9b9ede97d436c..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/multilingual/data_scripts/download_iwslt_and_extract.sh +++ /dev/null @@ -1,225 +0,0 @@ -#!/bin/bash -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -#echo 'Cloning Moses github repository (for tokenization scripts)...' -#git clone https://github.com/moses-smt/mosesdecoder.git - -if [ -z $WORKDIR_ROOT ] ; -then - echo "please specify your working directory root in environment variable WORKDIR_ROOT. Exitting..." - exit -fi - - - -data_root=${WORKDIR_ROOT}/iwsltv2 -DESTDIR=${WORKDIR_ROOT}/ML50/raw - - -langs="ar_AR it_IT nl_XX ko_KR vi_VN" -echo "data_root: $data_root" - -download_path=${data_root}/downloads -raw=${DESTDIR} -tmp=${data_root}/tmp -orig=${data_root}/orig - -mkdir -p $download_path $orig $raw $tmp -####################### -download_iwslt(){ - iwslt_key=$1 - src=$2 - tgt=$3 - save_prefix=$4 - pushd ${download_path} - if [[ ! -f ${save_prefix}$src-$tgt.tgz ]]; then - wget https://wit3.fbk.eu/archive/${iwslt_key}/texts/$src/$tgt/$src-$tgt.tgz -O ${save_prefix}$src-$tgt.tgz - [ $? -eq 0 ] && return 0 - fi - popd -} - -extract_iwslt(){ - src=$1 - tgt=$2 - prefix=$3 - pushd $orig - tar zxvf ${download_path}/${prefix}$src-${tgt}.tgz - popd -} - -generate_train(){ - lsrc=$1 - ltgt=$2 - src=${lsrc:0:2} - tgt=${ltgt:0:2} - for ll in $lsrc $ltgt; do - l=${ll:0:2} - f="$orig/*/train.tags.$src-$tgt.$l" - f_raw=$raw/train.$lsrc-$ltgt.$ll - cat $f \ - | grep -v '' \ - | grep -v '' \ - | grep -v '' \ - | grep -v '' \ - | grep -v '' \ - | sed -e 's///g' \ - | sed -e 's/<\/title>//g' \ - | sed -e 's/<description>//g' \ - | sed -e 's/<\/description>//g' \ - | sed 's/^\s*//g' \ - | sed 's/\s*$//g' \ - > $f_raw - [ $? 
-eq 0 ] && echo "extracted $f to $f_raw"
-    done
-    return 0
-}
-
-convert_valid_test(){
-    src=$1
-    tgt=$2
-    for l in $src $tgt; do
-        echo "lang: ${l}"
-        for o in `ls $orig/*/IWSLT*.TED*.$src-$tgt.$l.xml`; do
-            fname=${o##*/}
-            f=$tmp/${fname%.*}
-            echo "$o => $f"
-            grep '<seg id' $o \
-            | sed -e 's/<seg id="[0-9]*">\s*//g' \
-            | sed -e 's/\s*<\/seg>\s*//g' \
-            | sed -e "s/\’/\'/g" \
-            > $f
-            echo ""
-        done
-    done
-}
-
-generate_subset(){
-    lsrc=$1
-    ltgt=$2
-    src=${lsrc:0:2}
-    tgt=${ltgt:0:2}
-    subset=$3
-    prefix=$4
-    for ll in $lsrc $ltgt; do
-        l=${ll:0:2}
-        f=$tmp/$prefix.${src}-${tgt}.$l
-        if [[ -f $f ]]; then
-            cp $f $raw/$subset.${lsrc}-$ltgt.${ll}
-        fi
-    done
-}
-#################
-
-echo "downloading iwslt training and dev data"
-# using multilingual for it, nl
-download_iwslt "2017-01-trnmted" DeEnItNlRo DeEnItNlRo
-download_iwslt "2017-01-trnted" ar en
-download_iwslt "2017-01-trnted" en ar
-download_iwslt "2017-01-trnted" ko en
-download_iwslt "2017-01-trnted" en ko
-download_iwslt "2015-01" vi en
-download_iwslt "2015-01" en vi
-
-echo "downloading iwslt test data"
-download_iwslt "2017-01-mted-test" it en "test."
-download_iwslt "2017-01-mted-test" en it "test."
-download_iwslt "2017-01-mted-test" nl en "test."
-download_iwslt "2017-01-mted-test" en nl "test."
-
-download_iwslt "2017-01-ted-test" ar en "test."
-download_iwslt "2017-01-ted-test" en ar "test."
-download_iwslt "2017-01-ted-test" ko en "test."
-download_iwslt "2017-01-ted-test" en ko "test."
-download_iwslt "2015-01-test" vi en "test."
-download_iwslt "2015-01-test" en vi "test."
-
-echo "extract training data tar balls"
-extract_iwslt DeEnItNlRo DeEnItNlRo
-extract_iwslt ar en
-extract_iwslt en ar
-extract_iwslt ko en
-extract_iwslt en ko
-extract_iwslt vi en
-extract_iwslt en vi
-
-
-echo "extracting iwslt test data"
-for lang in $langs; do
-    l=${lang:0:2}
-    extract_iwslt $l en "test."
-    extract_iwslt en $l "test." 
-done
-
-echo "convert dev and test data"
-for lang in $langs; do
-    s_lang=${lang:0:2}
-    convert_valid_test $s_lang en
-    convert_valid_test en $s_lang
-done
-
-
-
-echo "creating training data into $raw"
-for lang in $langs; do
-    generate_train $lang en_XX
-    generate_train en_XX $lang
-done
-
-echo "creating iwslt dev data into raw"
-generate_subset en_XX vi_VN valid "IWSLT15.TED.tst2013"
-generate_subset vi_VN en_XX valid "IWSLT15.TED.tst2013"
-
-generate_subset en_XX ar_AR valid "IWSLT17.TED.tst2016"
-generate_subset ar_AR en_XX valid "IWSLT17.TED.tst2016"
-generate_subset en_XX ko_KR valid "IWSLT17.TED.tst2016"
-generate_subset ko_KR en_XX valid "IWSLT17.TED.tst2016"
-
-
-generate_subset en_XX it_IT valid "IWSLT17.TED.tst2010"
-generate_subset it_IT en_XX valid "IWSLT17.TED.tst2010"
-generate_subset en_XX nl_XX valid "IWSLT17.TED.tst2010"
-generate_subset nl_XX en_XX valid "IWSLT17.TED.tst2010"
-
-echo "creating iwslt test data into raw"
-generate_subset en_XX vi_VN test "IWSLT15.TED.tst2015"
-generate_subset vi_VN en_XX test "IWSLT15.TED.tst2015"
-
-generate_subset en_XX ar_AR test "IWSLT17.TED.tst2017"
-generate_subset ar_AR en_XX test "IWSLT17.TED.tst2017"
-generate_subset en_XX ko_KR test "IWSLT17.TED.tst2017"
-generate_subset ko_KR en_XX test "IWSLT17.TED.tst2017"
-
-generate_subset en_XX it_IT test "IWSLT17.TED.tst2017.mltlng"
-generate_subset it_IT en_XX test "IWSLT17.TED.tst2017.mltlng"
-generate_subset en_XX nl_XX test "IWSLT17.TED.tst2017.mltlng"
-generate_subset nl_XX en_XX test "IWSLT17.TED.tst2017.mltlng"
-
-# normalize iwslt directions into x-en
-pushd $raw
-for lang in $langs; do
-    for split in test valid; do
-        x_en_f1=$split.$lang-en_XX.en_XX
-        x_en_f2=$split.$lang-en_XX.${lang}
-
-        en_x_f1=$split.en_XX-$lang.en_XX
-        en_x_f2=$split.en_XX-$lang.${lang}
-
-        if [ -f $en_x_f1 ] && [ ! -f $x_en_f1 ]; then
-            echo "cp $en_x_f1 $x_en_f1"
-            cp $en_x_f1 $x_en_f1
-        fi
-        if [ -f $en_x_f2 ] && [ ! 
-f $x_en_f2 ]; then - echo "cp $en_x_f2 $x_en_f2" - cp $en_x_f2 $x_en_f2 - fi - done -done -popd \ No newline at end of file diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/simultaneous_translation/utils/p_choose_strategy.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/simultaneous_translation/utils/p_choose_strategy.py deleted file mode 100644 index 724c6912a62d48fc61988cac1434a4f5c8754521..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/simultaneous_translation/utils/p_choose_strategy.py +++ /dev/null @@ -1,126 +0,0 @@ -from typing import Optional, Dict -from torch import Tensor -import torch - - -def waitk_p_choose( - tgt_len: int, - src_len: int, - bsz: int, - waitk_lagging: int, - key_padding_mask: Optional[Tensor] = None, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None -): - - max_src_len = src_len - if incremental_state is not None: - # Retrieve target length from incremental states - # For inference the length of query is always 1 - max_tgt_len = incremental_state["steps"]["tgt"] - assert max_tgt_len is not None - max_tgt_len = int(max_tgt_len) - else: - max_tgt_len = tgt_len - - if max_src_len < waitk_lagging: - if incremental_state is not None: - max_tgt_len = 1 - return torch.zeros( - bsz, max_tgt_len, max_src_len - ) - - # Assuming the p_choose looks like this for wait k=3 - # src_len = 6, max_tgt_len = 5 - # [0, 0, 1, 0, 0, 0, 0] - # [0, 0, 0, 1, 0, 0, 0] - # [0, 0, 0, 0, 1, 0, 0] - # [0, 0, 0, 0, 0, 1, 0] - # [0, 0, 0, 0, 0, 0, 1] - # linearize the p_choose matrix: - # [0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0...] - # The indices of linearized matrix that equals 1 is - # 2 + 6 * 0 - # 3 + 6 * 1 - # ... - # n + src_len * n + k - 1 = n * (src_len + 1) + k - 1 - # n from 0 to max_tgt_len - 1 - # - # First, generate the indices (activate_indices_offset: bsz, max_tgt_len) - # Second, scatter a zeros tensor (bsz, max_tgt_len * src_len) - # with activate_indices_offset - # Third, resize the tensor to (bsz, max_tgt_len, src_len) - - activate_indices_offset = ( - ( - torch.arange(max_tgt_len) * (max_src_len + 1) - + waitk_lagging - 1 - ) - .unsqueeze(0) - .expand(bsz, max_tgt_len) - .long() - ) - - if key_padding_mask is not None: - if key_padding_mask[:, 0].any(): - # Left padding - activate_indices_offset += ( - key_padding_mask.sum(dim=1, keepdim=True) - ) - - # Need to clamp the indices that are too large - activate_indices_offset = ( - activate_indices_offset - .clamp( - 0, - min( - [ - max_tgt_len, - max_src_len - waitk_lagging + 1 - ] - ) * max_src_len - 1 - ) - ) - - p_choose = torch.zeros(bsz, max_tgt_len * max_src_len) - - p_choose = p_choose.scatter( - 1, - activate_indices_offset, - 1.0 - ).view(bsz, max_tgt_len, max_src_len) - - if key_padding_mask is not None: - p_choose = p_choose.to(key_padding_mask) - p_choose = p_choose.masked_fill(key_padding_mask.unsqueeze(1), 0) - - if incremental_state is not None: - p_choose = p_choose[:, -1:] - - return p_choose.float() - - -def learnable_p_choose( - energy, - noise_mean: float = 0.0, - noise_var: float = 0.0, - training: bool = True -): - """ - Calculating step wise prob for reading and writing - 1 to read, 0 to write - energy: bsz, tgt_len, src_len - """ - - noise = 0 - if training: - # add noise here to encourage discretness - noise = ( - torch.normal(noise_mean, noise_var, energy.size()) - .type_as(energy) - .to(energy.device) - ) - - p_choose = torch.sigmoid(energy + noise) - - # p_choose: bsz * 
self.num_heads, tgt_len, src_len - return p_choose diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/id_dataset.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/id_dataset.py deleted file mode 100644 index 3e4d7969cf2a26e852b466f165a6fadabae3b35f..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/id_dataset.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch - -from . import FairseqDataset - - -class IdDataset(FairseqDataset): - def __getitem__(self, index): - return index - - def __len__(self): - return 0 - - def collater(self, samples): - return torch.tensor(samples) diff --git a/spaces/Harsh502s/Anime-Recommender/README.md b/spaces/Harsh502s/Anime-Recommender/README.md deleted file mode 100644 index 5cc22b70918059cb7f141f2c5b77fc0b37cb6e35..0000000000000000000000000000000000000000 --- a/spaces/Harsh502s/Anime-Recommender/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Anime Recommender -emoji: 📈 -colorFrom: blue -colorTo: red -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/README.md b/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/README.md deleted file mode 100644 index bbad28a86e6c8a3f38f44a55a3ae392b9853be70..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Vakyansh Malayalam TTS -emoji: 🏃 -colorFrom: yellow -colorTo: gray -sdk: gradio -sdk_version: 2.8.13 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/utils/hifi/prepare_iitm_data_hifi.py b/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/utils/hifi/prepare_iitm_data_hifi.py deleted file mode 100644 index 1e1de2e28735143aeef8ddb10bc5a4672c02564b..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/utils/hifi/prepare_iitm_data_hifi.py +++ /dev/null @@ -1,64 +0,0 @@ - -import glob -import random -import sys -import os -import argparse - - - - -def process_data(args): - - path = args.input_path - valid_files = args.valid_files - test_files = args.test_files - dest_path = args.dest_path - - list_paths = path.split(',') - - valid_set = [] - training_set = [] - test_set = [] - - for local_path in list_paths: - files = glob.glob(local_path+'/*.wav') - print(f"Total files: {len(files)}") - - valid_set_local = random.sample(files, valid_files) - - test_set_local = random.sample(valid_set_local, test_files) - valid_set.extend(list(set(valid_set_local) - set(test_set_local))) - test_set.extend(test_set_local) - - print(len(valid_set_local)) - - training_set_local = set(files) - set(valid_set_local) - print(len(training_set_local)) - training_set.extend(training_set_local) - - - valid_set = random.sample(valid_set, len(valid_set)) - test_set = random.sample(test_set, len(test_set)) - training_set = random.sample(training_set, len(training_set)) - - with open(os.path.join(dest_path , 'valid.txt'), mode = 'w+') as file: - file.write("\n".join(list(valid_set))) - - with 
open(os.path.join(dest_path , 'train.txt'), mode = 'w+') as file: - file.write("\n".join(list(training_set))) - - with open(os.path.join(dest_path , 'test.txt'), mode = 'w+') as file: - file.write("\n".join(list(test_set))) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument('-i','--input-path',type=str,help='path to input wav files') - parser.add_argument('-v','--valid-files',type=int,help='number of valid files') - parser.add_argument('-t','--test-files',type=int,help='number of test files') - parser.add_argument('-d','--dest-path',type=str,help='destination path to output filelists') - - args = parser.parse_args() - - process_data(args) \ No newline at end of file diff --git a/spaces/Harveenchadha/en_to_indic_translation/indic_nlp_library/indicnlp/transliterate/__init__.py b/spaces/Harveenchadha/en_to_indic_translation/indic_nlp_library/indicnlp/transliterate/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Hila/RobustViT/ViT/helpers.py b/spaces/Hila/RobustViT/ViT/helpers.py deleted file mode 100644 index e2840ea741a5dad1473c14fba1edfd68d91a12d9..0000000000000000000000000000000000000000 --- a/spaces/Hila/RobustViT/ViT/helpers.py +++ /dev/null @@ -1,295 +0,0 @@ -""" Model creation / weight loading / state_dict helpers - -Hacked together by / Copyright 2020 Ross Wightman -""" -import logging -import os -import math -from collections import OrderedDict -from copy import deepcopy -from typing import Callable - -import torch -import torch.nn as nn -import torch.utils.model_zoo as model_zoo - -_logger = logging.getLogger(__name__) - - -def load_state_dict(checkpoint_path, use_ema=False): - if checkpoint_path and os.path.isfile(checkpoint_path): - checkpoint = torch.load(checkpoint_path, map_location='cpu') - state_dict_key = 'state_dict' - if isinstance(checkpoint, dict): - if use_ema and 'state_dict_ema' in checkpoint: - state_dict_key = 'state_dict_ema' - if state_dict_key and state_dict_key in checkpoint: - new_state_dict = OrderedDict() - for k, v in checkpoint[state_dict_key].items(): - # strip `module.` prefix - name = k[7:] if k.startswith('module') else k - new_state_dict[name] = v - state_dict = new_state_dict - else: - state_dict = checkpoint - _logger.info("Loaded {} from checkpoint '{}'".format(state_dict_key, checkpoint_path)) - return state_dict - else: - _logger.error("No checkpoint found at '{}'".format(checkpoint_path)) - raise FileNotFoundError() - - -def load_checkpoint(model, checkpoint_path, use_ema=False, strict=True): - state_dict = load_state_dict(checkpoint_path, use_ema) - model.load_state_dict(state_dict, strict=strict) - - -def resume_checkpoint(model, checkpoint_path, optimizer=None, loss_scaler=None, log_info=True): - resume_epoch = None - if os.path.isfile(checkpoint_path): - checkpoint = torch.load(checkpoint_path, map_location='cpu') - if isinstance(checkpoint, dict) and 'state_dict' in checkpoint: - if log_info: - _logger.info('Restoring model state from checkpoint...') - new_state_dict = OrderedDict() - for k, v in checkpoint['state_dict'].items(): - name = k[7:] if k.startswith('module') else k - new_state_dict[name] = v - model.load_state_dict(new_state_dict) - - if optimizer is not None and 'optimizer' in checkpoint: - if log_info: - _logger.info('Restoring optimizer state from checkpoint...') - optimizer.load_state_dict(checkpoint['optimizer']) - - if loss_scaler is not None and loss_scaler.state_dict_key in checkpoint: 
- if log_info: - _logger.info('Restoring AMP loss scaler state from checkpoint...') - loss_scaler.load_state_dict(checkpoint[loss_scaler.state_dict_key]) - - if 'epoch' in checkpoint: - resume_epoch = checkpoint['epoch'] - if 'version' in checkpoint and checkpoint['version'] > 1: - resume_epoch += 1 # start at the next epoch, old checkpoints incremented before save - - if log_info: - _logger.info("Loaded checkpoint '{}' (epoch {})".format(checkpoint_path, checkpoint['epoch'])) - else: - model.load_state_dict(checkpoint) - if log_info: - _logger.info("Loaded checkpoint '{}'".format(checkpoint_path)) - return resume_epoch - else: - _logger.error("No checkpoint found at '{}'".format(checkpoint_path)) - raise FileNotFoundError() - - -def load_pretrained(model, cfg=None, num_classes=1000, in_chans=3, filter_fn=None, strict=True): - if cfg is None: - cfg = getattr(model, 'default_cfg') - if cfg is None or 'url' not in cfg or not cfg['url']: - _logger.warning("Pretrained model URL is invalid, using random initialization.") - return - - state_dict = model_zoo.load_url(cfg['url'], progress=False, map_location='cpu') - - if filter_fn is not None: - state_dict = filter_fn(state_dict) - - if in_chans == 1: - conv1_name = cfg['first_conv'] - _logger.info('Converting first conv (%s) pretrained weights from 3 to 1 channel' % conv1_name) - conv1_weight = state_dict[conv1_name + '.weight'] - # Some weights are in torch.half, ensure it's float for sum on CPU - conv1_type = conv1_weight.dtype - conv1_weight = conv1_weight.float() - O, I, J, K = conv1_weight.shape - if I > 3: - assert conv1_weight.shape[1] % 3 == 0 - # For models with space2depth stems - conv1_weight = conv1_weight.reshape(O, I // 3, 3, J, K) - conv1_weight = conv1_weight.sum(dim=2, keepdim=False) - else: - conv1_weight = conv1_weight.sum(dim=1, keepdim=True) - conv1_weight = conv1_weight.to(conv1_type) - state_dict[conv1_name + '.weight'] = conv1_weight - elif in_chans != 3: - conv1_name = cfg['first_conv'] - conv1_weight = state_dict[conv1_name + '.weight'] - conv1_type = conv1_weight.dtype - conv1_weight = conv1_weight.float() - O, I, J, K = conv1_weight.shape - if I != 3: - _logger.warning('Deleting first conv (%s) from pretrained weights.' % conv1_name) - del state_dict[conv1_name + '.weight'] - strict = False - else: - # NOTE this strategy should be better than random init, but there could be other combinations of - # the original RGB input layer weights that'd work better for specific cases. - _logger.info('Repeating first conv (%s) weights in channel dim.' 
% conv1_name) - repeat = int(math.ceil(in_chans / 3)) - conv1_weight = conv1_weight.repeat(1, repeat, 1, 1)[:, :in_chans, :, :] - conv1_weight *= (3 / float(in_chans)) - conv1_weight = conv1_weight.to(conv1_type) - state_dict[conv1_name + '.weight'] = conv1_weight - - classifier_name = cfg['classifier'] - if num_classes == 1000 and cfg['num_classes'] == 1001: - # special case for imagenet trained models with extra background class in pretrained weights - classifier_weight = state_dict[classifier_name + '.weight'] - state_dict[classifier_name + '.weight'] = classifier_weight[1:] - classifier_bias = state_dict[classifier_name + '.bias'] - state_dict[classifier_name + '.bias'] = classifier_bias[1:] - elif num_classes != cfg['num_classes']: - # completely discard fully connected for all other differences between pretrained and created model - del state_dict[classifier_name + '.weight'] - del state_dict[classifier_name + '.bias'] - strict = False - - model.load_state_dict(state_dict, strict=strict) - - -def extract_layer(model, layer): - layer = layer.split('.') - module = model - if hasattr(model, 'module') and layer[0] != 'module': - module = model.module - if not hasattr(model, 'module') and layer[0] == 'module': - layer = layer[1:] - for l in layer: - if hasattr(module, l): - if not l.isdigit(): - module = getattr(module, l) - else: - module = module[int(l)] - else: - return module - return module - - -def set_layer(model, layer, val): - layer = layer.split('.') - module = model - if hasattr(model, 'module') and layer[0] != 'module': - module = model.module - lst_index = 0 - module2 = module - for l in layer: - if hasattr(module2, l): - if not l.isdigit(): - module2 = getattr(module2, l) - else: - module2 = module2[int(l)] - lst_index += 1 - lst_index -= 1 - for l in layer[:lst_index]: - if not l.isdigit(): - module = getattr(module, l) - else: - module = module[int(l)] - l = layer[lst_index] - setattr(module, l, val) - - -def adapt_model_from_string(parent_module, model_string): - separator = '***' - state_dict = {} - lst_shape = model_string.split(separator) - for k in lst_shape: - k = k.split(':') - key = k[0] - shape = k[1][1:-1].split(',') - if shape[0] != '': - state_dict[key] = [int(i) for i in shape] - - new_module = deepcopy(parent_module) - for n, m in parent_module.named_modules(): - old_module = extract_layer(parent_module, n) - if isinstance(old_module, nn.Conv2d) or isinstance(old_module, Conv2dSame): - if isinstance(old_module, Conv2dSame): - conv = Conv2dSame - else: - conv = nn.Conv2d - s = state_dict[n + '.weight'] - in_channels = s[1] - out_channels = s[0] - g = 1 - if old_module.groups > 1: - in_channels = out_channels - g = in_channels - new_conv = conv( - in_channels=in_channels, out_channels=out_channels, kernel_size=old_module.kernel_size, - bias=old_module.bias is not None, padding=old_module.padding, dilation=old_module.dilation, - groups=g, stride=old_module.stride) - set_layer(new_module, n, new_conv) - if isinstance(old_module, nn.BatchNorm2d): - new_bn = nn.BatchNorm2d( - num_features=state_dict[n + '.weight'][0], eps=old_module.eps, momentum=old_module.momentum, - affine=old_module.affine, track_running_stats=True) - set_layer(new_module, n, new_bn) - if isinstance(old_module, nn.Linear): - # FIXME extra checks to ensure this is actually the FC classifier layer and not a diff Linear layer? 
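-            # rebuild the Linear with the pruned input width from the saved shape;
-            # out_features and the bias setting are kept from the original module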
- num_features = state_dict[n + '.weight'][1] - new_fc = nn.Linear( - in_features=num_features, out_features=old_module.out_features, bias=old_module.bias is not None) - set_layer(new_module, n, new_fc) - if hasattr(new_module, 'num_features'): - new_module.num_features = num_features - new_module.eval() - parent_module.eval() - - return new_module - - -def adapt_model_from_file(parent_module, model_variant): - adapt_file = os.path.join(os.path.dirname(__file__), 'pruned', model_variant + '.txt') - with open(adapt_file, 'r') as f: - return adapt_model_from_string(parent_module, f.read().strip()) - - -def build_model_with_cfg( - model_cls: Callable, - variant: str, - pretrained: bool, - default_cfg: dict, - model_cfg: dict = None, - feature_cfg: dict = None, - pretrained_strict: bool = True, - pretrained_filter_fn: Callable = None, - **kwargs): - pruned = kwargs.pop('pruned', False) - features = False - feature_cfg = feature_cfg or {} - - if kwargs.pop('features_only', False): - features = True - feature_cfg.setdefault('out_indices', (0, 1, 2, 3, 4)) - if 'out_indices' in kwargs: - feature_cfg['out_indices'] = kwargs.pop('out_indices') - - model = model_cls(**kwargs) if model_cfg is None else model_cls(cfg=model_cfg, **kwargs) - model.default_cfg = deepcopy(default_cfg) - - if pruned: - model = adapt_model_from_file(model, variant) - - if pretrained: - load_pretrained( - model, - num_classes=kwargs.get('num_classes', 0), - in_chans=kwargs.get('in_chans', 3), - filter_fn=pretrained_filter_fn, strict=pretrained_strict) - - if features: - feature_cls = FeatureListNet - if 'feature_cls' in feature_cfg: - feature_cls = feature_cfg.pop('feature_cls') - if isinstance(feature_cls, str): - feature_cls = feature_cls.lower() - if 'hook' in feature_cls: - feature_cls = FeatureHookNet - else: - assert False, f'Unknown feature class {feature_cls}' - model = feature_cls(model, **feature_cfg) - - return model \ No newline at end of file diff --git a/spaces/Hina4867/bingo/src/components/tone-selector.tsx b/spaces/Hina4867/bingo/src/components/tone-selector.tsx deleted file mode 100644 index 5c6e464c91f564b895acd121f0a4a79ed9c5c356..0000000000000000000000000000000000000000 --- a/spaces/Hina4867/bingo/src/components/tone-selector.tsx +++ /dev/null @@ -1,43 +0,0 @@ -import React from 'react' -import { BingConversationStyle } from '@/lib/bots/bing/types' -import { cn } from '@/lib/utils' - -type ToneItem = { - type: BingConversationStyle, - name: string -} - -const ToneList: ToneItem[] = [ - { name: '有创造力', type: BingConversationStyle.Creative }, - { name: '更平衡', type: BingConversationStyle.Balanced }, - { name: '更精确', type: BingConversationStyle.Precise } -] - -interface ToneSelectorProps { - type: BingConversationStyle | '' - onChange?: (type: BingConversationStyle) => void -} - -export function ToneSelector({ type, onChange }: ToneSelectorProps) { - return ( - <div className="fieldset"> - <div className="legend"> - 选择对话样式 - </div> - <div className="options-list-container"> - <ul id="tone-options" className="options"> - { - ToneList.map(tone => ( - <li className="option" key={tone.name} onClick={() => onChange?.(tone.type)}> - <button className={cn(`tone-${type.toLowerCase()}`, { selected: tone.type === type}) } aria-pressed="true" > - <span className="caption-2-strong label-modifier">更</span> - <span className="body-1-strong label">{tone.name}</span> - </button> - </li> - )) - } - </ul> - </div> - </div> - ) -} diff --git a/spaces/Hina4867/bingo/tailwind.config.js b/spaces/Hina4867/bingo/tailwind.config.js 
deleted file mode 100644 index 03da3c3c45be6983b9f5ffa6df5f1fd0870e9636..0000000000000000000000000000000000000000 --- a/spaces/Hina4867/bingo/tailwind.config.js +++ /dev/null @@ -1,48 +0,0 @@ -/** @type {import('tailwindcss').Config} */ -module.exports = { - content: [ - './src/pages/**/*.{js,ts,jsx,tsx,mdx}', - './src/components/**/*.{js,ts,jsx,tsx,mdx}', - './src/app/**/*.{js,ts,jsx,tsx,mdx}', - './src/ui/**/*.{js,ts,jsx,tsx,mdx}', - ], - "darkMode": "class", - theme: { - extend: { - colors: { - 'primary-blue': 'rgb(var(--color-primary-blue) / <alpha-value>)', - secondary: 'rgb(var(--color-secondary) / <alpha-value>)', - 'primary-background': 'rgb(var(--primary-background) / <alpha-value>)', - 'primary-text': 'rgb(var(--primary-text) / <alpha-value>)', - 'secondary-text': 'rgb(var(--secondary-text) / <alpha-value>)', - 'light-text': 'rgb(var(--light-text) / <alpha-value>)', - 'primary-border': 'rgb(var(--primary-border) / <alpha-value>)', - }, - keyframes: { - slideDownAndFade: { - from: { opacity: 0, transform: 'translateY(-2px)' }, - to: { opacity: 1, transform: 'translateY(0)' }, - }, - slideLeftAndFade: { - from: { opacity: 0, transform: 'translateX(2px)' }, - to: { opacity: 1, transform: 'translateX(0)' }, - }, - slideUpAndFade: { - from: { opacity: 0, transform: 'translateY(2px)' }, - to: { opacity: 1, transform: 'translateY(0)' }, - }, - slideRightAndFade: { - from: { opacity: 0, transform: 'translateX(2px)' }, - to: { opacity: 1, transform: 'translateX(0)' }, - }, - }, - animation: { - slideDownAndFade: 'slideDownAndFade 400ms cubic-bezier(0.16, 1, 0.3, 1)', - slideLeftAndFade: 'slideLeftAndFade 400ms cubic-bezier(0.16, 1, 0.3, 1)', - slideUpAndFade: 'slideUpAndFade 400ms cubic-bezier(0.16, 1, 0.3, 1)', - slideRightAndFade: 'slideRightAndFade 400ms cubic-bezier(0.16, 1, 0.3, 1)', - }, - }, - }, - plugins: [require('@headlessui/tailwindcss'), require('tailwind-scrollbar')], -} diff --git a/spaces/Hobe/bing/Dockerfile b/spaces/Hobe/bing/Dockerfile deleted file mode 100644 index 139c333a3bba5ac3680d42b6f356824207f05255..0000000000000000000000000000000000000000 --- a/spaces/Hobe/bing/Dockerfile +++ /dev/null @@ -1,33 +0,0 @@ -# Build Stage -# Use golang:alpine as the base image for the build stage -FROM golang:alpine AS builder - -# Add git, then clean up the package cache 🧹 -RUN apk --no-cache add git && \ - git clone https://github.com/Harry-zklcdc/go-proxy-bingai.git /workspace/app && \ - apk del git - -# Set the working directory -WORKDIR /workspace/app - -# Build the Go project -RUN go build -ldflags="-s -w" -tags netgo -trimpath -o go-proxy-bingai main.go - -# Runtime Stage -# Use the lightweight alpine image 🪞 -FROM alpine - -# Set the working directory 💼 -WORKDIR /workspace/app - -# Copy the compiled binary from the build stage 👔 -COPY --from=builder /workspace/app/go-proxy-bingai .
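-# The proxy reads its user token(s) from the Go_Proxy_BingAI_USER_TOKEN_* environment
-# variables; treat the value below as a sample and override it at deploy time, e.g.
-#   docker run -e Go_Proxy_BingAI_USER_TOKEN_1=<your-token> -p 8080:8080 <image>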
- -# (Optional) Set environment variables ✍️ -ENV Go_Proxy_BingAI_USER_TOKEN_1="G4hJ9k544565uhjjhjlkjh6356223p3EaYc0FvIjHmLzXeRfAq" - -# Expose the service port -EXPOSE 8080 - -# Run the container ✅ -CMD ["/workspace/app/go-proxy-bingai"] diff --git a/spaces/HugoDzz/spaceship_drift/build/_app/immutable/chunks/index.0d3f7c7a.js b/spaces/HugoDzz/spaceship_drift/build/_app/immutable/chunks/index.0d3f7c7a.js deleted file mode 100644 index 201a9e8eabdf4ba1fc43634ee314891dd098b062..0000000000000000000000000000000000000000 --- a/spaces/HugoDzz/spaceship_drift/build/_app/immutable/chunks/index.0d3f7c7a.js +++ /dev/null @@ -1 +0,0 @@ -function $(){}function F(t,e){for(const n in e)t[n]=e[n];return t}function T(t){return t()}function M(){return Object.create(null)}function g(t){t.forEach(T)}function B(t){return typeof t=="function"}function at(t,e){return t!=t?e==e:t!==e||t&&typeof t=="object"||typeof t=="function"}let x;function st(t,e){return x||(x=document.createElement("a")),x.href=e,t===x.href}function H(t){return Object.keys(t).length===0}function I(t,...e){if(t==null)return $;const n=t.subscribe(...e);return n.unsubscribe?()=>n.unsubscribe():n}function ft(t,e,n){t.$$.on_destroy.push(I(e,n))}function dt(t,e,n,i){if(t){const r=L(t,e,n,i);return t[0](r)}}function L(t,e,n,i){return t[1]&&i?F(n.ctx.slice(),t[1](i(e))):n.ctx}function _t(t,e,n,i){if(t[2]&&i){const r=t[2](i(n));if(e.dirty===void 0)return r;if(typeof r=="object"){const a=[],l=Math.max(e.dirty.length,r.length);for(let o=0;o<l;o+=1)a[o]=e.dirty[o]|r[o];return a}return e.dirty|r}return e.dirty}function ht(t,e,n,i,r,a){if(r){const l=L(e,n,i,a);t.p(l,r)}}function mt(t){if(t.ctx.length>32){const e=[],n=t.ctx.length/32;for(let i=0;i<n;i++)e[i]=-1;return e}return-1}const G=typeof window<"u"?window:typeof globalThis<"u"?globalThis:global;"WeakMap"in G;let v=!1;function J(){v=!0}function K(){v=!1}function Q(t,e,n,i){for(;t<e;){const r=t+(e-t>>1);n(r)<=i?t=r+1:e=r}return t}function R(t){if(t.hydrate_init)return;t.hydrate_init=!0;let e=t.childNodes;if(t.nodeName==="HEAD"){const u=[];for(let c=0;c<e.length;c++){const f=e[c];f.claim_order!==void 0&&u.push(f)}e=u}const n=new Int32Array(e.length+1),i=new Int32Array(e.length);n[0]=-1;let r=0;for(let u=0;u<e.length;u++){const c=e[u].claim_order,f=(r>0&&e[n[r]].claim_order<=c?r+1:Q(1,r,b=>e[n[b]].claim_order,c))-1;i[u]=n[f]+1;const s=f+1;n[s]=u,r=Math.max(s,r)}const a=[],l=[];let o=e.length-1;for(let u=n[r]+1;u!=0;u=i[u-1]){for(a.push(e[u-1]);o>=u;o--)l.push(e[o]);o--}for(;o>=0;o--)l.push(e[o]);a.reverse(),l.sort((u,c)=>u.claim_order-c.claim_order);for(let u=0,c=0;u<l.length;u++){for(;c<a.length&&l[u].claim_order>=a[c].claim_order;)c++;const f=c<a.length?a[c]:null;t.insertBefore(l[u],f)}}function U(t,e){if(v){for(R(t),(t.actual_end_child===void 0||t.actual_end_child!==null&&t.actual_end_child.parentNode!==t)&&(t.actual_end_child=t.firstChild);t.actual_end_child!==null&&t.actual_end_child.claim_order===void 0;)t.actual_end_child=t.actual_end_child.nextSibling;e!==t.actual_end_child?(e.claim_order!==void 0||e.parentNode!==t)&&t.insertBefore(e,t.actual_end_child):t.actual_end_child=e.nextSibling}else(e.parentNode!==t||e.nextSibling!==null)&&t.appendChild(e)}function pt(t,e,n){v&&!n?U(t,e):(e.parentNode!==t||e.nextSibling!=n)&&t.insertBefore(e,n||null)}function V(t){t.parentNode&&t.parentNode.removeChild(t)}function X(t){return document.createElement(t)}function A(t){return document.createTextNode(t)}function yt(){return A(" ")}function gt(){return A("")}function bt(t,e,n,i){return t.addEventListener(e,n,i),()=>t.removeEventListener(e,n,i)}function xt(t){return
function(e){return e.preventDefault(),t.call(this,e)}}function wt(t,e,n){n==null?t.removeAttribute(e):t.getAttribute(e)!==n&&t.setAttribute(e,n)}function Y(t){return Array.from(t.childNodes)}function Z(t){t.claim_info===void 0&&(t.claim_info={last_index:0,total_claimed:0})}function O(t,e,n,i,r=!1){Z(t);const a=(()=>{for(let l=t.claim_info.last_index;l<t.length;l++){const o=t[l];if(e(o)){const u=n(o);return u===void 0?t.splice(l,1):t[l]=u,r||(t.claim_info.last_index=l),o}}for(let l=t.claim_info.last_index-1;l>=0;l--){const o=t[l];if(e(o)){const u=n(o);return u===void 0?t.splice(l,1):t[l]=u,r?u===void 0&&t.claim_info.last_index--:t.claim_info.last_index=l,o}}return i()})();return a.claim_order=t.claim_info.total_claimed,t.claim_info.total_claimed+=1,a}function tt(t,e,n,i){return O(t,r=>r.nodeName===e,r=>{const a=[];for(let l=0;l<r.attributes.length;l++){const o=r.attributes[l];n[o.name]||a.push(o.name)}a.forEach(l=>r.removeAttribute(l))},()=>i(e))}function $t(t,e,n){return tt(t,e,n,X)}function et(t,e){return O(t,n=>n.nodeType===3,n=>{const i=""+e;if(n.data.startsWith(i)){if(n.data.length!==i.length)return n.splitText(i.length)}else n.data=i},()=>A(e),!0)}function vt(t){return et(t," ")}function Et(t,e){e=""+e,t.data!==e&&(t.data=e)}function Nt(t,e,n,i){n==null?t.style.removeProperty(e):t.style.setProperty(e,n,i?"important":"")}function St(t,e){return new t(e)}let y;function p(t){y=t}function D(){if(!y)throw new Error("Function called outside component initialization");return y}function At(t){D().$$.on_mount.push(t)}function jt(t){D().$$.after_update.push(t)}const h=[],q=[];let m=[];const C=[],P=Promise.resolve();let N=!1;function W(){N||(N=!0,P.then(z))}function kt(){return W(),P}function S(t){m.push(t)}const E=new Set;let _=0;function z(){if(_!==0)return;const t=y;do{try{for(;_<h.length;){const e=h[_];_++,p(e),nt(e.$$)}}catch(e){throw h.length=0,_=0,e}for(p(null),h.length=0,_=0;q.length;)q.pop()();for(let e=0;e<m.length;e+=1){const n=m[e];E.has(n)||(E.add(n),n())}m.length=0}while(h.length);for(;C.length;)C.pop()();N=!1,E.clear(),p(t)}function nt(t){if(t.fragment!==null){t.update(),g(t.before_update);const e=t.dirty;t.dirty=[-1],t.fragment&&t.fragment.p(t.ctx,e),t.after_update.forEach(S)}}function it(t){const e=[],n=[];m.forEach(i=>t.indexOf(i)===-1?e.push(i):n.push(i)),n.forEach(i=>i()),m=e}const w=new Set;let d;function Mt(){d={r:0,c:[],p:d}}function qt(){d.r||g(d.c),d=d.p}function rt(t,e){t&&t.i&&(w.delete(t),t.i(e))}function Ct(t,e,n,i){if(t&&t.o){if(w.has(t))return;w.add(t),d.c.push(()=>{w.delete(t),i&&(n&&t.d(1),i())}),t.o(e)}else i&&i()}const lt=["allowfullscreen","allowpaymentrequest","async","autofocus","autoplay","checked","controls","default","defer","disabled","formnovalidate","hidden","inert","ismap","loop","multiple","muted","nomodule","novalidate","open","playsinline","readonly","required","reversed","selected"];[...lt];function Tt(t){t&&t.c()}function Bt(t,e){t&&t.l(e)}function ut(t,e,n,i){const{fragment:r,after_update:a}=t.$$;r&&r.m(e,n),i||S(()=>{const l=t.$$.on_mount.map(T).filter(B);t.$$.on_destroy?t.$$.on_destroy.push(...l):g(l),t.$$.on_mount=[]}),a.forEach(S)}function ct(t,e){const n=t.$$;n.fragment!==null&&(it(n.after_update),g(n.on_destroy),n.fragment&&n.fragment.d(e),n.on_destroy=n.fragment=null,n.ctx=[])}function ot(t,e){t.$$.dirty[0]===-1&&(h.push(t),W(),t.$$.dirty.fill(0)),t.$$.dirty[e/31|0]|=1<<e%31}function Lt(t,e,n,i,r,a,l,o=[-1]){const u=y;p(t);const 
c=t.$$={fragment:null,ctx:[],props:a,update:$,not_equal:r,bound:M(),on_mount:[],on_destroy:[],on_disconnect:[],before_update:[],after_update:[],context:new Map(e.context||(u?u.$$.context:[])),callbacks:M(),dirty:o,skip_bound:!1,root:e.target||u.$$.root};l&&l(c.root);let f=!1;if(c.ctx=n?n(t,e.props||{},(s,b,...j)=>{const k=j.length?j[0]:b;return c.ctx&&r(c.ctx[s],c.ctx[s]=k)&&(!c.skip_bound&&c.bound[s]&&c.bound[s](k),f&&ot(t,s)),b}):[],c.update(),f=!0,g(c.before_update),c.fragment=i?i(c.ctx):!1,e.target){if(e.hydrate){J();const s=Y(e.target);c.fragment&&c.fragment.l(s),s.forEach(V)}else c.fragment&&c.fragment.c();e.intro&&rt(t.$$.fragment),ut(t,e.target,e.anchor,e.customElement),K(),z()}p(u)}class Ot{$destroy(){ct(this,1),this.$destroy=$}$on(e,n){if(!B(n))return $;const i=this.$$.callbacks[e]||(this.$$.callbacks[e]=[]);return i.push(n),()=>{const r=i.indexOf(n);r!==-1&&i.splice(r,1)}}$set(e){this.$$set&&!H(e)&&(this.$$.skip_bound=!0,this.$$set(e),this.$$.skip_bound=!1)}}export{ut as A,ct as B,dt as C,ht as D,mt as E,_t as F,U as G,$ as H,ft as I,st as J,bt as K,xt as L,Ot as S,yt as a,pt as b,vt as c,Ct as d,gt as e,qt as f,rt as g,V as h,Lt as i,jt as j,X as k,$t as l,Y as m,wt as n,At as o,Nt as p,A as q,et as r,at as s,kt as t,Et as u,Mt as v,q as w,St as x,Tt as y,Bt as z}; diff --git a/spaces/ICML2022/OFA/fairseq/examples/wav2vec/scripts/binarize_manifest.sh b/spaces/ICML2022/OFA/fairseq/examples/wav2vec/scripts/binarize_manifest.sh deleted file mode 100644 index 6f201bdb524fad51a69d8c45889eaa1578efc62d..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/wav2vec/scripts/binarize_manifest.sh +++ /dev/null @@ -1,33 +0,0 @@ -#!/usr/bin/env bash - -# usage: bash binarize_manifest.sh <dest_dir> <train_split> <valid_split> <fairseq_root> - -DEST_DIR=$1 -TRAIN_SPLIT=$2 -VALID_SPLIT=$3 -FAIRSEQ_ROOT=$4 - -mkdir -p $DEST_DIR - -# split file path and lengths into separate files -cut -f1 $TRAIN_SPLIT.tsv > $DEST_DIR/train_fnames.txt -cut -f1 $VALID_SPLIT.tsv > $DEST_DIR/valid_fnames.txt -cut -f2 $TRAIN_SPLIT.tsv > $DEST_DIR/train.lengths -cut -f2 $VALID_SPLIT.tsv > $DEST_DIR/valid.lengths - -# copy root directory -head -1 $TRAIN_SPLIT.tsv > $DEST_DIR/train.root -head -1 $VALID_SPLIT.tsv > $DEST_DIR/valid.root - -# remove root directory -sed -i '1d' $DEST_DIR/train_fnames.txt -sed -i '1d' $DEST_DIR/valid_fnames.txt -sed -i '1d' $DEST_DIR/train.lengths -sed -i '1d' $DEST_DIR/valid.lengths - -# insert spaces between characters -sed -i -e 's/\(.\)/\1 /g' $DEST_DIR/train_fnames.txt -sed -i -e 's/\(.\)/\1 /g' $DEST_DIR/valid_fnames.txt - -# run preprocessor -PYTHONPATH=$FAIRSEQ_ROOT python $FAIRSEQ_ROOT/fairseq_cli/preprocess.py --dataset-impl mmap --trainpref $DEST_DIR/train_fnames.txt --validpref $DEST_DIR/valid_fnames.txt --workers 60 --only-source --destdir $DEST_DIR diff --git a/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/models/wav2vec_u.py b/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/models/wav2vec_u.py deleted file mode 100644 index 27792ebda842057e33fed3dc53dd9d8a594d0483..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/models/wav2vec_u.py +++ /dev/null @@ -1,637 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree.
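-#
-# wav2vec-U ("wav2vec Unsupervised"): a generator maps frozen speech
-# representations (e.g. wav2vec 2.0 features) to phoneme-like logits and is
-# trained adversarially against a convolutional discriminator, optionally
-# after segmenting the feature sequence. Rough usage sketch (hypothetical
-# variable names; assumes a fairseq task whose target_dictionary holds the
-# phoneme inventory):
-#
-#   cfg = Wav2vec_UConfig()
-#   model = Wav2vec_U(cfg, task.target_dictionary)
-#   out = model(features, padding_mask, random_label=unpaired_phoneme_ids)
-#   losses = out["losses"]  # generator or discriminator terms, depending on step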
- -from dataclasses import dataclass -from enum import Enum, auto -import math -import numpy as np -from typing import Tuple, List, Optional, Dict - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch import autograd - -from fairseq import checkpoint_utils, utils -from fairseq.dataclass import FairseqDataclass -from fairseq.models import BaseFairseqModel, register_model -from fairseq.modules import ( - SamePad, - TransposeLast, -) - - -class SegmentationType(Enum): - NONE = auto() - RANDOM = auto() - UNIFORM_RANDOM = auto() - UNIFORM_RANDOM_JOIN = auto() - JOIN = auto() - - -@dataclass -class SegmentationConfig(FairseqDataclass): - type: SegmentationType = SegmentationType.NONE - subsample_rate: float = 0.25 - mean_pool: bool = True - mean_pool_join: bool = False - remove_zeros: bool = False - - -@dataclass -class Wav2vec_UConfig(FairseqDataclass): - - discriminator_kernel: int = 3 - discriminator_dilation: int = 1 - discriminator_dim: int = 256 - discriminator_causal: bool = True - discriminator_linear_emb: bool = False - discriminator_depth: int = 1 - discriminator_max_pool: bool = False - discriminator_act_after_linear: bool = False - discriminator_dropout: float = 0.0 - discriminator_spectral_norm: bool = False - discriminator_weight_norm: bool = False - - generator_kernel: int = 4 - generator_dilation: int = 1 - generator_stride: int = 1 - generator_bias: bool = False - generator_dropout: float = 0.0 - - blank_weight: float = 0 - blank_mode: str = "add" - blank_is_sil: bool = False - no_softmax: bool = False - - smoothness_weight: float = 0.0 - smoothing: float = 0.0 - smoothing_one_sided: bool = False - gradient_penalty: float = 0.0 - probabilistic_grad_penalty_slicing: bool = False - code_penalty: float = 0.0 - gumbel: bool = False - hard_gumbel: bool = True - temp: Tuple[float, float, float] = (2, 0.1, 0.99995) - input_dim: int = 128 - - segmentation: SegmentationConfig = SegmentationConfig() - - -class Segmenter(nn.Module): - cfg: SegmentationConfig - - def __init__(self, cfg: SegmentationConfig): - super().__init__() - self.cfg = cfg - self.subsample_rate = cfg.subsample_rate - - def pre_segment(self, dense_x, dense_padding_mask): - return dense_x, dense_padding_mask - - def logit_segment(self, logits, padding_mask): - return logits, padding_mask - - -class RandomSegmenter(Segmenter): - def pre_segment(self, dense_x, dense_padding_mask): - target_num = math.ceil(dense_x.size(1) * self.subsample_rate) - ones = torch.ones(dense_x.shape[:-1], device=dense_x.device) - indices, _ = ones.multinomial(target_num).sort(dim=-1) - indices_ld = indices.unsqueeze(-1).expand(-1, -1, dense_x.size(-1)) - dense_x = dense_x.gather(1, indices_ld) - dense_padding_mask = dense_padding_mask.gather(1, index=indices) - return dense_x, dense_padding_mask - - -class UniformRandomSegmenter(Segmenter): - def pre_segment(self, dense_x, dense_padding_mask): - bsz, tsz, fsz = dense_x.shape - - target_num = math.ceil(tsz * self.subsample_rate) - - rem = tsz % target_num - - if rem > 0: - dense_x = F.pad(dense_x, [0, 0, 0, target_num - rem]) - dense_padding_mask = F.pad( - dense_padding_mask, [0, target_num - rem], value=True - ) - - dense_x = dense_x.view(bsz, target_num, -1, fsz) - dense_padding_mask = dense_padding_mask.view(bsz, target_num, -1) - - if self.cfg.mean_pool: - dense_x = dense_x.mean(dim=-2) - dense_padding_mask = dense_padding_mask.all(dim=-1) - else: - ones = torch.ones((bsz, dense_x.size(2)), device=dense_x.device) - indices = ones.multinomial(1) - indices = 
indices.unsqueeze(-1).expand(-1, target_num, -1) - indices_ld = indices.unsqueeze(-1).expand(-1, -1, -1, fsz) - dense_x = dense_x.gather(2, indices_ld).reshape(bsz, -1, fsz) - dense_padding_mask = dense_padding_mask.gather(2, index=indices).reshape( - bsz, -1 - ) - return dense_x, dense_padding_mask - - -class JoinSegmenter(Segmenter): - def logit_segment(self, logits, padding_mask): - preds = logits.argmax(dim=-1) - - if padding_mask.any(): - preds[padding_mask] = -1 # mark pad - uniques = [] - - bsz, tsz, csz = logits.shape - - for p in preds: - uniques.append( - p.cpu().unique_consecutive(return_inverse=True, return_counts=True) - ) - - new_tsz = max(u[0].numel() for u in uniques) - new_logits = logits.new_zeros(bsz, new_tsz, csz) - new_pad = padding_mask.new_zeros(bsz, new_tsz) - - for b in range(bsz): - u, idx, c = uniques[b] - keep = u != -1 - - if self.cfg.remove_zeros: - keep.logical_and_(u != 0) - - if self.training and not self.cfg.mean_pool_join: - u[0] = 0 - u[1:] = c.cumsum(0)[:-1] - m = c > 1 - r = torch.rand(m.sum()) - o = (c[m] * r).long() - u[m] += o - new_logits[b, : u.numel()] = logits[b, u] - else: - new_logits[b].index_add_( - dim=0, index=idx.to(new_logits.device), source=logits[b] - ) - new_logits[b, : c.numel()] /= c.unsqueeze(-1).to(new_logits.device) - - new_sz = keep.sum() - if not keep.all(): - kept_logits = new_logits[b, : c.numel()][keep] - new_logits[b, :new_sz] = kept_logits - - if new_sz < new_tsz: - pad = new_tsz - new_sz - new_logits[b, -pad:] = 0 - new_pad[b, -pad:] = True - - return new_logits, new_pad - - -class UniformRandomJoinSegmenter(UniformRandomSegmenter, JoinSegmenter): - pass - - -SEGMENT_FACTORY = { - SegmentationType.NONE: Segmenter, - SegmentationType.RANDOM: RandomSegmenter, - SegmentationType.UNIFORM_RANDOM: UniformRandomSegmenter, - SegmentationType.UNIFORM_RANDOM_JOIN: UniformRandomJoinSegmenter, - SegmentationType.JOIN: JoinSegmenter, -} - - -class Discriminator(nn.Module): - def __init__(self, dim, cfg: Wav2vec_UConfig): - super().__init__() - - inner_dim = cfg.discriminator_dim - kernel = cfg.discriminator_kernel - dilation = cfg.discriminator_dilation - self.max_pool = cfg.discriminator_max_pool - - if cfg.discriminator_causal: - padding = kernel - 1 - else: - padding = kernel // 2 - - def make_conv(in_d, out_d, k, p=0, has_dilation=True): - conv = nn.Conv1d( - in_d, - out_d, - kernel_size=k, - padding=p, - dilation=dilation if has_dilation else 1, - ) - if cfg.discriminator_spectral_norm: - conv = nn.utils.spectral_norm(conv) - elif cfg.discriminator_weight_norm: - conv = nn.utils.weight_norm(conv) - return conv - - inner_net = [ - nn.Sequential( - make_conv(inner_dim, inner_dim, kernel, padding), - SamePad(kernel_size=kernel, causal=cfg.discriminator_causal), - nn.Dropout(cfg.discriminator_dropout), - nn.GELU(), - ) - for _ in range(cfg.discriminator_depth - 1) - ] + [ - make_conv(inner_dim, 1, kernel, padding, has_dilation=False), - SamePad(kernel_size=kernel, causal=cfg.discriminator_causal), - ] - - if cfg.discriminator_linear_emb: - emb_net = [make_conv(dim, inner_dim, 1)] - else: - emb_net = [ - make_conv(dim, inner_dim, kernel, padding), - SamePad(kernel_size=kernel, causal=cfg.discriminator_causal), - ] - - if cfg.discriminator_act_after_linear: - emb_net.append(nn.GELU()) - - self.net = nn.Sequential( - *emb_net, - nn.Dropout(cfg.discriminator_dropout), - *inner_net, - ) - - def forward(self, x, padding_mask): - x = x.transpose(1, 2) # BTC -> BCT - x = self.net(x) - x = x.transpose(1, 2) - x_sz = x.size(1) - if 
padding_mask is not None and padding_mask.any() and padding_mask.dim() > 1: - padding_mask = padding_mask[:, : x.size(1)] - x[padding_mask] = float("-inf") if self.max_pool else 0 - x_sz = x_sz - padding_mask.sum(dim=-1) - x = x.squeeze(-1) - if self.max_pool: - x, _ = x.max(dim=-1) - else: - x = x.sum(dim=-1) - x = x / x_sz - return x - - -class Generator(nn.Module): - def __init__(self, input_dim, output_dim, cfg: Wav2vec_UConfig): - super().__init__() - - self.cfg = cfg - self.output_dim = output_dim - self.stride = cfg.generator_stride - self.dropout = nn.Dropout(cfg.generator_dropout) - - padding = cfg.generator_kernel // 2 - self.proj = nn.Sequential( - TransposeLast(), - nn.Conv1d( - input_dim, - output_dim, - kernel_size=cfg.generator_kernel, - stride=cfg.generator_stride, - dilation=cfg.generator_dilation, - padding=padding, - bias=cfg.generator_bias, - ), - TransposeLast(), - ) - - def forward(self, dense_x, tokens, dense_padding_mask): - dense_x = self.dropout(dense_x) - - dense_x = self.proj(dense_x) - if self.stride > 1: - dense_padding_mask = dense_padding_mask[:, :: self.stride] - - if dense_padding_mask.size(1) != dense_x.size(1): - new_padding = dense_padding_mask.new_zeros(dense_x.shape[:-1]) - diff = new_padding.size(1) - dense_padding_mask.size(1) - assert ( - diff > 0 - ), f"{new_padding.shape}, {dense_padding_mask.shape}, {dense_x.shape}, {diff}" - if diff > 0: - new_padding[:, diff:] = dense_padding_mask - else: - assert diff < 0 - new_padding = dense_padding_mask[:, :diff] - - dense_padding_mask = new_padding - - result = {} - - token_x = None - if tokens is not None: - token_x = dense_x.new_zeros(tokens.numel(), self.output_dim) - token_x.scatter_(1, tokens.view(-1, 1).long(), 1) - token_x = token_x.view(tokens.shape + (self.output_dim,)) - - result["dense_x"] = dense_x - result["token_x"] = token_x - result["dense_padding_mask"] = dense_padding_mask - - return result - - -@register_model("wav2vec_u", dataclass=Wav2vec_UConfig) -class Wav2vec_U(BaseFairseqModel): - def calc_gradient_penalty(self, real_data, fake_data): - - b_size = min(real_data.size(0), fake_data.size(0)) - t_size = min(real_data.size(1), fake_data.size(1)) - - if self.cfg.probabilistic_grad_penalty_slicing: - - def get_slice(data, dim, target_size): - - size = data.size(dim) - diff = size - target_size - if diff <= 0: - return data - - start = np.random.randint(0, diff + 1) - return data.narrow(dim=dim, start=start, length=target_size) - - real_data = get_slice(real_data, 0, b_size) - real_data = get_slice(real_data, 1, t_size) - fake_data = get_slice(fake_data, 0, b_size) - fake_data = get_slice(fake_data, 1, t_size) - - else: - real_data = real_data[:b_size, :t_size] - fake_data = fake_data[:b_size, :t_size] - - alpha = torch.rand(real_data.size(0), 1, 1) - alpha = alpha.expand(real_data.size()) - alpha = alpha.to(real_data.device) - - interpolates = alpha * real_data + ((1 - alpha) * fake_data) - - disc_interpolates = self.discriminator(interpolates, None) - - gradients = autograd.grad( - outputs=disc_interpolates, - inputs=interpolates, - grad_outputs=torch.ones(disc_interpolates.size(), device=real_data.device), - create_graph=True, - retain_graph=True, - only_inputs=True, - )[0] - - gradient_penalty = (gradients.norm(2, dim=1) - 1) ** 2 - return gradient_penalty - - def set_num_updates(self, num_updates): - super().set_num_updates(num_updates) - self.update_num = num_updates - self.curr_temp = max( - self.max_temp * self.temp_decay ** num_updates, self.min_temp - ) - - def 
discrim_step(self, num_updates): - return num_updates % 2 == 1 - - def get_groups_for_update(self, num_updates): - return "discriminator" if self.discrim_step(num_updates) else "generator" - - def __init__(self, cfg: Wav2vec_UConfig, target_dict): - super().__init__() - - self.cfg = cfg - self.zero_index = target_dict.index("<SIL>") if "<SIL>" in target_dict else 0 - self.smoothness_weight = cfg.smoothness_weight - - output_size = len(target_dict) - self.pad = target_dict.pad() - self.eos = target_dict.eos() - self.smoothing = cfg.smoothing - self.smoothing_one_sided = cfg.smoothing_one_sided - self.no_softmax = cfg.no_softmax - self.gumbel = cfg.gumbel - self.hard_gumbel = cfg.hard_gumbel - self.last_acc = None - - self.gradient_penalty = cfg.gradient_penalty - self.code_penalty = cfg.code_penalty - self.blank_weight = cfg.blank_weight - self.blank_mode = cfg.blank_mode - self.blank_index = target_dict.index("<SIL>") if cfg.blank_is_sil else 0 - assert self.blank_index != target_dict.unk() - - self.discriminator = Discriminator(output_size, cfg) - for p in self.discriminator.parameters(): - p.param_group = "discriminator" - - self.pca_A = self.pca_b = None - d = cfg.input_dim - - self.segmenter = SEGMENT_FACTORY[cfg.segmentation.type](cfg.segmentation) - - self.generator = Generator(d, output_size, cfg) - - for p in self.generator.parameters(): - p.param_group = "generator" - - for p in self.segmenter.parameters(): - p.param_group = "generator" - - self.max_temp, self.min_temp, self.temp_decay = cfg.temp - self.curr_temp = self.max_temp - self.update_num = 0 - - @classmethod - def build_model(cls, cfg, task): - return cls(cfg, task.target_dictionary) - - def get_logits( - self, - net_output: Optional[Dict[str, List[Optional[torch.Tensor]]]], - normalize: bool = False, - ): - logits = net_output["logits"] - - if self.blank_weight != 0: - if self.blank_mode == "add": - logits[..., self.blank_index] += self.blank_weight - elif self.blank_mode == "set": - logits[..., self.blank_index] = self.blank_weight - else: - raise Exception(f"invalid blank mode {self.blank_mode}") - - padding = net_output["padding_mask"] - if padding.any(): - logits[padding] = float("-inf") - logits[padding][..., self.blank_index] = float("inf") - - if normalize: - logits = utils.log_softmax(logits.float(), dim=-1) - - return logits.transpose(0, 1) - - def get_normalized_probs( - self, - net_output: Tuple[ - torch.Tensor, Optional[Dict[str, List[Optional[torch.Tensor]]]] - ], - log_probs: bool, - sample: Optional[Dict[str, torch.Tensor]] = None, - ): - logits = self.get_logits(net_output) - - probs = super().get_normalized_probs(logits, log_probs, sample) - # BTC -> TBC for ctc - probs = probs.transpose(0, 1) - return probs - - def normalize(self, dense_x): - - bsz, tsz, csz = dense_x.shape - - if dense_x.numel() == 0: - raise Exception(dense_x.shape) - _, k = dense_x.max(-1) - hard_x = ( - dense_x.new_zeros(bsz * tsz, csz) - .scatter_(-1, k.view(-1, 1), 1.0) - .view(-1, csz) - ) - hard_probs = torch.mean(hard_x.float(), dim=0) - code_perplexity = torch.exp( - -torch.sum(hard_probs * torch.log(hard_probs + 1e-7), dim=-1) - ) - - avg_probs = torch.softmax(dense_x.reshape(-1, csz).float(), dim=-1).mean(dim=0) - prob_perplexity = torch.exp( - -torch.sum(avg_probs * torch.log(avg_probs + 1e-7), dim=-1) - ) - - if not self.no_softmax: - if self.training and self.gumbel: - dense_x = F.gumbel_softmax( - dense_x.float(), tau=self.curr_temp, hard=self.hard_gumbel - ).type_as(dense_x) - else: - dense_x = dense_x.softmax(-1) - - 
return dense_x, code_perplexity, prob_perplexity - - def forward( - self, - features, - padding_mask, - random_label=None, - dense_x_only=False, - segment=True, - ): - if segment: - features, padding_mask = self.segmenter.pre_segment(features, padding_mask) - - orig_size = features.size(0) * features.size(1) - padding_mask.sum() - - gen_result = self.generator(features, random_label, padding_mask) - - orig_dense_x, token_x = gen_result["dense_x"], gen_result["token_x"] - orig_dense_padding_mask = gen_result["dense_padding_mask"] - - if segment: - dense_x, dense_padding_mask = self.segmenter.logit_segment( - orig_dense_x, orig_dense_padding_mask - ) - else: - dense_x = orig_dense_x - dense_padding_mask = orig_dense_padding_mask - - dense_logits = dense_x - prob_perplexity = None - code_perplexity = None - - if not (self.no_softmax and dense_x_only): - dense_x, code_perplexity, prob_perplexity = self.normalize(dense_logits) - - if dense_x_only or self.discriminator is None: - return { - "logits": dense_x, - "padding_mask": dense_padding_mask, - } - - token_padding_mask = random_label == self.pad - - dense_y = self.discriminator(dense_x, dense_padding_mask) - token_y = self.discriminator(token_x, token_padding_mask) - - sample_size = features.size(0) - - d_step = self.discrim_step(self.update_num) - - fake_smooth = self.smoothing - real_smooth = self.smoothing - if self.smoothing_one_sided: - fake_smooth = 0 - - zero_loss = None - smoothness_loss = None - code_pen = None - - if d_step: - loss_dense = F.binary_cross_entropy_with_logits( - dense_y, - dense_y.new_ones(dense_y.shape) - fake_smooth, - reduction="sum", - ) - loss_token = F.binary_cross_entropy_with_logits( - token_y, - token_y.new_zeros(token_y.shape) + real_smooth, - reduction="sum", - ) - if self.training and self.gradient_penalty > 0: - grad_pen = self.calc_gradient_penalty(token_x, dense_x) - grad_pen = grad_pen.sum() * self.gradient_penalty - else: - grad_pen = None - else: - grad_pen = None - loss_token = None - loss_dense = F.binary_cross_entropy_with_logits( - dense_y, - dense_y.new_zeros(dense_y.shape) + fake_smooth, - reduction="sum", - ) - num_vars = dense_x.size(-1) - if prob_perplexity is not None: - code_pen = (num_vars - prob_perplexity) / num_vars - code_pen = code_pen * sample_size * self.code_penalty - - if self.smoothness_weight > 0: - smoothness_loss = F.mse_loss( - dense_logits[:, :-1], dense_logits[:, 1:], reduction="none" - ) - smoothness_loss[dense_padding_mask[:, 1:]] = 0 - smoothness_loss = ( - smoothness_loss.mean() * sample_size * self.smoothness_weight - ) - - result = { - "losses": { - "grad_pen": grad_pen, - "code_pen": code_pen, - "smoothness": smoothness_loss, - }, - "temp": self.curr_temp, - "code_ppl": code_perplexity, - "prob_ppl": prob_perplexity, - "d_steps": int(d_step), - "sample_size": sample_size, - } - - suff = "_d" if d_step else "_g" - result["losses"]["dense" + suff] = loss_dense - result["losses"]["token" + suff] = loss_token - - return result diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/modules/kmeans_attention.py b/spaces/ICML2022/OFA/fairseq/fairseq/modules/kmeans_attention.py deleted file mode 100644 index 11a7debcf2ac025fb02ba5e672987f87dbbc49a4..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/modules/kmeans_attention.py +++ /dev/null @@ -1,609 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -import math -from inspect import isfunction -from operator import mul -from functools import reduce, wraps - -from 
aml.multimodal_video.utils.einops.lib import rearrange, repeat -from aml.multimodal_video.utils.einops.lib.layers.torch import Rearrange - -from fairseq.modules.local_attention import LocalAttention - -# constants - -TOKEN_SELF_ATTN_VALUE = -5e4 -KMEAN_INIT_ITERS = 10 - -# helper functions - - -def exists(val): - return val is not None - - -def identity(x, *args, **kwargs): - return x - - -def default(x, d): - if not exists(x): - return d if not isfunction(d) else d() - return x - - -def cast_tuple(x): - return x if isinstance(x, tuple) else (x,) - - -def cache_fn(f): - cache = None - - @wraps(f) - def cached_fn(*args, **kwargs): - nonlocal cache - if exists(cache): - return cache - cache = f(*args, **kwargs) - return cache - return cached_fn - - -def to(t): - return {'device': t.device, 'dtype': t.dtype} - - -def find_modules(nn_module, type): - return [module for module in nn_module.modules() if isinstance(module, type)] - - -def is_empty(t): - return t.nelement() == 0 - - -def max_neg_value(tensor): - return -torch.finfo(tensor.dtype).max - - -def batched_index_select(values, indices): - last_dim = values.shape[-1] - return values.gather(2, expand_dim(indices, -1, last_dim)) - - -def merge_dims(ind_from, ind_to, tensor): - shape = list(tensor.shape) - arr_slice = slice(ind_from, ind_to + 1) - shape[arr_slice] = [reduce(mul, shape[arr_slice])] - return tensor.reshape(*shape) - - -def expand_dim(t, dim, k): - t = t.unsqueeze(dim) - expand_shape = [-1] * len(t.shape) - expand_shape[dim] = k - return t.expand(*expand_shape) - - -def scatter_mean(src, t, index, dim, eps=1e-5): - numer = src.scatter_add(dim, index, t) - denom = src.scatter_add(dim, index, torch.ones_like(t)) - return numer / (denom + eps) - - -def split_at_index(dim, index, t): - pre_slices = (slice(None),) * dim - l = (*pre_slices, slice(None, index)) - r = (*pre_slices, slice(index, None)) - return t[l], t[r] - - -def reshape_dim(t, dim, split_dims): - shape = list(t.shape) - num_dims = len(shape) - dim = (dim + num_dims) % num_dims - shape[dim:dim+1] = split_dims - return t.reshape(shape) - - -def ema(old, new, decay): - if not exists(old): - return new - return old * decay + new * (1 - decay) - - -def ema_inplace(moving_avg, new, decay): - if is_empty(moving_avg): - moving_avg.data.copy_(new) - return - moving_avg.data.mul_(decay).add_(new, alpha=(1 - decay)) - -# helper classes - - -def map_first_tuple_or_el(x, fn): - if isinstance(x, tuple): - return (fn(x[0]),) + x[1:] - return fn(x) - - -class Chunk(nn.Module): - def __init__(self, chunks, fn, along_dim=-1): - super().__init__() - self.dim = along_dim - self.chunks = chunks - self.fn = fn - - def forward(self, x, **kwargs): - if self.chunks <= 1: - return self.fn(x, **kwargs) - chunks = x.chunk(self.chunks, dim=self.dim) - return torch.cat([self.fn(c, **kwargs) for c in chunks], dim=self.dim) - - -class PreNorm(nn.ModuleList): - def __init__(self, norm_class, dim, fn): - super().__init__() - self.norm = norm_class(dim) - self.fn = fn - - def forward(self, x, **kwargs): - x = self.norm(x) - return self.fn(x, **kwargs) - - -class ReZero(nn.Module): - def __init__(self, fn): - super().__init__() - self.residual_weight = nn.Parameter(torch.zeros(1)) - self.fn = fn - - def forward(self, x, **kwargs): - x = self.fn(x, **kwargs) - return map_first_tuple_or_el(x, lambda t: t * self.residual_weight) - - -class ScaleNorm(nn.Module): - def __init__(self, dim, eps=1e-5): - super().__init__() - self.g = nn.Parameter(torch.ones(1)) - self.eps = eps - - def forward(self, x): - def 
norm(t): - n = torch.norm(t, dim=-1, keepdim=True).clamp(min=self.eps) - return t / n * self.g - return map_first_tuple_or_el(x, norm) - - -class ProjectInOut(nn.Module): - def __init__(self, fn, dim_in, dim_out, project_out=True): - super().__init__() - self.fn = fn - self.project_in = nn.Linear(dim_in, dim_out) - self.project_out = nn.Linear(dim_out, dim_in) if project_out else identity - - def forward(self, x, **kwargs): - x = self.project_in(x) - x, loss = self.fn(x, **kwargs) - x = self.project_out(x) - return x, loss - - -class MatrixMultiply(nn.Module): - def __init__(self, tensor, transpose=False): - super().__init__() - self.tensor = tensor - self.transpose = transpose - - def forward(self, x): - tensor = self.tensor - if self.transpose: - tensor = tensor.t() - return x @ tensor - -# positional embeddings - - -class DepthWiseConv1d(nn.Module): - def __init__(self, dim_in, dim_out, kernel_size, stride=1, bias=True, causal=False): - super().__init__() - self.padding = ((kernel_size - 1), 0) if causal else (kernel_size // 2, kernel_size // 2) - - self.net = nn.Sequential( - nn.Conv1d(dim_in, dim_in, kernel_size=kernel_size, groups=dim_in, stride=stride, bias=bias), - nn.Conv1d(dim_in, dim_out, 1, bias=bias) - ) - - def forward(self, x): - x = F.pad(x, self.padding, value=0.) - return self.net(x) - - -class FixedPositionalEmbedding(nn.Module): - def __init__(self, dim, max_seq_len): - super().__init__() - inv_freq = 1. / (10000 ** (torch.arange(0, dim, 2).float() / dim)) - position = torch.arange(0, max_seq_len, dtype=torch.float) - sinusoid_inp = torch.einsum("i,j->ij", position, inv_freq) - emb = torch.cat((sinusoid_inp.sin(), sinusoid_inp.cos()), dim=-1) - self.register_buffer('emb', emb) - - def forward(self, x): - return self.emb[None, :x.shape[1], :].to(x) - - -def rotate_every_two(x): - x = rearrange(x, '... (d j) -> ... d j', j=2) - x1, x2 = x.unbind(dim=-1) - x = torch.stack((-x2, x1), dim=-1) - return rearrange(x, '... d j -> ... 
(d j)') - - -def apply_rotary_pos_emb(q, k, sinu_pos): - sinu_pos = rearrange(sinu_pos, '() n (j d) -> n j d', j=2) - sin, cos = sinu_pos.unbind(dim=-2) - sin, cos = map(lambda t: repeat(t, 'b n -> b (n j)', j=2), (sin, cos)) - q, k = map(lambda t: (t * cos) + (rotate_every_two(t) * sin), (q, k)) - return q, k - -# kmeans related function and class - - -def update_kmeans_on_backwards(module): - module.kmean_modules = find_modules(module, Kmeans) - - def hook(_, grad_in, grad_out): - for m in module.kmean_modules: - m.update() - - return module.register_backward_hook(hook) - - -def similarity(x, means): - return torch.einsum('bhld,hcd->bhlc', x, means) - - -def dists_and_buckets(x, means): - dists = similarity(x, means) - _, buckets = torch.max(dists, dim=-1) - return dists, buckets - - -def batched_bincount(index, num_classes, dim=-1): - shape = list(index.shape) - shape[dim] = num_classes - out = index.new_zeros(shape) - out.scatter_add_(dim, index, torch.ones_like(index, dtype=index.dtype)) - return out - - -def kmeans_iter(x, means, buckets=None): - b, h, _, d, dtype, num_clusters = *x.shape, x.dtype, means.shape[1] - - if not exists(buckets): - _, buckets = dists_and_buckets(x, means) - - bins = batched_bincount(buckets, num_clusters).sum(0, keepdim=True) - zero_mask = bins.long() == 0 - - means_ = buckets.new_zeros(b, h, num_clusters, d, dtype=dtype) - means_.scatter_add_(-2, expand_dim(buckets, -1, d), x) - means_ = F.normalize(means_.sum(0, keepdim=True), dim=-1).type(dtype) - - means = torch.where(zero_mask.unsqueeze(-1), means, means_) - means = means.squeeze(0) - return means - - -def distribution(dists, window_size): - _, topk_indices = dists.topk(k=window_size, dim=-2) - indices = topk_indices.transpose(-2, -1) - return indices.reshape(*indices.size()[:2], -1) - - -class Kmeans(nn.Module): - def __init__(self, num_heads, head_dim, num_clusters, ema_decay=0.999, commitment=1e-4): - super().__init__() - self.commitment = commitment - self.ema_decay = ema_decay - - self.register_buffer('means', torch.randn(num_heads, num_clusters, head_dim)) - self.register_buffer('initted', torch.tensor(False)) - self.num_new_means = 0 - self.new_means = None - - @torch.no_grad() - def init(self, x): - if self.initted: - return - _, h, _, d, device, _ = *x.shape, x.device, x.dtype - - num_clusters = self.means.shape[1] - - means = x.transpose(0, 1).contiguous().view(h, -1, d) - num_samples = means.shape[1] - - if num_samples >= num_clusters: - indices = torch.randperm(num_samples, device=device)[:num_clusters] - else: - indices = torch.randint(0, num_samples, (num_clusters,), device=device) - - means = means[:, indices] - - for _ in range(KMEAN_INIT_ITERS): - means = kmeans_iter(x, means) - - self.num_new_means = 0 - self.means.data.copy_(means) - self.initted.data.copy_(torch.tensor(True)) - - @torch.no_grad() - def update(self, new_means=None): - new_means = default(new_means, self.new_means) - assert exists(new_means), 'new kmeans has not been supplied' - ema_inplace(self.means, new_means, self.ema_decay) - - del self.new_means - self.new_means = None - self.num_new_means = 0 - - def forward(self, x, update_means=False): - self.init(x) - - b, dtype = x.shape[0], x.dtype - means = self.means.type(dtype) - x = F.normalize(x, 2, dim=-1).type(dtype) - - with torch.no_grad(): - dists, buckets = dists_and_buckets(x, means) - - routed_means = batched_index_select(expand_dim(means, 0, b), buckets) - loss = F.mse_loss(x, routed_means) * self.commitment - - if update_means: - with torch.no_grad(): - 
means = kmeans_iter(x, means, buckets) - self.new_means = ema(self.new_means, means, self.num_new_means / (self.num_new_means + 1)) - self.num_new_means += 1 - - return dists, loss - -# kmeans attention class - - -class KmeansAttention(nn.Module): - def __init__(self, num_clusters, window_size, num_heads, head_dim, causal=False, dropout=0., ema_decay=0.999, commitment=1e-4, context_window_size=None, receives_context=False, num_mem_kv=0, shared_qk=False): - super().__init__() - self.num_heads = num_heads - self.num_clusters = num_clusters - self.head_dim = head_dim - - self.window_size = window_size - self.context_window_size = default(context_window_size, window_size) - self.causal = causal - - self.shared_qk = shared_qk - self.receives_context = receives_context - self.kmeans = Kmeans(num_heads, head_dim, num_clusters, ema_decay, commitment) - self.dropout = nn.Dropout(dropout) - - self.num_mem_kv = max(num_mem_kv, 1 if causal and not shared_qk else 0) - self.mem_key = nn.Parameter(torch.randn(num_heads, num_clusters, self.num_mem_kv, head_dim)) - self.mem_value = nn.Parameter(torch.randn(num_heads, num_clusters, self.num_mem_kv, head_dim)) - - def forward(self, q, k, v, query_mask=None, key_mask=None, **kwargs): - b, h, t, d, kv_t, wsz, c_wsz, nc, device, dtype = *q.shape, k.shape[2], self.window_size, self.context_window_size, self.num_clusters, q.device, q.dtype - is_reverse = kwargs.pop('_reverse', False) - - out = torch.zeros_like(q, dtype=dtype) - - update_kmeans = self.training and not is_reverse - - key_mask = default(key_mask, query_mask) if not self.receives_context else key_mask - kv_wsz = wsz if not self.receives_context else c_wsz - - wsz = min(wsz, t) - kv_wsz = min(kv_wsz, kv_t) - - if not self.shared_qk or self.receives_context: - dists, aux_loss = self.kmeans(torch.cat((q, k), dim=2), update_kmeans) - q_dists, k_dists = split_at_index(2, t, dists) - indices = distribution(q_dists, wsz) - kv_indices = distribution(k_dists, kv_wsz) - else: - dists, aux_loss = self.kmeans(q, update_kmeans) - k = F.normalize(k, dim=-1).to(q) - indices = distribution(dists, wsz) - kv_indices = indices - - q = batched_index_select(q, indices) - k = batched_index_select(k, kv_indices) - v = batched_index_select(v, kv_indices) - - reshape_with_window = lambda x: x.reshape(b, h, nc, -1, d) - q, k, v = map(reshape_with_window, (q, k, v)) - - m_k, m_v = map(lambda x: expand_dim(x, 0, b).to(q), (self.mem_key, self.mem_value)) - k, v = map(lambda x: torch.cat(x, dim=3), ((m_k, k), (m_v, v))) - - dots = torch.einsum('bhnid,bhnjd->bhnij', q, k) * (d ** -0.5) - - mask_value = max_neg_value(dots) - - if exists(query_mask) or exists(key_mask): - query_mask = default(query_mask, lambda: torch.ones((b, t), device=device).bool()) - key_mask = default(key_mask, lambda: torch.ones((b, kv_t), device=device).bool()) - - q_mask = expand_dim(query_mask, 1, h).gather(2, indices) - kv_mask = expand_dim(key_mask, 1, h).gather(2, kv_indices) - q_mask, kv_mask = map(lambda t: t.reshape(b, h, nc, -1), (q_mask, kv_mask)) - mask = q_mask[:, :, :, :, None] * kv_mask[:, :, :, None, :] - mask = F.pad(mask, (self.num_mem_kv, 0), value=1) - dots.masked_fill_(~mask, mask_value) - del mask - - if self.causal: - q_mask, kv_mask = map(lambda t: t.reshape(b, h, nc, -1), (indices, kv_indices)) - mask = q_mask[:, :, :, :, None] >= kv_mask[:, :, :, None, :] - mask = F.pad(mask, (self.num_mem_kv, 0), value=1) - dots.masked_fill_(~mask, mask_value) - del mask - - if self.shared_qk: - q_mask, kv_mask = map(lambda t: t.reshape(b, h, nc, 
-1), (indices, kv_indices)) - mask = q_mask[:, :, :, :, None] == kv_mask[:, :, :, None, :] - mask = F.pad(mask, (self.num_mem_kv, 0), value=0) - dots.masked_fill_(mask, TOKEN_SELF_ATTN_VALUE) - del mask - - dots = dots.softmax(dim=-1) - dots = self.dropout(dots) - - bo = torch.einsum('bhcij,bhcjd->bhcid', dots, v) - so = torch.reshape(bo, (b, h, -1, bo.shape[-1])).type(dtype) - out = scatter_mean(out, so, indices.unsqueeze(-1).expand_as(so), -2) - return out, aux_loss - -# feedforward - - -class GELU_(nn.Module): - def forward(self, x): - return 0.5 * x * (1 + torch.tanh(math.sqrt(2 / math.pi) * (x + 0.044715 * torch.pow(x, 3)))) - - -GELU = nn.GELU if hasattr(nn, 'GELU') else GELU_ - - -class FeedForward(nn.Module): - def __init__(self, dim, mult=4, dropout=0., activation=None, glu=False): - super().__init__() - activation = default(activation, GELU) - - self.glu = glu - self.w1 = nn.Linear(dim, dim * mult * (2 if glu else 1)) - self.act = activation() - self.dropout = nn.Dropout(dropout) - self.w2 = nn.Linear(dim * mult, dim) - - def forward(self, x, **kwargs): - if not self.glu: - x = self.w1(x) - x = self.act(x) - else: - x, v = self.w1(x).chunk(2, dim=-1) - x = self.act(x) * v - - x = self.dropout(x) - x = self.w2(x) - return x - -# self attention - - -class SelfAttention(nn.Module): - def __init__(self, dim, max_seq_len, heads, local_attn_heads, window_size, dim_head=None, local_attn_window_size=None, local_attn_radius_blocks=1, causal=False, attn_dropout=0., dropout=0., kmeans_ema_decay=0.999, commitment_factor=1e-4, receives_context=False, context_window_size=None, rel_pos_emb=True, num_mem_kv=0, shared_qk=False, conv_query_kernel=9): - super().__init__() - assert dim_head or (dim % heads) == 0, 'hidden dimension must be divisible by number of heads' - assert (max_seq_len % window_size) == 0, 'maximum sequence length must be divisible by the target window size' - assert local_attn_heads <= heads, 'number of local attention heads must be less than total heads' - assert not (receives_context and local_attn_heads > 0), 'local attention cannot be used for self attention with context' - assert not (receives_context and causal), 'contextual attention layer cannot be causal' - - local_attn_window_size = default(local_attn_window_size, window_size) - context_window_size = default(context_window_size, window_size) - - self.shared_qk = shared_qk - self.receives_context = receives_context - self.heads = heads - self.local_attn_heads = local_attn_heads - self.global_attn_heads = heads - local_attn_heads - - self.causal = causal - self.window_size = window_size - - dim_head = default(dim_head, dim // heads) - dim_heads = dim_head * heads - self.dim_head = dim_head - - num_clusters = max_seq_len // window_size - - # local - - local_dim_heads = dim_head * self.local_attn_heads - - if self.local_attn_heads > 0: - rel_pos_emb_config = (dim_head, local_attn_heads) if rel_pos_emb else None - self.local_attn = LocalAttention(dim=dim_head, window_size=local_attn_window_size, causal=causal, dropout=attn_dropout, rel_pos_emb_config=rel_pos_emb_config, look_backward=local_attn_radius_blocks, look_forward=0 if causal else local_attn_radius_blocks) - self.local_to_qkv = nn.Linear(dim, 3 * local_dim_heads) - - # global - - global_dim_heads = dim_head * self.global_attn_heads - - if self.global_attn_heads > 0: - self.global_attn = KmeansAttention(num_clusters, window_size, self.global_attn_heads, dim_head, causal=causal, dropout=attn_dropout, ema_decay=kmeans_ema_decay, commitment=commitment_factor, 
receives_context=receives_context, num_mem_kv=num_mem_kv, shared_qk=shared_qk) - - self.to_q = nn.Sequential( - Rearrange('b n c -> b c n'), - DepthWiseConv1d(dim, global_dim_heads, conv_query_kernel, causal=causal), - Rearrange('b c n -> b n c') - ) - - self.to_v = nn.Linear(dim, global_dim_heads, bias=False) - - if not self.shared_qk: - self.to_k = nn.Linear(dim, global_dim_heads, bias=False) - - # out - - self.to_out = nn.Linear(dim_heads, dim, bias=False) - self.dropout = nn.Dropout(dropout) - - def forward(self, query, key, value, context=None, key_padding_mask=None, context_mask=None, pos_emb=None, **kwargs): - assert not (self.receives_context and not exists(context)), 'context must be passed if self attention is set to receive context' - input_mask = key_padding_mask - x = query.transpose(0, 1) - b, t, _, h, dh = *x.shape, self.heads, self.dim_head - has_local, has_global = map(lambda x: x > 0, (self.local_attn_heads, self.global_attn_heads)) - - split_heads = lambda v: reshape_dim(v, -1, (-1, dh)).transpose(1, 2).contiguous() - - if has_local: - local_qkv = self.local_to_qkv(x).chunk(3, dim=-1) - lq, lk, lv = map(split_heads, local_qkv) - - if has_global: - kv_input = x if not self.receives_context else context - - q, v = self.to_q(x), self.to_v(kv_input) - - if not self.shared_qk: - k = self.to_k(kv_input) - else: - k = self.to_q(kv_input) if self.receives_context else q - - q, k, v = map(split_heads, (q, k, v)) - - out = [] - total_loss = torch.tensor(0., requires_grad=True, **to(x)) - - if has_local: - local_out = self.local_attn(lq, lk, lv, input_mask=input_mask) - out.append(local_out) - - if has_global: - if not self.receives_context and exists(pos_emb): - q, k = apply_rotary_pos_emb(q, k, pos_emb) - - global_out, loss = self.global_attn(q, k, v, query_mask=input_mask, key_mask=context_mask) - total_loss = total_loss + loss - - out.append(global_out) - - out = torch.cat(out, dim=1) - out = out.reshape(b, h, t, -1).transpose(1, 2).reshape(b, t, -1) - out = self.dropout(out.transpose(0, 1)) - # out = self.to_out(out) - return out, total_loss diff --git a/spaces/ICML2022/resefa/models/stylegan2_discriminator.py b/spaces/ICML2022/resefa/models/stylegan2_discriminator.py deleted file mode 100644 index 1802d44b68d0801290dcd691f09f605c3cfc9cbd..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/resefa/models/stylegan2_discriminator.py +++ /dev/null @@ -1,729 +0,0 @@ -# python3.7 -"""Contains the implementation of discriminator described in StyleGAN2. - -Compared to that of StyleGAN, the discriminator in StyleGAN2 mainly adds skip -connections, increases model size and disables progressive growth. This script -ONLY supports config F in the original paper. - -Paper: https://arxiv.org/pdf/1912.04958.pdf - -Official TensorFlow implementation: https://github.com/NVlabs/stylegan2 -""" - -import numpy as np - -import torch -import torch.nn as nn - -from third_party.stylegan2_official_ops import bias_act -from third_party.stylegan2_official_ops import upfirdn2d -from third_party.stylegan2_official_ops import conv2d_gradfix - -__all__ = ['StyleGAN2Discriminator'] - -# Resolutions allowed. -_RESOLUTIONS_ALLOWED = [8, 16, 32, 64, 128, 256, 512, 1024] - -# Architectures allowed. -_ARCHITECTURES_ALLOWED = ['resnet', 'skip', 'origin'] - -# pylint: disable=missing-function-docstring - -class StyleGAN2Discriminator(nn.Module): - """Defines the discriminator network in StyleGAN2. - - NOTE: The discriminator takes images with `RGB` channel order and pixel - range [-1, 1] as inputs. 
- - Settings for the backbone: - - (1) resolution: The resolution of the input image. (default: -1) - (2) init_res: Smallest resolution of the convolutional backbone. - (default: 4) - (3) image_channels: Number of channels of the input image. (default: 3) - (4) architecture: Type of architecture. Support `origin`, `skip`, and - `resnet`. (default: `resnet`) - (5) use_wscale: Whether to use weight scaling. (default: True) - (6) wscale_gain: The factor to control weight scaling. (default: 1.0) - (7) lr_mul: Learning rate multiplier for backbone. (default: 1.0) - (8) mbstd_groups: Group size for the minibatch standard deviation layer. - `0` means disable. (default: 4) - (9) mbstd_channels: Number of new channels (appended to the original feature - map) after the minibatch standard deviation layer. (default: 1) - (10) fmaps_base: Factor to control number of feature maps for each layer. - (default: 32 << 10) - (11) fmaps_max: Maximum number of feature maps in each layer. (default: 512) - (12) filter_kernel: Kernel used for filtering (e.g., downsampling). - (default: (1, 3, 3, 1)) - (13) conv_clamp: A threshold to clamp the output of convolution layers to - avoid overflow under FP16 training. (default: None) - (14) eps: A small value to avoid divide overflow. (default: 1e-8) - - Settings for conditional model: - - (1) label_dim: Dimension of the additional label for conditional generation. - In one-hot conditioning case, it is equal to the number of classes. If - set to 0, conditioning training will be disabled. (default: 0) - (2) embedding_dim: Dimension of the embedding space, if needed. - (default: 512) - (3) embedding_bias: Whether to add bias to embedding learning. - (default: True) - (4) embedding_use_wscale: Whether to use weight scaling for embedding - learning. (default: True) - (5) embedding_lr_mul: Learning rate multiplier for the embedding learning. - (default: 1.0) - (6) normalize_embedding: Whether to normalize the embedding. (default: True) - (7) mapping_layers: Number of layers of the additional mapping network after - embedding. (default: 0) - (8) mapping_fmaps: Number of hidden channels of the additional mapping - network after embedding. (default: 512) - (9) mapping_use_wscale: Whether to use weight scaling for the additional - mapping network. (default: True) - (10) mapping_lr_mul: Learning rate multiplier for the additional mapping - network after embedding. (default: 0.1) - - Runtime settings: - - (1) fp16_res: Layers at resolution higher than (or equal to) this field will - use `float16` precision for computation. This is merely used for - acceleration. If set as `None`, all layers will use `float32` by - default. (default: None) - (2) impl: Implementation mode of some particular ops, e.g., `filtering`, - `bias_act`, etc. `cuda` means using the official CUDA implementation - from StyleGAN2, while `ref` means using the native PyTorch ops. - (default: `cuda`) - """ - - def __init__(self, - # Settings for backbone. - resolution=-1, - init_res=4, - image_channels=3, - architecture='resnet', - use_wscale=True, - wscale_gain=1.0, - lr_mul=1.0, - mbstd_groups=4, - mbstd_channels=1, - fmaps_base=32 << 10, - fmaps_max=512, - filter_kernel=(1, 3, 3, 1), - conv_clamp=None, - eps=1e-8, - # Settings for conditional model. - label_dim=0, - embedding_dim=512, - embedding_bias=True, - embedding_use_wscale=True, - embedding_lr_mul=1.0, - normalize_embedding=True, - mapping_layers=0, - mapping_fmaps=512, - mapping_use_wscale=True, - mapping_lr_mul=0.1): - """Initializes with basic settings. 
- - Raises: - ValueError: If the `resolution` is not supported, or `architecture` - is not supported. - """ - super().__init__() - - if resolution not in _RESOLUTIONS_ALLOWED: - raise ValueError(f'Invalid resolution: `{resolution}`!\n' - f'Resolutions allowed: {_RESOLUTIONS_ALLOWED}.') - architecture = architecture.lower() - if architecture not in _ARCHITECTURES_ALLOWED: - raise ValueError(f'Invalid architecture: `{architecture}`!\n' - f'Architectures allowed: ' - f'{_ARCHITECTURES_ALLOWED}.') - - self.init_res = init_res - self.init_res_log2 = int(np.log2(init_res)) - self.resolution = resolution - self.final_res_log2 = int(np.log2(resolution)) - self.image_channels = image_channels - self.architecture = architecture - self.use_wscale = use_wscale - self.wscale_gain = wscale_gain - self.lr_mul = lr_mul - self.mbstd_groups = mbstd_groups - self.mbstd_channels = mbstd_channels - self.fmaps_base = fmaps_base - self.fmaps_max = fmaps_max - self.filter_kernel = filter_kernel - self.conv_clamp = conv_clamp - self.eps = eps - - self.label_dim = label_dim - self.embedding_dim = embedding_dim - self.embedding_bias = embedding_bias - self.embedding_use_wscale = embedding_use_wscale - self.embedding_lr_mul = embedding_lr_mul - self.normalize_embedding = normalize_embedding - self.mapping_layers = mapping_layers - self.mapping_fmaps = mapping_fmaps - self.mapping_use_wscale = mapping_use_wscale - self.mapping_lr_mul = mapping_lr_mul - - self.pth_to_tf_var_mapping = {} - - # Embedding for conditional discrimination. - self.use_embedding = label_dim > 0 and embedding_dim > 0 - if self.use_embedding: - self.embedding = DenseLayer(in_channels=label_dim, - out_channels=embedding_dim, - add_bias=embedding_bias, - init_bias=0.0, - use_wscale=embedding_use_wscale, - wscale_gain=wscale_gain, - lr_mul=embedding_lr_mul, - activation_type='linear') - self.pth_to_tf_var_mapping['embedding.weight'] = 'LabelEmbed/weight' - if self.embedding_bias: - self.pth_to_tf_var_mapping['embedding.bias'] = 'LabelEmbed/bias' - - if self.normalize_embedding: - self.norm = PixelNormLayer(dim=1, eps=eps) - - for i in range(mapping_layers): - in_channels = (embedding_dim if i == 0 else mapping_fmaps) - out_channels = (embedding_dim if i == (mapping_layers - 1) else - mapping_fmaps) - layer_name = f'mapping{i}' - self.add_module(layer_name, - DenseLayer(in_channels=in_channels, - out_channels=out_channels, - add_bias=True, - init_bias=0.0, - use_wscale=mapping_use_wscale, - wscale_gain=wscale_gain, - lr_mul=mapping_lr_mul, - activation_type='lrelu')) - self.pth_to_tf_var_mapping[f'{layer_name}.weight'] = ( - f'Mapping{i}/weight') - self.pth_to_tf_var_mapping[f'{layer_name}.bias'] = ( - f'Mapping{i}/bias') - - # Convolutional backbone. - for res_log2 in range(self.final_res_log2, self.init_res_log2 - 1, -1): - res = 2 ** res_log2 - in_channels = self.get_nf(res) - out_channels = self.get_nf(res // 2) - block_idx = self.final_res_log2 - res_log2 - - # Input convolution layer for each resolution (if needed). 
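- # The `input{block_idx}` layer is a 1x1 "FromRGB" projection. It is always - # created at the top resolution; for the `skip` architecture one is created - # per resolution, since `forward` repeatedly downsamples the input image and - # re-injects it into the feature stream.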
- if res_log2 == self.final_res_log2 or self.architecture == 'skip': - layer_name = f'input{block_idx}' - self.add_module(layer_name, - ConvLayer(in_channels=image_channels, - out_channels=in_channels, - kernel_size=1, - add_bias=True, - scale_factor=1, - filter_kernel=None, - use_wscale=use_wscale, - wscale_gain=wscale_gain, - lr_mul=lr_mul, - activation_type='lrelu', - conv_clamp=conv_clamp)) - self.pth_to_tf_var_mapping[f'{layer_name}.weight'] = ( - f'{res}x{res}/FromRGB/weight') - self.pth_to_tf_var_mapping[f'{layer_name}.bias'] = ( - f'{res}x{res}/FromRGB/bias') - - # Convolution block for each resolution (except the last one). - if res != self.init_res: - # First layer (kernel 3x3) without downsampling. - layer_name = f'layer{2 * block_idx}' - self.add_module(layer_name, - ConvLayer(in_channels=in_channels, - out_channels=in_channels, - kernel_size=3, - add_bias=True, - scale_factor=1, - filter_kernel=None, - use_wscale=use_wscale, - wscale_gain=wscale_gain, - lr_mul=lr_mul, - activation_type='lrelu', - conv_clamp=conv_clamp)) - self.pth_to_tf_var_mapping[f'{layer_name}.weight'] = ( - f'{res}x{res}/Conv0/weight') - self.pth_to_tf_var_mapping[f'{layer_name}.bias'] = ( - f'{res}x{res}/Conv0/bias') - - # Second layer (kernel 3x3) with downsampling - layer_name = f'layer{2 * block_idx + 1}' - self.add_module(layer_name, - ConvLayer(in_channels=in_channels, - out_channels=out_channels, - kernel_size=3, - add_bias=True, - scale_factor=2, - filter_kernel=filter_kernel, - use_wscale=use_wscale, - wscale_gain=wscale_gain, - lr_mul=lr_mul, - activation_type='lrelu', - conv_clamp=conv_clamp)) - self.pth_to_tf_var_mapping[f'{layer_name}.weight'] = ( - f'{res}x{res}/Conv1_down/weight') - self.pth_to_tf_var_mapping[f'{layer_name}.bias'] = ( - f'{res}x{res}/Conv1_down/bias') - - # Residual branch (kernel 1x1) with downsampling, without bias, - # with linear activation. - if self.architecture == 'resnet': - layer_name = f'residual{block_idx}' - self.add_module(layer_name, - ConvLayer(in_channels=in_channels, - out_channels=out_channels, - kernel_size=1, - add_bias=False, - scale_factor=2, - filter_kernel=filter_kernel, - use_wscale=use_wscale, - wscale_gain=wscale_gain, - lr_mul=lr_mul, - activation_type='linear', - conv_clamp=None)) - self.pth_to_tf_var_mapping[f'{layer_name}.weight'] = ( - f'{res}x{res}/Skip/weight') - - # Convolution block for last resolution. - else: - self.mbstd = MiniBatchSTDLayer( - groups=mbstd_groups, new_channels=mbstd_channels, eps=eps) - - # First layer (kernel 3x3) without downsampling. - layer_name = f'layer{2 * block_idx}' - self.add_module( - layer_name, - ConvLayer(in_channels=in_channels + mbstd_channels, - out_channels=in_channels, - kernel_size=3, - add_bias=True, - scale_factor=1, - filter_kernel=None, - use_wscale=use_wscale, - wscale_gain=wscale_gain, - lr_mul=lr_mul, - activation_type='lrelu', - conv_clamp=conv_clamp)) - self.pth_to_tf_var_mapping[f'{layer_name}.weight'] = ( - f'{res}x{res}/Conv/weight') - self.pth_to_tf_var_mapping[f'{layer_name}.bias'] = ( - f'{res}x{res}/Conv/bias') - - # Second layer, as a fully-connected layer. 
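- # Here `res` equals `init_res` (4 by default), so the feature map is - # flattened from [N, C, res, res] into a vector of length C * res * res - # before the dense layer, matching `Dense0` in the official TF weights.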
- layer_name = f'layer{2 * block_idx + 1}' - self.add_module(layer_name, - DenseLayer(in_channels=in_channels * res * res, - out_channels=in_channels, - add_bias=True, - init_bias=0.0, - use_wscale=use_wscale, - wscale_gain=wscale_gain, - lr_mul=lr_mul, - activation_type='lrelu')) - self.pth_to_tf_var_mapping[f'{layer_name}.weight'] = ( - f'{res}x{res}/Dense0/weight') - self.pth_to_tf_var_mapping[f'{layer_name}.bias'] = ( - f'{res}x{res}/Dense0/bias') - - # Final dense layer to output score. - self.output = DenseLayer(in_channels=in_channels, - out_channels=(embedding_dim - if self.use_embedding - else max(label_dim, 1)), - add_bias=True, - init_bias=0.0, - use_wscale=use_wscale, - wscale_gain=wscale_gain, - lr_mul=lr_mul, - activation_type='linear') - self.pth_to_tf_var_mapping['output.weight'] = 'Output/weight' - self.pth_to_tf_var_mapping['output.bias'] = 'Output/bias' - - # Used for downsampling input image for `skip` architecture. - if self.architecture == 'skip': - self.register_buffer( - 'filter', upfirdn2d.setup_filter(filter_kernel)) - - def get_nf(self, res): - """Gets number of feature maps according to the given resolution.""" - return min(self.fmaps_base // res, self.fmaps_max) - - def forward(self, image, label=None, fp16_res=None, impl='cuda'): - # Check shape. - expected_shape = (self.image_channels, self.resolution, self.resolution) - if image.ndim != 4 or image.shape[1:] != expected_shape: - raise ValueError(f'The input tensor should be with shape ' - f'[batch_size, channel, height, width], where ' - f'`channel` equals to {self.image_channels}, ' - f'`height`, `width` equal to {self.resolution}!\n' - f'But `{image.shape}` is received!') - if self.label_dim > 0: - if label is None: - raise ValueError(f'Model requires an additional label ' - f'(with dimension {self.label_dim}) as input, ' - f'but no label is received!') - batch_size = image.shape[0] - if label.ndim != 2 or label.shape != (batch_size, self.label_dim): - raise ValueError(f'Input label should be with shape ' - f'[batch_size, label_dim], where ' - f'`batch_size` equals to that of ' - f'images ({image.shape[0]}) and ' - f'`label_dim` equals to {self.label_dim}!\n' - f'But `{label.shape}` is received!') - label = label.to(dtype=torch.float32) - if self.use_embedding: - embed = self.embedding(label, impl=impl) - if self.normalize_embedding: - embed = self.norm(embed) - for i in range(self.mapping_layers): - embed = getattr(self, f'mapping{i}')(embed, impl=impl) - - # Cast to `torch.float16` if needed. - if fp16_res is not None and self.resolution >= fp16_res: - image = image.to(torch.float16) - - x = self.input0(image, impl=impl) - - for res_log2 in range(self.final_res_log2, self.init_res_log2, -1): - res = 2 ** res_log2 - # Cast to `torch.float16` if needed. - if fp16_res is not None and res >= fp16_res: - x = x.to(torch.float16) - else: - x = x.to(torch.float32) - - idx = self.final_res_log2 - res_log2 # Block index - - if self.architecture == 'skip' and idx > 0: - image = upfirdn2d.downsample2d(image, self.filter, impl=impl) - # Cast to `torch.float16` if needed. 
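- # The downsampled skip image must share the dtype of the feature tensor at - # this resolution before the `FromRGB` output is added to `x` below.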
- if fp16_res is not None and res >= fp16_res: - image = image.to(torch.float16) - else: - image = image.to(torch.float32) - y = getattr(self, f'input{idx}')(image, impl=impl) - x = x + y - - if self.architecture == 'resnet': - residual = getattr(self, f'residual{idx}')( - x, runtime_gain=np.sqrt(0.5), impl=impl) - x = getattr(self, f'layer{2 * idx}')(x, impl=impl) - x = getattr(self, f'layer{2 * idx + 1}')( - x, runtime_gain=np.sqrt(0.5), impl=impl) - x = x + residual - else: - x = getattr(self, f'layer{2 * idx}')(x, impl=impl) - x = getattr(self, f'layer{2 * idx + 1}')(x, impl=impl) - - # Final output. - idx += 1 - if fp16_res is not None: # Always use FP32 for the last block. - x = x.to(torch.float32) - if self.architecture == 'skip': - image = upfirdn2d.downsample2d(image, self.filter, impl=impl) - if fp16_res is not None: # Always use FP32 for the last block. - image = image.to(torch.float32) - y = getattr(self, f'input{idx}')(image, impl=impl) - x = x + y - x = self.mbstd(x) - x = getattr(self, f'layer{2 * idx}')(x, impl=impl) - x = getattr(self, f'layer{2 * idx + 1}')(x, impl=impl) - x = self.output(x, impl=impl) - - if self.use_embedding: - x = (x * embed).sum(dim=1, keepdim=True) - x = x / np.sqrt(self.embedding_dim) - elif self.label_dim > 0: - x = (x * label).sum(dim=1, keepdim=True) - - results = { - 'score': x, - 'label': label - } - if self.use_embedding: - results['embedding'] = embed - return results - - -class PixelNormLayer(nn.Module): - """Implements pixel-wise feature vector normalization layer.""" - - def __init__(self, dim, eps): - super().__init__() - self.dim = dim - self.eps = eps - - def extra_repr(self): - return f'dim={self.dim}, epsilon={self.eps}' - - def forward(self, x): - scale = (x.square().mean(dim=self.dim, keepdim=True) + self.eps).rsqrt() - return x * scale - - -class MiniBatchSTDLayer(nn.Module): - """Implements the minibatch standard deviation layer.""" - - def __init__(self, groups, new_channels, eps): - super().__init__() - self.groups = groups - self.new_channels = new_channels - self.eps = eps - - def extra_repr(self): - return (f'groups={self.groups}, ' - f'new_channels={self.new_channels}, ' - f'epsilon={self.eps}') - - def forward(self, x): - if self.groups <= 1 or self.new_channels < 1: - return x - - dtype = x.dtype - - N, C, H, W = x.shape - G = min(self.groups, N) # Number of groups. - nC = self.new_channels # Number of channel groups. - c = C // nC # Channels per channel group. - - y = x.reshape(G, -1, nC, c, H, W) # [GnFcHW] - y = y - y.mean(dim=0) # [GnFcHW] - y = y.square().mean(dim=0) # [nFcHW] - y = (y + self.eps).sqrt() # [nFcHW] - y = y.mean(dim=(2, 3, 4)) # [nF] - y = y.reshape(-1, nC, 1, 1) # [nF11] - y = y.repeat(G, 1, H, W) # [NFHW] - x = torch.cat((x, y), dim=1) # [N(C+F)HW] - - assert x.dtype == dtype - return x - - -class ConvLayer(nn.Module): - """Implements the convolutional layer. - - If downsampling is needed (i.e., `scale_factor = 2`), the feature map will - be filtered with `filter_kernel` first. - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size, - add_bias, - scale_factor, - filter_kernel, - use_wscale, - wscale_gain, - lr_mul, - activation_type, - conv_clamp): - """Initializes with layer settings. - - Args: - in_channels: Number of channels of the input tensor. - out_channels: Number of channels of the output tensor. - kernel_size: Size of the convolutional kernels. - add_bias: Whether to add bias onto the convolutional result. - scale_factor: Scale factor for downsampling. 
`1` means skip - downsampling. - filter_kernel: Kernel used for filtering. - use_wscale: Whether to use weight scaling. - wscale_gain: Gain factor for weight scaling. - lr_mul: Learning multiplier for both weight and bias. - activation_type: Type of activation. - conv_clamp: A threshold to clamp the output of convolution layers to - avoid overflow under FP16 training. - """ - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.add_bias = add_bias - self.scale_factor = scale_factor - self.filter_kernel = filter_kernel - self.use_wscale = use_wscale - self.wscale_gain = wscale_gain - self.lr_mul = lr_mul - self.activation_type = activation_type - self.conv_clamp = conv_clamp - - weight_shape = (out_channels, in_channels, kernel_size, kernel_size) - fan_in = kernel_size * kernel_size * in_channels - wscale = wscale_gain / np.sqrt(fan_in) - if use_wscale: - self.weight = nn.Parameter(torch.randn(*weight_shape) / lr_mul) - self.wscale = wscale * lr_mul - else: - self.weight = nn.Parameter( - torch.randn(*weight_shape) * wscale / lr_mul) - self.wscale = lr_mul - - if add_bias: - self.bias = nn.Parameter(torch.zeros(out_channels)) - self.bscale = lr_mul - else: - self.bias = None - self.act_gain = bias_act.activation_funcs[activation_type].def_gain - - if scale_factor > 1: - assert filter_kernel is not None - self.register_buffer( - 'filter', upfirdn2d.setup_filter(filter_kernel)) - fh, fw = self.filter.shape - self.filter_padding = ( - kernel_size // 2 + (fw - scale_factor + 1) // 2, - kernel_size // 2 + (fw - scale_factor) // 2, - kernel_size // 2 + (fh - scale_factor + 1) // 2, - kernel_size // 2 + (fh - scale_factor) // 2) - - def extra_repr(self): - return (f'in_ch={self.in_channels}, ' - f'out_ch={self.out_channels}, ' - f'ksize={self.kernel_size}, ' - f'wscale_gain={self.wscale_gain:.3f}, ' - f'bias={self.add_bias}, ' - f'lr_mul={self.lr_mul:.3f}, ' - f'downsample={self.scale_factor}, ' - f'downsample_filter={self.filter_kernel}, ' - f'act={self.activation_type}, ' - f'clamp={self.conv_clamp}') - - def forward(self, x, runtime_gain=1.0, impl='cuda'): - dtype = x.dtype - - weight = self.weight - if self.wscale != 1.0: - weight = weight * self.wscale - bias = None - if self.bias is not None: - bias = self.bias.to(dtype) - if self.bscale != 1.0: - bias = bias * self.bscale - - if self.scale_factor == 1: # Native convolution without downsampling. - padding = self.kernel_size // 2 - x = conv2d_gradfix.conv2d( - x, weight.to(dtype), stride=1, padding=padding, impl=impl) - else: # Convolution with downsampling. - down = self.scale_factor - f = self.filter - padding = self.filter_padding - # When kernel size = 1, use filtering function for downsampling. - if self.kernel_size == 1: - x = upfirdn2d.upfirdn2d( - x, f, down=down, padding=padding, impl=impl) - x = conv2d_gradfix.conv2d( - x, weight.to(dtype), stride=1, padding=0, impl=impl) - # When kernel size != 1, use stride convolution for downsampling. 
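- # Blurring first (`upfirdn2d` with `down=1`) and then applying a stride-2 - # convolution is equivalent to blur-then-downsample, but lets the - # convolution itself perform the 2x reduction.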
- else: - x = upfirdn2d.upfirdn2d( - x, f, down=1, padding=padding, impl=impl) - x = conv2d_gradfix.conv2d( - x, weight.to(dtype), stride=down, padding=0, impl=impl) - - act_gain = self.act_gain * runtime_gain - act_clamp = None - if self.conv_clamp is not None: - act_clamp = self.conv_clamp * runtime_gain - x = bias_act.bias_act(x, bias, - act=self.activation_type, - gain=act_gain, - clamp=act_clamp, - impl=impl) - - assert x.dtype == dtype - return x - - -class DenseLayer(nn.Module): - """Implements the dense layer.""" - - def __init__(self, - in_channels, - out_channels, - add_bias, - init_bias, - use_wscale, - wscale_gain, - lr_mul, - activation_type): - """Initializes with layer settings. - - Args: - in_channels: Number of channels of the input tensor. - out_channels: Number of channels of the output tensor. - add_bias: Whether to add bias onto the fully-connected result. - init_bias: The initial bias value before training. - use_wscale: Whether to use weight scaling. - wscale_gain: Gain factor for weight scaling. - lr_mul: Learning multiplier for both weight and bias. - activation_type: Type of activation. - """ - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.add_bias = add_bias - self.init_bias = init_bias - self.use_wscale = use_wscale - self.wscale_gain = wscale_gain - self.lr_mul = lr_mul - self.activation_type = activation_type - - weight_shape = (out_channels, in_channels) - wscale = wscale_gain / np.sqrt(in_channels) - if use_wscale: - self.weight = nn.Parameter(torch.randn(*weight_shape) / lr_mul) - self.wscale = wscale * lr_mul - else: - self.weight = nn.Parameter( - torch.randn(*weight_shape) * wscale / lr_mul) - self.wscale = lr_mul - - if add_bias: - init_bias = np.float32(init_bias) / lr_mul - self.bias = nn.Parameter(torch.full([out_channels], init_bias)) - self.bscale = lr_mul - else: - self.bias = None - - def extra_repr(self): - return (f'in_ch={self.in_channels}, ' - f'out_ch={self.out_channels}, ' - f'wscale_gain={self.wscale_gain:.3f}, ' - f'bias={self.add_bias}, ' - f'init_bias={self.init_bias}, ' - f'lr_mul={self.lr_mul:.3f}, ' - f'act={self.activation_type}') - - def forward(self, x, impl='cuda'): - dtype = x.dtype - - if x.ndim != 2: - x = x.flatten(start_dim=1) - - weight = self.weight.to(dtype) * self.wscale - bias = None - if self.bias is not None: - bias = self.bias.to(dtype) - if self.bscale != 1.0: - bias = bias * self.bscale - - # Fast pass for linear activation. - if self.activation_type == 'linear' and bias is not None: - x = torch.addmm(bias.unsqueeze(0), x, weight.t()) - else: - x = x.matmul(weight.t()) - x = bias_act.bias_act(x, bias, act=self.activation_type, impl=impl) - - assert x.dtype == dtype - return x - -# pylint: enable=missing-function-docstring diff --git a/spaces/IDEA-Research/Grounded-SAM/segment_anything/segment_anything/build_sam.py b/spaces/IDEA-Research/Grounded-SAM/segment_anything/segment_anything/build_sam.py deleted file mode 100644 index 07abfca24e96eced7f13bdefd3212ce1b77b8999..0000000000000000000000000000000000000000 --- a/spaces/IDEA-Research/Grounded-SAM/segment_anything/segment_anything/build_sam.py +++ /dev/null @@ -1,107 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
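-# Minimal usage sketch (the checkpoint path below is a placeholder, and the -# import assumes the package's usual `from segment_anything import -# sam_model_registry` entry point; the registry defined in this file maps -# "default"/"vit_h"/"vit_l"/"vit_b" to the matching builder): -# -#     from segment_anything import sam_model_registry -#     sam = sam_model_registry["vit_b"](checkpoint="path/to/sam_vit_b.pth") -#     sam.to("cuda")  # optional; the builder already returns the model in eval mode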
- -import torch - -from functools import partial - -from .modeling import ImageEncoderViT, MaskDecoder, PromptEncoder, Sam, TwoWayTransformer - - -def build_sam_vit_h(checkpoint=None): - return _build_sam( - encoder_embed_dim=1280, - encoder_depth=32, - encoder_num_heads=16, - encoder_global_attn_indexes=[7, 15, 23, 31], - checkpoint=checkpoint, - ) - - -build_sam = build_sam_vit_h - - -def build_sam_vit_l(checkpoint=None): - return _build_sam( - encoder_embed_dim=1024, - encoder_depth=24, - encoder_num_heads=16, - encoder_global_attn_indexes=[5, 11, 17, 23], - checkpoint=checkpoint, - ) - - -def build_sam_vit_b(checkpoint=None): - return _build_sam( - encoder_embed_dim=768, - encoder_depth=12, - encoder_num_heads=12, - encoder_global_attn_indexes=[2, 5, 8, 11], - checkpoint=checkpoint, - ) - - -sam_model_registry = { - "default": build_sam, - "vit_h": build_sam, - "vit_l": build_sam_vit_l, - "vit_b": build_sam_vit_b, -} - - -def _build_sam( - encoder_embed_dim, - encoder_depth, - encoder_num_heads, - encoder_global_attn_indexes, - checkpoint=None, -): - prompt_embed_dim = 256 - image_size = 1024 - vit_patch_size = 16 - image_embedding_size = image_size // vit_patch_size - sam = Sam( - image_encoder=ImageEncoderViT( - depth=encoder_depth, - embed_dim=encoder_embed_dim, - img_size=image_size, - mlp_ratio=4, - norm_layer=partial(torch.nn.LayerNorm, eps=1e-6), - num_heads=encoder_num_heads, - patch_size=vit_patch_size, - qkv_bias=True, - use_rel_pos=True, - global_attn_indexes=encoder_global_attn_indexes, - window_size=14, - out_chans=prompt_embed_dim, - ), - prompt_encoder=PromptEncoder( - embed_dim=prompt_embed_dim, - image_embedding_size=(image_embedding_size, image_embedding_size), - input_image_size=(image_size, image_size), - mask_in_chans=16, - ), - mask_decoder=MaskDecoder( - num_multimask_outputs=3, - transformer=TwoWayTransformer( - depth=2, - embedding_dim=prompt_embed_dim, - mlp_dim=2048, - num_heads=8, - ), - transformer_dim=prompt_embed_dim, - iou_head_depth=3, - iou_head_hidden_dim=256, - ), - pixel_mean=[123.675, 116.28, 103.53], - pixel_std=[58.395, 57.12, 57.375], - ) - sam.eval() - if checkpoint is not None: - with open(checkpoint, "rb") as f: - state_dict = torch.load(f) - sam.load_state_dict(state_dict) - return sam diff --git a/spaces/Illumotion/Koboldcpp/otherarch/llama_v2.h b/spaces/Illumotion/Koboldcpp/otherarch/llama_v2.h deleted file mode 100644 index 2b1cfc725b8a73b7879f07f00410fbdcc5753bfe..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/otherarch/llama_v2.h +++ /dev/null @@ -1,263 +0,0 @@ -#ifndef LLAMA_V2_H -#define LLAMA_V2_H - -#include <stddef.h> -#include <stdint.h> -#include <stdbool.h> - -#ifdef LLAMA_V2_SHARED -# if defined(_WIN32) && !defined(__MINGW32__) -# ifdef LLAMA_V2_BUILD -# define LLAMA_V2_API __declspec(dllexport) -# else -# define LLAMA_V2_API __declspec(dllimport) -# endif -# else -# define LLAMA_V2_API __attribute__ ((visibility ("default"))) -# endif -#else -# define LLAMA_V2_API -#endif - -#define LLAMA_V2_FILE_VERSION 3 -#define LLAMA_V2_FILE_MAGIC 'ggjt' -#define LLAMA_V2_FILE_MAGIC_UNVERSIONED 'ggml' -#define LLAMA_V2_SESSION_MAGIC 'ggsn' -#define LLAMA_V2_SESSION_VERSION 1 - -#ifdef __cplusplus -extern "C" { -#endif - - // - // C interface - // - // TODO: show sample usage - // - - struct llama_v2_context; - - typedef int llama_v2_token; - - typedef struct llama_v2_token_data { - llama_v2_token id; // token id - float logit; // log-odds of the token - float p; // probability of the token - } 
llama_v2_token_data; - - typedef struct llama_v2_token_data_array { - llama_v2_token_data * data; - size_t size; - bool sorted; - } llama_v2_token_data_array; - - typedef void (*llama_v2_progress_callback)(float progress, void *ctx); - - struct llama_v2_context_params { - int n_ctx; // text context - int n_gpu_layers; // number of layers to store in VRAM - int seed; // RNG seed, -1 for random - - bool f16_kv; // use fp16 for KV cache - bool logits_all; // the llama_v2_eval() call computes all logits, not just the last one - bool vocab_only; // only load the vocabulary, no weights - bool use_mmap; // use mmap if possible - bool use_mlock; // force system to keep model in RAM - bool embedding; // embedding mode only - - // called with a progress value between 0 and 1, pass NULL to disable - llama_v2_progress_callback progress_callback; - // context pointer passed to the progress callback - void * progress_callback_user_data; - }; - - // model file types - enum llama_v2_ftype { - LLAMA_V2_FTYPE_ALL_F32 = 0, - LLAMA_V2_FTYPE_MOSTLY_F16 = 1, // except 1d tensors - LLAMA_V2_FTYPE_MOSTLY_Q4_0 = 2, // except 1d tensors - LLAMA_V2_FTYPE_MOSTLY_Q4_1 = 3, // except 1d tensors - LLAMA_V2_FTYPE_MOSTLY_Q4_1_SOME_F16 = 4, // tok_embeddings.weight and output.weight are F16 - LLAMA_V2_FTYPE_MOSTLY_Q4_2 = 5, // except 1d tensors - LLAMA_V2_FTYPE_MOSTLY_Q4_3 = 6, // except 1d tensors - LLAMA_V2_FTYPE_MOSTLY_Q8_0 = 7, // except 1d tensors - LLAMA_V2_FTYPE_MOSTLY_Q5_0 = 8, // except 1d tensors - LLAMA_V2_FTYPE_MOSTLY_Q5_1 = 9, // except 1d tensors - }; - - LLAMA_V2_API struct llama_v2_context_params llama_v2_context_default_params(); - - LLAMA_V2_API bool llama_v2_mmap_supported(); - LLAMA_V2_API bool llama_v2_mlock_supported(); - - // Various functions for loading a ggml llama model. - // Allocate (almost) all memory needed for the model. - // Return NULL on failure - LLAMA_V2_API struct llama_v2_context * llama_v2_init_from_file( - const char * path_model, - struct llama_v2_context_params params); - - // Frees all allocated memory - LLAMA_V2_API void llama_v2_free(struct llama_v2_context * ctx); - - // TODO: not great API - very likely to change - // Returns 0 on success - // nthread - how many threads to use. If <=0, will use std::thread::hardware_concurrency(), else the number given - LLAMA_V2_API int llama_v2_model_quantize( - const char * fname_inp, - const char * fname_out, - enum llama_v2_ftype ftype, - int nthread); - - // Apply a LoRA adapter to a loaded model - // path_base_model is the path to a higher quality model to use as a base for - // the layers modified by the adapter. Can be NULL to use the current loaded model. - // The model needs to be reloaded before applying a new adapter, otherwise the adapter - // will be applied on top of the previous one - // Returns 0 on success - LLAMA_V2_API int llama_v2_apply_lora_from_file( - struct llama_v2_context * ctx, - const char * path_lora, - const char * path_base_model, - int n_threads); - - // Returns the number of tokens in the KV cache - LLAMA_V2_API int llama_v2_get_kv_cache_token_count(const struct llama_v2_context * ctx); - - // Sets the current rng seed. - LLAMA_V2_API void llama_v2_set_rng_seed(struct llama_v2_context * ctx, int seed); - - // Returns the maximum size in bytes of the state (rng, logits, embedding - // and kv_cache) - will often be smaller after compacting tokens - LLAMA_V2_API size_t llama_v2_get_state_size(const struct llama_v2_context * ctx); - - // Copies the state to the specified destination address. 
- // Destination needs to have allocated enough memory. - // Returns the number of bytes copied - LLAMA_V2_API size_t llama_v2_copy_state_data(struct llama_v2_context * ctx, uint8_t * dst); - - // Set the state reading from the specified address - // Returns the number of bytes read - LLAMA_V2_API size_t llama_v2_set_state_data(struct llama_v2_context * ctx, const uint8_t * src); - - // Save/load session file - LLAMA_V2_API bool llama_v2_load_session_file(struct llama_v2_context * ctx, const char * path_session, llama_v2_token * tokens_out, size_t n_token_capacity, size_t * n_token_count_out); - LLAMA_V2_API bool llama_v2_save_session_file(struct llama_v2_context * ctx, const char * path_session, const llama_v2_token * tokens, size_t n_token_count); - - // Run the llama inference to obtain the logits and probabilities for the next token. - // tokens + n_tokens is the provided batch of new tokens to process - // n_past is the number of tokens to use from previous eval calls - // Returns 0 on success - LLAMA_V2_API int llama_v2_eval( - struct llama_v2_context * ctx, - const llama_v2_token * tokens, - int n_tokens, - int n_past, - int n_threads); - - // Convert the provided text into tokens. - // The tokens pointer must be large enough to hold the resulting tokens. - // Returns the number of tokens on success, no more than n_max_tokens - // Returns a negative number on failure - the number of tokens that would have been returned - // TODO: not sure if correct - LLAMA_V2_API int llama_v2_tokenize( - struct llama_v2_context * ctx, - const char * text, - llama_v2_token * tokens, - int n_max_tokens, - bool add_bos); - - - std::vector<llama_v2_token> legacy_llama_v2_tokenize(struct llama_v2_context * ctx, const std::string & text, bool add_bos); - - LLAMA_V2_API int llama_v2_n_vocab(const struct llama_v2_context * ctx); - LLAMA_V2_API int llama_v2_n_ctx (const struct llama_v2_context * ctx); - LLAMA_V2_API int llama_v2_n_embd (const struct llama_v2_context * ctx); - - // Token logits obtained from the last call to llama_v2_eval() - // The logits for the last token are stored in the last row - // Can be mutated in order to change the probabilities of the next token - // Rows: n_tokens - // Cols: n_vocab - LLAMA_V2_API float * llama_v2_get_logits(struct llama_v2_context * ctx); - - // Get the embeddings for the input - // shape: [n_embd] (1-dimensional) - LLAMA_V2_API float * llama_v2_get_embeddings(struct llama_v2_context * ctx); - - // Token Id -> String. Uses the vocabulary in the provided context - LLAMA_V2_API const char * llama_v2_token_to_str(const struct llama_v2_context * ctx, llama_v2_token token); - - // Special tokens - LLAMA_V2_API llama_v2_token llama_v2_token_bos(); - LLAMA_V2_API llama_v2_token llama_v2_token_eos(); - LLAMA_V2_API llama_v2_token llama_v2_token_nl(); - - // Sampling functions - - /// @details Repetition penalty described in CTRL academic paper https://arxiv.org/abs/1909.05858, with negative logit fix. - LLAMA_V2_API void llama_v2_sample_repetition_penalty(struct llama_v2_context * ctx, llama_v2_token_data_array * candidates, const llama_v2_token * last_tokens, size_t last_tokens_size, float penalty); - - /// @details Frequency and presence penalties described in OpenAI API https://platform.openai.com/docs/api-reference/parameter-details. 
- LLAMA_V2_API void llama_v2_sample_frequency_and_presence_penalties(struct llama_v2_context * ctx, llama_v2_token_data_array * candidates, const llama_v2_token * last_tokens, size_t last_tokens_size, float alpha_frequency, float alpha_presence); - - /// @details Sorts candidate tokens by their logits in descending order and calculate probabilities based on logits. - LLAMA_V2_API void llama_v2_sample_softmax(struct llama_v2_context * ctx, llama_v2_token_data_array * candidates); - - /// @details Top-K sampling described in academic paper "The Curious Case of Neural Text Degeneration" https://arxiv.org/abs/1904.09751 - LLAMA_V2_API void llama_v2_sample_top_k(struct llama_v2_context * ctx, llama_v2_token_data_array * candidates, int k, size_t min_keep); - - /// @details Nucleus sampling described in academic paper "The Curious Case of Neural Text Degeneration" https://arxiv.org/abs/1904.09751 - LLAMA_V2_API void llama_v2_sample_top_p(struct llama_v2_context * ctx, llama_v2_token_data_array * candidates, float p, size_t min_keep); - - /// @details Tail Free Sampling described in https://www.trentonbricken.com/Tail-Free-Sampling/. - LLAMA_V2_API void llama_v2_sample_tail_free(struct llama_v2_context * ctx, llama_v2_token_data_array * candidates, float z, size_t min_keep); - - /// @details Locally Typical Sampling implementation described in the paper https://arxiv.org/abs/2202.00666. - LLAMA_V2_API void llama_v2_sample_typical(struct llama_v2_context * ctx, llama_v2_token_data_array * candidates, float p, size_t min_keep); - LLAMA_V2_API void llama_v2_sample_temperature(struct llama_v2_context * ctx, llama_v2_token_data_array * candidates, float temp); - - /// @details Mirostat 1.0 algorithm described in the paper https://arxiv.org/abs/2007.14966. Uses tokens instead of words. - /// @param candidates A vector of `llama_v2_token_data` containing the candidate tokens, their probabilities (p), and log-odds (logit) for the current position in the generated text. - /// @param tau The target cross-entropy (or surprise) value you want to achieve for the generated text. A higher value corresponds to more surprising or less predictable text, while a lower value corresponds to less surprising or more predictable text. - /// @param eta The learning rate used to update `mu` based on the error between the target and observed surprisal of the sampled word. A larger learning rate will cause `mu` to be updated more quickly, while a smaller learning rate will result in slower updates. - /// @param m The number of tokens considered in the estimation of `s_hat`. This is an arbitrary value that is used to calculate `s_hat`, which in turn helps to calculate the value of `k`. In the paper, they use `m = 100`, but you can experiment with different values to see how it affects the performance of the algorithm. - /// @param mu Maximum cross-entropy. This value is initialized to be twice the target cross-entropy (`2 * tau`) and is updated in the algorithm based on the error between the target and observed surprisal. - LLAMA_V2_API llama_v2_token llama_v2_sample_token_mirostat(struct llama_v2_context * ctx, llama_v2_token_data_array * candidates, float tau, float eta, int m, float * mu); - - /// @details Mirostat 2.0 algorithm described in the paper https://arxiv.org/abs/2007.14966. Uses tokens instead of words. - /// @param candidates A vector of `llama_v2_token_data` containing the candidate tokens, their probabilities (p), and log-odds (logit) for the current position in the generated text. 
- /// @param tau The target cross-entropy (or surprise) value you want to achieve for the generated text. A higher value corresponds to more surprising or less predictable text, while a lower value corresponds to less surprising or more predictable text. - /// @param eta The learning rate used to update `mu` based on the error between the target and observed surprisal of the sampled word. A larger learning rate will cause `mu` to be updated more quickly, while a smaller learning rate will result in slower updates. - /// @param mu Maximum cross-entropy. This value is initialized to be twice the target cross-entropy (`2 * tau`) and is updated in the algorithm based on the error between the target and observed surprisal. - LLAMA_V2_API llama_v2_token llama_v2_sample_token_mirostat_v2(struct llama_v2_context * ctx, llama_v2_token_data_array * candidates, float tau, float eta, float * mu); - - /// @details Selects the token with the highest probability. - LLAMA_V2_API llama_v2_token llama_v2_sample_token_greedy(struct llama_v2_context * ctx, llama_v2_token_data_array * candidates); - - /// @details Randomly selects a token from the candidates based on their probabilities. - LLAMA_V2_API llama_v2_token llama_v2_sample_token(struct llama_v2_context * ctx, llama_v2_token_data_array * candidates); - - // Performance information - LLAMA_V2_API void llama_v2_print_timings(struct llama_v2_context * ctx); - LLAMA_V2_API void llama_v2_reset_timings(struct llama_v2_context * ctx); - - // Print system information - LLAMA_V2_API const char * llama_v2_print_system_info(void); - -#ifdef __cplusplus -} -#endif - -// Internal API to be implemented by llama.cpp and used by tests/benchmarks only -#ifdef LLAMA_V2_API_INTERNAL - -#include <vector> -#include <string> -struct ggml_v2_tensor; - -std::vector<std::pair<std::string, struct ggml_v2_tensor *>>& llama_v2_internal_get_tensor_map(struct llama_v2_context * ctx); - -#endif - -#endif // LLAMA_V2_H diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/models/ade20k/segm_lib/utils/data/distributed.py b/spaces/InpaintAI/Inpaint-Anything/third_party/lama/models/ade20k/segm_lib/utils/data/distributed.py deleted file mode 100644 index c3d890e28fd2b9e044bdd9494de4a43ad2471eed..0000000000000000000000000000000000000000 --- a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/models/ade20k/segm_lib/utils/data/distributed.py +++ /dev/null @@ -1,58 +0,0 @@ -import math -import torch -from .sampler import Sampler -from torch.distributed import get_world_size, get_rank - - -class DistributedSampler(Sampler): - """Sampler that restricts data loading to a subset of the dataset. - - It is especially useful in conjunction with - :class:`torch.nn.parallel.DistributedDataParallel`. In such case, each - process can pass a DistributedSampler instance as a DataLoader sampler, - and load a subset of the original dataset that is exclusive to it. - - .. note:: - Dataset is assumed to be of constant size. - - Arguments: - dataset: Dataset used for sampling. - num_replicas (optional): Number of processes participating in - distributed training. - rank (optional): Rank of the current process within num_replicas. 
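- - Example (typical usage; ``DataLoader`` is ``torch.utils.data.DataLoader``): - - >>> sampler = DistributedSampler(dataset) - >>> loader = DataLoader(dataset, batch_size=32, sampler=sampler) - >>> for epoch in range(num_epochs): - ...     sampler.set_epoch(epoch)  # reseed the shuffle each epoch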
- """ - - def __init__(self, dataset, num_replicas=None, rank=None): - if num_replicas is None: - num_replicas = get_world_size() - if rank is None: - rank = get_rank() - self.dataset = dataset - self.num_replicas = num_replicas - self.rank = rank - self.epoch = 0 - self.num_samples = int(math.ceil(len(self.dataset) * 1.0 / self.num_replicas)) - self.total_size = self.num_samples * self.num_replicas - - def __iter__(self): - # deterministically shuffle based on epoch - g = torch.Generator() - g.manual_seed(self.epoch) - indices = list(torch.randperm(len(self.dataset), generator=g)) - - # add extra samples to make it evenly divisible - indices += indices[:(self.total_size - len(indices))] - assert len(indices) == self.total_size - - # subsample - offset = self.num_samples * self.rank - indices = indices[offset:offset + self.num_samples] - assert len(indices) == self.num_samples - - return iter(indices) - - def __len__(self): - return self.num_samples - - def set_epoch(self, epoch): - self.epoch = epoch diff --git a/spaces/JUNGU/Image-to-Story-Ko-multiplot/app.py b/spaces/JUNGU/Image-to-Story-Ko-multiplot/app.py deleted file mode 100644 index 2177d432520e9b4dd84946910354bb3e07fcb27b..0000000000000000000000000000000000000000 --- a/spaces/JUNGU/Image-to-Story-Ko-multiplot/app.py +++ /dev/null @@ -1,132 +0,0 @@ -from fpdf import FPDF -import gradio as gr -from urllib.parse import quote -import os -import openai -from gradio_client import Client -import requests -from PIL import Image -from io import BytesIO - - -def generate_image_url(keywords): - truncated_keywords = keywords[:20] - return f"https://image.pollinations.ai/prompt/{quote(truncated_keywords)}" - -def download_image(image_url): - response = requests.get(image_url) - img = Image.open(BytesIO(response.content)) - local_image_path = f"/tmp/{image_url.split('/')[-1]}.png" - img.save(local_image_path) - return local_image_path - -class PDF(FPDF): - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - self.add_font('NanumGothic', '', '/home/user/app/NanumGothic.ttf', uni=True) - self.set_font('/home/user/app/NanumGothic.ttf', '', 12) - - def chapter_title(self, num, label): - self.set_font('/home/user/app/NanumGothic.ttf', '', 15) - self.cell(0, 10, f"Chapter {num} : {label}", 0, 1, 'C') - self.ln(10) - - def chapter_body(self, body, image_url): - self.set_font('/home/user/app/NanumGothic.ttf', '', 12) - self.multi_cell(0, 10, body) - self.image(image_url, x=10, w=100) - self.ln(60) - - def add_chapter(self, num, title, body, image_url): - local_image_path = download_image(image_url) - self.add_page() - self.chapter_title(num, title) - self.chapter_body(body, local_image_path) - -def save_as_pdf(): - global chapters - pdf_path = "/mnt/data/chapters.pdf" - pdf = PDF() - for i, chapter in enumerate(chapters, start=1): - pdf.add_chapter(i, f"Chapter {i}", chapter["story"], chapter["image"]) - pdf.output(pdf_path) - return pdf_path - -def create_markdown_table(): - global chapters - markdown_table = "| 챕터 | 이야기 | 그림 |\n|-------|-------|------|\n" - for i, chapter in enumerate(chapters, start=1): - markdown_table += f"| Chapter {i} | {chapter['story']} | ![Chapter {i} Image]({chapter['image']}) |\n" - return markdown_table - -OPENAI_API_KEY = os.environ.get('OPENAI_API_KEY') -clipi_client = Client("https://fffiloni-clip-interrogator-2.hf.space/") - -chapter_num = 1 -story_intro = "" -current_story = "" -current_image_url = "" -chapters = [] - -def next_chapter(audience, keyword, protagonist): - global chapter_num, 
current_story, current_image_url, chapters - current_image_url = generate_image_url(current_story) - gr.Info(f'Chapter {chapter_num}를 생성하고 있습니다...') - chapter_prompt = f"{story_intro}\n\nKeyword: {keyword}\nProtagonist: {protagonist}\n\n![Chapter {chapter_num} Image]({current_image_url})\n\nChapter {chapter_num} 내용을 만들어줘." - chat_completion = openai.ChatCompletion.create(model="gpt-3.5-turbo-16k", messages=[{"role": "user", "content": chapter_prompt}]) - current_story = chat_completion.choices[0].message.content - chapters.append({"story": current_story, "image": current_image_url}) - chapter_num += 1 - return current_story, current_image_url, create_markdown_table() - -def infer(image_input, audience, keyword, protagonist): - global story_intro, current_story, current_image_url, chapter_num, chapters - chapter_num = 1 - chapters = [] - gr.Info('Calling CLIP Interrogator, 이미지를 해석하고 있습니다...') - clipi_result = clipi_client.predict(image_input, "best", 4, api_name="/clipi2")[0] - story_intro = f""" - # Illustrated Tales - ## Created by [Sigkawat Pengnoo](https://flowgpt.com/prompt/qzv2D3OvHkzkfSE4rQCqv) at FlowGPT - Keyword: {keyword} - Protagonist: {protagonist} - 한국어로 답변해줘. - STORY : "{{ {clipi_result} }}" - Let's begin with Chapter 1! - """ - current_story = clipi_result - return next_chapter(audience, keyword, protagonist) - -css = """ -#col-container {max-width: 910px; margin-left: auto; margin-right: auto;} -a {text-decoration-line: underline; font-weight: 600;} -""" -with gr.Blocks(css=css) as demo: - with gr.Column(elem_id="col-container"): - gr.Markdown( - """ - <h1 style="text-align: center">Illustrated Tales - Korean</h1> - <p style="text-align: center">이미지를 업로드하세요, ChatGPT를 통해 한국어로 이야기와 그림을 만들어 줍니다!</p> - """ - ) - with gr.Row(): - with gr.Column(): - image_in = gr.Image(label="이미지 입력", type="filepath", elem_id="image-in", height=420) - audience = gr.Radio(label="대상", choices=["Children", "Adult"], value="Children") - keyword_in = gr.Textbox(label="핵심 키워드") - protagonist_in = gr.Textbox(label="주인공") - submit_btn = gr.Button('이야기와 그림을 만들어 주세요') - next_chapter_btn = gr.Button('다음 이야기') - save_pdf_btn = gr.Button('PDF로 저장하기') - with gr.Column(elem_id="component-10"): - chapter_story = gr.Markdown(label="이야기", elem_id="chapter_story") - chapter_image = gr.Image(label="그림", elem_id="chapter_image") - - table_markdown = gr.Markdown(label="이야기와 그림 정리", elem_id="table_markdown") - - submit_btn.click(fn=infer, inputs=[image_in, audience, keyword_in, protagonist_in], outputs=[chapter_story, chapter_image, table_markdown]) - next_chapter_btn.click(fn=next_chapter, inputs=[audience, keyword_in, protagonist_in], outputs=[chapter_story, chapter_image, table_markdown]) - save_pdf_btn.click(fn=save_as_pdf, outputs=gr.File(label="PDF 다운로드")) - - demo.queue(max_size=12).launch() - diff --git a/spaces/Jaehan/zero-shot-classification-2/app.py b/spaces/Jaehan/zero-shot-classification-2/app.py deleted file mode 100644 index bb54bd25a274cfa0efe4443c246f89c33dc9ddfe..0000000000000000000000000000000000000000 --- a/spaces/Jaehan/zero-shot-classification-2/app.py +++ /dev/null @@ -1,20 +0,0 @@ -from transformers import BartForSequenceClassification, BartTokenizer -import gradio as grad - -model_name = "facebook/bart-large-mnli" -bart_tokenizer = BartTokenizer.from_pretrained(model_name) -model = BartForSequenceClassification.from_pretrained(model_name) - -def classify(text, label): - token_ids = bart_tokenizer.encode(text, label,
return_tensors="pt") - token_logits = model(token_ids)[0] - entail_contra_token_logits = token_logits[:, [0, 2]] - probabilities = entail_contra_token_logits.softmax(dim=1) - response = probabilities[:, 1].item() * 100 - return response - -in_text = grad.Textbox(lines=1, label="English", placeholder="Text to be classified") -in_labels = grad.Textbox(lines=1, label="Label", placeholder="Input a label") -out = grad.Textbox(lines=1, label="Probability of label being true is ") - -grad.Interface(classify, inputs=[in_text, in_labels], outputs=[out]).launch() \ No newline at end of file diff --git a/spaces/Jamel887/Rv-percobaan887/app.py b/spaces/Jamel887/Rv-percobaan887/app.py deleted file mode 100644 index c78eb66470a6c6adff0b6f8a21617014a90b822e..0000000000000000000000000000000000000000 --- a/spaces/Jamel887/Rv-percobaan887/app.py +++ /dev/null @@ -1,503 +0,0 @@ -import os -import glob -import json -import traceback -import logging -import gradio as gr -import numpy as np -import librosa -import torch -import asyncio -import edge_tts -import yt_dlp -import ffmpeg -import subprocess -import sys -import io -import wave -from datetime import datetime -from fairseq import checkpoint_utils -from lib.infer_pack.models import ( - SynthesizerTrnMs256NSFsid, - SynthesizerTrnMs256NSFsid_nono, - SynthesizerTrnMs768NSFsid, - SynthesizerTrnMs768NSFsid_nono, -) -from vc_infer_pipeline import VC -from config import Config -config = Config() -logging.getLogger("numba").setLevel(logging.WARNING) -limitation = os.getenv("SYSTEM") == "spaces" - -audio_mode = [] -f0method_mode = [] -f0method_info = "" -if limitation is True: - audio_mode = ["Upload audio", "TTS Audio"] - f0method_mode = ["pm", "crepe", "harvest"] - f0method_info = "PM is fast, Crepe or harvest is good but it was extremely slow (Default: PM)" -else: - audio_mode = ["Upload audio", "Youtube", "TTS Audio"] - f0method_mode = ["pm", "crepe", "harvest"] - f0method_info = "PM is fast, Crepe or harvest is good but it was extremely slow (Default: PM))" -def create_vc_fn(model_title, tgt_sr, net_g, vc, if_f0, version, file_index): - def vc_fn( - vc_audio_mode, - vc_input, - vc_upload, - tts_text, - tts_voice, - f0_up_key, - f0_method, - index_rate, - filter_radius, - resample_sr, - rms_mix_rate, - protect, - ): - try: - if vc_audio_mode == "Input path" or "Youtube" and vc_input != "": - audio, sr = librosa.load(vc_input, sr=16000, mono=True) - elif vc_audio_mode == "Upload audio": - if vc_upload is None: - return "You need to upload an audio", None - sampling_rate, audio = vc_upload - duration = audio.shape[0] / sampling_rate - if duration > 360 and limitation: - return "Please upload an audio file that is less than 1 minute.", None - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - elif vc_audio_mode == "TTS Audio": - if len(tts_text) > 600 and limitation: - return "Text is too long", None - if tts_text is None or tts_voice is None: - return "You need to enter text and select a voice", None - asyncio.run(edge_tts.Communicate(tts_text, "-".join(tts_voice.split('-')[:-1])).save("tts.mp3")) - audio, sr = librosa.load("tts.mp3", sr=16000, mono=True) - vc_input = "tts.mp3" - times = [0, 0, 0] - f0_up_key = int(f0_up_key) - audio_opt = vc.pipeline( - hubert_model, - net_g, - 0, - audio, - vc_input, - times, - f0_up_key, - f0_method, - file_index, - # file_big_npy, - 
index_rate, - if_f0, - filter_radius, - tgt_sr, - resample_sr, - rms_mix_rate, - version, - protect, - f0_file=None, - ) - info = f"[{datetime.now().strftime('%Y-%m-%d %H:%M')}]: npy: {times[0]}, f0: {times[1]}s, infer: {times[2]}s" - print(f"{model_title} | {info}") - return info, (tgt_sr, audio_opt) - except: - info = traceback.format_exc() - print(info) - return info, (None, None) - return vc_fn - -def load_model(): - categories = [] - with open("weights/folder_info.json", "r", encoding="utf-8") as f: - folder_info = json.load(f) - for category_name, category_info in folder_info.items(): - if not category_info['enable']: - continue - category_title = category_info['title'] - category_folder = category_info['folder_path'] - models = [] - with open(f"weights/{category_folder}/model_info.json", "r", encoding="utf-8") as f: - models_info = json.load(f) - for character_name, info in models_info.items(): - if not info['enable']: - continue - model_title = info['title'] - model_name = info['model_path'] - model_author = info.get("author", None) - model_cover = f"weights/{category_folder}/{character_name}/{info['cover']}" - model_index = f"weights/{category_folder}/{character_name}/{info['feature_retrieval_library']}" - cpt = torch.load(f"weights/{category_folder}/{character_name}/{model_name}", map_location="cpu") - tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk - if_f0 = cpt.get("f0", 1) - version = cpt.get("version", "v1") - if version == "v1": - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=config.is_half) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - model_version = "V1" - elif version == "v2": - if if_f0 == 1: - net_g = SynthesizerTrnMs768NSFsid(*cpt["config"], is_half=config.is_half) - else: - net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"]) - model_version = "V2" - del net_g.enc_q - print(net_g.load_state_dict(cpt["weight"], strict=False)) - net_g.eval().to(config.device) - if config.is_half: - net_g = net_g.half() - else: - net_g = net_g.float() - vc = VC(tgt_sr, config) - print(f"Model loaded: {character_name} / {info['feature_retrieval_library']} | ({model_version})") - models.append((character_name, model_title, model_author, model_cover, model_version, create_vc_fn(model_title, tgt_sr, net_g, vc, if_f0, version, model_index))) - categories.append([category_title, category_folder, models]) - return categories - -def cut_vocal_and_inst(url, audio_provider, split_model): - if url != "": - if not os.path.exists("dl_audio"): - os.mkdir("dl_audio") - if audio_provider == "Youtube": - ydl_opts = { - 'format': 'bestaudio/best', - 'postprocessors': [{ - 'key': 'FFmpegExtractAudio', - 'preferredcodec': 'wav', - }], - "outtmpl": 'dl_audio/youtube_audio', - } - with yt_dlp.YoutubeDL(ydl_opts) as ydl: - ydl.download([url]) - audio_path = "dl_audio/youtube_audio.wav" - else: - # Spotify doesnt work. - # Need to find other solution soon. 
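- # NOTE: the spotdl block below is intentionally commented out. When the - # provider is not Youtube, `audio_path` is never assigned, so the demucs - # commands further down would fail; only the Youtube provider is usable.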
- ''' - command = f"spotdl download {url} --output dl_audio/.wav" - result = subprocess.run(command.split(), stdout=subprocess.PIPE) - print(result.stdout.decode()) - audio_path = "dl_audio/spotify_audio.wav" - ''' - if split_model == "htdemucs": - command = f"demucs --two-stems=vocals {audio_path} -o output" - result = subprocess.run(command.split(), stdout=subprocess.PIPE) - print(result.stdout.decode()) - return "output/htdemucs/youtube_audio/vocals.wav", "output/htdemucs/youtube_audio/no_vocals.wav", audio_path, "output/htdemucs/youtube_audio/vocals.wav" - else: - command = f"demucs --two-stems=vocals -n mdx_extra_q {audio_path} -o output" - result = subprocess.run(command.split(), stdout=subprocess.PIPE) - print(result.stdout.decode()) - return "output/mdx_extra_q/youtube_audio/vocals.wav", "output/mdx_extra_q/youtube_audio/no_vocals.wav", audio_path, "output/mdx_extra_q/youtube_audio/vocals.wav" - else: - raise gr.Error("URL Required!") - return None, None, None, None - -def combine_vocal_and_inst(audio_data, audio_volume, split_model): - if not os.path.exists("output/result"): - os.mkdir("output/result") - vocal_path = "output/result/output.wav" - output_path = "output/result/combine.mp3" - if split_model == "htdemucs": - inst_path = "output/htdemucs/youtube_audio/no_vocals.wav" - else: - inst_path = "output/mdx_extra_q/youtube_audio/no_vocals.wav" - with wave.open(vocal_path, "w") as wave_file: - wave_file.setnchannels(1) - wave_file.setsampwidth(2) - wave_file.setframerate(audio_data[0]) - wave_file.writeframes(audio_data[1].tobytes()) - command = f'ffmpeg -y -i {inst_path} -i {vocal_path} -filter_complex [1:a]volume={audio_volume}dB[v];[0:a][v]amix=inputs=2:duration=longest -b:a 320k -c:a libmp3lame {output_path}' - result = subprocess.run(command.split(), stdout=subprocess.PIPE) - print(result.stdout.decode()) - return output_path - -def load_hubert(): - global hubert_model - models, _, _ = checkpoint_utils.load_model_ensemble_and_task( - ["hubert_base.pt"], - suffix="", - ) - hubert_model = models[0] - hubert_model = hubert_model.to(config.device) - if config.is_half: - hubert_model = hubert_model.half() - else: - hubert_model = hubert_model.float() - hubert_model.eval() - -def change_audio_mode(vc_audio_mode): - if vc_audio_mode == "Input path": - return ( - # Input & Upload - gr.Textbox.update(visible=True), - gr.Audio.update(visible=False), - # Youtube - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False), - gr.Button.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Slider.update(visible=False), - gr.Audio.update(visible=False), - gr.Button.update(visible=False), - # TTS - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - elif vc_audio_mode == "Upload audio": - return ( - # Input & Upload - gr.Textbox.update(visible=False), - gr.Audio.update(visible=True), - # Youtube - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False), - gr.Button.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Slider.update(visible=False), - gr.Audio.update(visible=False), - gr.Button.update(visible=False), - # TTS - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - elif vc_audio_mode == "Youtube": - return ( - # Input & Upload - gr.Textbox.update(visible=False), - 
gr.Audio.update(visible=False), - # Youtube - gr.Dropdown.update(visible=True), - gr.Textbox.update(visible=True), - gr.Dropdown.update(visible=True), - gr.Button.update(visible=True), - gr.Audio.update(visible=True), - gr.Audio.update(visible=True), - gr.Audio.update(visible=True), - gr.Slider.update(visible=True), - gr.Audio.update(visible=True), - gr.Button.update(visible=True), - # TTS - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - elif vc_audio_mode == "TTS Audio": - return ( - # Input & Upload - gr.Textbox.update(visible=False), - gr.Audio.update(visible=False), - # Youtube - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False), - gr.Button.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Slider.update(visible=False), - gr.Audio.update(visible=False), - gr.Button.update(visible=False), - # TTS - gr.Textbox.update(visible=True), - gr.Dropdown.update(visible=True) - ) - else: - return ( - # Input & Upload - gr.Textbox.update(visible=False), - gr.Audio.update(visible=True), - # Youtube - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False), - gr.Button.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Slider.update(visible=False), - gr.Audio.update(visible=False), - gr.Button.update(visible=False), - # TTS - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - -if __name__ == '__main__': - load_hubert() - categories = load_model() - tts_voice_list = asyncio.get_event_loop().run_until_complete(edge_tts.list_voices()) - voices = [f"{v['ShortName']}-{v['Gender']}" for v in tts_voice_list] - with gr.Blocks(theme=gr.themes.Base()) as app: - gr.Markdown( - "# <center> Hololive RVC Models\n" - "### <center> will update every hololive ai model that i can find or make.\n" - "[![image](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/aziib/hololive-rvc-models-v2/blob/main/hololive_rvc_models_v2.ipynb)\n\n" - "[![ko-fi](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/megaaziib)\n\n" - ) - for (folder_title, folder, models) in categories: - with gr.TabItem(folder_title): - with gr.Tabs(): - if not models: - gr.Markdown("# <center> No Model Loaded.") - gr.Markdown("## <center> Please add model or fix your model path.") - continue - for (name, title, author, cover, model_version, vc_fn) in models: - with gr.TabItem(name): - with gr.Row(): - gr.Markdown( - '<div align="center">' - f'<div>{title}</div>\n'+ - f'<div>RVC {model_version} Model</div>\n'+ - (f'<div>Model author: {author}</div>' if author else "")+ - (f'<img style="width:auto;height:300px;" src="file/{cover}">' if cover else "")+ - '</div>' - ) - with gr.Row(): - with gr.Column(): - vc_audio_mode = gr.Dropdown(label="Input voice", choices=audio_mode, allow_custom_value=False, value="Upload audio") - # Input and Upload - vc_input = gr.Textbox(label="Input audio path", visible=False) - vc_upload = gr.Audio(label="Upload audio file", visible=True, interactive=True) - # Youtube - vc_download_audio = gr.Dropdown(label="Provider", choices=["Youtube"], allow_custom_value=False, visible=False, value="Youtube", info="Select provider (Default: Youtube)") - vc_link = gr.Textbox(label="Youtube URL", visible=False, info="Example: 
https://www.youtube.com/watch?v=Nc0sB1Bmf-A", placeholder="https://www.youtube.com/watch?v=...") - vc_split_model = gr.Dropdown(label="Splitter Model", choices=["htdemucs", "mdx_extra_q"], allow_custom_value=False, visible=False, value="htdemucs", info="Select the splitter model (Default: htdemucs)") - vc_split = gr.Button("Split Audio", variant="primary", visible=False) - vc_vocal_preview = gr.Audio(label="Vocal Preview", visible=False) - vc_inst_preview = gr.Audio(label="Instrumental Preview", visible=False) - vc_audio_preview = gr.Audio(label="Audio Preview", visible=False) - # TTS - tts_text = gr.Textbox(visible=False, label="TTS text", info="Text to speech input") - tts_voice = gr.Dropdown(label="Edge-tts speaker", choices=voices, visible=False, allow_custom_value=False, value="en-US-AnaNeural-Female") - with gr.Column(): - vc_transform0 = gr.Number(label="Transpose", value=0, info='Type "12" to change from male to female voice. Type "-12" to change female to male voice') - f0method0 = gr.Radio( - label="Pitch extraction algorithm", - info=f0method_info, - choices=f0method_mode, - value="pm", - interactive=True - ) - index_rate1 = gr.Slider( - minimum=0, - maximum=1, - label="Retrieval feature ratio", - info="Accents controling. Too high prob gonna sounds too robotic (Default: 0.4)", - value=0.4, - interactive=True, - ) - filter_radius0 = gr.Slider( - minimum=0, - maximum=7, - label="Apply Median Filtering", - info="The value represents the filter radius and can reduce breathiness.", - value=1, - step=1, - interactive=True, - ) - resample_sr0 = gr.Slider( - minimum=0, - maximum=48000, - label="Resample the output audio", - info="Resample the output audio in post-processing to the final sample rate. Set to 0 for no resampling", - value=0, - step=1, - interactive=True, - ) - rms_mix_rate0 = gr.Slider( - minimum=0, - maximum=1, - label="Volume Envelope", - info="Use the volume envelope of the input to replace or mix with the volume envelope of the output. The closer the ratio is to 1, the more the output envelope is used", - value=1, - interactive=True, - ) - protect0 = gr.Slider( - minimum=0, - maximum=0.5, - label="Voice Protection", - info="Protect voiceless consonants and breath sounds to prevent artifacts such as tearing in electronic music. Set to 0.5 to disable. 
Decrease the value to increase protection, but it may reduce indexing accuracy", - value=0.23, - step=0.01, - interactive=True, - ) - with gr.Column(): - vc_log = gr.Textbox(label="Output Information", interactive=False) - vc_output = gr.Audio(label="Output Audio", interactive=False) - vc_convert = gr.Button("Convert", variant="primary") - vc_volume = gr.Slider( - minimum=0, - maximum=10, - label="Vocal volume", - value=4, - interactive=True, - step=1, - info="Adjust vocal volume (Default: 4}", - visible=False - ) - vc_combined_output = gr.Audio(label="Output Combined Audio", visible=False) - vc_combine = gr.Button("Combine",variant="primary", visible=False) - vc_convert.click( - fn=vc_fn, - inputs=[ - vc_audio_mode, - vc_input, - vc_upload, - tts_text, - tts_voice, - vc_transform0, - f0method0, - index_rate1, - filter_radius0, - resample_sr0, - rms_mix_rate0, - protect0, - ], - outputs=[vc_log ,vc_output] - ) - vc_split.click( - fn=cut_vocal_and_inst, - inputs=[vc_link, vc_download_audio, vc_split_model], - outputs=[vc_vocal_preview, vc_inst_preview, vc_audio_preview, vc_input] - ) - vc_combine.click( - fn=combine_vocal_and_inst, - inputs=[vc_output, vc_volume, vc_split_model], - outputs=[vc_combined_output] - ) - vc_audio_mode.change( - fn=change_audio_mode, - inputs=[vc_audio_mode], - outputs=[ - vc_input, - vc_upload, - vc_download_audio, - vc_link, - vc_split_model, - vc_split, - vc_vocal_preview, - vc_inst_preview, - vc_audio_preview, - vc_volume, - vc_combined_output, - vc_combine, - tts_text, - tts_voice - ] - ) -if limitation is True: - app.queue(concurrency_count=1, max_size=20, api_open=config.api).launch(share=config.colab) -else: - app.queue(concurrency_count=1, max_size=20, api_open=config.api).launch(share=True) \ No newline at end of file diff --git a/spaces/Jikiwi/sovits-models/inference/infer_tool.py b/spaces/Jikiwi/sovits-models/inference/infer_tool.py deleted file mode 100644 index fed81f5abb6f2f525af616171ee9838ae341cb5f..0000000000000000000000000000000000000000 --- a/spaces/Jikiwi/sovits-models/inference/infer_tool.py +++ /dev/null @@ -1,324 +0,0 @@ -import hashlib -import io -import json -import logging -import os -import time -from pathlib import Path -from inference import slicer - -import librosa -import numpy as np -# import onnxruntime -import parselmouth -import soundfile -import torch -import torchaudio - -import cluster -from hubert import hubert_model -import utils -from models import SynthesizerTrn - -logging.getLogger('matplotlib').setLevel(logging.WARNING) - - -def read_temp(file_name): - if not os.path.exists(file_name): - with open(file_name, "w") as f: - f.write(json.dumps({"info": "temp_dict"})) - return {} - else: - try: - with open(file_name, "r") as f: - data = f.read() - data_dict = json.loads(data) - if os.path.getsize(file_name) > 50 * 1024 * 1024: - f_name = file_name.replace("\\", "/").split("/")[-1] - print(f"clean {f_name}") - for wav_hash in list(data_dict.keys()): - if int(time.time()) - int(data_dict[wav_hash]["time"]) > 14 * 24 * 3600: - del data_dict[wav_hash] - except Exception as e: - print(e) - print(f"{file_name} error,auto rebuild file") - data_dict = {"info": "temp_dict"} - return data_dict - - -def write_temp(file_name, data): - with open(file_name, "w") as f: - f.write(json.dumps(data)) - - -def timeit(func): - def run(*args, **kwargs): - t = time.time() - res = func(*args, **kwargs) - print('executing \'%s\' costed %.3fs' % (func.__name__, time.time() - t)) - return res - - return run - - -def format_wav(audio_path): - if 
Path(audio_path).suffix == '.wav': - return - raw_audio, raw_sample_rate = librosa.load(audio_path, mono=True, sr=None) - soundfile.write(Path(audio_path).with_suffix(".wav"), raw_audio, raw_sample_rate) - - -def get_end_file(dir_path, end): - file_lists = [] - for root, dirs, files in os.walk(dir_path): - files = [f for f in files if f[0] != '.'] - dirs[:] = [d for d in dirs if d[0] != '.'] - for f_file in files: - if f_file.endswith(end): - file_lists.append(os.path.join(root, f_file).replace("\\", "/")) - return file_lists - - -def get_md5(content): - return hashlib.new("md5", content).hexdigest() - -def fill_a_to_b(a, b): - if len(a) < len(b): - for _ in range(0, len(b) - len(a)): - a.append(a[0]) - -def mkdir(paths: list): - for path in paths: - if not os.path.exists(path): - os.mkdir(path) - -def pad_array(arr, target_length): - current_length = arr.shape[0] - if current_length >= target_length: - return arr - else: - pad_width = target_length - current_length - pad_left = pad_width // 2 - pad_right = pad_width - pad_left - padded_arr = np.pad(arr, (pad_left, pad_right), 'constant', constant_values=(0, 0)) - return padded_arr - -def split_list_by_n(list_collection, n, pre=0): - for i in range(0, len(list_collection), n): - yield list_collection[i-pre if i-pre>=0 else i: i + n] - - -class F0FilterException(Exception): - pass - -class Svc(object): - def __init__(self, net_g_path, config_path, - device=None, - cluster_model_path="logs/44k/kmeans_10000.pt"): - self.net_g_path = net_g_path - if device is None: - self.dev = torch.device("cuda" if torch.cuda.is_available() else "cpu") - else: - self.dev = torch.device(device) - self.net_g_ms = None - self.hps_ms = utils.get_hparams_from_file(config_path) - self.target_sample = self.hps_ms.data.sampling_rate - self.hop_size = self.hps_ms.data.hop_length - self.spk2id = self.hps_ms.spk - # 加载hubert - self.hubert_model = utils.get_hubert_model().to(self.dev) - self.load_model() - if os.path.exists(cluster_model_path): - self.cluster_model = cluster.get_cluster_model(cluster_model_path) - - def load_model(self): - # 获取模型配置 - self.net_g_ms = SynthesizerTrn( - self.hps_ms.data.filter_length // 2 + 1, - self.hps_ms.train.segment_size // self.hps_ms.data.hop_length, - **self.hps_ms.model) - _ = utils.load_checkpoint(self.net_g_path, self.net_g_ms, None) - if "half" in self.net_g_path and torch.cuda.is_available(): - _ = self.net_g_ms.half().eval().to(self.dev) - else: - _ = self.net_g_ms.eval().to(self.dev) - - - - def get_unit_f0(self, in_path, tran, cluster_infer_ratio, speaker, f0_filter ,F0_mean_pooling): - - wav, sr = librosa.load(in_path, sr=self.target_sample) - - if F0_mean_pooling == True: - f0, uv = utils.compute_f0_uv_torchcrepe(torch.FloatTensor(wav), sampling_rate=self.target_sample, hop_length=self.hop_size,device=self.dev) - if f0_filter and sum(f0) == 0: - raise F0FilterException("未检测到人声") - f0 = torch.FloatTensor(list(f0)) - uv = torch.FloatTensor(list(uv)) - if F0_mean_pooling == False: - f0 = utils.compute_f0_parselmouth(wav, sampling_rate=self.target_sample, hop_length=self.hop_size) - if f0_filter and sum(f0) == 0: - raise F0FilterException("未检测到人声") - f0, uv = utils.interpolate_f0(f0) - f0 = torch.FloatTensor(f0) - uv = torch.FloatTensor(uv) - - f0 = f0 * 2 ** (tran / 12) - f0 = f0.unsqueeze(0).to(self.dev) - uv = uv.unsqueeze(0).to(self.dev) - - wav16k = librosa.resample(wav, orig_sr=self.target_sample, target_sr=16000) - wav16k = torch.from_numpy(wav16k).to(self.dev) - c = utils.get_hubert_content(self.hubert_model, 
wav_16k_tensor=wav16k) - c = utils.repeat_expand_2d(c.squeeze(0), f0.shape[1]) - - if cluster_infer_ratio !=0: - cluster_c = cluster.get_cluster_center_result(self.cluster_model, c.cpu().numpy().T, speaker).T - cluster_c = torch.FloatTensor(cluster_c).to(self.dev) - c = cluster_infer_ratio * cluster_c + (1 - cluster_infer_ratio) * c - - c = c.unsqueeze(0) - return c, f0, uv - - def infer(self, speaker, tran, raw_path, - cluster_infer_ratio=0, - auto_predict_f0=False, - noice_scale=0.4, - f0_filter=False, - F0_mean_pooling=False - ): - - speaker_id = self.spk2id.__dict__.get(speaker) - if not speaker_id and type(speaker) is int: - if len(self.spk2id.__dict__) >= speaker: - speaker_id = speaker - sid = torch.LongTensor([int(speaker_id)]).to(self.dev).unsqueeze(0) - c, f0, uv = self.get_unit_f0(raw_path, tran, cluster_infer_ratio, speaker, f0_filter,F0_mean_pooling) - if "half" in self.net_g_path and torch.cuda.is_available(): - c = c.half() - with torch.no_grad(): - start = time.time() - audio = self.net_g_ms.infer(c, f0=f0, g=sid, uv=uv, predict_f0=auto_predict_f0, noice_scale=noice_scale)[0,0].data.float() - use_time = time.time() - start - print("vits use time:{}".format(use_time)) - return audio, audio.shape[-1] - - def clear_empty(self): - # 清理显存 - torch.cuda.empty_cache() - - def slice_inference(self, - raw_audio_path, - spk, - tran, - slice_db, - cluster_infer_ratio, - auto_predict_f0, - noice_scale, - pad_seconds=0.5, - clip_seconds=0, - lg_num=0, - lgr_num =0.75, - F0_mean_pooling = False - ): - wav_path = raw_audio_path - chunks = slicer.cut(wav_path, db_thresh=slice_db) - audio_data, audio_sr = slicer.chunks2audio(wav_path, chunks) - per_size = int(clip_seconds*audio_sr) - lg_size = int(lg_num*audio_sr) - lg_size_r = int(lg_size*lgr_num) - lg_size_c_l = (lg_size-lg_size_r)//2 - lg_size_c_r = lg_size-lg_size_r-lg_size_c_l - lg = np.linspace(0,1,lg_size_r) if lg_size!=0 else 0 - - audio = [] - for (slice_tag, data) in audio_data: - print(f'#=====segment start, {round(len(data) / audio_sr, 3)}s======') - # padd - length = int(np.ceil(len(data) / audio_sr * self.target_sample)) - if slice_tag: - print('jump empty segment') - _audio = np.zeros(length) - audio.extend(list(pad_array(_audio, length))) - continue - if per_size != 0: - datas = split_list_by_n(data, per_size,lg_size) - else: - datas = [data] - for k,dat in enumerate(datas): - per_length = int(np.ceil(len(dat) / audio_sr * self.target_sample)) if clip_seconds!=0 else length - if clip_seconds!=0: print(f'###=====segment clip start, {round(len(dat) / audio_sr, 3)}s======') - # padd - pad_len = int(audio_sr * pad_seconds) - dat = np.concatenate([np.zeros([pad_len]), dat, np.zeros([pad_len])]) - raw_path = io.BytesIO() - soundfile.write(raw_path, dat, audio_sr, format="wav") - raw_path.seek(0) - out_audio, out_sr = self.infer(spk, tran, raw_path, - cluster_infer_ratio=cluster_infer_ratio, - auto_predict_f0=auto_predict_f0, - noice_scale=noice_scale, - F0_mean_pooling = F0_mean_pooling - ) - _audio = out_audio.cpu().numpy() - pad_len = int(self.target_sample * pad_seconds) - _audio = _audio[pad_len:-pad_len] - _audio = pad_array(_audio, per_length) - if lg_size!=0 and k!=0: - lg1 = audio[-(lg_size_r+lg_size_c_r):-lg_size_c_r] if lgr_num != 1 else audio[-lg_size:] - lg2 = _audio[lg_size_c_l:lg_size_c_l+lg_size_r] if lgr_num != 1 else _audio[0:lg_size] - lg_pre = lg1*(1-lg)+lg2*lg - audio = audio[0:-(lg_size_r+lg_size_c_r)] if lgr_num != 1 else audio[0:-lg_size] - audio.extend(lg_pre) - _audio = _audio[lg_size_c_l+lg_size_r:] if 
lgr_num != 1 else _audio[lg_size:]
-                audio.extend(list(_audio))
-        return np.array(audio)
-
-class RealTimeVC:
-    def __init__(self):
-        self.last_chunk = None
-        self.last_o = None
-        self.chunk_len = 16000  # chunk length
-        self.pre_len = 3840  # crossfade length, a multiple of 640
-
-    """Input and output are 1-D numpy audio waveform arrays."""
-
-    def process(self, svc_model, speaker_id, f_pitch_change, input_wav_path,
-                cluster_infer_ratio=0,
-                auto_predict_f0=False,
-                noice_scale=0.4,
-                f0_filter=False):
-
-        import maad
-        audio, sr = torchaudio.load(input_wav_path)
-        audio = audio.cpu().numpy()[0]
-        temp_wav = io.BytesIO()
-        if self.last_chunk is None:
-            input_wav_path.seek(0)
-
-            audio, sr = svc_model.infer(speaker_id, f_pitch_change, input_wav_path,
-                                        cluster_infer_ratio=cluster_infer_ratio,
-                                        auto_predict_f0=auto_predict_f0,
-                                        noice_scale=noice_scale,
-                                        f0_filter=f0_filter)
-
-            audio = audio.cpu().numpy()
-            self.last_chunk = audio[-self.pre_len:]
-            self.last_o = audio
-            return audio[-self.chunk_len:]
-        else:
-            audio = np.concatenate([self.last_chunk, audio])
-            soundfile.write(temp_wav, audio, sr, format="wav")
-            temp_wav.seek(0)
-
-            audio, sr = svc_model.infer(speaker_id, f_pitch_change, temp_wav,
-                                        cluster_infer_ratio=cluster_infer_ratio,
-                                        auto_predict_f0=auto_predict_f0,
-                                        noice_scale=noice_scale,
-                                        f0_filter=f0_filter)
-
-            audio = audio.cpu().numpy()
-            ret = maad.util.crossfade(self.last_o, audio, self.pre_len)
-            self.last_chunk = audio[-self.pre_len:]
-            self.last_o = audio
-            return ret[self.chunk_len:2 * self.chunk_len]
diff --git a/spaces/Juancho/forest_fire_detector/app.py b/spaces/Juancho/forest_fire_detector/app.py
deleted file mode 100644
index 529001c4b7c712a32158989b9778fc1377846e41..0000000000000000000000000000000000000000
--- a/spaces/Juancho/forest_fire_detector/app.py
+++ /dev/null
@@ -1,20 +0,0 @@
-from fastai.vision.all import *
-import gradio as gr
-
-learn = load_learner('model_convnext_small_in22k_version_1.pkl')
-categories = ("fire", "nofire")
-
-def classify_image(img):
-    pred, idx, prob = learn.predict(img)
-    return dict(zip(categories, map(float, prob)))
-
-img = gr.inputs.Image(shape=(250, 250))
-label = gr.outputs.Label()
-
-gr.Interface(
-    fn=classify_image,
-    inputs=img,
-    outputs=label,
-    enable_queue=True,
-    examples=["after-a-bushfire.jpg", "fire_1.jpg", "forest_1.jpg", "sebastian-unrau-sp-p7uuT0tw-unsplash.jpg"],
-    interpretation='default').launch()
\ No newline at end of file
diff --git a/spaces/KyanChen/RSPrompter/mmpretrain/engine/hooks/densecl_hook.py b/spaces/KyanChen/RSPrompter/mmpretrain/engine/hooks/densecl_hook.py
deleted file mode 100644
index 8c7e17d3419cbc2a540d3aecd81e223eed670df2..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmpretrain/engine/hooks/densecl_hook.py
+++ /dev/null
@@ -1,42 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from typing import Optional, Sequence
-
-from mmengine.hooks import Hook
-
-from mmpretrain.registry import HOOKS
-from mmpretrain.utils import get_ori_model
-
-
-@HOOKS.register_module()
-class DenseCLHook(Hook):
-    """Hook for DenseCL.
-
-    This hook includes ``loss_lambda`` warmup in DenseCL.
-    Borrowed from the authors' code: `<https://github.com/WXinlong/DenseCL>`_.
-
-    Args:
-        start_iters (int): The number of warmup iterations to set
-            ``loss_lambda=0``. Defaults to 1000.
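-
-    Example:
-        A minimal registration sketch, assuming the standard MMEngine
-        custom-hooks convention in an mmpretrain config file::
-
-            custom_hooks = [dict(type='DenseCLHook', start_iters=1000)]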
- """ - - def __init__(self, start_iters: int = 1000) -> None: - self.start_iters = start_iters - - def before_train(self, runner) -> None: - """Obtain ``loss_lambda`` from algorithm.""" - assert hasattr(get_ori_model(runner.model), 'loss_lambda'), \ - "The runner must have attribute \"loss_lambda\" in DenseCL." - self.loss_lambda = get_ori_model(runner.model).loss_lambda - - def before_train_iter(self, - runner, - batch_idx: int, - data_batch: Optional[Sequence[dict]] = None) -> None: - """Adjust ``loss_lambda`` every train iter.""" - assert hasattr(get_ori_model(runner.model), 'loss_lambda'), \ - "The runner must have attribute \"loss_lambda\" in DenseCL." - cur_iter = runner.iter - if cur_iter >= self.start_iters: - get_ori_model(runner.model).loss_lambda = self.loss_lambda - else: - get_ori_model(runner.model).loss_lambda = 0. diff --git a/spaces/LanguageBind/LanguageBind/data/process_video.py b/spaces/LanguageBind/LanguageBind/data/process_video.py deleted file mode 100644 index 1b7171be0318349003b9cd20c3503fbdf24db36d..0000000000000000000000000000000000000000 --- a/spaces/LanguageBind/LanguageBind/data/process_video.py +++ /dev/null @@ -1,161 +0,0 @@ - -import io -import logging -import os - -import cv2 -import numpy as np -import torch -import decord -import torchvision.transforms -from PIL import Image -from decord import VideoReader, cpu - -try: - from petrel_client.client import Client - petrel_backend_imported = True -except (ImportError, ModuleNotFoundError): - petrel_backend_imported = False - - -from pytorchvideo.data.encoded_video import EncodedVideo -from torchvision.transforms import Compose, Lambda, ToTensor -from torchvision.transforms._transforms_video import NormalizeVideo, RandomCropVideo, RandomHorizontalFlipVideo -from pytorchvideo.transforms import ApplyTransformToKey, ShortSideScale, UniformTemporalSubsample -import sys -sys.path.append('../') -from open_clip import OPENAI_DATASET_MEAN, OPENAI_DATASET_STD -from os.path import join as opj - - -def get_video_loader(use_petrel_backend: bool = True, - enable_mc: bool = True, - conf_path: str = None): - if petrel_backend_imported and use_petrel_backend: - _client = Client(conf_path=conf_path, enable_mc=enable_mc) - else: - _client = None - - def _loader(video_path): - if _client is not None and 's3:' in video_path: - video_path = io.BytesIO(_client.get(video_path)) - - vr = VideoReader(video_path, num_threads=1, ctx=cpu(0)) - return vr - - return _loader - - -decord.bridge.set_bridge('torch') -# video_loader = get_video_loader() - - -def get_video_transform(args): - if args.video_decode_backend == 'pytorchvideo': - transform = ApplyTransformToKey( - key="video", - transform=Compose( - [ - UniformTemporalSubsample(args.num_frames), - Lambda(lambda x: x / 255.0), - NormalizeVideo(mean=OPENAI_DATASET_MEAN, std=OPENAI_DATASET_STD), - ShortSideScale(size=224), - RandomCropVideo(size=224), - RandomHorizontalFlipVideo(p=0.5), - ] - ), - ) - - elif args.video_decode_backend == 'decord': - - transform = Compose( - [ - # UniformTemporalSubsample(num_frames), - Lambda(lambda x: x / 255.0), - NormalizeVideo(mean=OPENAI_DATASET_MEAN, std=OPENAI_DATASET_STD), - ShortSideScale(size=224), - RandomCropVideo(size=224), - RandomHorizontalFlipVideo(p=0.5), - ] - ) - - elif args.video_decode_backend == 'opencv': - transform = Compose( - [ - # UniformTemporalSubsample(num_frames), - Lambda(lambda x: x / 255.0), - NormalizeVideo(mean=OPENAI_DATASET_MEAN, std=OPENAI_DATASET_STD), - ShortSideScale(size=224), - RandomCropVideo(size=224), - 
RandomHorizontalFlipVideo(p=0.5), - ] - ) - - elif args.video_decode_backend == 'imgs': - transform = Compose( - [ - # UniformTemporalSubsample(num_frames), - # Lambda(lambda x: x / 255.0), - NormalizeVideo(mean=OPENAI_DATASET_MEAN, std=OPENAI_DATASET_STD), - ShortSideScale(size=224), - RandomCropVideo(size=224), - RandomHorizontalFlipVideo(p=0.5), - ] - ) - else: - raise NameError('video_decode_backend should specify in (pytorchvideo, decord, opencv, imgs)') - return transform - -def load_and_transform_video( - video_path, - transform, - video_decode_backend='opencv', - clip_start_sec=0.0, - clip_end_sec=None, - num_frames=8, -): - if video_decode_backend == 'pytorchvideo': - # decord pyav - video = EncodedVideo.from_path(video_path, decoder="decord", decode_audio=False) - duration = video.duration - start_sec = clip_start_sec # secs - end_sec = clip_end_sec if clip_end_sec is not None else duration # secs - video_data = video.get_clip(start_sec=start_sec, end_sec=end_sec) - video_outputs = transform(video_data) - - elif video_decode_backend == 'decord': - decord_vr = VideoReader(video_path, ctx=cpu(0)) - duration = len(decord_vr) - frame_id_list = np.linspace(0, duration-1, num_frames, dtype=int) - video_data = decord_vr.get_batch(frame_id_list) - video_data = video_data.permute(3, 0, 1, 2) # (T, H, W, C) -> (C, T, H, W) - video_outputs = transform(video_data) - - elif video_decode_backend == 'opencv': - cv2_vr = cv2.VideoCapture(video_path) - duration = int(cv2_vr.get(cv2.CAP_PROP_FRAME_COUNT)) - frame_id_list = np.linspace(0, duration-1, num_frames, dtype=int) - - video_data = [] - for frame_idx in frame_id_list: - cv2_vr.set(1, frame_idx) - _, frame = cv2_vr.read() - frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) - video_data.append(torch.from_numpy(frame).permute(2, 0, 1)) - cv2_vr.release() - video_data = torch.stack(video_data, dim=1) - video_outputs = transform(video_data) - - elif video_decode_backend == 'imgs': - resize256_folder = video_path.replace('.mp4', '_resize256_folder') - video_data = [ToTensor()(Image.open(opj(resize256_folder, f'{i}.jpg'))) for i in range(8)] - video_data = torch.stack(video_data, dim=1) - # print(video_data.shape, video_data.max(), video_data.min()) - video_outputs = transform(video_data) - - else: - raise NameError('video_decode_backend should specify in (pytorchvideo, decord, opencv, imgs)') - return {'pixel_values': video_outputs} - -if __name__ == '__main__': - load_and_transform_video(r"D:\ONE-PEACE-main\lb_test\zHSOYcZblvY.mp4") \ No newline at end of file diff --git a/spaces/LaynzKunz/RCVAICOVER/src/infer_pack/models_onnx_moess.py b/spaces/LaynzKunz/RCVAICOVER/src/infer_pack/models_onnx_moess.py deleted file mode 100644 index 12efb0629a2e3d0d746a34f467254536c2bdbe5f..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/RCVAICOVER/src/infer_pack/models_onnx_moess.py +++ /dev/null @@ -1,849 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from infer_pack import modules -from infer_pack import attentions -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from infer_pack.commons import init_weights -import numpy as np -from infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, 
- n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder256Sim(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - x = self.proj(x) * x_mask - return x, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - 
self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is 
always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics 
into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMs256NSFsidM(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = 
spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, pitch, nsff0, sid, rnd, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o - - -class SynthesizerTrnMs256NSFsid_sim(nn.Module): - """ - Synthesizer for Training - """ - - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - # hop_length, - gin_channels=0, - use_sdp=True, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256Sim( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - is_half=kwargs["is_half"], - ) - - self.flow = 
ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, ds, max_len=None - ): # y是spec不需要了现在 - g = self.emb_g(ds.unsqueeze(0)).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - x, x_mask = self.enc_p(phone, pitch, phone_lengths) - x = self.flow(x, x_mask, g=g, reverse=True) - o = self.dec((x * x_mask)[:, :, :max_len], pitchf, g=g) - return o - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad 
first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/Lianjd/stock_dashboard/backtrader/indicators/dpo.py b/spaces/Lianjd/stock_dashboard/backtrader/indicators/dpo.py deleted file mode 100644 index e0c871b3a3fbe463caf3b70c9ab61eae8e78600a..0000000000000000000000000000000000000000 --- a/spaces/Lianjd/stock_dashboard/backtrader/indicators/dpo.py +++ /dev/null @@ -1,68 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8; py-indent-offset:4 -*- -############################################################################### -# -# Copyright (C) 2015-2020 Daniel Rodriguez -# -# This program is free software: you can redistribute it and/or modify -# it under the terms of the GNU General Public License as published by -# the Free Software Foundation, either version 3 of the License, or -# (at your option) any later version. -# -# This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -# GNU General Public License for more details. -# -# You should have received a copy of the GNU General Public License -# along with this program. If not, see <http://www.gnu.org/licenses/>. -# -############################################################################### -# Python 2/3 compatibility imports -from __future__ import (absolute_import, division, print_function, - unicode_literals) - -from . import Indicator, MovAv - - -class DetrendedPriceOscillator(Indicator): - ''' - Defined by Joe DiNapoli in his book *"Trading with DiNapoli levels"* - - It measures the price variations against a Moving Average (the trend) - and therefore removes the "trend" factor from the price. 
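-
-    Example:
-        A minimal strategy sketch (an illustration only, assuming the usual
-        ``import backtrader as bt`` convention; the ``DPO`` alias declared
-        below resolves to this class)::
-
-            class DPOStrategy(bt.Strategy):
-                def __init__(self):
-                    self.dpo = bt.indicators.DPO(self.data, period=20)
-
-                def next(self):
-                    if self.dpo[0] > 0.0:
-                        pass  # close is above its displaced moving average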
- - Formula: - - movav = MovingAverage(close, period) - - dpo = close - movav(shifted period / 2 + 1) - - See: - - http://en.wikipedia.org/wiki/Detrended_price_oscillator - ''' - # Named alias for invocation - alias = ('DPO',) - - # Named output lines - lines = ('dpo',) - - # Accepted parameters (and defaults) - - # MovAvg also parameter to allow experimentation - params = (('period', 20), ('movav', MovAv.Simple)) - - # Emphasize central 0.0 line in plot - plotinfo = dict(plothlines=[0.0]) - - # Indicator information after the name (in brackets) - def _plotlabel(self): - plabels = [self.p.period] - plabels += [self.p.movav] * self.p.notdefault('movav') - return plabels - - def __init__(self): - # Create the Moving Average - ma = self.p.movav(self.data, period=self.p.period) - - # Calculate value (look back period/2 + 1 in MA) and bind to 'dpo' line - self.lines.dpo = self.data - ma(-self.p.period // 2 + 1) - - super(DetrendedPriceOscillator, self).__init__() diff --git a/spaces/Lookimi/TuberTranscript/README.md b/spaces/Lookimi/TuberTranscript/README.md deleted file mode 100644 index bf075ce07ec4c21f5f6678da8e7af993ffc52f58..0000000000000000000000000000000000000000 --- a/spaces/Lookimi/TuberTranscript/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: TuberTranscript -emoji: 🏢 -colorFrom: gray -colorTo: indigo -sdk: gradio -sdk_version: 3.17.0 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/LuxOAI/zenFace-Recognition-SDK/app.py b/spaces/LuxOAI/zenFace-Recognition-SDK/app.py deleted file mode 100644 index 5c0ef30dd74b75cec6b9e356c5f7d4dbb62af6eb..0000000000000000000000000000000000000000 --- a/spaces/LuxOAI/zenFace-Recognition-SDK/app.py +++ /dev/null @@ -1,217 +0,0 @@ -import sys -sys.path.append('.') - -from flask import Flask, request, jsonify -from time import gmtime, strftime -import os -import base64 -import json -import cv2 -import numpy as np - -from facewrapper.facewrapper import ttv_version -from facewrapper.facewrapper import ttv_get_hwid -from facewrapper.facewrapper import ttv_init -from facewrapper.facewrapper import ttv_init_offline -from facewrapper.facewrapper import ttv_extract_feature -from facewrapper.facewrapper import ttv_compare_feature - -app = Flask(__name__) - -app.config['SITE'] = "http://0.0.0.0:8000/" -app.config['DEBUG'] = False - -licenseKey = os.environ.get("LICENSE_KEY") -licensePath = "license.txt" -modelFolder = os.path.abspath(os.path.dirname(__file__)) + '/facewrapper/dict' - -version = ttv_version() -print("version: ", version.decode('utf-8')) - -ret = ttv_init(modelFolder.encode('utf-8'), licenseKey.encode('utf-8')) -if ret != 0: - print(f"online init failed: {ret}"); - - hwid = ttv_get_hwid() - print("hwid: ", hwid.decode('utf-8')) - - ret = ttv_init_offline(modelFolder.encode('utf-8'), licensePath.encode('utf-8')) - if ret != 0: - print(f"offline init failed: {ret}") - exit(-1) - else: - print(f"offline init ok") - -else: - print(f"online init ok") - -@app.route('/api/compare_face', methods=['POST']) -def compare_face(): - file1 = request.files['image1'] - image1 = cv2.imdecode(np.fromstring(file1.read(), np.uint8), cv2.IMREAD_COLOR) - if image1 is None: - result = "image1: is null!" 
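-        # Side note on the decode above (applies to both file reads in this
-        # endpoint): np.fromstring is deprecated for raw binary input in
-        # current NumPy; np.frombuffer is the drop-in replacement, e.g. (sketch):
-        # image1 = cv2.imdecode(np.frombuffer(file1.read(), np.uint8), cv2.IMREAD_COLOR)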
- status = "ok" - response = jsonify({"status": status, "data": {"result": result}}) - response.status_code = 200 - response.headers["Content-Type"] = "application/json; charset=utf-8" - return response - - file2 = request.files['image2'] - image2 = cv2.imdecode(np.fromstring(file2.read(), np.uint8), cv2.IMREAD_COLOR) - if image2 is None: - result = "image2: is null!" - status = "ok" - response = jsonify({"status": status, "data": {"result": result}}) - response.status_code = 200 - response.headers["Content-Type"] = "application/json; charset=utf-8" - return response - - faceRect1 = np.zeros([4], dtype=np.int32) - feature1 = np.zeros([2048], dtype=np.uint8) - featureSize1 = np.zeros([1], dtype=np.int32) - - ret = ttv_extract_feature(image1, image1.shape[1], image1.shape[0], faceRect1, feature1, featureSize1) - if ret <= 0: - if ret == -1: - result = "license error!" - elif ret == -2: - result = "init error!" - elif ret == 0: - result = "image1: no face detected!" - - status = "ok" - response = jsonify({"status": status, "data": {"result": result}}) - response.status_code = 200 - response.headers["Content-Type"] = "application/json; charset=utf-8" - return response - - faceRect2 = np.zeros([4], dtype=np.int32) - feature2 = np.zeros([2048], dtype=np.uint8) - featureSize2 = np.zeros([1], dtype=np.int32) - - ret = ttv_extract_feature(image2, image2.shape[1], image2.shape[0], faceRect2, feature2, featureSize2) - if ret <= 0: - if ret == -1: - result = "license error!" - elif ret == -2: - result = "init error!" - elif ret == 0: - result = "image2: no face detected!" - - status = "ok" - response = jsonify({"status": status, "data": {"result": result}}) - response.status_code = 200 - response.headers["Content-Type"] = "application/json; charset=utf-8" - return response - - similarity = ttv_compare_feature(feature1, feature2) - if similarity > 0.7: - result = "same" - else: - result = "different" - - status = "ok" - response = jsonify( - { - "status": status, - "data": { - "result": result, - "similarity": float(similarity), - "face1": {"x1": int(faceRect1[0]), "y1": int(faceRect1[1]), "x2": int(faceRect1[2]), "y2" : int(faceRect1[3])}, - "face2": {"x1": int(faceRect2[0]), "y1": int(faceRect2[1]), "x2": int(faceRect2[2]), "y2" : int(faceRect2[3])}, - } - }) - - response.status_code = 200 - response.headers["Content-Type"] = "application/json; charset=utf-8" - return response - - -@app.route('/api/compare_face_base64', methods=['POST']) -def coompare_face_base64(): - content = request.get_json() - imageBase641 = content['image1'] - image1 = cv2.imdecode(np.frombuffer(base64.b64decode(imageBase641), dtype=np.uint8), cv2.IMREAD_COLOR) - - if image1 is None: - result = "image1: is null!" - status = "ok" - response = jsonify({"status": status, "data": {"result": result}}) - response.status_code = 200 - response.headers["Content-Type"] = "application/json; charset=utf-8" - return response - - imageBase642 = content['image2'] - image2 = cv2.imdecode(np.frombuffer(base64.b64decode(imageBase642), dtype=np.uint8), cv2.IMREAD_COLOR) - - if image2 is None: - result = "image2: is null!" 
- status = "ok" - response = jsonify({"status": status, "data": {"result": result}}) - response.status_code = 200 - response.headers["Content-Type"] = "application/json; charset=utf-8" - return response - - faceRect1 = np.zeros([4], dtype=np.int32) - feature1 = np.zeros([2048], dtype=np.uint8) - featureSize1 = np.zeros([1], dtype=np.int32) - - ret = ttv_extract_feature(image1, image1.shape[1], image1.shape[0], faceRect1, feature1, featureSize1) - if ret <= 0: - if ret == -1: - result = "license error!" - elif ret == -2: - result = "init error!" - elif ret == 0: - result = "image1: no face detected!" - - status = "ok" - response = jsonify({"status": status, "data": {"result": result}}) - response.status_code = 200 - response.headers["Content-Type"] = "application/json; charset=utf-8" - return response - - faceRect2 = np.zeros([4], dtype=np.int32) - feature2 = np.zeros([2048], dtype=np.uint8) - featureSize2 = np.zeros([1], dtype=np.int32) - - ret = ttv_extract_feature(image2, image2.shape[1], image2.shape[0], faceRect2, feature2, featureSize2) - if ret <= 0: - if ret == -1: - result = "license error!" - elif ret == -2: - result = "init error!" - elif ret == 0: - result = "image2: no face detected!" - - status = "ok" - response = jsonify({"status": status, "data": {"result": result}}) - response.status_code = 200 - response.headers["Content-Type"] = "application/json; charset=utf-8" - return response - - similarity = ttv_compare_feature(feature1, feature2) - if similarity > 0.7: - result = "same" - else: - result = "different" - - status = "ok" - response = jsonify( - { - "status": status, - "data": { - "result": result, - "similarity": float(similarity), - "face1": {"x1": int(faceRect1[0]), "y1": int(faceRect1[1]), "x2": int(faceRect1[2]), "y2" : int(faceRect1[3])}, - "face2": {"x1": int(faceRect2[0]), "y1": int(faceRect2[1]), "x2": int(faceRect2[2]), "y2" : int(faceRect2[3])}, - } - }) - response.status_code = 200 - response.headers["Content-Type"] = "application/json; charset=utf-8" - return response - -if __name__ == '__main__': - port = int(os.environ.get("PORT", 8000)) - app.run(host='0.0.0.0', port=port) diff --git a/spaces/Lyra121/finetuned_diffusion/README.md b/spaces/Lyra121/finetuned_diffusion/README.md deleted file mode 100644 index bca1a00cd251d1c13fc3fe72baad06e256245d3e..0000000000000000000000000000000000000000 --- a/spaces/Lyra121/finetuned_diffusion/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Finetuned Diffusion -emoji: 🪄🖼️ -colorFrom: red -colorTo: pink -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: true -license: mit -duplicated_from: anzorq/finetuned_diffusion ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Mahiruoshi/BangDream-Bert-VITS2/webui.py b/spaces/Mahiruoshi/BangDream-Bert-VITS2/webui.py deleted file mode 100644 index d9dbd2ca721fe178703a063ffa6c25e85f461bb8..0000000000000000000000000000000000000000 --- a/spaces/Mahiruoshi/BangDream-Bert-VITS2/webui.py +++ /dev/null @@ -1,224 +0,0 @@ -# flake8: noqa: E402 - -import sys, os -import logging - -logging.getLogger("numba").setLevel(logging.WARNING) -logging.getLogger("markdown_it").setLevel(logging.WARNING) -logging.getLogger("urllib3").setLevel(logging.WARNING) -logging.getLogger("matplotlib").setLevel(logging.WARNING) - -logging.basicConfig( - level=logging.INFO, format="| %(name)s | %(levelname)s | %(message)s" -) - -logger = logging.getLogger(__name__) - -import torch -import argparse -import commons -import utils 
-from models import SynthesizerTrn
-from text.symbols import symbols
-from text import cleaned_text_to_sequence, get_bert
-from text.cleaner import clean_text
-import gradio as gr
-import webbrowser
-import numpy as np
-
-net_g = None
-
-if sys.platform == "darwin" and torch.backends.mps.is_available():
- device = "mps"
- os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"
-else:
- device = "cuda"
-
-
-def get_text(text, language_str, hps):
- norm_text, phone, tone, word2ph = clean_text(text, language_str)
- phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str)
-
- if hps.data.add_blank:
- phone = commons.intersperse(phone, 0)
- tone = commons.intersperse(tone, 0)
- language = commons.intersperse(language, 0)
- for i in range(len(word2ph)):
- word2ph[i] = word2ph[i] * 2
- word2ph[0] += 1
- bert = get_bert(norm_text, word2ph, language_str, device)
- del word2ph
- assert bert.shape[-1] == len(phone), phone
-
- if language_str == "ZH":
- bert = bert
- ja_bert = torch.zeros(768, len(phone))
- elif language_str == "JP":
- ja_bert = bert
- bert = torch.zeros(1024, len(phone))
- else:
- bert = torch.zeros(1024, len(phone))
- ja_bert = torch.zeros(768, len(phone))
-
- assert bert.shape[-1] == len(
- phone
- ), f"Bert seq len {bert.shape[-1]} != {len(phone)}"
-
- phone = torch.LongTensor(phone)
- tone = torch.LongTensor(tone)
- language = torch.LongTensor(language)
- return bert, ja_bert, phone, tone, language
-
-
-def infer(text, sdp_ratio, noise_scale, noise_scale_w, length_scale, sid, language):
- global net_g
- bert, ja_bert, phones, tones, lang_ids = get_text(text, language, hps)
- with torch.no_grad():
- x_tst = phones.to(device).unsqueeze(0)
- tones = tones.to(device).unsqueeze(0)
- lang_ids = lang_ids.to(device).unsqueeze(0)
- bert = bert.to(device).unsqueeze(0)
- ja_bert = ja_bert.to(device).unsqueeze(0)
- x_tst_lengths = torch.LongTensor([phones.size(0)]).to(device)
- del phones
- speakers = torch.LongTensor([hps.data.spk2id[sid]]).to(device)
- audio = (
- net_g.infer(
- x_tst,
- x_tst_lengths,
- speakers,
- tones,
- lang_ids,
- bert,
- ja_bert,
- sdp_ratio=sdp_ratio,
- noise_scale=noise_scale,
- noise_scale_w=noise_scale_w,
- length_scale=length_scale,
- )[0][0, 0]
- .data.cpu()
- .float()
- .numpy()
- )
- del x_tst, tones, lang_ids, bert, x_tst_lengths, speakers
- torch.cuda.empty_cache()
- return audio
-
-
-def tts_fn(
- text, speaker, sdp_ratio, noise_scale, noise_scale_w, length_scale, language
-):
- slices = text.split("|")
- audio_list = []
- with torch.no_grad():
- for slice in slices:
- audio = infer(
- slice,
- sdp_ratio=sdp_ratio,
- noise_scale=noise_scale,
- noise_scale_w=noise_scale_w,
- length_scale=length_scale,
- sid=speaker,
- language=language,
- )
- audio_list.append(audio)
- silence = np.zeros(hps.data.sampling_rate) # generate one second of silence
- audio_list.append(silence) # append the silence to the list
- audio_concat = np.concatenate(audio_list)
- return "Success", (hps.data.sampling_rate, audio_concat)
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument(
- "-m", "--model", default="./logs/as/G_8000.pth", help="path of your model"
- )
- parser.add_argument(
- "-c",
- "--config",
- default="./configs/config.json",
- help="path of your config file",
- )
- parser.add_argument(
- "--share", default=False, help="make link public", action="store_true"
- )
- parser.add_argument(
- "-d", "--debug", action="store_true", help="enable DEBUG-LEVEL log"
- )
-
- args = parser.parse_args()
- if args.debug:
- logger.info("Enable DEBUG-LEVEL log")
-
logging.basicConfig(level=logging.DEBUG) - hps = utils.get_hparams_from_file(args.config) - - device = ( - "cuda:0" - if torch.cuda.is_available() - else ( - "mps" - if sys.platform == "darwin" and torch.backends.mps.is_available() - else "cpu" - ) - ) - net_g = SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - **hps.model, - ).to(device) - _ = net_g.eval() - - _ = utils.load_checkpoint(args.model, net_g, None, skip_optimizer=True) - - speaker_ids = hps.data.spk2id - speakers = list(speaker_ids.keys()) - languages = ["ZH", "JP"] - with gr.Blocks() as app: - with gr.Row(): - with gr.Column(): - text = gr.TextArea( - label="Text", - placeholder="Input Text Here", - value="吃葡萄不吐葡萄皮,不吃葡萄倒吐葡萄皮。", - ) - speaker = gr.Dropdown( - choices=speakers, value=speakers[0], label="Speaker" - ) - sdp_ratio = gr.Slider( - minimum=0, maximum=1, value=0.2, step=0.1, label="SDP Ratio" - ) - noise_scale = gr.Slider( - minimum=0.1, maximum=2, value=0.6, step=0.1, label="Noise Scale" - ) - noise_scale_w = gr.Slider( - minimum=0.1, maximum=2, value=0.8, step=0.1, label="Noise Scale W" - ) - length_scale = gr.Slider( - minimum=0.1, maximum=2, value=1, step=0.1, label="Length Scale" - ) - language = gr.Dropdown( - choices=languages, value=languages[0], label="Language" - ) - btn = gr.Button("Generate!", variant="primary") - with gr.Column(): - text_output = gr.Textbox(label="Message") - audio_output = gr.Audio(label="Output Audio") - - btn.click( - tts_fn, - inputs=[ - text, - speaker, - sdp_ratio, - noise_scale, - noise_scale_w, - length_scale, - language, - ], - outputs=[text_output, audio_output], - ) - - webbrowser.open("http://127.0.0.1:7860") - app.launch(share=args.share) diff --git a/spaces/Manjushri/MusicGen/audiocraft/models/__init__.py b/spaces/Manjushri/MusicGen/audiocraft/models/__init__.py deleted file mode 100644 index 92c7a48a200eba455044cd66e0d2c1efe6494f5c..0000000000000000000000000000000000000000 --- a/spaces/Manjushri/MusicGen/audiocraft/models/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -# flake8: noqa -from .musicgen import MusicGen -from .lm import LMModel -from .encodec import CompressionModel, EncodecModel diff --git a/spaces/Manjushri/MusicGen/tests/__init__.py b/spaces/Manjushri/MusicGen/tests/__init__.py deleted file mode 100644 index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000 --- a/spaces/Manjushri/MusicGen/tests/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
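For reference, here is a minimal client sketch for the `/api/compare_face_base64` endpoint of the face-comparison service deleted above. It assumes the service is running locally on its default port 8000; the image file names are placeholders, and the `requests` package is assumed to be installed:

```python
import base64
import requests

def b64(path: str) -> str:
    # read an image file and encode it as base64, as the endpoint expects
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")

# "face_a.jpg" and "face_b.jpg" are hypothetical input files
payload = {"image1": b64("face_a.jpg"), "image2": b64("face_b.jpg")}
resp = requests.post("http://127.0.0.1:8000/api/compare_face_base64", json=payload)
data = resp.json()["data"]
# "similarity" and the face boxes are only present when both faces were detected
print(data["result"], data.get("similarity"))
```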
diff --git a/spaces/MarcyWu/text_generator/README.md b/spaces/MarcyWu/text_generator/README.md
deleted file mode 100644
index 550f93bb5dc60ef8a77941040b44bb69710cac82..0000000000000000000000000000000000000000
--- a/spaces/MarcyWu/text_generator/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Text Generator
-emoji: 🏃
-colorFrom: indigo
-colorTo: green
-sdk: gradio
-sdk_version: 3.11.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/MathFabian/p2_m5_hugging/app.py b/spaces/MathFabian/p2_m5_hugging/app.py
deleted file mode 100644
index a31e94cd55326a87852ba0aea2b062d9597adcdc..0000000000000000000000000000000000000000
--- a/spaces/MathFabian/p2_m5_hugging/app.py
+++ /dev/null
@@ -1,20 +0,0 @@
-import cv2
-import numpy as np
-import gradio as gr
-from ultralytics import YOLO
-
-def predict(path:str):
- # load the fine-tuned weights directly; building a fresh model from "yolov8l.yaml" first is unnecessary, since it would be overwritten immediately
- model = YOLO("best.pt")
- imagen = cv2.imread(path)
- results = model.predict(source=imagen)
-
- for r in results:
- output_rgb = cv2.cvtColor(r.plot(), cv2.COLOR_BGR2RGB)
- return output_rgb
-
-gr.Interface(
- fn=predict,
- inputs=gr.Image(type="filepath", label="Input"),
- outputs=gr.Image(type="numpy", label="Output")
-).launch(debug=False)
\ No newline at end of file
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/fileio/handlers/pickle_handler.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/fileio/handlers/pickle_handler.py
deleted file mode 100644
index b37c79bed4ef9fd8913715e62dbe3fc5cafdc3aa..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/fileio/handlers/pickle_handler.py
+++ /dev/null
@@ -1,28 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import pickle - -from .base import BaseFileHandler - - -class PickleHandler(BaseFileHandler): - - str_like = False - - def load_from_fileobj(self, file, **kwargs): - return pickle.load(file, **kwargs) - - def load_from_path(self, filepath, **kwargs): - return super(PickleHandler, self).load_from_path( - filepath, mode='rb', **kwargs) - - def dump_to_str(self, obj, **kwargs): - kwargs.setdefault('protocol', 2) - return pickle.dumps(obj, **kwargs) - - def dump_to_fileobj(self, obj, file, **kwargs): - kwargs.setdefault('protocol', 2) - pickle.dump(obj, file, **kwargs) - - def dump_to_path(self, obj, filepath, **kwargs): - super(PickleHandler, self).dump_to_path( - obj, filepath, mode='wb', **kwargs) diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/ldm/modules/distributions/distributions.py b/spaces/Mellow-ai/PhotoAI_Mellow/ldm/modules/distributions/distributions.py deleted file mode 100644 index f2b8ef901130efc171aa69742ca0244d94d3f2e9..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/ldm/modules/distributions/distributions.py +++ /dev/null @@ -1,92 +0,0 @@ -import torch -import numpy as np - - -class AbstractDistribution: - def sample(self): - raise NotImplementedError() - - def mode(self): - raise NotImplementedError() - - -class DiracDistribution(AbstractDistribution): - def __init__(self, value): - self.value = value - - def sample(self): - return self.value - - def mode(self): - return self.value - - -class DiagonalGaussianDistribution(object): - def __init__(self, parameters, deterministic=False): - self.parameters = parameters - self.mean, self.logvar = torch.chunk(parameters, 2, dim=1) - self.logvar = torch.clamp(self.logvar, -30.0, 20.0) - self.deterministic = deterministic - self.std = torch.exp(0.5 * self.logvar) - self.var = torch.exp(self.logvar) - if self.deterministic: - self.var = self.std = torch.zeros_like(self.mean).to(device=self.parameters.device) - - def sample(self): - x = self.mean + self.std * torch.randn(self.mean.shape).to(device=self.parameters.device) - return x - - def kl(self, other=None): - if self.deterministic: - return torch.Tensor([0.]) - else: - if other is None: - return 0.5 * torch.sum(torch.pow(self.mean, 2) - + self.var - 1.0 - self.logvar, - dim=[1, 2, 3]) - else: - return 0.5 * torch.sum( - torch.pow(self.mean - other.mean, 2) / other.var - + self.var / other.var - 1.0 - self.logvar + other.logvar, - dim=[1, 2, 3]) - - def nll(self, sample, dims=[1,2,3]): - if self.deterministic: - return torch.Tensor([0.]) - logtwopi = np.log(2.0 * np.pi) - return 0.5 * torch.sum( - logtwopi + self.logvar + torch.pow(sample - self.mean, 2) / self.var, - dim=dims) - - def mode(self): - return self.mean - - -def normal_kl(mean1, logvar1, mean2, logvar2): - """ - source: https://github.com/openai/guided-diffusion/blob/27c20a8fab9cb472df5d6bdd6c8d11c8f430b924/guided_diffusion/losses.py#L12 - Compute the KL divergence between two gaussians. - Shapes are automatically broadcasted, so batches can be compared to - scalars, among other use cases. - """ - tensor = None - for obj in (mean1, logvar1, mean2, logvar2): - if isinstance(obj, torch.Tensor): - tensor = obj - break - assert tensor is not None, "at least one argument must be a Tensor" - - # Force variances to be Tensors. Broadcasting helps convert scalars to - # Tensors, but it does not work for torch.exp(). 
- logvar1, logvar2 = [
- x if isinstance(x, torch.Tensor) else torch.tensor(x).to(tensor)
- for x in (logvar1, logvar2)
- ]
-
- return 0.5 * (
- -1.0
- + logvar2
- - logvar1
- + torch.exp(logvar1 - logvar2)
- + ((mean1 - mean2) ** 2) * torch.exp(-logvar2)
- )
diff --git a/spaces/MiloSobral/PortiloopDemo/portiloop/src/processing.py b/spaces/MiloSobral/PortiloopDemo/portiloop/src/processing.py
deleted file mode 100644
index b5ba836032441a5222a97cb565765626de878b60..0000000000000000000000000000000000000000
--- a/spaces/MiloSobral/PortiloopDemo/portiloop/src/processing.py
+++ /dev/null
@@ -1,141 +0,0 @@
-import numpy as np
-from scipy.signal import firwin
-
-
-def filter_24(value):
- return (value * 4.5) / (2**23 - 1) / 24.0 * 1e6 # 23 because 1 bit is lost for sign
-
-def filter_2scomplement_np(value):
- return np.where((value & (1 << 23)) != 0, value - (1 << 24), value)
-
-
-def int_to_float(value):
- """
- Convert the int value out of the ADS into a value in microvolts
- """
- return filter_24(filter_2scomplement_np(value))
-
-
-def shift_numpy(arr, num, fill_value=np.nan):
- result = np.empty_like(arr)
- if num > 0:
- result[:num] = fill_value
- result[num:] = arr[:-num]
- elif num < 0:
- result[num:] = fill_value
- result[:num] = arr[-num:]
- else:
- result[:] = arr
- return result
-
-
-class FIR:
- def __init__(self, nb_channels, coefficients, buffer=None):
-
- self.coefficients = np.expand_dims(np.array(coefficients), axis=1)
- self.taps = len(self.coefficients)
- self.nb_channels = nb_channels
- self.buffer = np.array(buffer) if buffer is not None else np.zeros((self.taps, self.nb_channels))
-
- def filter(self, x):
- self.buffer = shift_numpy(self.buffer, 1, x)
- filtered = np.sum(self.buffer * self.coefficients, axis=0)
- return filtered
-
-
-class FilterPipeline:
- def __init__(self,
- nb_channels,
- sampling_rate,
- power_line_fq=60,
- use_custom_fir=False,
- custom_fir_order=20,
- custom_fir_cutoff=30,
- alpha_avg=0.1,
- alpha_std=0.001,
- epsilon=0.000001,
- filter_args=[]):
- if len(filter_args) > 0:
- use_fir, use_notch, use_std = filter_args
- else:
- use_fir = True
- use_notch = True
- use_std = True
- self.use_fir = use_fir
- self.use_notch = use_notch
- self.use_std = use_std
- self.nb_channels = nb_channels
- assert power_line_fq in [50, 60], "The only supported power line frequencies are 50 Hz and 60 Hz"
- if power_line_fq == 60:
- self.notch_coeff1 = -0.12478308884588535
- self.notch_coeff2 = 0.98729186796473023
- self.notch_coeff3 = 0.99364593398236511
- self.notch_coeff4 = -0.12478308884588535
- self.notch_coeff5 = 0.99364593398236511
- else:
- self.notch_coeff1 = -0.61410695998423581
- self.notch_coeff2 = 0.98729186796473023
- self.notch_coeff3 = 0.99364593398236511
- self.notch_coeff4 = -0.61410695998423581
- self.notch_coeff5 = 0.99364593398236511
- self.dfs = [np.zeros(self.nb_channels), np.zeros(self.nb_channels)]
-
- self.moving_average = None
- self.moving_variance = np.zeros(self.nb_channels)
- self.ALPHA_AVG = alpha_avg
- self.ALPHA_STD = alpha_std
- self.EPSILON = epsilon
-
- if use_custom_fir:
- self.fir_coef = firwin(numtaps=custom_fir_order+1, cutoff=custom_fir_cutoff, fs=sampling_rate)
- else:
- self.fir_coef = [
- 0.001623780150148094927192721215192250384,
- 0.014988684599373741992978104065059596905,
- 0.021287595318265635502275046064823982306,
- 0.007349500393709578957568417933998716762,
- -0.025127515717112181709014251396183681209,
- -0.052210507359822452833064687638398027048,
- -0.039273839505489904766477593511808663607,
-
0.033021568427940004020193498490698402748, - 0.147606943281569008563636202779889572412, - 0.254000252034505602516389899392379447818, - 0.297330876398883392486283128164359368384, - 0.254000252034505602516389899392379447818, - 0.147606943281569008563636202779889572412, - 0.033021568427940004020193498490698402748, - -0.039273839505489904766477593511808663607, - -0.052210507359822452833064687638398027048, - -0.025127515717112181709014251396183681209, - 0.007349500393709578957568417933998716762, - 0.021287595318265635502275046064823982306, - 0.014988684599373741992978104065059596905, - 0.001623780150148094927192721215192250384] - self.fir = FIR(self.nb_channels, self.fir_coef) - - def filter(self, value): - """ - value: a numpy array of shape (data series, channels) - """ - for i, x in enumerate(value): # loop over the data series - # FIR: - if self.use_fir: - x = self.fir.filter(x) - # notch: - if self.use_notch: - denAccum = (x - self.notch_coeff1 * self.dfs[0]) - self.notch_coeff2 * self.dfs[1] - x = (self.notch_coeff3 * denAccum + self.notch_coeff4 * self.dfs[0]) + self.notch_coeff5 * self.dfs[1] - self.dfs[1] = self.dfs[0] - self.dfs[0] = denAccum - # standardization: - if self.use_std: - if self.moving_average is not None: - delta = x - self.moving_average - self.moving_average = self.moving_average + self.ALPHA_AVG * delta - self.moving_variance = (1 - self.ALPHA_STD) * (self.moving_variance + self.ALPHA_STD * delta**2) - moving_std = np.sqrt(self.moving_variance) - x = (x - self.moving_average) / (moving_std + self.EPSILON) - else: - self.moving_average = x - value[i] = x - return value \ No newline at end of file diff --git a/spaces/MirageML/fantasy-sword/README.md b/spaces/MirageML/fantasy-sword/README.md deleted file mode 100644 index 99802bfe43997cd5a21091faceaae85260837a3d..0000000000000000000000000000000000000000 --- a/spaces/MirageML/fantasy-sword/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Fantasy Sword -emoji: 💻 -colorFrom: pink -colorTo: yellow -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Miuzarte/SUI-svc-3.0/vdecoder/hifigan/nvSTFT.py b/spaces/Miuzarte/SUI-svc-3.0/vdecoder/hifigan/nvSTFT.py deleted file mode 100644 index ec90bb1e07a23d5e406e843f65ee088560166952..0000000000000000000000000000000000000000 --- a/spaces/Miuzarte/SUI-svc-3.0/vdecoder/hifigan/nvSTFT.py +++ /dev/null @@ -1,111 +0,0 @@ -import math -import os -os.environ["LRU_CACHE_CAPACITY"] = "3" -import random -import torch -import torch.utils.data -import numpy as np -import librosa -from librosa.util import normalize -from librosa.filters import mel as librosa_mel_fn -from scipy.io.wavfile import read -import soundfile as sf - -def load_wav_to_torch(full_path, target_sr=None, return_empty_on_exception=False): - sampling_rate = None - try: - data, sampling_rate = sf.read(full_path, always_2d=True)# than soundfile. 
- except Exception as ex: - print(f"'{full_path}' failed to load.\nException:") - print(ex) - if return_empty_on_exception: - return [], sampling_rate or target_sr or 48000 - else: - raise Exception(ex) - - if len(data.shape) > 1: - data = data[:, 0] - assert len(data) > 2# check duration of audio file is > 2 samples (because otherwise the slice operation was on the wrong dimension) - - if np.issubdtype(data.dtype, np.integer): # if audio data is type int - max_mag = -np.iinfo(data.dtype).min # maximum magnitude = min possible value of intXX - else: # if audio data is type fp32 - max_mag = max(np.amax(data), -np.amin(data)) - max_mag = (2**31)+1 if max_mag > (2**15) else ((2**15)+1 if max_mag > 1.01 else 1.0) # data should be either 16-bit INT, 32-bit INT or [-1 to 1] float32 - - data = torch.FloatTensor(data.astype(np.float32))/max_mag - - if (torch.isinf(data) | torch.isnan(data)).any() and return_empty_on_exception:# resample will crash with inf/NaN inputs. return_empty_on_exception will return empty arr instead of except - return [], sampling_rate or target_sr or 48000 - if target_sr is not None and sampling_rate != target_sr: - data = torch.from_numpy(librosa.core.resample(data.numpy(), orig_sr=sampling_rate, target_sr=target_sr)) - sampling_rate = target_sr - - return data, sampling_rate - -def dynamic_range_compression(x, C=1, clip_val=1e-5): - return np.log(np.clip(x, a_min=clip_val, a_max=None) * C) - -def dynamic_range_decompression(x, C=1): - return np.exp(x) / C - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - return torch.log(torch.clamp(x, min=clip_val) * C) - -def dynamic_range_decompression_torch(x, C=1): - return torch.exp(x) / C - -class STFT(): - def __init__(self, sr=22050, n_mels=80, n_fft=1024, win_size=1024, hop_length=256, fmin=20, fmax=11025, clip_val=1e-5): - self.target_sr = sr - - self.n_mels = n_mels - self.n_fft = n_fft - self.win_size = win_size - self.hop_length = hop_length - self.fmin = fmin - self.fmax = fmax - self.clip_val = clip_val - self.mel_basis = {} - self.hann_window = {} - - def get_mel(self, y, center=False): - sampling_rate = self.target_sr - n_mels = self.n_mels - n_fft = self.n_fft - win_size = self.win_size - hop_length = self.hop_length - fmin = self.fmin - fmax = self.fmax - clip_val = self.clip_val - - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - if fmax not in self.mel_basis: - mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=n_mels, fmin=fmin, fmax=fmax) - self.mel_basis[str(fmax)+'_'+str(y.device)] = torch.from_numpy(mel).float().to(y.device) - self.hann_window[str(y.device)] = torch.hann_window(self.win_size).to(y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_length)/2), int((n_fft-hop_length)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_length, win_length=win_size, window=self.hann_window[str(y.device)], - center=center, pad_mode='reflect', normalized=False, onesided=True) - # print(111,spec) - spec = torch.sqrt(spec.pow(2).sum(-1)+(1e-9)) - # print(222,spec) - spec = torch.matmul(self.mel_basis[str(fmax)+'_'+str(y.device)], spec) - # print(333,spec) - spec = dynamic_range_compression_torch(spec, clip_val=clip_val) - # print(444,spec) - return spec - - def __call__(self, audiopath): - audio, sr = load_wav_to_torch(audiopath, target_sr=self.target_sr) - spect = self.get_mel(audio.unsqueeze(0)).squeeze(0) - return spect - -stft = STFT() diff --git 
a/spaces/Mohammed-Khalil/Chat_with_Youtube_Videos/app.py b/spaces/Mohammed-Khalil/Chat_with_Youtube_Videos/app.py deleted file mode 100644 index ac3424404bf5b50d19733caaa5ff8f167a0dcd00..0000000000000000000000000000000000000000 --- a/spaces/Mohammed-Khalil/Chat_with_Youtube_Videos/app.py +++ /dev/null @@ -1,266 +0,0 @@ - -import streamlit as st -hide_streamlit_style = """ - <style> - #MainMenu {visibility: hidden;} - footer {visibility: hidden;} - </style> - """ -st.markdown(hide_streamlit_style, unsafe_allow_html=True) -def paid_version(): - import os - import argparse - import shutil - from langchain.document_loaders import YoutubeLoader - from langchain.text_splitter import RecursiveCharacterTextSplitter - from langchain.vectorstores import Chroma - from langchain.embeddings import OpenAIEmbeddings - from langchain.chains import RetrievalQA - from langchain.llms import OpenAI - import streamlit as st - from langchain.chat_models import ChatOpenAI - from urllib.parse import urlparse, parse_qs - def extract_video_id(youtube_url): - try: - parsed_url = urlparse(youtube_url) - query_params = parse_qs(parsed_url.query) - video_id = query_params.get('v', [None])[0] - - return video_id - except Exception as e: - print(f"Error extracting video ID: {e}") - return None - def set_openAi_api_key(api_key: str): - st.session_state["OPENAI_API_KEY"] = api_key - os.environ['OPENAI_API_KEY'] = api_key - def openai_api_insert_component(): - with st.sidebar: - st.markdown( - """ - ## Quick Guide 🚀 - 1. Get started by adding your [OpenAI API key](https://platform.openai.com/account/api-keys) below🔑 - 2. Easily input the video url - 3. Engage with the content - ask questions, seek answers💬 - """ - ) - - api_key_input = st.text_input("Input your OpenAI API Key", - type="password", - placeholder="Format: sk-...", - help="You can get your API key from https://platform.openai.com/account/api-keys.") - - - if api_key_input == "" or api_key_input is None: - st.sidebar.caption("👆 :red[Please set your OpenAI API Key here]") - - - st.caption(":green[Your API is not stored anywhere. 
It is only used to generate answers to your questions.]")
-
- set_openAi_api_key(api_key_input)
-
- def launchpaidversion():
- openai_api_insert_component()
- os.environ['OPENAI_API_KEY'] = st.session_state['OPENAI_API_KEY']
- st.title('MKG: Your Chat with Youtube Assistant')
-
-
- videourl = st.text_input("Insert The video URL")
- query = st.text_input("Ask any question about the video")
- if st.button("Submit Question", type="primary"):
- video_id = extract_video_id(videourl)
- loader = YoutubeLoader(video_id)
- documents = loader.load()
-
- text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
- documents = text_splitter.split_documents(documents)
-
- shutil.rmtree('./data', ignore_errors=True) # the directory may not exist yet on the first run
- vectordb = Chroma.from_documents(
- documents,
- embedding=OpenAIEmbeddings(),
- persist_directory='./data'
- )
- vectordb.persist()
-
- qa_chain = RetrievalQA.from_chain_type(
- llm=ChatOpenAI(model_name='gpt-3.5-turbo'),
- retriever=vectordb.as_retriever(),
- return_source_documents=True,
- verbose=False
- )
- response = qa_chain(query)
- st.write(response)
-
-
- launchpaidversion()
-def free_version():
- import torch
- import os
- import argparse
- import shutil
- from langchain.document_loaders import YoutubeLoader
- from langchain.text_splitter import RecursiveCharacterTextSplitter
- from langchain.vectorstores import Chroma
- from langchain.embeddings import OpenAIEmbeddings
- from langchain.chains import RetrievalQA
- from langchain.llms import OpenAI
- import streamlit as st
- from langchain.chat_models import ChatOpenAI
- from langchain import HuggingFaceHub
- from langchain.embeddings import HuggingFaceInstructEmbeddings
- from urllib.parse import urlparse, parse_qs
- from langchain.embeddings import HuggingFaceBgeEmbeddings
- from transformers import pipeline
- import textwrap
- import time
- from deep_translator import GoogleTranslator
- from langdetect import detect
-
-
- def typewriter(text: str, speed: float):
- container = st.empty()
- displayed_text = ""
-
- for char in text:
- displayed_text += char
- container.markdown(displayed_text)
- time.sleep(1/speed)
- def wrap_text_preserve_newlines(text, width=110):
- # Split the input text into lines based on newline characters
- lines = text.split('\n')
-
- # Wrap each line individually
- wrapped_lines = [textwrap.fill(line, width=width) for line in lines]
-
- # Join the wrapped lines back together using newline characters
- wrapped_text = '\n'.join(wrapped_lines)
- return wrapped_text
- def process_llm_response(llm_originalresponse2):
- #result_text = wrap_text_preserve_newlines(llm_originalresponse2["result"])
- typewriter(llm_originalresponse2["result"], speed=40)
-
- def extract_video_id(youtube_url):
- try:
- parsed_url = urlparse(youtube_url)
- query_params = parse_qs(parsed_url.query)
- video_id = query_params.get('v', [None])[0]
-
- return video_id
- except Exception as e:
- print(f"Error extracting video ID: {e}")
- return None
- def set_openAi_api_key(api_key: str):
- st.session_state["OPENAI_API_KEY"] = api_key
- os.environ['OPENAI_API_KEY'] = api_key
- def openai_api_insert_component():
- with st.sidebar:
- st.markdown(
- """
- ## Quick Guide 🚀
- 1. Get started by adding your [OpenAI API key](https://platform.openai.com/account/api-keys) below🔑
- 2. Easily input the video URL
- 3. Engage with the content - ask questions, seek answers💬
- """
- )
-
- api_key_input = st.text_input("Input your OpenAI API Key",
- type="password",
- placeholder="Format: sk-...",
- help="You can get your API key from https://platform.openai.com/account/api-keys.")
-
-
- if api_key_input == "" or api_key_input is None:
- st.sidebar.caption("👆 :red[Please set your OpenAI API Key here]")
-
-
- st.caption(":green[Your API is not stored anywhere. It is only used to generate answers to your questions.]")
-
- set_openAi_api_key(api_key_input)
-
- def launchfreeversion():
- HUGGINGFACE_API_TOKEN = os.environ['access_code']
- model_name = "BAAI/bge-base-en"
- encode_kwargs = {'normalize_embeddings': True}
-
- st.title('MKG: Your Chat with Youtube Assistant')
-
- videourl = st.text_input("Insert The video URL", placeholder="Format should be like: https://www.youtube.com/watch?v=pSLeYvld8Mk")
- query = st.text_input("Ask any question about the video",help="Suggested queries: Summarize the key points of this video - What is this video about - Ask about a specific thing in the video ")
- st.warning("⚠️ Please keep in mind that the accuracy of the response relies on the :red[video's quality] and the :red[prompt's quality]. Occasionally, the response may not be entirely accurate. Consider using the response as a reference rather than a definitive answer.")
-
- if st.button("Submit Question", type="primary"):
- with st.spinner('Processing the Video...'):
- video_id = extract_video_id(videourl)
- loader = YoutubeLoader(video_id)
- documents = loader.load()
-
- text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
- documents = text_splitter.split_documents(documents)
-
-
- vectordb = Chroma.from_documents(
- documents,
- #embedding = HuggingFaceInstructEmbeddings(model_name="hkunlp/instructor-xl",
- # model_kwargs={"device": "cuda"})
- embedding= HuggingFaceBgeEmbeddings( model_name=model_name, model_kwargs={'device': 'cuda' if torch.cuda.is_available() else 'cpu'}, encode_kwargs=encode_kwargs)
- )
-
- repo_id = "tiiuae/falcon-7b-instruct"
- qa_chain = RetrievalQA.from_chain_type(
-
- llm=HuggingFaceHub(huggingfacehub_api_token=HUGGINGFACE_API_TOKEN,
- repo_id=repo_id,
- model_kwargs={"temperature":0.1, "max_new_tokens":1000}),
- retriever=vectordb.as_retriever(),
- return_source_documents=False,
- verbose=False
- )
- with st.spinner('Generating Answer...'):
- llm_response = qa_chain(query)
- #llm_originalresponse2=llm_response['result']
- process_llm_response(llm_response)
- launchfreeversion()
-
-
-def intro():
- st.markdown("""
- # MKG: Your Chat with Youtube Assistant 🎬🤖
-
- Welcome to MKG-Assistant, where AI meets Youtube! 🚀🔍
-
- ## Base Models
-
- Q&A-Assistant is built on OpenAI's GPT-3.5 for the premium version and the Falcon 7B Instruct model for the free version to enhance your browsing experience. Whether you're a student, researcher, or professional, we're here to simplify your interactions with the web. 💡📚
-
- ## How to Get Started
-
- 1. Enter the video URL.
- 2. Enter your API key (only needed for the premium version; the free version requires no key).
- 3. Ask questions using everyday language.
- 4. Get detailed, AI-generated answers.
-
- 5. Enjoy a smarter way to interact with Youtube!
-
-
-
- ## It's Time to Dive In!
- - - """) -page_names_to_funcs = { - "Main Page": intro, - "Open Source Edition (Free version)": free_version, - "Premium edition (Requires Open AI API Key )": paid_version - -} - - - - - - -#test -demo_name = st.sidebar.selectbox("Choose a version", page_names_to_funcs.keys()) -page_names_to_funcs[demo_name]() -st.sidebar.markdown('<a href="https://www.linkedin.com/in/mohammed-khalil-ghali-11305119b/"> Connect on LinkedIn <img src="https://cdn.jsdelivr.net/gh/devicons/devicon/icons/linkedin/linkedin-original.svg" alt="LinkedIn" width="30" height="30"></a>', unsafe_allow_html=True) -st.sidebar.markdown('<a href="https://github.com/khalil-ghali"> Check out my GitHub <img src="https://cdn.jsdelivr.net/gh/devicons/devicon/icons/github/github-original.svg" alt="GitHub" width="30" height="30"></a>', unsafe_allow_html=True) \ No newline at end of file diff --git a/spaces/MountLiteraSwd/mount_ai_school1/README.md b/spaces/MountLiteraSwd/mount_ai_school1/README.md deleted file mode 100644 index 8d33ad73a264ddca61332814ae0b69a60b18a566..0000000000000000000000000000000000000000 --- a/spaces/MountLiteraSwd/mount_ai_school1/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Mount Ai School -emoji: 💻 -colorFrom: indigo -colorTo: yellow -sdk: gradio -sdk_version: 3.14.0 -app_file: app.py -pinned: false -duplicated_from: MountLiteraSwd/mount_ai_school ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/NMEX/rvc-hoyo-game/infer_pack/models_onnx_moess.py b/spaces/NMEX/rvc-hoyo-game/infer_pack/models_onnx_moess.py deleted file mode 100644 index 12efb0629a2e3d0d746a34f467254536c2bdbe5f..0000000000000000000000000000000000000000 --- a/spaces/NMEX/rvc-hoyo-game/infer_pack/models_onnx_moess.py +++ /dev/null @@ -1,849 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from infer_pack import modules -from infer_pack import attentions -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from infer_pack.commons import init_weights -import numpy as np -from infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * 
x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder256Sim(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - x = self.proj(x) * x_mask - return x, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class 
Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * 
(
- idx + 2
- ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic
- rad_values = (f0_buf / self.sampling_rate) % 1 ### the % 1 means the products over the harmonics cannot be optimized in post-processing
- rand_ini = torch.rand(
- f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device
- )
- rand_ini[:, 0] = 0
- rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
- tmp_over_one = torch.cumsum(rad_values, 1) # % 1 ##### a % 1 here would prevent the following cumsum from being optimized
- tmp_over_one *= upp
- tmp_over_one = F.interpolate(
- tmp_over_one.transpose(2, 1),
- scale_factor=upp,
- mode="linear",
- align_corners=True,
- ).transpose(2, 1)
- rad_values = F.interpolate(
- rad_values.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(
- 2, 1
- ) #######
- tmp_over_one %= 1
- tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0
- cumsum_shift = torch.zeros_like(rad_values)
- cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
- sine_waves = torch.sin(
- torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi
- )
- sine_waves = sine_waves * self.sine_amp
- uv = self._f02uv(f0)
- uv = F.interpolate(
- uv.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(2, 1)
- noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
- noise = noise_amp * torch.randn_like(sine_waves)
- sine_waves = sine_waves * uv + noise
- return sine_waves, uv, noise
-
-
-class SourceModuleHnNSF(torch.nn.Module):
- """SourceModule for hn-nsf
- SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
- add_noise_std=0.003, voiced_threshod=0)
- sampling_rate: sampling_rate in Hz
- harmonic_num: number of harmonic above F0 (default: 0)
- sine_amp: amplitude of sine source signal (default: 0.1)
- add_noise_std: std of additive Gaussian noise (default: 0.003)
- note that amplitude of noise in unvoiced is decided
- by sine_amp
- voiced_threshold: threshold to set U/V given F0 (default: 0)
- Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
- F0_sampled (batchsize, length, 1)
- Sine_source (batchsize, length, 1)
- noise_source (batchsize, length 1)
- uv (batchsize, length, 1)
- """
-
- def __init__(
- self,
- sampling_rate,
- harmonic_num=0,
- sine_amp=0.1,
- add_noise_std=0.003,
- voiced_threshod=0,
- is_half=True,
- ):
- super(SourceModuleHnNSF, self).__init__()
-
- self.sine_amp = sine_amp
- self.noise_std = add_noise_std
- self.is_half = is_half
- # to produce sine waveforms
- self.l_sin_gen = SineGen(
- sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod
- )
-
- # to merge source harmonics into a single excitation
- self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
- self.l_tanh = torch.nn.Tanh()
-
- def forward(self, x, upp=None):
- sine_wavs, uv, _ = self.l_sin_gen(x, upp)
- if self.is_half:
- sine_wavs = sine_wavs.half()
- sine_merge = self.l_tanh(self.l_linear(sine_wavs))
- return sine_merge, None, None # noise, uv
-
-
-class GeneratorNSF(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels,
- sr,
- is_half=False,
- ):
- super(GeneratorNSF, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
-
- self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates))
- self.m_source = SourceModuleHnNSF(
- sampling_rate=sr, harmonic_num=0, is_half=is_half
- )
- self.noise_convs = nn.ModuleList()
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock ==
"1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMs256NSFsidM(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - 
self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(self, phone, phone_lengths, pitch, nsff0, sid, rnd, max_len=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g)
- return o
-
-
-class SynthesizerTrnMs256NSFsid_sim(nn.Module):
- """
- Synthesizer for Training
- """
-
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- # hop_length,
- gin_channels=0,
- use_sdp=True,
- **kwargs
- ):
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256Sim(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- is_half=kwargs["is_half"],
- )
-
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(
- self, phone, phone_lengths, pitch, pitchf, ds, max_len=None
- ): # y (the spec) is no longer needed here
- g = self.emb_g(ds.unsqueeze(0)).unsqueeze(-1) # [b, 256, 1] ## the trailing 1 is the time axis, which broadcasts
- x, x_mask = self.enc_p(phone, pitch, phone_lengths)
- x = self.flow(x, x_mask, g=g, reverse=True)
- o = self.dec((x * x_mask)[:, :, :max_len], pitchf, g=g)
- return o
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2, 3, 5, 7, 11, 17]
- # periods = [3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm)
for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/Nultx/VITS-TTS/ONNXVITS_inference.py b/spaces/Nultx/VITS-TTS/ONNXVITS_inference.py deleted file mode 100644 index 258b618cd338322365dfa25bec468a0a3f70ccd1..0000000000000000000000000000000000000000 --- a/spaces/Nultx/VITS-TTS/ONNXVITS_inference.py +++ /dev/null @@ -1,36 +0,0 @@ -import logging -logging.getLogger('numba').setLevel(logging.WARNING) -import IPython.display as ipd -import torch -import commons -import utils -import ONNXVITS_infer -from text import text_to_sequence - -def get_text(text, hps): - text_norm = text_to_sequence(text, hps.symbols, hps.data.text_cleaners) - if hps.data.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = torch.LongTensor(text_norm) - return text_norm - -hps = 
utils.get_hparams_from_file("../vits/pretrained_models/uma87.json") - -net_g = ONNXVITS_infer.SynthesizerTrn( - len(hps.symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - **hps.model) -_ = net_g.eval() - -_ = utils.load_checkpoint("../vits/pretrained_models/uma_1153000.pth", net_g) - -text1 = get_text("おはようございます。", hps) -stn_tst = text1 -with torch.no_grad(): - x_tst = stn_tst.unsqueeze(0) - x_tst_lengths = torch.LongTensor([stn_tst.size(0)]) - sid = torch.LongTensor([0]) - audio = net_g.infer(x_tst, x_tst_lengths, sid=sid, noise_scale=.667, noise_scale_w=0.8, length_scale=1)[0][0,0].data.cpu().float().numpy() -print(audio) \ No newline at end of file diff --git a/spaces/OAOA/DifFace/basicsr/archs/__init__.py b/spaces/OAOA/DifFace/basicsr/archs/__init__.py deleted file mode 100644 index af6bcbd97bb3e4914c3c91dc53e0708bcac66075..0000000000000000000000000000000000000000 --- a/spaces/OAOA/DifFace/basicsr/archs/__init__.py +++ /dev/null @@ -1,24 +0,0 @@ -import importlib -from copy import deepcopy -from os import path as osp - -from basicsr.utils import get_root_logger, scandir -from basicsr.utils.registry import ARCH_REGISTRY - -__all__ = ['build_network'] - -# automatically scan and import arch modules for registry -# scan all the files under the 'archs' folder and collect files ending with '_arch.py' -arch_folder = osp.dirname(osp.abspath(__file__)) -arch_filenames = [osp.splitext(osp.basename(v))[0] for v in scandir(arch_folder) if v.endswith('_arch.py')] -# import all the arch modules -_arch_modules = [importlib.import_module(f'basicsr.archs.{file_name}') for file_name in arch_filenames] - - -def build_network(opt): - opt = deepcopy(opt) - network_type = opt.pop('type') - net = ARCH_REGISTRY.get(network_type)(**opt) - logger = get_root_logger() - logger.info(f'Network [{net.__class__.__name__}] is created.') - return net diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/README.md b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/README.md deleted file mode 100644 index 90741f42b0b070f2a91b63c8badb817c6aa24230..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/README.md +++ /dev/null @@ -1,87 +0,0 @@ -# ASR-based evaluation - -Overall, the life cycle of the ASR-based evaluation for an ULM contains the following steps: - 1. Training an ULM and sampling from it [[description]](./../../ulm) - 2. Running UTS on the sampled unit sequences [[description]](./../../unit2speech) - 3. Pre-processing for the ASR (down-sampling to 16 KHz, aligning length of the generated audio with ground-truth utterances) - 4. Running ASR - 5. Calculation of the post-ASR evaluation metrics - -Here we assume that you have already went throught the first two steps and focus on the rest. - -## Preprocessing -### Down-sampling to 16KHz -The bulk conversion can be done by running -```bash - python $FAIRSEQ_ROOT/examples/textless_nlp/gslm/unit2speech/convert_to_16k.py $UTS_OUTPUT $UTS_OUTPUT_DOWNSAMPLE - ``` - where `$UTS_OUTPUT` specifies the directory with the generated audio and `$UTS_OUTPUT_DOWNSAMPLE` is the directory where downsampled audio would be saved. - - ### Matching by length -This step is somewhat optional. 
-This step is somewhat optional. However, if you want to compare the fluency and diversity of a generated speech utterance to those of the ground-truth speech with the same prefix, it is a good idea to force them to be of the same length.
-```bash
-python $FAIRSEQ_ROOT/examples/textless_nlp/asr_metrics/cut_as.py \
-    --samples_dir=$UTS_OUTPUT_DOWNSAMPLE --out_dir=$UTS_OUTPUT_DOWNSAMPLE_CUT \
-    --prompts_description=data/ground_truth_continuation_dev.json
-```
-
-Here `ground_truth_continuation_dev.json` is a JSON file with ground-truth text from LibriSpeech dev-clean, associated with some meta-data (assuming the evaluation is done on dev-clean). This file can be downloaded [[here]](https://dl.fbaipublicfiles.com/textless_nlp/gslm/eval_data/ground_truth_continuation_dev.json). A similar file for test-clean is [[here]](https://dl.fbaipublicfiles.com/textless_nlp/gslm/eval_data/ground_truth_continuation_test.json). These files are used for the evaluation and contain texts for audio sequences that are at least 6s long.
-
-## Running ASR
-We use a pre-trained wav2vec model to run the ASR step. We first need to prepare manifest files which, roughly, tell the ASR system which files we want to transcribe. You can find more details and download the `960h_scratch.pt` checkpoint [[here]](https://github.com/pytorch/fairseq/blob/main/examples/wav2vec/README.md). To run ASR, you will also need to install KenLM, the Flashlight decoder, and the KenLM 4-gram English language model.
-
-```bash
-python $FAIRSEQ_ROOT/examples/wav2vec/wav2vec_manifest.py \
-    $UTS_OUTPUT_DOWNSAMPLE_CUT --valid-percent 0.0 --dest $MANIFEST_DIR --ext wav
-```
-where `$UTS_OUTPUT_DOWNSAMPLE_CUT` specifies the directory with the preprocessed UTS outputs and `$MANIFEST_DIR` is the output directory.
-
-We will be running an out-of-the-box evaluation script which requires ground-truth transcripts to measure quality metrics. We are only interested in the transcripts (and we don't have ground-truth outputs for what our ULM generated!), hence we will just generate some dummy transcripts instead:
-```bash
-cp $FAIRSEQ_ROOT/examples/textless_nlp/gslm/asr_metrics/misc/dict.ltr.txt $MANIFEST_DIR
-python $FAIRSEQ_ROOT/examples/textless_nlp/gslm/asr_metrics/misc/dummy_asr_data.py --tsv=$MANIFEST_DIR/train.tsv \
-    --output-dir=$MANIFEST_DIR
-```
-
-Now we are ready to run ASR:
-```bash
-mkdir -p asr
-python $FAIRSEQ_ROOT/examples/speech_recognition/infer.py \
-    $MANIFEST_DIR \
-    --task audio_pretraining --nbest 1 --path 960h_scratch.pt \
-    --gen-subset=train --results-path $PATH_TO_ASR_OUTPUT \
-    --w2l-decoder kenlm --lm-model 4-gram.bin \
-    --lexicon librispeech/lexicon_ltr.lst --word-score -1 \
-    --sil-weight 0 --lm-weight 2 --criterion ctc --labels ltr --max-tokens 300000 --remove-bpe letter
-```
-where `lexicon_ltr.lst` is the LibriSpeech lexicon (it can be downloaded [[here]](https://dl.fbaipublicfiles.com/textless_nlp/gslm/eval_data/lexicon_ltr.lst)) and `$PATH_TO_ASR_OUTPUT` is the output directory.
-
-## Evaluation metrics
-We run the evaluation on the 1,000 shortest sequences that are at least 6s long. To filter those from the ASR transcript, we additionally provide each metric script with the paths to the manifest and `ground_truth_continuation_*` files.
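-
-For orientation only, the selection logic amounts to something like the sketch below. This is not one of the provided scripts, and it assumes a wav2vec-style `train.tsv` manifest whose first line is the audio root directory and whose remaining rows are `relative_path<TAB>num_samples` at 16 kHz:
-```python
-MIN_SECONDS = 6
-SAMPLE_RATE = 16_000
-TOP_K = 1_000
-
-with open("train.tsv") as f:
-    root = next(f).strip()  # header line: root directory of the audio files
-    rows = [line.rstrip("\n").split("\t") for line in f]
-
-# keep utterances that are at least 6 s long, then take the 1,000 shortest
-long_enough = [(path, int(n)) for path, n in rows if int(n) >= MIN_SECONDS * SAMPLE_RATE]
-selected = sorted(long_enough, key=lambda row: row[1])[:TOP_K]
-print(f"kept {len(selected)} of {len(rows)} utterances")
-```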
- -### Perplexity (PPX) -To get a PPX metric estimate on an ASR transcript, you need to run the following command: -```bash -python ppx.py $PATH_TO_ASR_OUTPUT/hypo.word-960h_scratch.pt-train.txt --cut-tail\ - --manifest=$MANIFEST_DIR/train.tsv --prompts-description=data/ground_truth_continuation_dev.json -``` -where `--cut-tail` tells the script to ignore the last token on each line (ASR puts the sequence ID there). - -### Self- and Auto-BLEU -```bash -python self_bleu.py $PATH_TO_ASR_OUTPUT/hypo.word-960h_scratch.pt-train.txt --cut-tail \ - --manifest=$MANIFEST_DIR/train.tsv --prompts-description=data/ground_truth_continuation_dev.json -``` - -### Continuation-BLEU -```bash -python continuation_eval.py --asr-transcript $PATH_TO_ASR_OUTPUT/hypo.word-960h_scratch.pt-train.txt \ - --manifest=$MANIFEST_DIR/train.tsv --prompts-description=data/ground_truth_continuation_dev.json -``` - -### AUC -Based on the metrics calculated above, we can estimate the AUC of the perplexity/diversity trade-off. We provide an illustration in a [Colab notebook](https://colab.research.google.com/drive/1pVPfOVax_PU3MkYdHRSsa-SI8GBUldNt?usp=sharing). diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/models/ofa/ofa.py b/spaces/OFA-Sys/OFA-Generic_Interface/models/ofa/ofa.py deleted file mode 100644 index 01abdf64706d9555a42fa4cd7a7f38fb6649c53e..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/models/ofa/ofa.py +++ /dev/null @@ -1,410 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -OFA -""" -from typing import Optional - -import logging - -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq import utils -from fairseq.models import register_model, register_model_architecture -from fairseq.modules.transformer_sentence_encoder import init_bert_params - -from .unify_transformer import TransformerModel - -logger = logging.getLogger(__name__) - - -@register_model("ofa") -class OFAModel(TransformerModel): - __jit_unused_properties__ = ["supported_targets"] - - def __init__(self, args, encoder, decoder): - super().__init__(args, encoder, decoder) - - # We follow BERT's random weight initialization - self.apply(init_bert_params) - - self.classification_heads = nn.ModuleDict() - if hasattr(self.encoder, "dictionary"): - self.eos: int = self.encoder.dictionary.eos() - - @staticmethod - def add_args(parser): - super(OFAModel, OFAModel).add_args(parser) - parser.add_argument( - "--pooler-dropout", - type=float, - metavar="D", - help="dropout probability in the masked_lm pooler layers", - ) - parser.add_argument( - "--pooler-classifier", - type=str, - choices=['mlp', 'linear'], - help="type of pooler classifier", - ) - parser.add_argument( - "--pooler-activation-fn", - choices=utils.get_available_activation_fns(), - help="activation function to use for pooler layer", - ) - parser.add_argument( - "--spectral-norm-classification-head", - action="store_true", - help="Apply spectral normalization on the classification head", - ) - - @property - def supported_targets(self): - return {"self"} - - def forward( - self, - src_tokens, - src_lengths, - prev_output_tokens, - patch_images: Optional[torch.Tensor] = None, - patch_images_2: Optional[torch.Tensor] = None, - patch_masks: Optional[torch.Tensor] = None, - code_masks: Optional[torch.Tensor] = None, - sample_patch_num: Optional[int] = None, - features_only: bool = False, - 
classification_head_name: Optional[str] = None, - token_embeddings: Optional[torch.Tensor] = None, - return_all_hiddens: bool = False, - alignment_layer: Optional[int] = None, - alignment_heads: Optional[int] = None, - ): - if classification_head_name is not None: - features_only = True - - encoder_out = self.encoder( - src_tokens, - src_lengths=src_lengths, - patch_images=patch_images, - patch_masks=patch_masks, - patch_images_2=patch_images_2, - token_embeddings=token_embeddings, - return_all_hiddens=return_all_hiddens, - sample_patch_num=sample_patch_num - ) - x, extra = self.decoder( - prev_output_tokens, - code_masks=code_masks, - encoder_out=encoder_out, - features_only=features_only, - alignment_layer=alignment_layer, - alignment_heads=alignment_heads, - src_lengths=src_lengths, - return_all_hiddens=return_all_hiddens, - ) - - pad = self.encoder.padding_idx - if classification_head_name is not None: - prev_lengths = prev_output_tokens.ne(pad).sum(1) - gather_index = prev_lengths[:, None, None].expand(x.size(0), 1, x.size(2)) - 1 - sentence_representation = x.gather(1, gather_index).squeeze() - if self.classification_heads[classification_head_name].use_two_images: - hidden_size = sentence_representation.size(1) - sentence_representation = sentence_representation.view(-1, hidden_size * 2) - for k, head in self.classification_heads.items(): - # for torch script only supports iteration - if k == classification_head_name: - x = head(sentence_representation) - break - - return x, extra - - def register_embedding_tokens(self, ans2label_dict, src_dict, bpe): - """Register embedding tokens""" - logger.info("Registering embedding tokens") - self.ans_tensor_list = [] - for i in range(len(ans2label_dict)): - ans = src_dict[-len(ans2label_dict)+i] - ans = ans[5:-1].replace('_', ' ') - ans_tensor = src_dict.encode_line( - line=bpe.encode(' {}'.format(ans.lower())), - add_if_not_exist=False, - append_eos=False - ).long() - self.ans_tensor_list.append(ans_tensor) - - def register_classification_head( - self, name, num_classes=None, inner_dim=None, use_two_images=False, **kwargs - ): - """Register a classification head.""" - logger.info("Registering classification head: {0}".format(name)) - if name in self.classification_heads: - prev_num_classes = self.classification_heads[name].out_proj.out_features - prev_inner_dim = self.classification_heads[name].dense.out_features - if num_classes != prev_num_classes or inner_dim != prev_inner_dim: - logger.warning( - 're-registering head "{}" with num_classes {} (prev: {}) ' - "and inner_dim {} (prev: {})".format( - name, num_classes, prev_num_classes, inner_dim, prev_inner_dim - ) - ) - self.classification_heads[name] = OFAClassificationHead( - input_dim=self.args.encoder_embed_dim, - inner_dim=inner_dim or self.args.encoder_embed_dim, - num_classes=num_classes, - activation_fn=self.args.pooler_activation_fn, - pooler_dropout=self.args.pooler_dropout, - pooler_classifier=self.args.pooler_classifier, - use_two_images=use_two_images, - do_spectral_norm=getattr( - self.args, "spectral_norm_classification_head", False - ), - ) - - def upgrade_state_dict_named(self, state_dict, name): - super().upgrade_state_dict_named(state_dict, name) - - prefix = name + "." if name != "" else "" - current_head_names = ( - [] - if not hasattr(self, "classification_heads") - else self.classification_heads.keys() - ) - - # Handle new classification heads present in the state dict. 
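-        # With `load_checkpoint_heads` enabled, heads found in the checkpoint
-        # but missing from the current model are registered on the fly;
-        # otherwise, heads that are missing here, or whose dimensions disagree
-        # with the current model, are collected and dropped below.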
- keys_to_delete = [] - for k in state_dict.keys(): - if not k.startswith(prefix + "classification_heads."): - continue - - head_name = k[len(prefix + "classification_heads.") :].split(".")[0] - num_classes = state_dict[ - prefix + "classification_heads." + head_name + ".out_proj.weight" - ].size(0) - inner_dim = state_dict[ - prefix + "classification_heads." + head_name + ".dense.weight" - ].size(0) - - if getattr(self.args, "load_checkpoint_heads", False): - if head_name not in current_head_names: - self.register_classification_head(head_name, num_classes, inner_dim) - else: - if head_name not in current_head_names: - logger.warning( - "deleting classification head ({}) from checkpoint " - "not present in current model: {}".format(head_name, k) - ) - keys_to_delete.append(k) - elif ( - num_classes - != self.classification_heads[head_name].out_proj.out_features - or inner_dim - != self.classification_heads[head_name].dense.out_features - ): - logger.warning( - "deleting classification head ({}) from checkpoint " - "with different dimensions than current model: {}".format( - head_name, k - ) - ) - keys_to_delete.append(k) - for k in keys_to_delete: - del state_dict[k] - - def truncate_emb(key): - if key in state_dict: - state_dict[key] = state_dict[key][:-1, :] - - # When finetuning on translation task, remove last row of - # embedding matrix that corresponds to mask_idx token. - loaded_dict_size = state_dict["encoder.embed_tokens.weight"].size(0) - if ( - loaded_dict_size == len(self.encoder.dictionary) + 1 - and "<mask>" not in self.encoder.dictionary - ): - truncate_emb("encoder.embed_tokens.weight") - truncate_emb("decoder.embed_tokens.weight") - truncate_emb("encoder.output_projection.weight") - truncate_emb("decoder.output_projection.weight") - - if loaded_dict_size < len(self.encoder.dictionary): - num_langids_to_add = len(self.encoder.dictionary) - loaded_dict_size - embed_dim = state_dict["encoder.embed_tokens.weight"].size(1) - - new_lang_embed_to_add = torch.zeros(num_langids_to_add, embed_dim) - if getattr(self, "ans_tensor_list", None): - assert len(new_lang_embed_to_add) == len(self.ans_tensor_list) - for i, ans_tensor in enumerate(self.ans_tensor_list): - ans_embed = F.embedding(ans_tensor, state_dict["encoder.embed_tokens.weight"]) - ans_embed = ans_embed.sum(0) / ans_embed.size(0) - new_lang_embed_to_add[i] = ans_embed - else: - nn.init.normal_(new_lang_embed_to_add, mean=0, std=embed_dim ** -0.5) - new_lang_embed_to_add = new_lang_embed_to_add.to( - dtype=state_dict["encoder.embed_tokens.weight"].dtype, - ) - - state_dict["encoder.embed_tokens.weight"] = torch.cat( - [state_dict["encoder.embed_tokens.weight"], new_lang_embed_to_add] - ) - state_dict["decoder.embed_tokens.weight"] = torch.cat( - [state_dict["decoder.embed_tokens.weight"], new_lang_embed_to_add] - ) - state_dict["decoder.output_projection.weight"] = torch.cat( - [state_dict["decoder.output_projection.weight"], new_lang_embed_to_add] - ) - - # Copy any newly-added classification heads into the state dict - # with their current weights. - if hasattr(self, "classification_heads"): - cur_state = self.classification_heads.state_dict() - for k, v in cur_state.items(): - if prefix + "classification_heads." + k not in state_dict: - logger.info("Overwriting " + prefix + "classification_heads." + k) - state_dict[prefix + "classification_heads." 
+ k] = v - - -class OFAClassificationHead(nn.Module): - """Head for sentence-level classification tasks.""" - - def __init__( - self, - input_dim, - inner_dim, - num_classes, - activation_fn, - pooler_dropout, - pooler_classifier, - use_two_images=False, - do_spectral_norm=False, - ): - super().__init__() - self.pooler_classifier = pooler_classifier - self.use_two_images = use_two_images - input_dim = input_dim * 2 if use_two_images else input_dim - if pooler_classifier == "mlp": - self.dense = nn.Linear(input_dim, inner_dim) - self.activation_fn = utils.get_activation_fn(activation_fn) - self.dropout = nn.Dropout(p=pooler_dropout) - self.out_proj = nn.Linear(inner_dim, num_classes) - elif pooler_classifier == "linear": - self.dropout = nn.Dropout(p=pooler_dropout) - self.out_proj = nn.Linear(input_dim, num_classes) - else: - raise NotImplementedError - - if do_spectral_norm: - self.out_proj = torch.nn.utils.spectral_norm(self.out_proj) - - def forward(self, features, **kwargs): - if self.pooler_classifier == 'mlp': - x = features - x = self.dropout(x) - x = self.dense(x) - x = self.activation_fn(x) - x = self.dropout(x) - x = self.out_proj(x) - elif self.pooler_classifier == 'linear': - x = features - x = self.dropout(x) - x = self.out_proj(x) - else: - raise NotImplementedError - return x - - -@register_model_architecture("ofa", "ofa_large") -def ofa_large_architecture(args): - args.encoder_embed_path = getattr(args, "encoder_embed_path", None) - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4 * 1024) - args.encoder_layers = getattr(args, "encoder_layers", 12) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", True) - args.encoder_learned_pos = getattr(args, "encoder_learned_pos", True) - args.decoder_embed_path = getattr(args, "decoder_embed_path", None) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", args.encoder_embed_dim) - args.decoder_ffn_embed_dim = getattr( - args, "decoder_ffn_embed_dim", args.encoder_ffn_embed_dim - ) - args.decoder_layers = getattr(args, "decoder_layers", 12) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 16) - args.decoder_normalize_before = getattr(args, "decoder_normalize_before", True) - args.decoder_learned_pos = getattr(args, "decoder_learned_pos", True) - args.attention_dropout = getattr(args, "attention_dropout", 0.0) - args.relu_dropout = getattr(args, "relu_dropout", 0.0) - args.dropout = getattr(args, "dropout", 0.0) - args.max_target_positions = getattr(args, "max_target_positions", 1024) - args.max_source_positions = getattr(args, "max_source_positions", 1024) - args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None) - args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0) - args.share_decoder_input_output_embed = getattr( - args, "share_decoder_input_output_embed", True - ) - args.share_all_embeddings = getattr(args, "share_all_embeddings", True) - - args.decoder_output_dim = getattr( - args, "decoder_output_dim", args.decoder_embed_dim - ) - args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim) - - args.no_scale_embedding = getattr(args, "no_scale_embedding", True) - args.layernorm_embedding = getattr(args, "layernorm_embedding", True) - - args.activation_fn = getattr(args, "activation_fn", "gelu") - args.pooler_activation_fn = getattr(args, 
"pooler_activation_fn", "tanh") - args.pooler_dropout = getattr(args, "pooler_dropout", 0.0) - args.pooler_classifier = getattr(args, "pooler_classifier", "mlp") - - args.resnet_drop_path_rate = getattr(args, "resnet_drop_path_rate", 0.0) - args.encoder_drop_path_rate = getattr(args, "encoder_drop_path_rate", 0.0) - args.decoder_drop_path_rate = getattr(args, "decoder_drop_path_rate", 0.0) - - args.resnet_type = getattr(args, "resnet_type", "resnet152") - args.token_bucket_size = getattr(args, "token_bucket_size", 256) - args.image_bucket_size = getattr(args, "image_bucket_size", 42) - - args.freeze_encoder_embedding = getattr(args, "freeze_encoder_embedding", False) - args.freeze_decoder_embedding = getattr(args, "freeze_decoder_embedding", False) - args.add_type_embedding = getattr(args, "add_type_embedding", True) - args.attn_scale_factor = getattr(args, "attn_scale_factor", 2) - - args.code_image_size = getattr(args, "code_image_size", 128) - args.patch_layernorm_embedding = getattr(args, "patch_layernorm_embedding", True) - args.code_layernorm_embedding = getattr(args, "code_layernorm_embedding", True) - args.entangle_position_embedding = getattr(args, "entangle_position_embedding", False) - args.disable_entangle = getattr(args, "disable_entangle", False) - args.sync_bn = getattr(args, "sync_bn", False) - - args.scale_attn = getattr(args, "scale_attn", False) - args.scale_fc = getattr(args, "scale_fc", False) - args.scale_heads = getattr(args, "scale_heads", False) - args.scale_resids = getattr(args, "scale_resids", False) - - -@register_model_architecture("ofa", "ofa_base") -def ofa_base_architecture(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 768) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4 * 768) - args.encoder_layers = getattr(args, "encoder_layers", 6) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 12) - args.decoder_layers = getattr(args, "decoder_layers", 6) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 12) - args.resnet_type = getattr(args, "resnet_type", "resnet101") - ofa_large_architecture(args) - - -@register_model_architecture("ofa", "ofa_huge") -def ofa_huge_architecture(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1280) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4 * 1280) - args.encoder_layers = getattr(args, "encoder_layers", 24) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16) - args.decoder_layers = getattr(args, "decoder_layers", 12) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 16) - args.resnet_type = getattr(args, "resnet_type", "resnet152") - ofa_large_architecture(args) - diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/model_parallel/criterions/__init__.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/model_parallel/criterions/__init__.py deleted file mode 100644 index 5fae7bd4c2cfa7b4f64ad62dd9b9082f59f0e50d..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/model_parallel/criterions/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
-
-import importlib
-import os
-
-
-# automatically import any Python files in the criterions/ directory
-for file in sorted(os.listdir(os.path.dirname(__file__))):
-    if file.endswith(".py") and not file.startswith("_"):
-        module = file[: file.find(".py")]
-        importlib.import_module("fairseq.model_parallel.criterions." + module)
diff --git a/spaces/OOlajide/common-nlp-tasks/app.py b/spaces/OOlajide/common-nlp-tasks/app.py
deleted file mode 100644
index 244a124dbe798ee834127e4b1b84abf339f11674..0000000000000000000000000000000000000000
--- a/spaces/OOlajide/common-nlp-tasks/app.py
+++ /dev/null
@@ -1,113 +0,0 @@
-import streamlit as st
-from transformers import pipeline
-
-st.set_page_config(page_title="Common NLP Tasks")
-st.title("Common NLP Tasks")
-st.subheader(":point_left: Use the menu on the left to select an NLP task (click on > if closed).")
-"""
-[![](https://img.shields.io/github/followers/OOlajide?label=OOlajide&style=social)](https://gitHub.com/OOlajide)
-[![](https://img.shields.io/twitter/follow/sageOlamide?label=@sageOlamide&style=social)](https://twitter.com/sageOlamide)
-"""
-expander = st.sidebar.expander("About")
-expander.write("This web app allows you to perform common Natural Language Processing tasks. Select a task below to get started.")
-
-st.sidebar.header("What would you like to do?")
-option = st.sidebar.radio("", ["Text summarization", "Extractive question answering", "Text generation"])
-
-@st.cache(show_spinner=False, allow_output_mutation=True)
-def question_model():
-    model_name = "deepset/tinyroberta-squad2"
-    question_answerer = pipeline(model=model_name, tokenizer=model_name, task="question-answering")
-    return question_answerer
-
-@st.cache(show_spinner=False, allow_output_mutation=True)
-def summarization_model():
-    model_name = "google/pegasus-xsum"
-    summarizer = pipeline(model=model_name, tokenizer=model_name, task="summarization")
-    return summarizer
-
-@st.cache(show_spinner=False, allow_output_mutation=True)
-def generation_model():
-    model_name = "distilgpt2"
-    generator = pipeline(model=model_name, tokenizer=model_name, task="text-generation")
-    return generator
-
-if option == "Extractive question answering":
-    st.markdown("<h2 style='text-align: center; color:grey;'>Extractive Question Answering</h2>", unsafe_allow_html=True)
-    st.markdown("<h3 style='text-align: left; color:#F63366; font-size:18px;'><b>What is extractive question answering about?</b></h3>", unsafe_allow_html=True)
-    st.write("Extractive question answering is a Natural Language Processing task where text is provided for a model so that the model can refer to it and make predictions about where the answer to a question is.")
-    st.markdown('___')
-    source = st.radio("How would you like to start? Choose an option below", ["I want to input some text", "I want to upload a file"])
-    sample_question = "What did the shepherd boy do to amuse himself?"
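-    # Default question for the bundled sample.txt passage; both input branches
-    # below pre-fill their question fields with it.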
-    if source == "I want to input some text":
-        with open("sample.txt", "r") as text_file:
-            sample_text = text_file.read()
-        context = st.text_area("Use the example below or input your own text in English (10,000 characters max)", value=sample_text, max_chars=10000, height=330)
-        question = st.text_input(label="Use the question below or enter your own question", value=sample_question)
-        button = st.button("Get answer")
-        if button:
-            with st.spinner(text="Loading question model..."):
-                question_answerer = question_model()
-            with st.spinner(text="Getting answer..."):
-                answer = question_answerer(context=context, question=question)
-            answer = answer["answer"]
-            st.text(answer)
-    elif source == "I want to upload a file":
-        uploaded_file = st.file_uploader("Choose a .txt file to upload", type=["txt"])
-        if uploaded_file is not None:
-            raw_text = str(uploaded_file.read(), "utf-8")
-            context = st.text_area("", value=raw_text, height=330)
-            question = st.text_input(label="Enter your question", value=sample_question)
-            button = st.button("Get answer")
-            if button:
-                with st.spinner(text="Loading question model..."):
-                    question_answerer = question_model()
-                with st.spinner(text="Getting answer..."):
-                    answer = question_answerer(context=context, question=question)
-                answer = answer["answer"]
-                st.text(answer)
-
-elif option == "Text summarization":
-    st.markdown("<h2 style='text-align: center; color:grey;'>Text Summarization</h2>", unsafe_allow_html=True)
-    st.markdown("<h3 style='text-align: left; color:#F63366; font-size:18px;'><b>What is text summarization about?</b></h3>", unsafe_allow_html=True)
-    st.write("Text summarization is producing a shorter version of a given text while preserving its important information.")
-    st.markdown('___')
-    source = st.radio("How would you like to start? Choose an option below", ["I want to input some text", "I want to upload a file"])
Choose an option below", ["I want to input some text", "I want to upload a file"]) - if source == "I want to input some text": - with open("sample.txt", "r") as text_file: - sample_text = text_file.read() - text = st.text_area("Input a text in English (10,000 characters max) or use the example below", value=sample_text, max_chars=10000, height=330) - button = st.button("Get summary") - if button: - with st.spinner(text="Loading summarization model..."): - summarizer = summarization_model() - with st.spinner(text="Summarizing text..."): - summary = summarizer(text, max_length=130, min_length=30) - st.text(summary[0]["summary_text"]) - - elif source == "I want to upload a file": - uploaded_file = st.file_uploader("Choose a .txt file to upload", type=["txt"]) - if uploaded_file is not None: - raw_text = str(uploaded_file.read(),"utf-8") - text = st.text_area("", value=raw_text, height=330) - button = st.button("Get summary") - if button: - with st.spinner(text="Loading summarization model..."): - summarizer = summarization_model() - with st.spinner(text="Summarizing text..."): - summary = summarizer(text, max_length=130, min_length=30) - st.text(summary[0]["summary_text"]) - -elif option == "Text generation": - st.markdown("<h2 style='text-align: center; color:grey;'>Text Generation</h2>", unsafe_allow_html=True) - st.markdown("<h3 style='text-align: left; color:#F63366; font-size:18px;'><b>What is text generation about?<b></h3>", unsafe_allow_html=True) - st.write("Text generation is the task of generating text with the goal of appearing indistinguishable to human-written text.") - st.markdown('___') - text = st.text_input(label="Enter one line of text and let the NLP model generate the rest for you") - button = st.button("Generate text") - if button: - with st.spinner(text="Loading text generation model..."): - generator = generation_model() - with st.spinner(text="Generating text..."): - generated_text = generator(text, max_length=50) - st.text(generated_text[0]["generated_text"]) \ No newline at end of file diff --git a/spaces/OpenDILabCommunity/DI-sheep/DI-sheep/ui/src/index.css b/spaces/OpenDILabCommunity/DI-sheep/DI-sheep/ui/src/index.css deleted file mode 100644 index 86601e8611ce6c443c58e58b587848d0956b0f5a..0000000000000000000000000000000000000000 --- a/spaces/OpenDILabCommunity/DI-sheep/DI-sheep/ui/src/index.css +++ /dev/null @@ -1,78 +0,0 @@ -:root { - font-family: Inter, Avenir, Helvetica, Arial, sans-serif; - font-size: 16px; - line-height: 24px; - font-weight: 400; - color-scheme: light dark; - color: rgb(255 255 255 / 87%); - background-color: #242424; - font-synthesis: none; - text-rendering: optimizelegibility; - -webkit-font-smoothing: antialiased; - -moz-osx-font-smoothing: grayscale; - text-size-adjust: 100%; -} - -a { - font-weight: 500; - color: #646cff; - text-decoration: inherit; -} - -a:hover { - color: #535bf2; -} - -body { - margin: 0; - padding: 0 32px; -} - -h1 { - font-size: 3.2em; - line-height: 1.1; -} - -select { - border: 2px solid gray; - border-radius: 4px; - padding: 4px 8px; -} - -button { - border-radius: 8px; - border: 1px solid transparent; - padding: 0.6em 1.2em; - font-size: 1em; - font-weight: 500; - font-family: inherit; - background-color: #1a1a1a; - cursor: pointer; - transition: border-color 0.25s; - word-break: keep-all; - outline: none; -} - -button:hover { - border-color: #646cff; -} - -button:focus, -button:focus-visible { - outline: 4px auto -webkit-focus-ring-color; -} - -@media (prefers-color-scheme: light) { - :root { - color: #213547; 
- background-color: #fff; - } - - a:hover { - color: #747bff; - } - - button { - background-color: #f9f9f9; - } -} diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/roi_heads/mask_head.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/roi_heads/mask_head.py deleted file mode 100644 index 5ac5c4b9aaa34653d6c50e512a5a4300da450c7f..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/roi_heads/mask_head.py +++ /dev/null @@ -1,292 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from typing import List -import fvcore.nn.weight_init as weight_init -import torch -from torch import nn -from torch.nn import functional as F - -from detectron2.config import configurable -from detectron2.layers import Conv2d, ConvTranspose2d, ShapeSpec, cat, get_norm -from detectron2.structures import Instances -from detectron2.utils.events import get_event_storage -from detectron2.utils.registry import Registry - -__all__ = [ - "BaseMaskRCNNHead", - "MaskRCNNConvUpsampleHead", - "build_mask_head", - "ROI_MASK_HEAD_REGISTRY", -] - - -ROI_MASK_HEAD_REGISTRY = Registry("ROI_MASK_HEAD") -ROI_MASK_HEAD_REGISTRY.__doc__ = """ -Registry for mask heads, which predicts instance masks given -per-region features. - -The registered object will be called with `obj(cfg, input_shape)`. -""" - - -@torch.jit.unused -def mask_rcnn_loss(pred_mask_logits: torch.Tensor, instances: List[Instances], vis_period: int = 0): - """ - Compute the mask prediction loss defined in the Mask R-CNN paper. - - Args: - pred_mask_logits (Tensor): A tensor of shape (B, C, Hmask, Wmask) or (B, 1, Hmask, Wmask) - for class-specific or class-agnostic, where B is the total number of predicted masks - in all images, C is the number of foreground classes, and Hmask, Wmask are the height - and width of the mask predictions. The values are logits. - instances (list[Instances]): A list of N Instances, where N is the number of images - in the batch. These instances are in 1:1 - correspondence with the pred_mask_logits. The ground-truth labels (class, box, mask, - ...) associated with each instance are stored in fields. - vis_period (int): the period (in steps) to dump visualization. - - Returns: - mask_loss (Tensor): A scalar tensor containing the loss. - """ - cls_agnostic_mask = pred_mask_logits.size(1) == 1 - total_num_masks = pred_mask_logits.size(0) - mask_side_len = pred_mask_logits.size(2) - assert pred_mask_logits.size(2) == pred_mask_logits.size(3), "Mask prediction must be square!" 
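-    # The loop below collects per-image GT classes and rasterizes the GT masks
-    # at the predicted resolution (mask_side_len x mask_side_len), so that they
-    # can be compared with pred_mask_logits element-wise.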
- - gt_classes = [] - gt_masks = [] - for instances_per_image in instances: - if len(instances_per_image) == 0: - continue - if not cls_agnostic_mask: - gt_classes_per_image = instances_per_image.gt_classes.to(dtype=torch.int64) - gt_classes.append(gt_classes_per_image) - - gt_masks_per_image = instances_per_image.gt_masks.crop_and_resize( - instances_per_image.proposal_boxes.tensor, mask_side_len - ).to(device=pred_mask_logits.device) - # A tensor of shape (N, M, M), N=#instances in the image; M=mask_side_len - gt_masks.append(gt_masks_per_image) - - if len(gt_masks) == 0: - return pred_mask_logits.sum() * 0 - - gt_masks = cat(gt_masks, dim=0) - - if cls_agnostic_mask: - pred_mask_logits = pred_mask_logits[:, 0] - else: - indices = torch.arange(total_num_masks) - gt_classes = cat(gt_classes, dim=0) - pred_mask_logits = pred_mask_logits[indices, gt_classes] - - if gt_masks.dtype == torch.bool: - gt_masks_bool = gt_masks - else: - # Here we allow gt_masks to be float as well (depend on the implementation of rasterize()) - gt_masks_bool = gt_masks > 0.5 - gt_masks = gt_masks.to(dtype=torch.float32) - - # Log the training accuracy (using gt classes and 0.5 threshold) - mask_incorrect = (pred_mask_logits > 0.0) != gt_masks_bool - mask_accuracy = 1 - (mask_incorrect.sum().item() / max(mask_incorrect.numel(), 1.0)) - num_positive = gt_masks_bool.sum().item() - false_positive = (mask_incorrect & ~gt_masks_bool).sum().item() / max( - gt_masks_bool.numel() - num_positive, 1.0 - ) - false_negative = (mask_incorrect & gt_masks_bool).sum().item() / max(num_positive, 1.0) - - storage = get_event_storage() - storage.put_scalar("mask_rcnn/accuracy", mask_accuracy) - storage.put_scalar("mask_rcnn/false_positive", false_positive) - storage.put_scalar("mask_rcnn/false_negative", false_negative) - if vis_period > 0 and storage.iter % vis_period == 0: - pred_masks = pred_mask_logits.sigmoid() - vis_masks = torch.cat([pred_masks, gt_masks], axis=2) - name = "Left: mask prediction; Right: mask GT" - for idx, vis_mask in enumerate(vis_masks): - vis_mask = torch.stack([vis_mask] * 3, axis=0) - storage.put_image(name + f" ({idx})", vis_mask) - - mask_loss = F.binary_cross_entropy_with_logits(pred_mask_logits, gt_masks, reduction="mean") - return mask_loss - - -def mask_rcnn_inference(pred_mask_logits: torch.Tensor, pred_instances: List[Instances]): - """ - Convert pred_mask_logits to estimated foreground probability masks while also - extracting only the masks for the predicted classes in pred_instances. For each - predicted box, the mask of the same class is attached to the instance by adding a - new "pred_masks" field to pred_instances. - - Args: - pred_mask_logits (Tensor): A tensor of shape (B, C, Hmask, Wmask) or (B, 1, Hmask, Wmask) - for class-specific or class-agnostic, where B is the total number of predicted masks - in all images, C is the number of foreground classes, and Hmask, Wmask are the height - and width of the mask predictions. The values are logits. - pred_instances (list[Instances]): A list of N Instances, where N is the number of images - in the batch. Each Instances must have field "pred_classes". - - Returns: - None. pred_instances will contain an extra "pred_masks" field storing a mask of size (Hmask, - Wmask) for predicted class. Note that the masks are returned as a soft (non-quantized) - masks the resolution predicted by the network; post-processing steps, such as resizing - the predicted masks to the original image resolution and/or binarizing them, is left - to the caller. 
- """ - cls_agnostic_mask = pred_mask_logits.size(1) == 1 - - if cls_agnostic_mask: - mask_probs_pred = pred_mask_logits.sigmoid() - else: - # Select masks corresponding to the predicted classes - num_masks = pred_mask_logits.shape[0] - class_pred = cat([i.pred_classes for i in pred_instances]) - indices = torch.arange(num_masks, device=class_pred.device) - mask_probs_pred = pred_mask_logits[indices, class_pred][:, None].sigmoid() - # mask_probs_pred.shape: (B, 1, Hmask, Wmask) - - num_boxes_per_image = [len(i) for i in pred_instances] - mask_probs_pred = mask_probs_pred.split(num_boxes_per_image, dim=0) - - for prob, instances in zip(mask_probs_pred, pred_instances): - instances.pred_masks = prob # (1, Hmask, Wmask) - - -class BaseMaskRCNNHead(nn.Module): - """ - Implement the basic Mask R-CNN losses and inference logic described in :paper:`Mask R-CNN` - """ - - @configurable - def __init__(self, *, loss_weight: float = 1.0, vis_period: int = 0): - """ - NOTE: this interface is experimental. - - Args: - loss_weight (float): multiplier of the loss - vis_period (int): visualization period - """ - super().__init__() - self.vis_period = vis_period - self.loss_weight = loss_weight - - @classmethod - def from_config(cls, cfg, input_shape): - return {"vis_period": cfg.VIS_PERIOD} - - def forward(self, x, instances: List[Instances]): - """ - Args: - x: input region feature(s) provided by :class:`ROIHeads`. - instances (list[Instances]): contains the boxes & labels corresponding - to the input features. - Exact format is up to its caller to decide. - Typically, this is the foreground instances in training, with - "proposal_boxes" field and other gt annotations. - In inference, it contains boxes that are already predicted. - - Returns: - A dict of losses in training. The predicted "instances" in inference. - """ - x = self.layers(x) - if self.training: - return {"loss_mask": mask_rcnn_loss(x, instances, self.vis_period) * self.loss_weight} - else: - mask_rcnn_inference(x, instances) - return instances - - def layers(self, x): - """ - Neural network layers that makes predictions from input features. - """ - raise NotImplementedError - - -# To get torchscript support, we make the head a subclass of `nn.Sequential`. -# Therefore, to add new layers in this head class, please make sure they are -# added in the order they will be used in forward(). -@ROI_MASK_HEAD_REGISTRY.register() -class MaskRCNNConvUpsampleHead(BaseMaskRCNNHead, nn.Sequential): - """ - A mask head with several conv layers, plus an upsample layer (with `ConvTranspose2d`). - Predictions are made with a final 1x1 conv layer. - """ - - @configurable - def __init__(self, input_shape: ShapeSpec, *, num_classes, conv_dims, conv_norm="", **kwargs): - """ - NOTE: this interface is experimental. - - Args: - input_shape (ShapeSpec): shape of the input feature - num_classes (int): the number of foreground classes (i.e. background is not - included). 1 if using class agnostic prediction. - conv_dims (list[int]): a list of N>0 integers representing the output dimensions - of N-1 conv layers and the last upsample layer. - conv_norm (str or callable): normalization for the conv layers. - See :func:`detectron2.layers.get_norm` for supported types. - """ - super().__init__(**kwargs) - assert len(conv_dims) >= 1, "conv_dims have to be non-empty!" 
- - self.conv_norm_relus = [] - - cur_channels = input_shape.channels - for k, conv_dim in enumerate(conv_dims[:-1]): - conv = Conv2d( - cur_channels, - conv_dim, - kernel_size=3, - stride=1, - padding=1, - bias=not conv_norm, - norm=get_norm(conv_norm, conv_dim), - activation=nn.ReLU(), - ) - self.add_module("mask_fcn{}".format(k + 1), conv) - self.conv_norm_relus.append(conv) - cur_channels = conv_dim - - self.deconv = ConvTranspose2d( - cur_channels, conv_dims[-1], kernel_size=2, stride=2, padding=0 - ) - self.add_module("deconv_relu", nn.ReLU()) - cur_channels = conv_dims[-1] - - self.predictor = Conv2d(cur_channels, num_classes, kernel_size=1, stride=1, padding=0) - - for layer in self.conv_norm_relus + [self.deconv]: - weight_init.c2_msra_fill(layer) - # use normal distribution initialization for mask prediction layer - nn.init.normal_(self.predictor.weight, std=0.001) - if self.predictor.bias is not None: - nn.init.constant_(self.predictor.bias, 0) - - @classmethod - def from_config(cls, cfg, input_shape): - ret = super().from_config(cfg, input_shape) - conv_dim = cfg.MODEL.ROI_MASK_HEAD.CONV_DIM - num_conv = cfg.MODEL.ROI_MASK_HEAD.NUM_CONV - ret.update( - conv_dims=[conv_dim] * (num_conv + 1), # +1 for ConvTranspose - conv_norm=cfg.MODEL.ROI_MASK_HEAD.NORM, - input_shape=input_shape, - ) - if cfg.MODEL.ROI_MASK_HEAD.CLS_AGNOSTIC_MASK: - ret["num_classes"] = 1 - else: - ret["num_classes"] = cfg.MODEL.ROI_HEADS.NUM_CLASSES - return ret - - def layers(self, x): - for layer in self: - x = layer(x) - return x - - -def build_mask_head(cfg, input_shape): - """ - Build a mask head defined by `cfg.MODEL.ROI_MASK_HEAD.NAME`. - """ - name = cfg.MODEL.ROI_MASK_HEAD.NAME - return ROI_MASK_HEAD_REGISTRY.get(name)(cfg, input_shape) diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/datasets/pipelines/loading.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/datasets/pipelines/loading.py deleted file mode 100644 index d3692ae91f19b9c7ccf6023168788ff42c9e93e3..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/datasets/pipelines/loading.py +++ /dev/null @@ -1,153 +0,0 @@ -import os.path as osp - -import annotator.uniformer.mmcv as mmcv -import numpy as np - -from ..builder import PIPELINES - - -@PIPELINES.register_module() -class LoadImageFromFile(object): - """Load an image from file. - - Required keys are "img_prefix" and "img_info" (a dict that must contain the - key "filename"). Added or updated keys are "filename", "img", "img_shape", - "ori_shape" (same as `img_shape`), "pad_shape" (same as `img_shape`), - "scale_factor" (1.0) and "img_norm_cfg" (means=0 and stds=1). - - Args: - to_float32 (bool): Whether to convert the loaded image to a float32 - numpy array. If set to False, the loaded image is an uint8 array. - Defaults to False. - color_type (str): The flag argument for :func:`mmcv.imfrombytes`. - Defaults to 'color'. - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. - Defaults to ``dict(backend='disk')``. - imdecode_backend (str): Backend for :func:`mmcv.imdecode`. 
Default: - 'cv2' - """ - - def __init__(self, - to_float32=False, - color_type='color', - file_client_args=dict(backend='disk'), - imdecode_backend='cv2'): - self.to_float32 = to_float32 - self.color_type = color_type - self.file_client_args = file_client_args.copy() - self.file_client = None - self.imdecode_backend = imdecode_backend - - def __call__(self, results): - """Call functions to load image and get image meta information. - - Args: - results (dict): Result dict from :obj:`mmseg.CustomDataset`. - - Returns: - dict: The dict contains loaded image and meta information. - """ - - if self.file_client is None: - self.file_client = mmcv.FileClient(**self.file_client_args) - - if results.get('img_prefix') is not None: - filename = osp.join(results['img_prefix'], - results['img_info']['filename']) - else: - filename = results['img_info']['filename'] - img_bytes = self.file_client.get(filename) - img = mmcv.imfrombytes( - img_bytes, flag=self.color_type, backend=self.imdecode_backend) - if self.to_float32: - img = img.astype(np.float32) - - results['filename'] = filename - results['ori_filename'] = results['img_info']['filename'] - results['img'] = img - results['img_shape'] = img.shape - results['ori_shape'] = img.shape - # Set initial values for default meta_keys - results['pad_shape'] = img.shape - results['scale_factor'] = 1.0 - num_channels = 1 if len(img.shape) < 3 else img.shape[2] - results['img_norm_cfg'] = dict( - mean=np.zeros(num_channels, dtype=np.float32), - std=np.ones(num_channels, dtype=np.float32), - to_rgb=False) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(to_float32={self.to_float32},' - repr_str += f"color_type='{self.color_type}'," - repr_str += f"imdecode_backend='{self.imdecode_backend}')" - return repr_str - - -@PIPELINES.register_module() -class LoadAnnotations(object): - """Load annotations for semantic segmentation. - - Args: - reduce_zero_label (bool): Whether reduce all label value by 1. - Usually used for datasets where 0 is background label. - Default: False. - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. - Defaults to ``dict(backend='disk')``. - imdecode_backend (str): Backend for :func:`mmcv.imdecode`. Default: - 'pillow' - """ - - def __init__(self, - reduce_zero_label=False, - file_client_args=dict(backend='disk'), - imdecode_backend='pillow'): - self.reduce_zero_label = reduce_zero_label - self.file_client_args = file_client_args.copy() - self.file_client = None - self.imdecode_backend = imdecode_backend - - def __call__(self, results): - """Call function to load multiple types annotations. - - Args: - results (dict): Result dict from :obj:`mmseg.CustomDataset`. - - Returns: - dict: The dict contains loaded semantic segmentation annotations. 
- """ - - if self.file_client is None: - self.file_client = mmcv.FileClient(**self.file_client_args) - - if results.get('seg_prefix', None) is not None: - filename = osp.join(results['seg_prefix'], - results['ann_info']['seg_map']) - else: - filename = results['ann_info']['seg_map'] - img_bytes = self.file_client.get(filename) - gt_semantic_seg = mmcv.imfrombytes( - img_bytes, flag='unchanged', - backend=self.imdecode_backend).squeeze().astype(np.uint8) - # modify if custom classes - if results.get('label_map', None) is not None: - for old_id, new_id in results['label_map'].items(): - gt_semantic_seg[gt_semantic_seg == old_id] = new_id - # reduce zero_label - if self.reduce_zero_label: - # avoid using underflow conversion - gt_semantic_seg[gt_semantic_seg == 0] = 255 - gt_semantic_seg = gt_semantic_seg - 1 - gt_semantic_seg[gt_semantic_seg == 254] = 255 - results['gt_semantic_seg'] = gt_semantic_seg - results['seg_fields'].append('gt_semantic_seg') - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(reduce_zero_label={self.reduce_zero_label},' - repr_str += f"imdecode_backend='{self.imdecode_backend}')" - return repr_str diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/runner/epoch_based_runner.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/runner/epoch_based_runner.py deleted file mode 100644 index 766a9ce6afdf09cd11b1b15005f5132583011348..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/runner/epoch_based_runner.py +++ /dev/null @@ -1,187 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -import platform -import shutil -import time -import warnings - -import torch - -import annotator.uniformer.mmcv as mmcv -from .base_runner import BaseRunner -from .builder import RUNNERS -from .checkpoint import save_checkpoint -from .utils import get_host_info - - -@RUNNERS.register_module() -class EpochBasedRunner(BaseRunner): - """Epoch-based Runner. - - This runner train models epoch by epoch. 
- """ - - def run_iter(self, data_batch, train_mode, **kwargs): - if self.batch_processor is not None: - outputs = self.batch_processor( - self.model, data_batch, train_mode=train_mode, **kwargs) - elif train_mode: - outputs = self.model.train_step(data_batch, self.optimizer, - **kwargs) - else: - outputs = self.model.val_step(data_batch, self.optimizer, **kwargs) - if not isinstance(outputs, dict): - raise TypeError('"batch_processor()" or "model.train_step()"' - 'and "model.val_step()" must return a dict') - if 'log_vars' in outputs: - self.log_buffer.update(outputs['log_vars'], outputs['num_samples']) - self.outputs = outputs - - def train(self, data_loader, **kwargs): - self.model.train() - self.mode = 'train' - self.data_loader = data_loader - self._max_iters = self._max_epochs * len(self.data_loader) - self.call_hook('before_train_epoch') - time.sleep(2) # Prevent possible deadlock during epoch transition - for i, data_batch in enumerate(self.data_loader): - self._inner_iter = i - self.call_hook('before_train_iter') - self.run_iter(data_batch, train_mode=True, **kwargs) - self.call_hook('after_train_iter') - self._iter += 1 - - self.call_hook('after_train_epoch') - self._epoch += 1 - - @torch.no_grad() - def val(self, data_loader, **kwargs): - self.model.eval() - self.mode = 'val' - self.data_loader = data_loader - self.call_hook('before_val_epoch') - time.sleep(2) # Prevent possible deadlock during epoch transition - for i, data_batch in enumerate(self.data_loader): - self._inner_iter = i - self.call_hook('before_val_iter') - self.run_iter(data_batch, train_mode=False) - self.call_hook('after_val_iter') - - self.call_hook('after_val_epoch') - - def run(self, data_loaders, workflow, max_epochs=None, **kwargs): - """Start running. - - Args: - data_loaders (list[:obj:`DataLoader`]): Dataloaders for training - and validation. - workflow (list[tuple]): A list of (phase, epochs) to specify the - running order and epochs. E.g, [('train', 2), ('val', 1)] means - running 2 epochs for training and 1 epoch for validation, - iteratively. 
- """ - assert isinstance(data_loaders, list) - assert mmcv.is_list_of(workflow, tuple) - assert len(data_loaders) == len(workflow) - if max_epochs is not None: - warnings.warn( - 'setting max_epochs in run is deprecated, ' - 'please set max_epochs in runner_config', DeprecationWarning) - self._max_epochs = max_epochs - - assert self._max_epochs is not None, ( - 'max_epochs must be specified during instantiation') - - for i, flow in enumerate(workflow): - mode, epochs = flow - if mode == 'train': - self._max_iters = self._max_epochs * len(data_loaders[i]) - break - - work_dir = self.work_dir if self.work_dir is not None else 'NONE' - self.logger.info('Start running, host: %s, work_dir: %s', - get_host_info(), work_dir) - self.logger.info('Hooks will be executed in the following order:\n%s', - self.get_hook_info()) - self.logger.info('workflow: %s, max: %d epochs', workflow, - self._max_epochs) - self.call_hook('before_run') - - while self.epoch < self._max_epochs: - for i, flow in enumerate(workflow): - mode, epochs = flow - if isinstance(mode, str): # self.train() - if not hasattr(self, mode): - raise ValueError( - f'runner has no method named "{mode}" to run an ' - 'epoch') - epoch_runner = getattr(self, mode) - else: - raise TypeError( - 'mode in workflow must be a str, but got {}'.format( - type(mode))) - - for _ in range(epochs): - if mode == 'train' and self.epoch >= self._max_epochs: - break - epoch_runner(data_loaders[i], **kwargs) - - time.sleep(1) # wait for some hooks like loggers to finish - self.call_hook('after_run') - - def save_checkpoint(self, - out_dir, - filename_tmpl='epoch_{}.pth', - save_optimizer=True, - meta=None, - create_symlink=True): - """Save the checkpoint. - - Args: - out_dir (str): The directory that checkpoints are saved. - filename_tmpl (str, optional): The checkpoint filename template, - which contains a placeholder for the epoch number. - Defaults to 'epoch_{}.pth'. - save_optimizer (bool, optional): Whether to save the optimizer to - the checkpoint. Defaults to True. - meta (dict, optional): The meta information to be saved in the - checkpoint. Defaults to None. - create_symlink (bool, optional): Whether to create a symlink - "latest.pth" to point to the latest checkpoint. - Defaults to True. - """ - if meta is None: - meta = {} - elif not isinstance(meta, dict): - raise TypeError( - f'meta should be a dict or None, but got {type(meta)}') - if self.meta is not None: - meta.update(self.meta) - # Note: meta.update(self.meta) should be done before - # meta.update(epoch=self.epoch + 1, iter=self.iter) otherwise - # there will be problems with resumed checkpoints. 
-        # More details in https://github.com/open-mmlab/mmcv/pull/1108
-        meta.update(epoch=self.epoch + 1, iter=self.iter)
-
-        filename = filename_tmpl.format(self.epoch + 1)
-        filepath = osp.join(out_dir, filename)
-        optimizer = self.optimizer if save_optimizer else None
-        save_checkpoint(self.model, filepath, optimizer=optimizer, meta=meta)
-        # in some environments, `os.symlink` is not supported, you may need to
-        # set `create_symlink` to False
-        if create_symlink:
-            dst_file = osp.join(out_dir, 'latest.pth')
-            if platform.system() != 'Windows':
-                mmcv.symlink(filename, dst_file)
-            else:
-                shutil.copy(filepath, dst_file)
-
-
-@RUNNERS.register_module()
-class Runner(EpochBasedRunner):
-    """Deprecated name of EpochBasedRunner."""
-
-    def __init__(self, *args, **kwargs):
-        warnings.warn(
-            'Runner was deprecated, please use EpochBasedRunner instead')
-        super().__init__(*args, **kwargs)
diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/datasets/background.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/datasets/background.py
deleted file mode 100644
index 65c5fcbbff0f5fc9c3ecaac2257a875cf597fbd8..0000000000000000000000000000000000000000
--- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/datasets/background.py
+++ /dev/null
@@ -1,53 +0,0 @@
-import os
-import os.path
-import json
-from PIL import Image
-
-import torch
-import torchvision
-import torch.utils.data as data
-from maskrcnn_benchmark.structures.bounding_box import BoxList
-
-class Background(data.Dataset):
-    """ Background
-
-    Args:
-        root (string): Root directory where images are downloaded to.
-        ann_file (string): Path to the JSON annotation file.
-        transforms (callable, optional): A function/transform that takes in a PIL image
-            and returns a transformed version. E.g., ``transforms.ToTensor``
-    """
-
-    def __init__(self, ann_file, root, remove_images_without_annotations=None, transforms=None):
-        self.root = root
-
-        with open(ann_file, 'r') as f:
-            self.ids = json.load(f)['images']
-        self.transform = transforms
-
-    def __getitem__(self, index):
-        """
-        Args:
-            index (int): Index
-
-        Returns:
-            tuple: Tuple (image, target, index). target is an empty ``BoxList``,
-                since background images carry no annotations.
- """ - im_info = self.ids[index] - path = im_info['file_name'] - fp = os.path.join(self.root, path) - - img = Image.open(fp).convert('RGB') - if self.transform is not None: - img, _ = self.transform(img, None) - null_target = BoxList(torch.zeros((0,4)), (img.shape[-1], img.shape[-2])) - null_target.add_field('labels', torch.zeros(0)) - - return img, null_target, index - - def __len__(self): - return len(self.ids) - - def get_img_info(self, index): - im_info = self.ids[index] - return im_info \ No newline at end of file diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/modeling/backbone/blocks.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/modeling/backbone/blocks.py deleted file mode 100644 index 73f4feb0e83d058b6326d43de067af4adaf7f63b..0000000000000000000000000000000000000000 --- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/modeling/backbone/blocks.py +++ /dev/null @@ -1,266 +0,0 @@ -import torch.nn as nn -from .ops import * - - -class stem(nn.Module): - num_layer = 1 - - def __init__(self, conv, inplanes, planes, stride=1, norm_layer=nn.BatchNorm2d): - super(stem, self).__init__() - - self.conv1 = conv(inplanes, planes, stride) - self.bn1 = norm_layer(planes) - self.relu = nn.ReLU(inplace=True) - - def forward(self, x): - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - return out - - -class basic(nn.Module): - expansion = 1 - num_layer = 2 - - def __init__(self, conv, inplanes, planes, stride=1, midplanes=None, norm_layer=nn.BatchNorm2d): - super(basic, self).__init__() - midplanes = planes if midplanes is None else midplanes - self.conv1 = conv(inplanes, midplanes, stride) - self.bn1 = norm_layer(midplanes) - self.relu = nn.ReLU(inplace=True) - self.conv2 = conv(midplanes, planes) - self.bn2 = norm_layer(planes) - if stride!=1 or inplanes!=planes*self.expansion: - self.downsample = nn.Sequential( - conv1x1(inplanes, planes, stride), - norm_layer(planes), - ) - else: - self.downsample = None - - def forward(self, x): - identity = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - out = self.relu(out) - - return out - - -class bottleneck(nn.Module): - expansion = 4 - num_layer = 3 - - def __init__(self, conv, inplanes, planes, stride=1, midplanes=None, norm_layer=nn.BatchNorm2d): - super(bottleneck, self).__init__() - midplanes = planes if midplanes is None else midplanes - self.conv1 = conv1x1(inplanes, midplanes) - self.bn1 = norm_layer(midplanes) - self.conv2 = conv(midplanes, midplanes, stride) - self.bn2 = norm_layer(midplanes) - self.conv3 = conv1x1(midplanes, planes * self.expansion) - self.bn3 = norm_layer(planes * self.expansion) - self.relu = nn.ReLU(inplace=True) - if stride!=1 or inplanes!=planes*self.expansion: - self.downsample = nn.Sequential( - conv1x1(inplanes, planes*self.expansion, stride), - norm_layer(planes*self.expansion), - ) - else: - self.downsample = None - - def forward(self, x): - identity = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - out = self.relu(out) - - out = self.conv3(out) - out = self.bn3(out) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - out = self.relu(out) - - return out - - -class invert(nn.Module): - def __init__(self, conv, inp, oup, stride=1, expand_ratio=1, 
norm_layer=nn.BatchNorm2d): - super(invert, self).__init__() - self.stride = stride - assert stride in [1, 2] - - hidden_dim = round(inp * expand_ratio) - self.use_res_connect = self.stride == 1 and inp == oup - - if expand_ratio == 1: - self.conv = nn.Sequential( - # dw - conv(hidden_dim, hidden_dim, stride), - norm_layer(hidden_dim), - nn.ReLU6(inplace=True), - # pw-linear - nn.Conv2d(hidden_dim, oup, 1, 1, 0, bias=False), - norm_layer(oup), - ) - else: - self.conv = nn.Sequential( - # pw - nn.Conv2d(inp, hidden_dim, 1, 1, 0, bias=False), - norm_layer(hidden_dim), - nn.ReLU6(inplace=True), - # dw - conv(hidden_dim, hidden_dim, stride), - norm_layer(hidden_dim), - nn.ReLU6(inplace=True), - # pw-linear - nn.Conv2d(hidden_dim, oup, 1, 1, 0, bias=False), - norm_layer(oup), - ) - - def forward(self, x): - if self.use_res_connect: - return x + self.conv(x) - else: - return self.conv(x) - - -invert2 = lambda op, inp, outp, stride, **kwargs: invert(op, inp, outp, stride, expand_ratio=2, **kwargs) -invert3 = lambda op, inp, outp, stride, **kwargs: invert(op, inp, outp, stride, expand_ratio=3, **kwargs) -invert4 = lambda op, inp, outp, stride, **kwargs: invert(op, inp, outp, stride, expand_ratio=4, **kwargs) -invert6 = lambda op, inp, outp, stride, **kwargs: invert(op, inp, outp, stride, expand_ratio=6, **kwargs) - - -def channel_shuffle(x, groups): - batchsize, num_channels, height, width = x.data.size() - channels_per_group = num_channels // groups - # reshape - x = x.view(batchsize, groups, channels_per_group, height, width) - x = torch.transpose(x, 1, 2).contiguous() - # flatten - x = x.view(batchsize, -1, height, width) - return x - - -class shuffle(nn.Module): - expansion = 1 - num_layer = 3 - - def __init__(self, conv, inplanes, outplanes, stride=1, midplanes=None, norm_layer=nn.BatchNorm2d): - super(shuffle, self).__init__() - inplanes = inplanes // 2 if stride == 1 else inplanes - midplanes = outplanes // 2 if midplanes is None else midplanes - rightoutplanes = outplanes - inplanes - if stride == 2: - self.left_branch = nn.Sequential( - # dw - conv(inplanes, inplanes, stride), - norm_layer(inplanes), - # pw-linear - conv1x1(inplanes, inplanes), - norm_layer(inplanes), - nn.ReLU(inplace=True), - ) - - self.right_branch = nn.Sequential( - # pw - conv1x1(inplanes, midplanes), - norm_layer(midplanes), - nn.ReLU(inplace=True), - # dw - conv(midplanes, midplanes, stride), - norm_layer(midplanes), - # pw-linear - conv1x1(midplanes, rightoutplanes), - norm_layer(rightoutplanes), - nn.ReLU(inplace=True), - ) - - self.reduce = stride==2 - - def forward(self, x): - if self.reduce: - out = torch.cat((self.left_branch(x), self.right_branch(x)), 1) - else: - x1 = x[:, :(x.shape[1]//2), :, :] - x2 = x[:, (x.shape[1]//2):, :, :] - out = torch.cat((x1, self.right_branch(x2)), 1) - - return channel_shuffle(out, 2) - - -class shufflex(nn.Module): - expansion = 1 - num_layer = 3 - - def __init__(self, conv, inplanes, outplanes, stride=1, midplanes=None, norm_layer=nn.BatchNorm2d): - super(shufflex, self).__init__() - inplanes = inplanes // 2 if stride == 1 else inplanes - midplanes = outplanes // 2 if midplanes is None else midplanes - rightoutplanes = outplanes - inplanes - if stride==2: - self.left_branch = nn.Sequential( - # dw - conv(inplanes, inplanes, stride), - norm_layer(inplanes), - # pw-linear - conv1x1(inplanes, inplanes), - norm_layer(inplanes), - nn.ReLU(inplace=True), - ) - - self.right_branch = nn.Sequential( - # dw - conv(inplanes, inplanes, stride), - norm_layer(inplanes), - # pw-linear - 
conv1x1(inplanes, midplanes), - norm_layer(midplanes), - nn.ReLU(inplace=True), - # dw - conv(midplanes, midplanes, 1), - norm_layer(midplanes), - # pw-linear - conv1x1(midplanes, midplanes), - norm_layer(midplanes), - nn.ReLU(inplace=True), - # dw - conv(midplanes, midplanes, 1), - norm_layer(midplanes), - # pw-linear - conv1x1(midplanes, rightoutplanes), - norm_layer(rightoutplanes), - nn.ReLU(inplace=True), - ) - - self.reduce = stride==2 - - def forward(self, x): - if self.reduce: - out = torch.cat((self.left_branch(x), self.right_branch(x)), 1) - else: - x1 = x[:, :(x.shape[1] // 2), :, :] - x2 = x[:, (x.shape[1] // 2):, :, :] - out = torch.cat((x1, self.right_branch(x2)), 1) - - return channel_shuffle(out, 2) \ No newline at end of file diff --git a/spaces/Plachta/VALL-E-X/utils/generation.py b/spaces/Plachta/VALL-E-X/utils/generation.py deleted file mode 100644 index b0952dba48146ba3be2f3f1133b9f0640ad30f3c..0000000000000000000000000000000000000000 --- a/spaces/Plachta/VALL-E-X/utils/generation.py +++ /dev/null @@ -1,257 +0,0 @@ -import os -import torch -import gdown -import logging -import psutil -import langid -langid.set_languages(['en', 'zh', 'ja']) - -import pathlib -import platform -if platform.system().lower() == 'windows': - temp = pathlib.PosixPath - pathlib.PosixPath = pathlib.WindowsPath -elif platform.system().lower() == 'linux': - temp = pathlib.WindowsPath - pathlib.WindowsPath = pathlib.PosixPath - -import numpy as np -from data.tokenizer import ( - AudioTokenizer, - tokenize_audio, -) -from data.collation import get_text_token_collater -from models.vallex import VALLE -from utils.g2p import PhonemeBpeTokenizer -from utils.sentence_cutter import split_text_into_sentences - -from macros import * - -device = torch.device("cpu") -if torch.cuda.is_available(): - device = torch.device("cuda", 0) - -url = 'https://drive.google.com/file/d/10gdQWvP-K_e1undkvv0p2b7SU6I4Egyl/view?usp=sharing' - -checkpoints_dir = "./checkpoints/" - -model_checkpoint_name = "vallex-checkpoint.pt" - -model = None - -codec = None - -text_tokenizer = PhonemeBpeTokenizer(tokenizer_path="./utils/g2p/bpe_69.json") -text_collater = get_text_token_collater() - -def preload_models(): - global model, codec - if not os.path.exists(checkpoints_dir): os.mkdir(checkpoints_dir) - if not os.path.exists(os.path.join(checkpoints_dir, model_checkpoint_name)): - gdown.download(id="10gdQWvP-K_e1undkvv0p2b7SU6I4Egyl", output=os.path.join(checkpoints_dir, model_checkpoint_name), quiet=False) - # VALL-E - model = VALLE( - N_DIM, - NUM_HEAD, - NUM_LAYERS, - norm_first=True, - add_prenet=False, - prefix_mode=PREFIX_MODE, - share_embedding=True, - nar_scale_factor=1.0, - prepend_bos=True, - num_quantizers=NUM_QUANTIZERS, - ).to(device) - checkpoint = torch.load(os.path.join(checkpoints_dir, model_checkpoint_name), map_location='cpu') - missing_keys, unexpected_keys = model.load_state_dict( - checkpoint["model"], strict=True - ) - assert not missing_keys - model.eval() - - # Encodec - codec = AudioTokenizer(device) - -@torch.no_grad() -def generate_audio(text, prompt=None, language='auto', accent='no-accent'): - global model, codec, text_tokenizer, text_collater - text = text.replace("\n", "").strip(" ") - # detect language - if language == "auto": - language = langid.classify(text)[0] - lang_token = lang2token[language] - lang = token2lang[lang_token] - text = lang_token + text + lang_token - - # load prompt - if prompt is not None: - prompt_path = prompt - if not os.path.exists(prompt_path): - prompt_path = "./presets/" 
+ prompt + ".npz"
-        if not os.path.exists(prompt_path):
-            prompt_path = "./customs/" + prompt + ".npz"
-        if not os.path.exists(prompt_path):
-            raise ValueError(f"Cannot find prompt {prompt}")
-        prompt_data = np.load(prompt_path)
-        audio_prompts = prompt_data['audio_tokens']
-        text_prompts = prompt_data['text_tokens']
-        lang_pr = prompt_data['lang_code']
-        lang_pr = code2lang[int(lang_pr)]
-
-        # numpy to tensor
-        audio_prompts = torch.tensor(audio_prompts).type(torch.int32).to(device)
-        text_prompts = torch.tensor(text_prompts).type(torch.int32)
-    else:
-        audio_prompts = torch.zeros([1, 0, NUM_QUANTIZERS]).type(torch.int32).to(device)
-        text_prompts = torch.zeros([1, 0]).type(torch.int32)
-        lang_pr = lang if lang != 'mix' else 'en'
-
-    enroll_x_lens = text_prompts.shape[-1]
-    logging.info(f"synthesize text: {text}")
-    phone_tokens, langs = text_tokenizer.tokenize(text=f"_{text}".strip())
-    text_tokens, text_tokens_lens = text_collater(
-        [
-            phone_tokens
-        ]
-    )
-    text_tokens = torch.cat([text_prompts, text_tokens], dim=-1)
-    text_tokens_lens += enroll_x_lens
-    # accent control
-    lang = lang if accent == "no-accent" else token2lang[langdropdown2token[accent]]
-    encoded_frames = model.inference(
-        text_tokens.to(device),
-        text_tokens_lens.to(device),
-        audio_prompts,
-        enroll_x_lens=enroll_x_lens,
-        top_k=-100,
-        temperature=1,
-        prompt_language=lang_pr,
-        text_language=langs if accent == "no-accent" else lang,
-    )
-    samples = codec.decode(
-        [(encoded_frames.transpose(2, 1), None)]
-    )
-
-    return samples[0][0].cpu().numpy()
-
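Since `generate_audio` above is the module's main entry point, a short usage sketch may help before the long-form variant below. This is not part of the original file: the `soundfile` writer and the 24 kHz output rate (EnCodec's native rate) are assumptions.

```python
# Hypothetical quick-start for the API above. preload_models() fetches the
# checkpoint via gdown on first use; generate_audio returns a 1-D numpy array.
import soundfile as sf  # assumed third-party dependency

from utils.generation import preload_models, generate_audio

preload_models()
wav = generate_audio("Hello, world!", prompt=None, language='auto')
sf.write("hello.wav", wav, 24000)  # 24 kHz assumed from the EnCodec codec
```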
-@torch.no_grad()
-def generate_audio_from_long_text(text, prompt=None, language='auto', accent='no-accent', mode='sliding-window'):
-    """
-    For long audio generation, two modes are available.
-    fixed-prompt: This mode keeps using the same prompt the user provided, generating audio sentence by sentence.
-    sliding-window: This mode uses the last generated sentence as the prompt for the next one, which improves
-    continuity between sentences but may drift away from the original speaker's voice.
-    """
-    global model, codec, text_tokenizer, text_collater
-    if prompt is None or prompt == "":
-        mode = 'sliding-window'  # If no prompt is given, use sliding-window mode
-    sentences = split_text_into_sentences(text)
-    # detect language
-    if language == "auto":
-        language = langid.classify(text)[0]
-
-    # if initial prompt is given, encode it
-    if prompt is not None and prompt != "":
-        prompt_path = prompt
-        if not os.path.exists(prompt_path):
-            prompt_path = "./presets/" + prompt + ".npz"
-        if not os.path.exists(prompt_path):
-            prompt_path = "./customs/" + prompt + ".npz"
-        if not os.path.exists(prompt_path):
-            raise ValueError(f"Cannot find prompt {prompt}")
-        prompt_data = np.load(prompt_path)
-        audio_prompts = prompt_data['audio_tokens']
-        text_prompts = prompt_data['text_tokens']
-        lang_pr = prompt_data['lang_code']
-        lang_pr = code2lang[int(lang_pr)]
-
-        # numpy to tensor
-        audio_prompts = torch.tensor(audio_prompts).type(torch.int32).to(device)
-        text_prompts = torch.tensor(text_prompts).type(torch.int32)
-    else:
-        audio_prompts = torch.zeros([1, 0, NUM_QUANTIZERS]).type(torch.int32).to(device)
-        text_prompts = torch.zeros([1, 0]).type(torch.int32)
-        lang_pr = language if language != 'mix' else 'en'
-    if mode == 'fixed-prompt':
-        complete_tokens = torch.zeros([1, NUM_QUANTIZERS, 0]).type(torch.LongTensor).to(device)
-        for text in sentences:
-            text = text.replace("\n", "").strip(" ")
-            if text == "":
-                continue
-            lang_token = lang2token[language]
-            lang = token2lang[lang_token]
-            text = lang_token + text + lang_token
-
-            enroll_x_lens = text_prompts.shape[-1]
-            logging.info(f"synthesize text: {text}")
-            phone_tokens, langs = text_tokenizer.tokenize(text=f"_{text}".strip())
-            text_tokens, text_tokens_lens = text_collater(
-                [
-                    phone_tokens
-                ]
-            )
-            text_tokens = torch.cat([text_prompts, text_tokens], dim=-1)
-            text_tokens_lens += enroll_x_lens
-            # accent control
-            lang = lang if accent == "no-accent" else token2lang[langdropdown2token[accent]]
-            encoded_frames = model.inference(
-                text_tokens.to(device),
-                text_tokens_lens.to(device),
-                audio_prompts,
-                enroll_x_lens=enroll_x_lens,
-                top_k=-100,
-                temperature=1,
-                prompt_language=lang_pr,
-                text_language=langs if accent == "no-accent" else lang,
-            )
-            complete_tokens = torch.cat([complete_tokens, encoded_frames.transpose(2, 1)], dim=-1)
-        samples = codec.decode(
-            [(complete_tokens, None)]
-        )
-        return samples[0][0].cpu().numpy()
-    elif mode == "sliding-window":
-        complete_tokens = torch.zeros([1, NUM_QUANTIZERS, 0]).type(torch.LongTensor).to(device)
-        original_audio_prompts = audio_prompts
-        original_text_prompts = text_prompts
-        for text in sentences:
-            text = text.replace("\n", "").strip(" ")
-            if text == "":
-                continue
-            lang_token = lang2token[language]
-            lang = token2lang[lang_token]
-            text = lang_token + text + lang_token
-
-            enroll_x_lens = text_prompts.shape[-1]
-            logging.info(f"synthesize text: {text}")
-            phone_tokens, langs = text_tokenizer.tokenize(text=f"_{text}".strip())
-            text_tokens, text_tokens_lens = text_collater(
-                [
-                    phone_tokens
-                ]
-            )
-            text_tokens = torch.cat([text_prompts, text_tokens], dim=-1)
-            text_tokens_lens += enroll_x_lens
-            # accent control
-            lang = lang if accent == "no-accent" else token2lang[langdropdown2token[accent]]
-            encoded_frames = model.inference(
-                text_tokens.to(device),
-                text_tokens_lens.to(device),
-                audio_prompts,
-                enroll_x_lens=enroll_x_lens,
-                top_k=-100,
-                temperature=1,
-                prompt_language=lang_pr,
-                text_language=langs if accent == "no-accent" else lang,
-            )
-            complete_tokens = torch.cat([complete_tokens, encoded_frames.transpose(2, 1)], dim=-1)
-            if torch.rand(1) < 0.5:
-                audio_prompts = encoded_frames[:, :, -NUM_QUANTIZERS:]
-                text_prompts = text_tokens[:, enroll_x_lens:]
-            else:
-                audio_prompts = original_audio_prompts
-                text_prompts = original_text_prompts
-        samples = codec.decode(
-            [(complete_tokens, None)]
-        )
-        return samples[0][0].cpu().numpy()
-    else:
-        raise ValueError(f"No such mode {mode}")
diff --git a/spaces/Poupeto/RVC_Ryu7ztv/app.py b/spaces/Poupeto/RVC_Ryu7ztv/app.py
deleted file mode 100644
index 78860b1b60857b61f273cb10e5642dec9b01d83e..0000000000000000000000000000000000000000
--- a/spaces/Poupeto/RVC_Ryu7ztv/app.py
+++ /dev/null
@@ -1,514 +0,0 @@
-
-import os
-import glob
-import json
-import traceback
-import logging
-import gradio as gr
-import numpy as np
-import librosa
-import torch
-import asyncio
-import edge_tts
-import yt_dlp
-import ffmpeg
-import subprocess
-import sys
-import io
-import wave
-from datetime import datetime
-from fairseq import checkpoint_utils
-from lib.infer_pack.models import (
-    SynthesizerTrnMs256NSFsid,
-    SynthesizerTrnMs256NSFsid_nono,
-    SynthesizerTrnMs768NSFsid,
-    SynthesizerTrnMs768NSFsid_nono,
-)
-from vc_infer_pipeline import VC
-from config import Config
-config = Config()
-logging.getLogger("numba").setLevel(logging.WARNING)
-limitation = os.getenv("SYSTEM") == "spaces"
-
-audio_mode = []
-f0method_mode = []
-f0method_info = ""
-if limitation is True:
-    audio_mode = ["Upload audio", "TTS Audio"]
-    f0method_mode = ["pm", "harvest"]
-    f0method_info = "PM is fast, Harvest is good but extremely slow. (Default: PM)"
-else:
-    audio_mode = ["Input path", "Upload audio", "Youtube", "TTS Audio"]
-    f0method_mode = ["pm", "harvest", "crepe"]
-    f0method_info = "PM is fast, Harvest is good but extremely slow, and Crepe is good but requires a GPU (Default: PM)"
-
-def create_vc_fn(model_title, tgt_sr, net_g, vc, if_f0, version, file_index):
-    def vc_fn(
-        vc_audio_mode,
-        vc_input,
-        vc_upload,
-        tts_text,
-        tts_voice,
-        f0_up_key,
-        f0_method,
-        index_rate,
-        filter_radius,
-        resample_sr,
-        rms_mix_rate,
-        protect,
-    ):
-        try:
-            if vc_audio_mode in ("Input path", "Youtube") and vc_input != "":
-                audio, sr = librosa.load(vc_input, sr=16000, mono=True)
-            elif vc_audio_mode == "Upload audio":
-                if vc_upload is None:
-                    return "You need to upload an audio file", None
-                sampling_rate, audio = vc_upload
-                duration = audio.shape[0] / sampling_rate
-                if duration > 240 and limitation:
-                    return "max 4 min; for anything longer, run it locally or ask me on Discord", None
-                audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32)
-                if len(audio.shape) > 1:
-                    audio = librosa.to_mono(audio.transpose(1, 0))
-                if sampling_rate != 16000:
-                    audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000)
-            elif vc_audio_mode == "TTS Audio":
-                if len(tts_text) > 100 and limitation:
-                    return "Text is too long", None
-                if tts_text is None or tts_voice is None:
-                    return "You need to enter text and select a voice", None
-                asyncio.run(edge_tts.Communicate(tts_text, "-".join(tts_voice.split('-')[:-1])).save("tts.mp3"))
-                audio, sr = librosa.load("tts.mp3", sr=16000, mono=True)
-                vc_input = "tts.mp3"
-            times = [0, 0, 0]
-            f0_up_key = int(f0_up_key)
-            audio_opt = vc.pipeline(
-                hubert_model,
-                net_g,
-                0,
-                audio,
-                vc_input,
-                times,
-                f0_up_key,
-                f0_method,
-                file_index,
-                # file_big_npy,
-                index_rate,
-                if_f0,
-                filter_radius,
-                tgt_sr,
-                resample_sr,
-                rms_mix_rate,
-                version,
-                protect,
-                f0_file=None,
-            )
-            info = f"[{datetime.now().strftime('%Y-%m-%d %H:%M')}]: npy: {times[0]}, f0: {times[1]}s, infer: {times[2]}s"
-            print(f"{model_title} | {info}")
-            return info, (tgt_sr, audio_opt)
-        except Exception:
-            info = traceback.format_exc()
-            print(info)
-            return info, None
-    return vc_fn
-
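The mode check at the top of `vc_fn` was rewritten above from `vc_audio_mode == "Input path" or "Youtube" and vc_input != ""`. A standalone snippet shows why the original condition misfires; this is plain Python operator precedence, no RVC code involved:

```python
# `and` binds tighter than `or`, so the old test parsed as
#   (vc_audio_mode == "Input path") or ("Youtube" and vc_input != "")
# and since the literal "Youtube" is always truthy, the right side depends
# only on vc_input, never on the selected mode.
vc_audio_mode, vc_input = "Upload audio", "song.wav"
print(vc_audio_mode == "Input path" or "Youtube" and vc_input != "")  # True (wrong branch)
print(vc_audio_mode in ("Input path", "Youtube") and vc_input != "")  # False (intended)
```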
f"[{datetime.now().strftime('%Y-%m-%d %H:%M')}]: npy: {times[0]}, f0: {times[1]}s, infer: {times[2]}s" - print(f"{model_title} | {info}") - return info, (tgt_sr, audio_opt) - except: - info = traceback.format_exc() - print(info) - return info, None - return vc_fn - -def load_model(): - categories = [] - with open("weights/folder_info.json", "r", encoding="utf-8") as f: - folder_info = json.load(f) - for category_name, category_info in folder_info.items(): - if not category_info['enable']: - continue - category_title = category_info['title'] - category_folder = category_info['folder_path'] - description = category_info['description'] - models = [] - with open(f"weights/{category_folder}/model_info.json", "r", encoding="utf-8") as f: - models_info = json.load(f) - for character_name, info in models_info.items(): - if not info['enable']: - continue - model_title = info['title'] - model_name = info['model_path'] - model_author = info.get("author", None) - model_cover = f"weights/{category_folder}/{character_name}/{info['cover']}" - model_index = f"weights/{category_folder}/{character_name}/{info['feature_retrieval_library']}" - cpt = torch.load(f"weights/{category_folder}/{character_name}/{model_name}", map_location="cpu") - tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk - if_f0 = cpt.get("f0", 1) - version = cpt.get("version", "v1") - if version == "v1": - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=config.is_half) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - model_version = "V1" - elif version == "v2": - if if_f0 == 1: - net_g = SynthesizerTrnMs768NSFsid(*cpt["config"], is_half=config.is_half) - else: - net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"]) - model_version = "V2" - del net_g.enc_q - print(net_g.load_state_dict(cpt["weight"], strict=False)) - net_g.eval().to(config.device) - if config.is_half: - net_g = net_g.half() - else: - net_g = net_g.float() - vc = VC(tgt_sr, config) - print(f"Model loaded: {character_name} / {info['feature_retrieval_library']} | ({model_version})") - models.append((character_name, model_title, model_author, model_cover, model_version, create_vc_fn(model_title, tgt_sr, net_g, vc, if_f0, version, model_index))) - categories.append([category_title, category_folder, description, models]) - return categories - -def cut_vocal_and_inst(url, audio_provider, split_model): - if url != "": - if not os.path.exists("dl_audio"): - os.mkdir("dl_audio") - if audio_provider == "Youtube": - ydl_opts = { - 'noplaylist': True, - 'format': 'bestaudio/best', - 'postprocessors': [{ - 'key': 'FFmpegExtractAudio', - 'preferredcodec': 'wav', - }], - "outtmpl": 'dl_audio/youtube_audio', - } - with yt_dlp.YoutubeDL(ydl_opts) as ydl: - ydl.download([url]) - audio_path = "dl_audio/youtube_audio.wav" - if split_model == "htdemucs": - command = f"demucs --two-stems=vocals {audio_path} -o output" - result = subprocess.run(command.split(), stdout=subprocess.PIPE) - print(result.stdout.decode()) - return "output/htdemucs/youtube_audio/vocals.wav", "output/htdemucs/youtube_audio/no_vocals.wav", audio_path, "output/htdemucs/youtube_audio/vocals.wav" - else: - command = f"demucs --two-stems=vocals -n mdx_extra_q {audio_path} -o output" - result = subprocess.run(command.split(), stdout=subprocess.PIPE) - print(result.stdout.decode()) - return "output/mdx_extra_q/youtube_audio/vocals.wav", "output/mdx_extra_q/youtube_audio/no_vocals.wav", audio_path, 
"output/mdx_extra_q/youtube_audio/vocals.wav" - else: - raise gr.Error("URL Required!") - return None, None, None, None - -def combine_vocal_and_inst(audio_data, audio_volume, split_model): - if not os.path.exists("output/result"): - os.mkdir("output/result") - vocal_path = "output/result/output.wav" - output_path = "output/result/combine.mp3" - if split_model == "htdemucs": - inst_path = "output/htdemucs/youtube_audio/no_vocals.wav" - else: - inst_path = "output/mdx_extra_q/youtube_audio/no_vocals.wav" - with wave.open(vocal_path, "w") as wave_file: - wave_file.setnchannels(1) - wave_file.setsampwidth(2) - wave_file.setframerate(audio_data[0]) - wave_file.writeframes(audio_data[1].tobytes()) - command = f'ffmpeg -y -i {inst_path} -i {vocal_path} -filter_complex [1:a]volume={audio_volume}dB[v];[0:a][v]amix=inputs=2:duration=longest -b:a 320k -c:a libmp3lame {output_path}' - result = subprocess.run(command.split(), stdout=subprocess.PIPE) - print(result.stdout.decode()) - return output_path - -def load_hubert(): - global hubert_model - models, _, _ = checkpoint_utils.load_model_ensemble_and_task( - ["hubert_base.pt"], - suffix="", - ) - hubert_model = models[0] - hubert_model = hubert_model.to(config.device) - if config.is_half: - hubert_model = hubert_model.half() - else: - hubert_model = hubert_model.float() - hubert_model.eval() - -def change_audio_mode(vc_audio_mode): - if vc_audio_mode == "Input path": - return ( - # Input & Upload - gr.Textbox.update(visible=True), - gr.Checkbox.update(visible=False), - gr.Audio.update(visible=False), - # Youtube - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False), - gr.Button.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Slider.update(visible=False), - gr.Audio.update(visible=False), - gr.Button.update(visible=False), - # TTS - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - elif vc_audio_mode == "Upload audio": - return ( - # Input & Upload - gr.Textbox.update(visible=False), - gr.Checkbox.update(visible=True), - gr.Audio.update(visible=True), - # Youtube - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False), - gr.Button.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Slider.update(visible=False), - gr.Audio.update(visible=False), - gr.Button.update(visible=False), - # TTS - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - elif vc_audio_mode == "Youtube": - return ( - # Input & Upload - gr.Textbox.update(visible=False), - gr.Checkbox.update(visible=False), - gr.Audio.update(visible=False), - # Youtube - gr.Dropdown.update(visible=True), - gr.Textbox.update(visible=True), - gr.Dropdown.update(visible=True), - gr.Button.update(visible=True), - gr.Audio.update(visible=True), - gr.Audio.update(visible=True), - gr.Audio.update(visible=True), - gr.Slider.update(visible=True), - gr.Audio.update(visible=True), - gr.Button.update(visible=True), - # TTS - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - elif vc_audio_mode == "TTS Audio": - return ( - # Input & Upload - gr.Textbox.update(visible=False), - gr.Checkbox.update(visible=False), - gr.Audio.update(visible=False), - # Youtube - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False), - 
gr.Button.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Slider.update(visible=False), - gr.Audio.update(visible=False), - gr.Button.update(visible=False), - # TTS - gr.Textbox.update(visible=True), - gr.Dropdown.update(visible=True) - ) - else: - return ( - # Input & Upload - gr.Textbox.update(visible=False), - gr.Checkbox.update(visible=True), - gr.Audio.update(visible=True), - # Youtube - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False), - gr.Button.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Slider.update(visible=False), - gr.Audio.update(visible=False), - gr.Button.update(visible=False), - # TTS - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - -def use_microphone(microphone): - if microphone == True: - return gr.Audio.update(source="microphone") - else: - return gr.Audio.update(source="upload") - -if __name__ == '__main__': - load_hubert() - categories = load_model() - tts_voice_list = asyncio.get_event_loop().run_until_complete(edge_tts.list_voices()) - voices = [f"{v['ShortName']}-{v['Gender']}" for v in tts_voice_list] - with gr.Blocks() as app: - gr.Markdown( - "<div align='center'>\n\n"+ - "# RVC Ryu7ztv\n\n" - ) - for (folder_title, folder, description, models) in categories: - with gr.TabItem(folder_title): - if description: - gr.Markdown(f"### <center> {description}") - with gr.Tabs(): - if not models: - gr.Markdown("# <center> No Model Loaded.") - gr.Markdown("## <center> Please add model or fix your model path.") - continue - for (name, title, author, cover, model_version, vc_fn) in models: - with gr.TabItem(name): - with gr.Row(): - gr.Markdown( - '<div align="center">' - f'<div>{title}</div>\n'+ - f'<div>RVC {model_version} Model</div>\n'+ - (f'<div>Model author: {author}</div>' if author else "")+ - (f'<img style="width:auto;height:300px;" src="file/{cover}">' if cover else "")+ - '</div>' - ) - with gr.Row(): - with gr.Column(): - vc_audio_mode = gr.Dropdown(label="Input voice", choices=audio_mode, allow_custom_value=False, value="Upload audio") - # Input - vc_input = gr.Textbox(label="Input audio path", visible=False) - # Upload - vc_microphone_mode = gr.Checkbox(label="Use Microphone", value=False, visible=True, interactive=True) - vc_upload = gr.Audio(label="Upload audio file", source="upload", visible=True, interactive=True) - # Youtube - vc_download_audio = gr.Dropdown(label="Provider", choices=["Youtube"], allow_custom_value=False, visible=False, value="Youtube", info="Select provider (Default: Youtube)") - vc_link = gr.Textbox(label="Youtube URL", visible=False, info="Example: https://www.youtube.com/watch?v=Nc0sB1Bmf-A", placeholder="https://www.youtube.com/watch?v=...") - vc_split_model = gr.Dropdown(label="Splitter Model", choices=["htdemucs", "mdx_extra_q"], allow_custom_value=False, visible=False, value="htdemucs", info="Select the splitter model (Default: htdemucs)") - vc_split = gr.Button("Split Audio", variant="primary", visible=False) - vc_vocal_preview = gr.Audio(label="Vocal Preview", visible=False) - vc_inst_preview = gr.Audio(label="Instrumental Preview", visible=False) - vc_audio_preview = gr.Audio(label="Audio Preview", visible=False) - # TTS - tts_text = gr.Textbox(visible=False, label="TTS text", info="Text to speech input") - tts_voice = gr.Dropdown(label="Edge-tts speaker", choices=voices, 
visible=False, allow_custom_value=False, value="en-US-AnaNeural-Female") - with gr.Column(): - vc_transform0 = gr.Number(label="Transpose", value=0, info='Type "12" to change from male to female voice. Type "-12" to change female to male voice') - f0method0 = gr.Radio( - label="Pitch extraction algorithm", - info=f0method_info, - choices=f0method_mode, - value="pm", - interactive=True - ) - index_rate1 = gr.Slider( - minimum=0, - maximum=1, - label="Retrieval feature ratio", - info="(Default: 0.7)", - value=0.7, - interactive=True, - ) - filter_radius0 = gr.Slider( - minimum=0, - maximum=7, - label="Apply Median Filtering", - info="The value represents the filter radius and can reduce breathiness.", - value=3, - step=1, - interactive=True, - ) - resample_sr0 = gr.Slider( - minimum=0, - maximum=48000, - label="Resample the output audio", - info="Resample the output audio in post-processing to the final sample rate. Set to 0 for no resampling", - value=0, - step=1, - interactive=True, - ) - rms_mix_rate0 = gr.Slider( - minimum=0, - maximum=1, - label="Volume Envelope", - info="Use the volume envelope of the input to replace or mix with the volume envelope of the output. The closer the ratio is to 1, the more the output envelope is used", - value=1, - interactive=True, - ) - protect0 = gr.Slider( - minimum=0, - maximum=0.5, - label="Voice Protection", - info="Protect voiceless consonants and breath sounds to prevent artifacts such as tearing in electronic music. Set to 0.5 to disable. Decrease the value to increase protection, but it may reduce indexing accuracy", - value=0.5, - step=0.01, - interactive=True, - ) - with gr.Column(): - vc_log = gr.Textbox(label="Output Information", interactive=False) - vc_output = gr.Audio(label="Output Audio", interactive=False) - vc_convert = gr.Button("Convert", variant="primary") - vc_volume = gr.Slider( - minimum=0, - maximum=10, - label="Vocal volume", - value=4, - interactive=True, - step=1, - info="Adjust vocal volume (Default: 4}", - visible=False - ) - vc_combined_output = gr.Audio(label="Output Combined Audio", visible=False) - vc_combine = gr.Button("Combine",variant="primary", visible=False) - vc_convert.click( - fn=vc_fn, - inputs=[ - vc_audio_mode, - vc_input, - vc_upload, - tts_text, - tts_voice, - vc_transform0, - f0method0, - index_rate1, - filter_radius0, - resample_sr0, - rms_mix_rate0, - protect0, - ], - outputs=[vc_log ,vc_output] - ) - vc_split.click( - fn=cut_vocal_and_inst, - inputs=[vc_link, vc_download_audio, vc_split_model], - outputs=[vc_vocal_preview, vc_inst_preview, vc_audio_preview, vc_input] - ) - vc_combine.click( - fn=combine_vocal_and_inst, - inputs=[vc_output, vc_volume, vc_split_model], - outputs=[vc_combined_output] - ) - vc_microphone_mode.change( - fn=use_microphone, - inputs=vc_microphone_mode, - outputs=vc_upload - ) - vc_audio_mode.change( - fn=change_audio_mode, - inputs=[vc_audio_mode], - outputs=[ - vc_input, - vc_microphone_mode, - vc_upload, - vc_download_audio, - vc_link, - vc_split_model, - vc_split, - vc_vocal_preview, - vc_inst_preview, - vc_audio_preview, - vc_volume, - vc_combined_output, - vc_combine, - tts_text, - tts_voice - ] - ) - app.queue(concurrency_count=1, max_size=20, api_open=config.api).launch(share=config.colab) \ No newline at end of file diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/data/music_dataset.py b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/data/music_dataset.py deleted file mode 100644 index 
4e28796939f9cde2b23a2c4bf43fd7ba5fa26b2d..0000000000000000000000000000000000000000 --- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/data/music_dataset.py +++ /dev/null @@ -1,270 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -"""Dataset of music tracks with rich metadata. -""" -from dataclasses import dataclass, field, fields, replace -import gzip -import json -import logging -from pathlib import Path -import random -import typing as tp - -import torch - -from .info_audio_dataset import ( - InfoAudioDataset, - AudioInfo, - get_keyword_list, - get_keyword, - get_string -) -from ..modules.conditioners import ( - ConditioningAttributes, - JointEmbedCondition, - WavCondition, -) -from ..utils.utils import warn_once - - -logger = logging.getLogger(__name__) - - -@dataclass -class MusicInfo(AudioInfo): - """Segment info augmented with music metadata. - """ - # music-specific metadata - title: tp.Optional[str] = None - artist: tp.Optional[str] = None # anonymized artist id, used to ensure no overlap between splits - key: tp.Optional[str] = None - bpm: tp.Optional[float] = None - genre: tp.Optional[str] = None - moods: tp.Optional[list] = None - keywords: tp.Optional[list] = None - description: tp.Optional[str] = None - name: tp.Optional[str] = None - instrument: tp.Optional[str] = None - # original wav accompanying the metadata - self_wav: tp.Optional[WavCondition] = None - # dict mapping attributes names to tuple of wav, text and metadata - joint_embed: tp.Dict[str, JointEmbedCondition] = field(default_factory=dict) - - @property - def has_music_meta(self) -> bool: - return self.name is not None - - def to_condition_attributes(self) -> ConditioningAttributes: - out = ConditioningAttributes() - for _field in fields(self): - key, value = _field.name, getattr(self, _field.name) - if key == 'self_wav': - out.wav[key] = value - elif key == 'joint_embed': - for embed_attribute, embed_cond in value.items(): - out.joint_embed[embed_attribute] = embed_cond - else: - if isinstance(value, list): - value = ' '.join(value) - out.text[key] = value - return out - - @staticmethod - def attribute_getter(attribute): - if attribute == 'bpm': - preprocess_func = get_bpm - elif attribute == 'key': - preprocess_func = get_musical_key - elif attribute in ['moods', 'keywords']: - preprocess_func = get_keyword_list - elif attribute in ['genre', 'name', 'instrument']: - preprocess_func = get_keyword - elif attribute in ['title', 'artist', 'description']: - preprocess_func = get_string - else: - preprocess_func = None - return preprocess_func - - @classmethod - def from_dict(cls, dictionary: dict, fields_required: bool = False): - _dictionary: tp.Dict[str, tp.Any] = {} - - # allow a subset of attributes to not be loaded from the dictionary - # these attributes may be populated later - post_init_attributes = ['self_wav', 'joint_embed'] - optional_fields = ['keywords'] - - for _field in fields(cls): - if _field.name in post_init_attributes: - continue - elif _field.name not in dictionary: - if fields_required and _field.name not in optional_fields: - raise KeyError(f"Unexpected missing key: {_field.name}") - else: - preprocess_func: tp.Optional[tp.Callable] = cls.attribute_getter(_field.name) - value = dictionary[_field.name] - if preprocess_func: - value = preprocess_func(value) - _dictionary[_field.name] = value - return cls(**_dictionary) - 
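The `attribute_getter`/`from_dict` pair above is a small, reusable pattern: look up an optional per-field preprocessor, apply it to the raw value, then build the dataclass. A self-contained toy version follows; the field names and preprocessors here are illustrative stand-ins, not audiocraft's real ones:

```python
from dataclasses import dataclass, fields
import typing as tp

@dataclass
class ToyInfo:
    bpm: tp.Optional[float] = None
    moods: tp.Optional[list] = None

    @staticmethod
    def attribute_getter(name: str) -> tp.Optional[tp.Callable]:
        # per-field preprocessors, applied before construction
        if name == 'bpm':
            return lambda v: float(v) if v not in (None, '') else None
        if name == 'moods':
            return lambda v: [s.strip().lower() for s in v] if v else None
        return None

    @classmethod
    def from_dict(cls, dictionary: dict) -> 'ToyInfo':
        kwargs = {}
        for _field in fields(cls):
            if _field.name in dictionary:
                pre = cls.attribute_getter(_field.name)
                value = dictionary[_field.name]
                kwargs[_field.name] = pre(value) if pre else value
        return cls(**kwargs)

print(ToyInfo.from_dict({'bpm': '128', 'moods': ['Club ', 'ENERGETIC']}))
# -> ToyInfo(bpm=128.0, moods=['club', 'energetic'])
```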
-
-
-def augment_music_info_description(music_info: MusicInfo, merge_text_p: float = 0.,
-                                   drop_desc_p: float = 0., drop_other_p: float = 0.) -> MusicInfo:
-    """Augment MusicInfo description with additional metadata fields and potential dropout.
-    Additional textual attributes are added given probability 'merge_text_p' and
-    the original textual description is dropped from the augmented description given probability drop_desc_p.
-
-    Args:
-        music_info (MusicInfo): The music metadata to augment.
-        merge_text_p (float): Probability of merging additional metadata to the description.
-            If provided value is 0, then no merging is performed.
-        drop_desc_p (float): Probability of dropping the original description on text merge.
-            If provided value is 0, then no dropout is performed.
-        drop_other_p (float): Probability of dropping the other fields used for text augmentation.
-    Returns:
-        MusicInfo: The MusicInfo with augmented textual description.
-    """
-    def is_valid_field(field_name: str, field_value: tp.Any) -> bool:
-        valid_field_name = field_name in ['key', 'bpm', 'genre', 'moods', 'instrument', 'keywords']
-        valid_field_value = field_value is not None and isinstance(field_value, (int, float, str, list))
-        # note: despite the name, a field is *kept* with probability drop_other_p
-        keep_field = random.uniform(0, 1) < drop_other_p
-        return valid_field_name and valid_field_value and keep_field
-
-    def process_value(v: tp.Any) -> str:
-        if isinstance(v, (int, float, str)):
-            return str(v)
-        if isinstance(v, list):
-            return ", ".join(v)
-        else:
-            raise ValueError(f"Unknown type for text value! ({type(v), v})")
-
-    description = music_info.description
-
-    metadata_text = ""
-    if random.uniform(0, 1) < merge_text_p:
-        meta_pairs = [f'{_field.name}: {process_value(getattr(music_info, _field.name))}'
-                      for _field in fields(music_info) if is_valid_field(_field.name, getattr(music_info, _field.name))]
-        random.shuffle(meta_pairs)
-        metadata_text = ". ".join(meta_pairs)
-        description = description if not random.uniform(0, 1) < drop_desc_p else None
-        logger.debug(f"Applying text augmentation on MMI info. description: {description}, metadata: {metadata_text}")
-
-    if description is None:
-        description = metadata_text if len(metadata_text) > 1 else None
-    else:
-        description = ". ".join([description.rstrip('.'), metadata_text])
-    description = description.strip() if description else None
-
-    music_info = replace(music_info)
-    music_info.description = description
-    return music_info
-
-
-class Paraphraser:
-    def __init__(self, paraphrase_source: tp.Union[str, Path], paraphrase_p: float = 0.):
-        self.paraphrase_p = paraphrase_p
-        open_fn = gzip.open if str(paraphrase_source).lower().endswith('.gz') else open
-        with open_fn(paraphrase_source, 'rb') as f:  # type: ignore
-            self.paraphrase_source = json.loads(f.read())
-        logger.info(f"loaded paraphrasing source from: {paraphrase_source}")
-
-    def sample_paraphrase(self, audio_path: str, description: str):
-        if random.random() >= self.paraphrase_p:
-            return description
-        # json keys are plain strings, so look up with a string path
-        info_path = str(Path(audio_path).with_suffix('.json'))
-        if info_path not in self.paraphrase_source:
-            warn_once(logger, f"{info_path} not in paraphrase source!")
-            return description
-        new_desc = random.choice(self.paraphrase_source[info_path])
-        logger.debug(f"{description} -> {new_desc}")
-        return new_desc
-
-
-class MusicDataset(InfoAudioDataset):
-    """Music dataset is an AudioDataset with music-related metadata.
-
-    Args:
-        info_fields_required (bool): Whether to enforce having required fields.
-        merge_text_p (float): Probability of merging additional metadata to the description.
-        drop_desc_p (float): Probability of dropping the original description on text merge.
-        drop_other_p (float): Probability of dropping the other fields used for text augmentation.
-        joint_embed_attributes (list[str]): A list of attributes for which joint embedding metadata is returned.
-        paraphrase_source (str, optional): Path to the .json or .json.gz file containing the
-            paraphrases for the description. The json should be a dict whose keys are the
-            original info paths (e.g. track_path.json) and whose values are lists of possible
-            paraphrases.
-        paraphrase_p (float): Probability of taking a paraphrase.
-
-    See `audiocraft.data.info_audio_dataset.InfoAudioDataset` for full initialization arguments.
-    """
-    def __init__(self, *args, info_fields_required: bool = True,
-                 merge_text_p: float = 0., drop_desc_p: float = 0., drop_other_p: float = 0.,
-                 joint_embed_attributes: tp.List[str] = [],
-                 paraphrase_source: tp.Optional[str] = None, paraphrase_p: float = 0,
-                 **kwargs):
-        kwargs['return_info'] = True  # We require the info for each song of the dataset.
-        super().__init__(*args, **kwargs)
-        self.info_fields_required = info_fields_required
-        self.merge_text_p = merge_text_p
-        self.drop_desc_p = drop_desc_p
-        self.drop_other_p = drop_other_p
-        self.joint_embed_attributes = joint_embed_attributes
-        self.paraphraser = None
-        if paraphrase_source is not None:
-            self.paraphraser = Paraphraser(paraphrase_source, paraphrase_p)
-
-    def __getitem__(self, index):
-        wav, info = super().__getitem__(index)
-        info_data = info.to_dict()
-        music_info_path = Path(info.meta.path).with_suffix('.json')
-
-        if Path(music_info_path).exists():
-            with open(music_info_path, 'r') as json_file:
-                music_data = json.load(json_file)
-                music_data.update(info_data)
-                music_info = MusicInfo.from_dict(music_data, fields_required=self.info_fields_required)
-            if self.paraphraser is not None:
-                music_info.description = self.paraphraser.sample_paraphrase(music_info.meta.path, music_info.description)
-            if self.merge_text_p:
-                music_info = augment_music_info_description(
-                    music_info, self.merge_text_p, self.drop_desc_p, self.drop_other_p)
-        else:
-            music_info = MusicInfo.from_dict(info_data, fields_required=False)
-
-        music_info.self_wav = WavCondition(
-            wav=wav[None], length=torch.tensor([info.n_frames]),
-            sample_rate=[info.sample_rate], path=[info.meta.path], seek_time=[info.seek_time])
-
-        for att in self.joint_embed_attributes:
-            att_value = getattr(music_info, att)
-            joint_embed_cond = JointEmbedCondition(
-                wav[None], [att_value], torch.tensor([info.n_frames]),
-                sample_rate=[info.sample_rate], path=[info.meta.path], seek_time=[info.seek_time])
-            music_info.joint_embed[att] = joint_embed_cond
-
-        return wav, music_info
-
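To make the merge step in `augment_music_info_description` concrete, here is a toy rerun of its string-building logic on plain values. The metadata below is hypothetical, and the dropout branches are skipped:

```python
import random

random.seed(0)
desc = "upbeat electronic track"
meta = {'bpm': 128.0, 'genre': 'house', 'moods': ['club', 'energetic']}
# mirror process_value(): lists join with ", ", scalars go through str()
pairs = [f"{k}: {', '.join(v) if isinstance(v, list) else str(v)}"
         for k, v in meta.items()]
random.shuffle(pairs)
print(". ".join([desc.rstrip('.'), ". ".join(pairs)]))
# possible output:
# upbeat electronic track. moods: club, energetic. bpm: 128.0. genre: house
```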
-def get_musical_key(value: tp.Optional[str]) -> tp.Optional[str]:
-    """Preprocess key keywords, discarding them if multiple keys are defined."""
-    if value is None or (not isinstance(value, str)) or len(value) == 0 or value == 'None':
-        return None
-    elif ',' in value:
-        # For now, we discard when multiple keys are defined, separated with commas
-        return None
-    else:
-        return value.strip().lower()
-
-
-def get_bpm(value: tp.Optional[str]) -> tp.Optional[float]:
-    """Preprocess to a float."""
-    if value is None:
-        return None
-    try:
-        return float(value)
-    except ValueError:
-        return None
diff --git a/spaces/RMXK/RVC_HFF/README.md b/spaces/RMXK/RVC_HFF/README.md
deleted file mode 100644
index 
9d8914cd05791e4f8db6267eb2a5fe2133e22e58..0000000000000000000000000000000000000000 --- a/spaces/RMXK/RVC_HFF/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: RVC Inference HF -emoji: 👀 -colorFrom: green -colorTo: green -sdk: gradio -sdk_version: 3.43.2 -app_file: app.py -pinned: false ---- \ No newline at end of file diff --git a/spaces/RamV/ChatRobo_II/README.md b/spaces/RamV/ChatRobo_II/README.md deleted file mode 100644 index 1ea6a0e725c0ccb28b543c7a03ef2fdb63b9c739..0000000000000000000000000000000000000000 --- a/spaces/RamV/ChatRobo_II/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: ChatRobo II -emoji: 🐠 -colorFrom: purple -colorTo: indigo -sdk: gradio -sdk_version: 3.20.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/urllib3/connection.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/urllib3/connection.py deleted file mode 100644 index 10fb36c4e350d8ca6f65e4036a60c48a9b3216fc..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/urllib3/connection.py +++ /dev/null @@ -1,567 +0,0 @@ -from __future__ import absolute_import - -import datetime -import logging -import os -import re -import socket -import warnings -from socket import error as SocketError -from socket import timeout as SocketTimeout - -from .packages import six -from .packages.six.moves.http_client import HTTPConnection as _HTTPConnection -from .packages.six.moves.http_client import HTTPException # noqa: F401 -from .util.proxy import create_proxy_ssl_context - -try: # Compiled with SSL? - import ssl - - BaseSSLError = ssl.SSLError -except (ImportError, AttributeError): # Platform-specific: No SSL. - ssl = None - - class BaseSSLError(BaseException): - pass - - -try: - # Python 3: not a no-op, we're adding this to the namespace so it can be imported. - ConnectionError = ConnectionError -except NameError: - # Python 2 - class ConnectionError(Exception): - pass - - -try: # Python 3: - # Not a no-op, we're adding this to the namespace so it can be imported. - BrokenPipeError = BrokenPipeError -except NameError: # Python 2: - - class BrokenPipeError(Exception): - pass - - -from ._collections import HTTPHeaderDict # noqa (historical, removed in v2) -from ._version import __version__ -from .exceptions import ( - ConnectTimeoutError, - NewConnectionError, - SubjectAltNameWarning, - SystemTimeWarning, -) -from .util import SKIP_HEADER, SKIPPABLE_HEADERS, connection -from .util.ssl_ import ( - assert_fingerprint, - create_urllib3_context, - is_ipaddress, - resolve_cert_reqs, - resolve_ssl_version, - ssl_wrap_socket, -) -from .util.ssl_match_hostname import CertificateError, match_hostname - -log = logging.getLogger(__name__) - -port_by_scheme = {"http": 80, "https": 443} - -# When it comes time to update this value as a part of regular maintenance -# (ie test_recent_date is failing) update it to ~6 months before the current date. -RECENT_DATE = datetime.date(2022, 1, 1) - -_CONTAINS_CONTROL_CHAR_RE = re.compile(r"[^-!#$%&'*+.^_`|~0-9a-zA-Z]") - - -class HTTPConnection(_HTTPConnection, object): - """ - Based on :class:`http.client.HTTPConnection` but provides an extra constructor - backwards-compatibility layer between older and newer Pythons. - - Additional keyword parameters are used to configure attributes of the connection. 
- Accepted parameters include: - - - ``strict``: See the documentation on :class:`urllib3.connectionpool.HTTPConnectionPool` - - ``source_address``: Set the source address for the current connection. - - ``socket_options``: Set specific options on the underlying socket. If not specified, then - defaults are loaded from ``HTTPConnection.default_socket_options`` which includes disabling - Nagle's algorithm (sets TCP_NODELAY to 1) unless the connection is behind a proxy. - - For example, if you wish to enable TCP Keep Alive in addition to the defaults, - you might pass: - - .. code-block:: python - - HTTPConnection.default_socket_options + [ - (socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1), - ] - - Or you may want to disable the defaults by passing an empty list (e.g., ``[]``). - """ - - default_port = port_by_scheme["http"] - - #: Disable Nagle's algorithm by default. - #: ``[(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)]`` - default_socket_options = [(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)] - - #: Whether this connection verifies the host's certificate. - is_verified = False - - #: Whether this proxy connection (if used) verifies the proxy host's - #: certificate. - proxy_is_verified = None - - def __init__(self, *args, **kw): - if not six.PY2: - kw.pop("strict", None) - - # Pre-set source_address. - self.source_address = kw.get("source_address") - - #: The socket options provided by the user. If no options are - #: provided, we use the default options. - self.socket_options = kw.pop("socket_options", self.default_socket_options) - - # Proxy options provided by the user. - self.proxy = kw.pop("proxy", None) - self.proxy_config = kw.pop("proxy_config", None) - - _HTTPConnection.__init__(self, *args, **kw) - - @property - def host(self): - """ - Getter method to remove any trailing dots that indicate the hostname is an FQDN. - - In general, SSL certificates don't include the trailing dot indicating a - fully-qualified domain name, and thus, they don't validate properly when - checked against a domain name that includes the dot. In addition, some - servers may not expect to receive the trailing dot when provided. - - However, the hostname with trailing dot is critical to DNS resolution; doing a - lookup with the trailing dot will properly only resolve the appropriate FQDN, - whereas a lookup without a trailing dot will search the system's search domain - list. Thus, it's important to keep the original host around for use only in - those cases where it's appropriate (i.e., when doing DNS lookup to establish the - actual TCP connection across which we're going to send HTTP requests). - """ - return self._dns_host.rstrip(".") - - @host.setter - def host(self, value): - """ - Setter for the `host` property. - - We assume that only urllib3 uses the _dns_host attribute; httplib itself - only uses `host`, and it seems reasonable that other libraries follow suit. - """ - self._dns_host = value - - def _new_conn(self): - """Establish a socket connection and set nodelay settings on it. - - :return: New socket connection. - """ - extra_kw = {} - if self.source_address: - extra_kw["source_address"] = self.source_address - - if self.socket_options: - extra_kw["socket_options"] = self.socket_options - - try: - conn = connection.create_connection( - (self._dns_host, self.port), self.timeout, **extra_kw - ) - - except SocketTimeout: - raise ConnectTimeoutError( - self, - "Connection to %s timed out. 
(connect timeout=%s)" - % (self.host, self.timeout), - ) - - except SocketError as e: - raise NewConnectionError( - self, "Failed to establish a new connection: %s" % e - ) - - return conn - - def _is_using_tunnel(self): - # Google App Engine's httplib does not define _tunnel_host - return getattr(self, "_tunnel_host", None) - - def _prepare_conn(self, conn): - self.sock = conn - if self._is_using_tunnel(): - # TODO: Fix tunnel so it doesn't depend on self.sock state. - self._tunnel() - # Mark this connection as not reusable - self.auto_open = 0 - - def connect(self): - conn = self._new_conn() - self._prepare_conn(conn) - - def putrequest(self, method, url, *args, **kwargs): - """ """ - # Empty docstring because the indentation of CPython's implementation - # is broken but we don't want this method in our documentation. - match = _CONTAINS_CONTROL_CHAR_RE.search(method) - if match: - raise ValueError( - "Method cannot contain non-token characters %r (found at least %r)" - % (method, match.group()) - ) - - return _HTTPConnection.putrequest(self, method, url, *args, **kwargs) - - def putheader(self, header, *values): - """ """ - if not any(isinstance(v, str) and v == SKIP_HEADER for v in values): - _HTTPConnection.putheader(self, header, *values) - elif six.ensure_str(header.lower()) not in SKIPPABLE_HEADERS: - raise ValueError( - "urllib3.util.SKIP_HEADER only supports '%s'" - % ("', '".join(map(str.title, sorted(SKIPPABLE_HEADERS))),) - ) - - def request(self, method, url, body=None, headers=None): - if headers is None: - headers = {} - else: - # Avoid modifying the headers passed into .request() - headers = headers.copy() - if "user-agent" not in (six.ensure_str(k.lower()) for k in headers): - headers["User-Agent"] = _get_default_user_agent() - super(HTTPConnection, self).request(method, url, body=body, headers=headers) - - def request_chunked(self, method, url, body=None, headers=None): - """ - Alternative to the common request method, which sends the - body with chunked encoding and not as one block - """ - headers = headers or {} - header_keys = set([six.ensure_str(k.lower()) for k in headers]) - skip_accept_encoding = "accept-encoding" in header_keys - skip_host = "host" in header_keys - self.putrequest( - method, url, skip_accept_encoding=skip_accept_encoding, skip_host=skip_host - ) - if "user-agent" not in header_keys: - self.putheader("User-Agent", _get_default_user_agent()) - for header, value in headers.items(): - self.putheader(header, value) - if "transfer-encoding" not in header_keys: - self.putheader("Transfer-Encoding", "chunked") - self.endheaders() - - if body is not None: - stringish_types = six.string_types + (bytes,) - if isinstance(body, stringish_types): - body = (body,) - for chunk in body: - if not chunk: - continue - if not isinstance(chunk, bytes): - chunk = chunk.encode("utf8") - len_str = hex(len(chunk))[2:] - to_send = bytearray(len_str.encode()) - to_send += b"\r\n" - to_send += chunk - to_send += b"\r\n" - self.send(to_send) - - # After the if clause, to always have a closed body - self.send(b"0\r\n\r\n") - - -class HTTPSConnection(HTTPConnection): - """ - Many of the parameters to this constructor are passed to the underlying SSL - socket by means of :py:func:`urllib3.util.ssl_wrap_socket`. 
- """ - - default_port = port_by_scheme["https"] - - cert_reqs = None - ca_certs = None - ca_cert_dir = None - ca_cert_data = None - ssl_version = None - assert_fingerprint = None - tls_in_tls_required = False - - def __init__( - self, - host, - port=None, - key_file=None, - cert_file=None, - key_password=None, - strict=None, - timeout=socket._GLOBAL_DEFAULT_TIMEOUT, - ssl_context=None, - server_hostname=None, - **kw - ): - - HTTPConnection.__init__(self, host, port, strict=strict, timeout=timeout, **kw) - - self.key_file = key_file - self.cert_file = cert_file - self.key_password = key_password - self.ssl_context = ssl_context - self.server_hostname = server_hostname - - # Required property for Google AppEngine 1.9.0 which otherwise causes - # HTTPS requests to go out as HTTP. (See Issue #356) - self._protocol = "https" - - def set_cert( - self, - key_file=None, - cert_file=None, - cert_reqs=None, - key_password=None, - ca_certs=None, - assert_hostname=None, - assert_fingerprint=None, - ca_cert_dir=None, - ca_cert_data=None, - ): - """ - This method should only be called once, before the connection is used. - """ - # If cert_reqs is not provided we'll assume CERT_REQUIRED unless we also - # have an SSLContext object in which case we'll use its verify_mode. - if cert_reqs is None: - if self.ssl_context is not None: - cert_reqs = self.ssl_context.verify_mode - else: - cert_reqs = resolve_cert_reqs(None) - - self.key_file = key_file - self.cert_file = cert_file - self.cert_reqs = cert_reqs - self.key_password = key_password - self.assert_hostname = assert_hostname - self.assert_fingerprint = assert_fingerprint - self.ca_certs = ca_certs and os.path.expanduser(ca_certs) - self.ca_cert_dir = ca_cert_dir and os.path.expanduser(ca_cert_dir) - self.ca_cert_data = ca_cert_data - - def connect(self): - # Add certificate verification - self.sock = conn = self._new_conn() - hostname = self.host - tls_in_tls = False - - if self._is_using_tunnel(): - if self.tls_in_tls_required: - self.sock = conn = self._connect_tls_proxy(hostname, conn) - tls_in_tls = True - - # Calls self._set_hostport(), so self.host is - # self._tunnel_host below. - self._tunnel() - # Mark this connection as not reusable - self.auto_open = 0 - - # Override the host with the one we're requesting data from. - hostname = self._tunnel_host - - server_hostname = hostname - if self.server_hostname is not None: - server_hostname = self.server_hostname - - is_time_off = datetime.date.today() < RECENT_DATE - if is_time_off: - warnings.warn( - ( - "System time is way off (before {0}). This will probably " - "lead to SSL verification errors" - ).format(RECENT_DATE), - SystemTimeWarning, - ) - - # Wrap socket using verification with the root certs in - # trusted_root_certs - default_ssl_context = False - if self.ssl_context is None: - default_ssl_context = True - self.ssl_context = create_urllib3_context( - ssl_version=resolve_ssl_version(self.ssl_version), - cert_reqs=resolve_cert_reqs(self.cert_reqs), - ) - - context = self.ssl_context - context.verify_mode = resolve_cert_reqs(self.cert_reqs) - - # Try to load OS default certs if none are given. 
- # Works well on Windows (requires Python3.4+) - if ( - not self.ca_certs - and not self.ca_cert_dir - and not self.ca_cert_data - and default_ssl_context - and hasattr(context, "load_default_certs") - ): - context.load_default_certs() - - self.sock = ssl_wrap_socket( - sock=conn, - keyfile=self.key_file, - certfile=self.cert_file, - key_password=self.key_password, - ca_certs=self.ca_certs, - ca_cert_dir=self.ca_cert_dir, - ca_cert_data=self.ca_cert_data, - server_hostname=server_hostname, - ssl_context=context, - tls_in_tls=tls_in_tls, - ) - - # If we're using all defaults and the connection - # is TLSv1 or TLSv1.1 we throw a DeprecationWarning - # for the host. - if ( - default_ssl_context - and self.ssl_version is None - and hasattr(self.sock, "version") - and self.sock.version() in {"TLSv1", "TLSv1.1"} - ): - warnings.warn( - "Negotiating TLSv1/TLSv1.1 by default is deprecated " - "and will be disabled in urllib3 v2.0.0. Connecting to " - "'%s' with '%s' can be enabled by explicitly opting-in " - "with 'ssl_version'" % (self.host, self.sock.version()), - DeprecationWarning, - ) - - if self.assert_fingerprint: - assert_fingerprint( - self.sock.getpeercert(binary_form=True), self.assert_fingerprint - ) - elif ( - context.verify_mode != ssl.CERT_NONE - and not getattr(context, "check_hostname", False) - and self.assert_hostname is not False - ): - # While urllib3 attempts to always turn off hostname matching from - # the TLS library, this cannot always be done. So we check whether - # the TLS Library still thinks it's matching hostnames. - cert = self.sock.getpeercert() - if not cert.get("subjectAltName", ()): - warnings.warn( - ( - "Certificate for {0} has no `subjectAltName`, falling back to check for a " - "`commonName` for now. This feature is being removed by major browsers and " - "deprecated by RFC 2818. (See https://github.com/urllib3/urllib3/issues/497 " - "for details.)".format(hostname) - ), - SubjectAltNameWarning, - ) - _match_hostname(cert, self.assert_hostname or server_hostname) - - self.is_verified = ( - context.verify_mode == ssl.CERT_REQUIRED - or self.assert_fingerprint is not None - ) - - def _connect_tls_proxy(self, hostname, conn): - """ - Establish a TLS connection to the proxy using the provided SSL context. - """ - proxy_config = self.proxy_config - ssl_context = proxy_config.ssl_context - if ssl_context: - # If the user provided a proxy context, we assume CA and client - # certificates have already been set - return ssl_wrap_socket( - sock=conn, - server_hostname=hostname, - ssl_context=ssl_context, - ) - - ssl_context = create_proxy_ssl_context( - self.ssl_version, - self.cert_reqs, - self.ca_certs, - self.ca_cert_dir, - self.ca_cert_data, - ) - - # If no cert was provided, use only the default options for server - # certificate validation - socket = ssl_wrap_socket( - sock=conn, - ca_certs=self.ca_certs, - ca_cert_dir=self.ca_cert_dir, - ca_cert_data=self.ca_cert_data, - server_hostname=hostname, - ssl_context=ssl_context, - ) - - if ssl_context.verify_mode != ssl.CERT_NONE and not getattr( - ssl_context, "check_hostname", False - ): - # While urllib3 attempts to always turn off hostname matching from - # the TLS library, this cannot always be done. So we check whether - # the TLS Library still thinks it's matching hostnames. - cert = socket.getpeercert() - if not cert.get("subjectAltName", ()): - warnings.warn( - ( - "Certificate for {0} has no `subjectAltName`, falling back to check for a " - "`commonName` for now. 
This feature is being removed by major browsers and " - "deprecated by RFC 2818. (See https://github.com/urllib3/urllib3/issues/497 " - "for details.)".format(hostname) - ), - SubjectAltNameWarning, - ) - _match_hostname(cert, hostname) - - self.proxy_is_verified = ssl_context.verify_mode == ssl.CERT_REQUIRED - return socket - - -def _match_hostname(cert, asserted_hostname): - # Our upstream implementation of ssl.match_hostname() - # only applies this normalization to IP addresses so it doesn't - # match DNS SANs so we do the same thing! - stripped_hostname = asserted_hostname.strip("u[]") - if is_ipaddress(stripped_hostname): - asserted_hostname = stripped_hostname - - try: - match_hostname(cert, asserted_hostname) - except CertificateError as e: - log.warning( - "Certificate did not match expected hostname: %s. Certificate: %s", - asserted_hostname, - cert, - ) - # Add cert to exception and reraise so client code can inspect - # the cert when catching the exception, if they want to - e._peer_cert = cert - raise - - -def _get_default_user_agent(): - return "python-urllib3/%s" % __version__ - - -class DummyConnection(object): - """Used to detect a failed ConnectionCls import.""" - - pass - - -if not ssl: - HTTPSConnection = DummyConnection # noqa: F811 - - -VerifiedHTTPSConnection = HTTPSConnection diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/semantic_version/base.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/semantic_version/base.py deleted file mode 100644 index 777c27ac463f34996d0281fb7a68e5f6c7fb9a9c..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/semantic_version/base.py +++ /dev/null @@ -1,1449 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) The python-semanticversion project -# This code is distributed under the two-clause BSD License. 
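- # Quick usage sketch (illustrative, minimal): `Version` parses and compares
- # single versions; `SimpleSpec` and `NpmSpec` below parse requirement
- # expressions into matcher clauses.
- #
- #     >>> Version('1.2.3-rc.1') < Version('1.2.3')
- #     True
- #     >>> SimpleSpec('>=1.2.0,<2.0.0').match(Version('1.4.0'))
- #     True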
- -import functools -import re -import warnings - - -def _has_leading_zero(value): - return (value - and value[0] == '0' - and value.isdigit() - and value != '0') - - -class MaxIdentifier(object): - __slots__ = [] - - def __repr__(self): - return 'MaxIdentifier()' - - def __eq__(self, other): - return isinstance(other, self.__class__) - - -@functools.total_ordering -class NumericIdentifier(object): - __slots__ = ['value'] - - def __init__(self, value): - self.value = int(value) - - def __repr__(self): - return 'NumericIdentifier(%r)' % self.value - - def __eq__(self, other): - if isinstance(other, NumericIdentifier): - return self.value == other.value - return NotImplemented - - def __lt__(self, other): - if isinstance(other, MaxIdentifier): - return True - elif isinstance(other, AlphaIdentifier): - return True - elif isinstance(other, NumericIdentifier): - return self.value < other.value - else: - return NotImplemented - - -@functools.total_ordering -class AlphaIdentifier(object): - __slots__ = ['value'] - - def __init__(self, value): - self.value = value.encode('ascii') - - def __repr__(self): - return 'AlphaIdentifier(%r)' % self.value - - def __eq__(self, other): - if isinstance(other, AlphaIdentifier): - return self.value == other.value - return NotImplemented - - def __lt__(self, other): - if isinstance(other, MaxIdentifier): - return True - elif isinstance(other, NumericIdentifier): - return False - elif isinstance(other, AlphaIdentifier): - return self.value < other.value - else: - return NotImplemented - - -class Version(object): - - version_re = re.compile(r'^(\d+)\.(\d+)\.(\d+)(?:-([0-9a-zA-Z.-]+))?(?:\+([0-9a-zA-Z.-]+))?$') - partial_version_re = re.compile(r'^(\d+)(?:\.(\d+)(?:\.(\d+))?)?(?:-([0-9a-zA-Z.-]*))?(?:\+([0-9a-zA-Z.-]*))?$') - - def __init__( - self, - version_string=None, - major=None, - minor=None, - patch=None, - prerelease=None, - build=None, - partial=False): - if partial: - warnings.warn( - "Partial versions will be removed in 3.0; use SimpleSpec('1.x.x') instead.", - DeprecationWarning, - stacklevel=2, - ) - has_text = version_string is not None - has_parts = not (major is minor is patch is prerelease is build is None) - if not has_text ^ has_parts: - raise ValueError("Call either Version('1.2.3') or Version(major=1, ...).") - - if has_text: - major, minor, patch, prerelease, build = self.parse(version_string, partial) - else: - # Convenience: allow to omit prerelease/build. - prerelease = tuple(prerelease or ()) - if not partial: - build = tuple(build or ()) - self._validate_kwargs(major, minor, patch, prerelease, build, partial) - - self.major = major - self.minor = minor - self.patch = patch - self.prerelease = prerelease - self.build = build - - self.partial = partial - - # Cached precedence keys - # _cmp_precedence_key is used for semver-precedence comparison - self._cmp_precedence_key = self._build_precedence_key(with_build=False) - # _sort_precedence_key is used for self.precedence_key, esp. for sorted(...) 
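- # (Including the build component makes sorted(...) deterministic for
- # versions that differ only in build metadata, even though SemVer
- # precedence itself ignores build.)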
- self._sort_precedence_key = self._build_precedence_key(with_build=True) - - @classmethod - def _coerce(cls, value, allow_none=False): - if value is None and allow_none: - return value - return int(value) - - def next_major(self): - if self.prerelease and self.minor == self.patch == 0: - return Version( - major=self.major, - minor=0, - patch=0, - partial=self.partial, - ) - else: - return Version( - major=self.major + 1, - minor=0, - patch=0, - partial=self.partial, - ) - - def next_minor(self): - if self.prerelease and self.patch == 0: - return Version( - major=self.major, - minor=self.minor, - patch=0, - partial=self.partial, - ) - else: - return Version( - major=self.major, - minor=self.minor + 1, - patch=0, - partial=self.partial, - ) - - def next_patch(self): - if self.prerelease: - return Version( - major=self.major, - minor=self.minor, - patch=self.patch, - partial=self.partial, - ) - else: - return Version( - major=self.major, - minor=self.minor, - patch=self.patch + 1, - partial=self.partial, - ) - - def truncate(self, level='patch'): - """Return a new Version object, truncated up to the selected level.""" - if level == 'build': - return self - elif level == 'prerelease': - return Version( - major=self.major, - minor=self.minor, - patch=self.patch, - prerelease=self.prerelease, - partial=self.partial, - ) - elif level == 'patch': - return Version( - major=self.major, - minor=self.minor, - patch=self.patch, - partial=self.partial, - ) - elif level == 'minor': - return Version( - major=self.major, - minor=self.minor, - patch=None if self.partial else 0, - partial=self.partial, - ) - elif level == 'major': - return Version( - major=self.major, - minor=None if self.partial else 0, - patch=None if self.partial else 0, - partial=self.partial, - ) - else: - raise ValueError("Invalid truncation level `%s`." % level) - - @classmethod - def coerce(cls, version_string, partial=False): - """Coerce an arbitrary version string into a semver-compatible one. - - The rule is: - - If not enough components, fill minor/patch with zeroes; unless - partial=True - - If more than 3 dot-separated components, extra components are "build" - data. If some "build" data already appeared, append it to the - extra components - - Examples: - >>> Version.coerce('0.1') - Version(0, 1, 0) - >>> Version.coerce('0.1.2.3') - Version(0, 1, 2, (), ('3',)) - >>> Version.coerce('0.1.2.3+4') - Version(0, 1, 2, (), ('3', '4')) - >>> Version.coerce('0.1+2-3+4_5') - Version(0, 1, 0, (), ('2-3', '4-5')) - """ - base_re = re.compile(r'^\d+(?:\.\d+(?:\.\d+)?)?') - - match = base_re.match(version_string) - if not match: - raise ValueError( - "Version string lacks a numerical component: %r" - % version_string - ) - - version = version_string[:match.end()] - if not partial: - # We need a not-partial version. - while version.count('.') < 2: - version += '.0' - - # Strip leading zeros in components - # Version is of the form nn, nn.pp or nn.pp.qq - version = '.'.join( - # If the part was '0', we end up with an empty string. 
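- # (e.g. '01.02.003' becomes '1.2.3'; a bare '0' strips to '' and is
- # restored to '0' by the `or '0'` fallback.)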
- part.lstrip('0') or '0' - for part in version.split('.') - ) - - if match.end() == len(version_string): - return Version(version, partial=partial) - - rest = version_string[match.end():] - - # Cleanup the 'rest' - rest = re.sub(r'[^a-zA-Z0-9+.-]', '-', rest) - - if rest[0] == '+': - # A 'build' component - prerelease = '' - build = rest[1:] - elif rest[0] == '.': - # An extra version component, probably 'build' - prerelease = '' - build = rest[1:] - elif rest[0] == '-': - rest = rest[1:] - if '+' in rest: - prerelease, build = rest.split('+', 1) - else: - prerelease, build = rest, '' - elif '+' in rest: - prerelease, build = rest.split('+', 1) - else: - prerelease, build = rest, '' - - build = build.replace('+', '.') - - if prerelease: - version = '%s-%s' % (version, prerelease) - if build: - version = '%s+%s' % (version, build) - - return cls(version, partial=partial) - - @classmethod - def parse(cls, version_string, partial=False, coerce=False): - """Parse a version string into a tuple of components: - (major, minor, patch, prerelease, build). - - Args: - version_string (str), the version string to parse - partial (bool), whether to accept incomplete input - coerce (bool), whether to try to map the passed in string into a - valid Version. - """ - if not version_string: - raise ValueError('Invalid empty version string: %r' % version_string) - - if partial: - version_re = cls.partial_version_re - else: - version_re = cls.version_re - - match = version_re.match(version_string) - if not match: - raise ValueError('Invalid version string: %r' % version_string) - - major, minor, patch, prerelease, build = match.groups() - - if _has_leading_zero(major): - raise ValueError("Invalid leading zero in major: %r" % version_string) - if _has_leading_zero(minor): - raise ValueError("Invalid leading zero in minor: %r" % version_string) - if _has_leading_zero(patch): - raise ValueError("Invalid leading zero in patch: %r" % version_string) - - major = int(major) - minor = cls._coerce(minor, partial) - patch = cls._coerce(patch, partial) - - if prerelease is None: - if partial and (build is None): - # No build info, strip here - return (major, minor, patch, None, None) - else: - prerelease = () - elif prerelease == '': - prerelease = () - else: - prerelease = tuple(prerelease.split('.')) - cls._validate_identifiers(prerelease, allow_leading_zeroes=False) - - if build is None: - if partial: - build = None - else: - build = () - elif build == '': - build = () - else: - build = tuple(build.split('.')) - cls._validate_identifiers(build, allow_leading_zeroes=True) - - return (major, minor, patch, prerelease, build) - - @classmethod - def _validate_identifiers(cls, identifiers, allow_leading_zeroes=False): - for item in identifiers: - if not item: - raise ValueError( - "Invalid empty identifier %r in %r" - % (item, '.'.join(identifiers)) - ) - - if item[0] == '0' and item.isdigit() and item != '0' and not allow_leading_zeroes: - raise ValueError("Invalid leading zero in identifier %r" % item) - - @classmethod - def _validate_kwargs(cls, major, minor, patch, prerelease, build, partial): - if ( - major != int(major) - or minor != cls._coerce(minor, partial) - or patch != cls._coerce(patch, partial) - or prerelease is None and not partial - or build is None and not partial - ): - raise ValueError( - "Invalid kwargs to Version(major=%r, minor=%r, patch=%r, " - "prerelease=%r, build=%r, partial=%r" % ( - major, minor, patch, prerelease, build, partial - )) - if prerelease is not None: - 
cls._validate_identifiers(prerelease, allow_leading_zeroes=False) - if build is not None: - cls._validate_identifiers(build, allow_leading_zeroes=True) - - def __iter__(self): - return iter((self.major, self.minor, self.patch, self.prerelease, self.build)) - - def __str__(self): - version = '%d' % self.major - if self.minor is not None: - version = '%s.%d' % (version, self.minor) - if self.patch is not None: - version = '%s.%d' % (version, self.patch) - - if self.prerelease or (self.partial and self.prerelease == () and self.build is None): - version = '%s-%s' % (version, '.'.join(self.prerelease)) - if self.build or (self.partial and self.build == ()): - version = '%s+%s' % (version, '.'.join(self.build)) - return version - - def __repr__(self): - return '%s(%r%s)' % ( - self.__class__.__name__, - str(self), - ', partial=True' if self.partial else '', - ) - - def __hash__(self): - # We don't include 'partial', since this is strictly equivalent to having - # at least a field being `None`. - return hash((self.major, self.minor, self.patch, self.prerelease, self.build)) - - def _build_precedence_key(self, with_build=False): - """Build a precedence key. - - The "build" component should only be used when sorting an iterable - of versions. - """ - if self.prerelease: - prerelease_key = tuple( - NumericIdentifier(part) if part.isdigit() else AlphaIdentifier(part) - for part in self.prerelease - ) - else: - prerelease_key = ( - MaxIdentifier(), - ) - - if not with_build: - return ( - self.major, - self.minor, - self.patch, - prerelease_key, - ) - - build_key = tuple( - NumericIdentifier(part) if part.isdigit() else AlphaIdentifier(part) - for part in self.build or () - ) - - return ( - self.major, - self.minor, - self.patch, - prerelease_key, - build_key, - ) - - @property - def precedence_key(self): - return self._sort_precedence_key - - def __cmp__(self, other): - if not isinstance(other, self.__class__): - return NotImplemented - if self < other: - return -1 - elif self > other: - return 1 - elif self == other: - return 0 - else: - return NotImplemented - - def __eq__(self, other): - if not isinstance(other, self.__class__): - return NotImplemented - return ( - self.major == other.major - and self.minor == other.minor - and self.patch == other.patch - and (self.prerelease or ()) == (other.prerelease or ()) - and (self.build or ()) == (other.build or ()) - ) - - def __ne__(self, other): - if not isinstance(other, self.__class__): - return NotImplemented - return tuple(self) != tuple(other) - - def __lt__(self, other): - if not isinstance(other, self.__class__): - return NotImplemented - return self._cmp_precedence_key < other._cmp_precedence_key - - def __le__(self, other): - if not isinstance(other, self.__class__): - return NotImplemented - return self._cmp_precedence_key <= other._cmp_precedence_key - - def __gt__(self, other): - if not isinstance(other, self.__class__): - return NotImplemented - return self._cmp_precedence_key > other._cmp_precedence_key - - def __ge__(self, other): - if not isinstance(other, self.__class__): - return NotImplemented - return self._cmp_precedence_key >= other._cmp_precedence_key - - -class SpecItem(object): - """A requirement specification.""" - - KIND_ANY = '*' - KIND_LT = '<' - KIND_LTE = '<=' - KIND_EQUAL = '==' - KIND_SHORTEQ = '=' - KIND_EMPTY = '' - KIND_GTE = '>=' - KIND_GT = '>' - KIND_NEQ = '!=' - KIND_CARET = '^' - KIND_TILDE = '~' - KIND_COMPATIBLE = '~=' - - # Map a kind alias to its full version - KIND_ALIASES = { - KIND_SHORTEQ: KIND_EQUAL, - 
KIND_EMPTY: KIND_EQUAL, - } - - re_spec = re.compile(r'^(<|<=||=|==|>=|>|!=|\^|~|~=)(\d.*)$') - - def __init__(self, requirement_string, _warn=True): - if _warn: - warnings.warn( - "The `SpecItem` class will be removed in 3.0.", - DeprecationWarning, - stacklevel=2, - ) - kind, spec = self.parse(requirement_string) - self.kind = kind - self.spec = spec - self._clause = Spec(requirement_string).clause - - @classmethod - def parse(cls, requirement_string): - if not requirement_string: - raise ValueError("Invalid empty requirement specification: %r" % requirement_string) - - # Special case: the 'any' version spec. - if requirement_string == '*': - return (cls.KIND_ANY, '') - - match = cls.re_spec.match(requirement_string) - if not match: - raise ValueError("Invalid requirement specification: %r" % requirement_string) - - kind, version = match.groups() - if kind in cls.KIND_ALIASES: - kind = cls.KIND_ALIASES[kind] - - spec = Version(version, partial=True) - if spec.build is not None and kind not in (cls.KIND_EQUAL, cls.KIND_NEQ): - raise ValueError( - "Invalid requirement specification %r: build numbers have no ordering." - % requirement_string - ) - return (kind, spec) - - @classmethod - def from_matcher(cls, matcher): - if matcher == Always(): - return cls('*', _warn=False) - elif matcher == Never(): - return cls('<0.0.0-', _warn=False) - elif isinstance(matcher, Range): - return cls('%s%s' % (matcher.operator, matcher.target), _warn=False) - - def match(self, version): - return self._clause.match(version) - - def __str__(self): - return '%s%s' % (self.kind, self.spec) - - def __repr__(self): - return '<SpecItem: %s %r>' % (self.kind, self.spec) - - def __eq__(self, other): - if not isinstance(other, SpecItem): - return NotImplemented - return self.kind == other.kind and self.spec == other.spec - - def __hash__(self): - return hash((self.kind, self.spec)) - - -def compare(v1, v2): - return Version(v1).__cmp__(Version(v2)) - - -def match(spec, version): - return Spec(spec).match(Version(version)) - - -def validate(version_string): - """Validates a version string againt the SemVer specification.""" - try: - Version.parse(version_string) - return True - except ValueError: - return False - - -DEFAULT_SYNTAX = 'simple' - - -class BaseSpec(object): - """A specification of compatible versions. - - Usage: - >>> Spec('>=1.0.0', syntax='npm') - - A version matches a specification if it matches any - of the clauses of that specification. 
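- 
- Illustrative example (simple syntax):
- >>> SimpleSpec('>=0.1.1,<0.3.0').match(Version('0.2.9'))
- True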
- - Internally, a Spec is AnyOf( - AllOf(Matcher, Matcher, Matcher), - AllOf(...), - ) - """ - SYNTAXES = {} - - @classmethod - def register_syntax(cls, subclass): - syntax = subclass.SYNTAX - if syntax is None: - raise ValueError("A Spec needs its SYNTAX field to be set.") - elif syntax in cls.SYNTAXES: - raise ValueError( - "Duplicate syntax for %s: %r, %r" - % (syntax, cls.SYNTAXES[syntax], subclass) - ) - cls.SYNTAXES[syntax] = subclass - return subclass - - def __init__(self, expression): - super(BaseSpec, self).__init__() - self.expression = expression - self.clause = self._parse_to_clause(expression) - - @classmethod - def parse(cls, expression, syntax=DEFAULT_SYNTAX): - """Convert a syntax-specific expression into a BaseSpec instance.""" - return cls.SYNTAXES[syntax](expression) - - @classmethod - def _parse_to_clause(cls, expression): - """Converts an expression to a clause.""" - raise NotImplementedError() - - def filter(self, versions): - """Filter an iterable of versions satisfying the Spec.""" - for version in versions: - if self.match(version): - yield version - - def match(self, version): - """Check whether a Version satisfies the Spec.""" - return self.clause.match(version) - - def select(self, versions): - """Select the best compatible version among an iterable of options.""" - options = list(self.filter(versions)) - if options: - return max(options) - return None - - def __contains__(self, version): - """Whether `version in self`.""" - if isinstance(version, Version): - return self.match(version) - return False - - def __eq__(self, other): - if not isinstance(other, self.__class__): - return NotImplemented - - return self.clause == other.clause - - def __hash__(self): - return hash(self.clause) - - def __str__(self): - return self.expression - - def __repr__(self): - return '<%s: %r>' % (self.__class__.__name__, self.expression) - - -class Clause(object): - __slots__ = [] - - def match(self, version): - raise NotImplementedError() - - def __and__(self, other): - raise NotImplementedError() - - def __or__(self, other): - raise NotImplementedError() - - def __eq__(self, other): - raise NotImplementedError() - - def prettyprint(self, indent='\t'): - """Pretty-print the clause. - """ - return '\n'.join(self._pretty()).replace('\t', indent) - - def _pretty(self): - """Actual pretty-printing logic. - - Yields: - A list of string. Indentation is performed with \t. 
- """ - yield repr(self) - - def __ne__(self, other): - return not self == other - - def simplify(self): - return self - - -class AnyOf(Clause): - __slots__ = ['clauses'] - - def __init__(self, *clauses): - super(AnyOf, self).__init__() - self.clauses = frozenset(clauses) - - def match(self, version): - return any(c.match(version) for c in self.clauses) - - def simplify(self): - subclauses = set() - for clause in self.clauses: - simplified = clause.simplify() - if isinstance(simplified, AnyOf): - subclauses |= simplified.clauses - elif simplified == Never(): - continue - else: - subclauses.add(simplified) - if len(subclauses) == 1: - return subclauses.pop() - return AnyOf(*subclauses) - - def __hash__(self): - return hash((AnyOf, self.clauses)) - - def __iter__(self): - return iter(self.clauses) - - def __eq__(self, other): - return isinstance(other, self.__class__) and self.clauses == other.clauses - - def __and__(self, other): - if isinstance(other, AllOf): - return other & self - elif isinstance(other, Matcher) or isinstance(other, AnyOf): - return AllOf(self, other) - else: - return NotImplemented - - def __or__(self, other): - if isinstance(other, AnyOf): - clauses = list(self.clauses | other.clauses) - elif isinstance(other, Matcher) or isinstance(other, AllOf): - clauses = list(self.clauses | set([other])) - else: - return NotImplemented - return AnyOf(*clauses) - - def __repr__(self): - return 'AnyOf(%s)' % ', '.join(sorted(repr(c) for c in self.clauses)) - - def _pretty(self): - yield 'AnyOF(' - for clause in self.clauses: - lines = list(clause._pretty()) - for line in lines[:-1]: - yield '\t' + line - yield '\t' + lines[-1] + ',' - yield ')' - - -class AllOf(Clause): - __slots__ = ['clauses'] - - def __init__(self, *clauses): - super(AllOf, self).__init__() - self.clauses = frozenset(clauses) - - def match(self, version): - return all(clause.match(version) for clause in self.clauses) - - def simplify(self): - subclauses = set() - for clause in self.clauses: - simplified = clause.simplify() - if isinstance(simplified, AllOf): - subclauses |= simplified.clauses - elif simplified == Always(): - continue - else: - subclauses.add(simplified) - if len(subclauses) == 1: - return subclauses.pop() - return AllOf(*subclauses) - - def __hash__(self): - return hash((AllOf, self.clauses)) - - def __iter__(self): - return iter(self.clauses) - - def __eq__(self, other): - return isinstance(other, self.__class__) and self.clauses == other.clauses - - def __and__(self, other): - if isinstance(other, Matcher) or isinstance(other, AnyOf): - clauses = list(self.clauses | set([other])) - elif isinstance(other, AllOf): - clauses = list(self.clauses | other.clauses) - else: - return NotImplemented - return AllOf(*clauses) - - def __or__(self, other): - if isinstance(other, AnyOf): - return other | self - elif isinstance(other, Matcher): - return AnyOf(self, AllOf(other)) - elif isinstance(other, AllOf): - return AnyOf(self, other) - else: - return NotImplemented - - def __repr__(self): - return 'AllOf(%s)' % ', '.join(sorted(repr(c) for c in self.clauses)) - - def _pretty(self): - yield 'AllOF(' - for clause in self.clauses: - lines = list(clause._pretty()) - for line in lines[:-1]: - yield '\t' + line - yield '\t' + lines[-1] + ',' - yield ')' - - -class Matcher(Clause): - __slots__ = [] - - def __and__(self, other): - if isinstance(other, AllOf): - return other & self - elif isinstance(other, Matcher) or isinstance(other, AnyOf): - return AllOf(self, other) - else: - return NotImplemented - - def 
__or__(self, other): - if isinstance(other, AnyOf): - return other | self - elif isinstance(other, Matcher) or isinstance(other, AllOf): - return AnyOf(self, other) - else: - return NotImplemented - - -class Never(Matcher): - __slots__ = [] - - def match(self, version): - return False - - def __hash__(self): - return hash((Never,)) - - def __eq__(self, other): - return isinstance(other, self.__class__) - - def __and__(self, other): - return self - - def __or__(self, other): - return other - - def __repr__(self): - return 'Never()' - - -class Always(Matcher): - __slots__ = [] - - def match(self, version): - return True - - def __hash__(self): - return hash((Always,)) - - def __eq__(self, other): - return isinstance(other, self.__class__) - - def __and__(self, other): - return other - - def __or__(self, other): - return self - - def __repr__(self): - return 'Always()' - - -class Range(Matcher): - OP_EQ = '==' - OP_GT = '>' - OP_GTE = '>=' - OP_LT = '<' - OP_LTE = '<=' - OP_NEQ = '!=' - - # <1.2.3 matches 1.2.3-a1 - PRERELEASE_ALWAYS = 'always' - # <1.2.3 does not match 1.2.3-a1 - PRERELEASE_NATURAL = 'natural' - # 1.2.3-a1 is only considered if target == 1.2.3-xxx - PRERELEASE_SAMEPATCH = 'same-patch' - - # 1.2.3 matches 1.2.3+* - BUILD_IMPLICIT = 'implicit' - # 1.2.3 matches only 1.2.3, not 1.2.3+4 - BUILD_STRICT = 'strict' - - __slots__ = ['operator', 'target', 'prerelease_policy', 'build_policy'] - - def __init__(self, operator, target, prerelease_policy=PRERELEASE_NATURAL, build_policy=BUILD_IMPLICIT): - super(Range, self).__init__() - if target.build and operator not in (self.OP_EQ, self.OP_NEQ): - raise ValueError( - "Invalid range %s%s: build numbers have no ordering." - % (operator, target)) - self.operator = operator - self.target = target - self.prerelease_policy = prerelease_policy - self.build_policy = self.BUILD_STRICT if target.build else build_policy - - def match(self, version): - if self.build_policy != self.BUILD_STRICT: - version = version.truncate('prerelease') - - if version.prerelease: - same_patch = self.target.truncate() == version.truncate() - - if self.prerelease_policy == self.PRERELEASE_SAMEPATCH and not same_patch: - return False - - if self.operator == self.OP_EQ: - if self.build_policy == self.BUILD_STRICT: - return ( - self.target.truncate('prerelease') == version.truncate('prerelease') - and version.build == self.target.build - ) - return version == self.target - elif self.operator == self.OP_GT: - return version > self.target - elif self.operator == self.OP_GTE: - return version >= self.target - elif self.operator == self.OP_LT: - if ( - version.prerelease - and self.prerelease_policy == self.PRERELEASE_NATURAL - and version.truncate() == self.target.truncate() - and not self.target.prerelease - ): - return False - return version < self.target - elif self.operator == self.OP_LTE: - return version <= self.target - else: - assert self.operator == self.OP_NEQ - if self.build_policy == self.BUILD_STRICT: - return not ( - self.target.truncate('prerelease') == version.truncate('prerelease') - and version.build == self.target.build - ) - - if ( - version.prerelease - and self.prerelease_policy == self.PRERELEASE_NATURAL - and version.truncate() == self.target.truncate() - and not self.target.prerelease - ): - return False - return version != self.target - - def __hash__(self): - return hash((Range, self.operator, self.target, self.prerelease_policy)) - - def __eq__(self, other): - return ( - isinstance(other, self.__class__) - and self.operator == other.operator - 
and self.target == other.target - and self.prerelease_policy == other.prerelease_policy - ) - - def __str__(self): - return '%s%s' % (self.operator, self.target) - - def __repr__(self): - policy_part = ( - '' if self.prerelease_policy == self.PRERELEASE_NATURAL - else ', prerelease_policy=%r' % self.prerelease_policy - ) + ( - '' if self.build_policy == self.BUILD_IMPLICIT - else ', build_policy=%r' % self.build_policy - ) - return 'Range(%r, %r%s)' % ( - self.operator, - self.target, - policy_part, - ) - - -@BaseSpec.register_syntax -class SimpleSpec(BaseSpec): - - SYNTAX = 'simple' - - @classmethod - def _parse_to_clause(cls, expression): - return cls.Parser.parse(expression) - - class Parser: - NUMBER = r'\*|0|[1-9][0-9]*' - NAIVE_SPEC = re.compile(r"""^ - (?P<op><|<=||=|==|>=|>|!=|\^|~|~=) - (?P<major>{nb})(?:\.(?P<minor>{nb})(?:\.(?P<patch>{nb}))?)? - (?:-(?P<prerel>[a-z0-9A-Z.-]*))? - (?:\+(?P<build>[a-z0-9A-Z.-]*))? - $ - """.format(nb=NUMBER), - re.VERBOSE, - ) - - @classmethod - def parse(cls, expression): - blocks = expression.split(',') - clause = Always() - for block in blocks: - if not cls.NAIVE_SPEC.match(block): - raise ValueError("Invalid simple block %r" % block) - clause &= cls.parse_block(block) - - return clause - - PREFIX_CARET = '^' - PREFIX_TILDE = '~' - PREFIX_COMPATIBLE = '~=' - PREFIX_EQ = '==' - PREFIX_NEQ = '!=' - PREFIX_GT = '>' - PREFIX_GTE = '>=' - PREFIX_LT = '<' - PREFIX_LTE = '<=' - - PREFIX_ALIASES = { - '=': PREFIX_EQ, - '': PREFIX_EQ, - } - - EMPTY_VALUES = ['*', 'x', 'X', None] - - @classmethod - def parse_block(cls, expr): - if not cls.NAIVE_SPEC.match(expr): - raise ValueError("Invalid simple spec component: %r" % expr) - prefix, major_t, minor_t, patch_t, prerel, build = cls.NAIVE_SPEC.match(expr).groups() - prefix = cls.PREFIX_ALIASES.get(prefix, prefix) - - major = None if major_t in cls.EMPTY_VALUES else int(major_t) - minor = None if minor_t in cls.EMPTY_VALUES else int(minor_t) - patch = None if patch_t in cls.EMPTY_VALUES else int(patch_t) - - if major is None: # '*' - target = Version(major=0, minor=0, patch=0) - if prefix not in (cls.PREFIX_EQ, cls.PREFIX_GTE): - raise ValueError("Invalid simple spec: %r" % expr) - elif minor is None: - target = Version(major=major, minor=0, patch=0) - elif patch is None: - target = Version(major=major, minor=minor, patch=0) - else: - target = Version( - major=major, - minor=minor, - patch=patch, - prerelease=prerel.split('.') if prerel else (), - build=build.split('.') if build else (), - ) - - if (major is None or minor is None or patch is None) and (prerel or build): - raise ValueError("Invalid simple spec: %r" % expr) - - if build is not None and prefix not in (cls.PREFIX_EQ, cls.PREFIX_NEQ): - raise ValueError("Invalid simple spec: %r" % expr) - - if prefix == cls.PREFIX_CARET: - # Accept anything with the same most-significant digit - if target.major: - high = target.next_major() - elif target.minor: - high = target.next_minor() - else: - high = target.next_patch() - return Range(Range.OP_GTE, target) & Range(Range.OP_LT, high) - - elif prefix == cls.PREFIX_TILDE: - assert major is not None - # Accept any higher patch in the same minor - # Might go higher if the initial version was a partial - if minor is None: - high = target.next_major() - else: - high = target.next_minor() - return Range(Range.OP_GTE, target) & Range(Range.OP_LT, high) - - elif prefix == cls.PREFIX_COMPATIBLE: - assert major is not None - # ~1 is 1.0.0..2.0.0; ~=2.2 is 2.2.0..3.0.0; ~=1.4.5 is 1.4.5..1.5.0 - if minor is None or 
patch is None: - # We got a partial version - high = target.next_major() - else: - high = target.next_minor() - return Range(Range.OP_GTE, target) & Range(Range.OP_LT, high) - - elif prefix == cls.PREFIX_EQ: - if major is None: - return Range(Range.OP_GTE, target) - elif minor is None: - return Range(Range.OP_GTE, target) & Range(Range.OP_LT, target.next_major()) - elif patch is None: - return Range(Range.OP_GTE, target) & Range(Range.OP_LT, target.next_minor()) - elif build == '': - return Range(Range.OP_EQ, target, build_policy=Range.BUILD_STRICT) - else: - return Range(Range.OP_EQ, target) - - elif prefix == cls.PREFIX_NEQ: - assert major is not None - if minor is None: - # !=1.x => <1.0.0 || >=2.0.0 - return Range(Range.OP_LT, target) | Range(Range.OP_GTE, target.next_major()) - elif patch is None: - # !=1.2.x => <1.2.0 || >=1.3.0 - return Range(Range.OP_LT, target) | Range(Range.OP_GTE, target.next_minor()) - elif prerel == '': - # !=1.2.3- - return Range(Range.OP_NEQ, target, prerelease_policy=Range.PRERELEASE_ALWAYS) - elif build == '': - # !=1.2.3+ or !=1.2.3-a2+ - return Range(Range.OP_NEQ, target, build_policy=Range.BUILD_STRICT) - else: - return Range(Range.OP_NEQ, target) - - elif prefix == cls.PREFIX_GT: - assert major is not None - if minor is None: - # >1.x => >=2.0 - return Range(Range.OP_GTE, target.next_major()) - elif patch is None: - return Range(Range.OP_GTE, target.next_minor()) - else: - return Range(Range.OP_GT, target) - - elif prefix == cls.PREFIX_GTE: - return Range(Range.OP_GTE, target) - - elif prefix == cls.PREFIX_LT: - assert major is not None - if prerel == '': - # <1.2.3- - return Range(Range.OP_LT, target, prerelease_policy=Range.PRERELEASE_ALWAYS) - return Range(Range.OP_LT, target) - - else: - assert prefix == cls.PREFIX_LTE - assert major is not None - if minor is None: - # <=1.x => <2.0 - return Range(Range.OP_LT, target.next_major()) - elif patch is None: - return Range(Range.OP_LT, target.next_minor()) - else: - return Range(Range.OP_LTE, target) - - -class LegacySpec(SimpleSpec): - def __init__(self, *expressions): - warnings.warn( - "The Spec() class will be removed in 3.1; use SimpleSpec() instead.", - PendingDeprecationWarning, - stacklevel=2, - ) - - if len(expressions) > 1: - warnings.warn( - "Passing 2+ arguments to SimpleSpec will be removed in 3.0; concatenate them with ',' instead.", - DeprecationWarning, - stacklevel=2, - ) - expression = ','.join(expressions) - super(LegacySpec, self).__init__(expression) - - @property - def specs(self): - return list(self) - - def __iter__(self): - warnings.warn( - "Iterating over the components of a SimpleSpec object will be removed in 3.0.", - DeprecationWarning, - stacklevel=2, - ) - try: - clauses = list(self.clause) - except TypeError: # Not an iterable - clauses = [self.clause] - for clause in clauses: - yield SpecItem.from_matcher(clause) - - -Spec = LegacySpec - - -@BaseSpec.register_syntax -class NpmSpec(BaseSpec): - SYNTAX = 'npm' - - @classmethod - def _parse_to_clause(cls, expression): - return cls.Parser.parse(expression) - - class Parser: - JOINER = '||' - HYPHEN = ' - ' - - NUMBER = r'x|X|\*|0|[1-9][0-9]*' - PART = r'[a-zA-Z0-9.-]*' - NPM_SPEC_BLOCK = re.compile(r""" - ^(?:v)? # Strip optional initial v - (?P<op><|<=|>=|>|=|\^|~|) # Operator, can be empty - (?P<major>{nb})(?:\.(?P<minor>{nb})(?:\.(?P<patch>{nb}))?)? - (?:-(?P<prerel>{part}))? # Optional re-release - (?:\+(?P<build>{part}))? 
# Optional build - $""".format(nb=NUMBER, part=PART), - re.VERBOSE, - ) - - @classmethod - def range(cls, operator, target): - return Range(operator, target, prerelease_policy=Range.PRERELEASE_SAMEPATCH) - - @classmethod - def parse(cls, expression): - result = Never() - groups = expression.split(cls.JOINER) - for group in groups: - group = group.strip() - if not group: - group = '>=0.0.0' - - subclauses = [] - if cls.HYPHEN in group: - low, high = group.split(cls.HYPHEN, 2) - subclauses = cls.parse_simple('>=' + low) + cls.parse_simple('<=' + high) - - else: - blocks = group.split(' ') - for block in blocks: - if not cls.NPM_SPEC_BLOCK.match(block): - raise ValueError("Invalid NPM block in %r: %r" % (expression, block)) - - subclauses.extend(cls.parse_simple(block)) - - prerelease_clauses = [] - non_prerel_clauses = [] - for clause in subclauses: - if clause.target.prerelease: - if clause.operator in (Range.OP_GT, Range.OP_GTE): - prerelease_clauses.append(Range( - operator=Range.OP_LT, - target=Version( - major=clause.target.major, - minor=clause.target.minor, - patch=clause.target.patch + 1, - ), - prerelease_policy=Range.PRERELEASE_ALWAYS, - )) - elif clause.operator in (Range.OP_LT, Range.OP_LTE): - prerelease_clauses.append(Range( - operator=Range.OP_GTE, - target=Version( - major=clause.target.major, - minor=clause.target.minor, - patch=0, - prerelease=(), - ), - prerelease_policy=Range.PRERELEASE_ALWAYS, - )) - prerelease_clauses.append(clause) - non_prerel_clauses.append(cls.range( - operator=clause.operator, - target=clause.target.truncate(), - )) - else: - non_prerel_clauses.append(clause) - if prerelease_clauses: - result |= AllOf(*prerelease_clauses) - result |= AllOf(*non_prerel_clauses) - - return result - - PREFIX_CARET = '^' - PREFIX_TILDE = '~' - PREFIX_EQ = '=' - PREFIX_GT = '>' - PREFIX_GTE = '>=' - PREFIX_LT = '<' - PREFIX_LTE = '<=' - - PREFIX_ALIASES = { - '': PREFIX_EQ, - } - - PREFIX_TO_OPERATOR = { - PREFIX_EQ: Range.OP_EQ, - PREFIX_LT: Range.OP_LT, - PREFIX_LTE: Range.OP_LTE, - PREFIX_GTE: Range.OP_GTE, - PREFIX_GT: Range.OP_GT, - } - - EMPTY_VALUES = ['*', 'x', 'X', None] - - @classmethod - def parse_simple(cls, simple): - match = cls.NPM_SPEC_BLOCK.match(simple) - - prefix, major_t, minor_t, patch_t, prerel, build = match.groups() - - prefix = cls.PREFIX_ALIASES.get(prefix, prefix) - major = None if major_t in cls.EMPTY_VALUES else int(major_t) - minor = None if minor_t in cls.EMPTY_VALUES else int(minor_t) - patch = None if patch_t in cls.EMPTY_VALUES else int(patch_t) - - if build is not None and prefix not in [cls.PREFIX_EQ]: - # Ignore the 'build' part when not comparing to a specific part. 
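- # (Build metadata has no ordering under npm-style matching, so e.g.
- # '>=1.2.3+abc' is treated like '>=1.2.3'; only an exact '=' comparison
- # keeps the build part.)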
- build = None - - if major is None: # '*', 'x', 'X' - target = Version(major=0, minor=0, patch=0) - if prefix not in [cls.PREFIX_EQ, cls.PREFIX_GTE]: - raise ValueError("Invalid expression %r" % simple) - prefix = cls.PREFIX_GTE - elif minor is None: - target = Version(major=major, minor=0, patch=0) - elif patch is None: - target = Version(major=major, minor=minor, patch=0) - else: - target = Version( - major=major, - minor=minor, - patch=patch, - prerelease=prerel.split('.') if prerel else (), - build=build.split('.') if build else (), - ) - - if (major is None or minor is None or patch is None) and (prerel or build): - raise ValueError("Invalid NPM spec: %r" % simple) - - if prefix == cls.PREFIX_CARET: - if target.major: # ^1.2.4 => >=1.2.4 <2.0.0 ; ^1.x => >=1.0.0 <2.0.0 - high = target.truncate().next_major() - elif target.minor: # ^0.1.2 => >=0.1.2 <0.2.0 - high = target.truncate().next_minor() - elif minor is None: # ^0.x => >=0.0.0 <1.0.0 - high = target.truncate().next_major() - elif patch is None: # ^0.2.x => >=0.2.0 <0.3.0 - high = target.truncate().next_minor() - else: # ^0.0.1 => >=0.0.1 <0.0.2 - high = target.truncate().next_patch() - return [cls.range(Range.OP_GTE, target), cls.range(Range.OP_LT, high)] - - elif prefix == cls.PREFIX_TILDE: - assert major is not None - if minor is None: # ~1.x => >=1.0.0 <2.0.0 - high = target.next_major() - else: # ~1.2.x => >=1.2.0 <1.3.0; ~1.2.3 => >=1.2.3 <1.3.0 - high = target.next_minor() - return [cls.range(Range.OP_GTE, target), cls.range(Range.OP_LT, high)] - - elif prefix == cls.PREFIX_EQ: - if major is None: - return [cls.range(Range.OP_GTE, target)] - elif minor is None: - return [cls.range(Range.OP_GTE, target), cls.range(Range.OP_LT, target.next_major())] - elif patch is None: - return [cls.range(Range.OP_GTE, target), cls.range(Range.OP_LT, target.next_minor())] - else: - return [cls.range(Range.OP_EQ, target)] - - elif prefix == cls.PREFIX_GT: - assert major is not None - if minor is None: # >1.x - return [cls.range(Range.OP_GTE, target.next_major())] - elif patch is None: # >1.2.x => >=1.3.0 - return [cls.range(Range.OP_GTE, target.next_minor())] - else: - return [cls.range(Range.OP_GT, target)] - - elif prefix == cls.PREFIX_GTE: - return [cls.range(Range.OP_GTE, target)] - - elif prefix == cls.PREFIX_LT: - assert major is not None - return [cls.range(Range.OP_LT, target)] - - else: - assert prefix == cls.PREFIX_LTE - assert major is not None - if minor is None: # <=1.x => <2.0.0 - return [cls.range(Range.OP_LT, target.next_major())] - elif patch is None: # <=1.2.x => <1.3.0 - return [cls.range(Range.OP_LT, target.next_minor())] - else: - return [cls.range(Range.OP_LTE, target)] diff --git a/spaces/Realcat/image-matching-webui/third_party/Roma/roma/losses/robust_loss.py b/spaces/Realcat/image-matching-webui/third_party/Roma/roma/losses/robust_loss.py deleted file mode 100644 index cd9fd5bbc9c2d01bb6dd40823e350b588bd598b3..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/Roma/roma/losses/robust_loss.py +++ /dev/null @@ -1,222 +0,0 @@ -from einops.einops import rearrange -import torch -import torch.nn as nn -import torch.nn.functional as F -from roma.utils.utils import get_gt_warp -import wandb -import roma -import math - - -class RobustLosses(nn.Module): - def __init__( - self, - robust=False, - center_coords=False, - scale_normalize=False, - ce_weight=0.01, - local_loss=True, - local_dist=4.0, - local_largest_scale=8, - smooth_mask=False, - depth_interpolation_mode="bilinear", - 
mask_depth_loss=False, - relative_depth_error_threshold=0.05, - alpha=1.0, - c=1e-3, - ): - super().__init__() - self.robust = robust # measured in pixels - self.center_coords = center_coords - self.scale_normalize = scale_normalize - self.ce_weight = ce_weight - self.local_loss = local_loss - self.local_dist = local_dist - self.local_largest_scale = local_largest_scale - self.smooth_mask = smooth_mask - self.depth_interpolation_mode = depth_interpolation_mode - self.mask_depth_loss = mask_depth_loss - self.relative_depth_error_threshold = relative_depth_error_threshold - self.avg_overlap = dict() - self.alpha = alpha - self.c = c - - def gm_cls_loss(self, x2, prob, scale_gm_cls, gm_certainty, scale): - with torch.no_grad(): - B, C, H, W = scale_gm_cls.shape - device = x2.device - cls_res = round(math.sqrt(C)) - G = torch.meshgrid( - *[ - torch.linspace( - -1 + 1 / cls_res, 1 - 1 / cls_res, steps=cls_res, device=device - ) - for _ in range(2) - ] - ) - G = torch.stack((G[1], G[0]), dim=-1).reshape(C, 2) - GT = ( - (G[None, :, None, None, :] - x2[:, None]) - .norm(dim=-1) - .min(dim=1) - .indices - ) - cls_loss = F.cross_entropy(scale_gm_cls, GT, reduction="none")[prob > 0.99] - if not torch.any(cls_loss): - cls_loss = certainty_loss * 0.0 # Prevent issues where prob is 0 everywhere - - certainty_loss = F.binary_cross_entropy_with_logits(gm_certainty[:, 0], prob) - losses = { - f"gm_certainty_loss_{scale}": certainty_loss.mean(), - f"gm_cls_loss_{scale}": cls_loss.mean(), - } - wandb.log(losses, step=roma.GLOBAL_STEP) - return losses - - def delta_cls_loss( - self, x2, prob, flow_pre_delta, delta_cls, certainty, scale, offset_scale - ): - with torch.no_grad(): - B, C, H, W = delta_cls.shape - device = x2.device - cls_res = round(math.sqrt(C)) - G = torch.meshgrid( - *[ - torch.linspace( - -1 + 1 / cls_res, 1 - 1 / cls_res, steps=cls_res, device=device - ) - for _ in range(2) - ] - ) - G = torch.stack((G[1], G[0]), dim=-1).reshape(C, 2) * offset_scale - GT = ( - (G[None, :, None, None, :] + flow_pre_delta[:, None] - x2[:, None]) - .norm(dim=-1) - .min(dim=1) - .indices - ) - cls_loss = F.cross_entropy(delta_cls, GT, reduction="none")[prob > 0.99] - if not torch.any(cls_loss): - cls_loss = certainty_loss * 0.0 # Prevent issues where prob is 0 everywhere - certainty_loss = F.binary_cross_entropy_with_logits(certainty[:, 0], prob) - losses = { - f"delta_certainty_loss_{scale}": certainty_loss.mean(), - f"delta_cls_loss_{scale}": cls_loss.mean(), - } - wandb.log(losses, step=roma.GLOBAL_STEP) - return losses - - def regression_loss(self, x2, prob, flow, certainty, scale, eps=1e-8, mode="delta"): - epe = (flow.permute(0, 2, 3, 1) - x2).norm(dim=-1) - if scale == 1: - pck_05 = (epe[prob > 0.99] < 0.5 * (2 / 512)).float().mean() - wandb.log({"train_pck_05": pck_05}, step=roma.GLOBAL_STEP) - - ce_loss = F.binary_cross_entropy_with_logits(certainty[:, 0], prob) - a = self.alpha - cs = self.c * scale - x = epe[prob > 0.99] - reg_loss = cs**a * ((x / (cs)) ** 2 + 1**2) ** (a / 2) - if not torch.any(reg_loss): - reg_loss = ce_loss * 0.0 # Prevent issues where prob is 0 everywhere - losses = { - f"{mode}_certainty_loss_{scale}": ce_loss.mean(), - f"{mode}_regression_loss_{scale}": reg_loss.mean(), - } - wandb.log(losses, step=roma.GLOBAL_STEP) - return losses - - def forward(self, corresps, batch): - scales = list(corresps.keys()) - tot_loss = 0.0 - # scale_weights due to differences in scale for regression gradients and classification gradients - scale_weights = {1: 1, 2: 1, 4: 1, 8: 1, 16: 1} - for 
scale in scales: - scale_corresps = corresps[scale] - ( - scale_certainty, - flow_pre_delta, - delta_cls, - offset_scale, - scale_gm_cls, - scale_gm_certainty, - flow, - scale_gm_flow, - ) = ( - scale_corresps["certainty"], - scale_corresps["flow_pre_delta"], - scale_corresps.get("delta_cls"), - scale_corresps.get("offset_scale"), - scale_corresps.get("gm_cls"), - scale_corresps.get("gm_certainty"), - scale_corresps["flow"], - scale_corresps.get("gm_flow"), - ) - flow_pre_delta = rearrange(flow_pre_delta, "b d h w -> b h w d") - b, h, w, d = flow_pre_delta.shape - gt_warp, gt_prob = get_gt_warp( - batch["im_A_depth"], - batch["im_B_depth"], - batch["T_1to2"], - batch["K1"], - batch["K2"], - H=h, - W=w, - ) - x2 = gt_warp.float() - prob = gt_prob - - if self.local_largest_scale >= scale: - prob = prob * ( - F.interpolate(prev_epe[:, None], size=(h, w), mode="nearest-exact")[ - :, 0 - ] - < (2 / 512) * (self.local_dist[scale] * scale) - ) - - if scale_gm_cls is not None: - gm_cls_losses = self.gm_cls_loss( - x2, prob, scale_gm_cls, scale_gm_certainty, scale - ) - gm_loss = ( - self.ce_weight * gm_cls_losses[f"gm_certainty_loss_{scale}"] - + gm_cls_losses[f"gm_cls_loss_{scale}"] - ) - tot_loss = tot_loss + scale_weights[scale] * gm_loss - elif scale_gm_flow is not None: - gm_flow_losses = self.regression_loss( - x2, prob, scale_gm_flow, scale_gm_certainty, scale, mode="gm" - ) - gm_loss = ( - self.ce_weight * gm_flow_losses[f"gm_certainty_loss_{scale}"] - + gm_flow_losses[f"gm_regression_loss_{scale}"] - ) - tot_loss = tot_loss + scale_weights[scale] * gm_loss - - if delta_cls is not None: - delta_cls_losses = self.delta_cls_loss( - x2, - prob, - flow_pre_delta, - delta_cls, - scale_certainty, - scale, - offset_scale, - ) - delta_cls_loss = ( - self.ce_weight * delta_cls_losses[f"delta_certainty_loss_{scale}"] - + delta_cls_losses[f"delta_cls_loss_{scale}"] - ) - tot_loss = tot_loss + scale_weights[scale] * delta_cls_loss - else: - delta_regression_losses = self.regression_loss( - x2, prob, flow, scale_certainty, scale - ) - reg_loss = ( - self.ce_weight - * delta_regression_losses[f"delta_certainty_loss_{scale}"] - + delta_regression_losses[f"delta_regression_loss_{scale}"] - ) - tot_loss = tot_loss + scale_weights[scale] * reg_loss - prev_epe = (flow.permute(0, 2, 3, 1) - x2).norm(dim=-1).detach() - return tot_loss diff --git a/spaces/Realcat/image-matching-webui/third_party/TopicFM/viz/methods/__init__.py b/spaces/Realcat/image-matching-webui/third_party/TopicFM/viz/methods/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/SShaik/SS-02-H5-AR-VR-IOT/index.html b/spaces/SShaik/SS-02-H5-AR-VR-IOT/index.html deleted file mode 100644 index f64aad6580cd12cbdbb0bcc0321ed7a6486d2a19..0000000000000000000000000000000000000000 --- a/spaces/SShaik/SS-02-H5-AR-VR-IOT/index.html +++ /dev/null @@ -1,66 +0,0 @@ -<!DOCTYPE html> -<html> - <head> - <title>Dynamic Lights - A-Frame - - - - - - - - - - - - - - - - - - - - - - - - - - - \ No newline at end of file diff --git a/spaces/Sakukaze/VITS-Umamusume-voice-synthesizer/attentions.py b/spaces/Sakukaze/VITS-Umamusume-voice-synthesizer/attentions.py deleted file mode 100644 index 4e0b0c1fd48c962e21e1fbe60b23fc574927435c..0000000000000000000000000000000000000000 --- a/spaces/Sakukaze/VITS-Umamusume-voice-synthesizer/attentions.py +++ /dev/null @@ -1,303 +0,0 @@ -import copy -import math -import numpy as np -import torch -from torch import nn -from torch.nn import 
functional as F - -import commons -import modules -from modules import LayerNorm - - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, 
p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." 
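- # Note: triu(-block_length).tril(block_length) keeps a band of ones of
- # width 2 * block_length + 1 around the diagonal, so each query may only
- # attend to keys within `block_length` positions; scores outside the
- # band are masked to -1e4 below.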
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/Sapphire-356/Video2MC/common/model.py b/spaces/Sapphire-356/Video2MC/common/model.py deleted file mode 100644 index 8be5a30e9847cdf7a93403c1a5f8a4e497946829..0000000000000000000000000000000000000000 --- a/spaces/Sapphire-356/Video2MC/common/model.py +++ /dev/null @@ -1,200 +0,0 @@ -# Copyright (c) 2018-present, Facebook, Inc. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -# - -import torch.nn as nn - - -class TemporalModelBase(nn.Module): - """ - Do not instantiate this class. - """ - - def __init__(self, num_joints_in, in_features, num_joints_out, - filter_widths, causal, dropout, channels): - super().__init__() - - # Validate input - for fw in filter_widths: - assert fw % 2 != 0, 'Only odd filter widths are supported' - - self.num_joints_in = num_joints_in - self.in_features = in_features - self.num_joints_out = num_joints_out - self.filter_widths = filter_widths - - self.drop = nn.Dropout(dropout) - self.relu = nn.ReLU(inplace=True) - - self.pad = [filter_widths[0] // 2] - self.expand_bn = nn.BatchNorm1d(channels, momentum=0.1) - self.shrink = nn.Conv1d(channels, num_joints_out * 3, 1) - - def set_bn_momentum(self, momentum): - self.expand_bn.momentum = momentum - for bn in self.layers_bn: - bn.momentum = momentum - - def receptive_field(self): - """ - Return the total receptive field of this model as # of frames. - """ - frames = 0 - for f in self.pad: - frames += f - return 1 + 2 * frames - - def total_causal_shift(self): - """ - Return the asymmetric offset for sequence padding. - The returned value is typically 0 if causal convolutions are disabled, - otherwise it is half the receptive field. 
- """ - frames = self.causal_shift[0] - next_dilation = self.filter_widths[0] - for i in range(1, len(self.filter_widths)): - frames += self.causal_shift[i] * next_dilation - next_dilation *= self.filter_widths[i] - return frames - - def forward(self, x): - assert len(x.shape) == 4 - assert x.shape[-2] == self.num_joints_in - assert x.shape[-1] == self.in_features - - sz = x.shape[:3] - x = x.view(x.shape[0], x.shape[1], -1) - x = x.permute(0, 2, 1) - - x = self._forward_blocks(x) - - x = x.permute(0, 2, 1) - x = x.view(sz[0], -1, self.num_joints_out, 3) - - return x - - -class TemporalModel(TemporalModelBase): - """ - Reference 3D pose estimation model with temporal convolutions. - This implementation can be used for all use-cases. - """ - - def __init__(self, num_joints_in, in_features, num_joints_out, - filter_widths, causal=False, dropout=0.25, channels=1024, dense=False): - """ - Initialize this model. - - Arguments: - num_joints_in -- number of input joints (e.g. 17 for Human3.6M) - in_features -- number of input features for each joint (typically 2 for 2D input) - num_joints_out -- number of output joints (can be different than input) - filter_widths -- list of convolution widths, which also determines the # of blocks and receptive field - causal -- use causal convolutions instead of symmetric convolutions (for real-time applications) - dropout -- dropout probability - channels -- number of convolution channels - dense -- use regular dense convolutions instead of dilated convolutions (ablation experiment) - """ - super().__init__(num_joints_in, in_features, num_joints_out, filter_widths, causal, dropout, channels) - - self.expand_conv = nn.Conv1d(num_joints_in * in_features, channels, filter_widths[0], bias=False) - - layers_conv = [] - layers_bn = [] - - self.causal_shift = [(filter_widths[0]) // 2 if causal else 0] - next_dilation = filter_widths[0] - for i in range(1, len(filter_widths)): - self.pad.append((filter_widths[i] - 1) * next_dilation // 2) - self.causal_shift.append((filter_widths[i] // 2 * next_dilation) if causal else 0) - - layers_conv.append(nn.Conv1d(channels, channels, - filter_widths[i] if not dense else (2 * self.pad[-1] + 1), - dilation=next_dilation if not dense else 1, - bias=False)) - layers_bn.append(nn.BatchNorm1d(channels, momentum=0.1)) - layers_conv.append(nn.Conv1d(channels, channels, 1, dilation=1, bias=False)) - layers_bn.append(nn.BatchNorm1d(channels, momentum=0.1)) - - next_dilation *= filter_widths[i] - - self.layers_conv = nn.ModuleList(layers_conv) - self.layers_bn = nn.ModuleList(layers_bn) - - def _forward_blocks(self, x): - x = self.drop(self.relu(self.expand_bn(self.expand_conv(x)))) - - for i in range(len(self.pad) - 1): - pad = self.pad[i + 1] - shift = self.causal_shift[i + 1] - # clip - res = x[:, :, pad + shift: x.shape[2] - pad + shift] - - x = self.drop(self.relu(self.layers_bn[2 * i](self.layers_conv[2 * i](x)))) - x = res + self.drop(self.relu(self.layers_bn[2 * i + 1](self.layers_conv[2 * i + 1](x)))) - - x = self.shrink(x) - return x - - -class TemporalModelOptimized1f(TemporalModelBase): - """ - 3D pose estimation model optimized for single-frame batching, i.e. - where batches have input length = receptive field, and output length = 1. - This scenario is only used for training when stride == 1. - - This implementation replaces dilated convolutions with strided convolutions - to avoid generating unused intermediate results. The weights are interchangeable - with the reference implementation. 
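# A worked example (a sketch mirroring receptive_field above; the helper name
# is illustrative) of how filter_widths set the receptive field:
# pad[0] = fw[0] // 2, then pad[i] = (fw[i] - 1) * dilation // 2 with the
# dilation multiplying by fw[i] at each block, and the total is 1 + 2 * sum(pad).
def receptive_field_frames(filter_widths):
    pads = [filter_widths[0] // 2]
    dilation = filter_widths[0]
    for fw in filter_widths[1:]:
        pads.append((fw - 1) * dilation // 2)
        dilation *= fw
    return 1 + 2 * sum(pads)

print(receptive_field_frames([3, 3, 3]))        # 27 frames
print(receptive_field_frames([3, 3, 3, 3, 3]))  # 243 frames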
- """ - - def __init__(self, num_joints_in, in_features, num_joints_out, - filter_widths, causal=False, dropout=0.25, channels=1024): - """ - Initialize this model. - - Arguments: - num_joints_in -- number of input joints (e.g. 17 for Human3.6M) - in_features -- number of input features for each joint (typically 2 for 2D input) - num_joints_out -- number of output joints (can be different than input) - filter_widths -- list of convolution widths, which also determines the # of blocks and receptive field - causal -- use causal convolutions instead of symmetric convolutions (for real-time applications) - dropout -- dropout probability - channels -- number of convolution channels - """ - super().__init__(num_joints_in, in_features, num_joints_out, filter_widths, causal, dropout, channels) - - self.expand_conv = nn.Conv1d(num_joints_in * in_features, channels, filter_widths[0], stride=filter_widths[0], bias=False) - - layers_conv = [] - layers_bn = [] - - self.causal_shift = [(filter_widths[0] // 2) if causal else 0] - next_dilation = filter_widths[0] - for i in range(1, len(filter_widths)): - self.pad.append((filter_widths[i] - 1) * next_dilation // 2) - self.causal_shift.append((filter_widths[i] // 2) if causal else 0) - - layers_conv.append(nn.Conv1d(channels, channels, filter_widths[i], stride=filter_widths[i], bias=False)) - layers_bn.append(nn.BatchNorm1d(channels, momentum=0.1)) - layers_conv.append(nn.Conv1d(channels, channels, 1, dilation=1, bias=False)) - layers_bn.append(nn.BatchNorm1d(channels, momentum=0.1)) - next_dilation *= filter_widths[i] - - self.layers_conv = nn.ModuleList(layers_conv) - self.layers_bn = nn.ModuleList(layers_bn) - - def _forward_blocks(self, x): - x = self.drop(self.relu(self.expand_bn(self.expand_conv(x)))) - - for i in range(len(self.pad) - 1): - res = x[:, :, self.causal_shift[i + 1] + self.filter_widths[i + 1] // 2:: self.filter_widths[i + 1]] - - x = self.drop(self.relu(self.layers_bn[2 * i](self.layers_conv[2 * i](x)))) - x = res + self.drop(self.relu(self.layers_bn[2 * i + 1](self.layers_conv[2 * i + 1](x)))) - - x = self.shrink(x) - return x diff --git a/spaces/Sapphire-356/Video2MC/joints_detectors/openpose/main.py b/spaces/Sapphire-356/Video2MC/joints_detectors/openpose/main.py deleted file mode 100644 index a2f33174c1ced8b53bd46e05c4253415aec71a63..0000000000000000000000000000000000000000 --- a/spaces/Sapphire-356/Video2MC/joints_detectors/openpose/main.py +++ /dev/null @@ -1,126 +0,0 @@ -import os -import sys - -import cv2 - -dir_path = os.path.dirname(os.path.realpath(__file__)) -sys.path.insert(0, dir_path) -import ipdb; - -pdb = ipdb.set_trace -import argparse -from tqdm import tqdm -from utils import convert -import numpy as np - -sys.path.remove(dir_path) - -try: - from openpose import pyopenpose as op -except ImportError as e: - print('Error: OpenPose library could not be found. Did you enable `BUILD_PYTHON` in CMake and have this Python script in the right folder?') - raise e - -# Flags -parser = argparse.ArgumentParser() -parser.add_argument("--image_path", default="../../examples/media/COCO_val2014_000000000192.jpg", - help="Process an image. 
Read all standard formats (jpg, png, bmp, etc.).") -args = parser.parse_known_args() - -params = dict() -cur_dir = os.path.dirname(os.path.abspath(__file__)) -params["model_folder"] = cur_dir + "/models/" -params['tracking'] = 5 -params['number_people_max'] = 1 - - -# params['num_gpu'] = 1 -# params['num_gpu_start'] = 1 -# import ipdb;ipdb.set_trace() - - -def load_model(): - try: - opWrapper = op.WrapperPython() - opWrapper.configure(params) - opWrapper.start() - except Exception as e: - print(e) - sys.exit(-1) - - return opWrapper - - -def test_video(model, video_name=0): - opWrapper = model - - cam = cv2.VideoCapture(video_name) - # warm up - for i in range(5): - datum = op.Datum() - _, imageToProcess = cam.read() - datum.cvInputData = imageToProcess - opWrapper.emplaceAndPop([datum]) - - for i in tqdm(range(2000)): - datum = op.Datum() - _, imageToProcess = cam.read() - datum.cvInputData = imageToProcess - opWrapper.emplaceAndPop([datum]) - - # Display Image - # print("Body keypoints: \n" + str(datum.poseKeypoints)) - # cv2.imshow("OpenPose 1.4.0 - Tutorial Python API", datum.cvOutputData) - # cv2.waitKey(10) - # cv2.destroyAllWindows() - - -def generate_kpts(video_name): - kpt_results = [] - - cap = cv2.VideoCapture(video_name) - length = int(cap.get(cv2.CAP_PROP_FRAME_COUNT)) - opWrapper = load_model() - for i in tqdm(range(length)): - - try: - datum = op.Datum() - _, imageToProcess = cap.read() - datum.cvInputData = imageToProcess - opWrapper.emplaceAndPop([datum]) - results = datum.poseKeypoints - - assert len(results) == 1, 'videopose3D only support one pserson restruction' - # 25 to 17 - kpts = convert(results[0]) - kpt_results.append(kpts) - except Exception as e: - print(e) - - # pose processes - result = np.array(kpt_results) - - # # save - # name = '/home/xyliu/experiments/VideoPose3D/data/tmp.npz' - # kpts = result.astype(np.float32) - # print('kpts npz save in ', name) - # np.savez_compressed(name, kpts=kpts) - return result - - -def generate_frame_kpt(frame, opWrapper): - ''' - 提供frame and model - ''' - datum = op.Datum() - datum.cvInputData = frame - opWrapper.emplaceAndPop([datum]) - re = datum.poseKeypoints - assert len(re) == 1, 'videopose3D only support one pserson restruction' - kpt = convert(re[0]) - - return kpt - - -if __name__ == "__main__": - generate_kpts(os.environ.get('VIDEO_PATH') + 'dance.mp4') diff --git a/spaces/ShoaibMajidDar/PDF-chatbot/apikey.py b/spaces/ShoaibMajidDar/PDF-chatbot/apikey.py deleted file mode 100644 index bf74cec5441e15062e94ed8e04c1171abf76745c..0000000000000000000000000000000000000000 --- a/spaces/ShoaibMajidDar/PDF-chatbot/apikey.py +++ /dev/null @@ -1,8 +0,0 @@ -import streamlit as st - -def get_apikey(): - OPENAI_API_KEY = st.text_input(":blue[Enter Your OPENAI API-KEY :]", - placeholder="Paste your OpenAI API key here (sk-...)", - type="password", - ) - return OPENAI_API_KEY \ No newline at end of file diff --git a/spaces/Sphila/Sphila-Diffusion/app.py b/spaces/Sphila/Sphila-Diffusion/app.py deleted file mode 100644 index 1cd1cd730c7434c36281db2aa86bcc324adc7566..0000000000000000000000000000000000000000 --- a/spaces/Sphila/Sphila-Diffusion/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/prompthero/openjourney").launch() diff --git a/spaces/SrikanthPhalgun/Cifar10_ERAV1_GradCam_Demo/app.py b/spaces/SrikanthPhalgun/Cifar10_ERAV1_GradCam_Demo/app.py deleted file mode 100644 index 957c08b3d43079f2aa180934015ec29326d15626..0000000000000000000000000000000000000000 --- 
a/spaces/SrikanthPhalgun/Cifar10_ERAV1_GradCam_Demo/app.py +++ /dev/null @@ -1,158 +0,0 @@ -import torch -import torch.optim as optim -from torchvision import datasets, transforms -import matplotlib.pyplot as plt -#%matplotlib inline -import numpy as np -import pytorch_lightning as pl -from models.custom_resnet import CustomRes_Network -from models.lit_model import LitModule -from Cifar_datamodule import * -from transformation import * -from torchsummary import summary -from utils import * -from torch.optim.lr_scheduler import StepLR -from torch_lr_finder import LRFinder -from torch.optim.lr_scheduler import OneCycleLR -import pytorch_lightning as pl -import math -import numpy as np -import pandas as pd -import matplotlib.pyplot as plt -from pytorch_grad_cam import GradCAM -from pytorch_grad_cam.utils.image import show_cam_on_image,scale_cam_image -from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget -import warnings -warnings.filterwarnings('ignore') -import gradio as gr -from PIL import Image - -datamodule = CIFAR10DataModule(batch_size=1,num_workers=2, pin_memory=True) -torch_model = CustomRes_Network() -inference_model = LitModule(torch_model,num_classes = 10 , learning_rate = 0.001) - -inference_model.load_state_dict(torch.load("Model_Artifacts/custom_trained_resnet_model.pth", map_location=torch.device('cpu')), strict=False) -inference_model.eval() - -classes = ('plane', 'car', 'bird', 'cat', 'deer', - 'dog', 'frog', 'horse', 'ship', 'truck') - - -new_line = '\n' -wrong_img = pd.read_csv('wrongprediction.csv') -wrong_img_no = len(wrong_img) - - -def inference_up_img(input_img,see_gradcam= True,target_layer_number = -1,transparency = 0.85,top_classes=3): - org_img = input_img - # model inference - transform = transforms.ToTensor() - input_img = transform(input_img) - input_img = input_img.unsqueeze(0) - outputs = inference_model.net(input_img) - softmax = torch.nn.Softmax(dim=0) - o = softmax(outputs.flatten()) - confidences = {classes[i]: float(o[i]) for i in range(10)} - sorted_confidences = dict(sorted(confidences.items(), key=lambda x:x[1], reverse=True)) - _, prediction = torch.max(outputs, 1) - - # gradcam - if see_gradcam: - target_layers = [inference_model.net.layer2[target_layer_number]] - cam = GradCAM(model=inference_model.net, target_layers=target_layers, use_cuda=False) - grayscale_cam = cam(input_tensor=input_img, targets=None) - grayscale_cam = grayscale_cam[0, :] - visualization = show_cam_on_image(org_img/255.0, grayscale_cam, use_rgb=True, image_weight=transparency) - else: - visualization = org_img - - # top n classes only - sorted_confidences = {k: sorted_confidences[k] for k in list(sorted_confidences)[:top_classes]} - return sorted_confidences, visualization - -def misclass_fn(misclassified_check,num_misclassified=1,see_gradcam=True,num_gradcam=1,gradcam_layer=-2,gradcam_opa= 0.50): - img_gallery = [] - - if misclassified_check: - for i in range(int(num_misclassified)): - org_img = np.asarray(Image.open(f'Misclassified_images/{i+1}.jpg')) - input_img = org_img - actual = classes[wrong_img.loc[i].at["actual"]] - predicted = classes[wrong_img.loc[i].at["predicted"]] - img_gallery.append((org_img,f"Actual:{actual}{new_line}Predicted:{predicted}")) - - if see_gradcam: - transform = transforms.ToTensor() - input_img = transform(input_img) - input_img = input_img.unsqueeze(0) - target_layers = [inference_model.net.layer2[gradcam_layer]] - cam = GradCAM(model=inference_model.net, target_layers=target_layers, use_cuda=False) - grayscale_cam = 
cam(input_tensor=input_img, targets=None) - grayscale_cam = grayscale_cam[0, :] - visualization = show_cam_on_image(org_img/255.0, grayscale_cam, use_rgb=True, image_weight=gradcam_opa) - img_gallery.append((visualization,f"Actual:{actual}{new_line}Predicted:{predicted}")) - else: - pass - #img_gallery.append((org_img,f"Actual:{actual}{new_line}Predicted:{predicted}")) - - elif see_gradcam: - for i in range(int(num_gradcam)): - org_img = np.asarray(Image.open(f'Misclassified_images/{i+1}.jpg')) - input_img = org_img - actual = classes[wrong_img.loc[i].at["actual"]] - predicted = classes[wrong_img.loc[i].at["predicted"]] - transform = transforms.ToTensor() - input_img = transform(input_img) - input_img = input_img.unsqueeze(0) - target_layers = [inference_model.net.layer2[gradcam_layer]] - cam = GradCAM(model=inference_model.net, target_layers=target_layers, use_cuda=False) - grayscale_cam = cam(input_tensor=input_img, targets=None) - grayscale_cam = grayscale_cam[0, :] - visualization = show_cam_on_image(org_img/255.0, grayscale_cam, use_rgb=True, image_weight=gradcam_opa) - img_gallery.append((visualization,f"Actual:{actual}{new_line}Predicted:{predicted}")) - - - return img_gallery - -examples = [["Images/cat.jpg"], ["Images/plane.jpg"],["Images/dog.jpg"],["Images/truck.jpg"],["Images/bird.jpg"],["Images/ship.jpg"],["Images/horse.jpg"],["Images/frog.jpg"],["Images/deer.jpg"],["Images/car.jpg"]] - -with gr.Blocks() as demo: - gr.Markdown("Explore Custom ResNet model for CIFAR10.") - with gr.Tab("Upload your own image"): - with gr.Row(): - image_input = gr.Image(shape=(32, 32), label="Input Image") - image_label = gr.Label() - with gr.Row(): - with gr.Column(): - gradcam_check = gr.Checkbox(label="Gradcam") - gradcam_layer = gr.Slider(-2, -1, value = -1, step=1, label="Which Layer?") - gradcam_opa = gr.Slider(0, 1, value = 0.5, label="Opacity of GradCAM") - top_classes = gr.Slider(1, 10, value=3, step=1, label="How many top classes?") - image_output = gr.Image(shape=(32, 32), label="Output").style(width=128, height=128) - with gr.Row(): - examples = gr.Examples(examples=examples, - inputs=[image_input,gradcam_check,gradcam_layer,gradcam_opa,top_classes,image_label], - outputs=[image_output], - fn=inference_up_img, cache_examples=False) - with gr.Row(): - tab_1_button = gr.Button("Submit") - tab_1_cl_button = gr.ClearButton([image_input,gradcam_check,gradcam_layer,gradcam_opa,top_classes,image_label,image_output]) - - with gr.Tab("Explore Misclassified/Gradcam Images"): - with gr.Row(): - with gr.Column(): - misclassified_check = gr.Checkbox(label="Misclassified") - num_misclassified = gr.Number(value=2,minimum=1,maximum=10,label="Total Misclassified Images") - gradcam_check1 = gr.Checkbox(label="Gradcam") - num_gradcam = gr.Number(value=2,minimum=1,maximum=10,label="Total Gradcam Images") - gradcam_layer1 = gr.Slider(-2, -1, value = -1, step=1, label="Which Layer?") - gradcam_opa1 = gr.Slider(0, 1, value = 0.5, label="Opacity of GradCAM") - image_gallery_output = gr.Gallery(label="Output Images", show_label=False, elem_id="gallery").style(columns=[2], rows=[5], object_fit="contain", height="auto") - with gr.Row(): - tab_2_button = gr.Button("Submit") - tab_2_cl_button = gr.ClearButton([misclassified_check,num_misclassified,gradcam_check1,num_gradcam,gradcam_layer1,gradcam_opa1,image_gallery_output]) - - tab_1_button.click(inference_up_img, inputs=[image_input,gradcam_check,gradcam_layer,gradcam_opa,top_classes], outputs=[image_label,image_output]) - tab_2_button.click(misclass_fn, 
inputs=[misclassified_check,num_misclassified,gradcam_check1,num_gradcam,gradcam_layer1,gradcam_opa1], outputs=[image_gallery_output]) -demo.launch(debug=True) - diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/oinspect.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/oinspect.py deleted file mode 100644 index ef6a0d02d7f66c73c8cd4b4daf2ebd12aa434e27..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/oinspect.py +++ /dev/null @@ -1,1171 +0,0 @@ -# -*- coding: utf-8 -*- -"""Tools for inspecting Python objects. - -Uses syntax highlighting for presenting the various information elements. - -Similar in spirit to the inspect module, but all calls take a name argument to -reference the name under which an object is being read. -""" - -# Copyright (c) IPython Development Team. -# Distributed under the terms of the Modified BSD License. - -__all__ = ['Inspector','InspectColors'] - -# stdlib modules -from dataclasses import dataclass -from inspect import signature -from textwrap import dedent -import ast -import html -import inspect -import io as stdlib_io -import linecache -import os -import sys -import types -import warnings - -from typing import Any, Optional, Dict, Union, List, Tuple - -if sys.version_info <= (3, 10): - from typing_extensions import TypeAlias -else: - from typing import TypeAlias - -# IPython's own -from IPython.core import page -from IPython.lib.pretty import pretty -from IPython.testing.skipdoctest import skip_doctest -from IPython.utils import PyColorize -from IPython.utils import openpy -from IPython.utils.dir2 import safe_hasattr -from IPython.utils.path import compress_user -from IPython.utils.text import indent -from IPython.utils.wildcard import list_namespace -from IPython.utils.wildcard import typestr2type -from IPython.utils.coloransi import TermColors, ColorScheme, ColorSchemeTable -from IPython.utils.py3compat import cast_unicode -from IPython.utils.colorable import Colorable -from IPython.utils.decorators import undoc - -from pygments import highlight -from pygments.lexers import PythonLexer -from pygments.formatters import HtmlFormatter - -HOOK_NAME = "__custom_documentations__" - - -UnformattedBundle: TypeAlias = Dict[str, List[Tuple[str, str]]] # List of (title, body) -Bundle: TypeAlias = Dict[str, str] - - -@dataclass -class OInfo: - ismagic: bool - isalias: bool - found: bool - namespace: Optional[str] - parent: Any - obj: Any - - def get(self, field): - """Get a field from the object for backward compatibility with before 8.12 - - see https://github.com/h5py/h5py/issues/2253 - """ - # We need to deprecate this at some point, but the warning will show in completion. - # Let's comment this for now and uncomment end of 2023 ish - # warnings.warn( - # f"OInfo dataclass with fields access since IPython 8.12 please use OInfo.{field} instead." - # "OInfo used to be a dict but a dataclass provide static fields verification with mypy." 
- # "This warning and backward compatibility `get()` method were added in 8.13.", - # DeprecationWarning, - # stacklevel=2, - # ) - return getattr(self, field) - - -def pylight(code): - return highlight(code, PythonLexer(), HtmlFormatter(noclasses=True)) - -# builtin docstrings to ignore -_func_call_docstring = types.FunctionType.__call__.__doc__ -_object_init_docstring = object.__init__.__doc__ -_builtin_type_docstrings = { - inspect.getdoc(t) for t in (types.ModuleType, types.MethodType, - types.FunctionType, property) -} - -_builtin_func_type = type(all) -_builtin_meth_type = type(str.upper) # Bound methods have the same type as builtin functions -#**************************************************************************** -# Builtin color schemes - -Colors = TermColors # just a shorthand - -InspectColors = PyColorize.ANSICodeColors - -#**************************************************************************** -# Auxiliary functions and objects - -# See the messaging spec for the definition of all these fields. This list -# effectively defines the order of display -info_fields = ['type_name', 'base_class', 'string_form', 'namespace', - 'length', 'file', 'definition', 'docstring', 'source', - 'init_definition', 'class_docstring', 'init_docstring', - 'call_def', 'call_docstring', - # These won't be printed but will be used to determine how to - # format the object - 'ismagic', 'isalias', 'isclass', 'found', 'name' - ] - - -def object_info(**kw): - """Make an object info dict with all fields present.""" - infodict = {k:None for k in info_fields} - infodict.update(kw) - return infodict - - -def get_encoding(obj): - """Get encoding for python source file defining obj - - Returns None if obj is not defined in a sourcefile. - """ - ofile = find_file(obj) - # run contents of file through pager starting at line where the object - # is defined, as long as the file isn't binary and is actually on the - # filesystem. - if ofile is None: - return None - elif ofile.endswith(('.so', '.dll', '.pyd')): - return None - elif not os.path.isfile(ofile): - return None - else: - # Print only text files, not extension binaries. Note that - # getsourcelines returns lineno with 1-offset and page() uses - # 0-offset, so we must adjust. - with stdlib_io.open(ofile, 'rb') as buffer: # Tweaked to use io.open for Python 2 - encoding, lines = openpy.detect_encoding(buffer.readline) - return encoding - -def getdoc(obj) -> Union[str,None]: - """Stable wrapper around inspect.getdoc. - - This can't crash because of attribute problems. - - It also attempts to call a getdoc() method on the given object. This - allows objects which provide their docstrings via non-standard mechanisms - (like Pyro proxies) to still be inspected by ipython's ? system. - """ - # Allow objects to offer customized documentation via a getdoc method: - try: - ds = obj.getdoc() - except Exception: - pass - else: - if isinstance(ds, str): - return inspect.cleandoc(ds) - docstr = inspect.getdoc(obj) - return docstr - - -def getsource(obj, oname='') -> Union[str,None]: - """Wrapper around inspect.getsource. - - This can be modified by other projects to provide customized source - extraction. 
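# A small illustration of the getdoc() hook described above: an object that
# provides its documentation dynamically (e.g. a Pyro proxy) is still picked
# up. RemoteProxy is a made-up class; getdoc is the module-level helper above.
class RemoteProxy:
    def getdoc(self):
        return "Docs fetched from a remote object."

print(getdoc(RemoteProxy()))  # Docs fetched from a remote object.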
- - Parameters - ---------- - obj : object - an object whose source code we will attempt to extract - oname : str - (optional) a name under which the object is known - - Returns - ------- - src : unicode or None - - """ - - if isinstance(obj, property): - sources = [] - for attrname in ['fget', 'fset', 'fdel']: - fn = getattr(obj, attrname) - if fn is not None: - encoding = get_encoding(fn) - oname_prefix = ('%s.' % oname) if oname else '' - sources.append(''.join(('# ', oname_prefix, attrname))) - if inspect.isfunction(fn): - _src = getsource(fn) - if _src: - # assert _src is not None, "please mypy" - sources.append(dedent(_src)) - else: - # Default str/repr only prints function name, - # pretty.pretty prints module name too. - sources.append( - '%s%s = %s\n' % (oname_prefix, attrname, pretty(fn)) - ) - if sources: - return '\n'.join(sources) - else: - return None - - else: - # Get source for non-property objects. - - obj = _get_wrapped(obj) - - try: - src = inspect.getsource(obj) - except TypeError: - # The object itself provided no meaningful source, try looking for - # its class definition instead. - try: - src = inspect.getsource(obj.__class__) - except (OSError, TypeError): - return None - except OSError: - return None - - return src - - -def is_simple_callable(obj): - """True if obj is a function ()""" - return (inspect.isfunction(obj) or inspect.ismethod(obj) or \ - isinstance(obj, _builtin_func_type) or isinstance(obj, _builtin_meth_type)) - -@undoc -def getargspec(obj): - """Wrapper around :func:`inspect.getfullargspec` - - In addition to functions and methods, this can also handle objects with a - ``__call__`` attribute. - - DEPRECATED: Deprecated since 7.10. Do not use, will be removed. - """ - - warnings.warn('`getargspec` function is deprecated as of IPython 7.10' - 'and will be removed in future versions.', DeprecationWarning, stacklevel=2) - - if safe_hasattr(obj, '__call__') and not is_simple_callable(obj): - obj = obj.__call__ - - return inspect.getfullargspec(obj) - -@undoc -def format_argspec(argspec): - """Format argspect, convenience wrapper around inspect's. - - This takes a dict instead of ordered arguments and calls - inspect.format_argspec with the arguments in the necessary order. - - DEPRECATED (since 7.10): Do not use; will be removed in future versions. - """ - - warnings.warn('`format_argspec` function is deprecated as of IPython 7.10' - 'and will be removed in future versions.', DeprecationWarning, stacklevel=2) - - - return inspect.formatargspec(argspec['args'], argspec['varargs'], - argspec['varkw'], argspec['defaults']) - -@undoc -def call_tip(oinfo, format_call=True): - """DEPRECATED since 6.0. Extract call tip data from an oinfo dict.""" - warnings.warn( - "`call_tip` function is deprecated as of IPython 6.0" - "and will be removed in future versions.", - DeprecationWarning, - stacklevel=2, - ) - # Get call definition - argspec = oinfo.get('argspec') - if argspec is None: - call_line = None - else: - # Callable objects will have 'self' as their first argument, prune - # it out if it's there for clarity (since users do *not* pass an - # extra first argument explicitly). - try: - has_self = argspec['args'][0] == 'self' - except (KeyError, IndexError): - pass - else: - if has_self: - argspec['args'] = argspec['args'][1:] - - call_line = oinfo['name']+format_argspec(argspec) - - # Now get docstring. - # The priority is: call docstring, constructor docstring, main one. 
- doc = oinfo.get('call_docstring') - if doc is None: - doc = oinfo.get('init_docstring') - if doc is None: - doc = oinfo.get('docstring','') - - return call_line, doc - - -def _get_wrapped(obj): - """Get the original object if wrapped in one or more @decorators - - Some objects automatically construct similar objects on any unrecognised - attribute access (e.g. unittest.mock.call). To protect against infinite loops, - this will arbitrarily cut off after 100 levels of obj.__wrapped__ - attribute access. --TK, Jan 2016 - """ - orig_obj = obj - i = 0 - while safe_hasattr(obj, '__wrapped__'): - obj = obj.__wrapped__ - i += 1 - if i > 100: - # __wrapped__ is probably a lie, so return the thing we started with - return orig_obj - return obj - -def find_file(obj) -> str: - """Find the absolute path to the file where an object was defined. - - This is essentially a robust wrapper around `inspect.getabsfile`. - - Returns None if no file can be found. - - Parameters - ---------- - obj : any Python object - - Returns - ------- - fname : str - The absolute path to the file where the object was defined. - """ - obj = _get_wrapped(obj) - - fname = None - try: - fname = inspect.getabsfile(obj) - except TypeError: - # For an instance, the file that matters is where its class was - # declared. - try: - fname = inspect.getabsfile(obj.__class__) - except (OSError, TypeError): - # Can happen for builtins - pass - except OSError: - pass - - return cast_unicode(fname) - - -def find_source_lines(obj): - """Find the line number in a file where an object was defined. - - This is essentially a robust wrapper around `inspect.getsourcelines`. - - Returns None if no file can be found. - - Parameters - ---------- - obj : any Python object - - Returns - ------- - lineno : int - The line number where the object definition starts. - """ - obj = _get_wrapped(obj) - - try: - lineno = inspect.getsourcelines(obj)[1] - except TypeError: - # For instances, try the class object like getsource() does - try: - lineno = inspect.getsourcelines(obj.__class__)[1] - except (OSError, TypeError): - return None - except OSError: - return None - - return lineno - -class Inspector(Colorable): - - def __init__(self, color_table=InspectColors, - code_color_table=PyColorize.ANSICodeColors, - scheme=None, - str_detail_level=0, - parent=None, config=None): - super(Inspector, self).__init__(parent=parent, config=config) - self.color_table = color_table - self.parser = PyColorize.Parser(out='str', parent=self, style=scheme) - self.format = self.parser.format - self.str_detail_level = str_detail_level - self.set_active_scheme(scheme) - - def _getdef(self,obj,oname='') -> Union[str,None]: - """Return the call signature for any callable object. - - If any exception is generated, None is returned instead and the - exception is suppressed.""" - try: - return _render_signature(signature(obj), oname) - except: - return None - - def __head(self,h) -> str: - """Return a header string with proper colors.""" - return '%s%s%s' % (self.color_table.active_colors.header,h, - self.color_table.active_colors.normal) - - def set_active_scheme(self, scheme): - if scheme is not None: - self.color_table.set_active_scheme(scheme) - self.parser.color_table.set_active_scheme(scheme) - - def noinfo(self, msg, oname): - """Generic message when no information is found.""" - print('No %s found' % msg, end=' ') - if oname: - print('for %s' % oname) - else: - print() - - def pdef(self, obj, oname=''): - """Print the call signature for any callable object. 
- - If the object is a class, print the constructor information.""" - - if not callable(obj): - print('Object is not callable.') - return - - header = '' - - if inspect.isclass(obj): - header = self.__head('Class constructor information:\n') - - - output = self._getdef(obj,oname) - if output is None: - self.noinfo('definition header',oname) - else: - print(header,self.format(output), end=' ') - - # In Python 3, all classes are new-style, so they all have __init__. - @skip_doctest - def pdoc(self, obj, oname='', formatter=None): - """Print the docstring for any object. - - Optional: - -formatter: a function to run the docstring through for specially - formatted docstrings. - - Examples - -------- - In [1]: class NoInit: - ...: pass - - In [2]: class NoDoc: - ...: def __init__(self): - ...: pass - - In [3]: %pdoc NoDoc - No documentation found for NoDoc - - In [4]: %pdoc NoInit - No documentation found for NoInit - - In [5]: obj = NoInit() - - In [6]: %pdoc obj - No documentation found for obj - - In [5]: obj2 = NoDoc() - - In [6]: %pdoc obj2 - No documentation found for obj2 - """ - - head = self.__head # For convenience - lines = [] - ds = getdoc(obj) - if formatter: - ds = formatter(ds).get('plain/text', ds) - if ds: - lines.append(head("Class docstring:")) - lines.append(indent(ds)) - if inspect.isclass(obj) and hasattr(obj, '__init__'): - init_ds = getdoc(obj.__init__) - if init_ds is not None: - lines.append(head("Init docstring:")) - lines.append(indent(init_ds)) - elif hasattr(obj,'__call__'): - call_ds = getdoc(obj.__call__) - if call_ds: - lines.append(head("Call docstring:")) - lines.append(indent(call_ds)) - - if not lines: - self.noinfo('documentation',oname) - else: - page.page('\n'.join(lines)) - - def psource(self, obj, oname=''): - """Print the source code for an object.""" - - # Flush the source cache because inspect can return out-of-date source - linecache.checkcache() - try: - src = getsource(obj, oname=oname) - except Exception: - src = None - - if src is None: - self.noinfo('source', oname) - else: - page.page(self.format(src)) - - def pfile(self, obj, oname=''): - """Show the whole file where an object was defined.""" - - lineno = find_source_lines(obj) - if lineno is None: - self.noinfo('file', oname) - return - - ofile = find_file(obj) - # run contents of file through pager starting at line where the object - # is defined, as long as the file isn't binary and is actually on the - # filesystem. - if ofile.endswith(('.so', '.dll', '.pyd')): - print('File %r is binary, not printing.' % ofile) - elif not os.path.isfile(ofile): - print('File %r does not exist, not printing.' % ofile) - else: - # Print only text files, not extension binaries. Note that - # getsourcelines returns lineno with 1-offset and page() uses - # 0-offset, so we must adjust. - page.page(self.format(openpy.read_py_file(ofile, skip_encoding_cookie=False)), lineno - 1) - - - def _mime_format(self, text:str, formatter=None) -> dict: - """Return a mime bundle representation of the input text. - - - if `formatter` is None, the returned mime bundle has - a ``text/plain`` field, with the input text. - a ``text/html`` field with a ``
<pre>`` tag containing the input text.
-
-        - if ``formatter`` is not None, it must be a callable transforming the
-          input text into a mime bundle. Default values for ``text/plain`` and
-          ``text/html`` representations are the ones described above.
-
-        Note:
-
-        Formatters returning strings are supported but this behavior is deprecated.
-
-        """
-        defaults = {
-            "text/plain": text,
-            "text/html": f"<pre>{html.escape(text)}</pre>
", - } - - if formatter is None: - return defaults - else: - formatted = formatter(text) - - if not isinstance(formatted, dict): - # Handle the deprecated behavior of a formatter returning - # a string instead of a mime bundle. - return {"text/plain": formatted, "text/html": f"
<pre>{formatted}</pre>
"} - - else: - return dict(defaults, **formatted) - - def format_mime(self, bundle: UnformattedBundle) -> Bundle: - """Format a mimebundle being created by _make_info_unformatted into a real mimebundle""" - # Format text/plain mimetype - assert isinstance(bundle["text/plain"], list) - for item in bundle["text/plain"]: - assert isinstance(item, tuple) - - new_b: Bundle = {} - lines = [] - _len = max(len(h) for h, _ in bundle["text/plain"]) - - for head, body in bundle["text/plain"]: - body = body.strip("\n") - delim = "\n" if "\n" in body else " " - lines.append( - f"{self.__head(head+':')}{(_len - len(head))*' '}{delim}{body}" - ) - - new_b["text/plain"] = "\n".join(lines) - - if "text/html" in bundle: - assert isinstance(bundle["text/html"], list) - for item in bundle["text/html"]: - assert isinstance(item, tuple) - # Format the text/html mimetype - if isinstance(bundle["text/html"], (list, tuple)): - # bundle['text/html'] is a list of (head, formatted body) pairs - new_b["text/html"] = "\n".join( - (f"
<h1>{head}</h1>
\n{body}" for (head, body) in bundle["text/html"]) - ) - - for k in bundle.keys(): - if k in ("text/html", "text/plain"): - continue - else: - new_b = bundle[k] # type:ignore - return new_b - - def _append_info_field( - self, - bundle: UnformattedBundle, - title: str, - key: str, - info, - omit_sections, - formatter, - ): - """Append an info value to the unformatted mimebundle being constructed by _make_info_unformatted""" - if title in omit_sections or key in omit_sections: - return - field = info[key] - if field is not None: - formatted_field = self._mime_format(field, formatter) - bundle["text/plain"].append((title, formatted_field["text/plain"])) - bundle["text/html"].append((title, formatted_field["text/html"])) - - def _make_info_unformatted( - self, obj, info, formatter, detail_level, omit_sections - ) -> UnformattedBundle: - """Assemble the mimebundle as unformatted lists of information""" - bundle: UnformattedBundle = { - "text/plain": [], - "text/html": [], - } - - # A convenience function to simplify calls below - def append_field( - bundle: UnformattedBundle, title: str, key: str, formatter=None - ): - self._append_info_field( - bundle, - title=title, - key=key, - info=info, - omit_sections=omit_sections, - formatter=formatter, - ) - - def code_formatter(text) -> Bundle: - return { - 'text/plain': self.format(text), - 'text/html': pylight(text) - } - - if info["isalias"]: - append_field(bundle, "Repr", "string_form") - - elif info['ismagic']: - if detail_level > 0: - append_field(bundle, "Source", "source", code_formatter) - else: - append_field(bundle, "Docstring", "docstring", formatter) - append_field(bundle, "File", "file") - - elif info['isclass'] or is_simple_callable(obj): - # Functions, methods, classes - append_field(bundle, "Signature", "definition", code_formatter) - append_field(bundle, "Init signature", "init_definition", code_formatter) - append_field(bundle, "Docstring", "docstring", formatter) - if detail_level > 0 and info["source"]: - append_field(bundle, "Source", "source", code_formatter) - else: - append_field(bundle, "Init docstring", "init_docstring", formatter) - - append_field(bundle, "File", "file") - append_field(bundle, "Type", "type_name") - append_field(bundle, "Subclasses", "subclasses") - - else: - # General Python objects - append_field(bundle, "Signature", "definition", code_formatter) - append_field(bundle, "Call signature", "call_def", code_formatter) - append_field(bundle, "Type", "type_name") - append_field(bundle, "String form", "string_form") - - # Namespace - if info["namespace"] != "Interactive": - append_field(bundle, "Namespace", "namespace") - - append_field(bundle, "Length", "length") - append_field(bundle, "File", "file") - - # Source or docstring, depending on detail level and whether - # source found. - if detail_level > 0 and info["source"]: - append_field(bundle, "Source", "source", code_formatter) - else: - append_field(bundle, "Docstring", "docstring", formatter) - - append_field(bundle, "Class docstring", "class_docstring", formatter) - append_field(bundle, "Init docstring", "init_docstring", formatter) - append_field(bundle, "Call docstring", "call_docstring", formatter) - return bundle - - - def _get_info( - self, - obj: Any, - oname: str = "", - formatter=None, - info: Optional[OInfo] = None, - detail_level=0, - omit_sections=(), - ) -> Bundle: - """Retrieve an info dict and format it. 
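# Standalone sketch of the default mime bundle produced by _mime_format above
# when no formatter is given: text/plain passes through, and text/html wraps
# the escaped text in a <pre> block (mime_format_default is a hypothetical name).
import html

def mime_format_default(text: str) -> dict:
    return {"text/plain": text, "text/html": f"<pre>{html.escape(text)}</pre>"}

print(mime_format_default("x < y"))
# {'text/plain': 'x < y', 'text/html': '<pre>x &lt; y</pre>'}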
- - Parameters - ---------- - obj : any - Object to inspect and return info from - oname : str (default: ''): - Name of the variable pointing to `obj`. - formatter : callable - info - already computed information - detail_level : integer - Granularity of detail level, if set to 1, give more information. - omit_sections : container[str] - Titles or keys to omit from output (can be set, tuple, etc., anything supporting `in`) - """ - - info_dict = self.info(obj, oname=oname, info=info, detail_level=detail_level) - bundle = self._make_info_unformatted( - obj, - info_dict, - formatter, - detail_level=detail_level, - omit_sections=omit_sections, - ) - return self.format_mime(bundle) - - def pinfo( - self, - obj, - oname="", - formatter=None, - info: Optional[OInfo] = None, - detail_level=0, - enable_html_pager=True, - omit_sections=(), - ): - """Show detailed information about an object. - - Optional arguments: - - - oname: name of the variable pointing to the object. - - - formatter: callable (optional) - A special formatter for docstrings. - - The formatter is a callable that takes a string as an input - and returns either a formatted string or a mime type bundle - in the form of a dictionary. - - Although the support of custom formatter returning a string - instead of a mime type bundle is deprecated. - - - info: a structure with some information fields which may have been - precomputed already. - - - detail_level: if set to 1, more information is given. - - - omit_sections: set of section keys and titles to omit - """ - assert info is not None - info_b: Bundle = self._get_info( - obj, oname, formatter, info, detail_level, omit_sections=omit_sections - ) - if not enable_html_pager: - del info_b["text/html"] - page.page(info_b) - - def _info(self, obj, oname="", info=None, detail_level=0): - """ - Inspector.info() was likely improperly marked as deprecated - while only a parameter was deprecated. We "un-deprecate" it. - """ - - warnings.warn( - "The `Inspector.info()` method has been un-deprecated as of 8.0 " - "and the `formatter=` keyword removed. `Inspector._info` is now " - "an alias, and you can just call `.info()` directly.", - DeprecationWarning, - stacklevel=2, - ) - return self.info(obj, oname=oname, info=info, detail_level=detail_level) - - def info(self, obj, oname="", info=None, detail_level=0) -> Dict[str, Any]: - """Compute a dict with detailed information about an object. - - Parameters - ---------- - obj : any - An object to find information about - oname : str (default: '') - Name of the variable pointing to `obj`. - info : (default: None) - A struct (dict like with attr access) with some information fields - which may have been precomputed already. - detail_level : int (default:0) - If set to 1, more information is given. - - Returns - ------- - An object info dict with known fields from `info_fields`. Keys are - strings, values are string or None. 
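# Hedged usage sketch of the info() API documented above; the exact field
# values depend on the running environment.
insp = Inspector()
d = insp.info(len, oname="len")
print(d["type_name"])       # e.g. 'builtin_function_or_method'
print(d["docstring"][:40])  # start of len's docstring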
- """ - - if info is None: - ismagic = False - isalias = False - ospace = '' - else: - ismagic = info.ismagic - isalias = info.isalias - ospace = info.namespace - - # Get docstring, special-casing aliases: - att_name = oname.split(".")[-1] - parents_docs = None - prelude = "" - if info and info.parent is not None and hasattr(info.parent, HOOK_NAME): - parents_docs_dict = getattr(info.parent, HOOK_NAME) - parents_docs = parents_docs_dict.get(att_name, None) - out = dict( - name=oname, found=True, isalias=isalias, ismagic=ismagic, subclasses=None - ) - - if parents_docs: - ds = parents_docs - elif isalias: - if not callable(obj): - try: - ds = "Alias to the system command:\n %s" % obj[1] - except: - ds = "Alias: " + str(obj) - else: - ds = "Alias to " + str(obj) - if obj.__doc__: - ds += "\nDocstring:\n" + obj.__doc__ - else: - ds_or_None = getdoc(obj) - if ds_or_None is None: - ds = '' - else: - ds = ds_or_None - - ds = prelude + ds - - # store output in a dict, we initialize it here and fill it as we go - - string_max = 200 # max size of strings to show (snipped if longer) - shalf = int((string_max - 5) / 2) - - if ismagic: - out['type_name'] = 'Magic function' - elif isalias: - out['type_name'] = 'System alias' - else: - out['type_name'] = type(obj).__name__ - - try: - bclass = obj.__class__ - out['base_class'] = str(bclass) - except: - pass - - # String form, but snip if too long in ? form (full in ??) - if detail_level >= self.str_detail_level: - try: - ostr = str(obj) - str_head = 'string_form' - if not detail_level and len(ostr)>string_max: - ostr = ostr[:shalf] + ' <...> ' + ostr[-shalf:] - ostr = ("\n" + " " * len(str_head.expandtabs())).\ - join(q.strip() for q in ostr.split("\n")) - out[str_head] = ostr - except: - pass - - if ospace: - out['namespace'] = ospace - - # Length (for strings and lists) - try: - out['length'] = str(len(obj)) - except Exception: - pass - - # Filename where object was defined - binary_file = False - fname = find_file(obj) - if fname is None: - # if anything goes wrong, we don't want to show source, so it's as - # if the file was binary - binary_file = True - else: - if fname.endswith(('.so', '.dll', '.pyd')): - binary_file = True - elif fname.endswith(''): - fname = 'Dynamically generated function. No source code available.' - out['file'] = compress_user(fname) - - # Original source code for a callable, class or property. - if detail_level: - # Flush the source cache because inspect can return out-of-date - # source - linecache.checkcache() - try: - if isinstance(obj, property) or not binary_file: - src = getsource(obj, oname) - if src is not None: - src = src.rstrip() - out['source'] = src - - except Exception: - pass - - # Add docstring only if no source is to be shown (avoid repetitions). - if ds and not self._source_contains_docstring(out.get('source'), ds): - out['docstring'] = ds - - # Constructor docstring for classes - if inspect.isclass(obj): - out['isclass'] = True - - # get the init signature: - try: - init_def = self._getdef(obj, oname) - except AttributeError: - init_def = None - - # get the __init__ docstring - try: - obj_init = obj.__init__ - except AttributeError: - init_ds = None - else: - if init_def is None: - # Get signature from init if top-level sig failed. - # Can happen for built-in types (list, etc.). 
- try: - init_def = self._getdef(obj_init, oname) - except AttributeError: - pass - init_ds = getdoc(obj_init) - # Skip Python's auto-generated docstrings - if init_ds == _object_init_docstring: - init_ds = None - - if init_def: - out['init_definition'] = init_def - - if init_ds: - out['init_docstring'] = init_ds - - names = [sub.__name__ for sub in type.__subclasses__(obj)] - if len(names) < 10: - all_names = ', '.join(names) - else: - all_names = ', '.join(names[:10]+['...']) - out['subclasses'] = all_names - # and class docstring for instances: - else: - # reconstruct the function definition and print it: - defln = self._getdef(obj, oname) - if defln: - out['definition'] = defln - - # First, check whether the instance docstring is identical to the - # class one, and print it separately if they don't coincide. In - # most cases they will, but it's nice to print all the info for - # objects which use instance-customized docstrings. - if ds: - try: - cls = getattr(obj,'__class__') - except: - class_ds = None - else: - class_ds = getdoc(cls) - # Skip Python's auto-generated docstrings - if class_ds in _builtin_type_docstrings: - class_ds = None - if class_ds and ds != class_ds: - out['class_docstring'] = class_ds - - # Next, try to show constructor docstrings - try: - init_ds = getdoc(obj.__init__) - # Skip Python's auto-generated docstrings - if init_ds == _object_init_docstring: - init_ds = None - except AttributeError: - init_ds = None - if init_ds: - out['init_docstring'] = init_ds - - # Call form docstring for callable instances - if safe_hasattr(obj, '__call__') and not is_simple_callable(obj): - call_def = self._getdef(obj.__call__, oname) - if call_def and (call_def != out.get('definition')): - # it may never be the case that call def and definition differ, - # but don't include the same signature twice - out['call_def'] = call_def - call_ds = getdoc(obj.__call__) - # Skip Python's auto-generated docstrings - if call_ds == _func_call_docstring: - call_ds = None - if call_ds: - out['call_docstring'] = call_ds - - return object_info(**out) - - @staticmethod - def _source_contains_docstring(src, doc): - """ - Check whether the source *src* contains the docstring *doc*. - - This is is helper function to skip displaying the docstring if the - source already contains it, avoiding repetition of information. - """ - try: - (def_node,) = ast.parse(dedent(src)).body - return ast.get_docstring(def_node) == doc # type: ignore[arg-type] - except Exception: - # The source can become invalid or even non-existent (because it - # is re-fetched from the source file) so the above code fail in - # arbitrary ways. - return False - - def psearch(self,pattern,ns_table,ns_search=[], - ignore_case=False,show_all=False, *, list_types=False): - """Search namespaces with wildcards for objects. - - Arguments: - - - pattern: string containing shell-like wildcards to use in namespace - searches and optionally a type specification to narrow the search to - objects of that type. - - - ns_table: dict of name->namespaces for search. - - Optional arguments: - - - ns_search: list of namespace names to include in search. - - - ignore_case(False): make the search case-insensitive. - - - show_all(False): show all names, including those starting with - underscores. - - - list_types(False): list all available object types for object matching. 
- """ - #print 'ps pattern:<%r>' % pattern # dbg - - # defaults - type_pattern = 'all' - filter = '' - - # list all object types - if list_types: - page.page('\n'.join(sorted(typestr2type))) - return - - cmds = pattern.split() - len_cmds = len(cmds) - if len_cmds == 1: - # Only filter pattern given - filter = cmds[0] - elif len_cmds == 2: - # Both filter and type specified - filter,type_pattern = cmds - else: - raise ValueError('invalid argument string for psearch: <%s>' % - pattern) - - # filter search namespaces - for name in ns_search: - if name not in ns_table: - raise ValueError('invalid namespace <%s>. Valid names: %s' % - (name,ns_table.keys())) - - #print 'type_pattern:',type_pattern # dbg - search_result, namespaces_seen = set(), set() - for ns_name in ns_search: - ns = ns_table[ns_name] - # Normally, locals and globals are the same, so we just check one. - if id(ns) in namespaces_seen: - continue - namespaces_seen.add(id(ns)) - tmp_res = list_namespace(ns, type_pattern, filter, - ignore_case=ignore_case, show_all=show_all) - search_result.update(tmp_res) - - page.page('\n'.join(sorted(search_result))) - - -def _render_signature(obj_signature, obj_name) -> str: - """ - This was mostly taken from inspect.Signature.__str__. - Look there for the comments. - The only change is to add linebreaks when this gets too long. - """ - result = [] - pos_only = False - kw_only = True - for param in obj_signature.parameters.values(): - if param.kind == inspect.Parameter.POSITIONAL_ONLY: - pos_only = True - elif pos_only: - result.append('/') - pos_only = False - - if param.kind == inspect.Parameter.VAR_POSITIONAL: - kw_only = False - elif param.kind == inspect.Parameter.KEYWORD_ONLY and kw_only: - result.append('*') - kw_only = False - - result.append(str(param)) - - if pos_only: - result.append('/') - - # add up name, parameters, braces (2), and commas - if len(obj_name) + sum(len(r) + 2 for r in result) > 75: - # This doesn’t fit behind “Signature: ” in an inspect window. - rendered = '{}(\n{})'.format(obj_name, ''.join( - ' {},\n'.format(r) for r in result) - ) - else: - rendered = '{}({})'.format(obj_name, ', '.join(result)) - - if obj_signature.return_annotation is not inspect._empty: - anno = inspect.formatannotation(obj_signature.return_annotation) - rendered += ' -> {}'.format(anno) - - return rendered diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/terminal/tests/test_help.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/terminal/tests/test_help.py deleted file mode 100644 index 038f560b2fb2addf7d95f45b358a7d4e0864e943..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/terminal/tests/test_help.py +++ /dev/null @@ -1,30 +0,0 @@ -"""Test help output of various IPython entry points""" - -# Copyright (c) IPython Development Team. -# Distributed under the terms of the Modified BSD License. 
- -import pytest -import IPython.testing.tools as tt - - -def test_ipython_help(): - tt.help_all_output_test() - -def test_profile_help(): - tt.help_all_output_test("profile") - -def test_profile_list_help(): - tt.help_all_output_test("profile list") - -def test_profile_create_help(): - tt.help_all_output_test("profile create") - -def test_locate_help(): - tt.help_all_output_test("locate") - -def test_locate_profile_help(): - tt.help_all_output_test("locate profile") - -def test_trust_help(): - pytest.importorskip("nbformat") - tt.help_all_output_test("trust") diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/test/db/migrations/00002-migration-2.sqlite.sql b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/test/db/migrations/00002-migration-2.sqlite.sql deleted file mode 100644 index 01e4b222af541efb9022d2eeb69e39239faecb34..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/test/db/migrations/00002-migration-2.sqlite.sql +++ /dev/null @@ -1,3 +0,0 @@ -CREATE TABLE table2 ( - name TEXT PRIMARY KEY -); diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/contourpy/util/__init__.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/contourpy/util/__init__.py deleted file mode 100644 index fe33fcef1e18d2a4b92287e434cf6b1257e4274f..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/contourpy/util/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -from __future__ import annotations - -from contourpy.util._build_config import build_config - -__all__ = ["build_config"] diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_extension_api.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_extension_api.py deleted file mode 100644 index 8c5a441b13feee85eedf1f9b7c228fa195c44333..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_extension_api.py +++ /dev/null @@ -1,108 +0,0 @@ -import abc -from typing import Any - - -# borrowed from from six -def _with_metaclass(meta, *bases): - """Create a base class with a metaclass.""" - - class metaclass(meta): - - def __new__(cls, name, this_bases, d): - return meta(name, bases, d) - - return type.__new__(metaclass, 'temporary_class', (), {}) - - -# ======================================================================================================================= -# AbstractResolver -# ======================================================================================================================= -class _AbstractResolver(_with_metaclass(abc.ABCMeta)): - """ - This class exists only for documentation purposes to explain how to create a resolver. - - Some examples on how to resolve things: - - list: get_dictionary could return a dict with index->item and use the index to resolve it later - - set: get_dictionary could return a dict with id(object)->object and reiterate in that array to resolve it later - - arbitrary instance: get_dictionary could return dict with attr_name->attr and use getattr to resolve it later - """ - - @abc.abstractmethod - def resolve(self, var, attribute): - """ - In this method, we'll resolve some child item given the string representation of the item in the key - representing the previously asked dictionary. 
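# A minimal resolver following the list recipe sketched in the class
# docstring above: get_dictionary maps index -> item, and resolve turns the
# string key back into an index. ListResolver is a made-up example class.
class ListResolver(_AbstractResolver):
    def get_dictionary(self, var):
        return {str(i): item for i, item in enumerate(var)}

    def resolve(self, var, attribute):
        return var[int(attribute)]

r = ListResolver()
print(r.get_dictionary(["a", "b"]))  # {'0': 'a', '1': 'b'}
print(r.resolve(["a", "b"], "1"))    # b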
- - :param var: this is the actual variable to be resolved. - :param attribute: this is the string representation of a key previously returned in get_dictionary. - """ - raise NotImplementedError - - @abc.abstractmethod - def get_dictionary(self, var): - """ - :param var: this is the variable that should have its children gotten. - - :return: a dictionary where each pair key, value should be shown to the user as children items - in the variables view for the given var. - """ - raise NotImplementedError - - -class _AbstractProvider(_with_metaclass(abc.ABCMeta)): - - @abc.abstractmethod - def can_provide(self, type_object, type_name): - raise NotImplementedError - -# ======================================================================================================================= -# API CLASSES: -# ======================================================================================================================= - - -class TypeResolveProvider(_AbstractResolver, _AbstractProvider): - """ - Implement this in an extension to provide a custom resolver, see _AbstractResolver - """ - - -class StrPresentationProvider(_AbstractProvider): - """ - Implement this in an extension to provide a str presentation for a type - """ - - def get_str_in_context(self, val: Any, context: str): - ''' - :param val: - This is the object for which we want a string representation. - - :param context: - This is the context in which the variable is being requested. Valid values: - "watch", - "repl", - "hover", - "clipboard" - - :note: this method is not required (if it's not available, get_str is called directly, - so, it's only needed if the string representation needs to be converted based on - the context). - ''' - return self.get_str(val) - - @abc.abstractmethod - def get_str(self, val): - raise NotImplementedError - - -class DebuggerEventHandler(_with_metaclass(abc.ABCMeta)): - """ - Implement this to receive lifecycle events from the debugger - """ - - def on_debugger_modules_loaded(self, **kwargs): - """ - This method invoked after all debugger modules are loaded. Useful for importing and/or patching debugger - modules at a safe time - :param kwargs: This is intended to be flexible dict passed from the debugger. - Currently passes the debugger version - """ diff --git a/spaces/Surn/UnlimitedMusicGen/audiocraft/modules/codebooks_patterns.py b/spaces/Surn/UnlimitedMusicGen/audiocraft/modules/codebooks_patterns.py deleted file mode 100644 index c5b35cbea8cff84aa56116dbdd860fc72a913a13..0000000000000000000000000000000000000000 --- a/spaces/Surn/UnlimitedMusicGen/audiocraft/modules/codebooks_patterns.py +++ /dev/null @@ -1,539 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from collections import namedtuple -from dataclasses import dataclass -from functools import lru_cache -import logging -import typing as tp - -from abc import ABC, abstractmethod -import torch - -LayoutCoord = namedtuple('LayoutCoord', ['t', 'q']) # (timestep, codebook index) -PatternLayout = tp.List[tp.List[LayoutCoord]] # Sequence of coordinates -logger = logging.getLogger(__name__) - - -@dataclass -class Pattern: - """Base implementation of a pattern over a sequence with multiple codebooks. - - The codebook pattern consists in a layout, defining for each sequence step - the list of coordinates of each codebook timestep in the resulting interleaved sequence. 
- The first item of the pattern is always an empty list in order to properly insert a special token - to start with. For convenience, we also keep track of ``n_q``, the number of codebooks used for the pattern, - and ``timesteps``, the number of timesteps corresponding to the original sequence. - - The pattern provides convenient methods to build and revert interleaved sequences from it: - ``build_pattern_sequence`` maps a dense input tensor of multi-codebook sequences from [B, K, T] - to the interleaved sequence of shape [B, K, S] applying the pattern, with B being the batch size, - K being the number of codebooks, T the number of original timesteps and S the number of sequence steps - for the output sequence. The unfilled positions are replaced with a special token and the built sequence - is returned along with a mask indicating valid tokens. - ``revert_pattern_sequence`` maps back an interleaved sequence of shape [B, K, S] to the original alignment - of codebooks across timesteps to an output tensor of shape [B, K, T], using again a special token and a mask - to fill and specify invalid positions if needed. - See the dedicated methods for more details. - """ - # Pattern layout, for each sequence step, we have a list of coordinates - # corresponding to the original codebook timestep and position. - # The first list is always an empty list in order to properly insert - # a special token to start with. - layout: PatternLayout - timesteps: int - n_q: int - - def __post_init__(self): - assert len(self.layout) > 0 - assert self.layout[0] == [] - self._validate_layout() - self._build_reverted_sequence_scatter_indexes = lru_cache(100)(self._build_reverted_sequence_scatter_indexes) - self._build_pattern_sequence_scatter_indexes = lru_cache(100)(self._build_pattern_sequence_scatter_indexes) - logger.info("New pattern, time steps: %d, sequence steps: %d", self.timesteps, len(self.layout)) - - def _validate_layout(self): - """Runs checks on the layout to ensure a valid pattern is defined. - A pattern is considered invalid if: - - Multiple timesteps for the same codebook are defined in the same sequence step - - The timesteps for a given codebook are not in ascending order as we advance in the sequence - (this would mean that we have future timesteps before past timesteps). - """ - q_timesteps = {q: 0 for q in range(self.n_q)} - for s, seq_coords in enumerate(self.layout): - if len(seq_coords) > 0: - qs = set() - for coord in seq_coords: - qs.add(coord.q) - last_q_timestep = q_timesteps[coord.q] - assert coord.t >= last_q_timestep, \ - f"Past timesteps are found in the sequence for codebook = {coord.q} at step {s}" - q_timesteps[coord.q] = coord.t - # each sequence step contains at max 1 coordinate per codebook - assert len(qs) == len(seq_coords), \ - f"Multiple entries for a same codebook are found at step {s}" - - @property - def num_sequence_steps(self): - return len(self.layout) - 1 - - @property - def max_delay(self): - max_t_in_seq_coords = 0 - for seq_coords in self.layout[1:]: - for coords in seq_coords: - max_t_in_seq_coords = max(max_t_in_seq_coords, coords.t + 1) - return max_t_in_seq_coords - self.timesteps - - @property - def valid_layout(self): - valid_step = len(self.layout) - self.max_delay - return self.layout[:valid_step] - - def get_sequence_coords_with_timestep(self, t: int, q: tp.Optional[int] = None): - """Get codebook coordinates in the layout that correspond to the specified timestep t - and optionally to the codebook q.
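As an illustrative example (assuming the delayed pattern defined later in this file, with n_q=3 and default delays), get_sequence_coords_with_timestep(0, 0) returns [(1, LayoutCoord(0, 0))]: timestep 0 of codebook 0 is emitted at sequence step 1, since step 0 is reserved for the special token.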
Coordinates are returned as a tuple with the sequence step - and the actual codebook coordinates. - """ - assert t <= self.timesteps, "provided timesteps is greater than the pattern's number of timesteps" - if q is not None: - assert q <= self.n_q, "provided number of codebooks is greater than the pattern's number of codebooks" - coords = [] - for s, seq_codes in enumerate(self.layout): - for code in seq_codes: - if code.t == t and (q is None or code.q == q): - coords.append((s, code)) - return coords - - def get_steps_with_timestep(self, t: int, q: tp.Optional[int] = None) -> tp.List[int]: - return [step for step, coords in self.get_sequence_coords_with_timestep(t, q)] - - def get_first_step_with_timesteps(self, t: int, q: tp.Optional[int] = None) -> tp.Optional[int]: - steps_with_timesteps = self.get_steps_with_timestep(t, q) - return steps_with_timesteps[0] if len(steps_with_timesteps) > 0 else None - - def _build_pattern_sequence_scatter_indexes(self, timesteps: int, n_q: int, keep_only_valid_steps: bool, - device: tp.Union[torch.device, str] = 'cpu'): - """Build scatter indexes corresponding to the pattern, up to the provided sequence_steps. - - Args: - timesteps (int): Maximum number of timesteps to consider. - n_q (int): Number of codebooks. - keep_only_valid_steps (bool): Restrict the pattern layout to match only valid steps. - device (Union[torch.device, str]): Device for created tensors. - Returns: - indexes (torch.Tensor): Indexes corresponding to the sequence, of shape [K, S]. - mask (torch.Tensor): Mask corresponding to indexes that matches valid indexes, of shape [K, S]. - """ - assert n_q == self.n_q, f"invalid number of codebooks for the sequence and the pattern: {n_q} != {self.n_q}" - assert timesteps <= self.timesteps, "invalid number of timesteps used to build the sequence from the pattern" - # use the proper layout based on whether we limit ourselves to valid steps only or not, - # note that using the valid_layout will result in a truncated sequence up to the valid steps - ref_layout = self.valid_layout if keep_only_valid_steps else self.layout - # single item indexing being super slow with pytorch vs. numpy, so we use numpy here - indexes = torch.zeros(n_q, len(ref_layout), dtype=torch.long).numpy() - mask = torch.zeros(n_q, len(ref_layout), dtype=torch.bool).numpy() - # fill indexes with last sequence step value that will correspond to our special token - # the last value is n_q * timesteps as we have flattened z and append special token as the last token - # which will correspond to the index: n_q * timesteps - indexes[:] = n_q * timesteps - # iterate over the pattern and fill scattered indexes and mask - for s, sequence_coords in enumerate(ref_layout): - for coords in sequence_coords: - if coords.t < timesteps: - indexes[coords.q, s] = coords.t + coords.q * timesteps - mask[coords.q, s] = 1 - indexes = torch.from_numpy(indexes).to(device) - mask = torch.from_numpy(mask).to(device) - return indexes, mask - - def build_pattern_sequence(self, z: torch.Tensor, special_token: int, keep_only_valid_steps: bool = False): - """Build sequence corresponding to the pattern from the input tensor z. - The sequence is built using up to sequence_steps if specified, and non-pattern - coordinates are filled with the special token. - - Args: - z (torch.Tensor): Input tensor of multi-codebook sequence, of shape [B, K, T]. - special_token (int): Special token used to fill non-pattern coordinates in the new sequence.
- keep_only_valid_steps (bool): Build a sequence from the pattern up to valid (= fully defined) steps. - Steps that are beyond valid steps will be replaced by the special_token in that case. - Returns: - values (torch.Tensor): Interleaved sequence matching the pattern, of shape [B, K, S] with S - corresponding either to the sequence_steps if provided, otherwise to the length of the pattern. - indexes (torch.Tensor): Indexes corresponding to the interleaved sequence, of shape [K, S]. - mask (torch.Tensor): Mask corresponding to indexes that matches valid indexes of shape [K, S]. - """ - B, K, T = z.shape - indexes, mask = self._build_pattern_sequence_scatter_indexes( - T, K, keep_only_valid_steps=keep_only_valid_steps, device=str(z.device) - ) - z = z.view(B, -1) - # we append the special token as the last index of our flattened z tensor - z = torch.cat([z, torch.zeros_like(z[:, :1]) + special_token], dim=1) - values = z[:, indexes.view(-1)] - values = values.view(B, K, indexes.shape[-1]) - return values, indexes, mask - - def _build_reverted_sequence_scatter_indexes(self, sequence_steps: int, n_q: int, - keep_only_valid_steps: bool = False, - is_model_output: bool = False, - device: tp.Union[torch.device, str] = 'cpu'): - """Builds scatter indexes required to retrieve the original multi-codebook sequence - from interleaving pattern. - - Args: - sequence_steps (int): Sequence steps. - n_q (int): Number of codebooks. - keep_only_valid_steps (bool): Build a sequence from the pattern up to valid (= fully defined) steps. - Steps that are beyond valid steps will be replaced by the special_token in that case. - is_model_output (bool): Whether to keep the sequence item corresponding to initial special token or not. - device (Union[torch.device, str]): Device for created tensors. - Returns: - torch.Tensor: Indexes for reconstructing the output, of shape [K, T]. - mask (torch.Tensor): Mask corresponding to indexes that matches valid indexes of shape [K, T]. - """ - ref_layout = self.valid_layout if keep_only_valid_steps else self.layout - # TODO(jade): Do we want to further truncate to only valid timesteps here as well? - timesteps = self.timesteps - assert n_q == self.n_q, f"invalid number of codebooks for the sequence and the pattern: {n_q} != {self.n_q}" - assert sequence_steps <= len(ref_layout), \ - f"sequence to revert is longer than the defined pattern: {sequence_steps} > {len(ref_layout)}" - - # ensure we take the appropriate indexes to keep the model output from the first special token as well - if is_model_output: - ref_layout = ref_layout[1:] - - # single item indexing being super slow with pytorch vs. numpy, so we use numpy here - indexes = torch.zeros(n_q, timesteps, dtype=torch.long).numpy() - mask = torch.zeros(n_q, timesteps, dtype=torch.bool).numpy() - # fill indexes with last sequence step value that will correspond to our special token - indexes[:] = n_q * sequence_steps - for s, sequence_codes in enumerate(ref_layout): - if s < sequence_steps: - for code in sequence_codes: - if code.t < timesteps: - indexes[code.q, code.t] = s + code.q * sequence_steps - mask[code.q, code.t] = 1 - indexes = torch.from_numpy(indexes).to(device) - mask = torch.from_numpy(mask).to(device) - return indexes, mask - - def revert_pattern_sequence(self, s: torch.Tensor, special_token: int, keep_only_valid_steps: bool = False): - """Revert a sequence built from the pattern back to the original multi-codebook sequence without interleaving. 
- The sequence is reverted using up to timesteps if specified, and non-pattern coordinates - are filled with the special token. - - Args: - s (torch.Tensor): Interleaved sequence tensor obtained from the pattern, of shape [B, K, S]. - special_token (int or float): Special token used to fill non-pattern coordinates in the new sequence. - Returns: - values (torch.Tensor): Reverted (de-interleaved) sequence, of shape [B, K, T] with T - corresponding either to the timesteps if provided, or the total timesteps in the pattern otherwise. - indexes (torch.Tensor): Indexes corresponding to the interleaved sequence, of shape [K, T]. - mask (torch.Tensor): Mask corresponding to indexes that matches valid indexes of shape [K, T]. - """ - B, K, S = s.shape - indexes, mask = self._build_reverted_sequence_scatter_indexes( - S, K, keep_only_valid_steps, is_model_output=False, device=str(s.device) - ) - s = s.view(B, -1) - # we append the special token as the last index of our flattened z tensor - s = torch.cat([s, torch.zeros_like(s[:, :1]) + special_token], dim=1) - values = s[:, indexes.view(-1)] - values = values.view(B, K, indexes.shape[-1]) - return values, indexes, mask - - def revert_pattern_logits(self, logits: torch.Tensor, special_token: float, keep_only_valid_steps: bool = False): - """Revert model logits obtained on a sequence built from the pattern - back to a tensor matching the original sequence. - - This method is similar to ``revert_pattern_sequence`` with the following specificities: - 1. It is designed to work with the extra cardinality dimension - 2. We return the logits for the first sequence item that matches the special_token and - whose matching target in the original sequence is the first item of the sequence, - while we skip the last logits as there is no matching target - """ - B, card, K, S = logits.shape - indexes, mask = self._build_reverted_sequence_scatter_indexes( - S, K, keep_only_valid_steps, is_model_output=True, device=logits.device - ) - logits = logits.reshape(B, card, -1) - # we append the special token as the last index of our flattened z tensor - logits = torch.cat([logits, torch.zeros_like(logits[:, :, :1]) + special_token], dim=-1) # [B, card, K x S] - values = logits[:, :, indexes.view(-1)] - values = values.view(B, card, K, indexes.shape[-1]) - return values, indexes, mask - - -class CodebooksPatternProvider(ABC): - """Abstraction around providing pattern for interleaving codebooks. - - The CodebooksPatternProvider abstraction allows implementing various strategies to - define the interleaving pattern of sequences composed of multiple codebooks. For a given - number of codebooks `n_q`, the pattern provider can generate a specified pattern - corresponding to a sequence of `T` timesteps with `n_q` parallel codebooks. This pattern - can be used to construct a new sequence from the original codes respecting the specified - pattern. The pattern is defined as a list of lists of code coordinates, a code coordinate - being a tuple with the original timestep and codebook to build the new sequence. - Note that all patterns must start with an empty list that is then used to insert a first - sequence step of special tokens in the newly generated sequence. - - Args: - n_q (int): number of codebooks. - cached (bool): if True, patterns for a given length are cached. In general - that should be true for efficiency reasons to avoid synchronization points.
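- - Example (an illustrative sketch; ``DelayedPatternProvider`` is defined below, and ``z`` is assumed to be a [B, n_q, T] tensor of codes with ``special_token`` a reserved index): - provider = DelayedPatternProvider(n_q=3) - pattern = provider.get_pattern(timesteps=8) - values, indexes, mask = pattern.build_pattern_sequence(z, special_token)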
- """ - def __init__(self, n_q: int, cached: bool = True): - assert n_q > 0 - self.n_q = n_q - self.get_pattern = lru_cache(100)(self.get_pattern) # type: ignore - - @abstractmethod - def get_pattern(self, timesteps: int) -> Pattern: - """Builds pattern with specific interleaving between codebooks. - - Args: - timesteps (int): Total numer of timesteps. - """ - raise NotImplementedError() - - -class DelayedPatternProvider(CodebooksPatternProvider): - """Provider for delayed pattern across delayed codebooks. - Codebooks are delayed in the sequence and sequence steps will contain codebooks - from different timesteps. - - Example: - Taking timesteps=4 and n_q=3, delays=None, the multi-codebook sequence: - [[1, 2, 3, 4], - [1, 2, 3, 4], - [1, 2, 3, 4]] - The resulting sequence obtained from the returned pattern is: - [[S, 1, 2, 3, 4], - [S, S, 1, 2, 3], - [S, S, S, 1, 2]] - (with S being a special token) - - Args: - n_q (int): Number of codebooks. - delays (Optional[List[int]]): Delay for each of the codebooks. - If delays not defined, each codebook is delayed by 1 compared to the previous one. - flatten_first (int): Flatten the first N timesteps. - empty_initial (int): Prepend with N empty list of coordinates. - """ - def __init__(self, n_q: int, delays: tp.Optional[tp.List[int]] = None, - flatten_first: int = 0, empty_initial: int = 0): - super().__init__(n_q) - if delays is None: - delays = list(range(n_q)) - self.delays = delays - self.flatten_first = flatten_first - self.empty_initial = empty_initial - assert len(self.delays) == self.n_q - assert sorted(self.delays) == self.delays - - def get_pattern(self, timesteps: int) -> Pattern: - out: PatternLayout = [[]] - max_delay = max(self.delays) - if self.empty_initial: - out += [[] for _ in range(self.empty_initial)] - if self.flatten_first: - for t in range(min(timesteps, self.flatten_first)): - for q in range(self.n_q): - out.append([LayoutCoord(t, q)]) - for t in range(self.flatten_first, timesteps + max_delay): - v = [] - for q, delay in enumerate(self.delays): - t_for_q = t - delay - if t_for_q >= self.flatten_first: - v.append(LayoutCoord(t_for_q, q)) - out.append(v) - return Pattern(out, n_q=self.n_q, timesteps=timesteps) - - -class ParallelPatternProvider(DelayedPatternProvider): - """Provider for parallel pattern across codebooks. - This pattern provider is a special case of the delayed pattern with actually no delay, - hence delays=repeat(0, n_q). - - Args: - n_q (int): Number of codebooks. - """ - def __init__(self, n_q: int): - super().__init__(n_q, [0] * n_q) - - -class UnrolledPatternProvider(CodebooksPatternProvider): - """Provider for unrolling codebooks pattern. - This pattern provider enables to represent the codebook flattened completely or only to some extend - while also specifying a given delay between the flattened codebooks representation, allowing to - unroll the codebooks in the sequence. - - Example: - 1. Flattening of the codebooks. - By default, the pattern provider will fully flatten the codebooks such as flattening=range(n_q), - taking n_q = 3 and timesteps = 4: - [[1, 2, 3, 4], - [1, 2, 3, 4], - [1, 2, 3, 4]] - will result into: - [[S, S, 1, S, S, 2, S, S, 3, S, S, 4], - [S, 1, S, S, 2, S, S, 3, S, S, 4, S], - [1, S, S, 2, S, S, 3, S, S, 4, S, S]] - 2. Partial flattening of the codebooks. 
The ``flattening`` parameter makes it possible to specify the inner step - for each of the codebooks, defining which codebooks to flatten (or keep in parallel); for example, - taking n_q = 3, timesteps = 4 and flattening = [0, 1, 1]: - [[1, 2, 3, 4], - [1, 2, 3, 4], - [1, 2, 3, 4]] - will result in: - [[S, 1, S, S, 2, S, S, 3, S, S, 4, S], - [S, 1, S, S, 2, S, S, 3, S, S, 4, S], - [1, S, S, 2, S, S, 3, S, S, 4, S, S]] - 3. Flattening with delay. The ``delay`` parameter makes it possible to further unroll the sequence of codebooks, - specifying the delay per codebook. Note that the delay between codebooks flattened to the - same inner timestep should be coherent. For example, taking n_q = 3, timesteps = 4, flattening = [0, 1, 1] - and delays = [0, 3, 3]: - [[1, 2, 3, 4], - [1, 2, 3, 4], - [1, 2, 3, 4]] - will result in: - [[S, S, S, 1, S, 2, S, 3, S, 4], - [S, S, S, 1, S, 2, S, 3, S, 4], - [1, 2, 3, S, 4, S, 5, S, 6, S]] - - Args: - n_q (int): Number of codebooks. - flattening (Optional[List[int]]): Flattening schema over the codebooks. If not defined, - the codebooks will be flattened to 1 codebook per step, meaning that the sequence will - have n_q extra steps for each timestep. - delays (Optional[List[int]]): Delay for each of the codebooks. If not defined, - no delay is added and the delays therefore default to [0] * ``n_q``. - Note that two codebooks that will be flattened to the same inner step - should have the same delay, otherwise the pattern is considered invalid. - """ - FlattenedCodebook = namedtuple('FlattenedCodebook', ['codebooks', 'delay']) - - def __init__(self, n_q: int, flattening: tp.Optional[tp.List[int]] = None, - delays: tp.Optional[tp.List[int]] = None): - super().__init__(n_q) - if flattening is None: - flattening = list(range(n_q)) - if delays is None: - delays = [0] * n_q - assert len(flattening) == n_q - assert len(delays) == n_q - assert sorted(flattening) == flattening - assert sorted(delays) == delays - self._flattened_codebooks = self._build_flattened_codebooks(delays, flattening) - self.max_delay = max(delays) - - def _build_flattened_codebooks(self, delays: tp.List[int], flattening: tp.List[int]): - """Build a flattened codebooks representation as a dictionary of inner step - and the actual codebook indices corresponding to the flattened codebook. For convenience, we - also store the delay associated with the flattened codebook to avoid maintaining an extra mapping. - """ - flattened_codebooks: dict = {} - for q, (inner_step, delay) in enumerate(zip(flattening, delays)): - if inner_step not in flattened_codebooks: - flat_codebook = UnrolledPatternProvider.FlattenedCodebook(codebooks=[q], delay=delay) - else: - flat_codebook = flattened_codebooks[inner_step] - assert flat_codebook.delay == delay, ( - "Delay and flattening between codebooks is inconsistent: ", - "two codebooks flattened to the same position should have the same delay." - ) - flat_codebook.codebooks.append(q) - flattened_codebooks[inner_step] = flat_codebook - return flattened_codebooks - - @property - def _num_inner_steps(self): - """Number of inner steps to unroll between timesteps in order to flatten the codebooks. - """ - return max([inner_step for inner_step in self._flattened_codebooks.keys()]) + 1 - - def num_virtual_steps(self, timesteps: int) -> int: - return timesteps * self._num_inner_steps + 1 - - def get_pattern(self, timesteps: int) -> Pattern: - """Builds pattern for delay across codebooks. - - Args: - timesteps (int): Total number of timesteps.
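- - For instance (an illustrative case), with flattening = [0, 1, 1] there are two inner steps per timestep, so the returned layout holds about two sequence steps per timestep plus the leading special-token step (compare ``num_virtual_steps`` above), before the extra steps introduced by delay unrolling.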
- """ - # the PatternLayout is built as a tuple of sequence position and list of coordinates - # so that it can be reordered properly given the required delay between codebooks of given timesteps - indexed_out: list = [(-1, [])] - max_timesteps = timesteps + self.max_delay - for t in range(max_timesteps): - # for each timestep, we unroll the flattened codebooks, - # emitting the sequence step with the corresponding delay - for step in range(self._num_inner_steps): - if step in self._flattened_codebooks: - # we have codebooks at this virtual step to emit - step_codebooks = self._flattened_codebooks[step] - t_for_q = t + step_codebooks.delay - coords = [LayoutCoord(t, q) for q in step_codebooks.codebooks] - if t_for_q < max_timesteps and t < max_timesteps: - indexed_out.append((t_for_q, coords)) - else: - # there is no codebook in this virtual step so we emit an empty list - indexed_out.append((t, [])) - out = [coords for _, coords in sorted(indexed_out)] - return Pattern(out, n_q=self.n_q, timesteps=timesteps) - - -class VALLEPattern(CodebooksPatternProvider): - """Almost VALL-E style pattern. We futher allow some delays for the - codebooks other than the first one. - - Args: - n_q (int): Number of codebooks. - delays (Optional[List[int]]): Delay for each of the codebooks. - If delays not defined, each codebook is delayed by 1 compared to the previous one. - """ - def __init__(self, n_q: int, delays: tp.Optional[tp.List[int]] = None): - super().__init__(n_q) - if delays is None: - delays = [0] * (n_q - 1) - self.delays = delays - assert len(self.delays) == self.n_q - 1 - assert sorted(self.delays) == self.delays - - def get_pattern(self, timesteps: int) -> Pattern: - out: PatternLayout = [[]] - for t in range(timesteps): - out.append([LayoutCoord(t, 0)]) - max_delay = max(self.delays) - for t in range(timesteps + max_delay): - v = [] - for q, delay in enumerate(self.delays): - t_for_q = t - delay - if t_for_q >= 0: - v.append(LayoutCoord(t_for_q, q + 1)) - out.append(v) - return Pattern(out, n_q=self.n_q, timesteps=timesteps) - - -class MusicLMPattern(CodebooksPatternProvider): - """Almost MusicLM style pattern. This is equivalent to full flattening - but in a different order. - - Args: - n_q (int): Number of codebooks. - group_by (int): Number of codebooks to group together. 
- """ - def __init__(self, n_q: int, group_by: int = 2): - super().__init__(n_q) - self.group_by = group_by - - def get_pattern(self, timesteps: int) -> Pattern: - out: PatternLayout = [[]] - for offset in range(0, self.n_q, self.group_by): - for t in range(timesteps): - for q in range(offset, offset + self.group_by): - out.append([LayoutCoord(t, q)]) - return Pattern(out, n_q=self.n_q, timesteps=timesteps) diff --git a/spaces/TIMBOVILL/RVC-Noobie/lib/infer_pack/modules/F0Predictor/__init__.py b/spaces/TIMBOVILL/RVC-Noobie/lib/infer_pack/modules/F0Predictor/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/TRI-ML/risk_biased_prediction/risk_biased/predictors/biased_predictor.py b/spaces/TRI-ML/risk_biased_prediction/risk_biased/predictors/biased_predictor.py deleted file mode 100644 index 1a4d4076d5bf25c287b928573959d4bf2e9409f8..0000000000000000000000000000000000000000 --- a/spaces/TRI-ML/risk_biased_prediction/risk_biased/predictors/biased_predictor.py +++ /dev/null @@ -1,568 +0,0 @@ -from dataclasses import dataclass -from functools import partial -from typing import Callable, List, Optional, Tuple, Union - - -from einops import repeat -from mmcv import Config -import pytorch_lightning as pl -import torch - -from risk_biased.models.cvae_params import CVAEParams -from risk_biased.models.biased_cvae_model import ( - cvae_factory, -) - -from risk_biased.utils.cost import TTCCostTorch, TTCCostParams -from risk_biased.utils.risk import get_risk_estimator -from risk_biased.utils.risk import get_risk_level_sampler - - -@dataclass -class LitTrajectoryPredictorParams: - """ - cvae_params: CVAEParams class defining the necessary parameters for the CVAE model - risk distribution: dict of string and values defining the risk distribution to use - risk_estimator: dict of string and values defining the risk estimator to use - kl_weight: float defining the weight of the KL term in the loss function - kl_threshold: float defining the threshold to apply when computing kl divergence (avoid posterior collapse) - risk_weight: float defining the weight of the risk term in the loss function - n_mc_samples_risk: int defining the number of Monte Carlo samples to use when estimating the risk - n_mc_samples_biased: int defining the number of Monte Carlo samples to use when estimating the expected biased cost - dt: float defining the duration between two consecutive time steps - learning_rate: float defining the learning rate for the optimizer - use_risk_constraint: bool defining whether to use the risk constrained optimization procedure - risk_constraint_update_every_n_epoch: int defining the number of epochs between two risk weight updates - risk_constraint_weight_update_factor: float defining the factor by which the risk weight is multiplied at each update - risk_constraint_weight_maximum: float defining the maximum value of the risk weight - num_samples_min_fde: int defining the number of samples to use when estimating the minimum FDE - condition_on_ego_future: bool defining whether to condition the biasing on the ego future trajectory (else on the ego past) - - """ - - cvae_params: CVAEParams - risk_distribution: dict - risk_estimator: dict - kl_weight: float - kl_threshold: float - risk_weight: float - n_mc_samples_risk: int - n_mc_samples_biased: int - dt: float - learning_rate: float - use_risk_constraint: bool - risk_constraint_update_every_n_epoch: int - risk_constraint_weight_update_factor: float - 
risk_constraint_weight_maximum: float - num_samples_min_fde: int - condition_on_ego_future: bool - - @staticmethod - def from_config(cfg: Config): - cvae_params = CVAEParams.from_config(cfg) - return LitTrajectoryPredictorParams( - risk_distribution=cfg.risk_distribution, - risk_estimator=cfg.risk_estimator, - kl_weight=cfg.kl_weight, - kl_threshold=cfg.kl_threshold, - risk_weight=cfg.risk_weight, - n_mc_samples_risk=cfg.n_mc_samples_risk, - n_mc_samples_biased=cfg.n_mc_samples_biased, - dt=cfg.dt, - learning_rate=cfg.learning_rate, - cvae_params=cvae_params, - use_risk_constraint=cfg.use_risk_constraint, - risk_constraint_update_every_n_epoch=cfg.risk_constraint_update_every_n_epoch, - risk_constraint_weight_update_factor=cfg.risk_constraint_weight_update_factor, - risk_constraint_weight_maximum=cfg.risk_constraint_weight_maximum, - num_samples_min_fde=cfg.num_samples_min_fde, - condition_on_ego_future=cfg.condition_on_ego_future, - ) - - -class LitTrajectoryPredictor(pl.LightningModule): - """Pytorch Lightning Module for Trajectory Prediction with the biased cvae model - - Args: - params : dataclass object containing the necessary parameters - cost_params: dataclass object defining the TTC cost function - unnormalizer: function that takes in a trajectory and an offset and that outputs the - unnormalized trajectory - """ - - def __init__( - self, - params: LitTrajectoryPredictorParams, - cost_params: TTCCostParams, - unnormalizer: Callable[[torch.Tensor, torch.Tensor], torch.Tensor], - ) -> None: - super().__init__() - model = cvae_factory( - params.cvae_params, - cost_function=TTCCostTorch(cost_params), - risk_estimator=get_risk_estimator(params.risk_estimator), - training_mode="cvae", - ) - self.model = model - self.params = params - self._unnormalize_trajectory = unnormalizer - self.set_training_mode("cvae") - - self.learning_rate = params.learning_rate - self.num_samples_min_fde = params.num_samples_min_fde - - self.dynamic_state_dim = params.cvae_params.dynamic_state_dim - self.dt = params.cvae_params.dt - - self.use_risk_constraint = params.use_risk_constraint - self.risk_weight = params.risk_weight - self.risk_weight_ratio = params.risk_weight / params.kl_weight - self.kl_weight = params.kl_weight - if self.use_risk_constraint: - self.risk_constraint_update_every_n_epoch = ( - params.risk_constraint_update_every_n_epoch - ) - self.risk_constraint_weight_update_factor = ( - params.risk_constraint_weight_update_factor - ) - self.risk_constraint_weight_maximum = params.risk_constraint_weight_maximum - - self._risk_sampler = get_risk_level_sampler(params.risk_distribution) - - def set_training_mode(self, training_mode: str): - self.model.set_training_mode(training_mode) - self.partial_get_loss = partial( - self.model.get_loss, - kl_threshold=self.params.kl_threshold, - n_samples_risk=self.params.n_mc_samples_risk, - n_samples_biased=self.params.n_mc_samples_biased, - dt=self.params.dt, - unnormalizer=self._unnormalize_trajectory, - ) - - def _get_loss( - self, - x: torch.Tensor, - mask_x: torch.Tensor, - map: torch.Tensor, - mask_map: torch.Tensor, - y: torch.Tensor, - mask_y: torch.Tensor, - mask_loss: torch.Tensor, - x_ego: torch.Tensor, - y_ego: torch.Tensor, - offset: Optional[torch.Tensor] = None, - risk_level: Optional[torch.Tensor] = None, - ) -> Tuple[Union[torch.Tensor, Tuple[torch.Tensor, ...]], dict]: - """Compute loss based on trajectory history x and future y - - Args: - x: (batch_size, num_agents, num_steps, state_dim) tensor of history - mask_x: (batch_size, num_agents, 
num_steps) tensor of bool mask - map: (batch_size, num_objects, object_sequence_length, map_feature_dim) tensor of encoded map objects - mask_map: (batch_size, num_objects, object_sequence_length) tensor, True where map features are valid and False where they are padding - y: (batch_size, num_agents, num_steps_future, state_dim) tensor of future trajectory. - mask_y: (batch_size, num_agents, num_steps_future) tensor of bool mask. - mask_loss: (batch_size, num_agents, num_steps_future) tensor of bool mask set to True where the loss - should be computed and to False where it shouldn't be - offset : (batch_size, num_agents, state_dim) offset position from ego - risk_level : (batch_size, num_agents) tensor of risk levels desired for future trajectories - - Returns: - Union[torch.Tensor, Tuple[torch.Tensor, ...]]: (1,) loss tensor or tuple of - loss tensors - dict: dict that contains values to be logged - """ - return self.partial_get_loss( - x=x, - mask_x=mask_x, - map=map, - mask_map=mask_map, - y=y, - mask_y=mask_y, - mask_loss=mask_loss, - offset=offset, - risk_level=risk_level, - x_ego=x_ego, - y_ego=y_ego, - risk_weight=self.risk_weight, - kl_weight=self.kl_weight, - ) - - def log_with_prefix( - self, - log_dict: dict, - prefix: Optional[str] = None, - on_step: Optional[bool] = None, - on_epoch: Optional[bool] = None, - ) -> None: - """Log entries in log_dict while optionally adding a prefix and "/" to its keys - - Args: - log_dict: dict that contains values to be logged - prefix: prefix to be added to keys - on_step: if True logs at this step. None auto-logs at the training_step but not - validation/test_step - on_epoch: if True logs epoch accumulated metrics. None auto-logs at the val/test - step but not training_step - """ - if prefix is None: - prefix = "" - else: - prefix += "/" - - for (metric, value) in log_dict.items(): - metric = prefix + metric - self.log(metric, value, on_step=on_step, on_epoch=on_epoch) - - def configure_optimizers( - self, - ) -> Union[torch.optim.Optimizer, List[torch.optim.Optimizer]]: - """Configure optimizer for PyTorch-Lightning - - Returns: - torch.optim.Optimizer: optimizer to be used for training - """ - if isinstance(self.model.get_parameters(), list): - self._optimizers = [ - torch.optim.Adam(params, lr=self.learning_rate) - for params in self.model.get_parameters() - ] - else: - self._optimizers = [ - torch.optim.Adam(self.model.get_parameters(), lr=self.learning_rate) - ] - return self._optimizers - - def training_step( - self, - batch: Tuple[ - torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor - ], - batch_idx: int, - ) -> dict: - """Training step definition for PyTorch-Lightning - - Args: - batch : [(batch_size, num_agents, num_steps, state_dim), # past trajectories of all agents in the scene - (batch_size, num_agents, num_steps), # mask past False where past trajectories are padding data - (batch_size, num_agents, num_steps_future, state_dim), # future trajectory - (batch_size, num_agents, num_steps_future), # mask future False where future trajectories are padding data - (batch_size, num_agents, num_steps_future), # mask loss False where future trajectories are not to be predicted - (batch_size, num_objects, object_seq_len, state_dim), # map object sequences in the scene - (batch_size, num_objects, object_seq_len), # mask map False where map objects are padding data - (batch_size, num_agents, state_dim), # position offset of all agents relative to ego at present time - (batch_size, 1, num_steps, state_dim), # ego past trajectory - (batch_size, 1,
num_steps_future, state_dim)] # ego future trajectory - batch_idx : batch_idx to be used by PyTorch-Lightning - - Returns: - dict: dict of outputs containing loss - """ - x, mask_x, y, mask_y, mask_loss, map, mask_map, offset, x_ego, y_ego = batch - risk_level = repeat( - self._risk_sampler.sample(x.shape[0], x.device), - "b -> b num_agents", - num_agents=x.shape[1], - ) - loss, log_dict = self._get_loss( - x=x, - mask_x=mask_x, - map=map, - mask_map=mask_map, - y=y, - mask_y=mask_y, - mask_loss=mask_loss, - offset=offset, - risk_level=risk_level, - x_ego=x_ego, - y_ego=y_ego, - ) - if isinstance(loss, tuple): - loss = sum(loss) - self.log_with_prefix(log_dict, prefix="train", on_step=True, on_epoch=False) - - return {"loss": loss} - - def training_epoch_end(self, outputs: List[dict]) -> None: - """Called at the end of the training epoch with the outputs of all training steps - - Args: - outputs: list of outputs of all training steps in the current epoch - """ - if self.use_risk_constraint: - if ( - self.model.training_mode == "bias" - and (self.trainer.current_epoch + 1) - % self.risk_constraint_update_every_n_epoch - == 0 - ): - self.risk_weight_ratio *= self.risk_constraint_weight_update_factor - if self.risk_weight_ratio < self.risk_constraint_weight_maximum: - sum_weight = self.risk_weight + self.kl_weight - self.risk_weight = ( - sum_weight - * self.risk_weight_ratio - / (1 + self.risk_weight_ratio) - ) - self.kl_weight = sum_weight / (1 + self.risk_weight_ratio) - # self.risk_weight *= self.risk_constraint_weight_update_factor - # if self.risk_weight > self.risk_constraint_weight_maximum: - # self.risk_weight = self.risk_constraint_weight_maximum - - def _get_risk_tensor( - self, - batch_size: int, - num_agents: int, - device: torch.device, - risk_level: Optional[torch.Tensor] = None, - ): - """This function reformats the different possible forms of the risk_level input argument into a tensor of shape (batch_size, num_agents). - If given a tensor, the same tensor is returned (moved to the given device). - If given a float value, a tensor filled with this value is returned. - If given None, None is returned.
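- For example (illustrative): calling it with risk_level=0.8, batch_size=2 and num_agents=3 yields a (2, 3) tensor filled with 0.8.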
- - Args: - batch_size : desired batch size - num_agents : desired number of agents - device : device on which we want to store risk - risk_level : The risk level as a tensor, a float value or None - - Returns: - Optional[torch.Tensor]: (batch_size, num_agents) tensor of risk levels, or None if risk_level is None - """ - if risk_level is not None: - if isinstance(risk_level, float): - risk_level = ( - torch.ones(batch_size, num_agents, device=device) * risk_level - ) - else: - risk_level = risk_level.to(device) - else: - risk_level = None - - return risk_level - - def validation_step( - self, - batch: Tuple[ - torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor - ], - batch_idx: int, - risk_level: float = 1.0, - ) -> dict: - """Validation step definition for PyTorch-Lightning - - Args: - batch : [(batch_size, num_agents, num_steps, state_dim), # past trajectories of all agents in the scene - (batch_size, num_agents, num_steps), # mask past False where past trajectories are padding data - (batch_size, num_agents, num_steps_future, state_dim), # future trajectory - (batch_size, num_agents, num_steps_future), # mask future False where future trajectories are padding data - (batch_size, num_agents, num_steps_future), # mask loss False where future trajectories are not to be predicted - (batch_size, num_objects, object_seq_len, state_dim), # map object sequences in the scene - (batch_size, num_objects, object_seq_len), # mask map False where map objects are padding data - (batch_size, num_agents, state_dim), # position offset of all agents relative to ego at present time - (batch_size, 1, num_steps, state_dim), # ego past trajectory - (batch_size, 1, num_steps_future, state_dim)] # ego future trajectory - batch_idx : batch_idx to be used by PyTorch-Lightning - risk_level : optional desired risk level - - Returns: - dict: dict of outputs containing loss - """ - x, mask_x, y, mask_y, mask_loss, map, mask_map, offset, x_ego, y_ego = batch - - risk_level = self._get_risk_tensor( - x.shape[0], x.shape[1], x.device, risk_level=risk_level - ) - self.model.eval() - log_dict_accuracy = self.model.get_prediction_accuracy( - x=x, - mask_x=mask_x, - map=map, - mask_map=mask_map, - y=y, - mask_loss=mask_loss, - offset=offset, - x_ego=x_ego, - y_ego=y_ego, - unnormalizer=self._unnormalize_trajectory, - risk_level=risk_level, - num_samples_min_fde=self.num_samples_min_fde, - ) - - loss, log_dict_loss = self._get_loss( - x=x, - mask_x=mask_x, - map=map, - mask_map=mask_map, - y=y, - mask_y=mask_y, - mask_loss=mask_loss, - offset=offset, - risk_level=risk_level, - x_ego=x_ego, - y_ego=y_ego, - ) - - if isinstance(loss, tuple): - loss = sum(loss) - - self.log_with_prefix( - dict(log_dict_accuracy, **log_dict_loss), - prefix="val", - on_step=False, - on_epoch=True, - ) - self.model.train() - return {"loss": loss} - - def test_step( - self, - batch: Tuple[ - torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor - ], - batch_idx: int, - risk_level: Optional[torch.Tensor] = None, - ) -> dict: - """Test step definition for PyTorch-Lightning - - Args: - batch : [(batch_size, num_agents, num_steps, state_dim), # past trajectories of all agents in the scene - (batch_size, num_agents, num_steps), # mask past False where past trajectories are padding data - (batch_size, num_agents, num_steps_future, state_dim), # future trajectory - (batch_size, num_agents, num_steps_future), # mask future False where future trajectories are padding data - (batch_size, num_agents, num_steps_future), # mask loss False where future trajectories are not to be predicted - (batch_size, num_objects, object_seq_len, state_dim), #
map object sequences in the scene - (batch_size, num_objects, object_seq_len), # mask map False where map objects are padding data - (batch_size, num_agents, state_dim), # position offset of all agents relative to ego at present time - (batch_size, 1, num_steps, state_dim), # ego past trajectory - (batch_size, 1, num_steps_future, state_dim)] # ego future trajectory - batch_idx : batch_idx to be used by PyTorch-Lightning - risk_level : optional desired risk level - - Returns: - dict: dict of outputs containing loss - """ - x, mask_x, y, mask_y, mask_loss, map, mask_map, offset, x_ego, y_ego = batch - risk_level = self._get_risk_tensor( - x.shape[0], x.shape[1], x.device, risk_level=risk_level - ) - loss, log_dict = self._get_loss( - x=x, - mask_x=mask_x, - map=map, - mask_map=mask_map, - y=y, - mask_y=mask_y, - mask_loss=mask_loss, - offset=offset, - risk_level=risk_level, - x_ego=x_ego, - y_ego=y_ego, - ) - if isinstance(loss, tuple): - loss = sum(loss) - self.log_with_prefix(log_dict, prefix="test", on_step=False, on_epoch=True) - return {"loss": loss} - - def predict_step( - self, - batch: Tuple[torch.Tensor, torch.Tensor], - batch_idx: int = 0, - risk_level: Optional[torch.Tensor] = None, - n_samples: int = 0, - return_weights: bool = False, - ) -> torch.Tensor: - """Predict step definition for PyTorch-Lightning - - Args: - batch: [(batch_size, num_agents, num_steps, state_dim), # past trajectories of all agents in the scene - (batch_size, num_agents, num_steps), # mask past False where past trajectories are padding data - (batch_size, num_objects, object_seq_len, state_dim), # map object sequences in the scene - (batch_size, num_objects, object_seq_len), # mask map False where map objects are padding data - (batch_size, num_agents, state_dim), # position offset of all agents relative to ego at present time - (batch_size, 1, num_steps, state_dim), # past trajectory of the ego agent in the scene - (batch_size, 1, num_steps_future, state_dim),] # future trajectory of the ego agent in the scene - batch_idx : batch_idx to be used by PyTorch-Lightning (unused here) - risk_level : optional desired risk level - n_samples: Number of samples to predict per agent. - With a value of 0, the `n_samples` dim is not included in the output. - return_weights: If True, also returns the sample weights - - Returns: - (batch_size, (n_samples), num_steps_future, state_dim) tensor - """ - x, mask_x, map, mask_map, offset, x_ego, y_ego = batch - risk_level = self._get_risk_tensor( - batch_size=x.shape[0], - num_agents=x.shape[1], - device=x.device, - risk_level=risk_level, - ) - y_sampled, weights, _ = self.model( - x, - mask_x, - map, - mask_map, - offset=offset, - x_ego=x_ego, - y_ego=y_ego, - risk_level=risk_level, - n_samples=n_samples, - ) - predict_sampled = self._unnormalize_trajectory(y_sampled, offset) - if return_weights: - return predict_sampled, weights - else: - return predict_sampled - - def predict_loop_once( - self, - batch: Tuple[torch.Tensor, torch.Tensor], - batch_idx: int = 0, - risk_level: Optional[torch.Tensor] = None, - ) -> torch.Tensor: - """Predict with refinement: - A first prediction is made as in predict_step; however, instead of being unnormalized and returned, - it is fed to the encoder that was trained to encode past and ground-truth future. - Then the decoder is used again, but its latent input sample is biased by the encoder - instead of being a sample of the prior distribution. - Then, as in predict_step, the result is unnormalized and returned.
- - Args: - batch: [(batch_size, num_agents, num_steps, state_dim), # past trajectories of all agents in the scene - (batch_size, num_agents, num_steps), # mask past False where past trajectories are padding data - (batch_size, num_objects, object_seq_len, state_dim), # map object sequences in the scene - (batch_size, num_objects, object_seq_len), # mask map False where map objects are padding data - (batch_size, num_agents, state_dim),] # position offset of all agents relative to ego at present time - batch_idx : batch_idx to be used by PyTorch-Lightning (unused here). Defaults to 0. - risk_level : optional desired risk level - - Returns: - torch.Tensor: (batch_size, num_steps_future, state_dim) tensor - """ - x, mask_x, map, mask_map, offset = batch - risk_level = self._get_risk_tensor( - x.shape[0], x.shape[1], x.device, risk_level=risk_level - ) - y_sampled, _ = self.model( - x, - mask_x, - map, - mask_map, - offset=offset, - risk_level=risk_level, - ) - mask_y = repeat(mask_x.any(-1), "b a -> b a f", f=y_sampled.shape[-2]) - y_sampled, _ = self.model( - x, - mask_x, - map, - mask_map, - y_sampled, - mask_y, - offset=offset, - risk_level=risk_level, - ) - predict_sampled = self._unnormalize_trajectory(y_sampled, offset=offset) - return predict_sampled diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/panel.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/panel.py deleted file mode 100644 index d522d80b5189554d1acf9b46d5db1981b946d712..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/panel.py +++ /dev/null @@ -1,308 +0,0 @@ -from typing import TYPE_CHECKING, Optional - -from .align import AlignMethod -from .box import ROUNDED, Box -from .cells import cell_len -from .jupyter import JupyterMixin -from .measure import Measurement, measure_renderables -from .padding import Padding, PaddingDimensions -from .segment import Segment -from .style import Style, StyleType -from .text import Text, TextType - -if TYPE_CHECKING: - from .console import Console, ConsoleOptions, RenderableType, RenderResult - - -class Panel(JupyterMixin): - """A console renderable that draws a border around its contents. - - Example: - >>> console.print(Panel("Hello, World!")) - - Args: - renderable (RenderableType): A console renderable object. - box (Box, optional): A Box instance that defines the look of the border (see :ref:`appendix_box`). - Defaults to box.ROUNDED. - safe_box (bool, optional): Disable box characters that don't display on windows legacy terminal with *raster* fonts. Defaults to True. - expand (bool, optional): If True the panel will stretch to fill the console - width, otherwise it will be sized to fit the contents. Defaults to True. - style (str, optional): The style of the panel (border and contents). Defaults to "none". - border_style (str, optional): The style of the border. Defaults to "none". - width (Optional[int], optional): Optional width of panel. Defaults to None to auto-detect. - height (Optional[int], optional): Optional height of panel. Defaults to None to auto-detect. - padding (Optional[PaddingDimensions]): Optional padding around renderable. Defaults to 0. - highlight (bool, optional): Enable automatic highlighting of panel title (if str). Defaults to False.
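- - Example (an illustrative sketch using the ``fit`` constructor defined below): - >>> console.print(Panel.fit("Hello, World!", title="greeting"))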
- """ - - def __init__( - self, - renderable: "RenderableType", - box: Box = ROUNDED, - *, - title: Optional[TextType] = None, - title_align: AlignMethod = "center", - subtitle: Optional[TextType] = None, - subtitle_align: AlignMethod = "center", - safe_box: Optional[bool] = None, - expand: bool = True, - style: StyleType = "none", - border_style: StyleType = "none", - width: Optional[int] = None, - height: Optional[int] = None, - padding: PaddingDimensions = (0, 1), - highlight: bool = False, - ) -> None: - self.renderable = renderable - self.box = box - self.title = title - self.title_align: AlignMethod = title_align - self.subtitle = subtitle - self.subtitle_align = subtitle_align - self.safe_box = safe_box - self.expand = expand - self.style = style - self.border_style = border_style - self.width = width - self.height = height - self.padding = padding - self.highlight = highlight - - @classmethod - def fit( - cls, - renderable: "RenderableType", - box: Box = ROUNDED, - *, - title: Optional[TextType] = None, - title_align: AlignMethod = "center", - subtitle: Optional[TextType] = None, - subtitle_align: AlignMethod = "center", - safe_box: Optional[bool] = None, - style: StyleType = "none", - border_style: StyleType = "none", - width: Optional[int] = None, - padding: PaddingDimensions = (0, 1), - ) -> "Panel": - """An alternative constructor that sets expand=False.""" - return cls( - renderable, - box, - title=title, - title_align=title_align, - subtitle=subtitle, - subtitle_align=subtitle_align, - safe_box=safe_box, - style=style, - border_style=border_style, - width=width, - padding=padding, - expand=False, - ) - - @property - def _title(self) -> Optional[Text]: - if self.title: - title_text = ( - Text.from_markup(self.title) - if isinstance(self.title, str) - else self.title.copy() - ) - title_text.end = "" - title_text.plain = title_text.plain.replace("\n", " ") - title_text.no_wrap = True - title_text.expand_tabs() - title_text.pad(1) - return title_text - return None - - @property - def _subtitle(self) -> Optional[Text]: - if self.subtitle: - subtitle_text = ( - Text.from_markup(self.subtitle) - if isinstance(self.subtitle, str) - else self.subtitle.copy() - ) - subtitle_text.end = "" - subtitle_text.plain = subtitle_text.plain.replace("\n", " ") - subtitle_text.no_wrap = True - subtitle_text.expand_tabs() - subtitle_text.pad(1) - return subtitle_text - return None - - def __rich_console__( - self, console: "Console", options: "ConsoleOptions" - ) -> "RenderResult": - _padding = Padding.unpack(self.padding) - renderable = ( - Padding(self.renderable, _padding) if any(_padding) else self.renderable - ) - style = console.get_style(self.style) - border_style = style + console.get_style(self.border_style) - width = ( - options.max_width - if self.width is None - else min(options.max_width, self.width) - ) - - safe_box: bool = console.safe_box if self.safe_box is None else self.safe_box - box = self.box.substitute(options, safe=safe_box) - - def align_text( - text: Text, width: int, align: str, character: str, style: Style - ) -> Text: - """Gets new aligned text. - - Args: - text (Text): Title or subtitle text. - width (int): Desired width. - align (str): Alignment. - character (str): Character for alignment. 
- style (Style): Border style - - Returns: - Text: New text instance - """ - text = text.copy() - text.truncate(width) - excess_space = width - cell_len(text.plain) - if excess_space: - if align == "left": - return Text.assemble( - text, - (character * excess_space, style), - no_wrap=True, - end="", - ) - elif align == "center": - left = excess_space // 2 - return Text.assemble( - (character * left, style), - text, - (character * (excess_space - left), style), - no_wrap=True, - end="", - ) - else: - return Text.assemble( - (character * excess_space, style), - text, - no_wrap=True, - end="", - ) - return text - - title_text = self._title - if title_text is not None: - title_text.stylize_before(border_style) - - child_width = ( - width - 2 - if self.expand - else console.measure( - renderable, options=options.update_width(width - 2) - ).maximum - ) - child_height = self.height or options.height or None - if child_height: - child_height -= 2 - if title_text is not None: - child_width = min( - options.max_width - 2, max(child_width, title_text.cell_len + 2) - ) - - width = child_width + 2 - child_options = options.update( - width=child_width, height=child_height, highlight=self.highlight - ) - lines = console.render_lines(renderable, child_options, style=style) - - line_start = Segment(box.mid_left, border_style) - line_end = Segment(f"{box.mid_right}", border_style) - new_line = Segment.line() - if title_text is None or width <= 4: - yield Segment(box.get_top([width - 2]), border_style) - else: - title_text = align_text( - title_text, - width - 4, - self.title_align, - box.top, - border_style, - ) - yield Segment(box.top_left + box.top, border_style) - yield from console.render(title_text, child_options.update_width(width - 4)) - yield Segment(box.top + box.top_right, border_style) - - yield new_line - for line in lines: - yield line_start - yield from line - yield line_end - yield new_line - - subtitle_text = self._subtitle - if subtitle_text is not None: - subtitle_text.stylize_before(border_style) - - if subtitle_text is None or width <= 4: - yield Segment(box.get_bottom([width - 2]), border_style) - else: - subtitle_text = align_text( - subtitle_text, - width - 4, - self.subtitle_align, - box.bottom, - border_style, - ) - yield Segment(box.bottom_left + box.bottom, border_style) - yield from console.render( - subtitle_text, child_options.update_width(width - 4) - ) - yield Segment(box.bottom + box.bottom_right, border_style) - - yield new_line - - def __rich_measure__( - self, console: "Console", options: "ConsoleOptions" - ) -> "Measurement": - _title = self._title - _, right, _, left = Padding.unpack(self.padding) - padding = left + right - renderables = [self.renderable, _title] if _title else [self.renderable] - - if self.width is None: - width = ( - measure_renderables( - console, - options.update_width(options.max_width - padding - 2), - renderables, - ).maximum - + padding - + 2 - ) - else: - width = self.width - return Measurement(width, width) - - -if __name__ == "__main__": # pragma: no cover - from .console import Console - - c = Console() - - from .box import DOUBLE, ROUNDED - from .padding import Padding - - p = Panel( - "Hello, World!", - title="rich.Panel", - style="white on blue", - box=DOUBLE, - padding=1, - ) - - c.print() - c.print(p) diff --git a/spaces/Vageesh1/bio_generator/app.py b/spaces/Vageesh1/bio_generator/app.py deleted file mode 100644 index 66a95a4718ae39100c1235d7afad9bd631367757..0000000000000000000000000000000000000000 --- 
a/spaces/Vageesh1/bio_generator/app.py +++ /dev/null @@ -1,154 +0,0 @@ -import streamlit as st -import langchain -import pandas as pd -import numpy as np -import os - -import re - -from langchain.chat_models import ChatOpenAI -import openai -from langchain import HuggingFaceHub, LLMChain, PromptTemplate -from langchain.memory import ConversationBufferWindowMemory -from langchain.chains import ConversationalRetrievalChain - -trait_content_df=pd.read_csv('AI Personality Chart trait_content (2).csv') -trait_content_df=trait_content_df.drop(0,axis=0) -trait_content_df.rename(columns={'Column 1':'Question','Column 2':'Options','Column 3':'Traits','Column 4':'Content'},inplace=True) -trait_content_df['Title'].fillna(method='ffill',inplace=True) -trait_content_df['Question'].fillna(method='ffill',inplace=True) - -template = """ -Imagine you're someone looking to create a unique personalized bio based on your traits and experiences. You've shared some details about your background, and now it's time to craft a bio that stands out. Respond in the second person and avoid using the same sentences for different users. Your response should be concise and conclude within 150 words. - -{history} -You: {human_input} -Bot: - -[CHARACTER_LIMIT=150] -""" - -prompt = PromptTemplate( - input_variables=["history", "human_input"], - template=template -) - -llm_chain = LLMChain( - llm = ChatOpenAI(temperature=1.3,model_name='gpt-3.5-turbo'), - prompt=prompt, - verbose=True, - memory=ConversationBufferWindowMemory(k=0) - ) - -def extract_text_from_html(html): - cleanr = re.compile('<.*?>') - cleantext = re.sub(cleanr, '', html) - return cleantext.strip() - -def conversational_chat(query, replacement_word=None): - hist_dict['past'].append(query) - output = llm_chain.predict(human_input=query) - hist_dict['generated'].append(output) - - if replacement_word is not None: - # Use a regular expression with the re module for case-insensitive replacement - output = re.sub(r'\bjack\b', replacement_word, output, flags=re.IGNORECASE) - - return extract_text_from_html(output) - -def word_count(text): - words = re.findall(r'\w+', text) - return len(words) - - - -hist_dict={} -hist_dict['generated']=["Hello ! Ask me anything about " + " 🤗"] -hist_dict['past'] = ["Hey ! 
👋"] - - -trait_content_df_org=pd.read_csv('AI Personality Chart trait_content (2).csv') -trait_content_df_org=trait_content_df_org.drop(0,axis=0) -trait_content_df_org.rename(columns={'Column 1':'Question','Column 2':'Options','Column 3':'Traits','Column 4':'Content'},inplace=True) - - -def ui(): - # Initialize a dictionary to store responses - responses = {} - - # Create checkboxes for each question and options - index = 0 - while index < len(trait_content_df_org): - question = trait_content_df_org.iloc[index]["Question"] - st.write(question) - - option_a = st.checkbox(f"Option A: {trait_content_df_org.iloc[index]['Options']}", key=f"option_a_{index}") - - # Check if Option B has a corresponding question (not None) - if trait_content_df_org.iloc[index + 1]["Question"] is not None: - option_b = st.checkbox(f"Option B: {trait_content_df_org.iloc[index + 1]['Options']}", key=f"option_b_{index + 1}") - else: - option_b = False - - st.write("") # Add some spacing between questions - - # Store responses in the dictionary - if option_a: - responses[question] = f"{trait_content_df_org.iloc[index]['Options']}" - if option_b: - responses[question] = f"{trait_content_df_org.iloc[index + 1]['Options']}" - - index += 2 # Move to the next question and options (skipping None) - - st.write("Responses:") - for question, selected_option in responses.items(): - st.write(question) - st.write(selected_option) - - # Generate a prompt based on selected options - selected_traits = [responses[question] for question in responses] - options_list = [] - traits_list = [] - content_list = [] - - for trait_str in selected_traits: - matching_rows = trait_content_df_org[trait_content_df_org["Options"] == trait_str] - - if not matching_rows.empty: - options_list.append(matching_rows["Options"].values[0]) - traits_list.append(matching_rows["Traits"].values[0]) - content_list.append(matching_rows["Content"].values[0]) - - prompt = f"The following are Traits {', '.join(traits_list)}, and the content for the options is {', '.join(content_list)}" - - # Display user input field - name_input = st.text_input("Enter your name:") - - # Add a submit button - if st.button("Submit"): - # Generate a chatbot response - bio = conversational_chat(prompt, name_input) - - # Count words in the generated bio - bio_word_count = word_count(bio) - - # Check if the bio exceeds 250 words - if bio_word_count > 250: - st.warning("Generated Bio exceeded 250 words. 
Re-inferencing...") - bio = conversational_chat(prompt, name_input) # Re-inferencing - - # Count words in the re-inferenced bio - bio_word_count = word_count(bio) - - st.write(f"Generated Bio Word Count: {bio_word_count}") - st.write(bio) - - - - - - - -if __name__=='__main__': - ui() - diff --git a/spaces/Vision-CAIR/minigpt4/minigpt4/common/gradcam.py b/spaces/Vision-CAIR/minigpt4/minigpt4/common/gradcam.py deleted file mode 100644 index d53a5254d4b319eaf2cbfbd081b0ca8e38c5c7a0..0000000000000000000000000000000000000000 --- a/spaces/Vision-CAIR/minigpt4/minigpt4/common/gradcam.py +++ /dev/null @@ -1,24 +0,0 @@ -import numpy as np -from matplotlib import pyplot as plt -from scipy.ndimage import filters -from skimage import transform as skimage_transform - - -def getAttMap(img, attMap, blur=True, overlap=True): - attMap -= attMap.min() - if attMap.max() > 0: - attMap /= attMap.max() - attMap = skimage_transform.resize(attMap, (img.shape[:2]), order=3, mode="constant") - if blur: - attMap = filters.gaussian_filter(attMap, 0.02 * max(img.shape[:2])) - attMap -= attMap.min() - attMap /= attMap.max() - cmap = plt.get_cmap("jet") - attMapV = cmap(attMap) - attMapV = np.delete(attMapV, 3, 2) - if overlap: - attMap = ( - 1 * (1 - attMap**0.7).reshape(attMap.shape + (1,)) * img - + (attMap**0.7).reshape(attMap.shape + (1,)) * attMapV - ) - return attMap diff --git a/spaces/WordLift/entity-linking/app.py b/spaces/WordLift/entity-linking/app.py deleted file mode 100644 index 0acc47629b8581ae3a02eff101f037ab0d288211..0000000000000000000000000000000000000000 --- a/spaces/WordLift/entity-linking/app.py +++ /dev/null @@ -1,195 +0,0 @@ -import streamlit as st -from annotated_text import annotated_text -from refined.inference.processor import Refined -import requests -import json -import spacy - -# Page config -st.set_page_config( - page_title="Entity Linking by WordLift", - page_icon="fav-ico.png", - layout="wide", - initial_sidebar_state="collapsed", - menu_items={ - 'Get Help': 'https://wordlift.io/book-a-demo/', - 'About': "# This is a demo app for NEL/NED/NER and SEO" - } -) - -# Sidebar -st.sidebar.image("logo-wordlift.png") -language_options = {"English", "German"} -selected_language = st.sidebar.selectbox("Select the Language", list(language_options), index=0) - -# Based on selected language, configure model, entity set, and citation options -if selected_language != "German": - model_options = ["aida_model", "wikipedia_model_with_numbers"] - entity_set_options = ["wikidata", "wikipedia"] - - selected_model_name = st.sidebar.selectbox("Select the Model", model_options) - selected_entity_set = st.sidebar.selectbox("Select the Entity Set", entity_set_options) - - refined_citation = """ - @inproceedings{ayoola-etal-2022-refined, - title = "{R}e{F}in{ED}: An Efficient Zero-shot-capable Approach to End-to-End Entity Linking", - author = "Tom Ayoola, Shubhi Tyagi, Joseph Fisher, Christos Christodoulopoulos, Andrea Pierleoni", - booktitle = "NAACL", - year = "2022" - } - """ - - with st.sidebar.expander('Citations'): - st.markdown(refined_citation) -else: - selected_model_name = None - selected_entity_set = None - - entity_fishing_citation = """ - @misc{entity-fishing, - title = {entity-fishing}, - publisher = {GitHub}, - year = {2016--2023}, - archivePrefix = {swh}, - eprint = {1:dir:cb0ba3379413db12b0018b7c3af8d0d2d864139c} - } - """ - - with st.sidebar.expander('Citations'): - st.markdown(entity_fishing_citation) - -@st.cache_resource # 👈 Add the caching decorator -def load_model(selected_language, 
model_name=None, entity_set=None): - if selected_language == "German": - # Load the German-specific model - nlp_model_de = spacy.load("de_core_news_lg") - nlp_model_de.add_pipe("entityfishing") - - return nlp_model_de - else: - # Load the pretrained model for other languages - refined_model = Refined.from_pretrained(model_name=model_name, entity_set=entity_set) - return refined_model - -# Use the cached model -model = load_model(selected_language, selected_model_name, selected_entity_set) - -# Helper functions -def get_wikidata_id(entity_string): - entity_list = entity_string.split("=") - entity_id = str(entity_list[1]) - entity_link = "http://www.wikidata.org/entity/" + entity_id - return {"id": entity_id, "link": entity_link} - -def get_entity_data(entity_link): - try: - # Format the entity_link - formatted_link = entity_link.replace("http://", "http/") - response = requests.get(f'https://api.wordlift.io/id/{formatted_link}') - return response.json() - except Exception as e: - print(f"Exception when fetching data for entity: {entity_link}. Exception: {e}") - return None - -# Create the form -with st.form(key='my_form'): - text_input = st.text_area(label='Enter a sentence') - submit_button = st.form_submit_button(label='Analyze') - -# Initialization -entities_map = {} -entities_data = {} - -if text_input: - if selected_language == "German": - doc_de = model(text_input) - entities = [(ent.text, ent.label_, ent._.kb_qid, ent._.url_wikidata) for ent in doc_de.ents] - for entity in entities: - entity_string, entity_type, wikidata_id, wikidata_url = entity - if wikidata_url: - # Ensure correct format for the German model - formatted_wikidata_url = wikidata_url.replace("https://www.wikidata.org/wiki/", "http://www.wikidata.org/entity/") - entities_map[entity_string] = {"id": wikidata_id, "link": formatted_wikidata_url} - entity_data = get_entity_data(formatted_wikidata_url) - - if entity_data is not None: - entities_data[entity_string] = entity_data - - else: - entities = model.process_text(text_input) - - for entity in entities: - single_entity_list = str(entity).strip('][').replace("\'", "").split(', ') - if len(single_entity_list) >= 2 and "wikidata" in single_entity_list[1]: - entities_map[single_entity_list[0].strip()] = get_wikidata_id(single_entity_list[1]) - entity_data = get_entity_data(entities_map[single_entity_list[0].strip()]["link"]) - if entity_data is not None: - entities_data[single_entity_list[0].strip()] = entity_data - - combined_entity_info_dictionary = dict([(k, [entities_map[k], entities_data[k] if k in entities_data else None]) for k in entities_map]) - - if submit_button: - # Prepare a list to hold the final output - final_text = [] - - # JSON-LD data - json_ld_data = { - "@context": "https://schema.org", - "@type": "WebPage", - "mentions": [] - } - - # Replace each entity in the text with its annotated version - for entity_string, entity_info in entities_map.items(): - # Check if the entity has a valid Wikidata link - if entity_info["link"] is None or entity_info["link"] == "None": - continue # skip this entity - - entity_data = entities_data.get(entity_string, None) - entity_type = None - if entity_data is not None: - entity_type = entity_data.get("@type", None) - - # Use different colors based on the entity's type - color = "#8ef" # Default color - if entity_type == "Place": - color = "#8AC7DB" - elif entity_type == "Organization": - color = "#ADD8E6" - elif entity_type == "Person": - color = "#67B7D1" - elif entity_type == "Product": - color = "#2ea3f2" - elif 
entity_type == "CreativeWork": - color = "#00BFFF" - elif entity_type == "Event": - color = "#1E90FF" - - entity_annotation = (entity_string, entity_info["id"], color) - text_input = text_input.replace(entity_string, f'{{{str(entity_annotation)}}}', 1) - - # Add the entity to JSON-LD data - entity_json_ld = combined_entity_info_dictionary[entity_string][1] - if entity_json_ld and entity_json_ld.get("link") != "None": - json_ld_data["mentions"].append(entity_json_ld) - - # Split the modified text_input into a list - text_list = text_input.split("{") - - for item in text_list: - if "}" in item: - item_list = item.split("}") - final_text.append(eval(item_list[0])) - if len(item_list[1]) > 0: - final_text.append(item_list[1]) - else: - final_text.append(item) - - # Pass the final_text to the annotated_text function - annotated_text(*final_text) - - with st.expander("See annotations"): - st.write(combined_entity_info_dictionary) - - with st.expander("Here is the final JSON-LD"): - st.json(json_ld_data) # Output JSON-LD \ No newline at end of file diff --git a/spaces/XAI/CHM-Corr/model/base/geometry.py b/spaces/XAI/CHM-Corr/model/base/geometry.py deleted file mode 100644 index 41363601169587951572bae74183459f14819acb..0000000000000000000000000000000000000000 --- a/spaces/XAI/CHM-Corr/model/base/geometry.py +++ /dev/null @@ -1,133 +0,0 @@ -r""" Provides functions that manipulate boxes and points """ - -import math - -import torch.nn.functional as F -import torch - - -class Geometry(object): - - @classmethod - def initialize(cls, img_size): - cls.img_size = img_size - - cls.spatial_side = int(img_size / 8) - norm_grid1d = torch.linspace(-1, 1, cls.spatial_side) - - cls.norm_grid_x = norm_grid1d.view(1, -1).repeat(cls.spatial_side, 1).view(1, 1, -1) - cls.norm_grid_y = norm_grid1d.view(-1, 1).repeat(1, cls.spatial_side).view(1, 1, -1) - cls.grid = torch.stack(list(reversed(torch.meshgrid(norm_grid1d, norm_grid1d)))).permute(1, 2, 0) - - cls.feat_idx = torch.arange(0, cls.spatial_side).float() - - @classmethod - def normalize_kps(cls, kps): - kps = kps.clone().detach() - kps[kps != -2] -= (cls.img_size // 2) - kps[kps != -2] /= (cls.img_size // 2) - return kps - - @classmethod - def unnormalize_kps(cls, kps): - kps = kps.clone().detach() - kps[kps != -2] *= (cls.img_size // 2) - kps[kps != -2] += (cls.img_size // 2) - return kps - - @classmethod - def attentive_indexing(cls, kps, thres=0.1): - r"""kps: normalized keypoints x, y (N, 2) - returns attentive index map(N, spatial_side, spatial_side) - """ - nkps = kps.size(0) - kps = kps.view(nkps, 1, 1, 2) - - eps = 1e-5 - attmap = (cls.grid.unsqueeze(0).repeat(nkps, 1, 1, 1) - kps).pow(2).sum(dim=3) - attmap = (attmap + eps).pow(0.5) - attmap = (thres - attmap).clamp(min=0).view(nkps, -1) - attmap = attmap / attmap.sum(dim=1, keepdim=True) - attmap = attmap.view(nkps, cls.spatial_side, cls.spatial_side) - - return attmap - - @classmethod - def apply_gaussian_kernel(cls, corr, sigma=17): - bsz, side, side = corr.size() - - center = corr.max(dim=2)[1] - center_y = center // cls.spatial_side - center_x = center % cls.spatial_side - - y = cls.feat_idx.view(1, 1, cls.spatial_side).repeat(bsz, center_y.size(1), 1) - center_y.unsqueeze(2) - x = cls.feat_idx.view(1, 1, cls.spatial_side).repeat(bsz, center_x.size(1), 1) - center_x.unsqueeze(2) - - y = y.unsqueeze(3).repeat(1, 1, 1, cls.spatial_side) - x = x.unsqueeze(2).repeat(1, 1, cls.spatial_side, 1) - - gauss_kernel = torch.exp(-(x.pow(2) + y.pow(2)) / (2 * sigma ** 2)) - filtered_corr = gauss_kernel * 
corr.view(bsz, -1, cls.spatial_side, cls.spatial_side) - filtered_corr = filtered_corr.view(bsz, side, side) - - return filtered_corr - - @classmethod - def transfer_kps(cls, confidence_ts, src_kps, n_pts, normalized): - r""" Transfer keypoints by weighted average """ - - if not normalized: - src_kps = Geometry.normalize_kps(src_kps) - confidence_ts = cls.apply_gaussian_kernel(confidence_ts) - - pdf = F.softmax(confidence_ts, dim=2) - prd_x = (pdf * cls.norm_grid_x).sum(dim=2) - prd_y = (pdf * cls.norm_grid_y).sum(dim=2) - - prd_kps = [] - for idx, (x, y, src_kp, np) in enumerate(zip(prd_x, prd_y, src_kps, n_pts)): - max_pts = src_kp.size()[1] - prd_xy = torch.stack([x, y]).t() - - src_kp = src_kp[:, :np].t() - attmap = cls.attentive_indexing(src_kp).view(np, -1) - prd_kp = (prd_xy.unsqueeze(0) * attmap.unsqueeze(-1)).sum(dim=1).t() - pads = (torch.zeros((2, max_pts - np)) - 2) - prd_kp = torch.cat([prd_kp, pads], dim=1) - prd_kps.append(prd_kp) - - return torch.stack(prd_kps) - - @staticmethod - def get_coord1d(coord4d, ksz): - i, j, k, l = coord4d - coord1d = i * (ksz ** 3) + j * (ksz ** 2) + k * (ksz) + l - return coord1d - - @staticmethod - def get_distance(coord1, coord2): - delta_y = int(math.pow(coord1[0] - coord2[0], 2)) - delta_x = int(math.pow(coord1[1] - coord2[1], 2)) - dist = delta_y + delta_x - return dist - - @staticmethod - def interpolate4d(tensor4d, size): - bsz, h1, w1, h2, w2 = tensor4d.size() - tensor4d = tensor4d.view(bsz, h1, w1, -1).permute(0, 3, 1, 2) - tensor4d = F.interpolate(tensor4d, size, mode='bilinear', align_corners=True) - tensor4d = tensor4d.view(bsz, h2, w2, -1).permute(0, 3, 1, 2) - tensor4d = F.interpolate(tensor4d, size, mode='bilinear', align_corners=True) - tensor4d = tensor4d.view(bsz, size[0], size[0], size[0], size[0]) - - return tensor4d - @staticmethod - def init_idx4d(ksz): - i0 = torch.arange(0, ksz).repeat(ksz ** 3) - i1 = torch.arange(0, ksz).unsqueeze(1).repeat(1, ksz).view(-1).repeat(ksz ** 2) - i2 = torch.arange(0, ksz).unsqueeze(1).repeat(1, ksz ** 2).view(-1).repeat(ksz) - i3 = torch.arange(0, ksz).unsqueeze(1).repeat(1, ksz ** 3).view(-1) - idx4d = torch.stack([i3, i2, i1, i0]).t().numpy() - - return idx4d - diff --git a/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/core.py b/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/core.py deleted file mode 100644 index 67a1da3c821e8934ab555fe8e2e8b9a479cabcca..0000000000000000000000000000000000000000 --- a/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/core.py +++ /dev/null @@ -1,410 +0,0 @@ -"`fastai.core` contains essential util functions to format and split data" -from .imports.core import * - -warnings.filterwarnings("ignore", message="numpy.dtype size changed") -warnings.filterwarnings("ignore", message="numpy.ufunc size changed") - -AnnealFunc = Callable[[Number,Number,float], Number] -ArgStar = Collection[Any] -BatchSamples = Collection[Tuple[Collection[int], int]] -DataFrameOrChunks = Union[DataFrame, pd.io.parsers.TextFileReader] -FilePathList = Collection[Path] -Floats = Union[float, Collection[float]] -ImgLabel = str -ImgLabels = Collection[ImgLabel] -IntsOrStrs = Union[int, Collection[int], str, Collection[str]] -KeyFunc = Callable[[int], int] -KWArgs = Dict[str,Any] -ListOrItem = Union[Collection[Any],int,float,str] -ListRules = Collection[Callable[[str],str]] -ListSizes = Collection[Tuple[int,int]] -NPArrayableList = Collection[Union[np.ndarray, list]] -NPArrayList = Collection[np.ndarray] -NPArrayMask = np.ndarray -NPImage = np.ndarray -OptDataFrame = Optional[DataFrame] -OptListOrItem 
= Optional[ListOrItem] -OptRange = Optional[Tuple[float,float]] -OptStrTuple = Optional[Tuple[str,str]] -OptStats = Optional[Tuple[np.ndarray, np.ndarray]] -PathOrStr = Union[Path,str] -PathLikeOrBinaryStream = Union[PathOrStr, BufferedWriter, BytesIO] -PBar = Union[MasterBar, ProgressBar] -Point=Tuple[float,float] -Points=Collection[Point] -Sizes = List[List[int]] -SplitArrayList = List[Tuple[np.ndarray,np.ndarray]] -StartOptEnd=Union[float,Tuple[float,float]] -StrList = Collection[str] -Tokens = Collection[Collection[str]] -OptStrList = Optional[StrList] -np.set_printoptions(precision=6, threshold=50, edgeitems=4, linewidth=120) - -def num_cpus()->int: - "Get number of cpus" - try: return len(os.sched_getaffinity(0)) - except AttributeError: return os.cpu_count() - -_default_cpus = min(16, num_cpus()) -defaults = SimpleNamespace(cpus=_default_cpus, cmap='viridis', return_fig=False, silent=False) - -def is_listy(x:Any)->bool: return isinstance(x, (tuple,list)) -def is_tuple(x:Any)->bool: return isinstance(x, tuple) -def is_dict(x:Any)->bool: return isinstance(x, dict) -def is_pathlike(x:Any)->bool: return isinstance(x, (str,Path)) -def noop(x): return x - -class PrePostInitMeta(type): - "A metaclass that calls optional `__pre_init__` and `__post_init__` methods" - def __new__(cls, name, bases, dct): - x = super().__new__(cls, name, bases, dct) - old_init = x.__init__ - def _pass(self): pass - @functools.wraps(old_init) - def _init(self,*args,**kwargs): - self.__pre_init__() - old_init(self, *args,**kwargs) - self.__post_init__() - x.__init__ = _init - if not hasattr(x,'__pre_init__'): x.__pre_init__ = _pass - if not hasattr(x,'__post_init__'): x.__post_init__ = _pass - return x - -def chunks(l:Collection, n:int)->Iterable: - "Yield successive `n`-sized chunks from `l`." - for i in range(0, len(l), n): yield l[i:i+n] - -def recurse(func:Callable, x:Any, *args, **kwargs)->Any: - if is_listy(x): return [recurse(func, o, *args, **kwargs) for o in x] - if is_dict(x): return {k: recurse(func, v, *args, **kwargs) for k,v in x.items()} - return func(x, *args, **kwargs) - -def first_el(x: Any)->Any: - "Recursively get the first element of `x`." - if is_listy(x): return first_el(x[0]) - if is_dict(x): return first_el(x[list(x.keys())[0]]) - return x - -def to_int(b:Any)->Union[int,List[int]]: - "Recursively convert `b` to an int or list/dict of ints; raises exception if not convertible." - return recurse(lambda x: int(x), b) - -def ifnone(a:Any,b:Any)->Any: - "`a` if `a` is not None, otherwise `b`." - return b if a is None else a - -def is1d(a:Collection)->bool: - "Return `True` if `a` is one-dimensional" - return len(a.shape) == 1 if hasattr(a, 'shape') else len(np.array(a).shape) == 1 - -def uniqueify(x:Series, sort:bool=False)->List: - "Return sorted unique values of `x`." - res = list(OrderedDict.fromkeys(x).keys()) - if sort: res.sort() - return res - -def idx_dict(a): - "Create a dictionary value to index from `a`." - return {v:k for k,v in enumerate(a)} - -def find_classes(folder:Path)->FilePathList: - "List of label subdirectories in imagenet-style `folder`." - classes = [d for d in folder.iterdir() - if d.is_dir() and not d.name.startswith('.')] - assert(len(classes)>0) - return sorted(classes, key=lambda d: d.name) - -def arrays_split(mask:NPArrayMask, *arrs:NPArrayableList)->SplitArrayList: - "Given `arrs` is [a,b,...] and `mask`index - return[(a[mask],a[~mask]),(b[mask],b[~mask]),...]." 
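-    # Note: the trailing zip(*...) transposes the pairs, so for arrs=(a,b) the
-    # result is [(a[mask], b[mask]), (a[~mask], b[~mask])], one tuple of arrays
-    # per split rather than one tuple per input array.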
- assert all([len(arr)==len(arrs[0]) for arr in arrs]), 'All arrays should have same length' - mask = array(mask) - return list(zip(*[(a[mask],a[~mask]) for a in map(np.array, arrs)])) - -def random_split(valid_pct:float, *arrs:NPArrayableList)->SplitArrayList: - "Randomly split `arrs` with `valid_pct` ratio. good for creating validation set." - assert (valid_pct>=0 and valid_pct<=1), 'Validation set percentage should be between 0 and 1' - is_train = np.random.uniform(size=(len(arrs[0]),)) > valid_pct - return arrays_split(is_train, *arrs) - -def listify(p:OptListOrItem=None, q:OptListOrItem=None): - "Make `p` listy and the same length as `q`." - if p is None: p=[] - elif isinstance(p, str): p = [p] - elif not isinstance(p, Iterable): p = [p] - #Rank 0 tensors in PyTorch are Iterable but don't have a length. - else: - try: a = len(p) - except: p = [p] - n = q if type(q)==int else len(p) if q is None else len(q) - if len(p)==1: p = p * n - assert len(p)==n, f'List len mismatch ({len(p)} vs {n})' - return list(p) - -_camel_re1 = re.compile('(.)([A-Z][a-z]+)') -_camel_re2 = re.compile('([a-z0-9])([A-Z])') -def camel2snake(name:str)->str: - "Change `name` from camel to snake style." - s1 = re.sub(_camel_re1, r'\1_\2', name) - return re.sub(_camel_re2, r'\1_\2', s1).lower() - -def even_mults(start:float, stop:float, n:int)->np.ndarray: - "Build log-stepped array from `start` to `stop` in `n` steps." - mult = stop/start - step = mult**(1/(n-1)) - return np.array([start*(step**i) for i in range(n)]) - -def extract_kwargs(names:Collection[str], kwargs:KWArgs): - "Extract the keys in `names` from the `kwargs`." - new_kwargs = {} - for arg_name in names: - if arg_name in kwargs: - arg_val = kwargs.pop(arg_name) - new_kwargs[arg_name] = arg_val - return new_kwargs, kwargs - -def partition(a:Collection, sz:int)->List[Collection]: - "Split iterables `a` in equal parts of size `sz`" - return [a[i:i+sz] for i in range(0, len(a), sz)] - -def partition_by_cores(a:Collection, n_cpus:int)->List[Collection]: - "Split data in `a` equally among `n_cpus` cores" - return partition(a, len(a)//n_cpus + 1) - -def series2cat(df:DataFrame, *col_names): - "Categorifies the columns `col_names` in `df`." - for c in listify(col_names): df[c] = df[c].astype('category').cat.as_ordered() - -TfmList = Union[Callable, Collection[Callable]] - -class ItemBase(): - "Base item type in the fastai library." - def __init__(self, data:Any): self.data=self.obj=data - def __repr__(self)->str: return f'{self.__class__.__name__} {str(self)}' - def show(self, ax:plt.Axes, **kwargs): - "Subclass this method if you want to customize the way this `ItemBase` is shown on `ax`." - ax.set_title(str(self)) - def apply_tfms(self, tfms:Collection, **kwargs): - "Subclass this method if you want to apply data augmentation with `tfms` to this `ItemBase`." - if tfms: raise Exception(f"Not implemented: you can't apply transforms to this type of item ({self.__class__.__name__})") - return self - def __eq__(self, other): return recurse_eq(self.data, other.data) - -def recurse_eq(arr1, arr2): - if is_listy(arr1): return is_listy(arr2) and len(arr1) == len(arr2) and np.all([recurse_eq(x,y) for x,y in zip(arr1,arr2)]) - else: return np.all(np.atleast_1d(arr1 == arr2)) - -def download_url(url:str, dest:str, overwrite:bool=False, pbar:ProgressBar=None, - show_progress=True, chunk_size=1024*1024, timeout=4, retries=5)->None: - "Download `url` to `dest` unless it exists and not `overwrite`." 
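-    # Control flow below: return early if `dest` already exists, mount an
-    # HTTPAdapter so the session retries up to `retries` times, then stream the
-    # body in `chunk_size` pieces; a missing Content-Length header simply
-    # disables the progress bar, and a hard ConnectionError prints manual
-    # download instructions and exits rather than looping forever.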
- if os.path.exists(dest) and not overwrite: return - - s = requests.Session() - s.mount('http://',requests.adapters.HTTPAdapter(max_retries=retries)) - u = s.get(url, stream=True, timeout=timeout) - try: file_size = int(u.headers["Content-Length"]) - except: show_progress = False - - with open(dest, 'wb') as f: - nbytes = 0 - if show_progress: pbar = progress_bar(range(file_size), auto_update=False, leave=False, parent=pbar) - try: - for chunk in u.iter_content(chunk_size=chunk_size): - nbytes += len(chunk) - if show_progress: pbar.update(nbytes) - f.write(chunk) - except requests.exceptions.ConnectionError as e: - fname = url.split('/')[-1] - from fastai.datasets import Config - data_dir = Config().data_path() - timeout_txt =(f'\n Download of {url} has failed after {retries} retries\n' - f' Fix the download manually:\n' - f'$ mkdir -p {data_dir}\n' - f'$ cd {data_dir}\n' - f'$ wget -c {url}\n' - f'$ tar -zxvf {fname}\n\n' - f'And re-run your code once the download is successful\n') - print(timeout_txt) - import sys;sys.exit(1) - -def range_of(x): - "Create a range from 0 to `len(x)`." - return list(range(len(x))) -def arange_of(x): - "Same as `range_of` but returns an array." - return np.arange(len(x)) - -Path.ls = lambda x: list(x.iterdir()) - -def join_path(fname:PathOrStr, path:PathOrStr='.')->Path: - "Return `Path(path)/Path(fname)`, `path` defaults to current dir." - return Path(path)/Path(fname) - -def join_paths(fnames:FilePathList, path:PathOrStr='.')->Collection[Path]: - "Join `path` to every file name in `fnames`." - path = Path(path) - return [join_path(o,path) for o in fnames] - -def loadtxt_str(path:PathOrStr)->np.ndarray: - "Return `ndarray` of `str` of lines of text from `path`." - with open(path, 'r') as f: lines = f.readlines() - return np.array([l.strip() for l in lines]) - -def save_texts(fname:PathOrStr, texts:Collection[str]): - "Save in `fname` the content of `texts`." - with open(fname, 'w') as f: - for t in texts: f.write(f'{t}\n') - -def df_names_to_idx(names:IntsOrStrs, df:DataFrame): - "Return the column indexes of `names` in `df`." - if not is_listy(names): names = [names] - if isinstance(names[0], int): return names - return [df.columns.get_loc(c) for c in names] - -def one_hot(x:Collection[int], c:int): - "One-hot encode `x` with `c` classes." - res = np.zeros((c,), np.float32) - res[listify(x)] = 1. - return res - -def index_row(a:Union[Collection,pd.DataFrame,pd.Series], idxs:Collection[int])->Any: - "Return the slice of `a` corresponding to `idxs`." - if a is None: return a - if isinstance(a,(pd.DataFrame,pd.Series)): - res = a.iloc[idxs] - if isinstance(res,(pd.DataFrame,pd.Series)): return res.copy() - return res - return a[idxs] - -def func_args(func)->bool: - "Return the arguments of `func`." - code = func.__code__ - return code.co_varnames[:code.co_argcount] - -def has_arg(func, arg)->bool: - "Check if `func` accepts `arg`." - return arg in func_args(func) - -def split_kwargs_by_func(kwargs, func): - "Split `kwargs` between those expected by `func` and the others." - args = func_args(func) - func_kwargs = {a:kwargs.pop(a) for a in args if a in kwargs} - return func_kwargs, kwargs - -def array(a, dtype:type=None, **kwargs)->np.ndarray: - "Same as `np.array` but also handles generators. `kwargs` are passed to `np.array` with `dtype`." 
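-    # Generators have no __len__ and no __array_interface__, so they are
-    # materialised into a list before conversion; on platforms where the
-    # default integer is 32-bit (notably Windows), lists of Python ints are
-    # widened to int64 so downstream code sees a consistent dtype.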
-    if not isinstance(a, collections.abc.Sized) and not getattr(a,'__array_interface__',False):
-        a = list(a)
-    if np.int_==np.int32 and dtype is None and is_listy(a) and len(a) and isinstance(a[0],int):
-        dtype=np.int64
-    return np.array(a, dtype=dtype, **kwargs)
-
-class EmptyLabel(ItemBase):
-    "Should be used for a dummy label."
-    def __init__(self): self.obj,self.data = 0,0
-    def __str__(self): return ''
-    def __hash__(self): return hash(str(self))
-
-class Category(ItemBase):
-    "Basic class for single classification labels."
-    def __init__(self,data,obj): self.data,self.obj = data,obj
-    def __int__(self): return int(self.data)
-    def __str__(self): return str(self.obj)
-    def __hash__(self): return hash(str(self))
-
-class MultiCategory(ItemBase):
-    "Basic class for multi-classification labels."
-    def __init__(self,data,obj,raw): self.data,self.obj,self.raw = data,obj,raw
-    def __str__(self): return ';'.join([str(o) for o in self.obj])
-    def __hash__(self): return hash(str(self))
-
-class FloatItem(ItemBase):
-    "Basic class for float items."
-    def __init__(self,obj): self.data,self.obj = np.array(obj).astype(np.float32),obj
-    def __str__(self): return str(self.obj)
-    def __hash__(self): return hash(str(self))
-
-def _treat_html(o:str)->str:
-    o = str(o)
-    to_replace = {'\n':'\\n', '<':'&lt;', '>':'&gt;', '&':'&amp;'}
-    for k,v in to_replace.items(): o = o.replace(k, v)
-    return o
-
-def text2html_table(items:Collection[Collection[str]])->str:
-    "Put the texts in `items` in an HTML table, `widths` are the widths of the columns in %."
-    html_code = f"""<table>"""
-    html_code += f"""  <thead>\n    <tr>\n"""
-    for i in items[0]: html_code += f"      <th>{_treat_html(i)}</th>"
-    html_code += f"    </tr>\n  </thead>\n  <tbody>"
-    html_code += "  <tbody>"
-    for line in items[1:]:
-        html_code += "    <tr>"
-        for i in line: html_code += f"      <td>{_treat_html(i)}</td>"
-        html_code += "    </tr>"
-    html_code += "  </tbody>\n</table>"
-    return html_code
-
-def parallel(func, arr:Collection, max_workers:int=None, leave=False):
-    "Call `func` on every element of `arr` in parallel using `max_workers`."
-    max_workers = ifnone(max_workers, defaults.cpus)
-    if max_workers<2: results = [func(o,i) for i,o in progress_bar(enumerate(arr), total=len(arr), leave=leave)]
-    else:
-        with ProcessPoolExecutor(max_workers=max_workers) as ex:
-            futures = [ex.submit(func,o,i) for i,o in enumerate(arr)]
-            results = []
-            for f in progress_bar(concurrent.futures.as_completed(futures), total=len(arr), leave=leave):
-                results.append(f.result())
-    if any([o is not None for o in results]): return results
-
-def subplots(rows:int, cols:int, imgsize:int=4, figsize:Optional[Tuple[int,int]]=None, title=None, **kwargs):
-    "Like `plt.subplots` but with consistent axs shape, `kwargs` passed to `fig.suptitle` with `title`"
-    figsize = ifnone(figsize, (imgsize*cols, imgsize*rows))
-    fig, axs = plt.subplots(rows,cols,figsize=figsize)
-    if rows==cols==1: axs = [[axs]] # subplots(1,1) returns Axes, not [Axes]
-    elif (rows==1 and cols!=1) or (cols==1 and rows!=1): axs = [axs]
-    if title is not None: fig.suptitle(title, **kwargs)
-    return array(axs)
-
-def show_some(items:Collection, n_max:int=5, sep:str=','):
-    "Return the representation of the first `n_max` elements in `items`."
-    if items is None or len(items) == 0: return ''
-    res = sep.join([f'{o}' for o in items[:n_max]])
-    if len(items) > n_max: res += '...'
-    return res
-
-def get_tmp_file(dir=None):
-    "Create and return a tmp filename, optionally at a specific path. `os.remove` when done with it."
-    with tempfile.NamedTemporaryFile(delete=False, dir=dir) as f: return f.name
-
-def compose(funcs:List[Callable])->Callable:
-    "Compose `funcs`"
-    def compose_(funcs, x, *args, **kwargs):
-        for f in listify(funcs): x = f(x, *args, **kwargs)
-        return x
-    return partial(compose_, funcs)
-
-class PrettyString(str):
-    "Little hack to get strings to show properly in Jupyter."
-    def __repr__(self): return self
-
-def float_or_x(x):
-    "Tries to convert to float, returns x if it can't"
-    try: return float(x)
-    except: return x
-
-def bunzip(fn:PathOrStr):
-    "bunzip `fn`, raising exception if output already exists"
-    fn = Path(fn)
-    assert fn.exists(), f"{fn} doesn't exist"
-    out_fn = fn.with_suffix('')
-    assert not out_fn.exists(), f"{out_fn} already exists"
-    with bz2.BZ2File(fn, 'rb') as src, out_fn.open('wb') as dst:
-        for d in iter(lambda: src.read(1024*1024), b''): dst.write(d)
-
-@contextmanager
-def working_directory(path:PathOrStr):
-    "Change working directory to `path` and return to previous on exit."
- prev_cwd = Path.cwd() - os.chdir(path) - try: yield - finally: os.chdir(prev_cwd) - diff --git a/spaces/Xenova/react-translator/assets/index-138f5db2.js b/spaces/Xenova/react-translator/assets/index-138f5db2.js deleted file mode 100644 index 5eee70594bb46ea8a00d0fdeb482af31f9c1d6dd..0000000000000000000000000000000000000000 --- a/spaces/Xenova/react-translator/assets/index-138f5db2.js +++ /dev/null @@ -1,40 +0,0 @@ -(function(){const n=document.createElement("link").relList;if(n&&n.supports&&n.supports("modulepreload"))return;for(const l of document.querySelectorAll('link[rel="modulepreload"]'))r(l);new MutationObserver(l=>{for(const i of l)if(i.type==="childList")for(const o of i.addedNodes)o.tagName==="LINK"&&o.rel==="modulepreload"&&r(o)}).observe(document,{childList:!0,subtree:!0});function t(l){const i={};return l.integrity&&(i.integrity=l.integrity),l.referrerPolicy&&(i.referrerPolicy=l.referrerPolicy),l.crossOrigin==="use-credentials"?i.credentials="include":l.crossOrigin==="anonymous"?i.credentials="omit":i.credentials="same-origin",i}function r(l){if(l.ep)return;l.ep=!0;const i=t(l);fetch(l.href,i)}})();function lc(e){return e&&e.__esModule&&Object.prototype.hasOwnProperty.call(e,"default")?e.default:e}var Wu={exports:{}},el={},Qu={exports:{}},T={};/** - * @license React - * react.production.min.js - * - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. - */var Yt=Symbol.for("react.element"),ic=Symbol.for("react.portal"),oc=Symbol.for("react.fragment"),uc=Symbol.for("react.strict_mode"),ac=Symbol.for("react.profiler"),sc=Symbol.for("react.provider"),cc=Symbol.for("react.context"),fc=Symbol.for("react.forward_ref"),dc=Symbol.for("react.suspense"),pc=Symbol.for("react.memo"),mc=Symbol.for("react.lazy"),Oo=Symbol.iterator;function hc(e){return e===null||typeof e!="object"?null:(e=Oo&&e[Oo]||e["@@iterator"],typeof e=="function"?e:null)}var Ku={isMounted:function(){return!1},enqueueForceUpdate:function(){},enqueueReplaceState:function(){},enqueueSetState:function(){}},Gu=Object.assign,Yu={};function it(e,n,t){this.props=e,this.context=n,this.refs=Yu,this.updater=t||Ku}it.prototype.isReactComponent={};it.prototype.setState=function(e,n){if(typeof e!="object"&&typeof e!="function"&&e!=null)throw Error("setState(...): takes an object of state variables to update or a function which returns an object of state variables.");this.updater.enqueueSetState(this,e,n,"setState")};it.prototype.forceUpdate=function(e){this.updater.enqueueForceUpdate(this,e,"forceUpdate")};function Xu(){}Xu.prototype=it.prototype;function Ai(e,n,t){this.props=e,this.context=n,this.refs=Yu,this.updater=t||Ku}var Ui=Ai.prototype=new Xu;Ui.constructor=Ai;Gu(Ui,it.prototype);Ui.isPureReactComponent=!0;var Do=Array.isArray,Zu=Object.prototype.hasOwnProperty,$i={current:null},Ju={key:!0,ref:!0,__self:!0,__source:!0};function qu(e,n,t){var r,l={},i=null,o=null;if(n!=null)for(r in n.ref!==void 0&&(o=n.ref),n.key!==void 0&&(i=""+n.key),n)Zu.call(n,r)&&!Ju.hasOwnProperty(r)&&(l[r]=n[r]);var u=arguments.length-2;if(u===1)l.children=t;else if(1>>1,X=E[W];if(0>>1;Wl(gl,P))gnl(er,gl)?(E[W]=er,E[gn]=P,W=gn):(E[W]=gl,E[yn]=P,W=yn);else if(gnl(er,P))E[W]=er,E[gn]=P,W=gn;else break e}}return N}function l(E,N){var P=E.sortIndex-N.sortIndex;return P!==0?P:E.id-N.id}if(typeof performance=="object"&&typeof performance.now=="function"){var i=performance;e.unstable_now=function(){return i.now()}}else{var 
o=Date,u=o.now();e.unstable_now=function(){return o.now()-u}}var a=[],f=[],h=1,m=null,p=3,w=!1,g=!1,k=!1,z=typeof setTimeout=="function"?setTimeout:null,c=typeof clearTimeout=="function"?clearTimeout:null,s=typeof setImmediate<"u"?setImmediate:null;typeof navigator<"u"&&navigator.scheduling!==void 0&&navigator.scheduling.isInputPending!==void 0&&navigator.scheduling.isInputPending.bind(navigator.scheduling);function d(E){for(var N=t(f);N!==null;){if(N.callback===null)r(f);else if(N.startTime<=E)r(f),N.sortIndex=N.expirationTime,n(a,N);else break;N=t(f)}}function v(E){if(k=!1,d(E),!g)if(t(a)!==null)g=!0,vl(_);else{var N=t(f);N!==null&&yl(v,N.startTime-E)}}function _(E,N){g=!1,k&&(k=!1,c(x),x=-1),w=!0;var P=p;try{for(d(N),m=t(a);m!==null&&(!(m.expirationTime>N)||E&&!Ne());){var W=m.callback;if(typeof W=="function"){m.callback=null,p=m.priorityLevel;var X=W(m.expirationTime<=N);N=e.unstable_now(),typeof X=="function"?m.callback=X:m===t(a)&&r(a),d(N)}else r(a);m=t(a)}if(m!==null)var bt=!0;else{var yn=t(f);yn!==null&&yl(v,yn.startTime-N),bt=!1}return bt}finally{m=null,p=P,w=!1}}var L=!1,C=null,x=-1,H=5,R=-1;function Ne(){return!(e.unstable_now()-RE||125W?(E.sortIndex=P,n(f,E),t(a)===null&&E===t(f)&&(k?(c(x),x=-1):k=!0,yl(v,P-W))):(E.sortIndex=X,n(a,E),g||w||(g=!0,vl(_))),E},e.unstable_shouldYield=Ne,e.unstable_wrapCallback=function(E){var N=p;return function(){var P=p;p=N;try{return E.apply(this,arguments)}finally{p=P}}}})(ta);na.exports=ta;var Nc=na.exports;/** - * @license React - * react-dom.production.min.js - * - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. - */var ra=ke,ye=Nc;function y(e){for(var n="https://reactjs.org/docs/error-decoder.html?invariant="+e,t=1;t"u"||typeof window.document>"u"||typeof window.document.createElement>"u"),Kl=Object.prototype.hasOwnProperty,Pc=/^[:A-Z_a-z\u00C0-\u00D6\u00D8-\u00F6\u00F8-\u02FF\u0370-\u037D\u037F-\u1FFF\u200C-\u200D\u2070-\u218F\u2C00-\u2FEF\u3001-\uD7FF\uF900-\uFDCF\uFDF0-\uFFFD][:A-Z_a-z\u00C0-\u00D6\u00D8-\u00F6\u00F8-\u02FF\u0370-\u037D\u037F-\u1FFF\u200C-\u200D\u2070-\u218F\u2C00-\u2FEF\u3001-\uD7FF\uF900-\uFDCF\uFDF0-\uFFFD\-.0-9\u00B7\u0300-\u036F\u203F-\u2040]*$/,Fo={},Ao={};function zc(e){return Kl.call(Ao,e)?!0:Kl.call(Fo,e)?!1:Pc.test(e)?Ao[e]=!0:(Fo[e]=!0,!1)}function Tc(e,n,t,r){if(t!==null&&t.type===0)return!1;switch(typeof n){case"function":case"symbol":return!0;case"boolean":return r?!1:t!==null?!t.acceptsBooleans:(e=e.toLowerCase().slice(0,5),e!=="data-"&&e!=="aria-");default:return!1}}function Rc(e,n,t,r){if(n===null||typeof n>"u"||Tc(e,n,t,r))return!0;if(r)return!1;if(t!==null)switch(t.type){case 3:return!n;case 4:return n===!1;case 5:return isNaN(n);case 6:return isNaN(n)||1>n}return!1}function ae(e,n,t,r,l,i,o){this.acceptsBooleans=n===2||n===3||n===4,this.attributeName=r,this.attributeNamespace=l,this.mustUseProperty=t,this.propertyName=e,this.type=n,this.sanitizeURL=i,this.removeEmptyString=o}var ee={};"children dangerouslySetInnerHTML defaultValue defaultChecked innerHTML suppressContentEditableWarning suppressHydrationWarning style".split(" ").forEach(function(e){ee[e]=new ae(e,0,!1,e,null,!1,!1)});[["acceptCharset","accept-charset"],["className","class"],["htmlFor","for"],["httpEquiv","http-equiv"]].forEach(function(e){var n=e[0];ee[n]=new ae(n,1,!1,e[1],null,!1,!1)});["contentEditable","draggable","spellCheck","value"].forEach(function(e){ee[e]=new 
ae(e,2,!1,e.toLowerCase(),null,!1,!1)});["autoReverse","externalResourcesRequired","focusable","preserveAlpha"].forEach(function(e){ee[e]=new ae(e,2,!1,e,null,!1,!1)});"allowFullScreen async autoFocus autoPlay controls default defer disabled disablePictureInPicture disableRemotePlayback formNoValidate hidden loop noModule noValidate open playsInline readOnly required reversed scoped seamless itemScope".split(" ").forEach(function(e){ee[e]=new ae(e,3,!1,e.toLowerCase(),null,!1,!1)});["checked","multiple","muted","selected"].forEach(function(e){ee[e]=new ae(e,3,!0,e,null,!1,!1)});["capture","download"].forEach(function(e){ee[e]=new ae(e,4,!1,e,null,!1,!1)});["cols","rows","size","span"].forEach(function(e){ee[e]=new ae(e,6,!1,e,null,!1,!1)});["rowSpan","start"].forEach(function(e){ee[e]=new ae(e,5,!1,e.toLowerCase(),null,!1,!1)});var Vi=/[\-:]([a-z])/g;function Hi(e){return e[1].toUpperCase()}"accent-height alignment-baseline arabic-form baseline-shift cap-height clip-path clip-rule color-interpolation color-interpolation-filters color-profile color-rendering dominant-baseline enable-background fill-opacity fill-rule flood-color flood-opacity font-family font-size font-size-adjust font-stretch font-style font-variant font-weight glyph-name glyph-orientation-horizontal glyph-orientation-vertical horiz-adv-x horiz-origin-x image-rendering letter-spacing lighting-color marker-end marker-mid marker-start overline-position overline-thickness paint-order panose-1 pointer-events rendering-intent shape-rendering stop-color stop-opacity strikethrough-position strikethrough-thickness stroke-dasharray stroke-dashoffset stroke-linecap stroke-linejoin stroke-miterlimit stroke-opacity stroke-width text-anchor text-decoration text-rendering underline-position underline-thickness unicode-bidi unicode-range units-per-em v-alphabetic v-hanging v-ideographic v-mathematical vector-effect vert-adv-y vert-origin-x vert-origin-y word-spacing writing-mode xmlns:xlink x-height".split(" ").forEach(function(e){var n=e.replace(Vi,Hi);ee[n]=new ae(n,1,!1,e,null,!1,!1)});"xlink:actuate xlink:arcrole xlink:role xlink:show xlink:title xlink:type".split(" ").forEach(function(e){var n=e.replace(Vi,Hi);ee[n]=new ae(n,1,!1,e,"http://www.w3.org/1999/xlink",!1,!1)});["xml:base","xml:lang","xml:space"].forEach(function(e){var n=e.replace(Vi,Hi);ee[n]=new ae(n,1,!1,e,"http://www.w3.org/XML/1998/namespace",!1,!1)});["tabIndex","crossOrigin"].forEach(function(e){ee[e]=new ae(e,1,!1,e.toLowerCase(),null,!1,!1)});ee.xlinkHref=new ae("xlinkHref",1,!1,"xlink:href","http://www.w3.org/1999/xlink",!0,!1);["src","href","action","formAction"].forEach(function(e){ee[e]=new ae(e,1,!1,e.toLowerCase(),null,!0,!0)});function Wi(e,n,t,r){var l=ee.hasOwnProperty(n)?ee[n]:null;(l!==null?l.type!==0:r||!(2u||l[o]!==i[u]){var a=` -`+l[o].replace(" at new "," at ");return e.displayName&&a.includes("")&&(a=a.replace("",e.displayName)),a}while(1<=o&&0<=u);break}}}finally{Sl=!1,Error.prepareStackTrace=t}return(e=e?e.displayName||e.name:"")?gt(e):""}function Mc(e){switch(e.tag){case 5:return gt(e.type);case 16:return gt("Lazy");case 13:return gt("Suspense");case 19:return gt("SuspenseList");case 0:case 2:case 15:return e=_l(e.type,!1),e;case 11:return e=_l(e.type.render,!1),e;case 1:return e=_l(e.type,!0),e;default:return""}}function Zl(e){if(e==null)return null;if(typeof e=="function")return e.displayName||e.name||null;if(typeof e=="string")return e;switch(e){case Dn:return"Fragment";case On:return"Portal";case Gl:return"Profiler";case 
Qi:return"StrictMode";case Yl:return"Suspense";case Xl:return"SuspenseList"}if(typeof e=="object")switch(e.$$typeof){case oa:return(e.displayName||"Context")+".Consumer";case ia:return(e._context.displayName||"Context")+".Provider";case Ki:var n=e.render;return e=e.displayName,e||(e=n.displayName||n.name||"",e=e!==""?"ForwardRef("+e+")":"ForwardRef"),e;case Gi:return n=e.displayName||null,n!==null?n:Zl(e.type)||"Memo";case Je:n=e._payload,e=e._init;try{return Zl(e(n))}catch{}}return null}function jc(e){var n=e.type;switch(e.tag){case 24:return"Cache";case 9:return(n.displayName||"Context")+".Consumer";case 10:return(n._context.displayName||"Context")+".Provider";case 18:return"DehydratedFragment";case 11:return e=n.render,e=e.displayName||e.name||"",n.displayName||(e!==""?"ForwardRef("+e+")":"ForwardRef");case 7:return"Fragment";case 5:return n;case 4:return"Portal";case 3:return"Root";case 6:return"Text";case 16:return Zl(n);case 8:return n===Qi?"StrictMode":"Mode";case 22:return"Offscreen";case 12:return"Profiler";case 21:return"Scope";case 13:return"Suspense";case 19:return"SuspenseList";case 25:return"TracingMarker";case 1:case 0:case 17:case 2:case 14:case 15:if(typeof n=="function")return n.displayName||n.name||null;if(typeof n=="string")return n}return null}function dn(e){switch(typeof e){case"boolean":case"number":case"string":case"undefined":return e;case"object":return e;default:return""}}function aa(e){var n=e.type;return(e=e.nodeName)&&e.toLowerCase()==="input"&&(n==="checkbox"||n==="radio")}function Oc(e){var n=aa(e)?"checked":"value",t=Object.getOwnPropertyDescriptor(e.constructor.prototype,n),r=""+e[n];if(!e.hasOwnProperty(n)&&typeof t<"u"&&typeof t.get=="function"&&typeof t.set=="function"){var l=t.get,i=t.set;return Object.defineProperty(e,n,{configurable:!0,get:function(){return l.call(this)},set:function(o){r=""+o,i.call(this,o)}}),Object.defineProperty(e,n,{enumerable:t.enumerable}),{getValue:function(){return r},setValue:function(o){r=""+o},stopTracking:function(){e._valueTracker=null,delete e[n]}}}}function rr(e){e._valueTracker||(e._valueTracker=Oc(e))}function sa(e){if(!e)return!1;var n=e._valueTracker;if(!n)return!0;var t=n.getValue(),r="";return e&&(r=aa(e)?e.checked?"true":"false":e.value),e=r,e!==t?(n.setValue(e),!0):!1}function Tr(e){if(e=e||(typeof document<"u"?document:void 0),typeof e>"u")return null;try{return e.activeElement||e.body}catch{return e.body}}function Jl(e,n){var t=n.checked;return B({},n,{defaultChecked:void 0,defaultValue:void 0,value:void 0,checked:t??e._wrapperState.initialChecked})}function $o(e,n){var t=n.defaultValue==null?"":n.defaultValue,r=n.checked!=null?n.checked:n.defaultChecked;t=dn(n.value!=null?n.value:t),e._wrapperState={initialChecked:r,initialValue:t,controlled:n.type==="checkbox"||n.type==="radio"?n.checked!=null:n.value!=null}}function ca(e,n){n=n.checked,n!=null&&Wi(e,"checked",n,!1)}function ql(e,n){ca(e,n);var t=dn(n.value),r=n.type;if(t!=null)r==="number"?(t===0&&e.value===""||e.value!=t)&&(e.value=""+t):e.value!==""+t&&(e.value=""+t);else if(r==="submit"||r==="reset"){e.removeAttribute("value");return}n.hasOwnProperty("value")?bl(e,n.type,t):n.hasOwnProperty("defaultValue")&&bl(e,n.type,dn(n.defaultValue)),n.checked==null&&n.defaultChecked!=null&&(e.defaultChecked=!!n.defaultChecked)}function Bo(e,n,t){if(n.hasOwnProperty("value")||n.hasOwnProperty("defaultValue")){var r=n.type;if(!(r!=="submit"&&r!=="reset"||n.value!==void 
0&&n.value!==null))return;n=""+e._wrapperState.initialValue,t||n===e.value||(e.value=n),e.defaultValue=n}t=e.name,t!==""&&(e.name=""),e.defaultChecked=!!e._wrapperState.initialChecked,t!==""&&(e.name=t)}function bl(e,n,t){(n!=="number"||Tr(e.ownerDocument)!==e)&&(t==null?e.defaultValue=""+e._wrapperState.initialValue:e.defaultValue!==""+t&&(e.defaultValue=""+t))}var wt=Array.isArray;function Kn(e,n,t,r){if(e=e.options,n){n={};for(var l=0;l"+n.valueOf().toString()+"",n=lr.firstChild;e.firstChild;)e.removeChild(e.firstChild);for(;n.firstChild;)e.appendChild(n.firstChild)}});function Mt(e,n){if(n){var t=e.firstChild;if(t&&t===e.lastChild&&t.nodeType===3){t.nodeValue=n;return}}e.textContent=n}var _t={animationIterationCount:!0,aspectRatio:!0,borderImageOutset:!0,borderImageSlice:!0,borderImageWidth:!0,boxFlex:!0,boxFlexGroup:!0,boxOrdinalGroup:!0,columnCount:!0,columns:!0,flex:!0,flexGrow:!0,flexPositive:!0,flexShrink:!0,flexNegative:!0,flexOrder:!0,gridArea:!0,gridRow:!0,gridRowEnd:!0,gridRowSpan:!0,gridRowStart:!0,gridColumn:!0,gridColumnEnd:!0,gridColumnSpan:!0,gridColumnStart:!0,fontWeight:!0,lineClamp:!0,lineHeight:!0,opacity:!0,order:!0,orphans:!0,tabSize:!0,widows:!0,zIndex:!0,zoom:!0,fillOpacity:!0,floodOpacity:!0,stopOpacity:!0,strokeDasharray:!0,strokeDashoffset:!0,strokeMiterlimit:!0,strokeOpacity:!0,strokeWidth:!0},Dc=["Webkit","ms","Moz","O"];Object.keys(_t).forEach(function(e){Dc.forEach(function(n){n=n+e.charAt(0).toUpperCase()+e.substring(1),_t[n]=_t[e]})});function ma(e,n,t){return n==null||typeof n=="boolean"||n===""?"":t||typeof n!="number"||n===0||_t.hasOwnProperty(e)&&_t[e]?(""+n).trim():n+"px"}function ha(e,n){e=e.style;for(var t in n)if(n.hasOwnProperty(t)){var r=t.indexOf("--")===0,l=ma(t,n[t],r);t==="float"&&(t="cssFloat"),r?e.setProperty(t,l):e[t]=l}}var Ic=B({menuitem:!0},{area:!0,base:!0,br:!0,col:!0,embed:!0,hr:!0,img:!0,input:!0,keygen:!0,link:!0,meta:!0,param:!0,source:!0,track:!0,wbr:!0});function ti(e,n){if(n){if(Ic[e]&&(n.children!=null||n.dangerouslySetInnerHTML!=null))throw Error(y(137,e));if(n.dangerouslySetInnerHTML!=null){if(n.children!=null)throw Error(y(60));if(typeof n.dangerouslySetInnerHTML!="object"||!("__html"in n.dangerouslySetInnerHTML))throw Error(y(61))}if(n.style!=null&&typeof n.style!="object")throw Error(y(62))}}function ri(e,n){if(e.indexOf("-")===-1)return typeof n.is=="string";switch(e){case"annotation-xml":case"color-profile":case"font-face":case"font-face-src":case"font-face-uri":case"font-face-format":case"font-face-name":case"missing-glyph":return!1;default:return!0}}var li=null;function Yi(e){return e=e.target||e.srcElement||window,e.correspondingUseElement&&(e=e.correspondingUseElement),e.nodeType===3?e.parentNode:e}var ii=null,Gn=null,Yn=null;function Wo(e){if(e=Jt(e)){if(typeof ii!="function")throw Error(y(280));var n=e.stateNode;n&&(n=il(n),ii(e.stateNode,e.type,n))}}function va(e){Gn?Yn?Yn.push(e):Yn=[e]:Gn=e}function ya(){if(Gn){var e=Gn,n=Yn;if(Yn=Gn=null,Wo(e),n)for(e=0;e>>=0,e===0?32:31-(Gc(e)/Yc|0)|0}var ir=64,or=4194304;function kt(e){switch(e&-e){case 1:return 1;case 2:return 2;case 4:return 4;case 8:return 8;case 16:return 16;case 32:return 32;case 64:case 128:case 256:case 512:case 1024:case 2048:case 4096:case 8192:case 16384:case 32768:case 65536:case 131072:case 262144:case 524288:case 1048576:case 2097152:return e&4194240;case 4194304:case 8388608:case 16777216:case 33554432:case 67108864:return e&130023424;case 134217728:return 134217728;case 268435456:return 268435456;case 536870912:return 536870912;case 
1073741824:return 1073741824;default:return e}}function Or(e,n){var t=e.pendingLanes;if(t===0)return 0;var r=0,l=e.suspendedLanes,i=e.pingedLanes,o=t&268435455;if(o!==0){var u=o&~l;u!==0?r=kt(u):(i&=o,i!==0&&(r=kt(i)))}else o=t&~l,o!==0?r=kt(o):i!==0&&(r=kt(i));if(r===0)return 0;if(n!==0&&n!==r&&!(n&l)&&(l=r&-r,i=n&-n,l>=i||l===16&&(i&4194240)!==0))return n;if(r&4&&(r|=t&16),n=e.entangledLanes,n!==0)for(e=e.entanglements,n&=r;0t;t++)n.push(e);return n}function Xt(e,n,t){e.pendingLanes|=n,n!==536870912&&(e.suspendedLanes=0,e.pingedLanes=0),e=e.eventTimes,n=31-Me(n),e[n]=t}function qc(e,n){var t=e.pendingLanes&~n;e.pendingLanes=n,e.suspendedLanes=0,e.pingedLanes=0,e.expiredLanes&=n,e.mutableReadLanes&=n,e.entangledLanes&=n,n=e.entanglements;var r=e.eventTimes;for(e=e.expirationTimes;0=Lt),bo=String.fromCharCode(32),eu=!1;function Fa(e,n){switch(e){case"keyup":return Nf.indexOf(n.keyCode)!==-1;case"keydown":return n.keyCode!==229;case"keypress":case"mousedown":case"focusout":return!0;default:return!1}}function Aa(e){return e=e.detail,typeof e=="object"&&"data"in e?e.data:null}var In=!1;function zf(e,n){switch(e){case"compositionend":return Aa(n);case"keypress":return n.which!==32?null:(eu=!0,bo);case"textInput":return e=n.data,e===bo&&eu?null:e;default:return null}}function Tf(e,n){if(In)return e==="compositionend"||!to&&Fa(e,n)?(e=Da(),Sr=bi=nn=null,In=!1,e):null;switch(e){case"paste":return null;case"keypress":if(!(n.ctrlKey||n.altKey||n.metaKey)||n.ctrlKey&&n.altKey){if(n.char&&1=n)return{node:t,offset:n-e};e=r}e:{for(;t;){if(t.nextSibling){t=t.nextSibling;break e}t=t.parentNode}t=void 0}t=lu(t)}}function Va(e,n){return e&&n?e===n?!0:e&&e.nodeType===3?!1:n&&n.nodeType===3?Va(e,n.parentNode):"contains"in e?e.contains(n):e.compareDocumentPosition?!!(e.compareDocumentPosition(n)&16):!1:!1}function Ha(){for(var e=window,n=Tr();n instanceof e.HTMLIFrameElement;){try{var t=typeof n.contentWindow.location.href=="string"}catch{t=!1}if(t)e=n.contentWindow;else break;n=Tr(e.document)}return n}function ro(e){var n=e&&e.nodeName&&e.nodeName.toLowerCase();return n&&(n==="input"&&(e.type==="text"||e.type==="search"||e.type==="tel"||e.type==="url"||e.type==="password")||n==="textarea"||e.contentEditable==="true")}function Uf(e){var n=Ha(),t=e.focusedElem,r=e.selectionRange;if(n!==t&&t&&t.ownerDocument&&Va(t.ownerDocument.documentElement,t)){if(r!==null&&ro(t)){if(n=r.start,e=r.end,e===void 0&&(e=n),"selectionStart"in t)t.selectionStart=n,t.selectionEnd=Math.min(e,t.value.length);else if(e=(n=t.ownerDocument||document)&&n.defaultView||window,e.getSelection){e=e.getSelection();var l=t.textContent.length,i=Math.min(r.start,l);r=r.end===void 0?i:Math.min(r.end,l),!e.extend&&i>r&&(l=r,r=i,i=l),l=iu(t,i);var o=iu(t,r);l&&o&&(e.rangeCount!==1||e.anchorNode!==l.node||e.anchorOffset!==l.offset||e.focusNode!==o.node||e.focusOffset!==o.offset)&&(n=n.createRange(),n.setStart(l.node,l.offset),e.removeAllRanges(),i>r?(e.addRange(n),e.extend(o.node,o.offset)):(n.setEnd(o.node,o.offset),e.addRange(n)))}}for(n=[],e=t;e=e.parentNode;)e.nodeType===1&&n.push({element:e,left:e.scrollLeft,top:e.scrollTop});for(typeof t.focus=="function"&&t.focus(),t=0;t=document.documentMode,Fn=null,fi=null,xt=null,di=!1;function ou(e,n,t){var r=t.window===t?t.document:t.nodeType===9?t:t.ownerDocument;di||Fn==null||Fn!==Tr(r)||(r=Fn,"selectionStart"in 
r&&ro(r)?r={start:r.selectionStart,end:r.selectionEnd}:(r=(r.ownerDocument&&r.ownerDocument.defaultView||window).getSelection(),r={anchorNode:r.anchorNode,anchorOffset:r.anchorOffset,focusNode:r.focusNode,focusOffset:r.focusOffset}),xt&&At(xt,r)||(xt=r,r=Fr(fi,"onSelect"),0$n||(e.current=gi[$n],gi[$n]=null,$n--)}function O(e,n){$n++,gi[$n]=e.current,e.current=n}var pn={},le=hn(pn),fe=hn(!1),xn=pn;function bn(e,n){var t=e.type.contextTypes;if(!t)return pn;var r=e.stateNode;if(r&&r.__reactInternalMemoizedUnmaskedChildContext===n)return r.__reactInternalMemoizedMaskedChildContext;var l={},i;for(i in t)l[i]=n[i];return r&&(e=e.stateNode,e.__reactInternalMemoizedUnmaskedChildContext=n,e.__reactInternalMemoizedMaskedChildContext=l),l}function de(e){return e=e.childContextTypes,e!=null}function Ur(){I(fe),I(le)}function pu(e,n,t){if(le.current!==pn)throw Error(y(168));O(le,n),O(fe,t)}function qa(e,n,t){var r=e.stateNode;if(n=n.childContextTypes,typeof r.getChildContext!="function")return t;r=r.getChildContext();for(var l in r)if(!(l in n))throw Error(y(108,jc(e)||"Unknown",l));return B({},t,r)}function $r(e){return e=(e=e.stateNode)&&e.__reactInternalMemoizedMergedChildContext||pn,xn=le.current,O(le,e),O(fe,fe.current),!0}function mu(e,n,t){var r=e.stateNode;if(!r)throw Error(y(169));t?(e=qa(e,n,xn),r.__reactInternalMemoizedMergedChildContext=e,I(fe),I(le),O(le,e)):I(fe),O(fe,t)}var Be=null,ol=!1,Il=!1;function ba(e){Be===null?Be=[e]:Be.push(e)}function Jf(e){ol=!0,ba(e)}function vn(){if(!Il&&Be!==null){Il=!0;var e=0,n=j;try{var t=Be;for(j=1;e>=o,l-=o,Ve=1<<32-Me(n)+l|t<x?(H=C,C=null):H=C.sibling;var R=p(c,C,d[x],v);if(R===null){C===null&&(C=H);break}e&&C&&R.alternate===null&&n(c,C),s=i(R,s,x),L===null?_=R:L.sibling=R,L=R,C=H}if(x===d.length)return t(c,C),A&&wn(c,x),_;if(C===null){for(;xx?(H=C,C=null):H=C.sibling;var Ne=p(c,C,R.value,v);if(Ne===null){C===null&&(C=H);break}e&&C&&Ne.alternate===null&&n(c,C),s=i(Ne,s,x),L===null?_=Ne:L.sibling=Ne,L=Ne,C=H}if(R.done)return t(c,C),A&&wn(c,x),_;if(C===null){for(;!R.done;x++,R=d.next())R=m(c,R.value,v),R!==null&&(s=i(R,s,x),L===null?_=R:L.sibling=R,L=R);return A&&wn(c,x),_}for(C=r(c,C);!R.done;x++,R=d.next())R=w(C,c,x,R.value,v),R!==null&&(e&&R.alternate!==null&&C.delete(R.key===null?x:R.key),s=i(R,s,x),L===null?_=R:L.sibling=R,L=R);return e&&C.forEach(function(at){return n(c,at)}),A&&wn(c,x),_}function z(c,s,d,v){if(typeof d=="object"&&d!==null&&d.type===Dn&&d.key===null&&(d=d.props.children),typeof d=="object"&&d!==null){switch(d.$$typeof){case tr:e:{for(var _=d.key,L=s;L!==null;){if(L.key===_){if(_=d.type,_===Dn){if(L.tag===7){t(c,L.sibling),s=l(L,d.props.children),s.return=c,c=s;break e}}else if(L.elementType===_||typeof _=="object"&&_!==null&&_.$$typeof===Je&&Su(_)===L.type){t(c,L.sibling),s=l(L,d.props),s.ref=ht(c,L,d),s.return=c,c=s;break e}t(c,L);break}else n(c,L);L=L.sibling}d.type===Dn?(s=Cn(d.props.children,c.mode,v,d.key),s.return=c,c=s):(v=zr(d.type,d.key,d.props,null,c.mode,v),v.ref=ht(c,s,d),v.return=c,c=v)}return o(c);case On:e:{for(L=d.key;s!==null;){if(s.key===L)if(s.tag===4&&s.stateNode.containerInfo===d.containerInfo&&s.stateNode.implementation===d.implementation){t(c,s.sibling),s=l(s,d.children||[]),s.return=c,c=s;break e}else{t(c,s);break}else n(c,s);s=s.sibling}s=Wl(d,c.mode,v),s.return=c,c=s}return o(c);case Je:return L=d._init,z(c,s,L(d._payload),v)}if(wt(d))return g(c,s,d,v);if(ct(d))return k(c,s,d,v);pr(c,d)}return typeof d=="string"&&d!==""||typeof 
d=="number"?(d=""+d,s!==null&&s.tag===6?(t(c,s.sibling),s=l(s,d),s.return=c,c=s):(t(c,s),s=Hl(d,c.mode,v),s.return=c,c=s),o(c)):t(c,s)}return z}var nt=us(!0),as=us(!1),qt={},Ue=hn(qt),Vt=hn(qt),Ht=hn(qt);function En(e){if(e===qt)throw Error(y(174));return e}function po(e,n){switch(O(Ht,n),O(Vt,e),O(Ue,qt),e=n.nodeType,e){case 9:case 11:n=(n=n.documentElement)?n.namespaceURI:ni(null,"");break;default:e=e===8?n.parentNode:n,n=e.namespaceURI||null,e=e.tagName,n=ni(n,e)}I(Ue),O(Ue,n)}function tt(){I(Ue),I(Vt),I(Ht)}function ss(e){En(Ht.current);var n=En(Ue.current),t=ni(n,e.type);n!==t&&(O(Vt,e),O(Ue,t))}function mo(e){Vt.current===e&&(I(Ue),I(Vt))}var U=hn(0);function Kr(e){for(var n=e;n!==null;){if(n.tag===13){var t=n.memoizedState;if(t!==null&&(t=t.dehydrated,t===null||t.data==="$?"||t.data==="$!"))return n}else if(n.tag===19&&n.memoizedProps.revealOrder!==void 0){if(n.flags&128)return n}else if(n.child!==null){n.child.return=n,n=n.child;continue}if(n===e)break;for(;n.sibling===null;){if(n.return===null||n.return===e)return null;n=n.return}n.sibling.return=n.return,n=n.sibling}return null}var Fl=[];function ho(){for(var e=0;et?t:4,e(!0);var r=Al.transition;Al.transition={};try{e(!1),n()}finally{j=t,Al.transition=r}}function Cs(){return xe().memoizedState}function nd(e,n,t){var r=cn(e);if(t={lane:r,action:t,hasEagerState:!1,eagerState:null,next:null},xs(e))Ns(n,t);else if(t=rs(e,n,t,r),t!==null){var l=oe();je(t,e,r,l),Ps(t,n,r)}}function td(e,n,t){var r=cn(e),l={lane:r,action:t,hasEagerState:!1,eagerState:null,next:null};if(xs(e))Ns(n,l);else{var i=e.alternate;if(e.lanes===0&&(i===null||i.lanes===0)&&(i=n.lastRenderedReducer,i!==null))try{var o=n.lastRenderedState,u=i(o,t);if(l.hasEagerState=!0,l.eagerState=u,Oe(u,o)){var a=n.interleaved;a===null?(l.next=l,co(n)):(l.next=a.next,a.next=l),n.interleaved=l;return}}catch{}finally{}t=rs(e,n,l,r),t!==null&&(l=oe(),je(t,e,r,l),Ps(t,n,r))}}function xs(e){var n=e.alternate;return e===$||n!==null&&n===$}function Ns(e,n){Nt=Gr=!0;var t=e.pending;t===null?n.next=n:(n.next=t.next,t.next=n),e.pending=n}function Ps(e,n,t){if(t&4194240){var r=n.lanes;r&=e.pendingLanes,t|=r,n.lanes=t,Zi(e,t)}}var Yr={readContext:Ce,useCallback:ne,useContext:ne,useEffect:ne,useImperativeHandle:ne,useInsertionEffect:ne,useLayoutEffect:ne,useMemo:ne,useReducer:ne,useRef:ne,useState:ne,useDebugValue:ne,useDeferredValue:ne,useTransition:ne,useMutableSource:ne,useSyncExternalStore:ne,useId:ne,unstable_isNewReconciler:!1},rd={readContext:Ce,useCallback:function(e,n){return Ie().memoizedState=[e,n===void 0?null:n],e},useContext:Ce,useEffect:Eu,useImperativeHandle:function(e,n,t){return t=t!=null?t.concat([e]):null,Cr(4194308,4,ks.bind(null,n,e),t)},useLayoutEffect:function(e,n){return Cr(4194308,4,e,n)},useInsertionEffect:function(e,n){return Cr(4,2,e,n)},useMemo:function(e,n){var t=Ie();return n=n===void 0?null:n,e=e(),t.memoizedState=[e,n],e},useReducer:function(e,n,t){var r=Ie();return n=t!==void 0?t(n):n,r.memoizedState=r.baseState=n,e={pending:null,interleaved:null,lanes:0,dispatch:null,lastRenderedReducer:e,lastRenderedState:n},r.queue=e,e=e.dispatch=nd.bind(null,$,e),[r.memoizedState,e]},useRef:function(e){var n=Ie();return e={current:e},n.memoizedState=e},useState:_u,useDebugValue:ko,useDeferredValue:function(e){return Ie().memoizedState=e},useTransition:function(){var e=_u(!1),n=e[0];return e=ed.bind(null,e[1]),Ie().memoizedState=e,[n,e]},useMutableSource:function(){},useSyncExternalStore:function(e,n,t){var r=$,l=Ie();if(A){if(t===void 0)throw 
Error(y(407));t=t()}else{if(t=n(),J===null)throw Error(y(349));Pn&30||ds(r,n,t)}l.memoizedState=t;var i={value:t,getSnapshot:n};return l.queue=i,Eu(ms.bind(null,r,i,e),[e]),r.flags|=2048,Kt(9,ps.bind(null,r,i,t,n),void 0,null),t},useId:function(){var e=Ie(),n=J.identifierPrefix;if(A){var t=He,r=Ve;t=(r&~(1<<32-Me(r)-1)).toString(32)+t,n=":"+n+"R"+t,t=Wt++,0<\/script>",e=e.removeChild(e.firstChild)):typeof r.is=="string"?e=o.createElement(t,{is:r.is}):(e=o.createElement(t),t==="select"&&(o=e,r.multiple?o.multiple=!0:r.size&&(o.size=r.size))):e=o.createElementNS(e,t),e[Fe]=n,e[Bt]=r,Fs(e,n,!1,!1),n.stateNode=e;e:{switch(o=ri(t,r),t){case"dialog":D("cancel",e),D("close",e),l=r;break;case"iframe":case"object":case"embed":D("load",e),l=r;break;case"video":case"audio":for(l=0;llt&&(n.flags|=128,r=!0,vt(i,!1),n.lanes=4194304)}else{if(!r)if(e=Kr(o),e!==null){if(n.flags|=128,r=!0,t=e.updateQueue,t!==null&&(n.updateQueue=t,n.flags|=4),vt(i,!0),i.tail===null&&i.tailMode==="hidden"&&!o.alternate&&!A)return te(n),null}else 2*Q()-i.renderingStartTime>lt&&t!==1073741824&&(n.flags|=128,r=!0,vt(i,!1),n.lanes=4194304);i.isBackwards?(o.sibling=n.child,n.child=o):(t=i.last,t!==null?t.sibling=o:n.child=o,i.last=o)}return i.tail!==null?(n=i.tail,i.rendering=n,i.tail=n.sibling,i.renderingStartTime=Q(),n.sibling=null,t=U.current,O(U,r?t&1|2:t&1),n):(te(n),null);case 22:case 23:return xo(),r=n.memoizedState!==null,e!==null&&e.memoizedState!==null!==r&&(n.flags|=8192),r&&n.mode&1?me&1073741824&&(te(n),n.subtreeFlags&6&&(n.flags|=8192)):te(n),null;case 24:return null;case 25:return null}throw Error(y(156,n.tag))}function fd(e,n){switch(io(n),n.tag){case 1:return de(n.type)&&Ur(),e=n.flags,e&65536?(n.flags=e&-65537|128,n):null;case 3:return tt(),I(fe),I(le),ho(),e=n.flags,e&65536&&!(e&128)?(n.flags=e&-65537|128,n):null;case 5:return mo(n),null;case 13:if(I(U),e=n.memoizedState,e!==null&&e.dehydrated!==null){if(n.alternate===null)throw Error(y(340));et()}return e=n.flags,e&65536?(n.flags=e&-65537|128,n):null;case 19:return I(U),null;case 4:return tt(),null;case 10:return so(n.type._context),null;case 22:case 23:return xo(),null;case 24:return null;default:return null}}var hr=!1,re=!1,dd=typeof WeakSet=="function"?WeakSet:Set,S=null;function Wn(e,n){var t=e.ref;if(t!==null)if(typeof t=="function")try{t(null)}catch(r){V(e,n,r)}else t.current=null}function Ti(e,n,t){try{t()}catch(r){V(e,n,r)}}var Mu=!1;function pd(e,n){if(pi=Dr,e=Ha(),ro(e)){if("selectionStart"in e)var t={start:e.selectionStart,end:e.selectionEnd};else e:{t=(t=e.ownerDocument)&&t.defaultView||window;var r=t.getSelection&&t.getSelection();if(r&&r.rangeCount!==0){t=r.anchorNode;var l=r.anchorOffset,i=r.focusNode;r=r.focusOffset;try{t.nodeType,i.nodeType}catch{t=null;break e}var o=0,u=-1,a=-1,f=0,h=0,m=e,p=null;n:for(;;){for(var w;m!==t||l!==0&&m.nodeType!==3||(u=o+l),m!==i||r!==0&&m.nodeType!==3||(a=o+r),m.nodeType===3&&(o+=m.nodeValue.length),(w=m.firstChild)!==null;)p=m,m=w;for(;;){if(m===e)break n;if(p===t&&++f===l&&(u=o),p===i&&++h===r&&(a=o),(w=m.nextSibling)!==null)break;m=p,p=m.parentNode}m=w}t=u===-1||a===-1?null:{start:u,end:a}}else t=null}t=t||{start:0,end:0}}else t=null;for(mi={focusedElem:e,selectionRange:t},Dr=!1,S=n;S!==null;)if(n=S,e=n.child,(n.subtreeFlags&1028)!==0&&e!==null)e.return=n,S=e;else for(;S!==null;){n=S;try{var g=n.alternate;if(n.flags&1024)switch(n.tag){case 0:case 11:case 15:break;case 1:if(g!==null){var 
k=g.memoizedProps,z=g.memoizedState,c=n.stateNode,s=c.getSnapshotBeforeUpdate(n.elementType===n.type?k:ze(n.type,k),z);c.__reactInternalSnapshotBeforeUpdate=s}break;case 3:var d=n.stateNode.containerInfo;d.nodeType===1?d.textContent="":d.nodeType===9&&d.documentElement&&d.removeChild(d.documentElement);break;case 5:case 6:case 4:case 17:break;default:throw Error(y(163))}}catch(v){V(n,n.return,v)}if(e=n.sibling,e!==null){e.return=n.return,S=e;break}S=n.return}return g=Mu,Mu=!1,g}function Pt(e,n,t){var r=n.updateQueue;if(r=r!==null?r.lastEffect:null,r!==null){var l=r=r.next;do{if((l.tag&e)===e){var i=l.destroy;l.destroy=void 0,i!==void 0&&Ti(n,t,i)}l=l.next}while(l!==r)}}function sl(e,n){if(n=n.updateQueue,n=n!==null?n.lastEffect:null,n!==null){var t=n=n.next;do{if((t.tag&e)===e){var r=t.create;t.destroy=r()}t=t.next}while(t!==n)}}function Ri(e){var n=e.ref;if(n!==null){var t=e.stateNode;switch(e.tag){case 5:e=t;break;default:e=t}typeof n=="function"?n(e):n.current=e}}function $s(e){var n=e.alternate;n!==null&&(e.alternate=null,$s(n)),e.child=null,e.deletions=null,e.sibling=null,e.tag===5&&(n=e.stateNode,n!==null&&(delete n[Fe],delete n[Bt],delete n[yi],delete n[Xf],delete n[Zf])),e.stateNode=null,e.return=null,e.dependencies=null,e.memoizedProps=null,e.memoizedState=null,e.pendingProps=null,e.stateNode=null,e.updateQueue=null}function Bs(e){return e.tag===5||e.tag===3||e.tag===4}function ju(e){e:for(;;){for(;e.sibling===null;){if(e.return===null||Bs(e.return))return null;e=e.return}for(e.sibling.return=e.return,e=e.sibling;e.tag!==5&&e.tag!==6&&e.tag!==18;){if(e.flags&2||e.child===null||e.tag===4)continue e;e.child.return=e,e=e.child}if(!(e.flags&2))return e.stateNode}}function Mi(e,n,t){var r=e.tag;if(r===5||r===6)e=e.stateNode,n?t.nodeType===8?t.parentNode.insertBefore(e,n):t.insertBefore(e,n):(t.nodeType===8?(n=t.parentNode,n.insertBefore(e,t)):(n=t,n.appendChild(e)),t=t._reactRootContainer,t!=null||n.onclick!==null||(n.onclick=Ar));else if(r!==4&&(e=e.child,e!==null))for(Mi(e,n,t),e=e.sibling;e!==null;)Mi(e,n,t),e=e.sibling}function ji(e,n,t){var r=e.tag;if(r===5||r===6)e=e.stateNode,n?t.insertBefore(e,n):t.appendChild(e);else if(r!==4&&(e=e.child,e!==null))for(ji(e,n,t),e=e.sibling;e!==null;)ji(e,n,t),e=e.sibling}var q=null,Te=!1;function Ze(e,n,t){for(t=t.child;t!==null;)Vs(e,n,t),t=t.sibling}function Vs(e,n,t){if(Ae&&typeof Ae.onCommitFiberUnmount=="function")try{Ae.onCommitFiberUnmount(nl,t)}catch{}switch(t.tag){case 5:re||Wn(t,n);case 6:var r=q,l=Te;q=null,Ze(e,n,t),q=r,Te=l,q!==null&&(Te?(e=q,t=t.stateNode,e.nodeType===8?e.parentNode.removeChild(t):e.removeChild(t)):q.removeChild(t.stateNode));break;case 18:q!==null&&(Te?(e=q,t=t.stateNode,e.nodeType===8?Dl(e.parentNode,t):e.nodeType===1&&Dl(e,t),It(e)):Dl(q,t.stateNode));break;case 4:r=q,l=Te,q=t.stateNode.containerInfo,Te=!0,Ze(e,n,t),q=r,Te=l;break;case 0:case 11:case 14:case 15:if(!re&&(r=t.updateQueue,r!==null&&(r=r.lastEffect,r!==null))){l=r=r.next;do{var i=l,o=i.destroy;i=i.tag,o!==void 0&&(i&2||i&4)&&Ti(t,n,o),l=l.next}while(l!==r)}Ze(e,n,t);break;case 1:if(!re&&(Wn(t,n),r=t.stateNode,typeof r.componentWillUnmount=="function"))try{r.props=t.memoizedProps,r.state=t.memoizedState,r.componentWillUnmount()}catch(u){V(t,n,u)}Ze(e,n,t);break;case 21:Ze(e,n,t);break;case 22:t.mode&1?(re=(r=re)||t.memoizedState!==null,Ze(e,n,t),re=r):Ze(e,n,t);break;default:Ze(e,n,t)}}function Ou(e){var n=e.updateQueue;if(n!==null){e.updateQueue=null;var t=e.stateNode;t===null&&(t=e.stateNode=new dd),n.forEach(function(r){var 
l=_d.bind(null,e,r);t.has(r)||(t.add(r),r.then(l,l))})}}function Pe(e,n){var t=n.deletions;if(t!==null)for(var r=0;rl&&(l=o),r&=~i}if(r=l,r=Q()-r,r=(120>r?120:480>r?480:1080>r?1080:1920>r?1920:3e3>r?3e3:4320>r?4320:1960*hd(r/1960))-r,10e?16:e,tn===null)var r=!1;else{if(e=tn,tn=null,Jr=0,M&6)throw Error(y(331));var l=M;for(M|=4,S=e.current;S!==null;){var i=S,o=i.child;if(S.flags&16){var u=i.deletions;if(u!==null){for(var a=0;aQ()-Lo?Ln(e,0):Eo|=t),pe(e,n)}function Zs(e,n){n===0&&(e.mode&1?(n=or,or<<=1,!(or&130023424)&&(or=4194304)):n=1);var t=oe();e=Ge(e,n),e!==null&&(Xt(e,n,t),pe(e,t))}function Sd(e){var n=e.memoizedState,t=0;n!==null&&(t=n.retryLane),Zs(e,t)}function _d(e,n){var t=0;switch(e.tag){case 13:var r=e.stateNode,l=e.memoizedState;l!==null&&(t=l.retryLane);break;case 19:r=e.stateNode;break;default:throw Error(y(314))}r!==null&&r.delete(n),Zs(e,t)}var Js;Js=function(e,n,t){if(e!==null)if(e.memoizedProps!==n.pendingProps||fe.current)ce=!0;else{if(!(e.lanes&t)&&!(n.flags&128))return ce=!1,sd(e,n,t);ce=!!(e.flags&131072)}else ce=!1,A&&n.flags&1048576&&es(n,Vr,n.index);switch(n.lanes=0,n.tag){case 2:var r=n.type;xr(e,n),e=n.pendingProps;var l=bn(n,le.current);Zn(n,t),l=yo(null,n,r,e,l,t);var i=go();return n.flags|=1,typeof l=="object"&&l!==null&&typeof l.render=="function"&&l.$$typeof===void 0?(n.tag=1,n.memoizedState=null,n.updateQueue=null,de(r)?(i=!0,$r(n)):i=!1,n.memoizedState=l.state!==null&&l.state!==void 0?l.state:null,fo(n),l.updater=ul,n.stateNode=l,l._reactInternals=n,Ei(n,r,e,t),n=xi(null,n,r,!0,i,t)):(n.tag=0,A&&i&&lo(n),ie(null,n,l,t),n=n.child),n;case 16:r=n.elementType;e:{switch(xr(e,n),e=n.pendingProps,l=r._init,r=l(r._payload),n.type=r,l=n.tag=Ld(r),e=ze(r,e),l){case 0:n=Ci(null,n,r,e,t);break e;case 1:n=zu(null,n,r,e,t);break e;case 11:n=Nu(null,n,r,e,t);break e;case 14:n=Pu(null,n,r,ze(r.type,e),t);break e}throw Error(y(306,r,""))}return n;case 0:return r=n.type,l=n.pendingProps,l=n.elementType===r?l:ze(r,l),Ci(e,n,r,l,t);case 1:return r=n.type,l=n.pendingProps,l=n.elementType===r?l:ze(r,l),zu(e,n,r,l,t);case 3:e:{if(Os(n),e===null)throw Error(y(387));r=n.pendingProps,i=n.memoizedState,l=i.element,ls(e,n),Qr(n,r,null,t);var o=n.memoizedState;if(r=o.element,i.isDehydrated)if(i={element:r,isDehydrated:!1,cache:o.cache,pendingSuspenseBoundaries:o.pendingSuspenseBoundaries,transitions:o.transitions},n.updateQueue.baseState=i,n.memoizedState=i,n.flags&256){l=rt(Error(y(423)),n),n=Tu(e,n,r,t,l);break e}else if(r!==l){l=rt(Error(y(424)),n),n=Tu(e,n,r,t,l);break e}else for(he=un(n.stateNode.containerInfo.firstChild),ve=n,A=!0,Re=null,t=as(n,null,r,t),n.child=t;t;)t.flags=t.flags&-3|4096,t=t.sibling;else{if(et(),r===l){n=Ye(e,n,t);break e}ie(e,n,r,t)}n=n.child}return n;case 5:return ss(n),e===null&&ki(n),r=n.type,l=n.pendingProps,i=e!==null?e.memoizedProps:null,o=l.children,hi(r,l)?o=null:i!==null&&hi(r,i)&&(n.flags|=32),js(e,n),ie(e,n,o,t),n.child;case 6:return e===null&&ki(n),null;case 13:return Ds(e,n,t);case 4:return po(n,n.stateNode.containerInfo),r=n.pendingProps,e===null?n.child=nt(n,null,r,t):ie(e,n,r,t),n.child;case 11:return r=n.type,l=n.pendingProps,l=n.elementType===r?l:ze(r,l),Nu(e,n,r,l,t);case 7:return ie(e,n,n.pendingProps,t),n.child;case 8:return ie(e,n,n.pendingProps.children,t),n.child;case 12:return ie(e,n,n.pendingProps.children,t),n.child;case 10:e:{if(r=n.type._context,l=n.pendingProps,i=n.memoizedProps,o=l.value,O(Hr,r._currentValue),r._currentValue=o,i!==null)if(Oe(i.value,o)){if(i.children===l.children&&!fe.current){n=Ye(e,n,t);break e}}else 
for(i=n.child,i!==null&&(i.return=n);i!==null;){var u=i.dependencies;if(u!==null){o=i.child;for(var a=u.firstContext;a!==null;){if(a.context===r){if(i.tag===1){a=We(-1,t&-t),a.tag=2;var f=i.updateQueue;if(f!==null){f=f.shared;var h=f.pending;h===null?a.next=a:(a.next=h.next,h.next=a),f.pending=a}}i.lanes|=t,a=i.alternate,a!==null&&(a.lanes|=t),Si(i.return,t,n),u.lanes|=t;break}a=a.next}}else if(i.tag===10)o=i.type===n.type?null:i.child;else if(i.tag===18){if(o=i.return,o===null)throw Error(y(341));o.lanes|=t,u=o.alternate,u!==null&&(u.lanes|=t),Si(o,t,n),o=i.sibling}else o=i.child;if(o!==null)o.return=i;else for(o=i;o!==null;){if(o===n){o=null;break}if(i=o.sibling,i!==null){i.return=o.return,o=i;break}o=o.return}i=o}ie(e,n,l.children,t),n=n.child}return n;case 9:return l=n.type,r=n.pendingProps.children,Zn(n,t),l=Ce(l),r=r(l),n.flags|=1,ie(e,n,r,t),n.child;case 14:return r=n.type,l=ze(r,n.pendingProps),l=ze(r.type,l),Pu(e,n,r,l,t);case 15:return Rs(e,n,n.type,n.pendingProps,t);case 17:return r=n.type,l=n.pendingProps,l=n.elementType===r?l:ze(r,l),xr(e,n),n.tag=1,de(r)?(e=!0,$r(n)):e=!1,Zn(n,t),os(n,r,l),Ei(n,r,l,t),xi(null,n,r,!0,e,t);case 19:return Is(e,n,t);case 22:return Ms(e,n,t)}throw Error(y(156,n.tag))};function qs(e,n){return La(e,n)}function Ed(e,n,t,r){this.tag=e,this.key=t,this.sibling=this.child=this.return=this.stateNode=this.type=this.elementType=null,this.index=0,this.ref=null,this.pendingProps=n,this.dependencies=this.memoizedState=this.updateQueue=this.memoizedProps=null,this.mode=r,this.subtreeFlags=this.flags=0,this.deletions=null,this.childLanes=this.lanes=0,this.alternate=null}function Ee(e,n,t,r){return new Ed(e,n,t,r)}function Po(e){return e=e.prototype,!(!e||!e.isReactComponent)}function Ld(e){if(typeof e=="function")return Po(e)?1:0;if(e!=null){if(e=e.$$typeof,e===Ki)return 11;if(e===Gi)return 14}return 2}function fn(e,n){var t=e.alternate;return t===null?(t=Ee(e.tag,n,e.key,e.mode),t.elementType=e.elementType,t.type=e.type,t.stateNode=e.stateNode,t.alternate=e,e.alternate=t):(t.pendingProps=n,t.type=e.type,t.flags=0,t.subtreeFlags=0,t.deletions=null),t.flags=e.flags&14680064,t.childLanes=e.childLanes,t.lanes=e.lanes,t.child=e.child,t.memoizedProps=e.memoizedProps,t.memoizedState=e.memoizedState,t.updateQueue=e.updateQueue,n=e.dependencies,t.dependencies=n===null?null:{lanes:n.lanes,firstContext:n.firstContext},t.sibling=e.sibling,t.index=e.index,t.ref=e.ref,t}function zr(e,n,t,r,l,i){var o=2;if(r=e,typeof e=="function")Po(e)&&(o=1);else if(typeof e=="string")o=5;else e:switch(e){case Dn:return Cn(t.children,l,i,n);case Qi:o=8,l|=8;break;case Gl:return e=Ee(12,t,n,l|2),e.elementType=Gl,e.lanes=i,e;case Yl:return e=Ee(13,t,n,l),e.elementType=Yl,e.lanes=i,e;case Xl:return e=Ee(19,t,n,l),e.elementType=Xl,e.lanes=i,e;case ua:return fl(t,l,i,n);default:if(typeof e=="object"&&e!==null)switch(e.$$typeof){case ia:o=10;break e;case oa:o=9;break e;case Ki:o=11;break e;case Gi:o=14;break e;case Je:o=16,r=null;break e}throw Error(y(130,e==null?e:typeof e,""))}return n=Ee(o,t,n,l),n.elementType=e,n.type=r,n.lanes=i,n}function Cn(e,n,t,r){return e=Ee(7,e,r,n),e.lanes=t,e}function fl(e,n,t,r){return e=Ee(22,e,r,n),e.elementType=ua,e.lanes=t,e.stateNode={isHidden:!1},e}function Hl(e,n,t){return e=Ee(6,e,null,n),e.lanes=t,e}function Wl(e,n,t){return n=Ee(4,e.children!==null?e.children:[],e.key,n),n.lanes=t,n.stateNode={containerInfo:e.containerInfo,pendingChildren:null,implementation:e.implementation},n}function 
Cd(e,n,t,r,l){this.tag=n,this.containerInfo=e,this.finishedWork=this.pingCache=this.current=this.pendingChildren=null,this.timeoutHandle=-1,this.callbackNode=this.pendingContext=this.context=null,this.callbackPriority=0,this.eventTimes=Ll(0),this.expirationTimes=Ll(-1),this.entangledLanes=this.finishedLanes=this.mutableReadLanes=this.expiredLanes=this.pingedLanes=this.suspendedLanes=this.pendingLanes=0,this.entanglements=Ll(0),this.identifierPrefix=r,this.onRecoverableError=l,this.mutableSourceEagerHydrationData=null}function zo(e,n,t,r,l,i,o,u,a){return e=new Cd(e,n,t,u,a),n===1?(n=1,i===!0&&(n|=8)):n=0,i=Ee(3,null,null,n),e.current=i,i.stateNode=e,i.memoizedState={element:r,isDehydrated:t,cache:null,transitions:null,pendingSuspenseBoundaries:null},fo(i),e}function xd(e,n,t){var r=3"u"||typeof __REACT_DEVTOOLS_GLOBAL_HOOK__.checkDCE!="function"))try{__REACT_DEVTOOLS_GLOBAL_HOOK__.checkDCE(tc)}catch(e){console.error(e)}}tc(),ea.exports=ge;var Rd=ea.exports,Vu=Rd;Ql.createRoot=Vu.createRoot,Ql.hydrateRoot=Vu.hydrateRoot;const Md={"Acehnese (Arabic script)":"ace_Arab","Acehnese (Latin script)":"ace_Latn",Afrikaans:"afr_Latn",Akan:"aka_Latn",Amharic:"amh_Ethi",Armenian:"hye_Armn",Assamese:"asm_Beng",Asturian:"ast_Latn",Awadhi:"awa_Deva","Ayacucho Quechua":"quy_Latn",Balinese:"ban_Latn",Bambara:"bam_Latn","Banjar (Arabic script)":"bjn_Arab","Banjar (Latin script)":"bjn_Latn",Bashkir:"bak_Cyrl",Basque:"eus_Latn",Belarusian:"bel_Cyrl",Bemba:"bem_Latn",Bengali:"ben_Beng",Bhojpuri:"bho_Deva",Bosnian:"bos_Latn",Buginese:"bug_Latn",Bulgarian:"bul_Cyrl",Burmese:"mya_Mymr",Catalan:"cat_Latn",Cebuano:"ceb_Latn","Central Atlas Tamazight":"tzm_Tfng","Central Aymara":"ayr_Latn","Central Kanuri (Arabic script)":"knc_Arab","Central Kanuri (Latin script)":"knc_Latn","Central Kurdish":"ckb_Arab",Chhattisgarhi:"hne_Deva","Chinese (Simplified)":"zho_Hans","Chinese (Traditional)":"zho_Hant",Chokwe:"cjk_Latn","Crimean Tatar":"crh_Latn",Croatian:"hrv_Latn",Czech:"ces_Latn",Danish:"dan_Latn",Dari:"prs_Arab",Dutch:"nld_Latn",Dyula:"dyu_Latn",Dzongkha:"dzo_Tibt","Eastern Panjabi":"pan_Guru","Eastern Yiddish":"ydd_Hebr","Egyptian Arabic":"arz_Arab",English:"eng_Latn",Esperanto:"epo_Latn",Estonian:"est_Latn",Ewe:"ewe_Latn",Faroese:"fao_Latn",Fijian:"fij_Latn",Finnish:"fin_Latn",Fon:"fon_Latn",French:"fra_Latn",Friulian:"fur_Latn",Galician:"glg_Latn",Ganda:"lug_Latn",Georgian:"kat_Geor",German:"deu_Latn",Greek:"ell_Grek",Guarani:"grn_Latn",Gujarati:"guj_Gujr","Haitian Creole":"hat_Latn","Halh Mongolian":"khk_Cyrl",Hausa:"hau_Latn",Hebrew:"heb_Hebr",Hindi:"hin_Deva",Hungarian:"hun_Latn",Icelandic:"isl_Latn",Igbo:"ibo_Latn",Ilocano:"ilo_Latn",Indonesian:"ind_Latn",Irish:"gle_Latn",Italian:"ita_Latn",Japanese:"jpn_Jpan",Javanese:"jav_Latn",Jingpho:"kac_Latn",Kabiyè:"kbp_Latn",Kabuverdianu:"kea_Latn",Kabyle:"kab_Latn",Kamba:"kam_Latn",Kannada:"kan_Knda","Kashmiri (Arabic script)":"kas_Arab","Kashmiri (Devanagari script)":"kas_Deva",Kazakh:"kaz_Cyrl",Khmer:"khm_Khmr",Kikongo:"kon_Latn",Kikuyu:"kik_Latn",Kimbundu:"kmb_Latn",Kinyarwanda:"kin_Latn",Korean:"kor_Hang",Kyrgyz:"kir_Cyrl",Lao:"lao_Laoo",Latgalian:"ltg_Latn",Ligurian:"lij_Latn",Limburgish:"lim_Latn",Lingala:"lin_Latn",Lithuanian:"lit_Latn",Lombard:"lmo_Latn","Luba-Kasai":"lua_Latn",Luo:"luo_Latn",Luxembourgish:"ltz_Latn",Macedonian:"mkd_Cyrl",Magahi:"mag_Deva",Maithili:"mai_Deva",Malayalam:"mal_Mlym",Maltese:"mlt_Latn",Maori:"mri_Latn",Marathi:"mar_Deva","Meitei (Bengali script)":"mni_Beng","Mesopotamian Arabic":"acm_Arab","Minangkabau (Arabic 
script)":"min_Arab","Minangkabau (Latin script)":"min_Latn",Mizo:"lus_Latn","Modern Standard Arabic (Romanized)":"arb_Latn","Modern Standard Arabic":"arb_Arab","Moroccan Arabic":"ary_Arab",Mossi:"mos_Latn","Najdi Arabic":"ars_Arab",Nepali:"npi_Deva","Nigerian Fulfulde":"fuv_Latn","North Azerbaijani":"azj_Latn","North Levantine Arabic":"apc_Arab","Northern Kurdish":"kmr_Latn","Northern Sotho":"nso_Latn","Northern Uzbek":"uzn_Latn","Norwegian Bokmål":"nob_Latn","Norwegian Nynorsk":"nno_Latn",Nuer:"nus_Latn",Nyanja:"nya_Latn",Occitan:"oci_Latn",Odia:"ory_Orya",Pangasinan:"pag_Latn",Papiamento:"pap_Latn","Plateau Malagasy":"plt_Latn",Polish:"pol_Latn",Portuguese:"por_Latn",Romanian:"ron_Latn",Rundi:"run_Latn",Russian:"rus_Cyrl",Samoan:"smo_Latn",Sango:"sag_Latn",Sanskrit:"san_Deva",Santali:"sat_Olck",Sardinian:"srd_Latn","Scottish Gaelic":"gla_Latn",Serbian:"srp_Cyrl",Shan:"shn_Mymr",Shona:"sna_Latn",Sicilian:"scn_Latn",Silesian:"szl_Latn",Sindhi:"snd_Arab",Sinhala:"sin_Sinh",Slovak:"slk_Latn",Slovenian:"slv_Latn",Somali:"som_Latn","South Azerbaijani":"azb_Arab","South Levantine Arabic":"ajp_Arab","Southern Pashto":"pbt_Arab","Southern Sotho":"sot_Latn","Southwestern Dinka":"dik_Latn",Spanish:"spa_Latn","Standard Latvian":"lvs_Latn","Standard Malay":"zsm_Latn","Standard Tibetan":"bod_Tibt",Sundanese:"sun_Latn",Swahili:"swh_Latn",Swati:"ssw_Latn",Swedish:"swe_Latn",Tagalog:"tgl_Latn",Tajik:"tgk_Cyrl","Tamasheq (Latin script)":"taq_Latn","Tamasheq (Tifinagh script)":"taq_Tfng",Tamil:"tam_Taml",Tatar:"tat_Cyrl","Ta’izzi-Adeni Arabic":"acq_Arab",Telugu:"tel_Telu",Thai:"tha_Thai",Tigrinya:"tir_Ethi","Tok Pisin":"tpi_Latn","Tosk Albanian":"als_Latn",Tsonga:"tso_Latn",Tswana:"tsn_Latn",Tumbuka:"tum_Latn","Tunisian Arabic":"aeb_Arab",Turkish:"tur_Latn",Turkmen:"tuk_Latn",Twi:"twi_Latn",Ukrainian:"ukr_Cyrl",Umbundu:"umb_Latn",Urdu:"urd_Arab",Uyghur:"uig_Arab",Venetian:"vec_Latn",Vietnamese:"vie_Latn",Waray:"war_Latn",Welsh:"cym_Latn","West Central Oromo":"gaz_Latn","Western Persian":"pes_Arab",Wolof:"wol_Latn",Xhosa:"xho_Latn",Yoruba:"yor_Latn","Yue Chinese":"yue_Hant",Zulu:"zul_Latn"};function Hu({type:e,onChange:n,defaultLanguage:t}){return F.jsxs("div",{className:"language-selector",children:[F.jsxs("label",{children:[e,": "]}),F.jsx("select",{onChange:n,defaultValue:t,children:Object.entries(Md).map(([r,l])=>F.jsx("option",{value:l,children:r},r))})]})}function jd({text:e,percentage:n}){return n=n??0,F.jsx("div",{className:"progress-container",children:F.jsxs("div",{className:"progress-bar",style:{width:`${n}%`},children:[e," (",`${n.toFixed(2)}%`,")"]})})}function Od(){const[e,n]=ke.useState(null),[t,r]=ke.useState(!1),[l,i]=ke.useState([]),[o,u]=ke.useState("I love walking my dog."),[a,f]=ke.useState("eng_Latn"),[h,m]=ke.useState("fra_Latn"),[p,w]=ke.useState(""),g=ke.useRef(null);ke.useEffect(()=>{g.current||(g.current=new Worker(new URL("/assets/worker-22715bb5.js",self.location),{type:"module"}));const z=c=>{switch(c.data.status){case"initiate":n(!1),i(s=>[...s,c.data]);break;case"progress":i(s=>s.map(d=>d.file===c.data.file?{...d,progress:c.data.progress}:d));break;case"done":i(s=>s.filter(d=>d.file!==c.data.file));break;case"ready":n(!0);break;case"update":w(c.data.output);break;case"complete":r(!1);break}};return g.current.addEventListener("message",z),()=>g.current.removeEventListener("message",z)});const k=()=>{r(!0),g.current.postMessage({text:o,src_lang:a,tgt_lang:h})};return F.jsxs(F.Fragment,{children:[F.jsx("h1",{children:"Transformers.js"}),F.jsx("h2",{children:"ML-powered 
multilingual translation in React!"}),F.jsxs("div",{className:"container",children:[F.jsxs("div",{className:"language-container",children:[F.jsx(Hu,{type:"Source",defaultLanguage:"eng_Latn",onChange:z=>f(z.target.value)}),F.jsx(Hu,{type:"Target",defaultLanguage:"fra_Latn",onChange:z=>m(z.target.value)})]}),F.jsxs("div",{className:"textbox-container",children:[F.jsx("textarea",{value:o,rows:3,onChange:z=>u(z.target.value)}),F.jsx("textarea",{value:p,rows:3,readOnly:!0})]})]}),F.jsx("button",{disabled:t,onClick:k,children:"Translate"}),F.jsxs("div",{className:"progress-bars-container",children:[e===!1&&F.jsx("label",{children:"Loading models... (only run once)"}),l.map(z=>F.jsx("div",{children:F.jsx(jd,{text:z.file,percentage:z.progress})},z.file))]})]})}Ql.createRoot(document.getElementById("root")).render(F.jsx(kc.StrictMode,{children:F.jsx(Od,{})})); diff --git a/spaces/XzJosh/Azuma-Bert-VITS2/text/cleaner.py b/spaces/XzJosh/Azuma-Bert-VITS2/text/cleaner.py deleted file mode 100644 index 64bd5f7296f66c94f3a335666c53706bb5fe5b39..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Azuma-Bert-VITS2/text/cleaner.py +++ /dev/null @@ -1,27 +0,0 @@ -from text import chinese, cleaned_text_to_sequence - - -language_module_map = { - 'ZH': chinese -} - - -def clean_text(text, language): - language_module = language_module_map[language] - norm_text = language_module.text_normalize(text) - phones, tones, word2ph = language_module.g2p(norm_text) - return norm_text, phones, tones, word2ph - -def clean_text_bert(text, language): - language_module = language_module_map[language] - norm_text = language_module.text_normalize(text) - phones, tones, word2ph = language_module.g2p(norm_text) - bert = language_module.get_bert_feature(norm_text, word2ph) - return phones, tones, bert - -def text_to_sequence(text, language): - norm_text, phones, tones, word2ph = clean_text(text, language) - return cleaned_text_to_sequence(phones, tones, language) - -if __name__ == '__main__': - pass diff --git a/spaces/XzJosh/yoyo-Bert-VITS2/train_ms.py b/spaces/XzJosh/yoyo-Bert-VITS2/train_ms.py deleted file mode 100644 index 5d109003d40497ea4493e7c73f47c1eb7370a81e..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/yoyo-Bert-VITS2/train_ms.py +++ /dev/null @@ -1,402 +0,0 @@ -import os -import json -import argparse -import itertools -import math -import torch -import shutil -from torch import nn, optim -from torch.nn import functional as F -from torch.utils.data import DataLoader -from torch.utils.tensorboard import SummaryWriter -import torch.multiprocessing as mp -import torch.distributed as dist -from torch.nn.parallel import DistributedDataParallel as DDP -from torch.cuda.amp import autocast, GradScaler -from tqdm import tqdm -import logging -logging.getLogger('numba').setLevel(logging.WARNING) -import commons -import utils -from data_utils import ( - TextAudioSpeakerLoader, - TextAudioSpeakerCollate, - DistributedBucketSampler -) -from models import ( - SynthesizerTrn, - MultiPeriodDiscriminator, - DurationDiscriminator, -) -from losses import ( - generator_loss, - discriminator_loss, - feature_loss, - kl_loss -) -from mel_processing import mel_spectrogram_torch, spec_to_mel_torch -from text.symbols import symbols - -torch.backends.cudnn.benchmark = True -torch.backends.cuda.matmul.allow_tf32 = True -torch.backends.cudnn.allow_tf32 = True -torch.set_float32_matmul_precision('medium') -global_step = 0 - - -def main(): - """Assume Single Node Multi GPUs Training Only""" - assert torch.cuda.is_available(), 
"CPU training is not allowed." - - n_gpus = torch.cuda.device_count() - os.environ['MASTER_ADDR'] = 'localhost' - os.environ['MASTER_PORT'] = '65280' - - hps = utils.get_hparams() - if not hps.cont: - shutil.copy('./pretrained_models/D_0.pth','./logs/OUTPUT_MODEL/D_0.pth') - shutil.copy('./pretrained_models/G_0.pth','./logs/OUTPUT_MODEL/G_0.pth') - shutil.copy('./pretrained_models/DUR_0.pth','./logs/OUTPUT_MODEL/DUR_0.pth') - mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,)) - - -def run(rank, n_gpus, hps): - global global_step - if rank == 0: - logger = utils.get_logger(hps.model_dir) - logger.info(hps) - utils.check_git_hash(hps.model_dir) - writer = SummaryWriter(log_dir=hps.model_dir) - writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval")) - - dist.init_process_group(backend= 'gloo' if os.name == 'nt' else 'nccl', init_method='env://', world_size=n_gpus, rank=rank) - torch.manual_seed(hps.train.seed) - torch.cuda.set_device(rank) - - train_dataset = TextAudioSpeakerLoader(hps.data.training_files, hps.data) - train_sampler = DistributedBucketSampler( - train_dataset, - hps.train.batch_size, - [32, 300, 400, 500, 600, 700, 800, 900, 1000], - num_replicas=n_gpus, - rank=rank, - shuffle=True) - collate_fn = TextAudioSpeakerCollate() - train_loader = DataLoader(train_dataset, num_workers=2, shuffle=False, pin_memory=True, - collate_fn=collate_fn, batch_sampler=train_sampler) - if rank == 0: - eval_dataset = TextAudioSpeakerLoader(hps.data.validation_files, hps.data) - eval_loader = DataLoader(eval_dataset, num_workers=0, shuffle=False, - batch_size=1, pin_memory=True, - drop_last=False, collate_fn=collate_fn) - if "use_noise_scaled_mas" in hps.model.keys() and hps.model.use_noise_scaled_mas == True: - print("Using noise scaled MAS for VITS2") - use_noise_scaled_mas = True - mas_noise_scale_initial = 0.01 - noise_scale_delta = 2e-6 - else: - print("Using normal MAS for VITS1") - use_noise_scaled_mas = False - mas_noise_scale_initial = 0.0 - noise_scale_delta = 0.0 - if "use_duration_discriminator" in hps.model.keys() and hps.model.use_duration_discriminator == True: - print("Using duration discriminator for VITS2") - use_duration_discriminator = True - net_dur_disc = DurationDiscriminator( - hps.model.hidden_channels, - hps.model.hidden_channels, - 3, - 0.1, - gin_channels=hps.model.gin_channels if hps.data.n_speakers != 0 else 0, - ).cuda(rank) - if "use_spk_conditioned_encoder" in hps.model.keys() and hps.model.use_spk_conditioned_encoder == True: - if hps.data.n_speakers == 0: - raise ValueError("n_speakers must be > 0 when using spk conditioned encoder to train multi-speaker model") - use_spk_conditioned_encoder = True - else: - print("Using normal encoder for VITS1") - use_spk_conditioned_encoder = False - - net_g = SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - mas_noise_scale_initial = mas_noise_scale_initial, - noise_scale_delta = noise_scale_delta, - **hps.model).cuda(rank) - - freeze_enc = getattr(hps.model, "freeze_enc", False) - if freeze_enc: - print("freeze encoder !!!") - for param in net_g.enc_p.parameters(): - param.requires_grad = False - - net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda(rank) - optim_g = torch.optim.AdamW( - filter(lambda p: p.requires_grad, net_g.parameters()), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - optim_d = torch.optim.AdamW( - net_d.parameters(), - hps.train.learning_rate, - 
betas=hps.train.betas, - eps=hps.train.eps) - if net_dur_disc is not None: - optim_dur_disc = torch.optim.AdamW( - net_dur_disc.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - else: - optim_dur_disc = None - net_g = DDP(net_g, device_ids=[rank], find_unused_parameters=True) - net_d = DDP(net_d, device_ids=[rank], find_unused_parameters=True) - if net_dur_disc is not None: - net_dur_disc = DDP(net_dur_disc, device_ids=[rank], find_unused_parameters=True) - - pretrain_dir = None - if pretrain_dir is None: - try: - if net_dur_disc is not None: - _, optim_dur_disc, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "DUR_*.pth"), net_dur_disc, optim_dur_disc, skip_optimizer=not hps.cont) - _, optim_g, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g, - optim_g, skip_optimizer=not hps.cont) - _, optim_d, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), net_d, - optim_d, skip_optimizer=not hps.cont) - - epoch_str = max(epoch_str, 1) - global_step = (epoch_str - 1) * len(train_loader) - except Exception as e: - print(e) - epoch_str = 1 - global_step = 0 - else: - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(pretrain_dir, "G_*.pth"), net_g, - optim_g, True) - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(pretrain_dir, "D_*.pth"), net_d, - optim_d, True) - - - - scheduler_g = torch.optim.lr_scheduler.ExponentialLR(optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2) - scheduler_d = torch.optim.lr_scheduler.ExponentialLR(optim_d, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2) - if net_dur_disc is not None: - scheduler_dur_disc = torch.optim.lr_scheduler.ExponentialLR(optim_dur_disc, gamma=hps.train.lr_decay, last_epoch=epoch_str-2) - else: - scheduler_dur_disc = None - scaler = GradScaler(enabled=hps.train.fp16_run) - - for epoch in range(epoch_str, hps.train.epochs + 1): - if rank == 0: - train_and_evaluate(rank, epoch, hps, [net_g, net_d, net_dur_disc], [optim_g, optim_d, optim_dur_disc], [scheduler_g, scheduler_d, scheduler_dur_disc], scaler, [train_loader, eval_loader], logger, [writer, writer_eval]) - else: - train_and_evaluate(rank, epoch, hps, [net_g, net_d, net_dur_disc], [optim_g, optim_d, optim_dur_disc], [scheduler_g, scheduler_d, scheduler_dur_disc], scaler, [train_loader, None], None, None) - scheduler_g.step() - scheduler_d.step() - if net_dur_disc is not None: - scheduler_dur_disc.step() - - -def train_and_evaluate(rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers): - net_g, net_d, net_dur_disc = nets - optim_g, optim_d, optim_dur_disc = optims - scheduler_g, scheduler_d, scheduler_dur_disc = schedulers - train_loader, eval_loader = loaders - if writers is not None: - writer, writer_eval = writers - - train_loader.batch_sampler.set_epoch(epoch) - global global_step - - net_g.train() - net_d.train() - if net_dur_disc is not None: - net_dur_disc.train() - for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers, tone, language, bert) in tqdm(enumerate(train_loader)): - if net_g.module.use_noise_scaled_mas: - current_mas_noise_scale = net_g.module.mas_noise_scale_initial - net_g.module.noise_scale_delta * global_step - net_g.module.current_mas_noise_scale = max(current_mas_noise_scale, 0.0) - x, x_lengths = x.cuda(rank, non_blocking=True), x_lengths.cuda(rank, non_blocking=True) - spec, spec_lengths = spec.cuda(rank, 
non_blocking=True), spec_lengths.cuda(rank, non_blocking=True) - y, y_lengths = y.cuda(rank, non_blocking=True), y_lengths.cuda(rank, non_blocking=True) - speakers = speakers.cuda(rank, non_blocking=True) - tone = tone.cuda(rank, non_blocking=True) - language = language.cuda(rank, non_blocking=True) - bert = bert.cuda(rank, non_blocking=True) - - with autocast(enabled=hps.train.fp16_run): - y_hat, l_length, attn, ids_slice, x_mask, z_mask, \ - (z, z_p, m_p, logs_p, m_q, logs_q), (hidden_x, logw, logw_) = net_g(x, x_lengths, spec, spec_lengths, speakers, tone, language, bert) - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - y_mel = commons.slice_segments(mel, ids_slice, hps.train.segment_size // hps.data.hop_length) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - - y = commons.slice_segments(y, ids_slice * hps.data.hop_length, hps.train.segment_size) # slice - - # Discriminator - y_d_hat_r, y_d_hat_g, _, _ = net_d(y, y_hat.detach()) - with autocast(enabled=False): - loss_disc, losses_disc_r, losses_disc_g = discriminator_loss(y_d_hat_r, y_d_hat_g) - loss_disc_all = loss_disc - if net_dur_disc is not None: - y_dur_hat_r, y_dur_hat_g = net_dur_disc(hidden_x.detach(), x_mask.detach(), logw.detach(), logw_.detach()) - with autocast(enabled=False): - # TODO: I think need to mean using the mask, but for now, just mean all - loss_dur_disc, losses_dur_disc_r, losses_dur_disc_g = discriminator_loss(y_dur_hat_r, y_dur_hat_g) - loss_dur_disc_all = loss_dur_disc - optim_dur_disc.zero_grad() - scaler.scale(loss_dur_disc_all).backward() - scaler.unscale_(optim_dur_disc) - grad_norm_dur_disc = commons.clip_grad_value_(net_dur_disc.parameters(), None) - scaler.step(optim_dur_disc) - - optim_d.zero_grad() - scaler.scale(loss_disc_all).backward() - scaler.unscale_(optim_d) - grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None) - scaler.step(optim_d) - - with autocast(enabled=hps.train.fp16_run): - # Generator - y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(y, y_hat) - if net_dur_disc is not None: - y_dur_hat_r, y_dur_hat_g = net_dur_disc(hidden_x, x_mask, logw, logw_) - with autocast(enabled=False): - loss_dur = torch.sum(l_length.float()) - loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel - loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl - - loss_fm = feature_loss(fmap_r, fmap_g) - loss_gen, losses_gen = generator_loss(y_d_hat_g) - loss_gen_all = loss_gen + loss_fm + loss_mel + loss_dur + loss_kl - if net_dur_disc is not None: - loss_dur_gen, losses_dur_gen = generator_loss(y_dur_hat_g) - loss_gen_all += loss_dur_gen - optim_g.zero_grad() - scaler.scale(loss_gen_all).backward() - scaler.unscale_(optim_g) - grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None) - scaler.step(optim_g) - scaler.update() - - if rank == 0: - if global_step % hps.train.log_interval == 0: - lr = optim_g.param_groups[0]['lr'] - losses = [loss_disc, loss_gen, loss_fm, loss_mel, loss_dur, loss_kl] - logger.info('Train Epoch: {} [{:.0f}%]'.format( - epoch, - 100. 
* batch_idx / len(train_loader))) - logger.info([x.item() for x in losses] + [global_step, lr]) - - scalar_dict = {"loss/g/total": loss_gen_all, "loss/d/total": loss_disc_all, "learning_rate": lr, - "grad_norm_d": grad_norm_d, "grad_norm_g": grad_norm_g} - scalar_dict.update( - {"loss/g/fm": loss_fm, "loss/g/mel": loss_mel, "loss/g/dur": loss_dur, "loss/g/kl": loss_kl}) - scalar_dict.update({"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)}) - scalar_dict.update({"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)}) - scalar_dict.update({"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)}) - - image_dict = { - "slice/mel_org": utils.plot_spectrogram_to_numpy(y_mel[0].data.cpu().numpy()), - "slice/mel_gen": utils.plot_spectrogram_to_numpy(y_hat_mel[0].data.cpu().numpy()), - "all/mel": utils.plot_spectrogram_to_numpy(mel[0].data.cpu().numpy()), - "all/attn": utils.plot_alignment_to_numpy(attn[0, 0].data.cpu().numpy()) - } - utils.summarize( - writer=writer, - global_step=global_step, - images=image_dict, - scalars=scalar_dict) - - if global_step % hps.train.eval_interval == 0: - evaluate(hps, net_g, eval_loader, writer_eval) - utils.save_checkpoint(net_g, optim_g, hps.train.learning_rate, epoch, - os.path.join(hps.model_dir, "G_{}.pth".format(global_step))) - utils.save_checkpoint(net_d, optim_d, hps.train.learning_rate, epoch, - os.path.join(hps.model_dir, "D_{}.pth".format(global_step))) - if net_dur_disc is not None: - utils.save_checkpoint(net_dur_disc, optim_dur_disc, hps.train.learning_rate, epoch, os.path.join(hps.model_dir, "DUR_{}.pth".format(global_step))) - keep_ckpts = getattr(hps.train, 'keep_ckpts', 5) - if keep_ckpts > 0: - utils.clean_checkpoints(path_to_models=hps.model_dir, n_ckpts_to_keep=keep_ckpts, sort_by_time=True) - - - global_step += 1 - - if rank == 0: - logger.info('====> Epoch: {}'.format(epoch)) - - - -def evaluate(hps, generator, eval_loader, writer_eval): - generator.eval() - image_dict = {} - audio_dict = {} - print("Evaluating ...") - with torch.no_grad(): - for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers, tone, language, bert) in enumerate(eval_loader): - x, x_lengths = x.cuda(), x_lengths.cuda() - spec, spec_lengths = spec.cuda(), spec_lengths.cuda() - y, y_lengths = y.cuda(), y_lengths.cuda() - speakers = speakers.cuda() - bert = bert.cuda() - tone = tone.cuda() - language = language.cuda() - for use_sdp in [True, False]: - y_hat, attn, mask, *_ = generator.module.infer(x, x_lengths, speakers, tone, language, bert, y=spec, max_len=1000, sdp_ratio=0.0 if not use_sdp else 1.0) - y_hat_lengths = mask.sum([1, 2]).long() * hps.data.hop_length - - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1).float(), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - image_dict.update({ - f"gen/mel_{batch_idx}": utils.plot_spectrogram_to_numpy(y_hat_mel[0].cpu().numpy()) - }) - audio_dict.update({ - f"gen/audio_{batch_idx}_{use_sdp}": y_hat[0, :, :y_hat_lengths[0]] - }) - image_dict.update({f"gt/mel_{batch_idx}": utils.plot_spectrogram_to_numpy(mel[0].cpu().numpy())}) - audio_dict.update({f"gt/audio_{batch_idx}": y[0, :, :y_lengths[0]]}) - - utils.summarize( - writer=writer_eval, - global_step=global_step, - images=image_dict, - 
audios=audio_dict, - audio_sampling_rate=hps.data.sampling_rate - ) - generator.train() - -if __name__ == "__main__": - main() diff --git a/spaces/YUANAI/DiffspeechResearch/modules/commons/normalizing_flow/utils.py b/spaces/YUANAI/DiffspeechResearch/modules/commons/normalizing_flow/utils.py deleted file mode 100644 index 7eb56ec514bff822ba1a19a6474207ed82492410..0000000000000000000000000000000000000000 --- a/spaces/YUANAI/DiffspeechResearch/modules/commons/normalizing_flow/utils.py +++ /dev/null @@ -1,29 +0,0 @@ -import torch - - -def squeeze(x, x_mask=None, n_sqz=2): - b, c, t = x.size() - - t = (t // n_sqz) * n_sqz - x = x[:, :, :t] - x_sqz = x.view(b, c, t // n_sqz, n_sqz) - x_sqz = x_sqz.permute(0, 3, 1, 2).contiguous().view(b, c * n_sqz, t // n_sqz) - - if x_mask is not None: - x_mask = x_mask[:, :, n_sqz - 1::n_sqz] - else: - x_mask = torch.ones(b, 1, t // n_sqz).to(device=x.device, dtype=x.dtype) - return x_sqz * x_mask, x_mask - - -def unsqueeze(x, x_mask=None, n_sqz=2): - b, c, t = x.size() - - x_unsqz = x.view(b, n_sqz, c // n_sqz, t) - x_unsqz = x_unsqz.permute(0, 2, 3, 1).contiguous().view(b, c // n_sqz, t * n_sqz) - - if x_mask is not None: - x_mask = x_mask.unsqueeze(-1).repeat(1, 1, 1, n_sqz).view(b, 1, t * n_sqz) - else: - x_mask = torch.ones(b, 1, t * n_sqz).to(device=x.device, dtype=x.dtype) - return x_unsqz * x_mask, x_mask diff --git a/spaces/YazawaSunrise/so-vits-svc-LoveLive/losses.py b/spaces/YazawaSunrise/so-vits-svc-LoveLive/losses.py deleted file mode 100644 index 41f9be6980713a46824ae9ec5eb8fd7c515d89c5..0000000000000000000000000000000000000000 --- a/spaces/YazawaSunrise/so-vits-svc-LoveLive/losses.py +++ /dev/null @@ -1,61 +0,0 @@ -import torch -from torch.nn import functional as F - -import commons - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - rl = rl.float().detach() - gl = gl.float() - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - dr = dr.float() - dg = dg.float() - r_loss = torch.mean((1-dr)**2) - g_loss = torch.mean(dg**2) - loss += (r_loss + g_loss) - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - dg = dg.float() - l = torch.mean((1-dg)**2) - gen_losses.append(l) - loss += l - - return loss, gen_losses - - -def kl_loss(z_p, logs_q, m_p, logs_p, z_mask): - """ - z_p, logs_q: [b, h, t_t] - m_p, logs_p: [b, h, t_t] - """ - z_p = z_p.float() - logs_q = logs_q.float() - m_p = m_p.float() - logs_p = logs_p.float() - z_mask = z_mask.float() - #print(logs_p) - kl = logs_p - logs_q - 0.5 - kl += 0.5 * ((z_p - m_p)**2) * torch.exp(-2. * logs_p) - kl = torch.sum(kl * z_mask) - l = kl / torch.sum(z_mask) - return l diff --git a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/schedulers/scheduling_dpmsolver_multistep.py b/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/schedulers/scheduling_dpmsolver_multistep.py deleted file mode 100644 index b7d0838026913b751128ad23925cef1e978fa906..0000000000000000000000000000000000000000 --- a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/schedulers/scheduling_dpmsolver_multistep.py +++ /dev/null @@ -1,533 +0,0 @@ -# Copyright 2022 TSAIL Team and The HuggingFace Team. 
All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# DISCLAIMER: This file is strongly influenced by https://github.com/LuChengTHU/dpm-solver - -import math -from typing import List, Optional, Tuple, Union - -import numpy as np -import torch - -from ..configuration_utils import ConfigMixin, register_to_config -from ..utils import _COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS, deprecate -from .scheduling_utils import SchedulerMixin, SchedulerOutput - - -def betas_for_alpha_bar(num_diffusion_timesteps, max_beta=0.999): - """ - Create a beta schedule that discretizes the given alpha_t_bar function, which defines the cumulative product of - (1-beta) over time from t = [0,1]. - - Contains a function alpha_bar that takes an argument t and transforms it to the cumulative product of (1-beta) up - to that part of the diffusion process. - - - Args: - num_diffusion_timesteps (`int`): the number of betas to produce. - max_beta (`float`): the maximum beta to use; use values lower than 1 to - prevent singularities. - - Returns: - betas (`np.ndarray`): the betas used by the scheduler to step the model outputs. - """ - - def alpha_bar(time_step): - return math.cos((time_step + 0.008) / 1.008 * math.pi / 2) ** 2 - - betas = [] - for i in range(num_diffusion_timesteps): - t1 = i / num_diffusion_timesteps - t2 = (i + 1) / num_diffusion_timesteps - betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta)) - return torch.tensor(betas, dtype=torch.float32) - - -class DPMSolverMultistepScheduler(SchedulerMixin, ConfigMixin): - """ - DPM-Solver (and the improved version DPM-Solver++) is a fast dedicated high-order solver for diffusion ODEs with - the convergence order guarantee. Empirically, sampling by DPM-Solver with only 20 steps can generate high-quality - samples, and it can generate quite good samples even in only 10 steps. - - For more details, see the original paper: https://arxiv.org/abs/2206.00927 and https://arxiv.org/abs/2211.01095 - - Currently, we support the multistep DPM-Solver for both noise prediction models and data prediction models. We - recommend `solver_order=2` for guided sampling and `solver_order=3` for unconditional sampling. - - We also support the "dynamic thresholding" method in Imagen (https://arxiv.org/abs/2205.11487). For pixel-space - diffusion models, you can set both `algorithm_type="dpmsolver++"` and `thresholding=True` to use the dynamic - thresholding. Note that the thresholding method is unsuitable for latent-space diffusion models (such as - stable-diffusion). - - [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__` - function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`. - [`~SchedulerMixin`] provides general loading and saving functionality via the [`~SchedulerMixin.save_pretrained`] and - [`~SchedulerMixin.from_pretrained`] functions. - - Args: - num_train_timesteps (`int`): number of diffusion steps used to train the model.
- beta_start (`float`): the starting `beta` value of inference. - beta_end (`float`): the final `beta` value. - beta_schedule (`str`): - the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from - `linear`, `scaled_linear`, or `squaredcos_cap_v2`. - trained_betas (`np.ndarray`, optional): - option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end`, etc. - solver_order (`int`, default `2`): - the order of DPM-Solver; can be `1`, `2`, or `3`. We recommend `solver_order=2` for guided - sampling and `solver_order=3` for unconditional sampling. - prediction_type (`str`, default `epsilon`, optional): - prediction type of the scheduler function, one of `epsilon` (predicting the noise of the diffusion - process), `sample` (directly predicting the noisy sample) or `v_prediction` (see section 2.4 - https://imagen.research.google/video/paper.pdf) - thresholding (`bool`, default `False`): - whether to use the "dynamic thresholding" method (introduced by Imagen, https://arxiv.org/abs/2205.11487). - For pixel-space diffusion models, you can set both `algorithm_type="dpmsolver++"` and `thresholding=True` to - use the dynamic thresholding. Note that the thresholding method is unsuitable for latent-space diffusion - models (such as stable-diffusion). - dynamic_thresholding_ratio (`float`, default `0.995`): - the ratio for the dynamic thresholding method. Default is `0.995`, the same as Imagen - (https://arxiv.org/abs/2205.11487). - sample_max_value (`float`, default `1.0`): - the threshold value for dynamic thresholding. Valid only when `thresholding=True` and - `algorithm_type="dpmsolver++"`. - algorithm_type (`str`, default `dpmsolver++`): - the algorithm type for the solver. Either `dpmsolver` or `dpmsolver++`. The `dpmsolver` type implements the - algorithms in https://arxiv.org/abs/2206.00927, and the `dpmsolver++` type implements the algorithms in - https://arxiv.org/abs/2211.01095. We recommend using `dpmsolver++` with `solver_order=2` for guided - sampling (e.g. stable-diffusion). - solver_type (`str`, default `midpoint`): - the solver type for the second-order solver. Either `midpoint` or `heun`. The solver type slightly affects - the sample quality, especially for a small number of steps. We empirically find that `midpoint` solvers are - slightly better, so we recommend the `midpoint` type. - lower_order_final (`bool`, default `True`): - whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. We empirically - find this trick can stabilize the sampling of DPM-Solver for steps < 15, especially for steps <= 10. - - """ - - _compatibles = _COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS.copy() - _deprecated_kwargs = ["predict_epsilon"] - order = 1 - - @register_to_config - def __init__( - self, - num_train_timesteps: int = 1000, - beta_start: float = 0.0001, - beta_end: float = 0.02, - beta_schedule: str = "linear", - trained_betas: Optional[Union[np.ndarray, List[float]]] = None, - solver_order: int = 2, - prediction_type: str = "epsilon", - thresholding: bool = False, - dynamic_thresholding_ratio: float = 0.995, - sample_max_value: float = 1.0, - algorithm_type: str = "dpmsolver++", - solver_type: str = "midpoint", - lower_order_final: bool = True, - **kwargs, - ): - message = ( - "Please make sure to instantiate your scheduler with `prediction_type` instead. E.g. `scheduler =" - " DPMSolverMultistepScheduler.from_pretrained(, prediction_type='epsilon')`."
- ) - predict_epsilon = deprecate("predict_epsilon", "0.11.0", message, take_from=kwargs) - if predict_epsilon is not None: - self.register_to_config(prediction_type="epsilon" if predict_epsilon else "sample") - - if trained_betas is not None: - self.betas = torch.tensor(trained_betas, dtype=torch.float32) - elif beta_schedule == "linear": - self.betas = torch.linspace(beta_start, beta_end, num_train_timesteps, dtype=torch.float32) - elif beta_schedule == "scaled_linear": - # this schedule is very specific to the latent diffusion model. - self.betas = ( - torch.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype=torch.float32) ** 2 - ) - elif beta_schedule == "squaredcos_cap_v2": - # Glide cosine schedule - self.betas = betas_for_alpha_bar(num_train_timesteps) - else: - raise NotImplementedError(f"{beta_schedule} is not implemented for {self.__class__}") - - self.alphas = 1.0 - self.betas - self.alphas_cumprod = torch.cumprod(self.alphas, dim=0) - # Currently we only support VP-type noise schedule - self.alpha_t = torch.sqrt(self.alphas_cumprod) - self.sigma_t = torch.sqrt(1 - self.alphas_cumprod) - self.lambda_t = torch.log(self.alpha_t) - torch.log(self.sigma_t) - - # standard deviation of the initial noise distribution - self.init_noise_sigma = 1.0 - - # settings for DPM-Solver - if algorithm_type not in ["dpmsolver", "dpmsolver++"]: - raise NotImplementedError(f"{algorithm_type} is not implemented for {self.__class__}") - if solver_type not in ["midpoint", "heun"]: - raise NotImplementedError(f"{solver_type} is not implemented for {self.__class__}") - - # settable values - self.num_inference_steps = None - timesteps = np.linspace(0, num_train_timesteps - 1, num_train_timesteps, dtype=np.float32)[::-1].copy() - self.timesteps = torch.from_numpy(timesteps) - self.model_outputs = [None] * solver_order - self.lower_order_nums = 0 - - def set_timesteps(self, num_inference_steps: int, device: Union[str, torch.device] = None): - """ - Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. - - Args: - num_inference_steps (`int`): - the number of diffusion steps used when generating samples with a pre-trained model. - device (`str` or `torch.device`, optional): - the device to which the timesteps should be moved. If `None`, the timesteps are not moved. - """ - self.num_inference_steps = num_inference_steps - timesteps = ( - np.linspace(0, self.num_train_timesteps - 1, num_inference_steps + 1) - .round()[::-1][:-1] - .copy() - .astype(np.int64) - ) - self.timesteps = torch.from_numpy(timesteps).to(device) - self.model_outputs = [ - None, - ] * self.config.solver_order - self.lower_order_nums = 0 - - def convert_model_output( - self, model_output: torch.FloatTensor, timestep: int, sample: torch.FloatTensor - ) -> torch.FloatTensor: - """ - Convert the model output to the corresponding type that the algorithm (DPM-Solver / DPM-Solver++) needs. - - DPM-Solver is designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to - discretize an integral of the data prediction model. So we need to first convert the model output to the - corresponding type to match the algorithm. - - Note that the algorithm type and the model type are decoupled. That is to say, we can use either DPM-Solver or - DPM-Solver++ for both noise prediction and data prediction models. - - Args: - model_output (`torch.FloatTensor`): direct output from learned diffusion model.
- timestep (`int`): current discrete timestep in the diffusion chain. - sample (`torch.FloatTensor`): - current instance of sample being created by diffusion process. - - Returns: - `torch.FloatTensor`: the converted model output. - """ - # DPM-Solver++ needs to solve an integral of the data prediction model. - if self.config.algorithm_type == "dpmsolver++": - if self.config.prediction_type == "epsilon": - alpha_t, sigma_t = self.alpha_t[timestep], self.sigma_t[timestep] - x0_pred = (sample - sigma_t * model_output) / alpha_t - elif self.config.prediction_type == "sample": - x0_pred = model_output - elif self.config.prediction_type == "v_prediction": - alpha_t, sigma_t = self.alpha_t[timestep], self.sigma_t[timestep] - x0_pred = alpha_t * sample - sigma_t * model_output - else: - raise ValueError( - f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, `sample`, or" - " `v_prediction` for the DPMSolverMultistepScheduler." - ) - - if self.config.thresholding: - # Dynamic thresholding in https://arxiv.org/abs/2205.11487 - orig_dtype = x0_pred.dtype - if orig_dtype not in [torch.float, torch.double]: - x0_pred = x0_pred.float() - dynamic_max_val = torch.quantile( - torch.abs(x0_pred).reshape((x0_pred.shape[0], -1)), self.config.dynamic_thresholding_ratio, dim=1 - ) - dynamic_max_val = torch.maximum( - dynamic_max_val, - self.config.sample_max_value * torch.ones_like(dynamic_max_val).to(dynamic_max_val.device), - )[(...,) + (None,) * (x0_pred.ndim - 1)] - x0_pred = torch.clamp(x0_pred, -dynamic_max_val, dynamic_max_val) / dynamic_max_val - x0_pred = x0_pred.type(orig_dtype) - return x0_pred - # DPM-Solver needs to solve an integral of the noise prediction model. - elif self.config.algorithm_type == "dpmsolver": - if self.config.prediction_type == "epsilon": - return model_output - elif self.config.prediction_type == "sample": - alpha_t, sigma_t = self.alpha_t[timestep], self.sigma_t[timestep] - epsilon = (sample - alpha_t * model_output) / sigma_t - return epsilon - elif self.config.prediction_type == "v_prediction": - alpha_t, sigma_t = self.alpha_t[timestep], self.sigma_t[timestep] - epsilon = alpha_t * model_output + sigma_t * sample - return epsilon - else: - raise ValueError( - f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, `sample`, or" - " `v_prediction` for the DPMSolverMultistepScheduler." - ) - - def dpm_solver_first_order_update( - self, - model_output: torch.FloatTensor, - timestep: int, - prev_timestep: int, - sample: torch.FloatTensor, - ) -> torch.FloatTensor: - """ - One step for the first-order DPM-Solver (equivalent to DDIM). - - See https://arxiv.org/abs/2206.00927 for the detailed derivation. - - Args: - model_output (`torch.FloatTensor`): direct output from learned diffusion model. - timestep (`int`): current discrete timestep in the diffusion chain. - prev_timestep (`int`): previous discrete timestep in the diffusion chain. - sample (`torch.FloatTensor`): - current instance of sample being created by diffusion process. - - Returns: - `torch.FloatTensor`: the sample tensor at the previous timestep. 
- """ - lambda_t, lambda_s = self.lambda_t[prev_timestep], self.lambda_t[timestep] - alpha_t, alpha_s = self.alpha_t[prev_timestep], self.alpha_t[timestep] - sigma_t, sigma_s = self.sigma_t[prev_timestep], self.sigma_t[timestep] - h = lambda_t - lambda_s - if self.config.algorithm_type == "dpmsolver++": - x_t = (sigma_t / sigma_s) * sample - (alpha_t * (torch.exp(-h) - 1.0)) * model_output - elif self.config.algorithm_type == "dpmsolver": - x_t = (alpha_t / alpha_s) * sample - (sigma_t * (torch.exp(h) - 1.0)) * model_output - return x_t - - def multistep_dpm_solver_second_order_update( - self, - model_output_list: List[torch.FloatTensor], - timestep_list: List[int], - prev_timestep: int, - sample: torch.FloatTensor, - ) -> torch.FloatTensor: - """ - One step for the second-order multistep DPM-Solver. - - Args: - model_output_list (`List[torch.FloatTensor]`): - direct outputs from learned diffusion model at current and latter timesteps. - timestep (`int`): current and latter discrete timestep in the diffusion chain. - prev_timestep (`int`): previous discrete timestep in the diffusion chain. - sample (`torch.FloatTensor`): - current instance of sample being created by diffusion process. - - Returns: - `torch.FloatTensor`: the sample tensor at the previous timestep. - """ - t, s0, s1 = prev_timestep, timestep_list[-1], timestep_list[-2] - m0, m1 = model_output_list[-1], model_output_list[-2] - lambda_t, lambda_s0, lambda_s1 = self.lambda_t[t], self.lambda_t[s0], self.lambda_t[s1] - alpha_t, alpha_s0 = self.alpha_t[t], self.alpha_t[s0] - sigma_t, sigma_s0 = self.sigma_t[t], self.sigma_t[s0] - h, h_0 = lambda_t - lambda_s0, lambda_s0 - lambda_s1 - r0 = h_0 / h - D0, D1 = m0, (1.0 / r0) * (m0 - m1) - if self.config.algorithm_type == "dpmsolver++": - # See https://arxiv.org/abs/2211.01095 for detailed derivations - if self.config.solver_type == "midpoint": - x_t = ( - (sigma_t / sigma_s0) * sample - - (alpha_t * (torch.exp(-h) - 1.0)) * D0 - - 0.5 * (alpha_t * (torch.exp(-h) - 1.0)) * D1 - ) - elif self.config.solver_type == "heun": - x_t = ( - (sigma_t / sigma_s0) * sample - - (alpha_t * (torch.exp(-h) - 1.0)) * D0 - + (alpha_t * ((torch.exp(-h) - 1.0) / h + 1.0)) * D1 - ) - elif self.config.algorithm_type == "dpmsolver": - # See https://arxiv.org/abs/2206.00927 for detailed derivations - if self.config.solver_type == "midpoint": - x_t = ( - (alpha_t / alpha_s0) * sample - - (sigma_t * (torch.exp(h) - 1.0)) * D0 - - 0.5 * (sigma_t * (torch.exp(h) - 1.0)) * D1 - ) - elif self.config.solver_type == "heun": - x_t = ( - (alpha_t / alpha_s0) * sample - - (sigma_t * (torch.exp(h) - 1.0)) * D0 - - (sigma_t * ((torch.exp(h) - 1.0) / h - 1.0)) * D1 - ) - return x_t - - def multistep_dpm_solver_third_order_update( - self, - model_output_list: List[torch.FloatTensor], - timestep_list: List[int], - prev_timestep: int, - sample: torch.FloatTensor, - ) -> torch.FloatTensor: - """ - One step for the third-order multistep DPM-Solver. - - Args: - model_output_list (`List[torch.FloatTensor]`): - direct outputs from learned diffusion model at current and latter timesteps. - timestep (`int`): current and latter discrete timestep in the diffusion chain. - prev_timestep (`int`): previous discrete timestep in the diffusion chain. - sample (`torch.FloatTensor`): - current instance of sample being created by diffusion process. - - Returns: - `torch.FloatTensor`: the sample tensor at the previous timestep. 
- """ - t, s0, s1, s2 = prev_timestep, timestep_list[-1], timestep_list[-2], timestep_list[-3] - m0, m1, m2 = model_output_list[-1], model_output_list[-2], model_output_list[-3] - lambda_t, lambda_s0, lambda_s1, lambda_s2 = ( - self.lambda_t[t], - self.lambda_t[s0], - self.lambda_t[s1], - self.lambda_t[s2], - ) - alpha_t, alpha_s0 = self.alpha_t[t], self.alpha_t[s0] - sigma_t, sigma_s0 = self.sigma_t[t], self.sigma_t[s0] - h, h_0, h_1 = lambda_t - lambda_s0, lambda_s0 - lambda_s1, lambda_s1 - lambda_s2 - r0, r1 = h_0 / h, h_1 / h - D0 = m0 - D1_0, D1_1 = (1.0 / r0) * (m0 - m1), (1.0 / r1) * (m1 - m2) - D1 = D1_0 + (r0 / (r0 + r1)) * (D1_0 - D1_1) - D2 = (1.0 / (r0 + r1)) * (D1_0 - D1_1) - if self.config.algorithm_type == "dpmsolver++": - # See https://arxiv.org/abs/2206.00927 for detailed derivations - x_t = ( - (sigma_t / sigma_s0) * sample - - (alpha_t * (torch.exp(-h) - 1.0)) * D0 - + (alpha_t * ((torch.exp(-h) - 1.0) / h + 1.0)) * D1 - - (alpha_t * ((torch.exp(-h) - 1.0 + h) / h**2 - 0.5)) * D2 - ) - elif self.config.algorithm_type == "dpmsolver": - # See https://arxiv.org/abs/2206.00927 for detailed derivations - x_t = ( - (alpha_t / alpha_s0) * sample - - (sigma_t * (torch.exp(h) - 1.0)) * D0 - - (sigma_t * ((torch.exp(h) - 1.0) / h - 1.0)) * D1 - - (sigma_t * ((torch.exp(h) - 1.0 - h) / h**2 - 0.5)) * D2 - ) - return x_t - - def step( - self, - model_output: torch.FloatTensor, - timestep: int, - sample: torch.FloatTensor, - return_dict: bool = True, - ) -> Union[SchedulerOutput, Tuple]: - """ - Step function propagating the sample with the multistep DPM-Solver. - - Args: - model_output (`torch.FloatTensor`): direct output from learned diffusion model. - timestep (`int`): current discrete timestep in the diffusion chain. - sample (`torch.FloatTensor`): - current instance of sample being created by diffusion process. - return_dict (`bool`): option for returning tuple rather than SchedulerOutput class - - Returns: - [`~scheduling_utils.SchedulerOutput`] or `tuple`: [`~scheduling_utils.SchedulerOutput`] if `return_dict` is - True, otherwise a `tuple`. When returning a tuple, the first element is the sample tensor. 
- - """ - if self.num_inference_steps is None: - raise ValueError( - "Number of inference steps is 'None', you need to run 'set_timesteps' after creating the scheduler" - ) - - if isinstance(timestep, torch.Tensor): - timestep = timestep.to(self.timesteps.device) - step_index = (self.timesteps == timestep).nonzero() - if len(step_index) == 0: - step_index = len(self.timesteps) - 1 - else: - step_index = step_index.item() - prev_timestep = 0 if step_index == len(self.timesteps) - 1 else self.timesteps[step_index + 1] - lower_order_final = ( - (step_index == len(self.timesteps) - 1) and self.config.lower_order_final and len(self.timesteps) < 15 - ) - lower_order_second = ( - (step_index == len(self.timesteps) - 2) and self.config.lower_order_final and len(self.timesteps) < 15 - ) - - model_output = self.convert_model_output(model_output, timestep, sample) - for i in range(self.config.solver_order - 1): - self.model_outputs[i] = self.model_outputs[i + 1] - self.model_outputs[-1] = model_output - - if self.config.solver_order == 1 or self.lower_order_nums < 1 or lower_order_final: - prev_sample = self.dpm_solver_first_order_update(model_output, timestep, prev_timestep, sample) - elif self.config.solver_order == 2 or self.lower_order_nums < 2 or lower_order_second: - timestep_list = [self.timesteps[step_index - 1], timestep] - prev_sample = self.multistep_dpm_solver_second_order_update( - self.model_outputs, timestep_list, prev_timestep, sample - ) - else: - timestep_list = [self.timesteps[step_index - 2], self.timesteps[step_index - 1], timestep] - prev_sample = self.multistep_dpm_solver_third_order_update( - self.model_outputs, timestep_list, prev_timestep, sample - ) - - if self.lower_order_nums < self.config.solver_order: - self.lower_order_nums += 1 - - if not return_dict: - return (prev_sample,) - - return SchedulerOutput(prev_sample=prev_sample) - - def scale_model_input(self, sample: torch.FloatTensor, *args, **kwargs) -> torch.FloatTensor: - """ - Ensures interchangeability with schedulers that need to scale the denoising model input depending on the - current timestep. 
- - Args: - sample (`torch.FloatTensor`): input sample - - Returns: - `torch.FloatTensor`: scaled input sample - """ - return sample - - def add_noise( - self, - original_samples: torch.FloatTensor, - noise: torch.FloatTensor, - timesteps: torch.IntTensor, - ) -> torch.FloatTensor: - # Make sure alphas_cumprod and timestep have same device and dtype as original_samples - self.alphas_cumprod = self.alphas_cumprod.to(device=original_samples.device, dtype=original_samples.dtype) - timesteps = timesteps.to(original_samples.device) - - sqrt_alpha_prod = self.alphas_cumprod[timesteps] ** 0.5 - sqrt_alpha_prod = sqrt_alpha_prod.flatten() - while len(sqrt_alpha_prod.shape) < len(original_samples.shape): - sqrt_alpha_prod = sqrt_alpha_prod.unsqueeze(-1) - - sqrt_one_minus_alpha_prod = (1 - self.alphas_cumprod[timesteps]) ** 0.5 - sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.flatten() - while len(sqrt_one_minus_alpha_prod.shape) < len(original_samples.shape): - sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.unsqueeze(-1) - - noisy_samples = sqrt_alpha_prod * original_samples + sqrt_one_minus_alpha_prod * noise - return noisy_samples - - def __len__(self): - return self.config.num_train_timesteps diff --git a/spaces/Yudha515/Rvc-Models/tests/common_utils/temp_utils.py b/spaces/Yudha515/Rvc-Models/tests/common_utils/temp_utils.py deleted file mode 100644 index d1e0367e979c8b9fea65472c373916d956ad5aaa..0000000000000000000000000000000000000000 --- a/spaces/Yudha515/Rvc-Models/tests/common_utils/temp_utils.py +++ /dev/null @@ -1,56 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import os -import tempfile - - -class TempDirMixin: - """Mixin to provide easy access to temp dir. - """ - - temp_dir_ = None - - @classmethod - def get_base_temp_dir(cls): - # If AUDIOCRAFT_TEST_DIR is set, use it instead of temporary directory. - # This is handy for debugging. - key = "AUDIOCRAFT_TEST_DIR" - if key in os.environ: - return os.environ[key] - if cls.temp_dir_ is None: - cls.temp_dir_ = tempfile.TemporaryDirectory() - return cls.temp_dir_.name - - @classmethod - def tearDownClass(cls): - if cls.temp_dir_ is not None: - try: - cls.temp_dir_.cleanup() - cls.temp_dir_ = None - except PermissionError: - # On Windows there is a known issue with `shutil.rmtree`, - # which fails intermittently. - # https://github.com/python/cpython/issues/74168 - # Following the above thread, we ignore it. 
- pass - super().tearDownClass() - - @property - def id(self): - return self.__class__.__name__ - - def get_temp_path(self, *paths): - temp_dir = os.path.join(self.get_base_temp_dir(), self.id) - path = os.path.join(temp_dir, *paths) - os.makedirs(os.path.dirname(path), exist_ok=True) - return path - - def get_temp_dir(self, *paths): - temp_dir = os.path.join(self.get_base_temp_dir(), self.id) - path = os.path.join(temp_dir, *paths) - os.makedirs(path, exist_ok=True) - return path diff --git a/spaces/Zaxxced/rvc-random-v2/lib/infer_pack/commons.py b/spaces/Zaxxced/rvc-random-v2/lib/infer_pack/commons.py deleted file mode 100644 index 54470986f37825b35d90d7efa7437d1c26b87215..0000000000000000000000000000000000000000 --- a/spaces/Zaxxced/rvc-random-v2/lib/infer_pack/commons.py +++ /dev/null @@ -1,166 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += ( - 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q) - ) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def slice_segments2(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / ( - num_timescales - 1 - ) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment - ) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def 
cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2, 3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1.0 / norm_type) - return total_norm diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/models/decode_heads/apc_head.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/models/decode_heads/apc_head.py deleted file mode 100644 index c5aa9368bd5a5a7f1abf8a490a97eb021ab09795..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/models/decode_heads/apc_head.py +++ /dev/null @@ -1,170 +0,0 @@ -''' - * Copyright (c) 2023 Salesforce, Inc. - * All rights reserved. - * SPDX-License-Identifier: Apache License 2.0 - * For full license text, see LICENSE.txt file in the repo root or http://www.apache.org/licenses/ - * By Can Qin - * Modified from ControlNet repo: https://github.com/lllyasviel/ControlNet - * Copyright (c) 2023 Lvmin Zhang and Maneesh Agrawala - * Modified from MMCV repo: From https://github.com/open-mmlab/mmcv - * Copyright (c) OpenMMLab. All rights reserved. -''' - -import torch -import torch.nn as nn -import torch.nn.functional as F -from annotator.uniformer.mmcv.cnn import ConvModule - -from annotator.uniformer.mmseg.ops import resize -from ..builder import HEADS -from .decode_head import BaseDecodeHead - - -class ACM(nn.Module): - """Adaptive Context Module used in APCNet. 
- - Args: - pool_scale (int): Pooling scale used in Adaptive Context - Module to extract region features. - fusion (bool): Add one conv to fuse residual feature. - in_channels (int): Input channels. - channels (int): Channels after modules, before conv_seg. - conv_cfg (dict | None): Config of conv layers. - norm_cfg (dict | None): Config of norm layers. - act_cfg (dict): Config of activation layers. - """ - - def __init__(self, pool_scale, fusion, in_channels, channels, conv_cfg, - norm_cfg, act_cfg): - super(ACM, self).__init__() - self.pool_scale = pool_scale - self.fusion = fusion - self.in_channels = in_channels - self.channels = channels - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.pooled_redu_conv = ConvModule( - self.in_channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - self.input_redu_conv = ConvModule( - self.in_channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - self.global_info = ConvModule( - self.channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - self.gla = nn.Conv2d(self.channels, self.pool_scale**2, 1, 1, 0) - - self.residual_conv = ConvModule( - self.channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - if self.fusion: - self.fusion_conv = ConvModule( - self.channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, x): - """Forward function.""" - pooled_x = F.adaptive_avg_pool2d(x, self.pool_scale) - # [batch_size, channels, h, w] - x = self.input_redu_conv(x) - # [batch_size, channels, pool_scale, pool_scale] - pooled_x = self.pooled_redu_conv(pooled_x) - batch_size = x.size(0) - # [batch_size, pool_scale * pool_scale, channels] - pooled_x = pooled_x.view(batch_size, self.channels, - -1).permute(0, 2, 1).contiguous() - # [batch_size, h * w, pool_scale * pool_scale] - affinity_matrix = self.gla(x + resize( - self.global_info(F.adaptive_avg_pool2d(x, 1)), size=x.shape[2:]) - ).permute(0, 2, 3, 1).reshape( - batch_size, -1, self.pool_scale**2) - affinity_matrix = F.sigmoid(affinity_matrix) - # [batch_size, h * w, channels] - z_out = torch.matmul(affinity_matrix, pooled_x) - # [batch_size, channels, h * w] - z_out = z_out.permute(0, 2, 1).contiguous() - # [batch_size, channels, h, w] - z_out = z_out.view(batch_size, self.channels, x.size(2), x.size(3)) - z_out = self.residual_conv(z_out) - z_out = F.relu(z_out + x) - if self.fusion: - z_out = self.fusion_conv(z_out) - - return z_out - - -@HEADS.register_module() -class APCHead(BaseDecodeHead): - """Adaptive Pyramid Context Network for Semantic Segmentation. - - This head is the implementation of - `APCNet `_. - - Args: - pool_scales (tuple[int]): Pooling scales used in Adaptive Context - Module. Default: (1, 2, 3, 6). - fusion (bool): Add one conv to fuse residual feature. 
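As an illustrative shape check for the ACM module above (all configuration values are assumptions for the example; `ConvModule` comes from mmcv):

```python
import torch

# Hypothetical instantiation; the sizes here are made up for the example.
acm = ACM(pool_scale=2, fusion=True, in_channels=512, channels=128,
          conv_cfg=None, norm_cfg=None, act_cfg=dict(type='ReLU'))
x = torch.randn(1, 512, 32, 32)
out = acm(x)
# gla yields a [1, 32*32, 2*2] affinity matrix; matmul with the
# [1, 2*2, 128] pooled features gives [1, 32*32, 128] -> [1, 128, 32, 32].
assert out.shape == (1, 128, 32, 32)
```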
- """ - - def __init__(self, pool_scales=(1, 2, 3, 6), fusion=True, **kwargs): - super(APCHead, self).__init__(**kwargs) - assert isinstance(pool_scales, (list, tuple)) - self.pool_scales = pool_scales - self.fusion = fusion - acm_modules = [] - for pool_scale in self.pool_scales: - acm_modules.append( - ACM(pool_scale, - self.fusion, - self.in_channels, - self.channels, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - self.acm_modules = nn.ModuleList(acm_modules) - self.bottleneck = ConvModule( - self.in_channels + len(pool_scales) * self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - acm_outs = [x] - for acm_module in self.acm_modules: - acm_outs.append(acm_module(x)) - acm_outs = torch.cat(acm_outs, dim=1) - output = self.bottleneck(acm_outs) - output = self.cls_seg(output) - return output diff --git a/spaces/ahmedghani/svoice_demo/svoice/evaluate_auto_select.py b/spaces/ahmedghani/svoice_demo/svoice/evaluate_auto_select.py deleted file mode 100644 index da4f98f2c9ca05b976ad35d1c1fb4fad08614e91..0000000000000000000000000000000000000000 --- a/spaces/ahmedghani/svoice_demo/svoice/evaluate_auto_select.py +++ /dev/null @@ -1,184 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -# Authors: Yossi Adi (adiyoss) - -import argparse -from concurrent.futures import ProcessPoolExecutor -import json -import logging -import sys - -import numpy as np -from pesq import pesq -from pystoi import stoi -import torch - -from .models.sisnr_loss import cal_loss -from .data.data import Validset -from . 
import distrib - from .utils import bold, deserialize_model, LogProgress - from .evaluate import _run_metrics - - - logger = logging.getLogger(__name__) - - parser = argparse.ArgumentParser( - 'Evaluate model automatic selection performance') - parser.add_argument('model_path_2spk', - help='Path to 2spk model file created by training') - parser.add_argument('model_path_3spk', - help='Path to 3spk model file created by training') - parser.add_argument('model_path_4spk', - help='Path to 4spk model file created by training') - parser.add_argument('model_path_5spk', - help='Path to 5spk model file created by training') - parser.add_argument( - 'data_dir', help='directory including mix.json, s1.json and s2.json files') - parser.add_argument('--device', default="cuda") - parser.add_argument('--sample_rate', default=8000, - type=int, help='Sample rate') - parser.add_argument('--thresh', default=0.001, - type=float, help='Threshold for model auto selection') - parser.add_argument('--num_workers', type=int, default=5) - parser.add_argument('-v', '--verbose', action='store_const', const=logging.DEBUG, - default=logging.INFO, help="More logging") - - - -# pairwise matching of estimated to reference sources -def pair_wise(padded_source, estimate_source): - pair_wise = torch.sum(padded_source.unsqueeze( - 1)*estimate_source.unsqueeze(2), dim=3) - if estimate_source.shape[1] != padded_source.shape[1]: - idxs = pair_wise.argmax(dim=1) - new_src = torch.FloatTensor(padded_source.shape) - for b, idx in enumerate(idxs): - new_src[b:, :, ] = estimate_source[b][idx] - padded_source_pad = padded_source - estimate_source_pad = new_src.cuda() - else: - padded_source_pad = padded_source - estimate_source_pad = estimate_source - return estimate_source_pad - - -def evaluate_auto_select(args): - total_sisnr = 0 - total_pesq = 0 - total_stoi = 0 - total_cnt = 0 - updates = 5 - - models = list() - paths = [args.model_path_2spk, args.model_path_3spk, - args.model_path_4spk, args.model_path_5spk] - - for path in paths: - # Load model - pkg = torch.load(path) - if 'model' in pkg: - model = pkg['model'] - else: - model = pkg - model = deserialize_model(model) - if 'best_state' in pkg: - model.load_state_dict(pkg['best_state']) - logger.debug(model) - - model.eval() - model.to(args.device) - models.append(model) - - # Load data - dataset = Validset(args.data_dir) - data_loader = distrib.loader( - dataset, batch_size=1, num_workers=args.num_workers) - sr = args.sample_rate - y_hat = torch.zeros((4)) - - pendings = [] - with ProcessPoolExecutor(args.num_workers) as pool: - with torch.no_grad(): - iterator = LogProgress(logger, data_loader, name="Eval estimates") - for i, data in enumerate(iterator): - # Get batch data - mixture, lengths, sources = [x.to(args.device) for x in data] - estimated_sources = list() - reorder_estimated_sources = list() - - for model in models: - # Forward - with torch.no_grad(): - raw_estimate = model(mixture)[-1] - - estimate = pair_wise(sources, raw_estimate) - sisnr_loss, snr, estimate, reorder_estimate = cal_loss( - sources, estimate, lengths) - estimated_sources.insert(0, raw_estimate) - reorder_estimated_sources.insert(0, reorder_estimate) - - # =================== DETECT NUM. 
NON-ACTIVE CHANNELS ============== # - selected_idx = 0 - thresh = args.thresh - max_spk = 5 - mix_spk = 2 - ground = (max_spk - mix_spk) - while (selected_idx <= ground): - no_sils = 0 - vals = torch.mean( - (estimated_sources[selected_idx]/torch.abs(estimated_sources[selected_idx]).max())**2, axis=2) - new_selected_idx = max_spk - len(vals[vals > thresh]) - if new_selected_idx == selected_idx: - break - else: - selected_idx = new_selected_idx - if selected_idx < 0: - selected_idx = 0 - elif selected_idx > ground: - selected_idx = ground - - y_hat[ground - selected_idx] += 1 - reorder_estimate = reorder_estimated_sources[selected_idx].cpu( - ) - sources = sources.cpu() - mixture = mixture.cpu() - - pendings.append( - pool.submit(_run_metrics, sources, reorder_estimate, mixture, None, - sr=sr)) - total_cnt += sources.shape[0] - - for pending in LogProgress(logger, pendings, updates, name="Eval metrics"): - sisnr_i, pesq_i, stoi_i = pending.result() - total_sisnr += sisnr_i - total_pesq += pesq_i - total_stoi += stoi_i - - metrics = [total_sisnr, total_pesq, total_stoi] - sisnr, pesq, stoi = distrib.average( - [m/total_cnt for m in metrics], total_cnt) - logger.info(bold(f'Test set performance: SISNRi={sisnr:.2f} ' - f'PESQ={pesq}, STOI={stoi}.')) - logger.info(f'Two spks prob: {y_hat[0]/(total_cnt)}') - logger.info(f'Three spks prob: {y_hat[1]/(total_cnt)}') - logger.info(f'Four spks prob: {y_hat[2]/(total_cnt)}') - logger.info(f'Five spks prob: {y_hat[3]/(total_cnt)}') - return sisnr, pesq, stoi - - -def main(): - args = parser.parse_args() - logging.basicConfig(stream=sys.stderr, level=args.verbose) - logger.debug(args) - sisnr, pesq, stoi = evaluate_auto_select(args) - json.dump({'sisnr': sisnr, - 'pesq': pesq, 'stoi': stoi}, sys.stdout) - sys.stdout.write('\n') - - -if __name__ == '__main__': - main() diff --git a/spaces/ahuang11/mapnstreets/README.md b/spaces/ahuang11/mapnstreets/README.md deleted file mode 100644 index 68a7ea8550d2db0221afdbe9df29ce8104df15d5..0000000000000000000000000000000000000000 --- a/spaces/ahuang11/mapnstreets/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: MapnStreets -emoji: 🛣️ -colorFrom: gray -colorTo: green -sdk: docker -pinned: false -duplicated_from: ahuang11/name-chronicles -license: bsd-3-clause ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/aipicasso/cool-japan-diffusion-latest-demo/README.md b/spaces/aipicasso/cool-japan-diffusion-latest-demo/README.md deleted file mode 100644 index 1f0182cd8c9f4f9de00240caa6b77bcf9bcda68e..0000000000000000000000000000000000000000 --- a/spaces/aipicasso/cool-japan-diffusion-latest-demo/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Cool Japan Diffusion -emoji: 🇯🇵 -colorFrom: pink -colorTo: purple -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/akhaliq/Mask2Former/mask2former/modeling/pixel_decoder/ops/setup.py b/spaces/akhaliq/Mask2Former/mask2former/modeling/pixel_decoder/ops/setup.py deleted file mode 100644 index 244fdec83bee181e187d88800300395f449b0fbc..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Mask2Former/mask2former/modeling/pixel_decoder/ops/setup.py +++ /dev/null @@ -1,78 +0,0 @@ -# ------------------------------------------------------------------------------------------------ -# Deformable DETR -# Copyright (c) 2020 SenseTime. All Rights Reserved. 
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------------------------------ -# Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0 -# ------------------------------------------------------------------------------------------------ - -# Copyright (c) Facebook, Inc. and its affiliates. -# Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR - -import os -import glob - -import torch - -from torch.utils.cpp_extension import CUDA_HOME -from torch.utils.cpp_extension import CppExtension -from torch.utils.cpp_extension import CUDAExtension - -from setuptools import find_packages -from setuptools import setup - -requirements = ["torch", "torchvision"] - -def get_extensions(): - this_dir = os.path.dirname(os.path.abspath(__file__)) - extensions_dir = os.path.join(this_dir, "src") - - main_file = glob.glob(os.path.join(extensions_dir, "*.cpp")) - source_cpu = glob.glob(os.path.join(extensions_dir, "cpu", "*.cpp")) - source_cuda = glob.glob(os.path.join(extensions_dir, "cuda", "*.cu")) - - sources = main_file + source_cpu - extension = CppExtension - extra_compile_args = {"cxx": []} - define_macros = [] - - # Build the CUDA extension when FORCE_CUDA is set or CUDA is available; - # FORCE_CUDA covers build machines where no GPU is visible at build time. - if (os.environ.get('FORCE_CUDA') or torch.cuda.is_available()) and CUDA_HOME is not None: - extension = CUDAExtension - sources += source_cuda - define_macros += [("WITH_CUDA", None)] - extra_compile_args["nvcc"] = [ - "-DCUDA_HAS_FP16=1", - "-D__CUDA_NO_HALF_OPERATORS__", - "-D__CUDA_NO_HALF_CONVERSIONS__", - "-D__CUDA_NO_HALF2_OPERATORS__", - ] -# else: -# if CUDA_HOME is None: -# raise NotImplementedError('CUDA_HOME is None. Please set environment variable CUDA_HOME.') -# else: -# raise NotImplementedError('No CUDA runtime is found. Please set FORCE_CUDA=1 or test it by running torch.cuda.is_available().') - - sources = [os.path.join(extensions_dir, s) for s in sources] - include_dirs = [extensions_dir] - ext_modules = [ - extension( - "MultiScaleDeformableAttention", - sources, - include_dirs=include_dirs, - define_macros=define_macros, - extra_compile_args=extra_compile_args, - ) - ] - return ext_modules - -setup( - name="MultiScaleDeformableAttention", - version="1.0", - author="Weijie Su", - url="https://github.com/fundamentalvision/Deformable-DETR", - description="PyTorch Wrapper for CUDA Functions of Multi-Scale Deformable Attention", - packages=find_packages(exclude=("configs", "tests",)), - ext_modules=get_extensions(), - cmdclass={"build_ext": torch.utils.cpp_extension.BuildExtension}, -) diff --git a/spaces/akhaliq/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/ROUGE-1.5.5/XML/DOM/Comment.pod b/spaces/akhaliq/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/ROUGE-1.5.5/XML/DOM/Comment.pod deleted file mode 100644 index f8e2cb290e0e2baff9c371718eca6495265d3f92..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/ROUGE-1.5.5/XML/DOM/Comment.pod +++ /dev/null @@ -1,14 +0,0 @@ -=head1 NAME - -XML::DOM::Comment - An XML comment in XML::DOM - -=head1 DESCRIPTION - -XML::DOM::Comment extends L<XML::DOM::CharacterData> which extends -L<XML::DOM::Node>. - -This node represents the content of a comment, i.e., all the characters -between the starting '<!--' and ending '-->'. Note that this is the -definition of a comment in XML, and, in practice, HTML, although some -HTML tools may implement the full SGML comment structure. 
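For comparison, the same comment-node concept in Python's standard-library DOM (a rough analogue of the Perl API documented above, not part of it):

```python
from xml.dom.minidom import Document

doc = Document()
comment = doc.createComment(" generated file; do not edit ")
doc.appendChild(comment)
print(doc.toxml())  # <?xml version="1.0" ?><!-- generated file; do not edit -->
```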
- diff --git a/spaces/akiyamasho/AnimeBackgroundGAN/network/Transformer.py b/spaces/akiyamasho/AnimeBackgroundGAN/network/Transformer.py deleted file mode 100644 index 966c1c3aa654fbeb4650d361b4fc803695de5369..0000000000000000000000000000000000000000 --- a/spaces/akiyamasho/AnimeBackgroundGAN/network/Transformer.py +++ /dev/null @@ -1,180 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - - -class Transformer(nn.Module): - def __init__(self): - super(Transformer, self).__init__() - # - self.refpad01_1 = nn.ReflectionPad2d(3) - self.conv01_1 = nn.Conv2d(3, 64, 7) - self.in01_1 = InstanceNormalization(64) - # relu - self.conv02_1 = nn.Conv2d(64, 128, 3, 2, 1) - self.conv02_2 = nn.Conv2d(128, 128, 3, 1, 1) - self.in02_1 = InstanceNormalization(128) - # relu - self.conv03_1 = nn.Conv2d(128, 256, 3, 2, 1) - self.conv03_2 = nn.Conv2d(256, 256, 3, 1, 1) - self.in03_1 = InstanceNormalization(256) - # relu - - ## res block 1 - self.refpad04_1 = nn.ReflectionPad2d(1) - self.conv04_1 = nn.Conv2d(256, 256, 3) - self.in04_1 = InstanceNormalization(256) - # relu - self.refpad04_2 = nn.ReflectionPad2d(1) - self.conv04_2 = nn.Conv2d(256, 256, 3) - self.in04_2 = InstanceNormalization(256) - # + input - - ## res block 2 - self.refpad05_1 = nn.ReflectionPad2d(1) - self.conv05_1 = nn.Conv2d(256, 256, 3) - self.in05_1 = InstanceNormalization(256) - # relu - self.refpad05_2 = nn.ReflectionPad2d(1) - self.conv05_2 = nn.Conv2d(256, 256, 3) - self.in05_2 = InstanceNormalization(256) - # + input - - ## res block 3 - self.refpad06_1 = nn.ReflectionPad2d(1) - self.conv06_1 = nn.Conv2d(256, 256, 3) - self.in06_1 = InstanceNormalization(256) - # relu - self.refpad06_2 = nn.ReflectionPad2d(1) - self.conv06_2 = nn.Conv2d(256, 256, 3) - self.in06_2 = InstanceNormalization(256) - # + input - - ## res block 4 - self.refpad07_1 = nn.ReflectionPad2d(1) - self.conv07_1 = nn.Conv2d(256, 256, 3) - self.in07_1 = InstanceNormalization(256) - # relu - self.refpad07_2 = nn.ReflectionPad2d(1) - self.conv07_2 = nn.Conv2d(256, 256, 3) - self.in07_2 = InstanceNormalization(256) - # + input - - ## res block 5 - self.refpad08_1 = nn.ReflectionPad2d(1) - self.conv08_1 = nn.Conv2d(256, 256, 3) - self.in08_1 = InstanceNormalization(256) - # relu - self.refpad08_2 = nn.ReflectionPad2d(1) - self.conv08_2 = nn.Conv2d(256, 256, 3) - self.in08_2 = InstanceNormalization(256) - # + input - - ## res block 6 - self.refpad09_1 = nn.ReflectionPad2d(1) - self.conv09_1 = nn.Conv2d(256, 256, 3) - self.in09_1 = InstanceNormalization(256) - # relu - self.refpad09_2 = nn.ReflectionPad2d(1) - self.conv09_2 = nn.Conv2d(256, 256, 3) - self.in09_2 = InstanceNormalization(256) - # + input - - ## res block 7 - self.refpad10_1 = nn.ReflectionPad2d(1) - self.conv10_1 = nn.Conv2d(256, 256, 3) - self.in10_1 = InstanceNormalization(256) - # relu - self.refpad10_2 = nn.ReflectionPad2d(1) - self.conv10_2 = nn.Conv2d(256, 256, 3) - self.in10_2 = InstanceNormalization(256) - # + input - - ## res block 8 - self.refpad11_1 = nn.ReflectionPad2d(1) - self.conv11_1 = nn.Conv2d(256, 256, 3) - self.in11_1 = InstanceNormalization(256) - # relu - self.refpad11_2 = nn.ReflectionPad2d(1) - self.conv11_2 = nn.Conv2d(256, 256, 3) - self.in11_2 = InstanceNormalization(256) - # + input - - ##------------------------------------## - self.deconv01_1 = nn.ConvTranspose2d(256, 128, 3, 2, 1, 1) - self.deconv01_2 = nn.Conv2d(128, 128, 3, 1, 1) - self.in12_1 = InstanceNormalization(128) - # relu - self.deconv02_1 = nn.ConvTranspose2d(128, 64, 3, 2, 1, 1) - 
self.deconv02_2 = nn.Conv2d(64, 64, 3, 1, 1) - self.in13_1 = InstanceNormalization(64) - # relu - self.refpad12_1 = nn.ReflectionPad2d(3) - self.deconv03_1 = nn.Conv2d(64, 3, 7) - # tanh - - def forward(self, x): - y = F.relu(self.in01_1(self.conv01_1(self.refpad01_1(x)))) - y = F.relu(self.in02_1(self.conv02_2(self.conv02_1(y)))) - t04 = F.relu(self.in03_1(self.conv03_2(self.conv03_1(y)))) - - ## - y = F.relu(self.in04_1(self.conv04_1(self.refpad04_1(t04)))) - t05 = self.in04_2(self.conv04_2(self.refpad04_2(y))) + t04 - - y = F.relu(self.in05_1(self.conv05_1(self.refpad05_1(t05)))) - t06 = self.in05_2(self.conv05_2(self.refpad05_2(y))) + t05 - - y = F.relu(self.in06_1(self.conv06_1(self.refpad06_1(t06)))) - t07 = self.in06_2(self.conv06_2(self.refpad06_2(y))) + t06 - - y = F.relu(self.in07_1(self.conv07_1(self.refpad07_1(t07)))) - t08 = self.in07_2(self.conv07_2(self.refpad07_2(y))) + t07 - - y = F.relu(self.in08_1(self.conv08_1(self.refpad08_1(t08)))) - t09 = self.in08_2(self.conv08_2(self.refpad08_2(y))) + t08 - - y = F.relu(self.in09_1(self.conv09_1(self.refpad09_1(t09)))) - t10 = self.in09_2(self.conv09_2(self.refpad09_2(y))) + t09 - - y = F.relu(self.in10_1(self.conv10_1(self.refpad10_1(t10)))) - t11 = self.in10_2(self.conv10_2(self.refpad10_2(y))) + t10 - - y = F.relu(self.in11_1(self.conv11_1(self.refpad11_1(t11)))) - y = self.in11_2(self.conv11_2(self.refpad11_2(y))) + t11 - ## - - y = F.relu(self.in12_1(self.deconv01_2(self.deconv01_1(y)))) - y = F.relu(self.in13_1(self.deconv02_2(self.deconv02_1(y)))) - y = torch.tanh(self.deconv03_1(self.refpad12_1(y))) - - return y - - -class InstanceNormalization(nn.Module): - def __init__(self, dim, eps=1e-9): - super(InstanceNormalization, self).__init__() - self.scale = nn.Parameter(torch.FloatTensor(dim)) - self.shift = nn.Parameter(torch.FloatTensor(dim)) - self.eps = eps - self._reset_parameters() - - def _reset_parameters(self): - self.scale.data.uniform_() - self.shift.data.zero_() - - def __call__(self, x): - n = x.size(2) * x.size(3) - t = x.view(x.size(0), x.size(1), n) - mean = torch.mean(t, 2).unsqueeze(2).unsqueeze(3).expand_as(x) - # Calculate the biased var. 
torch.var returns unbiased var - var = torch.var(t, 2).unsqueeze(2).unsqueeze(3).expand_as(x) * ( - (n - 1) / float(n) - ) - scale_broadcast = self.scale.unsqueeze(1).unsqueeze(1).unsqueeze(0) - scale_broadcast = scale_broadcast.expand_as(x) - shift_broadcast = self.shift.unsqueeze(1).unsqueeze(1).unsqueeze(0) - shift_broadcast = shift_broadcast.expand_as(x) - out = (x - mean) / torch.sqrt(var + self.eps) - out = out * scale_broadcast + shift_broadcast - return out diff --git a/spaces/alexray/btc_predictor/data_preprocessing.py b/spaces/alexray/btc_predictor/data_preprocessing.py deleted file mode 100644 index b3e9f14374a0bd68bdca05fb07cbede4a09bcd51..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/data_preprocessing.py +++ /dev/null @@ -1,38 +0,0 @@ -import os -import pandas as pd -from sklearn.preprocessing import StandardScaler - - -def preprocess_data(input_file="data/assets_data.csv", data_dir="data"): - - os.makedirs(data_dir, exist_ok=True) - - df = pd.read_csv(input_file, index_col=0) - - # Additional features - for column in df.drop(columns='target').columns: - df[f'{column}_ch'] = df[column] / df.shift(1)[column] - - df.dropna(inplace=True) - - X = df.drop('target', axis=1) - y = df['target'] - - # Scaling - scaler = StandardScaler() - X_scaled = scaler.fit_transform(X) - X_scaled_df = pd.DataFrame(X_scaled, index=X.index, columns=X.columns) - - # Train-test split - train_size = len(df) - 90 - X_train, X_test = X_scaled_df[:train_size], X_scaled_df[train_size:] - y_train, y_test = y[:train_size], y[train_size:] - - X_train.to_csv(f"{data_dir}/train_features.csv", index=True) - X_test.to_csv(f"{data_dir}/test_features.csv", index=True) - y_train.to_csv(f"{data_dir}/train_target.csv", index=True) - y_test.to_csv(f"{data_dir}/test_target.csv", index=True) - - -if __name__ == '__main__': - preprocess_data() diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/operations/install/legacy.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/operations/install/legacy.py deleted file mode 100644 index 5b7ef9017181e94a86b8985f7523feaea387f612..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/operations/install/legacy.py +++ /dev/null @@ -1,120 +0,0 @@ -"""Legacy installation process, i.e. `setup.py install`. -""" - -import logging -import os -from distutils.util import change_root -from typing import List, Optional, Sequence - -from pip._internal.build_env import BuildEnvironment -from pip._internal.exceptions import InstallationError, LegacyInstallFailure -from pip._internal.models.scheme import Scheme -from pip._internal.utils.misc import ensure_dir -from pip._internal.utils.setuptools_build import make_setuptools_install_args -from pip._internal.utils.subprocess import runner_with_spinner_message -from pip._internal.utils.temp_dir import TempDirectory - -logger = logging.getLogger(__name__) - - -def write_installed_files_from_setuptools_record( - record_lines: List[str], - root: Optional[str], - req_description: str, -) -> None: - def prepend_root(path: str) -> str: - if root is None or not os.path.isabs(path): - return path - else: - return change_root(root, path) - - for line in record_lines: - directory = os.path.dirname(line) - if directory.endswith(".egg-info"): - egg_info_dir = prepend_root(directory) - break - else: - message = ( - "{} did not indicate that it installed an " - ".egg-info directory. 
Only setup.py projects " - "generating .egg-info directories are supported." - ).format(req_description) - raise InstallationError(message) - - new_lines = [] - for line in record_lines: - filename = line.strip() - if os.path.isdir(filename): - filename += os.path.sep - new_lines.append(os.path.relpath(prepend_root(filename), egg_info_dir)) - new_lines.sort() - ensure_dir(egg_info_dir) - inst_files_path = os.path.join(egg_info_dir, "installed-files.txt") - with open(inst_files_path, "w") as f: - f.write("\n".join(new_lines) + "\n") - - -def install( - install_options: List[str], - global_options: Sequence[str], - root: Optional[str], - home: Optional[str], - prefix: Optional[str], - use_user_site: bool, - pycompile: bool, - scheme: Scheme, - setup_py_path: str, - isolated: bool, - req_name: str, - build_env: BuildEnvironment, - unpacked_source_directory: str, - req_description: str, -) -> bool: - - header_dir = scheme.headers - - with TempDirectory(kind="record") as temp_dir: - try: - record_filename = os.path.join(temp_dir.path, "install-record.txt") - install_args = make_setuptools_install_args( - setup_py_path, - global_options=global_options, - install_options=install_options, - record_filename=record_filename, - root=root, - prefix=prefix, - header_dir=header_dir, - home=home, - use_user_site=use_user_site, - no_user_config=isolated, - pycompile=pycompile, - ) - - runner = runner_with_spinner_message( - f"Running setup.py install for {req_name}" - ) - with build_env: - runner( - cmd=install_args, - cwd=unpacked_source_directory, - ) - - if not os.path.exists(record_filename): - logger.debug("Record file %s not found", record_filename) - # Signal to the caller that we didn't install the new package - return False - - except Exception as e: - # Signal to the caller that we didn't install the new package - raise LegacyInstallFailure(package_details=req_name) from e - - # At this point, we have successfully installed the requirement. - - # We intentionally do not use any encoding to read the file because - # setuptools writes the file using distutils.file_util.write_file, - # which does not specify an encoding. 
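A toy illustration of the record-path rewriting performed above; the paths are made up for the example:

```python
import os

egg_info_dir = "/usr/lib/python3.10/site-packages/pkg-1.0.egg-info"
record_entry = "/usr/lib/python3.10/site-packages/pkg/__init__.py"
# Each record line is rewritten relative to the .egg-info directory
# before being sorted and written into installed-files.txt.
print(os.path.relpath(record_entry, egg_info_dir))  # ../pkg/__init__.py
```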
- with open(record_filename) as f: - record_lines = f.read().splitlines() - - write_installed_files_from_setuptools_record(record_lines, root, req_description) - return True diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/control.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/control.py deleted file mode 100644 index c98d0d7d98bd6de3b6b14b89ccffabbbefc5aefb..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/control.py +++ /dev/null @@ -1,175 +0,0 @@ -from typing import Any, Callable, Dict, Iterable, List, TYPE_CHECKING, Union - -from .segment import ControlCode, ControlType, Segment - -if TYPE_CHECKING: - from .console import Console, ConsoleOptions, RenderResult - -STRIP_CONTROL_CODES = [ - 8, # Backspace - 11, # Vertical tab - 12, # Form feed - 13, # Carriage return -] -_CONTROL_TRANSLATE = {_codepoint: None for _codepoint in STRIP_CONTROL_CODES} - - -CONTROL_CODES_FORMAT: Dict[int, Callable[..., str]] = { - ControlType.BELL: lambda: "\x07", - ControlType.CARRIAGE_RETURN: lambda: "\r", - ControlType.HOME: lambda: "\x1b[H", - ControlType.CLEAR: lambda: "\x1b[2J", - ControlType.ENABLE_ALT_SCREEN: lambda: "\x1b[?1049h", - ControlType.DISABLE_ALT_SCREEN: lambda: "\x1b[?1049l", - ControlType.SHOW_CURSOR: lambda: "\x1b[?25h", - ControlType.HIDE_CURSOR: lambda: "\x1b[?25l", - ControlType.CURSOR_UP: lambda param: f"\x1b[{param}A", - ControlType.CURSOR_DOWN: lambda param: f"\x1b[{param}B", - ControlType.CURSOR_FORWARD: lambda param: f"\x1b[{param}C", - ControlType.CURSOR_BACKWARD: lambda param: f"\x1b[{param}D", - ControlType.CURSOR_MOVE_TO_COLUMN: lambda param: f"\x1b[{param+1}G", - ControlType.ERASE_IN_LINE: lambda param: f"\x1b[{param}K", - ControlType.CURSOR_MOVE_TO: lambda x, y: f"\x1b[{y+1};{x+1}H", -} - - -class Control: - """A renderable that inserts a control code (non printable but may move cursor). - - Args: - *codes (str): Positional arguments are either a :class:`~rich.segment.ControlType` enum or a - tuple of ControlType and an integer parameter - """ - - __slots__ = ["segment"] - - def __init__(self, *codes: Union[ControlType, ControlCode]) -> None: - control_codes: List[ControlCode] = [ - (code,) if isinstance(code, ControlType) else code for code in codes - ] - _format_map = CONTROL_CODES_FORMAT - rendered_codes = "".join( - _format_map[code](*parameters) for code, *parameters in control_codes - ) - self.segment = Segment(rendered_codes, None, control_codes) - - @classmethod - def bell(cls) -> "Control": - """Ring the 'bell'.""" - return cls(ControlType.BELL) - - @classmethod - def home(cls) -> "Control": - """Move cursor to 'home' position.""" - return cls(ControlType.HOME) - - @classmethod - def move(cls, x: int = 0, y: int = 0) -> "Control": - """Move cursor relative to current position. - - Args: - x (int): X offset. - y (int): Y offset. - - Returns: - ~Control: Control object. - - """ - - def get_codes() -> Iterable[ControlCode]: - control = ControlType - if x: - yield ( - control.CURSOR_FORWARD if x > 0 else control.CURSOR_BACKWARD, - abs(x), - ) - if y: - yield ( - control.CURSOR_DOWN if y > 0 else control.CURSOR_UP, - abs(y), - ) - - control = cls(*get_codes()) - return control - - @classmethod - def move_to_column(cls, x: int, y: int = 0) -> "Control": - """Move to the given column, optionally add offset to row. 
- - Args: - x (int): absolute x (column) - y (int): optional y offset (row) - - Returns: - ~Control: Control object. - """ - - return ( - cls( - (ControlType.CURSOR_MOVE_TO_COLUMN, x), - ( - ControlType.CURSOR_DOWN if y > 0 else ControlType.CURSOR_UP, - abs(y), - ), - ) - if y - else cls((ControlType.CURSOR_MOVE_TO_COLUMN, x)) - ) - - @classmethod - def move_to(cls, x: int, y: int) -> "Control": - """Move cursor to absolute position. - - Args: - x (int): x offset (column) - y (int): y offset (row) - - Returns: - ~Control: Control object. - """ - return cls((ControlType.CURSOR_MOVE_TO, x, y)) - - @classmethod - def clear(cls) -> "Control": - """Clear the screen.""" - return cls(ControlType.CLEAR) - - @classmethod - def show_cursor(cls, show: bool) -> "Control": - """Show or hide the cursor.""" - return cls(ControlType.SHOW_CURSOR if show else ControlType.HIDE_CURSOR) - - @classmethod - def alt_screen(cls, enable: bool) -> "Control": - """Enable or disable alt screen.""" - if enable: - return cls(ControlType.ENABLE_ALT_SCREEN, ControlType.HOME) - else: - return cls(ControlType.DISABLE_ALT_SCREEN) - - def __str__(self) -> str: - return self.segment.text - - def __rich_console__( - self, console: "Console", options: "ConsoleOptions" - ) -> "RenderResult": - if self.segment.text: - yield self.segment - - -def strip_control_codes( - text: str, _translate_table: Dict[int, None] = _CONTROL_TRANSLATE - ) -> str: - """Remove control codes from text. - - Args: - text (str): A string possibly containing control codes. - - Returns: - str: String with control codes removed. - """ - return text.translate(_translate_table) - - -if __name__ == "__main__": # pragma: no cover - print(strip_control_codes("hello\rWorld")) diff --git a/spaces/amankishore/sjc/run_img_sampling.py b/spaces/amankishore/sjc/run_img_sampling.py deleted file mode 100644 index bded1a0a2eb1b5530c590ae55c8d10c54720253b..0000000000000000000000000000000000000000 --- a/spaces/amankishore/sjc/run_img_sampling.py +++ /dev/null @@ -1,235 +0,0 @@ -from pathlib import Path -import numpy as np -import torch - -from misc import torch_samps_to_imgs -from adapt import Karras, ScoreAdapter, power_schedule -from adapt_gddpm import GuidedDDPM -from adapt_ncsn import NCSN as _NCSN -# from adapt_vesde import VESDE # not included to prevent import conflicts -from adapt_sd import StableDiffusion - -from my.utils import tqdm, EventStorage, HeartBeat, EarlyLoopBreak -from my.config import BaseConf, dispatch -from my.utils.seed import seed_everything - - -class GDDPM(BaseConf): - """Guided DDPM from OpenAI""" - model: str = "m_lsun_256" - lsun_cat: str = "bedroom" - imgnet_cat: int = -1 - - def make(self): - args = self.dict() - model = GuidedDDPM(**args) - return model - - -class SD(BaseConf): - """Stable Diffusion""" - variant: str = "v1" - v2_highres: bool = False - prompt: str = "a photograph of an astronaut riding a horse" - scale: float = 3.0 # classifier free guidance scale - precision: str = 'autocast' - - def make(self): - args = self.dict() - model = StableDiffusion(**args) - return model - - -class SDE(BaseConf): - def make(self): - args = self.dict() - model = VESDE(**args) - return model - - -class NCSN(BaseConf): - def make(self): - args = self.dict() - model = _NCSN(**args) - return model - - -class KarrasGen(BaseConf): - family: str = "gddpm" - gddpm: GDDPM = GDDPM() - sd: SD = SD() - # sde: SDE = SDE() - ncsn: NCSN = NCSN() - - batch_size: int = 10 - num_images: int = 1250 - num_t: int = 40 - σ_max: float = 80.0 - heun: bool = True - langevin: 
bool = False - cls_scaling: float = 1.0 # classifier guidance scaling - - def run(self): - args = self.dict() - family = args.pop("family") - model = getattr(self, family).make() - self.karras_generate(model, **args) - - @staticmethod - def karras_generate( - model: ScoreAdapter, - batch_size, num_images, σ_max, num_t, langevin, heun, cls_scaling, - **kwargs - ): - del kwargs # removed extra args - num_batches = num_images // batch_size - - fuse = EarlyLoopBreak(5) - with tqdm(total=num_batches) as pbar, \ - HeartBeat(pbar) as hbeat, \ - EventStorage() as metric: - - all_imgs = [] - - for _ in range(num_batches): - if fuse.on_break(): - break - - pipeline = Karras.inference( - model, batch_size, num_t, - init_xs=None, heun=heun, σ_max=σ_max, - langevin=langevin, cls_scaling=cls_scaling - ) - - for imgs in tqdm(pipeline, total=num_t+1, disable=False): - # _std = imgs.std().item() - # print(_std) - hbeat.beat() - pass - - if isinstance(model, StableDiffusion): - imgs = model.decode(imgs) - - imgs = torch_samps_to_imgs(imgs, uncenter=model.samps_centered()) - all_imgs.append(imgs) - - pbar.update() - - all_imgs = np.concatenate(all_imgs, axis=0) - metric.put_artifact("imgs", ".npy", lambda fn: np.save(fn, all_imgs)) - metric.step() - hbeat.done() - - -class SMLDGen(BaseConf): - family: str = "ncsn" - gddpm: GDDPM = GDDPM() - # sde: SDE = SDE() - ncsn: NCSN = NCSN() - - batch_size: int = 16 - num_images: int = 16 - num_stages: int = 80 - num_steps: int = 15 - σ_max: float = 80.0 - ε: float = 1e-5 - - def run(self): - args = self.dict() - family = args.pop("family") - model = getattr(self, family).make() - self.smld_generate(model, **args) - - @staticmethod - def smld_generate( - model: ScoreAdapter, - batch_size, num_images, num_stages, num_steps, σ_max, ε, - **kwargs - ): - num_batches = num_images // batch_size - σs = power_schedule(σ_max, model.σ_min, num_stages) - σs = [model.snap_t_to_nearest_tick(σ)[0] for σ in σs] - - fuse = EarlyLoopBreak(5) - with tqdm(total=num_batches) as pbar, \ - HeartBeat(pbar) as hbeat, \ - EventStorage() as metric: - - all_imgs = [] - - for _ in range(num_batches): - if fuse.on_break(): - break - - init_xs = torch.rand(batch_size, *model.data_shape(), device=model.device) - if model.samps_centered(): - init_xs = init_xs * 2 - 1 # [0, 1] -> [-1, 1] - - pipeline = smld_inference( - model, σs, num_steps, ε, init_xs - ) - - for imgs in tqdm(pipeline, total=(num_stages * num_steps)+1, disable=False): - pbar.set_description(f"{imgs.max().item():.3f}") - metric.put_scalars( - max=imgs.max().item(), min=imgs.min().item(), std=imgs.std().item() - ) - metric.step() - hbeat.beat() - - pbar.update() - imgs = torch_samps_to_imgs(imgs, uncenter=model.samps_centered()) - all_imgs.append(imgs) - - all_imgs = np.concatenate(all_imgs, axis=0) - metric.put_artifact("imgs", ".npy", lambda fn: np.save(fn, all_imgs)) - metric.step() - hbeat.done() - - -def smld_inference(model, σs, num_steps, ε, init_xs): - from math import sqrt - # not doing conditioning or cls guidance; for gddpm only lsun works; fine. 
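For reference, the `smld_inference` loop below implements the annealed Langevin update with the step size used in the code:

$$x \leftarrow x + \alpha_i\, s_\theta(x, \sigma_i) + \sqrt{2 \alpha_i}\, z, \qquad \alpha_i = \varepsilon \left(\frac{\sigma_i}{\sigma_L}\right)^{2}, \quad z \sim \mathcal{N}(0, I),$$

where $s_\theta$ is `model.score` and $\sigma_L$ is the smallest noise level `σs[-1]`.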
- - xs = init_xs - yield xs - - for i in range(len(σs)): - α_i = ε * ((σs[i] / σs[-1]) ** 2) - for _ in range(num_steps): - grad = model.score(xs, σs[i]) - z = torch.randn_like(xs) - xs = xs + α_i * grad + sqrt(2 * α_i) * z - yield xs - - -def load_np_imgs(fname): - fname = Path(fname) - data = np.load(fname) - if fname.suffix == ".npz": - imgs = data['arr_0'] - else: - imgs = data - return imgs - - -def visualize(max_n_imgs=16): - import torchvision.utils as vutils - from imageio import imwrite - from einops import rearrange - - all_imgs = load_np_imgs("imgs/step_0.npy") - - imgs = all_imgs[:max_n_imgs] - imgs = rearrange(imgs, "N H W C -> N C H W", C=3) - imgs = torch.from_numpy(imgs) - pane = vutils.make_grid(imgs, padding=2, nrow=4) - pane = rearrange(pane, "C H W -> H W C", C=3) - pane = pane.numpy() - imwrite("preview.jpg", pane) - - -if __name__ == "__main__": - seed_everything(0) - dispatch(KarrasGen) - visualize(16) diff --git a/spaces/amarchheda/ChordDuplicate/portaudio/test/patest_callbackstop.c b/spaces/amarchheda/ChordDuplicate/portaudio/test/patest_callbackstop.c deleted file mode 100644 index fba9dca4dbf3215ee9e5301cd424e1d425ff2b0e..0000000000000000000000000000000000000000 --- a/spaces/amarchheda/ChordDuplicate/portaudio/test/patest_callbackstop.c +++ /dev/null @@ -1,252 +0,0 @@ -/** @file patest_callbackstop.c - @ingroup test_src - @brief Test the paComplete callback result code. - @author Ross Bencina -*/ -/* - * $Id$ - * - * This program uses the PortAudio Portable Audio Library. - * For more information see: http://www.portaudio.com/ - * Copyright (c) 1999-2000 Ross Bencina and Phil Burk - * - * Permission is hereby granted, free of charge, to any person obtaining - * a copy of this software and associated documentation files - * (the "Software"), to deal in the Software without restriction, - * including without limitation the rights to use, copy, modify, merge, - * publish, distribute, sublicense, and/or sell copies of the Software, - * and to permit persons to whom the Software is furnished to do so, - * subject to the following conditions: - * - * The above copyright notice and this permission notice shall be - * included in all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR - * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF - * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - */ - -/* - * The text above constitutes the entire PortAudio license; however, - * the PortAudio community also makes the following non-binding requests: - * - * Any person wishing to distribute modifications to the Software is - * requested to send the modifications to the original developer so that - * they can be incorporated into the canonical version. It is also - * requested that these non-binding requests be included along with the - * license above. 
- */ - -#include <stdio.h> -#include <math.h> -#include "portaudio.h" - -#define NUM_SECONDS (5) -#define NUM_LOOPS (4) -#define SAMPLE_RATE (44100) -#define FRAMES_PER_BUFFER (67) - -#ifndef M_PI -#define M_PI (3.14159265) -#endif - -#define TABLE_SIZE (200) -typedef struct -{ - float sine[TABLE_SIZE]; - int phase; - unsigned long generatedFramesCount; - volatile int callbackReturnedPaComplete; - volatile int callbackInvokedAfterReturningPaComplete; - char message[100]; -} -TestData; - -/* - This routine will be called by the PortAudio stream when audio is needed. - It may be called at interrupt level on some machines so don't do anything - that could mess up the system like calling malloc() or free(). -*/ -static int TestCallback( const void *input, void *output, - unsigned long frameCount, - const PaStreamCallbackTimeInfo* timeInfo, - PaStreamCallbackFlags statusFlags, - void *userData ) -{ - TestData *data = (TestData*)userData; - float *out = (float*)output; - unsigned long i; - float x; - - (void) input; /* Prevent unused variable warnings. */ - (void) timeInfo; - (void) statusFlags; - - - if( data->callbackReturnedPaComplete ) - data->callbackInvokedAfterReturningPaComplete = 1; - - for( i=0; i<frameCount; i++ ) - { - x = data->sine[ data->phase++ ]; - if( data->phase >= TABLE_SIZE ) - data->phase -= TABLE_SIZE; - - *out++ = x; /* left */ - *out++ = x; /* right */ - } - - data->generatedFramesCount += frameCount; - if( data->generatedFramesCount >= (NUM_SECONDS * SAMPLE_RATE) ) - { - data->callbackReturnedPaComplete = 1; - return paComplete; - } - else - { - return paContinue; - } -} - -/* - * This routine is called by portaudio when playback is done. - */ -static void StreamFinished( void* userData ) -{ - TestData *data = (TestData *) userData; - printf( "Stream Completed: %s\n", data->message ); -} - - -/*----------------------------------------------------------------------------*/ -int main(void); -int main(void) -{ - PaStreamParameters outputParameters; - PaStream *stream; - PaError err; - TestData data; - int i, j; - - - printf( "PortAudio Test: output sine wave. 
SR = %d, BufSize = %d\n", - SAMPLE_RATE, FRAMES_PER_BUFFER ); - - /* initialise sinusoidal wavetable */ - for( i=0; i<TABLE_SIZE; i++ ) - { - data.sine[i] = (float) sin( ((double)i/(double)TABLE_SIZE) * M_PI * 2. ); - } - - err = Pa_Initialize(); - if( err != paNoError ) goto error; - - outputParameters.device = Pa_GetDefaultOutputDevice(); /* default output device */ - if (outputParameters.device == paNoDevice) { - fprintf(stderr,"Error: No default output device.\n"); - goto error; - } - outputParameters.channelCount = 2; /* stereo output */ - outputParameters.sampleFormat = paFloat32; /* 32 bit floating point output */ - outputParameters.suggestedLatency = Pa_GetDeviceInfo( outputParameters.device )->defaultLowOutputLatency; - outputParameters.hostApiSpecificStreamInfo = NULL; - - err = Pa_OpenStream( - &stream, - NULL, /* no input */ - &outputParameters, - SAMPLE_RATE, - FRAMES_PER_BUFFER, - paClipOff, /* output will be in-range, so no need to clip */ - TestCallback, - &data ); - if( err != paNoError ) goto error; - - sprintf( data.message, "Loop: XX" ); - err = Pa_SetStreamFinishedCallback( stream, &StreamFinished ); - if( err != paNoError ) goto error; - - printf("Repeating test %d times.\n", NUM_LOOPS ); - - for( i=0; i < NUM_LOOPS; ++i ) - { - data.phase = 0; - data.generatedFramesCount = 0; - data.callbackReturnedPaComplete = 0; - data.callbackInvokedAfterReturningPaComplete = 0; - sprintf( data.message, "Loop: %d", i ); - - err = Pa_StartStream( stream ); - if( err != paNoError ) goto error; - - printf("Play for %d seconds.\n", NUM_SECONDS ); - - /* wait for the callback to complete generating NUM_SECONDS of tone */ - - do - { - Pa_Sleep( 500 ); - } - while( !data.callbackReturnedPaComplete ); - - printf( "Callback returned paComplete.\n" ); - printf( "Waiting for buffers to finish playing...\n" ); - - /* wait for stream to become inactive, - or for a timeout of approximately NUM_SECONDS - */ - - j = 0; - while( (err = Pa_IsStreamActive( stream )) == 1 && j < NUM_SECONDS * 2 ) - { - printf(".\n" ); - Pa_Sleep( 500 ); - ++j; - } - - if( err < 0 ) - { - goto error; - } - else if( err == 1 ) - { - printf( "TEST FAILED: Timed out waiting for buffers to finish playing.\n" ); - } - else - { - printf("Buffers finished.\n" ); - } - - if( data.callbackInvokedAfterReturningPaComplete ) - { - printf( "TEST FAILED: Callback was invoked after returning paComplete.\n" ); - } - - - err = Pa_StopStream( stream ); - if( err != paNoError ) goto error; - - printf( "sleeping for 1 second...\n" ); - Pa_Sleep( 1000 ); - } - - err = Pa_CloseStream( stream ); - if( err != paNoError ) goto error; - - Pa_Terminate(); - printf("Test finished.\n"); - - return err; -error: - Pa_Terminate(); - fprintf( stderr, "An error occurred while using the portaudio stream\n" ); - fprintf( stderr, "Error number: %d\n", err ); - fprintf( stderr, "Error message: %s\n", Pa_GetErrorText( err ) ); - return err; -} diff --git a/spaces/amielle/patent-summarizer/util/examples.py b/spaces/amielle/patent-summarizer/util/examples.py deleted file mode 100644 index 938d20b32739868f206f8f19cebbb8813fe118cc..0000000000000000000000000000000000000000 --- a/spaces/amielle/patent-summarizer/util/examples.py +++ /dev/null @@ -1,112 +0,0 @@ -from util import summarizer - -entries = [ - [ - "US9820315B2", - summarizer.summary_options, - summarizer.model_names[3], - summarizer.model_names[0], - summarizer.model_names[2], - True, - 250 - ], - [ - "US9820315B2", - summarizer.summary_options, - summarizer.model_names[3], - summarizer.model_names[0], - summarizer.model_names[2], - True, - 350 - ], - [ - "https://patents.google.com/patent/US9820315B2", - summarizer.summary_options, - summarizer.model_names[3], - summarizer.model_names[0], - summarizer.model_names[2], - True, - 400 - ], - [ - "https://patents.google.com/patent/US10263802B2/en?q=smart+user+interface&oq=smart+user+interface", - summarizer.summary_options, - summarizer.model_names[3], - summarizer.model_names[0], - summarizer.model_names[2], - True, - 350 - ], - [ - "CN211575647U", - summarizer.summary_options, - summarizer.model_names[3], - summarizer.model_names[0], - 
-        summarizer.model_names[2],
-        True,
-        250
-    ],
-    [
-        "CN211575647U",
-        summarizer.summary_options,
-        summarizer.model_names[3],
-        summarizer.model_names[0],
-        summarizer.model_names[2],
-        True,
-        350
-    ],
-    [
-        "https://patents.google.com/patent/CN211575647U",
-        summarizer.summary_options,
-        summarizer.model_names[3],
-        summarizer.model_names[0],
-        summarizer.model_names[2],
-        True,
-        400
-    ],
-    [
-        "https://patents.google.com/patent/CN211575647U/en",
-        summarizer.summary_options,
-        summarizer.model_names[3],
-        summarizer.model_names[0],
-        summarizer.model_names[2],
-        True,
-        350
-    ],
-    [
-        "US10125002B2",
-        summarizer.summary_options,
-        summarizer.model_names[3],
-        summarizer.model_names[0],
-        summarizer.model_names[2],
-        True,
-        250
-    ],
-    [
-        "US10125002B2",
-        summarizer.summary_options,
-        summarizer.model_names[3],
-        summarizer.model_names[0],
-        summarizer.model_names[2],
-        True,
-        350
-    ],
-    [
-        "https://patents.google.com/patent/US10125002B2",
-        summarizer.summary_options,
-        summarizer.model_names[3],
-        summarizer.model_names[0],
-        summarizer.model_names[2],
-        True,
-        400
-    ],
-    [
-        "https://patents.google.com/patent/US10125002B2/en",
-        summarizer.summary_options,
-        summarizer.model_names[3],
-        summarizer.model_names[0],
-        summarizer.model_names[2],
-        True,
-        350
-    ]
-]
\ No newline at end of file
diff --git a/spaces/anhalu/transformer-ocr/README.md b/spaces/anhalu/transformer-ocr/README.md
deleted file mode 100644
index da799b34c46f3cd005b2b2e6889b60ba1819f6c8..0000000000000000000000000000000000000000
--- a/spaces/anhalu/transformer-ocr/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Transformer Ocr
-emoji: 📊
-colorFrom: red
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.35.2
-app_file: app.py
-pinned: false
---- 
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/antonovmaxim/text-generation-webui-space/extensions/openai/README.md b/spaces/antonovmaxim/text-generation-webui-space/extensions/openai/README.md
deleted file mode 100644
index b20eba3326b297630e64a3bedf96ef82c0d359b8..0000000000000000000000000000000000000000
--- a/spaces/antonovmaxim/text-generation-webui-space/extensions/openai/README.md
+++ /dev/null
@@ -1,144 +0,0 @@
-# An OpenedAI API (openai like)
-
-This extension creates an API that works kind of like openai (ie. api.openai.com). It's incomplete so far, but it is perhaps functional enough for you.
-
-## Setup & installation
-
-Optional (for flask_cloudflared, embeddings):
-
-```
-pip3 install -r requirements.txt
-```
-
-It listens on TCP port 5001 by default. You can use the OPENEDAI_PORT environment variable to change this.
-
-To enable the bare-bones image generation (txt2img), set SD_WEBUI_URL to point to your Stable Diffusion API ([Automatic1111](https://github.com/AUTOMATIC1111/stable-diffusion-webui)).
-
-Example:
-```
-SD_WEBUI_URL=http://127.0.0.1:7861
-```
-
-### Embeddings (alpha)
-
-Embeddings require ```sentence-transformers``` to be installed, but chat and completions will function without it loaded. The embeddings endpoint currently uses the HuggingFace model ```sentence-transformers/all-mpnet-base-v2```. This produces 768-dimensional embeddings (the same as the text-davinci-002 embeddings), which is different from OpenAI's current default ```text-embedding-ada-002``` model, which produces 1536-dimensional embeddings. The model is small-ish and fast-ish. This model and embedding size may change in the future.
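-
-As a quick sanity check, here is a minimal sketch of calling this endpoint with the official python client (assuming a local server on the default port; the model argument is accepted for API compatibility, while the served model is chosen via OPENEDAI_EMBEDDING_MODEL):
-
-```
-import openai
-
-openai.api_key = "dummy"
-openai.api_base = "http://127.0.0.1:5001/v1"
-
-result = openai.Embedding.create(input="Hello world", model="text-embedding-ada-002")
-print(len(result["data"][0]["embedding"]))  # 768 with the default all-mpnet-base-v2
-```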
-
-| model name | dimensions | input max tokens | speed | size | Avg. performance |
-| --- | --- | --- | --- | --- | --- |
-| text-embedding-ada-002 | 1536 | 8192 | - | - | - |
-| text-davinci-002 | 768 | 2046 | - | - | - |
-| all-mpnet-base-v2 | 768 | 384 | 2800 | 420M | 63.3 |
-| all-MiniLM-L6-v2 | 384 | 256 | 14200 | 80M | 58.8 |
-
-In short, the all-MiniLM-L6-v2 model is 5x faster, uses 5x less RAM and 2x less storage, and still offers good quality. Stats from (https://www.sbert.net/docs/pretrained_models.html). To change the model from the default you can set the environment variable OPENEDAI_EMBEDDING_MODEL, ex. "OPENEDAI_EMBEDDING_MODEL=all-MiniLM-L6-v2".
-
-Warning: You cannot mix embeddings from different models even if they have the same dimensions. They are not comparable.
-
-### Client Application Setup
-
-Almost everything you use it with will require you to set a dummy OpenAI API key environment variable.
-
-With the [official python openai client](https://github.com/openai/openai-python), you can set the OPENAI_API_BASE environment variable before you import the openai module, like so:
-
-```
-OPENAI_API_KEY=dummy
-OPENAI_API_BASE=http://127.0.0.1:5001/v1
-```
-
-If needed, replace 127.0.0.1 with the IP/port of your server.
-
-If using .env files to save the OPENAI_API_BASE and OPENAI_API_KEY variables, you can ensure compatibility by loading the .env file before loading the openai module, like so in python:
-
-```
-from dotenv import load_dotenv
-load_dotenv()
-import openai
-```
-
-With the [official Node.js openai client](https://github.com/openai/openai-node) it is slightly more complex because the environment variables are not used by default, so small source code changes may be required to use them, like so:
-
-```
-const openai = OpenAI(Configuration({
-    apiKey: process.env.OPENAI_API_KEY,
-    basePath: process.env.OPENAI_API_BASE,
-}));
-```
-
-For apps made with the [chatgpt-api Node.js client library](https://github.com/transitive-bullshit/chatgpt-api):
-
-```
-const api = new ChatGPTAPI({
-  apiKey: process.env.OPENAI_API_KEY,
-  apiBaseUrl: process.env.OPENAI_API_BASE,
-})
-```
-
-## Compatibility & not so compatibility
-
-| API endpoint | tested with | notes |
-| --- | --- | --- |
-| /v1/models | openai.Model.list() | returns the currently loaded model_name and some mock compatibility options |
-| /v1/models/{id} | openai.Model.get() | returns whatever you ask for, model does nothing yet anyways |
-| /v1/text_completion | openai.Completion.create() | the most tested, only supports single string input so far |
-| /v1/chat/completions | openai.ChatCompletion.create() | depending on the model, this may add leading linefeeds |
-| /v1/edits | openai.Edit.create() | Assumes an instruction-following model, but may work with others |
-| /v1/images/generations | openai.Image.create() | Bare bones, no model configuration, response_format='b64_json' only. |
-| /v1/embeddings | openai.Embedding.create() | Using Sentence Transformer, dimensions are different and may never be directly comparable to openai embeddings. |
-| /v1/moderations | openai.Moderation.create() | does nothing. successfully. |
-| /v1/engines/\*/... completions, embeddings, generate | python-openai v0.25 and earlier | Legacy engines endpoints |
-| /v1/images/edits | openai.Image.create_edit() | not supported |
-| /v1/images/variations | openai.Image.create_variation() | not supported |
-| /v1/audio/\* | openai.Audio.\* | not supported |
-| /v1/files\* | openai.Files.\* | not supported |
-| /v1/fine-tunes\* | openai.FineTune.\* | not supported |
-
-The model name setting is ignored in completions, but you may need to adjust the maximum token length to fit the model (ie. set to <2048 tokens instead of 4096, 8k, etc). To mitigate some of this, the max_tokens value is halved until it is less than truncation_length for the model (typically 2k).
-
-Streaming, temperature, top_p, max_tokens, and stop should all work as expected, but not all parameters are mapped correctly.
-
-Some hacky mappings:
-
-| OpenAI | text-generation-webui | note |
-| --- | --- | --- |
-| frequency_penalty | encoder_repetition_penalty | this seems to operate with a different scale and defaults; I tried to scale it based on range & defaults, but the results are terrible. Hardcoded to 1.18 until there is a better way |
-| presence_penalty | repetition_penalty | same issues as frequency_penalty; hardcoded to 1.0 |
-| best_of | top_k | |
-| stop | custom_stopping_strings | this is also stuffed with ['\nsystem:', '\nuser:', '\nhuman:', '\nassistant:', '\n###', ] for good measure. |
-| n | 1 | hardcoded; it may be worth implementing this, but I'm not sure how yet |
-| 1.0 | typical_p | hardcoded |
-| 1 | num_beams | hardcoded |
-| max_tokens | max_new_tokens | max_tokens is scaled down by powers of 2 until it's smaller than truncation length. |
-| logprobs | - | ignored |
-
-Defaults are mostly from openai, so they are different. I use the openai defaults where I can and try to scale them to the webui defaults with the same intent.
-
-### Models
-
-This has been successfully tested with Koala, Alpaca, gpt4-x-alpaca, GPT4all-snoozy, wizard-vicuna, stable-vicuna and Vicuna 1.1 - ie. instruction-following models. If you test with other models please let me know how it goes. Less than satisfying results (so far): RWKV-4-Raven, llama, mpt-7b-instruct/chat
-
-### Applications
-
-Everything needs OPENAI_API_KEY=dummy set.
-
-| Compatibility | Application/Library | url | notes / setting |
-| --- | --- | --- | --- |
-| ✅❌ | openai-python | https://github.com/openai/openai-python | only the endpoints from above are working. OPENAI_API_BASE=http://127.0.0.1:5001/v1 |
-| ✅❌ | openai-node | https://github.com/openai/openai-node | only the endpoints from above are working. environment variables don't work by default, but can be configured (see above) |
-| ✅❌ | chatgpt-api | https://github.com/transitive-bullshit/chatgpt-api | only the endpoints from above are working. environment variables don't work by default, but can be configured (see above) |
-| ✅ | shell_gpt | https://github.com/TheR1D/shell_gpt | OPENAI_API_HOST=http://127.0.0.1:5001 |
-| ✅ | gpt-shell | https://github.com/jla/gpt-shell | OPENAI_API_BASE=http://127.0.0.1:5001/v1 |
-| ✅ | gpt-discord-bot | https://github.com/openai/gpt-discord-bot | OPENAI_API_BASE=http://127.0.0.1:5001/v1 |
-| ✅❌ | langchain | https://github.com/hwchase17/langchain | OPENAI_API_BASE=http://127.0.0.1:5001/v1 even with a good 30B-4bit model the result is poor so far. It assumes zero-shot python/json coding. Some model-tailored prompt formatting improves results greatly. |
-| ✅❌ | Auto-GPT | https://github.com/Significant-Gravitas/Auto-GPT | OPENAI_API_BASE=http://127.0.0.1:5001/v1 Same issues as langchain. Also assumes a 4k+ context |
-| ✅❌ | babyagi | https://github.com/yoheinakajima/babyagi | OPENAI_API_BASE=http://127.0.0.1:5001/v1 |
-
-## Future plans
-* better error handling
-* model changing, esp. something for swapping loras or embedding models
-* consider switching to FastAPI + starlette for SSE (openai SSE seems non-standard)
-* do something about rate limiting or locking requests for completions; most systems will only be able to handle a single request at a time before OOM
-
-## Bugs? Feedback? Comments? Pull requests?
-
-All are appreciated; please @matatonic and I'll try to get back to you as soon as possible.
diff --git a/spaces/aodianyun/panoptic-segment-anything/segment_anything/segment_anything/modeling/prompt_encoder.py b/spaces/aodianyun/panoptic-segment-anything/segment_anything/segment_anything/modeling/prompt_encoder.py
deleted file mode 100644
index c3143f4f8e02ddd7ca8587b40ff5d47c3a6b7ef3..0000000000000000000000000000000000000000
--- a/spaces/aodianyun/panoptic-segment-anything/segment_anything/segment_anything/modeling/prompt_encoder.py
+++ /dev/null
@@ -1,214 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import numpy as np
-import torch
-from torch import nn
-
-from typing import Any, Optional, Tuple, Type
-
-from .common import LayerNorm2d
-
-
-class PromptEncoder(nn.Module):
-    def __init__(
-        self,
-        embed_dim: int,
-        image_embedding_size: Tuple[int, int],
-        input_image_size: Tuple[int, int],
-        mask_in_chans: int,
-        activation: Type[nn.Module] = nn.GELU,
-    ) -> None:
-        """
-        Encodes prompts for input to SAM's mask decoder.
-
-        Arguments:
-          embed_dim (int): The prompts' embedding dimension
-          image_embedding_size (tuple(int, int)): The spatial size of the
-            image embedding, as (H, W).
-          input_image_size (int): The padded size of the image as input
-            to the image encoder, as (H, W).
-          mask_in_chans (int): The number of hidden channels used for
-            encoding input masks.
-          activation (nn.Module): The activation to use when encoding
-            input masks.
-        """
-        super().__init__()
-        self.embed_dim = embed_dim
-        self.input_image_size = input_image_size
-        self.image_embedding_size = image_embedding_size
-        self.pe_layer = PositionEmbeddingRandom(embed_dim // 2)
-
-        self.num_point_embeddings: int = 4  # pos/neg point + 2 box corners
-        point_embeddings = [nn.Embedding(1, embed_dim) for i in range(self.num_point_embeddings)]
-        self.point_embeddings = nn.ModuleList(point_embeddings)
-        self.not_a_point_embed = nn.Embedding(1, embed_dim)
-
-        self.mask_input_size = (4 * image_embedding_size[0], 4 * image_embedding_size[1])
-        self.mask_downscaling = nn.Sequential(
-            nn.Conv2d(1, mask_in_chans // 4, kernel_size=2, stride=2),
-            LayerNorm2d(mask_in_chans // 4),
-            activation(),
-            nn.Conv2d(mask_in_chans // 4, mask_in_chans, kernel_size=2, stride=2),
-            LayerNorm2d(mask_in_chans),
-            activation(),
-            nn.Conv2d(mask_in_chans, embed_dim, kernel_size=1),
-        )
-        self.no_mask_embed = nn.Embedding(1, embed_dim)
-
-    def get_dense_pe(self) -> torch.Tensor:
-        """
-        Returns the positional encoding used to encode point prompts,
-        applied to a dense set of points the shape of the image encoding.
- - Returns: - torch.Tensor: Positional encoding with shape - 1x(embed_dim)x(embedding_h)x(embedding_w) - """ - return self.pe_layer(self.image_embedding_size).unsqueeze(0) - - def _embed_points( - self, - points: torch.Tensor, - labels: torch.Tensor, - pad: bool, - ) -> torch.Tensor: - """Embeds point prompts.""" - points = points + 0.5 # Shift to center of pixel - if pad: - padding_point = torch.zeros((points.shape[0], 1, 2), device=points.device) - padding_label = -torch.ones((labels.shape[0], 1), device=labels.device) - points = torch.cat([points, padding_point], dim=1) - labels = torch.cat([labels, padding_label], dim=1) - point_embedding = self.pe_layer.forward_with_coords(points, self.input_image_size) - point_embedding[labels == -1] = 0.0 - point_embedding[labels == -1] += self.not_a_point_embed.weight - point_embedding[labels == 0] += self.point_embeddings[0].weight - point_embedding[labels == 1] += self.point_embeddings[1].weight - return point_embedding - - def _embed_boxes(self, boxes: torch.Tensor) -> torch.Tensor: - """Embeds box prompts.""" - boxes = boxes + 0.5 # Shift to center of pixel - coords = boxes.reshape(-1, 2, 2) - corner_embedding = self.pe_layer.forward_with_coords(coords, self.input_image_size) - corner_embedding[:, 0, :] += self.point_embeddings[2].weight - corner_embedding[:, 1, :] += self.point_embeddings[3].weight - return corner_embedding - - def _embed_masks(self, masks: torch.Tensor) -> torch.Tensor: - """Embeds mask inputs.""" - mask_embedding = self.mask_downscaling(masks) - return mask_embedding - - def _get_batch_size( - self, - points: Optional[Tuple[torch.Tensor, torch.Tensor]], - boxes: Optional[torch.Tensor], - masks: Optional[torch.Tensor], - ) -> int: - """ - Gets the batch size of the output given the batch size of the input prompts. - """ - if points is not None: - return points[0].shape[0] - elif boxes is not None: - return boxes.shape[0] - elif masks is not None: - return masks.shape[0] - else: - return 1 - - def _get_device(self) -> torch.device: - return self.point_embeddings[0].weight.device - - def forward( - self, - points: Optional[Tuple[torch.Tensor, torch.Tensor]], - boxes: Optional[torch.Tensor], - masks: Optional[torch.Tensor], - ) -> Tuple[torch.Tensor, torch.Tensor]: - """ - Embeds different types of prompts, returning both sparse and dense - embeddings. - - Arguments: - points (tuple(torch.Tensor, torch.Tensor) or none): point coordinates - and labels to embed. - boxes (torch.Tensor or none): boxes to embed - masks (torch.Tensor or none): masks to embed - - Returns: - torch.Tensor: sparse embeddings for the points and boxes, with shape - BxNx(embed_dim), where N is determined by the number of input points - and boxes. 
-            torch.Tensor: dense embeddings for the masks, in the shape
-              Bx(embed_dim)x(embed_H)x(embed_W)
-        """
-        bs = self._get_batch_size(points, boxes, masks)
-        sparse_embeddings = torch.empty((bs, 0, self.embed_dim), device=self._get_device())
-        if points is not None:
-            coords, labels = points
-            point_embeddings = self._embed_points(coords, labels, pad=(boxes is None))
-            sparse_embeddings = torch.cat([sparse_embeddings, point_embeddings], dim=1)
-        if boxes is not None:
-            box_embeddings = self._embed_boxes(boxes)
-            sparse_embeddings = torch.cat([sparse_embeddings, box_embeddings], dim=1)
-
-        if masks is not None:
-            dense_embeddings = self._embed_masks(masks)
-        else:
-            dense_embeddings = self.no_mask_embed.weight.reshape(1, -1, 1, 1).expand(
-                bs, -1, self.image_embedding_size[0], self.image_embedding_size[1]
-            )
-
-        return sparse_embeddings, dense_embeddings
-
-
-class PositionEmbeddingRandom(nn.Module):
-    """
-    Positional encoding using random spatial frequencies.
-    """
-
-    def __init__(self, num_pos_feats: int = 64, scale: Optional[float] = None) -> None:
-        super().__init__()
-        if scale is None or scale <= 0.0:
-            scale = 1.0
-        self.register_buffer(
-            "positional_encoding_gaussian_matrix",
-            scale * torch.randn((2, num_pos_feats)),
-        )
-
-    def _pe_encoding(self, coords: torch.Tensor) -> torch.Tensor:
-        """Positionally encode points that are normalized to [0,1]."""
-        # assuming coords are in [0, 1]^2 square and have d_1 x ... x d_n x 2 shape
-        coords = 2 * coords - 1
-        coords = coords @ self.positional_encoding_gaussian_matrix
-        coords = 2 * np.pi * coords
-        # outputs d_1 x ... x d_n x C shape
-        return torch.cat([torch.sin(coords), torch.cos(coords)], dim=-1)
-
-    def forward(self, size: Tuple[int, int]) -> torch.Tensor:
-        """Generate positional encoding for a grid of the specified size."""
-        h, w = size
-        device: Any = self.positional_encoding_gaussian_matrix.device
-        grid = torch.ones((h, w), device=device, dtype=torch.float32)
-        y_embed = grid.cumsum(dim=0) - 0.5
-        x_embed = grid.cumsum(dim=1) - 0.5
-        y_embed = y_embed / h
-        x_embed = x_embed / w
-
-        pe = self._pe_encoding(torch.stack([x_embed, y_embed], dim=-1))
-        return pe.permute(2, 0, 1)  # C x H x W
-
-    def forward_with_coords(
-        self, coords_input: torch.Tensor, image_size: Tuple[int, int]
-    ) -> torch.Tensor:
-        """Positionally encode points that are not normalized to [0,1]."""
-        coords = coords_input.clone()
-        coords[:, :, 0] = coords[:, :, 0] / image_size[1]
-        coords[:, :, 1] = coords[:, :, 1] / image_size[0]
-        return self._pe_encoding(coords.to(torch.float))  # B x N x C
diff --git a/spaces/aodianyun/stable-diffusion-webui/modules/codeformer_model.py b/spaces/aodianyun/stable-diffusion-webui/modules/codeformer_model.py
deleted file mode 100644
index fc8fbc3e947f64a7f23f0da7243d6e0ad7cbeb79..0000000000000000000000000000000000000000
--- a/spaces/aodianyun/stable-diffusion-webui/modules/codeformer_model.py
+++ /dev/null
@@ -1,143 +0,0 @@
-import os
-import sys
-import traceback
-
-import cv2
-import torch
-
-import modules.face_restoration
-import modules.shared
-from modules import shared, devices, modelloader
-from modules.paths import models_path
-
-# The codeformer people made a choice to include a modified basicsr library in their project, which
-# makes it utterly impossible to use it alongside other libraries that also use basicsr, like GFPGAN.
-# I am making a choice to include some files from codeformer to work around this issue.
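-# Weights live under models/Codeformer; the checkpoint at model_url is
-# downloaded on first use (see create_models below).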
-model_dir = "Codeformer" -model_path = os.path.join(models_path, model_dir) -model_url = 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/codeformer.pth' - -have_codeformer = False -codeformer = None - - -def setup_model(dirname): - global model_path - if not os.path.exists(model_path): - os.makedirs(model_path) - - path = modules.paths.paths.get("CodeFormer", None) - if path is None: - return - - try: - from torchvision.transforms.functional import normalize - from modules.codeformer.codeformer_arch import CodeFormer - from basicsr.utils.download_util import load_file_from_url - from basicsr.utils import imwrite, img2tensor, tensor2img - from facelib.utils.face_restoration_helper import FaceRestoreHelper - from facelib.detection.retinaface import retinaface - from modules.shared import cmd_opts - - net_class = CodeFormer - - class FaceRestorerCodeFormer(modules.face_restoration.FaceRestoration): - def name(self): - return "CodeFormer" - - def __init__(self, dirname): - self.net = None - self.face_helper = None - self.cmd_dir = dirname - - def create_models(self): - - if self.net is not None and self.face_helper is not None: - self.net.to(devices.device_codeformer) - return self.net, self.face_helper - model_paths = modelloader.load_models(model_path, model_url, self.cmd_dir, download_name='codeformer-v0.1.0.pth') - if len(model_paths) != 0: - ckpt_path = model_paths[0] - else: - print("Unable to load codeformer model.") - return None, None - net = net_class(dim_embd=512, codebook_size=1024, n_head=8, n_layers=9, connect_list=['32', '64', '128', '256']).to(devices.device_codeformer) - checkpoint = torch.load(ckpt_path)['params_ema'] - net.load_state_dict(checkpoint) - net.eval() - - if hasattr(retinaface, 'device'): - retinaface.device = devices.device_codeformer - face_helper = FaceRestoreHelper(1, face_size=512, crop_ratio=(1, 1), det_model='retinaface_resnet50', save_ext='png', use_parse=True, device=devices.device_codeformer) - - self.net = net - self.face_helper = face_helper - - return net, face_helper - - def send_model_to(self, device): - self.net.to(device) - self.face_helper.face_det.to(device) - self.face_helper.face_parse.to(device) - - def restore(self, np_image, w=None): - np_image = np_image[:, :, ::-1] - - original_resolution = np_image.shape[0:2] - - self.create_models() - if self.net is None or self.face_helper is None: - return np_image - - self.send_model_to(devices.device_codeformer) - - self.face_helper.clean_all() - self.face_helper.read_image(np_image) - self.face_helper.get_face_landmarks_5(only_center_face=False, resize=640, eye_dist_threshold=5) - self.face_helper.align_warp_face() - - for idx, cropped_face in enumerate(self.face_helper.cropped_faces): - cropped_face_t = img2tensor(cropped_face / 255., bgr2rgb=True, float32=True) - normalize(cropped_face_t, (0.5, 0.5, 0.5), (0.5, 0.5, 0.5), inplace=True) - cropped_face_t = cropped_face_t.unsqueeze(0).to(devices.device_codeformer) - - try: - with torch.no_grad(): - output = self.net(cropped_face_t, w=w if w is not None else shared.opts.code_former_weight, adain=True)[0] - restored_face = tensor2img(output, rgb2bgr=True, min_max=(-1, 1)) - del output - torch.cuda.empty_cache() - except Exception as error: - print(f'\tFailed inference for CodeFormer: {error}', file=sys.stderr) - restored_face = tensor2img(cropped_face_t, rgb2bgr=True, min_max=(-1, 1)) - - restored_face = restored_face.astype('uint8') - self.face_helper.add_restored_face(restored_face) - - self.face_helper.get_inverse_affine(None) - - 
restored_img = self.face_helper.paste_faces_to_input_image() - restored_img = restored_img[:, :, ::-1] - - if original_resolution != restored_img.shape[0:2]: - restored_img = cv2.resize(restored_img, (0, 0), fx=original_resolution[1]/restored_img.shape[1], fy=original_resolution[0]/restored_img.shape[0], interpolation=cv2.INTER_LINEAR) - - self.face_helper.clean_all() - - if shared.opts.face_restoration_unload: - self.send_model_to(devices.cpu) - - return restored_img - - global have_codeformer - have_codeformer = True - - global codeformer - codeformer = FaceRestorerCodeFormer(dirname) - shared.face_restorers.append(codeformer) - - except Exception: - print("Error setting up CodeFormer:", file=sys.stderr) - print(traceback.format_exc(), file=sys.stderr) - - # sys.path = stored_sys_path diff --git a/spaces/aodianyun/stable-diffusion-webui/modules/textual_inversion/logging.py b/spaces/aodianyun/stable-diffusion-webui/modules/textual_inversion/logging.py deleted file mode 100644 index b2c01f0a4ef6666c0c2e1147dbee9d6850d277c0..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/stable-diffusion-webui/modules/textual_inversion/logging.py +++ /dev/null @@ -1,24 +0,0 @@ -import datetime -import json -import os - -saved_params_shared = {"model_name", "model_hash", "initial_step", "num_of_dataset_images", "learn_rate", "batch_size", "clip_grad_mode", "clip_grad_value", "gradient_step", "data_root", "log_directory", "training_width", "training_height", "steps", "create_image_every", "template_file", "gradient_step", "latent_sampling_method"} -saved_params_ti = {"embedding_name", "num_vectors_per_token", "save_embedding_every", "save_image_with_stored_embedding"} -saved_params_hypernet = {"hypernetwork_name", "layer_structure", "activation_func", "weight_init", "add_layer_norm", "use_dropout", "save_hypernetwork_every"} -saved_params_all = saved_params_shared | saved_params_ti | saved_params_hypernet -saved_params_previews = {"preview_prompt", "preview_negative_prompt", "preview_steps", "preview_sampler_index", "preview_cfg_scale", "preview_seed", "preview_width", "preview_height"} - - -def save_settings_to_file(log_directory, all_params): - now = datetime.datetime.now() - params = {"datetime": now.strftime("%Y-%m-%d %H:%M:%S")} - - keys = saved_params_all - if all_params.get('preview_from_txt2img'): - keys = keys | saved_params_previews - - params.update({k: v for k, v in all_params.items() if k in keys}) - - filename = f'settings-{now.strftime("%Y-%m-%d-%H-%M-%S")}.json' - with open(os.path.join(log_directory, filename), "w") as file: - json.dump(params, file, indent=4) diff --git a/spaces/apsys/normflows/app.py b/spaces/apsys/normflows/app.py deleted file mode 100644 index cb2a47e6bec4337afb3db2cd12f6a842b3e264d4..0000000000000000000000000000000000000000 --- a/spaces/apsys/normflows/app.py +++ /dev/null @@ -1,61 +0,0 @@ -import streamlit as st -import torch -from normflows import nflow -import numpy as np -import seaborn as sns -import pandas as pd - -uploaded_file = st.file_uploader("Choose original dataset") -col1,col2,col3 = st.columns(3) -bw = col1.number_input('Scale',value=3.05) -wd = col2.number_input('Weight Decay',value=0.0002) -iters = col3.number_input('Iterations',value=400) - - - -def compute(dim): - api = nflow(dim=dim,latent=16,dataset=uploaded_file) - api.compile(optim=torch.optim.ASGD,bw=bw,lr=0.0001,wd=wd) - - my_bar = st.progress(0) - - for idx in api.train(iters=iters): - my_bar.progress(idx[0]/iters) - my_bar.progress(100) - samples = 
np.delete(np.array(api.model.sample(torch.tensor(api.scaled).float()).detach()),np.argmin(np.array(api.model.sample(torch.tensor(api.scaled).float()).detach()),axis=0),0) - # samples = np.delete(samples,np.argmax(samples,axis=0),0) - - - - # fig, ax = plt.subplots() - g = sns.jointplot(x=samples[:, 0], y=samples[:, 1], kind='kde',cmap=sns.color_palette("Blues", as_cmap=True),fill=True,label='Gaussian KDE',levels=1000) - - w = sns.scatterplot(x=api.scaled[:,0],y=api.scaled[:,1],ax=g.ax_joint,c='orange',marker='+',s=100,label='Real') - st.pyplot(w.get_figure()) - - - def random_normal_samples(n, dim=3): - return torch.zeros(n, dim).normal_(mean=0, std=1) - - samples = np.array(api.model.sample(torch.tensor(random_normal_samples(1000,api.scaled.shape[-1])).float()).detach()) - - return api.scaler.inverse_transform(samples) - -with st.form('login_form'): - st.write('Token for generation:') - token = st.text_input('Token') - submit = st.form_submit_button('Submit') - -if token in st.secrets['tokens'] and submit: - - if uploaded_file is not None: - dims = len(uploaded_file.getvalue().decode("utf-8").split('\n')[0].split(','))-1 - samples=compute(dims) - st.download_button('Download generated CSV', pd.DataFrame(samples).to_csv(), 'text/csv') - - elif not uploaded_file: - st.write('Upload your file') - -else: - st.markdown('## :red[You dont have access]') - st.markdown('Buy tokens here: [@advprop](https://adprop.t.me)') \ No newline at end of file diff --git a/spaces/aquaaaaaaaaaaaa/AI-minato_aqua/README.md b/spaces/aquaaaaaaaaaaaa/AI-minato_aqua/README.md deleted file mode 100644 index d56f95805723fd89a09598a1aa1b853b2d20c296..0000000000000000000000000000000000000000 --- a/spaces/aquaaaaaaaaaaaa/AI-minato_aqua/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: AI-minato Aqua -emoji: 🐠 -colorFrom: blue -colorTo: pink -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: false -license: other -duplicated_from: DoNotSelect/AI-minato_aqua ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/artificialguybr/video-dubbing/TTS/docs/source/main_classes/dataset.md b/spaces/artificialguybr/video-dubbing/TTS/docs/source/main_classes/dataset.md deleted file mode 100644 index 92d381aca552c6fe95a9573d76227b8aa51a8dc0..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/docs/source/main_classes/dataset.md +++ /dev/null @@ -1,25 +0,0 @@ -# Datasets - -## TTS Dataset - -```{eval-rst} -.. autoclass:: TTS.tts.datasets.TTSDataset - :members: -``` - -## Vocoder Dataset - -```{eval-rst} -.. autoclass:: TTS.vocoder.datasets.gan_dataset.GANDataset - :members: -``` - -```{eval-rst} -.. autoclass:: TTS.vocoder.datasets.wavegrad_dataset.WaveGradDataset - :members: -``` - -```{eval-rst} -.. autoclass:: TTS.vocoder.datasets.wavernn_dataset.WaveRNNDataset - :members: -``` \ No newline at end of file diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Cipher/DES3.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Cipher/DES3.py deleted file mode 100644 index c0d93671332171fd44900ab280a593bb9a486066..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Cipher/DES3.py +++ /dev/null @@ -1,187 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Cipher/DES3.py : DES3 -# -# =================================================================== -# The contents of this file are dedicated to the public domain. 
To -# the extent that dedication to the public domain is not available, -# everyone is granted a worldwide, perpetual, royalty-free, -# non-exclusive license to exercise all rights associated with the -# contents of this file for any purpose whatsoever. -# No rights are reserved. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS -# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN -# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN -# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. -# =================================================================== -""" -Module's constants for the modes of operation supported with Triple DES: - -:var MODE_ECB: :ref:`Electronic Code Book (ECB) ` -:var MODE_CBC: :ref:`Cipher-Block Chaining (CBC) ` -:var MODE_CFB: :ref:`Cipher FeedBack (CFB) ` -:var MODE_OFB: :ref:`Output FeedBack (OFB) ` -:var MODE_CTR: :ref:`CounTer Mode (CTR) ` -:var MODE_OPENPGP: :ref:`OpenPGP Mode ` -:var MODE_EAX: :ref:`EAX Mode ` -""" - -import sys - -from Crypto.Cipher import _create_cipher -from Crypto.Util.py3compat import byte_string, bchr, bord, bstr -from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, - VoidPointer, SmartPointer, - c_size_t) - -_raw_des3_lib = load_pycryptodome_raw_lib( - "Crypto.Cipher._raw_des3", - """ - int DES3_start_operation(const uint8_t key[], - size_t key_len, - void **pResult); - int DES3_encrypt(const void *state, - const uint8_t *in, - uint8_t *out, - size_t data_len); - int DES3_decrypt(const void *state, - const uint8_t *in, - uint8_t *out, - size_t data_len); - int DES3_stop_operation(void *state); - """) - - -def adjust_key_parity(key_in): - """Set the parity bits in a TDES key. - - :param key_in: the TDES key whose bits need to be adjusted - :type key_in: byte string - - :returns: a copy of ``key_in``, with the parity bits correctly set - :rtype: byte string - - :raises ValueError: if the TDES key is not 16 or 24 bytes long - :raises ValueError: if the TDES key degenerates into Single DES - """ - - def parity_byte(key_byte): - parity = 1 - for i in range(1, 8): - parity ^= (key_byte >> i) & 1 - return (key_byte & 0xFE) | parity - - if len(key_in) not in key_size: - raise ValueError("Not a valid TDES key") - - key_out = b"".join([ bchr(parity_byte(bord(x))) for x in key_in ]) - - if key_out[:8] == key_out[8:16] or key_out[-16:-8] == key_out[-8:]: - raise ValueError("Triple DES key degenerates to single DES") - - return key_out - - -def _create_base_cipher(dict_parameters): - """This method instantiates and returns a handle to a low-level base cipher. - It will absorb named parameters in the process.""" - - try: - key_in = dict_parameters.pop("key") - except KeyError: - raise TypeError("Missing 'key' parameter") - - key = adjust_key_parity(bstr(key_in)) - - start_operation = _raw_des3_lib.DES3_start_operation - stop_operation = _raw_des3_lib.DES3_stop_operation - - cipher = VoidPointer() - result = start_operation(key, - c_size_t(len(key)), - cipher.address_of()) - if result: - raise ValueError("Error %X while instantiating the TDES cipher" - % result) - return SmartPointer(cipher.get(), stop_operation) - - -def new(key, mode, *args, **kwargs): - """Create a new Triple DES cipher. - - :param key: - The secret key to use in the symmetric cipher. 
- It must be 16 or 24 byte long. The parity bits will be ignored. - :type key: bytes/bytearray/memoryview - - :param mode: - The chaining mode to use for encryption or decryption. - :type mode: One of the supported ``MODE_*`` constants - - :Keyword Arguments: - * **iv** (*bytes*, *bytearray*, *memoryview*) -- - (Only applicable for ``MODE_CBC``, ``MODE_CFB``, ``MODE_OFB``, - and ``MODE_OPENPGP`` modes). - - The initialization vector to use for encryption or decryption. - - For ``MODE_CBC``, ``MODE_CFB``, and ``MODE_OFB`` it must be 8 bytes long. - - For ``MODE_OPENPGP`` mode only, - it must be 8 bytes long for encryption - and 10 bytes for decryption (in the latter case, it is - actually the *encrypted* IV which was prefixed to the ciphertext). - - If not provided, a random byte string is generated (you must then - read its value with the :attr:`iv` attribute). - - * **nonce** (*bytes*, *bytearray*, *memoryview*) -- - (Only applicable for ``MODE_EAX`` and ``MODE_CTR``). - - A value that must never be reused for any other encryption done - with this key. - - For ``MODE_EAX`` there are no - restrictions on its length (recommended: **16** bytes). - - For ``MODE_CTR``, its length must be in the range **[0..7]**. - - If not provided for ``MODE_EAX``, a random byte string is generated (you - can read it back via the ``nonce`` attribute). - - * **segment_size** (*integer*) -- - (Only ``MODE_CFB``).The number of **bits** the plaintext and ciphertext - are segmented in. It must be a multiple of 8. - If not specified, it will be assumed to be 8. - - * **mac_len** : (*integer*) -- - (Only ``MODE_EAX``) - Length of the authentication tag, in bytes. - It must be no longer than 8 (default). - - * **initial_value** : (*integer*) -- - (Only ``MODE_CTR``). The initial value for the counter within - the counter block. By default it is **0**. - - :Return: a Triple DES object, of the applicable mode. - """ - - return _create_cipher(sys.modules[__name__], key, mode, *args, **kwargs) - -MODE_ECB = 1 -MODE_CBC = 2 -MODE_CFB = 3 -MODE_OFB = 5 -MODE_CTR = 6 -MODE_OPENPGP = 7 -MODE_EAX = 9 - -# Size of a data block (in bytes) -block_size = 8 -# Size of a key (in bytes) -key_size = (16, 24) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/SunImagePlugin.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/SunImagePlugin.py deleted file mode 100644 index c03759a01e6d3422dd636ff102d4284644adf0ff..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/SunImagePlugin.py +++ /dev/null @@ -1,136 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# Sun image file handling -# -# History: -# 1995-09-10 fl Created -# 1996-05-28 fl Fixed 32-bit alignment -# 1998-12-29 fl Import ImagePalette module -# 2001-12-18 fl Fixed palette loading (from Jean-Claude Rimbault) -# -# Copyright (c) 1997-2001 by Secret Labs AB -# Copyright (c) 1995-1996 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - - -from . import Image, ImageFile, ImagePalette -from ._binary import i32be as i32 - - -def _accept(prefix): - return len(prefix) >= 4 and i32(prefix) == 0x59A66A95 - - -## -# Image plugin for Sun raster files. 
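-#
-# A minimal usage sketch (the file name is hypothetical; the plugin registers
-# itself with Pillow via the Image.register_open() call at the bottom of this
-# file):
-#
-#     from PIL import Image
-#     with Image.open("picture.ras") as im:
-#         im.save("picture.png")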
- - -class SunImageFile(ImageFile.ImageFile): - - format = "SUN" - format_description = "Sun Raster File" - - def _open(self): - - # The Sun Raster file header is 32 bytes in length - # and has the following format: - - # typedef struct _SunRaster - # { - # DWORD MagicNumber; /* Magic (identification) number */ - # DWORD Width; /* Width of image in pixels */ - # DWORD Height; /* Height of image in pixels */ - # DWORD Depth; /* Number of bits per pixel */ - # DWORD Length; /* Size of image data in bytes */ - # DWORD Type; /* Type of raster file */ - # DWORD ColorMapType; /* Type of color map */ - # DWORD ColorMapLength; /* Size of the color map in bytes */ - # } SUNRASTER; - - # HEAD - s = self.fp.read(32) - if not _accept(s): - raise SyntaxError("not an SUN raster file") - - offset = 32 - - self._size = i32(s, 4), i32(s, 8) - - depth = i32(s, 12) - # data_length = i32(s, 16) # unreliable, ignore. - file_type = i32(s, 20) - palette_type = i32(s, 24) # 0: None, 1: RGB, 2: Raw/arbitrary - palette_length = i32(s, 28) - - if depth == 1: - self.mode, rawmode = "1", "1;I" - elif depth == 4: - self.mode, rawmode = "L", "L;4" - elif depth == 8: - self.mode = rawmode = "L" - elif depth == 24: - if file_type == 3: - self.mode, rawmode = "RGB", "RGB" - else: - self.mode, rawmode = "RGB", "BGR" - elif depth == 32: - if file_type == 3: - self.mode, rawmode = "RGB", "RGBX" - else: - self.mode, rawmode = "RGB", "BGRX" - else: - raise SyntaxError("Unsupported Mode/Bit Depth") - - if palette_length: - if palette_length > 1024: - raise SyntaxError("Unsupported Color Palette Length") - - if palette_type != 1: - raise SyntaxError("Unsupported Palette Type") - - offset = offset + palette_length - self.palette = ImagePalette.raw("RGB;L", self.fp.read(palette_length)) - if self.mode == "L": - self.mode = "P" - rawmode = rawmode.replace("L", "P") - - # 16 bit boundaries on stride - stride = ((self.size[0] * depth + 15) // 16) * 2 - - # file type: Type is the version (or flavor) of the bitmap - # file. The following values are typically found in the Type - # field: - # 0000h Old - # 0001h Standard - # 0002h Byte-encoded - # 0003h RGB format - # 0004h TIFF format - # 0005h IFF format - # FFFFh Experimental - - # Old and standard are the same, except for the length tag. - # byte-encoded is run-length-encoded - # RGB looks similar to standard, but RGB byte order - # TIFF and IFF mean that they were converted from T/IFF - # Experimental means that it's something else. - # (https://www.fileformat.info/format/sunraster/egff.htm) - - if file_type in (0, 1, 3, 4, 5): - self.tile = [("raw", (0, 0) + self.size, offset, (rawmode, stride))] - elif file_type == 2: - self.tile = [("sun_rle", (0, 0) + self.size, offset, rawmode)] - else: - raise SyntaxError("Unsupported Sun Raster file type") - - -# -# registry - - -Image.register_open(SunImageFile.format, SunImageFile, _accept) - -Image.register_extension(SunImageFile.format, ".ras") diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/bar_chart_horizontal.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/bar_chart_horizontal.py deleted file mode 100644 index 9d145f7fd908df48f54fd5b5860590849e802871..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/bar_chart_horizontal.py +++ /dev/null @@ -1,15 +0,0 @@ -""" -Horizontal Bar Chart --------------------- -This example is a bar chart drawn horizontally by putting the quantitative value on the x axis. 
-""" -# category: bar charts -import altair as alt -from vega_datasets import data - -source = data.wheat() - -alt.Chart(source).mark_bar().encode( - x='wheat:Q', - y="year:O" -).properties(height=700) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/histogram_responsive.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/histogram_responsive.py deleted file mode 100644 index c8af4e7148d97d9babde22a9824843693bcd6074..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/histogram_responsive.py +++ /dev/null @@ -1,35 +0,0 @@ -""" -Histogram with Responsive Bins ------------------------------- -This shows an example of a histogram with bins that are responsive to a -selection domain. Click and drag on the bottom panel to see the bins -change on the top panel. -""" -# category: histograms -import altair as alt -from vega_datasets import data - -source = data.flights_5k.url - -brush = alt.selection_interval(encodings=['x']) - -base = alt.Chart(source).transform_calculate( - time="hours(datum.date) + minutes(datum.date) / 60" -).mark_bar().encode( - y='count():Q' -).properties( - width=600, - height=100 -) - -alt.vconcat( - base.encode( - alt.X('time:Q', - bin=alt.Bin(maxbins=30, extent=brush), - scale=alt.Scale(domain=brush) - ) - ), - base.encode( - alt.X('time:Q', bin=alt.Bin(maxbins=30)), - ).add_selection(brush) -) \ No newline at end of file diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/multifeature_scatter_plot.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/multifeature_scatter_plot.py deleted file mode 100644 index 40e189bbf6608330dba5c9d69ba24af286782f17..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/multifeature_scatter_plot.py +++ /dev/null @@ -1,17 +0,0 @@ -""" -Multifeature Scatter Plot -========================= -This example shows how to make a scatter plot with multiple feature encodings. -""" -# category: scatter plots -import altair as alt -from vega_datasets import data - -source = data.iris() - -alt.Chart(source).mark_circle().encode( - alt.X('sepalLength', scale=alt.Scale(zero=False)), - alt.Y('sepalWidth', scale=alt.Scale(zero=False, padding=1)), - color='species', - size='petalWidth' -) diff --git a/spaces/aryadytm/remove-photo-background/src/models/backbones/mobilenetv2.py b/spaces/aryadytm/remove-photo-background/src/models/backbones/mobilenetv2.py deleted file mode 100644 index 709d352565799f181bfaed652c796ef065e71a0f..0000000000000000000000000000000000000000 --- a/spaces/aryadytm/remove-photo-background/src/models/backbones/mobilenetv2.py +++ /dev/null @@ -1,199 +0,0 @@ -""" This file is adapted from https://github.com/thuyngch/Human-Segmentation-PyTorch""" - -import math -import json -from functools import reduce - -import torch -from torch import nn - - -#------------------------------------------------------------------------------ -# Useful functions -#------------------------------------------------------------------------------ - -def _make_divisible(v, divisor, min_value=None): - if min_value is None: - min_value = divisor - new_v = max(min_value, int(v + divisor / 2) // divisor * divisor) - # Make sure that round down does not go down by more than 10%. 
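-    # e.g. _make_divisible(10, 8): plain rounding gives 8, a 20% drop from 10,
-    # so the check below bumps the result up to the next multiple, 16.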
- if new_v < 0.9 * v: - new_v += divisor - return new_v - - -def conv_bn(inp, oup, stride): - return nn.Sequential( - nn.Conv2d(inp, oup, 3, stride, 1, bias=False), - nn.BatchNorm2d(oup), - nn.ReLU6(inplace=True) - ) - - -def conv_1x1_bn(inp, oup): - return nn.Sequential( - nn.Conv2d(inp, oup, 1, 1, 0, bias=False), - nn.BatchNorm2d(oup), - nn.ReLU6(inplace=True) - ) - - -#------------------------------------------------------------------------------ -# Class of Inverted Residual block -#------------------------------------------------------------------------------ - -class InvertedResidual(nn.Module): - def __init__(self, inp, oup, stride, expansion, dilation=1): - super(InvertedResidual, self).__init__() - self.stride = stride - assert stride in [1, 2] - - hidden_dim = round(inp * expansion) - self.use_res_connect = self.stride == 1 and inp == oup - - if expansion == 1: - self.conv = nn.Sequential( - # dw - nn.Conv2d(hidden_dim, hidden_dim, 3, stride, 1, groups=hidden_dim, dilation=dilation, bias=False), - nn.BatchNorm2d(hidden_dim), - nn.ReLU6(inplace=True), - # pw-linear - nn.Conv2d(hidden_dim, oup, 1, 1, 0, bias=False), - nn.BatchNorm2d(oup), - ) - else: - self.conv = nn.Sequential( - # pw - nn.Conv2d(inp, hidden_dim, 1, 1, 0, bias=False), - nn.BatchNorm2d(hidden_dim), - nn.ReLU6(inplace=True), - # dw - nn.Conv2d(hidden_dim, hidden_dim, 3, stride, 1, groups=hidden_dim, dilation=dilation, bias=False), - nn.BatchNorm2d(hidden_dim), - nn.ReLU6(inplace=True), - # pw-linear - nn.Conv2d(hidden_dim, oup, 1, 1, 0, bias=False), - nn.BatchNorm2d(oup), - ) - - def forward(self, x): - if self.use_res_connect: - return x + self.conv(x) - else: - return self.conv(x) - - -#------------------------------------------------------------------------------ -# Class of MobileNetV2 -#------------------------------------------------------------------------------ - -class MobileNetV2(nn.Module): - def __init__(self, in_channels, alpha=1.0, expansion=6, num_classes=1000): - super(MobileNetV2, self).__init__() - self.in_channels = in_channels - self.num_classes = num_classes - input_channel = 32 - last_channel = 1280 - interverted_residual_setting = [ - # t, c, n, s - [1 , 16, 1, 1], - [expansion, 24, 2, 2], - [expansion, 32, 3, 2], - [expansion, 64, 4, 2], - [expansion, 96, 3, 1], - [expansion, 160, 3, 2], - [expansion, 320, 1, 1], - ] - - # building first layer - input_channel = _make_divisible(input_channel*alpha, 8) - self.last_channel = _make_divisible(last_channel*alpha, 8) if alpha > 1.0 else last_channel - self.features = [conv_bn(self.in_channels, input_channel, 2)] - - # building inverted residual blocks - for t, c, n, s in interverted_residual_setting: - output_channel = _make_divisible(int(c*alpha), 8) - for i in range(n): - if i == 0: - self.features.append(InvertedResidual(input_channel, output_channel, s, expansion=t)) - else: - self.features.append(InvertedResidual(input_channel, output_channel, 1, expansion=t)) - input_channel = output_channel - - # building last several layers - self.features.append(conv_1x1_bn(input_channel, self.last_channel)) - - # make it nn.Sequential - self.features = nn.Sequential(*self.features) - - # building classifier - if self.num_classes is not None: - self.classifier = nn.Sequential( - nn.Dropout(0.2), - nn.Linear(self.last_channel, num_classes), - ) - - # Initialize weights - self._init_weights() - - def forward(self, x): - # Stage1 - x = self.features[0](x) - x = self.features[1](x) - # Stage2 - x = self.features[2](x) - x = self.features[3](x) - # Stage3 - x = 
self.features[4](x) - x = self.features[5](x) - x = self.features[6](x) - # Stage4 - x = self.features[7](x) - x = self.features[8](x) - x = self.features[9](x) - x = self.features[10](x) - x = self.features[11](x) - x = self.features[12](x) - x = self.features[13](x) - # Stage5 - x = self.features[14](x) - x = self.features[15](x) - x = self.features[16](x) - x = self.features[17](x) - x = self.features[18](x) - - # Classification - if self.num_classes is not None: - x = x.mean(dim=(2,3)) - x = self.classifier(x) - - # Output - return x - - def _load_pretrained_model(self, pretrained_file): - pretrain_dict = torch.load(pretrained_file, map_location='cpu') - model_dict = {} - state_dict = self.state_dict() - print("[MobileNetV2] Loading pretrained model...") - for k, v in pretrain_dict.items(): - if k in state_dict: - model_dict[k] = v - else: - print(k, "is ignored") - state_dict.update(model_dict) - self.load_state_dict(state_dict) - - def _init_weights(self): - for m in self.modules(): - if isinstance(m, nn.Conv2d): - n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels - m.weight.data.normal_(0, math.sqrt(2. / n)) - if m.bias is not None: - m.bias.data.zero_() - elif isinstance(m, nn.BatchNorm2d): - m.weight.data.fill_(1) - m.bias.data.zero_() - elif isinstance(m, nn.Linear): - n = m.weight.size(1) - m.weight.data.normal_(0, 0.01) - m.bias.data.zero_() diff --git a/spaces/avivdm1/AutoGPT/autogpt/commands/twitter.py b/spaces/avivdm1/AutoGPT/autogpt/commands/twitter.py deleted file mode 100644 index 3eaed36e20e1c520690ac59f25a4da6501f3440f..0000000000000000000000000000000000000000 --- a/spaces/avivdm1/AutoGPT/autogpt/commands/twitter.py +++ /dev/null @@ -1,26 +0,0 @@ -import os - -import tweepy -from dotenv import load_dotenv - -load_dotenv() - - -def send_tweet(tweet_text): - consumer_key = os.environ.get("TW_CONSUMER_KEY") - consumer_secret = os.environ.get("TW_CONSUMER_SECRET") - access_token = os.environ.get("TW_ACCESS_TOKEN") - access_token_secret = os.environ.get("TW_ACCESS_TOKEN_SECRET") - # Authenticate to Twitter - auth = tweepy.OAuthHandler(consumer_key, consumer_secret) - auth.set_access_token(access_token, access_token_secret) - - # Create API object - api = tweepy.API(auth) - - # Send tweet - try: - api.update_status(tweet_text) - print("Tweet sent successfully!") - except tweepy.TweepyException as e: - print("Error sending tweet: {}".format(e.reason)) diff --git a/spaces/awacke1/ChatGPTStreamlit7/app.py b/spaces/awacke1/ChatGPTStreamlit7/app.py deleted file mode 100644 index 9fcd544b1048b3bbe3efd012716150f93ac3564d..0000000000000000000000000000000000000000 --- a/spaces/awacke1/ChatGPTStreamlit7/app.py +++ /dev/null @@ -1,258 +0,0 @@ -import streamlit as st -import openai -import os -import base64 -import glob -import json -import mistune -import pytz -import math -import requests - -from datetime import datetime -from openai import ChatCompletion -from xml.etree import ElementTree as ET -from bs4 import BeautifulSoup -from collections import deque -from audio_recorder_streamlit import audio_recorder - -def generate_filename(prompt, file_type): - central = pytz.timezone('US/Central') - safe_date_time = datetime.now(central).strftime("%m%d_%I%M") - safe_prompt = "".join(x for x in prompt if x.isalnum())[:45] - return f"{safe_date_time}_{safe_prompt}.{file_type}" - -def chat_with_model(prompt, document_section): - model = model_choice - conversation = [{'role': 'system', 'content': 'You are a helpful assistant.'}] - conversation.append({'role': 'user', 'content': prompt}) 
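-    # If a document section was provided, append it as assistant context so the
-    # model can ground its reply in the uploaded file contents.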
- if len(document_section)>0: - conversation.append({'role': 'assistant', 'content': document_section}) - response = openai.ChatCompletion.create(model=model, messages=conversation) - #return response - return response['choices'][0]['message']['content'] - -def transcribe_audio(openai_key, file_path, model): - OPENAI_API_URL = "https://api.openai.com/v1/audio/transcriptions" - headers = { - "Authorization": f"Bearer {openai_key}", - } - with open(file_path, 'rb') as f: - data = {'file': f} - response = requests.post(OPENAI_API_URL, headers=headers, files=data, data={'model': model}) - if response.status_code == 200: - st.write(response.json()) - - response2 = chat_with_model(response.json().get('text'), '') # ************************************* - st.write('Responses:') - #st.write(response) - st.write(response2) - return response.json().get('text') - else: - st.write(response.json()) - st.error("Error in API call.") - return None - -def save_and_play_audio(audio_recorder): - audio_bytes = audio_recorder() - if audio_bytes: - filename = generate_filename("Recording", "wav") - with open(filename, 'wb') as f: - f.write(audio_bytes) - st.audio(audio_bytes, format="audio/wav") - return filename - return None - -def create_file(filename, prompt, response): - if filename.endswith(".txt"): - with open(filename, 'w') as file: - file.write(f"{prompt}\n{response}") - elif filename.endswith(".htm"): - with open(filename, 'w') as file: - file.write(f"{prompt} {response}") - elif filename.endswith(".md"): - with open(filename, 'w') as file: - file.write(f"{prompt}\n\n{response}") - -def truncate_document(document, length): - return document[:length] -def divide_document(document, max_length): - return [document[i:i+max_length] for i in range(0, len(document), max_length)] - -def get_table_download_link(file_path): - with open(file_path, 'r') as file: - data = file.read() - b64 = base64.b64encode(data.encode()).decode() - file_name = os.path.basename(file_path) - ext = os.path.splitext(file_name)[1] # get the file extension - if ext == '.txt': - mime_type = 'text/plain' - elif ext == '.py': - mime_type = 'text/plain' - elif ext == '.xlsx': - mime_type = 'text/plain' - elif ext == '.csv': - mime_type = 'text/plain' - elif ext == '.htm': - mime_type = 'text/html' - elif ext == '.md': - mime_type = 'text/markdown' - else: - mime_type = 'application/octet-stream' # general binary data type - href = f'{file_name}' - return href - -def CompressXML(xml_text): - root = ET.fromstring(xml_text) - for elem in list(root.iter()): - if isinstance(elem.tag, str) and 'Comment' in elem.tag: - elem.parent.remove(elem) - return ET.tostring(root, encoding='unicode', method="xml") - -def read_file_content(file,max_length): - if file.type == "application/json": - content = json.load(file) - return str(content) - elif file.type == "text/html" or file.type == "text/htm": - content = BeautifulSoup(file, "html.parser") - return content.text - elif file.type == "application/xml" or file.type == "text/xml": - tree = ET.parse(file) - root = tree.getroot() - xml = CompressXML(ET.tostring(root, encoding='unicode')) - return xml - elif file.type == "text/markdown" or file.type == "text/md": - md = mistune.create_markdown() - content = md(file.read().decode()) - return content - elif file.type == "text/plain": - return file.getvalue().decode() - else: - return "" - - - -def chat_with_file_contents(prompt, file_content): - conversation = [{'role': 'system', 'content': 'You are a helpful assistant.'}] - conversation.append({'role': 
'user', 'content': prompt}) - if len(file_content)>0: - conversation.append({'role': 'assistant', 'content': file_content}) - response = openai.ChatCompletion.create(model=model_choice, messages=conversation) - return response['choices'][0]['message']['content'] - - -# Sidebar and global -openai.api_key = os.getenv('OPENAI_KEY') -st.set_page_config(page_title="GPT Streamlit Document Reasoner",layout="wide") -menu = ["htm", "txt", "xlsx", "csv", "md", "py"] #619 -choice = st.sidebar.selectbox("Output File Type:", menu) -model_choice = st.sidebar.radio("Select Model:", ('gpt-3.5-turbo', 'gpt-3.5-turbo-0301')) - -# Audio, transcribe, GPT: -filename = save_and_play_audio(audio_recorder) -if filename is not None: - transcription = transcribe_audio(openai.api_key, filename, "whisper-1") - st.write(transcription) - gptOutput = chat_with_model(transcription, '') # ************************************* - filename = generate_filename(transcription, choice) - create_file(filename, transcription, gptOutput) - st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True) - - -def main(): - user_prompt = st.text_area("Enter prompts, instructions & questions:", '', height=100) - - collength, colupload = st.columns([2,3]) # adjust the ratio as needed - with collength: - #max_length = 12000 - optimal for gpt35 turbo. 2x=24000 for gpt4. 8x=96000 for gpt4-32k. - max_length = st.slider("File section length for large files", min_value=1000, max_value=128000, value=12000, step=1000) - with colupload: - uploaded_file = st.file_uploader("Add a file for context:", type=["xml", "json", "xlsx","csv","html", "htm", "md", "txt"]) - - document_sections = deque() - document_responses = {} - - if uploaded_file is not None: - file_content = read_file_content(uploaded_file, max_length) - document_sections.extend(divide_document(file_content, max_length)) - - if len(document_sections) > 0: - - if st.button("👁️ View Upload"): - st.markdown("**Sections of the uploaded file:**") - for i, section in enumerate(list(document_sections)): - st.markdown(f"**Section {i+1}**\n{section}") - - st.markdown("**Chat with the model:**") - for i, section in enumerate(list(document_sections)): - if i in document_responses: - st.markdown(f"**Section {i+1}**\n{document_responses[i]}") - else: - if st.button(f"Chat about Section {i+1}"): - st.write('Reasoning with your inputs...') - response = chat_with_model(user_prompt, section) # ************************************* - st.write('Response:') - st.write(response) - document_responses[i] = response - filename = generate_filename(f"{user_prompt}_section_{i+1}", choice) - create_file(filename, user_prompt, response) - st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True) - - if st.button('💬 Chat'): - st.write('Reasoning with your inputs...') - response = chat_with_model(user_prompt, ''.join(list(document_sections))) # ************************************* - st.write('Response:') - st.write(response) - - filename = generate_filename(user_prompt, choice) - create_file(filename, user_prompt, response) - st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True) - - all_files = glob.glob("*.*") - all_files = [file for file in all_files if len(os.path.splitext(file)[0]) >= 20] # exclude files with short names - all_files.sort(key=lambda x: (os.path.splitext(x)[1], x), reverse=True) # sort by file type and file name in descending order - - # sidebar of files - file_contents='' - next_action='' - for file in all_files: - col1, col2, col3, 
col4, col5 = st.sidebar.columns([1,6,1,1,1]) # adjust the ratio as needed - with col1: - if st.button("🌐", key="md_"+file): # md emoji button - with open(file, 'r') as f: - file_contents = f.read() - next_action='md' - with col2: - st.markdown(get_table_download_link(file), unsafe_allow_html=True) - with col3: - if st.button("📂", key="open_"+file): # open emoji button - with open(file, 'r') as f: - file_contents = f.read() - next_action='open' - with col4: - if st.button("🔍", key="read_"+file): # search emoji button - with open(file, 'r') as f: - file_contents = f.read() - next_action='search' - with col5: - if st.button("🗑", key="delete_"+file): - os.remove(file) - st.experimental_rerun() - - if len(file_contents) > 0: - if next_action=='open': - file_content_area = st.text_area("File Contents:", file_contents, height=500) - if next_action=='md': - st.markdown(file_contents) - if next_action=='search': - file_content_area = st.text_area("File Contents:", file_contents, height=500) - st.write('Reasoning with your inputs...') - response = chat_with_file_contents(user_prompt, file_contents) - st.write('Response:') - st.write(response) - filename = generate_filename(file_content_area, choice) - create_file(filename, file_content_area, response) - st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True) - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/awacke1/Docker.Jupyterlab.Integration.HF/README.md b/spaces/awacke1/Docker.Jupyterlab.Integration.HF/README.md deleted file mode 100644 index 2d16889cfdb8e0cd7bd088ff78a4f33a3f5464cf..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Docker.Jupyterlab.Integration.HF/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: JupyterLab -emoji: 💻🐳 -colorFrom: gray -colorTo: green -sdk: docker -pinned: false -tags: -- jupyterlab -duplicated_from: DockerTemplates/jupyterlab ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awacke1/MLOpsStreamlit/README.md b/spaces/awacke1/MLOpsStreamlit/README.md deleted file mode 100644 index 244e4a9cdc157fc0faef2319be43b7495e342efe..0000000000000000000000000000000000000000 --- a/spaces/awacke1/MLOpsStreamlit/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 📊Graph ML Ops NLP Words📊 -emoji: 📊💬📊 -colorFrom: blue -colorTo: blue -sdk: streamlit -sdk_version: 1.9.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/awacke1/SOTA-Plan/app.py b/spaces/awacke1/SOTA-Plan/app.py deleted file mode 100644 index 3f851aeabce66431337ae8ca11b4ba4ae1e40b0b..0000000000000000000000000000000000000000 --- a/spaces/awacke1/SOTA-Plan/app.py +++ /dev/null @@ -1,36 +0,0 @@ -import gradio as gr - -context = "What should be documented in a care plan?\n" -context = context + "Regardless of what your preferences are, your care plan should include:\n" -context = context + "What your assessed care needs are.\n" -context = context + "What type of support you should receive.\n" -context = context + "Your desired outcomes.\n" -context = context + "Who should provide care.\n" -context = context + "When care and support should be provided.\n" -context = context + "Records of care provided.\n" -context = context + "Your wishes and personal preferences.\n" -context = context + "The costs of the services.\n" -context = context + "Dimensions\n" -context = context + "1-Ontology of Plan\n" -context = 
context + "2-Problems as evidenced by Signs of Systems\n" -context = context + "3-Assessment of Needs\n" -context = context + "4-Questions about problems faced\n" -context = context + "5-Goals for long and short term improvements\n" -context = context + "6-Knowledge-Behavior-Status Quality Measures\n" -context = context + "7-Intervention List of Options\n" -context = context + "8-Quality Measures\n" -context = context + "9-Pathways Available\n" - -with open('WritingCarePlans.txt', 'r') as file: - context = file.read() - -question = "What should be documented in a care plan?" - -gr.Interface.load( - "huggingface/deepset/roberta-base-squad2", - theme="default", - css=".footer{display:none !important}", - inputs=[gr.inputs.Textbox(lines=12, default=context, label="Context paragraph"), gr.inputs.Textbox(lines=3, default=question, label="Question")], - outputs=[gr.outputs.Textbox(label="Answer"), gr.outputs.Textbox(label="Score")], - title=None, - description="Provide your own paragraph and ask any question about the text. How well does the model answer?").launch() \ No newline at end of file diff --git a/spaces/awacke1/Try.Playing.Learning.Sharing.On.This/style.css b/spaces/awacke1/Try.Playing.Learning.Sharing.On.This/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Try.Playing.Learning.Sharing.On.This/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/AMFLoader.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/AMFLoader.js deleted file mode 100644 index 55ef7cd3b770acb75aa3ae266f733efe40983130..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/AMFLoader.js +++ /dev/null @@ -1,495 +0,0 @@ -/* - * @author tamarintech / https://tamarintech.com - * - * Description: Early release of an AMF Loader following the pattern of the - * example loaders in the three.js project. - * - * More information about the AMF format: http://amf.wikispaces.com - * - * Usage: - * var loader = new AMFLoader(); - * loader.load('/path/to/project.amf', function(objecttree) { - * scene.add(objecttree); - * }); - * - * Materials now supported, material colors supported - * Zip support, requires jszip - * No constellation support (yet)! - * - */ - -THREE.AMFLoader = function ( manager ) { - - this.manager = ( manager !== undefined ) ? 
manager : THREE.DefaultLoadingManager; - -}; - -THREE.AMFLoader.prototype = { - - constructor: THREE.AMFLoader, - - load: function ( url, onLoad, onProgress, onError ) { - - var scope = this; - - var loader = new THREE.FileLoader( scope.manager ); - loader.setPath( scope.path ); - loader.setResponseType( 'arraybuffer' ); - loader.load( url, function ( text ) { - - onLoad( scope.parse( text ) ); - - }, onProgress, onError ); - - }, - - setPath: function ( value ) { - - this.path = value; - return this; - - }, - - parse: function ( data ) { - - function loadDocument( data ) { - - var view = new DataView( data ); - var magic = String.fromCharCode( view.getUint8( 0 ), view.getUint8( 1 ) ); - - if ( magic === 'PK' ) { - - var zip = null; - var file = null; - - console.log( 'THREE.AMFLoader: Loading Zip' ); - - try { - - zip = new JSZip( data ); // eslint-disable-line no-undef - - } catch ( e ) { - - if ( e instanceof ReferenceError ) { - - console.log( 'THREE.AMFLoader: jszip missing and file is compressed.' ); - return null; - - } - - } - - for ( file in zip.files ) { - - if ( file.toLowerCase().substr( - 4 ) === '.amf' ) { - - break; - - } - - } - - console.log( 'THREE.AMFLoader: Trying to load file asset: ' + file ); - view = new DataView( zip.file( file ).asArrayBuffer() ); - - } - - var fileText = THREE.LoaderUtils.decodeText( view ); - var xmlData = new DOMParser().parseFromString( fileText, 'application/xml' ); - - if ( xmlData.documentElement.nodeName.toLowerCase() !== 'amf' ) { - - console.log( 'THREE.AMFLoader: Error loading AMF - no AMF document found.' ); - return null; - - } - - return xmlData; - - } - - function loadDocumentScale( node ) { - - var scale = 1.0; - var unit = 'millimeter'; - - if ( node.documentElement.attributes.unit !== undefined ) { - - unit = node.documentElement.attributes.unit.value.toLowerCase(); - - } - - var scaleUnits = { - millimeter: 1.0, - inch: 25.4, - feet: 304.8, - meter: 1000.0, - micron: 0.001 - }; - - if ( scaleUnits[ unit ] !== undefined ) { - - scale = scaleUnits[ unit ]; - - } - - console.log( 'THREE.AMFLoader: Unit scale: ' + scale ); - return scale; - - } - - function loadMaterials( node ) { - - var matName = 'AMF Material'; - var matId = node.attributes.id.textContent; - var color = { r: 1.0, g: 1.0, b: 1.0, a: 1.0 }; - - var loadedMaterial = null; - - for ( var i = 0; i < node.childNodes.length; i ++ ) { - - var matChildEl = node.childNodes[ i ]; - - if ( matChildEl.nodeName === 'metadata' && matChildEl.attributes.type !== undefined ) { - - if ( matChildEl.attributes.type.value === 'name' ) { - - matName = matChildEl.textContent; - - } - - } else if ( matChildEl.nodeName === 'color' ) { - - color = loadColor( matChildEl ); - - } - - } - - loadedMaterial = new THREE.MeshPhongMaterial( { - flatShading: true, - color: new THREE.Color( color.r, color.g, color.b ), - name: matName - } ); - - if ( color.a !== 1.0 ) { - - loadedMaterial.transparent = true; - loadedMaterial.opacity = color.a; - - } - - return { id: matId, material: loadedMaterial }; - - } - - function loadColor( node ) { - - var color = { r: 1.0, g: 1.0, b: 1.0, a: 1.0 }; - - for ( var i = 0; i < node.childNodes.length; i ++ ) { - - var matColor = node.childNodes[ i ]; - - if ( matColor.nodeName === 'r' ) { - - color.r = matColor.textContent; - - } else if ( matColor.nodeName === 'g' ) { - - color.g = matColor.textContent; - - } else if ( matColor.nodeName === 'b' ) { - - color.b = matColor.textContent; - - } else if ( matColor.nodeName === 'a' ) { - - color.a = 
matColor.textContent; - - } - - } - - return color; - - } - - function loadMeshVolume( node ) { - - var volume = { name: '', triangles: [], materialid: null }; - - var currVolumeNode = node.firstElementChild; - - if ( node.attributes.materialid !== undefined ) { - - volume.materialId = node.attributes.materialid.nodeValue; - - } - - while ( currVolumeNode ) { - - if ( currVolumeNode.nodeName === 'metadata' ) { - - if ( currVolumeNode.attributes.type !== undefined ) { - - if ( currVolumeNode.attributes.type.value === 'name' ) { - - volume.name = currVolumeNode.textContent; - - } - - } - - } else if ( currVolumeNode.nodeName === 'triangle' ) { - - var v1 = currVolumeNode.getElementsByTagName( 'v1' )[ 0 ].textContent; - var v2 = currVolumeNode.getElementsByTagName( 'v2' )[ 0 ].textContent; - var v3 = currVolumeNode.getElementsByTagName( 'v3' )[ 0 ].textContent; - - volume.triangles.push( v1, v2, v3 ); - - } - - currVolumeNode = currVolumeNode.nextElementSibling; - - } - - return volume; - - } - - function loadMeshVertices( node ) { - - var vertArray = []; - var normalArray = []; - var currVerticesNode = node.firstElementChild; - - while ( currVerticesNode ) { - - if ( currVerticesNode.nodeName === 'vertex' ) { - - var vNode = currVerticesNode.firstElementChild; - - while ( vNode ) { - - if ( vNode.nodeName === 'coordinates' ) { - - var x = vNode.getElementsByTagName( 'x' )[ 0 ].textContent; - var y = vNode.getElementsByTagName( 'y' )[ 0 ].textContent; - var z = vNode.getElementsByTagName( 'z' )[ 0 ].textContent; - - vertArray.push( x, y, z ); - - } else if ( vNode.nodeName === 'normal' ) { - - var nx = vNode.getElementsByTagName( 'nx' )[ 0 ].textContent; - var ny = vNode.getElementsByTagName( 'ny' )[ 0 ].textContent; - var nz = vNode.getElementsByTagName( 'nz' )[ 0 ].textContent; - - normalArray.push( nx, ny, nz ); - - } - - vNode = vNode.nextElementSibling; - - } - - } - currVerticesNode = currVerticesNode.nextElementSibling; - - } - - return { 'vertices': vertArray, 'normals': normalArray }; - - } - - function loadObject( node ) { - - var objId = node.attributes.id.textContent; - var loadedObject = { name: 'amfobject', meshes: [] }; - var currColor = null; - var currObjNode = node.firstElementChild; - - while ( currObjNode ) { - - if ( currObjNode.nodeName === 'metadata' ) { - - if ( currObjNode.attributes.type !== undefined ) { - - if ( currObjNode.attributes.type.value === 'name' ) { - - loadedObject.name = currObjNode.textContent; - - } - - } - - } else if ( currObjNode.nodeName === 'color' ) { - - currColor = loadColor( currObjNode ); - - } else if ( currObjNode.nodeName === 'mesh' ) { - - var currMeshNode = currObjNode.firstElementChild; - var mesh = { vertices: [], normals: [], volumes: [], color: currColor }; - - while ( currMeshNode ) { - - if ( currMeshNode.nodeName === 'vertices' ) { - - var loadedVertices = loadMeshVertices( currMeshNode ); - - mesh.normals = mesh.normals.concat( loadedVertices.normals ); - mesh.vertices = mesh.vertices.concat( loadedVertices.vertices ); - - } else if ( currMeshNode.nodeName === 'volume' ) { - - mesh.volumes.push( loadMeshVolume( currMeshNode ) ); - - } - - currMeshNode = currMeshNode.nextElementSibling; - - } - - loadedObject.meshes.push( mesh ); - - } - - currObjNode = currObjNode.nextElementSibling; - - } - - return { 'id': objId, 'obj': loadedObject }; - - } - - var xmlData = loadDocument( data ); - var amfName = ''; - var amfAuthor = ''; - var amfScale = loadDocumentScale( xmlData ); - var amfMaterials = {}; - var amfObjects = {}; - var 
childNodes = xmlData.documentElement.childNodes; - - var i, j; - - for ( i = 0; i < childNodes.length; i ++ ) { - - var child = childNodes[ i ]; - - if ( child.nodeName === 'metadata' ) { - - if ( child.attributes.type !== undefined ) { - - if ( child.attributes.type.value === 'name' ) { - - amfName = child.textContent; - - } else if ( child.attributes.type.value === 'author' ) { - - amfAuthor = child.textContent; - - } - - } - - } else if ( child.nodeName === 'material' ) { - - var loadedMaterial = loadMaterials( child ); - - amfMaterials[ loadedMaterial.id ] = loadedMaterial.material; - - } else if ( child.nodeName === 'object' ) { - - var loadedObject = loadObject( child ); - - amfObjects[ loadedObject.id ] = loadedObject.obj; - - } - - } - - var sceneObject = new THREE.Group(); - var defaultMaterial = new THREE.MeshPhongMaterial( { color: 0xaaaaff, flatShading: true } ); - - sceneObject.name = amfName; - sceneObject.userData.author = amfAuthor; - sceneObject.userData.loader = 'AMF'; - - for ( var id in amfObjects ) { - - var part = amfObjects[ id ]; - var meshes = part.meshes; - var newObject = new THREE.Group(); - newObject.name = part.name || ''; - - for ( i = 0; i < meshes.length; i ++ ) { - - var objDefaultMaterial = defaultMaterial; - var mesh = meshes[ i ]; - var vertices = new THREE.Float32BufferAttribute( mesh.vertices, 3 ); - var normals = null; - - if ( mesh.normals.length ) { - - normals = new THREE.Float32BufferAttribute( mesh.normals, 3 ); - - } - - if ( mesh.color ) { - - var color = mesh.color; - - objDefaultMaterial = defaultMaterial.clone(); - objDefaultMaterial.color = new THREE.Color( color.r, color.g, color.b ); - - if ( color.a !== 1.0 ) { - - objDefaultMaterial.transparent = true; - objDefaultMaterial.opacity = color.a; - - } - - } - - var volumes = mesh.volumes; - - for ( j = 0; j < volumes.length; j ++ ) { - - var volume = volumes[ j ]; - var newGeometry = new THREE.BufferGeometry(); - var material = objDefaultMaterial; - - newGeometry.setIndex( volume.triangles ); - newGeometry.addAttribute( 'position', vertices.clone() ); - - if ( normals ) { - - newGeometry.addAttribute( 'normal', normals.clone() ); - - } - - if ( amfMaterials[ volume.materialId ] !== undefined ) { - - material = amfMaterials[ volume.materialId ]; - - } - - newGeometry.scale( amfScale, amfScale, amfScale ); - newObject.add( new THREE.Mesh( newGeometry, material.clone() ) ); - - } - - } - - sceneObject.add( newObject ); - - } - - return sceneObject; - - } - -}; diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/PRWMLoader.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/PRWMLoader.js deleted file mode 100644 index 8b19970353704f7491953d63351ee4f91cb920b3..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/PRWMLoader.js +++ /dev/null @@ -1,299 +0,0 @@ -/** - * @author Kevin Chapelier / https://github.com/kchapelier - * See https://github.com/kchapelier/PRWM for more informations about this file format - */ - -( function ( THREE ) { - - 'use strict'; - - var bigEndianPlatform = null; - - /** - * Check if the endianness of the platform is big-endian (most significant bit first) - * @returns {boolean} True if big-endian, false if little-endian - */ - function isBigEndianPlatform() { - - if ( bigEndianPlatform === null ) { - - var buffer = new ArrayBuffer( 2 ), - uint8Array = new Uint8Array( buffer ), - uint16Array = new Uint16Array( buffer ); - - uint8Array[ 0 ] = 0xAA; // set 
first byte - uint8Array[ 1 ] = 0xBB; // set second byte - bigEndianPlatform = ( uint16Array[ 0 ] === 0xAABB ); - - } - - return bigEndianPlatform; - - } - - // match the values defined in the spec to the TypedArray types - var InvertedEncodingTypes = [ - null, - Float32Array, - null, - Int8Array, - Int16Array, - null, - Int32Array, - Uint8Array, - Uint16Array, - null, - Uint32Array - ]; - - // define the method to use on a DataView, corresponding the TypedArray type - var getMethods = { - Uint16Array: 'getUint16', - Uint32Array: 'getUint32', - Int16Array: 'getInt16', - Int32Array: 'getInt32', - Float32Array: 'getFloat32', - Float64Array: 'getFloat64' - }; - - - function copyFromBuffer( sourceArrayBuffer, viewType, position, length, fromBigEndian ) { - - var bytesPerElement = viewType.BYTES_PER_ELEMENT, - result; - - if ( fromBigEndian === isBigEndianPlatform() || bytesPerElement === 1 ) { - - result = new viewType( sourceArrayBuffer, position, length ); - - } else { - - var readView = new DataView( sourceArrayBuffer, position, length * bytesPerElement ), - getMethod = getMethods[ viewType.name ], - littleEndian = ! fromBigEndian, - i = 0; - - result = new viewType( length ); - - for ( ; i < length; i ++ ) { - - result[ i ] = readView[ getMethod ]( i * bytesPerElement, littleEndian ); - - } - - } - - return result; - - } - - - function decodePrwm( buffer ) { - - var array = new Uint8Array( buffer ), - version = array[ 0 ], - flags = array[ 1 ], - indexedGeometry = !! ( flags >> 7 & 0x01 ), - indicesType = flags >> 6 & 0x01, - bigEndian = ( flags >> 5 & 0x01 ) === 1, - attributesNumber = flags & 0x1F, - valuesNumber = 0, - indicesNumber = 0; - - if ( bigEndian ) { - - valuesNumber = ( array[ 2 ] << 16 ) + ( array[ 3 ] << 8 ) + array[ 4 ]; - indicesNumber = ( array[ 5 ] << 16 ) + ( array[ 6 ] << 8 ) + array[ 7 ]; - - } else { - - valuesNumber = array[ 2 ] + ( array[ 3 ] << 8 ) + ( array[ 4 ] << 16 ); - indicesNumber = array[ 5 ] + ( array[ 6 ] << 8 ) + ( array[ 7 ] << 16 ); - - } - - /** PRELIMINARY CHECKS **/ - - if ( version === 0 ) { - - throw new Error( 'PRWM decoder: Invalid format version: 0' ); - - } else if ( version !== 1 ) { - - throw new Error( 'PRWM decoder: Unsupported format version: ' + version ); - - } - - if ( ! 
indexedGeometry ) { - - if ( indicesType !== 0 ) { - - throw new Error( 'PRWM decoder: Indices type must be set to 0 for non-indexed geometries' ); - - } else if ( indicesNumber !== 0 ) { - - throw new Error( 'PRWM decoder: Number of indices must be set to 0 for non-indexed geometries' ); - - } - - } - - /** PARSING **/ - - var pos = 8; - - var attributes = {}, - attributeName, - char, - attributeType, - cardinality, - encodingType, - arrayType, - values, - indices, - i; - - for ( i = 0; i < attributesNumber; i ++ ) { - - attributeName = ''; - - while ( pos < array.length ) { - - char = array[ pos ]; - pos ++; - - if ( char === 0 ) { - - break; - - } else { - - attributeName += String.fromCharCode( char ); - - } - - } - - flags = array[ pos ]; - - attributeType = flags >> 7 & 0x01; - cardinality = ( flags >> 4 & 0x03 ) + 1; - encodingType = flags & 0x0F; - arrayType = InvertedEncodingTypes[ encodingType ]; - - pos ++; - - // padding to next multiple of 4 - pos = Math.ceil( pos / 4 ) * 4; - - values = copyFromBuffer( buffer, arrayType, pos, cardinality * valuesNumber, bigEndian ); - - pos += arrayType.BYTES_PER_ELEMENT * cardinality * valuesNumber; - - attributes[ attributeName ] = { - type: attributeType, - cardinality: cardinality, - values: values - }; - - } - - pos = Math.ceil( pos / 4 ) * 4; - - indices = null; - - if ( indexedGeometry ) { - - indices = copyFromBuffer( - buffer, - indicesType === 1 ? Uint32Array : Uint16Array, - pos, - indicesNumber, - bigEndian - ); - - } - - return { - version: version, - attributes: attributes, - indices: indices - }; - - } - - // Define the public interface - - THREE.PRWMLoader = function PRWMLoader( manager ) { - - this.manager = ( manager !== undefined ) ? manager : THREE.DefaultLoadingManager; - - }; - - THREE.PRWMLoader.prototype = { - - constructor: THREE.PRWMLoader, - - load: function ( url, onLoad, onProgress, onError ) { - - var scope = this; - - var loader = new THREE.FileLoader( scope.manager ); - loader.setPath( scope.path ); - loader.setResponseType( 'arraybuffer' ); - - url = url.replace( /\*/g, isBigEndianPlatform() ? 
'be' : 'le' ); - - loader.load( url, function ( arrayBuffer ) { - - onLoad( scope.parse( arrayBuffer ) ); - - }, onProgress, onError ); - - }, - - setPath: function ( value ) { - - this.path = value; - return this; - - }, - - parse: function ( arrayBuffer ) { - - console.time( 'PRWMLoader' ); - - var data = decodePrwm( arrayBuffer ), - attributesKey = Object.keys( data.attributes ), - bufferGeometry = new THREE.BufferGeometry(), - attribute, - i; - - for ( i = 0; i < attributesKey.length; i ++ ) { - - attribute = data.attributes[ attributesKey[ i ] ]; - bufferGeometry.addAttribute( attributesKey[ i ], new THREE.BufferAttribute( attribute.values, attribute.cardinality, attribute.normalized ) ); - - } - - if ( data.indices !== null ) { - - bufferGeometry.setIndex( new THREE.BufferAttribute( data.indices, 1 ) ); - - } - - console.timeEnd( 'PRWMLoader' ); - - return bufferGeometry; - - } - - }; - - THREE.PRWMLoader.isBigEndianPlatform = function () { - - return isBigEndianPlatform(); - - }; - -} )( THREE ); diff --git a/spaces/banana-projects/web3d/node_modules/three/src/extras/curves/CatmullRomCurve3.js b/spaces/banana-projects/web3d/node_modules/three/src/extras/curves/CatmullRomCurve3.js deleted file mode 100644 index 5f53a36f11bc6b721ecea5febe277881d3d05b8a..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/extras/curves/CatmullRomCurve3.js +++ /dev/null @@ -1,255 +0,0 @@ -import { Vector3 } from '../../math/Vector3.js'; -import { Curve } from '../core/Curve.js'; - -/** - * @author zz85 https://github.com/zz85 - * - * Centripetal CatmullRom Curve - which is useful for avoiding - * cusps and self-intersections in non-uniform catmull rom curves. - * http://www.cemyuksel.com/research/catmullrom_param/catmullrom.pdf - * - * curve.type accepts centripetal(default), chordal and catmullrom - * curve.tension is used for catmullrom which defaults to 0.5 - */ - - -/* -Based on an optimized c++ solution in - - http://stackoverflow.com/questions/9489736/catmull-rom-curve-with-no-cusps-and-no-self-intersections/ - - http://ideone.com/NoEbVM - -This CubicPoly class could be used for reusing some variables and calculations, -but for three.js curve use, it could be possible inlined and flatten into a single function call -which can be placed in CurveUtils. -*/ - -function CubicPoly() { - - var c0 = 0, c1 = 0, c2 = 0, c3 = 0; - - /* - * Compute coefficients for a cubic polynomial - * p(s) = c0 + c1*s + c2*s^2 + c3*s^3 - * such that - * p(0) = x0, p(1) = x1 - * and - * p'(0) = t0, p'(1) = t1. 
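- *
- * Solving those four constraints yields the standard cubic Hermite coefficients
- *   c0 = x0, c1 = t0, c2 = -3*x0 + 3*x1 - 2*t0 - t1, c3 = 2*x0 - 2*x1 + t0 + t1,
- * (derivation added for clarity; it matches exactly what init() below computes).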
- */ - function init( x0, x1, t0, t1 ) { - - c0 = x0; - c1 = t0; - c2 = - 3 * x0 + 3 * x1 - 2 * t0 - t1; - c3 = 2 * x0 - 2 * x1 + t0 + t1; - - } - - return { - - initCatmullRom: function ( x0, x1, x2, x3, tension ) { - - init( x1, x2, tension * ( x2 - x0 ), tension * ( x3 - x1 ) ); - - }, - - initNonuniformCatmullRom: function ( x0, x1, x2, x3, dt0, dt1, dt2 ) { - - // compute tangents when parameterized in [t1,t2] - var t1 = ( x1 - x0 ) / dt0 - ( x2 - x0 ) / ( dt0 + dt1 ) + ( x2 - x1 ) / dt1; - var t2 = ( x2 - x1 ) / dt1 - ( x3 - x1 ) / ( dt1 + dt2 ) + ( x3 - x2 ) / dt2; - - // rescale tangents for parametrization in [0,1] - t1 *= dt1; - t2 *= dt1; - - init( x1, x2, t1, t2 ); - - }, - - calc: function ( t ) { - - var t2 = t * t; - var t3 = t2 * t; - return c0 + c1 * t + c2 * t2 + c3 * t3; - - } - - }; - -} - -// - -var tmp = new Vector3(); -var px = new CubicPoly(), py = new CubicPoly(), pz = new CubicPoly(); - -function CatmullRomCurve3( points, closed, curveType, tension ) { - - Curve.call( this ); - - this.type = 'CatmullRomCurve3'; - - this.points = points || []; - this.closed = closed || false; - this.curveType = curveType || 'centripetal'; - this.tension = tension || 0.5; - -} - -CatmullRomCurve3.prototype = Object.create( Curve.prototype ); -CatmullRomCurve3.prototype.constructor = CatmullRomCurve3; - -CatmullRomCurve3.prototype.isCatmullRomCurve3 = true; - -CatmullRomCurve3.prototype.getPoint = function ( t, optionalTarget ) { - - var point = optionalTarget || new Vector3(); - - var points = this.points; - var l = points.length; - - var p = ( l - ( this.closed ? 0 : 1 ) ) * t; - var intPoint = Math.floor( p ); - var weight = p - intPoint; - - if ( this.closed ) { - - intPoint += intPoint > 0 ? 0 : ( Math.floor( Math.abs( intPoint ) / l ) + 1 ) * l; - - } else if ( weight === 0 && intPoint === l - 1 ) { - - intPoint = l - 2; - weight = 1; - - } - - var p0, p1, p2, p3; // 4 points - - if ( this.closed || intPoint > 0 ) { - - p0 = points[ ( intPoint - 1 ) % l ]; - - } else { - - // extrapolate first point - tmp.subVectors( points[ 0 ], points[ 1 ] ).add( points[ 0 ] ); - p0 = tmp; - - } - - p1 = points[ intPoint % l ]; - p2 = points[ ( intPoint + 1 ) % l ]; - - if ( this.closed || intPoint + 2 < l ) { - - p3 = points[ ( intPoint + 2 ) % l ]; - - } else { - - // extrapolate last point - tmp.subVectors( points[ l - 1 ], points[ l - 2 ] ).add( points[ l - 1 ] ); - p3 = tmp; - - } - - if ( this.curveType === 'centripetal' || this.curveType === 'chordal' ) { - - // init Centripetal / Chordal Catmull-Rom - var pow = this.curveType === 'chordal' ? 
0.5 : 0.25; - var dt0 = Math.pow( p0.distanceToSquared( p1 ), pow ); - var dt1 = Math.pow( p1.distanceToSquared( p2 ), pow ); - var dt2 = Math.pow( p2.distanceToSquared( p3 ), pow ); - - // safety check for repeated points - if ( dt1 < 1e-4 ) dt1 = 1.0; - if ( dt0 < 1e-4 ) dt0 = dt1; - if ( dt2 < 1e-4 ) dt2 = dt1; - - px.initNonuniformCatmullRom( p0.x, p1.x, p2.x, p3.x, dt0, dt1, dt2 ); - py.initNonuniformCatmullRom( p0.y, p1.y, p2.y, p3.y, dt0, dt1, dt2 ); - pz.initNonuniformCatmullRom( p0.z, p1.z, p2.z, p3.z, dt0, dt1, dt2 ); - - } else if ( this.curveType === 'catmullrom' ) { - - px.initCatmullRom( p0.x, p1.x, p2.x, p3.x, this.tension ); - py.initCatmullRom( p0.y, p1.y, p2.y, p3.y, this.tension ); - pz.initCatmullRom( p0.z, p1.z, p2.z, p3.z, this.tension ); - - } - - point.set( - px.calc( weight ), - py.calc( weight ), - pz.calc( weight ) - ); - - return point; - -}; - -CatmullRomCurve3.prototype.copy = function ( source ) { - - Curve.prototype.copy.call( this, source ); - - this.points = []; - - for ( var i = 0, l = source.points.length; i < l; i ++ ) { - - var point = source.points[ i ]; - - this.points.push( point.clone() ); - - } - - this.closed = source.closed; - this.curveType = source.curveType; - this.tension = source.tension; - - return this; - -}; - -CatmullRomCurve3.prototype.toJSON = function () { - - var data = Curve.prototype.toJSON.call( this ); - - data.points = []; - - for ( var i = 0, l = this.points.length; i < l; i ++ ) { - - var point = this.points[ i ]; - data.points.push( point.toArray() ); - - } - - data.closed = this.closed; - data.curveType = this.curveType; - data.tension = this.tension; - - return data; - -}; - -CatmullRomCurve3.prototype.fromJSON = function ( json ) { - - Curve.prototype.fromJSON.call( this, json ); - - this.points = []; - - for ( var i = 0, l = json.points.length; i < l; i ++ ) { - - var point = json.points[ i ]; - this.points.push( new Vector3().fromArray( point ) ); - - } - - this.closed = json.closed; - this.curveType = json.curveType; - this.tension = json.tension; - - return this; - -}; - - -export { CatmullRomCurve3 }; diff --git a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/fog_vertex.glsl.js b/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/fog_vertex.glsl.js deleted file mode 100644 index ecfc773911a311201d77bcaf4726ab9bf0f80659..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/fog_vertex.glsl.js +++ /dev/null @@ -1,7 +0,0 @@ -export default /* glsl */` -#ifdef USE_FOG - - fogDepth = -mvPosition.z; - -#endif -`; diff --git a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderLib/linedashed_vert.glsl.js b/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderLib/linedashed_vert.glsl.js deleted file mode 100644 index 4a7a7a4b74742d86f06f0931fb10c2e5e3175be2..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderLib/linedashed_vert.glsl.js +++ /dev/null @@ -1,27 +0,0 @@ -export default /* glsl */` -uniform float scale; -attribute float lineDistance; - -varying float vLineDistance; - -#include -#include -#include -#include -#include - -void main() { - - #include - - vLineDistance = scale * lineDistance; - - vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 ); - gl_Position = projectionMatrix * mvPosition; - - #include - #include - #include - -} -`; diff --git 
a/spaces/bankholdup/stylegan_petbreeder/e4e/models/stylegan2/op/upfirdn2d.py b/spaces/bankholdup/stylegan_petbreeder/e4e/models/stylegan2/op/upfirdn2d.py deleted file mode 100644 index 7bc5a1e331c2bbb1893ac748cfd0f144ff0651b4..0000000000000000000000000000000000000000 --- a/spaces/bankholdup/stylegan_petbreeder/e4e/models/stylegan2/op/upfirdn2d.py +++ /dev/null @@ -1,184 +0,0 @@ -import os - -import torch -from torch.autograd import Function -from torch.utils.cpp_extension import load - -module_path = os.path.dirname(__file__) -upfirdn2d_op = load( - 'upfirdn2d', - sources=[ - os.path.join(module_path, 'upfirdn2d.cpp'), - os.path.join(module_path, 'upfirdn2d_kernel.cu'), - ], -) - - -class UpFirDn2dBackward(Function): - @staticmethod - def forward( - ctx, grad_output, kernel, grad_kernel, up, down, pad, g_pad, in_size, out_size - ): - up_x, up_y = up - down_x, down_y = down - g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1 = g_pad - - grad_output = grad_output.reshape(-1, out_size[0], out_size[1], 1) - - grad_input = upfirdn2d_op.upfirdn2d( - grad_output, - grad_kernel, - down_x, - down_y, - up_x, - up_y, - g_pad_x0, - g_pad_x1, - g_pad_y0, - g_pad_y1, - ) - grad_input = grad_input.view(in_size[0], in_size[1], in_size[2], in_size[3]) - - ctx.save_for_backward(kernel) - - pad_x0, pad_x1, pad_y0, pad_y1 = pad - - ctx.up_x = up_x - ctx.up_y = up_y - ctx.down_x = down_x - ctx.down_y = down_y - ctx.pad_x0 = pad_x0 - ctx.pad_x1 = pad_x1 - ctx.pad_y0 = pad_y0 - ctx.pad_y1 = pad_y1 - ctx.in_size = in_size - ctx.out_size = out_size - - return grad_input - - @staticmethod - def backward(ctx, gradgrad_input): - kernel, = ctx.saved_tensors - - gradgrad_input = gradgrad_input.reshape(-1, ctx.in_size[2], ctx.in_size[3], 1) - - gradgrad_out = upfirdn2d_op.upfirdn2d( - gradgrad_input, - kernel, - ctx.up_x, - ctx.up_y, - ctx.down_x, - ctx.down_y, - ctx.pad_x0, - ctx.pad_x1, - ctx.pad_y0, - ctx.pad_y1, - ) - # gradgrad_out = gradgrad_out.view(ctx.in_size[0], ctx.out_size[0], ctx.out_size[1], ctx.in_size[3]) - gradgrad_out = gradgrad_out.view( - ctx.in_size[0], ctx.in_size[1], ctx.out_size[0], ctx.out_size[1] - ) - - return gradgrad_out, None, None, None, None, None, None, None, None - - -class UpFirDn2d(Function): - @staticmethod - def forward(ctx, input, kernel, up, down, pad): - up_x, up_y = up - down_x, down_y = down - pad_x0, pad_x1, pad_y0, pad_y1 = pad - - kernel_h, kernel_w = kernel.shape - batch, channel, in_h, in_w = input.shape - ctx.in_size = input.shape - - input = input.reshape(-1, in_h, in_w, 1) - - ctx.save_for_backward(kernel, torch.flip(kernel, [0, 1])) - - out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1 - out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1 - ctx.out_size = (out_h, out_w) - - ctx.up = (up_x, up_y) - ctx.down = (down_x, down_y) - ctx.pad = (pad_x0, pad_x1, pad_y0, pad_y1) - - g_pad_x0 = kernel_w - pad_x0 - 1 - g_pad_y0 = kernel_h - pad_y0 - 1 - g_pad_x1 = in_w * up_x - out_w * down_x + pad_x0 - up_x + 1 - g_pad_y1 = in_h * up_y - out_h * down_y + pad_y0 - up_y + 1 - - ctx.g_pad = (g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1) - - out = upfirdn2d_op.upfirdn2d( - input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1 - ) - # out = out.view(major, out_h, out_w, minor) - out = out.view(-1, channel, out_h, out_w) - - return out - - @staticmethod - def backward(ctx, grad_output): - kernel, grad_kernel = ctx.saved_tensors - - grad_input = UpFirDn2dBackward.apply( - grad_output, - kernel, - grad_kernel, - ctx.up, - ctx.down, - ctx.pad, - ctx.g_pad, - 
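- # sizes below were cached in forward(); UpFirDn2dBackward uses out_size to
- # reshape grad_output and in_size to restore the gradient's original layout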
ctx.in_size, - ctx.out_size, - ) - - return grad_input, None, None, None, None - - -def upfirdn2d(input, kernel, up=1, down=1, pad=(0, 0)): - out = UpFirDn2d.apply( - input, kernel, (up, up), (down, down), (pad[0], pad[1], pad[0], pad[1]) - ) - - return out - - -def upfirdn2d_native( - input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1 -): - _, in_h, in_w, minor = input.shape - kernel_h, kernel_w = kernel.shape - - out = input.view(-1, in_h, 1, in_w, 1, minor) - out = F.pad(out, [0, 0, 0, up_x - 1, 0, 0, 0, up_y - 1]) - out = out.view(-1, in_h * up_y, in_w * up_x, minor) - - out = F.pad( - out, [0, 0, max(pad_x0, 0), max(pad_x1, 0), max(pad_y0, 0), max(pad_y1, 0)] - ) - out = out[ - :, - max(-pad_y0, 0): out.shape[1] - max(-pad_y1, 0), - max(-pad_x0, 0): out.shape[2] - max(-pad_x1, 0), - :, - ] - - out = out.permute(0, 3, 1, 2) - out = out.reshape( - [-1, 1, in_h * up_y + pad_y0 + pad_y1, in_w * up_x + pad_x0 + pad_x1] - ) - w = torch.flip(kernel, [0, 1]).view(1, 1, kernel_h, kernel_w) - out = F.conv2d(out, w) - out = out.reshape( - -1, - minor, - in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1, - in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1, - ) - out = out.permute(0, 2, 3, 1) - - return out[:, ::down_y, ::down_x, :] diff --git a/spaces/benmaor/FoodVision_Big/app.py b/spaces/benmaor/FoodVision_Big/app.py deleted file mode 100644 index fac4f54b531efdab955c94105d72eb86d50301ce..0000000000000000000000000000000000000000 --- a/spaces/benmaor/FoodVision_Big/app.py +++ /dev/null @@ -1,54 +0,0 @@ -import gradio as gr -import os -import torch - -from model import create_ViT -from timeit import default_timer as timer -from typing import Tuple, Dict - -with open("class_names.txt", "r") as f: - class_names = [food_name.strip() for food_name in f.readlines()] - - -vit, vit_transforms = create_ViT( - num_classes=101, -) - -vit.load_state_dict( - torch.load( - f="09_VisionTransformer_food_101_100_percent.pth", - map_location=torch.device("cpu"), - ) -) - -def predict(img) -> Tuple[Dict, float]: - start_time = timer() - - img = vit_transforms(img).unsqueeze(0) - vit.eval() - - with torch.inference_mode(): - pred_probs = torch.softmax(vit(img), dim=1) - - pred_labels_and_probs = {class_names[i]: float(pred_probs[0][i]) for i in range(len(class_names))} - - pred_time = round(timer() - start_time, 5) - return pred_labels_and_probs, pred_time - - -title = "FoodVision Big" -description = "A Vision TransFormer computer vision model to classify images of 101 kinds of food as pizza, steak, risotto or sushi and more." -article = "Created at 09. PyTorch Model Deployment." 
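- # NOTE: the demo assumes an "examples/" folder of sample food images ships
- # alongside app.py (see the os.listdir call that builds example_list below)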
- -example_list = [["examples/" + example] for example in os.listdir("examples")] - -demo = gr.Interface(fn=predict, - inputs=gr.Image(type="pil"), - outputs=[gr.Label(num_top_classes=4, label="Predictions"), - gr.Number(label="Prediction time (s)")], - examples=example_list, - title=title, - description=description, - article=article) - -demo.launch(debug=False) diff --git a/spaces/bennydou/gitea/README.md b/spaces/bennydou/gitea/README.md deleted file mode 100644 index 6c2c6b3577677373ed57bec9ca7605c78a6be1dc..0000000000000000000000000000000000000000 --- a/spaces/bennydou/gitea/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Gitea -emoji: 😸 -sdk: docker -colorFrom: green -colorTo: green -pinned: false -license: mit -app_port: 3000 ---- diff --git a/spaces/bespin-global/Bespin-QuestionAnswering/app.py b/spaces/bespin-global/Bespin-QuestionAnswering/app.py deleted file mode 100644 index fe256adc13fb7f9b3ffb84d93daf59d22b5e5a4d..0000000000000000000000000000000000000000 --- a/spaces/bespin-global/Bespin-QuestionAnswering/app.py +++ /dev/null @@ -1,85 +0,0 @@ -import streamlit as st -import torch -from transformers import AutoModelForQuestionAnswering, AutoTokenizer - -device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu') - -@st.cache(allow_output_mutation=True) -def get_model(): - # Load fine-tuned MRC model by HuggingFace Model Hub - HUGGINGFACE_MODEL_PATH = "bespin-global/klue-bert-base-aihub-mrc" - tokenizer = AutoTokenizer.from_pretrained(HUGGINGFACE_MODEL_PATH) - model = AutoModelForQuestionAnswering.from_pretrained(HUGGINGFACE_MODEL_PATH).to(device) - - return tokenizer, model - -tokenizer, model = get_model() - - -## Title -st.title('☁️ Bespin → QuestionAnswering') - -## Text -st.write('[⚡bespin-global/klue-bert-base-aihub-mrc](https://huggingface.co/bespin-global/klue-bert-base-aihub-mrc) 모델 성능 테스트 페이지 입니다.') - - -context_option = st.selectbox(' 📑 Select Context Examples.', - ( - '스티븐 폴 스티브 잡스(영어: Steven Paul "Steve" Jobs, 1955년 2월 24일 ~ 2011년 10월 5일)는 미국의 기업인이었으며 애플의 전 CEO이자 공동 창립자이다. 2011년 10월 5일 췌장암으로 사망했다. 1976년 스티브 워즈니악, 로널드 웨인과 함께 애플을 공동 창업하고, 애플 2를 통해 개인용 컴퓨터를 대중화했다. 또한, GUI와 마우스의 가능성을 처음으로 내다보고 애플 리사와 매킨토시에서 이 기술을 도입하였다. 1986년 경영분쟁에 의해 애플에서 나온 이후 NeXT 컴퓨터를 창업하여 새로운 개념의 운영 체제를 개발했다. 1996년 애플이 NeXT를 인수하게 되면서 다시 애플로 돌아오게 되었고 1997년에는 임시 CEO로 애플을 다시 이끌게 되었으며 이후 다시금 애플을 혁신해 시장에서 성공을 거두게 이끌었다. 2001년 아이팟을 출시하여 음악 산업 전체를 뒤바꾸어 놓았다. 또한, 2007년 아이폰을 출시하면서 스마트폰 시장을 바꾸어 놓았고 2010년 아이패드를 출시함으로써 포스트PC 시대(Post-PC era)를 열었다. 스티브 잡스는 애니메이션 영화 《인크레더블》과 《토이 스토리》 등을 제작한 컴퓨터 애니메이션 제작사인 픽사의 소유주이자 CEO였다. 월트 디즈니 회사는 74억 달러어치의 자사 주식으로 이 회사를 구입하였다. 2006년 6월 이 거래가 완료되어 잡스는 이 거래를 통해 디즈니 지분의 7%를 소유한, 최대의 개인 주주이자 디즈니 이사회의 이사가 되었다. 한편 그는 2003년 무렵부터 췌장암으로 투병생활을 이어왔다. 그의 악화된 건강상태로 인하여 2011년 8월 24일 애플은 스티브 잡스가 최고경영책임자(CEO)를 사임하고 최고운영책임자(COO)인 팀 쿡이 새로운 CEO를 맡는다고 밝혔다. 잡스는 CEO직에서 물러나지만 이사회 의장직은 유지시키기로 했으나, 건강상태가 더욱 악화되어 사임 2개월도 지나지 않은 2011년 10월 5일 향년 56세의 나이로 사망했다.', - '비트코인은 2009년 사토시 나카모토[6]가 만든 가상화폐로, 통화를 발행하고 관리하는 중앙 장치가 존재하지 않는 구조를 가지고 있다. 대신, 비트코인의 거래는 P2P 기반 분산 데이터베이스에 의해 이루어지며, 공개 키 암호 방식 기반으로 거래를 수행한다. 비트코인은 공개성을 가지고 있다. 비트코인은 지갑 파일의 형태로 저장되며, 이 지갑에는 각각의 고유 주소가 부여되며, 그 주소를 기반으로 비트코인의 거래가 이루어진다. 비트코인은 1998년 웨이따이가 사이버펑크 메일링 리스트에 올린 암호통화(cryptocurrency)란 구상을 최초로 구현한 것 중의 하나이다.[7][8] 비트코인은 공개 키 암호 방식을 이용해 공개된 계정간에 거래를 한다. 모든 거래는 비공개적이나 거래의 기록은 남으며, 분산 데이터베이스에 저장된다. 분산된 시간서버로 일련의 작업증명(proof-of-work)을 하여 중복지출(double-spending)을 방지한다. 거래 기록은 모두 데이터베이스에 저장되어야 한다. 저장소 크기를 줄이기 위해 머클 트리(Merkle tree)가 사용된다.' 
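- # two built-in Korean demo contexts: a Steve Jobs biography and a Bitcoin
- # overview; whichever is selected stays editable in the text area below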
- ) -) -# Text Input -context = st.text_area("Context.", value=context_option, height=300, on_change=None) # placeholder="Please input some context..", - - -if '스티븐 폴 스티브 잡스' in context_option: - question_option = st.selectbox('💡 Select Question Examples.', - ( - '스티브 잡스가 누구야?', '스티브 잡스는 애플로 돌아와서 어떻게 했어?', '왜 애플을 나왔어?', '스티브 잡스는 어떻게 다시 애플로 돌아오게 되었어?', '픽사는 뭘 제작했어?', '왜 팀 쿡을 새로운 CEO로 맡았어?', '스티브 잡스는 언제 사망했어?' - ) - ) -elif '비트코인' in context_option: - question_option = st.selectbox('💡 Select Question Examples.', - ( - '비트코인은 어떤 구조야?', '비트코인은 어떻게 거래가 돼?', '비트코인 지갑에는 뭐가 부여 돼?', '공개된 계정간 거래 시 뭘 이용해?', '모든 거래는 어떻게 남아?', '머클 트리가 왜 사용 돼?' - ) - ) - -# Text Area -question = st.text_area("Question.", value=question_option, on_change=None) # placeholder="Please input your question.." - - - -if st.button("Submit", key='question'): - try: - # Progress spinner - with st.spinner('Wait for it...'): - # Encoding - encodings = tokenizer(context, question, - max_length=512, - truncation=True, - padding="max_length", - return_token_type_ids=False, - return_offsets_mapping=True - ) - encodings = {key: torch.tensor([val]).to(device) for key, val in encodings.items()} - - # Predict - pred = model(encodings["input_ids"], attention_mask=encodings["attention_mask"]) - start_logits, end_logits = pred.start_logits, pred.end_logits - token_start_index, token_end_index = start_logits.argmax(dim=-1), end_logits.argmax(dim=-1) - pred_ids = encodings["input_ids"][0][token_start_index: token_end_index + 1] - prediction = tokenizer.decode(pred_ids) - - # Offset - answer_start_offset = int(encodings['offset_mapping'][0][token_start_index][0][0]) - answer_end_offset = int(encodings['offset_mapping'][0][token_end_index][0][1]) - answer_offset = (answer_start_offset, answer_end_offset) - - # answer - st.success(prediction) - - - except Exception as e: - st.error(e) diff --git a/spaces/bioriAsaeru/text-to-voice/Benefits of X Force X32 Exe Vault Professional 2013 for Product Lifecycle Management.md b/spaces/bioriAsaeru/text-to-voice/Benefits of X Force X32 Exe Vault Professional 2013 for Product Lifecycle Management.md deleted file mode 100644 index 8ed6d723182b16a5de046211522d1f39e6d00b00..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Benefits of X Force X32 Exe Vault Professional 2013 for Product Lifecycle Management.md +++ /dev/null @@ -1,6 +0,0 @@ -

X Force X32 Exe Vault Professional 2013


Download →→→ https://urloso.com/2uyQkk




diff --git a/spaces/bioriAsaeru/text-to-voice/Captain Underpants And The Terrifying Perilious Misfortune Of The T.P. Mummy 720p HOT.md b/spaces/bioriAsaeru/text-to-voice/Captain Underpants And The Terrifying Perilious Misfortune Of The T.P. Mummy 720p HOT.md deleted file mode 100644 index aea58bf2d31988cb94988dd579612921943ef389..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Captain Underpants And The Terrifying Perilious Misfortune Of The T.P. Mummy 720p HOT.md +++ /dev/null @@ -1,6 +0,0 @@ -

Captain Underpants And The Terrifying Perilious Misfortune Of The T.P. Mummy 720p


Download File ►►► https://urloso.com/2uyPHk




diff --git a/spaces/bioriAsaeru/text-to-voice/How to Enjoy Karl Jenkins Songs of Sanctuary Zip A Guide to the Music of Adiemus.md b/spaces/bioriAsaeru/text-to-voice/How to Enjoy Karl Jenkins Songs of Sanctuary Zip A Guide to the Music of Adiemus.md deleted file mode 100644 index 4c09a6f35ba11d8634263d1770ad75f5fe44ff0d..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/How to Enjoy Karl Jenkins Songs of Sanctuary Zip A Guide to the Music of Adiemus.md +++ /dev/null @@ -1,6 +0,0 @@ -

karl jenkins songs of sanctuary zip


Download File ►►► https://urloso.com/2uyPLu




diff --git a/spaces/bkhmsi/Font-To-Sketch/code/bezier.py b/spaces/bkhmsi/Font-To-Sketch/code/bezier.py deleted file mode 100644 index bb2b8ef67b0219bc5293acf8409f5866b38b940b..0000000000000000000000000000000000000000 --- a/spaces/bkhmsi/Font-To-Sketch/code/bezier.py +++ /dev/null @@ -1,122 +0,0 @@ -import numpy as np -import matplotlib.pyplot as plt -from scipy.special import binom -from numpy.linalg import norm - -def num_bezier(n_ctrl, degree=3): - if type(n_ctrl) == np.ndarray: - n_ctrl = len(n_ctrl) - return int((n_ctrl - 1) / degree) - -def bernstein(n, i): - bi = binom(n, i) - return lambda t, bi=bi, n=n, i=i: bi * t**i * (1 - t)**(n - i) - -def bezier(P, t, d=0): - '''Bezier curve of degree len(P)-1. d is the derivative order (0 gives positions)''' - n = P.shape[0] - 1 - if d > 0: - Q = np.diff(P, axis=0)*n - return bezier(Q, t, d-1) - B = np.vstack([bernstein(n, i)(t) for i, p in enumerate(P)]) - return (P.T @ B).T - -def cubic_bezier(P, t): - return (1.0-t)**3*P[0] + 3*(1.0-t)**2*t*P[1] + 3*(1.0-t)*t**2*P[2] + t**3*P[3] - -def bezier_piecewise(Cp, subd=100, degree=3, d=0): - ''' sample a piecewise Bezier curve given a sequence of control points''' - num = num_bezier(Cp.shape[0], degree) - X = [] - for i in range(num): - P = Cp[i*degree:i*degree+degree+1, :] - t = np.linspace(0, 1., subd)[:-1] - Y = bezier(P, t, d) - X += [Y] - X.append(Cp[-1]) - X = np.vstack(X) - return X - -def compute_beziers(beziers, subd=100, degree=3): - chain = beziers_to_chain(beziers) - return bezier_piecewise(chain, subd, degree) - -def plot_control_polygon(Cp, degree=3, lw=0.5, linecolor=np.ones(3)*0.1): - n_bezier = num_bezier(len(Cp), degree) - for i in range(n_bezier): - cp = Cp[i*degree:i*degree+degree+1, :] - if degree==3: - plt.plot(cp[0:2,0], cp[0:2, 1], ':', color=linecolor, linewidth=lw) - plt.plot(cp[2:,0], cp[2:,1], ':', color=linecolor, linewidth=lw) - plt.plot(cp[:,0], cp[:,1], 'o', color=[0, 0.5, 1.], markersize=4) - else: - plt.plot(cp[:,0], cp[:,1], ':', color=linecolor, linewidth=lw) - plt.plot(cp[:,0], cp[:,1], 'o', color=[0, 0.5, 1.]) - - -def chain_to_beziers(chain, degree=3): - ''' Convert Bezier chain to list of curve segments (4 control points each)''' - num = num_bezier(chain.shape[0], degree) - beziers = [] - for i in range(num): - beziers.append(chain[i*degree:i*degree+degree+1,:]) - return beziers - - -def beziers_to_chain(beziers): - ''' Convert list of Bezier curve segments to a piecewise bezier chain (shares vertices)''' - n = len(beziers) - chain = [] - for i in range(n): - chain.append(list(beziers[i][:-1])) - chain.append([beziers[-1][-1]]) - return np.array(sum(chain, [])) - - -def split_cubic(bez, t): - p1, p2, p3, p4 = bez - - p12 = (p2 - p1) * t + p1 - p23 = (p3 - p2) * t + p2 - p34 = (p4 - p3) * t + p3 - - p123 = (p23 - p12) * t + p12 - p234 = (p34 - p23) * t + p23 - - p1234 = (p234 - p123) * t + p123 - - return np.array([p1, p12, p123, p1234]), np.array([p1234, p234, p34, p4]) - - -def approx_arc_length(bez): - c0, c1, c2, c3 = bez - v0 = norm(c1-c0)*0.15 - v1 = norm(-0.558983582205757*c0 + 0.325650248872424*c1 + 0.208983582205757*c2 + 0.024349751127576*c3) - v2 = norm(c3-c0+c2-c1)*0.26666666666666666 - v3 = norm(-0.024349751127576*c0 - 0.208983582205757*c1 - 0.325650248872424*c2 + 0.558983582205757*c3) - v4 = norm(c3-c2)*.15 - return v0 + v1 + v2 + v3 + v4 - - -def subdivide_bezier(bez, thresh): - stack = [bez] - res = [] - while stack: - bez = stack.pop() - l = approx_arc_length(bez) - if l < thresh: - res.append(bez) - else: - b1, b2 = split_cubic(bez, 0.5) - 
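- # push the far half first so the near half is popped next, keeping the
- # emitted segments in curve order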
stack += [b2, b1] - return res - -def subdivide_bezier_chain(C, thresh): - beziers = chain_to_beziers(C) - res = [] - for bez in beziers: - res += subdivide_bezier(bez, thresh) - return beziers_to_chain(res) - - - diff --git a/spaces/brainblow/AudioCreator_Music-Audio_Generation/docs/MBD.md b/spaces/brainblow/AudioCreator_Music-Audio_Generation/docs/MBD.md deleted file mode 100644 index 296d08407bac9155380a48bdc9faa5798db32bcb..0000000000000000000000000000000000000000 --- a/spaces/brainblow/AudioCreator_Music-Audio_Generation/docs/MBD.md +++ /dev/null @@ -1,117 +0,0 @@ -# MultiBand Diffusion - -AudioCraft provides the code and models for MultiBand Diffusion, [From Discrete Tokens to High Fidelity Audio using MultiBand Diffusion][arxiv]. -MultiBand diffusion is a collection of 4 models that can decode tokens from -EnCodec tokenizer into waveform audio. - - - Open In Colab - -
- - -## Installation - -Please follow the AudioCraft installation instructions from the [README](../README.md). - - -## Usage - -We offer a number of way to use MultiBand Diffusion: -1. The MusicGen demo includes a toggle to try diffusion decoder. You can use the demo locally by running [`python -m demos.musicgen_app --share`](../demos/musicgen_app.py), or through the [MusicGen Colab](https://colab.research.google.com/drive/1JlTOjB-G0A2Hz3h8PK63vLZk4xdCI5QB?usp=sharing). -2. You can play with MusicGen by running the jupyter notebook at [`demos/musicgen_demo.ipynb`](../demos/musicgen_demo.ipynb) locally (if you have a GPU). - -## API - -We provide a simple API and pre-trained models for MusicGen and for EnCodec at 24 khz for 3 bitrates (1.5 kbps, 3 kbps and 6 kbps). - -See after a quick example for using MultiBandDiffusion with the MusicGen API: - -```python -import torchaudio -from audiocraft.models import MusicGen, MultiBandDiffusion -from audiocraft.data.audio import audio_write - -model = MusicGen.get_pretrained('facebook/musicgen-melody') -mbd = MultiBandDiffusion.get_mbd_musicgen() -model.set_generation_params(duration=8) # generate 8 seconds. -wav, tokens = model.generate_unconditional(4, return_tokens=True) # generates 4 unconditional audio samples and keep the tokens for MBD generation -descriptions = ['happy rock', 'energetic EDM', 'sad jazz'] -wav_diffusion = mbd.tokens_to_wav(tokens) -wav, tokens = model.generate(descriptions, return_tokens=True) # generates 3 samples and keep the tokens. -wav_diffusion = mbd.tokens_to_wav(tokens) -melody, sr = torchaudio.load('./assets/bach.mp3') -# Generates using the melody from the given audio and the provided descriptions, returns audio and audio tokens. -wav, tokens = model.generate_with_chroma(descriptions, melody[None].expand(3, -1, -1), sr, return_tokens=True) -wav_diffusion = mbd.tokens_to_wav(tokens) - -for idx, one_wav in enumerate(wav): - # Will save under {idx}.wav and {idx}_diffusion.wav, with loudness normalization at -14 db LUFS for comparing the methods. - audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness", loudness_compressor=True) - audio_write(f'{idx}_diffusion', wav_diffusion[idx].cpu(), model.sample_rate, strategy="loudness", loudness_compressor=True) -``` - -For the compression task (and to compare with [EnCodec](https://github.com/facebookresearch/encodec)): - -```python -import torch -from audiocraft.models import MultiBandDiffusion -from encodec import EncodecModel -from audiocraft.data.audio import audio_read, audio_write - -bandwidth = 3.0 # 1.5, 3.0, 6.0 -mbd = MultiBandDiffusion.get_mbd_24khz(bw=bandwidth) -encodec = EncodecModel.get_encodec_24khz() - -somepath = '' -wav, sr = audio_read(somepath) -with torch.no_grad(): - compressed_encodec = encodec(wav) - compressed_diffusion = mbd.regenerate(wav, sample_rate=sr) - -audio_write('sample_encodec', compressed_encodec.squeeze(0).cpu(), mbd.sample_rate, strategy="loudness", loudness_compressor=True) -audio_write('sample_diffusion', compressed_diffusion.squeeze(0).cpu(), mbd.sample_rate, strategy="loudness", loudness_compressor=True) -``` - - -## Training - -The [DiffusionSolver](../audiocraft/solvers/diffusion.py) implements our diffusion training pipeline. -It generates waveform audio conditioned on the embeddings extracted from a pre-trained EnCodec model -(see [EnCodec documentation](./ENCODEC.md) for more details on how to train such model). - -Note that **we do NOT provide any of the datasets** used for training our diffusion models. 
-We provide a dummy dataset containing just a few examples for illustrative purposes. - -### Example configurations and grids - -One can train diffusion models as described in the paper by using this [dora grid](../audiocraft/grids/diffusion/4_bands_base_32khz.py). -```shell -# 4 bands MBD trainning -dora grid diffusion.4_bands_base_32khz -``` - -### Learn more - -Learn more about AudioCraft training pipelines in the [dedicated section](./TRAINING.md). - - -## Citation - -``` -@article{sanroman2023fromdi, - title={From Discrete Tokens to High-Fidelity Audio Using Multi-Band Diffusion}, - author={San Roman, Robin and Adi, Yossi and Deleforge, Antoine and Serizel, Romain and Synnaeve, Gabriel and Défossez, Alexandre}, - journal={arXiv preprint arXiv:}, - year={2023} -} -``` - - -## License - -See license information in the [README](../README.md). - - -[arxiv]: https://dl.fbaipublicfiles.com/encodec/Diffusion/paper.pdf -[mbd_samples]: https://ai.honu.io/papers/mbd/ diff --git a/spaces/briancatmaster/Tropic-AI/README.md b/spaces/briancatmaster/Tropic-AI/README.md deleted file mode 100644 index 23ac9b3c6bad1581e0b1858e66edb274a9af2efa..0000000000000000000000000000000000000000 --- a/spaces/briancatmaster/Tropic-AI/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Tropic AI -emoji: 📉 -colorFrom: green -colorTo: gray -sdk: gradio -sdk_version: 3.33.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/modeling/hrfpn.py b/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/modeling/hrfpn.py deleted file mode 100644 index 08ec420fa24e1e8f5074baf2e9ae737aff2ab12e..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/modeling/hrfpn.py +++ /dev/null @@ -1,182 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -""" -MIT License -Copyright (c) 2019 Microsoft -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. 
-""" - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from detectron2.layers import ShapeSpec -from detectron2.modeling.backbone import BACKBONE_REGISTRY -from detectron2.modeling.backbone.backbone import Backbone - -from .hrnet import build_pose_hrnet_backbone - - -class HRFPN(Backbone): - """HRFPN (High Resolution Feature Pyramids) - Transforms outputs of HRNet backbone so they are suitable for the ROI_heads - arXiv: https://arxiv.org/abs/1904.04514 - Adapted from https://github.com/open-mmlab/mmdetection/blob/master/mmdet/models/necks/hrfpn.py - Args: - bottom_up: (list) output of HRNet - in_features (list): names of the input features (output of HRNet) - in_channels (list): number of channels for each branch - out_channels (int): output channels of feature pyramids - n_out_features (int): number of output stages - pooling (str): pooling for generating feature pyramids (from {MAX, AVG}) - share_conv (bool): Have one conv per output, or share one with all the outputs - """ - - def __init__( - self, - bottom_up, - in_features, - n_out_features, - in_channels, - out_channels, - pooling="AVG", - share_conv=False, - ): - super(HRFPN, self).__init__() - assert isinstance(in_channels, list) - self.bottom_up = bottom_up - self.in_features = in_features - self.n_out_features = n_out_features - self.in_channels = in_channels - self.out_channels = out_channels - self.num_ins = len(in_channels) - self.share_conv = share_conv - - if self.share_conv: - self.fpn_conv = nn.Conv2d( - in_channels=out_channels, out_channels=out_channels, kernel_size=3, padding=1 - ) - else: - self.fpn_conv = nn.ModuleList() - for _ in range(self.n_out_features): - self.fpn_conv.append( - nn.Conv2d( - in_channels=out_channels, - out_channels=out_channels, - kernel_size=3, - padding=1, - ) - ) - - # Custom change: Replaces a simple bilinear interpolation - self.interp_conv = nn.ModuleList() - for i in range(len(self.in_features)): - self.interp_conv.append( - nn.Sequential( - nn.ConvTranspose2d( - in_channels=in_channels[i], - out_channels=in_channels[i], - kernel_size=4, - stride=2**i, - padding=0, - output_padding=0, - bias=False, - ), - nn.BatchNorm2d(in_channels[i], momentum=0.1), - nn.ReLU(inplace=True), - ) - ) - - # Custom change: Replaces a couple (reduction conv + pooling) by one conv - self.reduction_pooling_conv = nn.ModuleList() - for i in range(self.n_out_features): - self.reduction_pooling_conv.append( - nn.Sequential( - nn.Conv2d(sum(in_channels), out_channels, kernel_size=2**i, stride=2**i), - nn.BatchNorm2d(out_channels, momentum=0.1), - nn.ReLU(inplace=True), - ) - ) - - if pooling == "MAX": - self.pooling = F.max_pool2d - else: - self.pooling = F.avg_pool2d - - self._out_features = [] - self._out_feature_channels = {} - self._out_feature_strides = {} - - for i in range(self.n_out_features): - self._out_features.append("p%d" % (i + 1)) - self._out_feature_channels.update({self._out_features[-1]: self.out_channels}) - self._out_feature_strides.update({self._out_features[-1]: 2 ** (i + 2)}) - - # default init_weights for conv(msra) and norm in ConvModule - def init_weights(self): - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_(m.weight, a=1) - nn.init.constant_(m.bias, 0) - - def forward(self, inputs): - bottom_up_features = self.bottom_up(inputs) - assert len(bottom_up_features) == len(self.in_features) - inputs = [bottom_up_features[f] for f in self.in_features] - - outs = [] - for i in range(len(inputs)): - 
outs.append(self.interp_conv[i](inputs[i])) - shape_2 = min(o.shape[2] for o in outs) - shape_3 = min(o.shape[3] for o in outs) - out = torch.cat([o[:, :, :shape_2, :shape_3] for o in outs], dim=1) - outs = [] - for i in range(self.n_out_features): - outs.append(self.reduction_pooling_conv[i](out)) - for i in range(len(outs)): # Make shapes consistent - outs[-1 - i] = outs[-1 - i][ - :, :, : outs[-1].shape[2] * 2**i, : outs[-1].shape[3] * 2**i - ] - outputs = [] - for i in range(len(outs)): - if self.share_conv: - outputs.append(self.fpn_conv(outs[i])) - else: - outputs.append(self.fpn_conv[i](outs[i])) - - assert len(self._out_features) == len(outputs) - return dict(zip(self._out_features, outputs)) - - -@BACKBONE_REGISTRY.register() -def build_hrfpn_backbone(cfg, input_shape: ShapeSpec) -> HRFPN: - - in_channels = cfg.MODEL.HRNET.STAGE4.NUM_CHANNELS - in_features = ["p%d" % (i + 1) for i in range(cfg.MODEL.HRNET.STAGE4.NUM_BRANCHES)] - n_out_features = len(cfg.MODEL.ROI_HEADS.IN_FEATURES) - out_channels = cfg.MODEL.HRNET.HRFPN.OUT_CHANNELS - hrnet = build_pose_hrnet_backbone(cfg, input_shape) - hrfpn = HRFPN( - hrnet, - in_features, - n_out_features, - in_channels, - out_channels, - pooling="AVG", - share_conv=False, - ) - - return hrfpn diff --git a/spaces/c-s-ale/ArxivChainLitDemo/app.py b/spaces/c-s-ale/ArxivChainLitDemo/app.py deleted file mode 100644 index 754bfa066afac6db6769d38b2621a8f49cb02084..0000000000000000000000000000000000000000 --- a/spaces/c-s-ale/ArxivChainLitDemo/app.py +++ /dev/null @@ -1,103 +0,0 @@ -from langchain.embeddings.openai import OpenAIEmbeddings -from langchain.document_loaders import PyMuPDFLoader -from langchain.text_splitter import RecursiveCharacterTextSplitter -from langchain.vectorstores import Chroma -from langchain.chains import RetrievalQAWithSourcesChain -from langchain.chat_models import ChatOpenAI -from langchain.prompts.chat import ( - ChatPromptTemplate, - SystemMessagePromptTemplate, - HumanMessagePromptTemplate, -) -import os -import arxiv -import chainlit as cl -from chainlit import user_session - -@cl.langchain_factory(use_async=True) -async def init(): - arxiv_query = None - - # Wait for the user to ask an Arxiv question - while arxiv_query == None: - arxiv_query = await cl.AskUserMessage( - content="Please enter a topic to begin!", timeout=15 - ).send() - - # Obtain the top 30 results from Arxiv for the query - search = arxiv.Search( - query=arxiv_query["content"], - max_results=3, - sort_by=arxiv.SortCriterion.Relevance, - ) - - await cl.Message(content="Downloading and chunking articles...").send() - # download each of the pdfs - pdf_data = [] - for result in search.results(): - loader = PyMuPDFLoader(result.pdf_url) - loaded_pdf = loader.load() - - for document in loaded_pdf: - document.metadata["source"] = result.entry_id - document.metadata["file_path"] = result.pdf_url - document.metadata["title"] = result.title - pdf_data.append(document) - - # Create a Chroma vector store - embeddings = OpenAIEmbeddings( - disallowed_special=(), - ) - - # If operation takes too long, make_async allows to run in a thread - # docsearch = await cl.make_async(Chroma.from_documents)(pdf_data, embeddings) - docsearch = Chroma.from_documents(pdf_data, embeddings) - - # Create a chain that uses the Chroma vector store - chain = RetrievalQAWithSourcesChain.from_chain_type( - ChatOpenAI( - model_name="gpt-3.5-turbo-16k", - temperature=0, - ), - chain_type="stuff", - retriever=docsearch.as_retriever(), - return_source_documents=True, - ) - - # Let the user 
know that the system is ready - await cl.Message( - content=f"We found a few papers about `{arxiv_query['content']}` you can now ask questions!" - ).send() - - return chain - - -@cl.langchain_postprocess -async def process_response(res): - answer = res["answer"] - source_elements_dict = {} - source_elements = [] - for idx, source in enumerate(res["source_documents"]): - title = source.metadata["title"] - - if title not in source_elements_dict: - source_elements_dict[title] = { - "page_number": [source.metadata["page"]], - "url": source.metadata["file_path"], - } - - else: - source_elements_dict[title]["page_number"].append(source.metadata["page"]) - - # sort the page numbers - source_elements_dict[title]["page_number"].sort() - - for title, source in source_elements_dict.items(): - # create a string for the page numbers - page_numbers = ", ".join([str(x) for x in source["page_number"]]) - text_for_source = f"Page Number(s): {page_numbers}\nURL: {source['url']}" - source_elements.append( - cl.Text(name=title, content=text_for_source, display="inline") - ) - - await cl.Message(content=answer, elements=source_elements).send() \ No newline at end of file diff --git a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/ContainerIO.py b/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/ContainerIO.py deleted file mode 100644 index 45e80b39af72c15aa58c08618daa7289d96649d0..0000000000000000000000000000000000000000 --- a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/ContainerIO.py +++ /dev/null @@ -1,120 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# a class to read from a container file -# -# History: -# 1995-06-18 fl Created -# 1995-09-07 fl Added readline(), readlines() -# -# Copyright (c) 1997-2001 by Secret Labs AB -# Copyright (c) 1995 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - - -import io - - -class ContainerIO: - """ - A file object that provides read access to a part of an existing - file (for example a TAR file). - """ - - def __init__(self, file, offset, length): - """ - Create file object. - - :param file: Existing file. - :param offset: Start of region, in bytes. - :param length: Size of region, in bytes. - """ - self.fh = file - self.pos = 0 - self.offset = offset - self.length = length - self.fh.seek(offset) - - ## - # Always false. - - def isatty(self): - return False - - def seek(self, offset, mode=io.SEEK_SET): - """ - Move file pointer. - - :param offset: Offset in bytes. - :param mode: Starting position. Use 0 for beginning of region, 1 - for current offset, and 2 for end of region. You cannot move - the pointer outside the defined region. - """ - if mode == 1: - self.pos = self.pos + offset - elif mode == 2: - self.pos = self.length + offset - else: - self.pos = offset - # clamp - self.pos = max(0, min(self.pos, self.length)) - self.fh.seek(self.offset + self.pos) - - def tell(self): - """ - Get current file pointer. - - :returns: Offset from start of region, in bytes. - """ - return self.pos - - def read(self, n=0): - """ - Read data. - - :param n: Number of bytes to read. If omitted or zero, - read until end of region. - :returns: An 8-bit string. - """ - if n: - n = min(n, self.length - self.pos) - else: - n = self.length - self.pos - if not n: # EOF - return b"" if "b" in self.fh.mode else "" - self.pos = self.pos + n - return self.fh.read(n) - - def readline(self): - """ - Read a line of text. - - :returns: An 8-bit string. 
- """ - s = b"" if "b" in self.fh.mode else "" - newline_character = b"\n" if "b" in self.fh.mode else "\n" - while True: - c = self.read(1) - if not c: - break - s = s + c - if c == newline_character: - break - return s - - def readlines(self): - """ - Read multiple lines of text. - - :returns: A list of 8-bit strings. - """ - lines = [] - while True: - s = self.readline() - if not s: - break - lines.append(s) - return lines diff --git a/spaces/candlend/vits-hoshimi/sovits/hubert/__init__.py b/spaces/candlend/vits-hoshimi/sovits/hubert/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/captchaboy/FAST-ABINet-OCR/transforms.py b/spaces/captchaboy/FAST-ABINet-OCR/transforms.py deleted file mode 100644 index 5a7042f3368bc832566d5c22d1e18abe5d8547f5..0000000000000000000000000000000000000000 --- a/spaces/captchaboy/FAST-ABINet-OCR/transforms.py +++ /dev/null @@ -1,329 +0,0 @@ -import math -import numbers -import random - -import cv2 -import numpy as np -from PIL import Image -from torchvision import transforms -from torchvision.transforms import Compose - - -def sample_asym(magnitude, size=None): - return np.random.beta(1, 4, size) * magnitude - -def sample_sym(magnitude, size=None): - return (np.random.beta(4, 4, size=size) - 0.5) * 2 * magnitude - -def sample_uniform(low, high, size=None): - return np.random.uniform(low, high, size=size) - -def get_interpolation(type='random'): - if type == 'random': - choice = [cv2.INTER_NEAREST, cv2.INTER_LINEAR, cv2.INTER_CUBIC, cv2.INTER_AREA] - interpolation = choice[random.randint(0, len(choice)-1)] - elif type == 'nearest': interpolation = cv2.INTER_NEAREST - elif type == 'linear': interpolation = cv2.INTER_LINEAR - elif type == 'cubic': interpolation = cv2.INTER_CUBIC - elif type == 'area': interpolation = cv2.INTER_AREA - else: raise TypeError('Interpolation types only nearest, linear, cubic, area are supported!') - return interpolation - -class CVRandomRotation(object): - def __init__(self, degrees=15): - assert isinstance(degrees, numbers.Number), "degree should be a single number." - assert degrees >= 0, "degree must be positive." - self.degrees = degrees - - @staticmethod - def get_params(degrees): - return sample_sym(degrees) - - def __call__(self, img): - angle = self.get_params(self.degrees) - src_h, src_w = img.shape[:2] - M = cv2.getRotationMatrix2D(center=(src_w/2, src_h/2), angle=angle, scale=1.0) - abs_cos, abs_sin = abs(M[0,0]), abs(M[0,1]) - dst_w = int(src_h * abs_sin + src_w * abs_cos) - dst_h = int(src_h * abs_cos + src_w * abs_sin) - M[0, 2] += (dst_w - src_w)/2 - M[1, 2] += (dst_h - src_h)/2 - - flags = get_interpolation() - return cv2.warpAffine(img, M, (dst_w, dst_h), flags=flags, borderMode=cv2.BORDER_REPLICATE) - -class CVRandomAffine(object): - def __init__(self, degrees, translate=None, scale=None, shear=None): - assert isinstance(degrees, numbers.Number), "degree should be a single number." - assert degrees >= 0, "degree must be positive." - self.degrees = degrees - - if translate is not None: - assert isinstance(translate, (tuple, list)) and len(translate) == 2, \ - "translate should be a list or tuple and it must be of length 2." - for t in translate: - if not (0.0 <= t <= 1.0): - raise ValueError("translation values should be between 0 and 1") - self.translate = translate - - if scale is not None: - assert isinstance(scale, (tuple, list)) and len(scale) == 2, \ - "scale should be a list or tuple and it must be of length 2." 
- for s in scale: - if s <= 0: - raise ValueError("scale values should be positive") - self.scale = scale - - if shear is not None: - if isinstance(shear, numbers.Number): - if shear < 0: - raise ValueError("If shear is a single number, it must be positive.") - self.shear = [shear] - else: - assert isinstance(shear, (tuple, list)) and (len(shear) == 2), \ - "shear should be a list or tuple and it must be of length 2." - self.shear = shear - else: - self.shear = shear - - def _get_inverse_affine_matrix(self, center, angle, translate, scale, shear): - # https://github.com/pytorch/vision/blob/v0.4.0/torchvision/transforms/functional.py#L717 - from numpy import sin, cos, tan - - if isinstance(shear, numbers.Number): - shear = [shear, 0] - - if not isinstance(shear, (tuple, list)) and len(shear) == 2: - raise ValueError( - "Shear should be a single value or a tuple/list containing " + - "two values. Got {}".format(shear)) - - rot = math.radians(angle) - sx, sy = [math.radians(s) for s in shear] - - cx, cy = center - tx, ty = translate - - # RSS without scaling - a = cos(rot - sy) / cos(sy) - b = -cos(rot - sy) * tan(sx) / cos(sy) - sin(rot) - c = sin(rot - sy) / cos(sy) - d = -sin(rot - sy) * tan(sx) / cos(sy) + cos(rot) - - # Inverted rotation matrix with scale and shear - # det([[a, b], [c, d]]) == 1, since det(rotation) = 1 and det(shear) = 1 - M = [d, -b, 0, - -c, a, 0] - M = [x / scale for x in M] - - # Apply inverse of translation and of center translation: RSS^-1 * C^-1 * T^-1 - M[2] += M[0] * (-cx - tx) + M[1] * (-cy - ty) - M[5] += M[3] * (-cx - tx) + M[4] * (-cy - ty) - - # Apply center translation: C * RSS^-1 * C^-1 * T^-1 - M[2] += cx - M[5] += cy - return M - - @staticmethod - def get_params(degrees, translate, scale_ranges, shears, height): - angle = sample_sym(degrees) - if translate is not None: - max_dx = translate[0] * height - max_dy = translate[1] * height - translations = (np.round(sample_sym(max_dx)), np.round(sample_sym(max_dy))) - else: - translations = (0, 0) - - if scale_ranges is not None: - scale = sample_uniform(scale_ranges[0], scale_ranges[1]) - else: - scale = 1.0 - - if shears is not None: - if len(shears) == 1: - shear = [sample_sym(shears[0]), 0.] 
- elif len(shears) == 2: - shear = [sample_sym(shears[0]), sample_sym(shears[1])] - else: - shear = 0.0 - - return angle, translations, scale, shear - - - def __call__(self, img): - src_h, src_w = img.shape[:2] - angle, translate, scale, shear = self.get_params( - self.degrees, self.translate, self.scale, self.shear, src_h) - - M = self._get_inverse_affine_matrix((src_w/2, src_h/2), angle, (0, 0), scale, shear) - M = np.array(M).reshape(2,3) - - startpoints = [(0, 0), (src_w - 1, 0), (src_w - 1, src_h - 1), (0, src_h - 1)] - project = lambda x, y, a, b, c: int(a*x + b*y + c) - endpoints = [(project(x, y, *M[0]), project(x, y, *M[1])) for x, y in startpoints] - - rect = cv2.minAreaRect(np.array(endpoints)) - bbox = cv2.boxPoints(rect).astype(dtype=np.int) - max_x, max_y = bbox[:, 0].max(), bbox[:, 1].max() - min_x, min_y = bbox[:, 0].min(), bbox[:, 1].min() - - dst_w = int(max_x - min_x) - dst_h = int(max_y - min_y) - M[0, 2] += (dst_w - src_w) / 2 - M[1, 2] += (dst_h - src_h) / 2 - - # add translate - dst_w += int(abs(translate[0])) - dst_h += int(abs(translate[1])) - if translate[0] < 0: M[0, 2] += abs(translate[0]) - if translate[1] < 0: M[1, 2] += abs(translate[1]) - - flags = get_interpolation() - return cv2.warpAffine(img, M, (dst_w , dst_h), flags=flags, borderMode=cv2.BORDER_REPLICATE) - -class CVRandomPerspective(object): - def __init__(self, distortion=0.5): - self.distortion = distortion - - def get_params(self, width, height, distortion): - offset_h = sample_asym(distortion * height / 2, size=4).astype(dtype=np.int) - offset_w = sample_asym(distortion * width / 2, size=4).astype(dtype=np.int) - topleft = ( offset_w[0], offset_h[0]) - topright = (width - 1 - offset_w[1], offset_h[1]) - botright = (width - 1 - offset_w[2], height - 1 - offset_h[2]) - botleft = ( offset_w[3], height - 1 - offset_h[3]) - - startpoints = [(0, 0), (width - 1, 0), (width - 1, height - 1), (0, height - 1)] - endpoints = [topleft, topright, botright, botleft] - return np.array(startpoints, dtype=np.float32), np.array(endpoints, dtype=np.float32) - - def __call__(self, img): - height, width = img.shape[:2] - startpoints, endpoints = self.get_params(width, height, self.distortion) - M = cv2.getPerspectiveTransform(startpoints, endpoints) - - # TODO: more robust way to crop image - rect = cv2.minAreaRect(endpoints) - bbox = cv2.boxPoints(rect).astype(dtype=np.int) - max_x, max_y = bbox[:, 0].max(), bbox[:, 1].max() - min_x, min_y = bbox[:, 0].min(), bbox[:, 1].min() - min_x, min_y = max(min_x, 0), max(min_y, 0) - - flags = get_interpolation() - img = cv2.warpPerspective(img, M, (max_x, max_y), flags=flags, borderMode=cv2.BORDER_REPLICATE) - img = img[min_y:, min_x:] - return img - -class CVRescale(object): - - def __init__(self, factor=4, base_size=(128, 512)): - """ Define image scales using gaussian pyramid and rescale image to target scale. - - Args: - factor: the decayed factor from base size, factor=4 keeps target scale by default. 
- base_size: base size the build the bottom layer of pyramid - """ - if isinstance(factor, numbers.Number): - self.factor = round(sample_uniform(0, factor)) - elif isinstance(factor, (tuple, list)) and len(factor) == 2: - self.factor = round(sample_uniform(factor[0], factor[1])) - else: - raise Exception('factor must be number or list with length 2') - # assert factor is valid - self.base_h, self.base_w = base_size[:2] - - def __call__(self, img): - if self.factor == 0: return img - src_h, src_w = img.shape[:2] - cur_w, cur_h = self.base_w, self.base_h - scale_img = cv2.resize(img, (cur_w, cur_h), interpolation=get_interpolation()) - for _ in range(self.factor): - scale_img = cv2.pyrDown(scale_img) - scale_img = cv2.resize(scale_img, (src_w, src_h), interpolation=get_interpolation()) - return scale_img - -class CVGaussianNoise(object): - def __init__(self, mean=0, var=20): - self.mean = mean - if isinstance(var, numbers.Number): - self.var = max(int(sample_asym(var)), 1) - elif isinstance(var, (tuple, list)) and len(var) == 2: - self.var = int(sample_uniform(var[0], var[1])) - else: - raise Exception('degree must be number or list with length 2') - - def __call__(self, img): - noise = np.random.normal(self.mean, self.var**0.5, img.shape) - img = np.clip(img + noise, 0, 255).astype(np.uint8) - return img - -class CVMotionBlur(object): - def __init__(self, degrees=12, angle=90): - if isinstance(degrees, numbers.Number): - self.degree = max(int(sample_asym(degrees)), 1) - elif isinstance(degrees, (tuple, list)) and len(degrees) == 2: - self.degree = int(sample_uniform(degrees[0], degrees[1])) - else: - raise Exception('degree must be number or list with length 2') - self.angle = sample_uniform(-angle, angle) - - def __call__(self, img): - M = cv2.getRotationMatrix2D((self.degree // 2, self.degree // 2), self.angle, 1) - motion_blur_kernel = np.zeros((self.degree, self.degree)) - motion_blur_kernel[self.degree // 2, :] = 1 - motion_blur_kernel = cv2.warpAffine(motion_blur_kernel, M, (self.degree, self.degree)) - motion_blur_kernel = motion_blur_kernel / self.degree - img = cv2.filter2D(img, -1, motion_blur_kernel) - img = np.clip(img, 0, 255).astype(np.uint8) - return img - -class CVGeometry(object): - def __init__(self, degrees=15, translate=(0.3, 0.3), scale=(0.5, 2.), - shear=(45, 15), distortion=0.5, p=0.5): - self.p = p - type_p = random.random() - if type_p < 0.33: - self.transforms = CVRandomRotation(degrees=degrees) - elif type_p < 0.66: - self.transforms = CVRandomAffine(degrees=degrees, translate=translate, scale=scale, shear=shear) - else: - self.transforms = CVRandomPerspective(distortion=distortion) - - def __call__(self, img): - if random.random() < self.p: - img = np.array(img) - return Image.fromarray(self.transforms(img)) - else: return img - -class CVDeterioration(object): - def __init__(self, var, degrees, factor, p=0.5): - self.p = p - transforms = [] - if var is not None: - transforms.append(CVGaussianNoise(var=var)) - if degrees is not None: - transforms.append(CVMotionBlur(degrees=degrees)) - if factor is not None: - transforms.append(CVRescale(factor=factor)) - - random.shuffle(transforms) - transforms = Compose(transforms) - self.transforms = transforms - - def __call__(self, img): - if random.random() < self.p: - img = np.array(img) - return Image.fromarray(self.transforms(img)) - else: return img - - -class CVColorJitter(object): - def __init__(self, brightness=0.5, contrast=0.5, saturation=0.5, hue=0.1, p=0.5): - self.p = p - self.transforms = 
transforms.ColorJitter(brightness=brightness, contrast=contrast, - saturation=saturation, hue=hue) - - def __call__(self, img): - if random.random() < self.p: return self.transforms(img) - else: return img diff --git a/spaces/ccolas/EmotionPlaylist/utils.py b/spaces/ccolas/EmotionPlaylist/utils.py deleted file mode 100644 index 3371c4ab199e42bec46b991d9dfb780e9a136976..0000000000000000000000000000000000000000 --- a/spaces/ccolas/EmotionPlaylist/utils.py +++ /dev/null @@ -1,193 +0,0 @@ -import numpy as np -import json -import os - -valid_track_infos = {'uri', 'name', 'artist_name', 'popularity', 'artist_genres', 'album', - 'artist_popularity', 'audio_features', 'audio_analysis'} - -def get_all_tracks_from_playlist_uri(sp, playlist_uri): - # get all playlist_tracks - offset = 0 - tracks = [] - done = False - while not done: - new_tracks = sp.playlist_tracks(playlist_uri, offset=offset, limit=100)["items"] - tracks += new_tracks - if len(new_tracks) < 100: - done = True - else: - offset += 100 - return tracks - -def update_data_with_audio_features(sp, uris, data): - assert len(uris) <= 100 - tracks_audio_features = sp.audio_features(uris) - for i in range(len(uris)): - data[uris[i]]['track']['audio_features'] = tracks_audio_features[i] - return data, [] - -def check_all_track_has_audio_features(data): - for uri in data.keys(): - assert 'audio_features' in data[uri]['track'].keys() - -def get_all_tracks_from_playlists(sp, playlist_uris, verbose=False): - if verbose: print(f'Extracting all tracks from {len(playlist_uris)} playlists.') - # load data - cache_path = './cache_track_features_tmp.json' - if True: #not os.path.exists(cache_path): - with open(cache_path, 'w') as f: - json.dump(dict(), f) - with open(cache_path, 'r') as f: - data = json.load(f) - for k in list(data.keys()).copy(): - if k not in playlist_uris: - data.pop(k) - else: - print(k) - if verbose: print(f'\t{len(data.keys())} tracks loaded from cache') - - # for each playlist, extract all tracks, remove doubles - if verbose: print(f'\tScanning tracks for each playlist') - new_additions = 0 - added_uris = [] - for i_playlist, playlist_uri in enumerate(playlist_uris): - new_tracks = get_all_tracks_from_playlist_uri(sp, playlist_uri) - # remove doubles - for new_track in new_tracks: - uri = new_track['track']['uri'].split(':')[-1] - if uri not in set(data.keys()): - genres = sp.artist(new_track['track']['artists'][0]['uri'])['genres'] - new_track['track']['genres'] = genres - data[uri] = new_track - added_uris.append(uri) - new_additions += 1 - # when 100 new added uris, compute their audio features - if len(added_uris) == 100: - data, added_uris = update_data_with_audio_features(sp, added_uris, data) - if (new_additions + 1) % 1000 == 0: - data, added_uris = update_data_with_audio_features(sp, added_uris, data) - check_all_track_has_audio_features(data) - with open(cache_path, 'w') as f: - json.dump(data, f) - if verbose: print(f"\t\t{i_playlist + 1} playlists scanned ({new_additions} new tracks, total: {len(data.keys())} tracks)") - if verbose: print('\tDone.') - data, _ = update_data_with_audio_features(sp, added_uris, data) - check_all_track_has_audio_features(data) - with open(cache_path, 'w') as f: - json.dump(data, f) - return data - - -def get_all_tracks_from_user(sp, user_id='bkayf', verbose=False): - if verbose: print(f'Extracting all tracks from user {user_id}.') - # load data - if user_id == 'bkayf': - cache_path = '../data/bkayf/cache_track_features.json' - if not os.path.exists(cache_path): - with open(cache_path, 
'w') as f: - json.dump(dict(), f) - with open(cache_path, 'r') as f: - data = json.load(f) - else: - data = dict() - if verbose: print(f'\t{len(data.keys())} tracks loaded from cache') - - # first get all playlists - offset = 0 - done = False - playlists = [] - if verbose: print(f'\tScanning playlists.') - while not done: - new_playlists = sp.user_playlists(user_id, offset=offset, limit=50)['items'] - playlists += new_playlists - if len(new_playlists) < 50: - done = True - if verbose: print(f'\t\tfrom {offset} to {offset + len(new_playlists)} (complete).') - else: - if verbose: print(f'\t\tfrom {offset} to {offset + len(new_playlists)},') - offset += 50 - - # for each playlist, extract all tracks, remove doubles - if verbose: print(f'\tScanning tracks for each playlist') - new_additions = 0 - added_uris = [] - for i_playlist, playlist in enumerate(playlists): - if (i_playlist + 1) % 5 == 0: - if verbose: print(f"\t\t{i_playlist + 1} playlists scanned ({new_additions} new tracks, total: {len(data.keys())} tracks)") - playlist_uri = playlist['uri'].split(':')[-1] - new_tracks = get_all_tracks_from_playlist_uri(sp, playlist_uri) - # remove doubles - for new_track in new_tracks: - uri = new_track['track']['uri'].split(':')[-1] - if uri not in set(data.keys()): - data[uri] = new_track - added_uris.append(uri) - new_additions += 1 - # when 100 new added uris, compute their audio features - if len(added_uris) == 100: - data, added_uris = update_data_with_audio_features(sp, added_uris, data) - if (new_additions + 1) % 1000 == 0 and user_id == "bkayf": - data, added_uris = update_data_with_audio_features(sp, added_uris, data) - check_all_track_has_audio_features(data) - with open(cache_path, 'w') as f: - json.dump(data, f) - if verbose: print('\tDone.') - if user_id == "bkayf": - data, _ = update_data_with_audio_features(sp, added_uris, data) - check_all_track_has_audio_features(data) - with open(cache_path, 'w') as f: - json.dump(data, f) - return data - - -def get_uri_from_link(link): - return link.split("?")[0].split("/")[-1] - - - - -def get_track_info_from_playlist_uri(sp, playlist_uri, which_info=['uri'], verbose=False): - output = dict() - assert len(set(which_info) - valid_track_infos) == 0, f"Error which_info. 
Valid infos are: {valid_track_infos}" - - tracks = get_all_tracks_from_playlist_uri(sp, playlist_uri) - if verbose: print(f'Playlist with {len(tracks)} tracks.') - - # prepare artist info if needed - if any([w in which_info for w in ['artist_genres', 'artist_popularity', 'artist_name']]): - artist_uris = [x["track"]["artists"][0]["uri"] for x in tracks] - artist_infos = [sp.artist(artist_uri) for artist_uri in artist_uris] - - for info in which_info: - # print(info) - if info in ['uri', 'name', 'album', 'popularity']: - output[info] = [] - for i_t, x in enumerate(tracks): - print(i_t) - output[info].append(x["track"][info]) - # output[info] = [x["track"][info] for x in tracks] - elif info in ['artist_genres', 'artist_popularity', 'artist_name']: - output[info] = [artist_info[info.split('_')[1]] for artist_info in artist_infos] - elif info == 'album': - output[info] = [x["track"][info]["name"] for x in tracks] - elif info == 'audio_features': - output[info] = [] - for i_t, x in enumerate(tracks): - print(i_t) - output[info].append(sp.audio_features(x["track"]["uri"])) - # output[info] = [sp.audio_features(x["track"]["uri"]) for x in tracks] - elif info == 'audio_analysis': - output[info] = [sp.audio_analysis(x["track"]["uri"]) for x in tracks] - else: - raise NotImplementedError - - return output - -def compute_progress_and_eta(times, iter, total, n_av=3000): - av_time = np.mean(times[-n_av:]) - progress = int(((iter + 1) / total) * 100) - eta_h = int(av_time * (total - iter) // 3600) - eta_m = int((av_time * (total - iter) - (eta_h * 3600)) // 60) - eta_s = int((av_time * (total - iter) - (eta_h * 3600) - eta_m * 60)) - eta = f"Progress: {progress}%, ETA: {eta_h}H{eta_m}M{eta_s}S." - return eta diff --git a/spaces/chenyangqi/FateZero/inference_fatezero.py b/spaces/chenyangqi/FateZero/inference_fatezero.py deleted file mode 100644 index 120a7844fb49ddb3e779be658946d659b4186cba..0000000000000000000000000000000000000000 --- a/spaces/chenyangqi/FateZero/inference_fatezero.py +++ /dev/null @@ -1,127 +0,0 @@ - -from FateZero.test_fatezero import * - -import copy -import gradio as gr - -class merge_config_then_run(): - def __init__(self) -> None: - # Load the tokenizer - pretrained_model_path = 'FateZero/ckpt/stable-diffusion-v1-4' - self.tokenizer = None - self.text_encoder = None - self.vae = None - self.unet = None - - cache_ckpt = True - if cache_ckpt: - self.tokenizer = AutoTokenizer.from_pretrained( - pretrained_model_path, - # 'FateZero/ckpt/stable-diffusion-v1-4', - subfolder="tokenizer", - use_fast=False, - ) - - # Load models and create wrapper for stable diffusion - self.text_encoder = CLIPTextModel.from_pretrained( - pretrained_model_path, - subfolder="text_encoder", - ) - - self.vae = AutoencoderKL.from_pretrained( - pretrained_model_path, - subfolder="vae", - ) - model_config = { - "lora": 160, - # temporal_downsample_time: 4 - "SparseCausalAttention_index": ['mid'], - "least_sc_channel": 640 - } - self.unet = UNetPseudo3DConditionModel.from_2d_model( - os.path.join(pretrained_model_path, "unet"), model_config=model_config - ) - - def run( - self, - # def merge_config_then_run( - model_id, - data_path, - source_prompt, - target_prompt, - cross_replace_steps, - self_replace_steps, - enhance_words, - enhance_words_value, - num_steps, - guidance_scale, - user_input_video=None, - - # Temporal and spatial crop of the video - start_sample_frame=0, - n_sample_frame=8, - stride=1, - left_crop=0, - right_crop=0, - top_crop=0, - bottom_crop=0, - ): - # , ] = inputs - 
default_edit_config='FateZero/config/low_resource_teaser/jeep_watercolor_ddim_10_steps.yaml' - Omegadict_default_edit_config = OmegaConf.load(default_edit_config) - - dataset_time_string = get_time_string() - config_now = copy.deepcopy(Omegadict_default_edit_config) - print(f"config_now['pretrained_model_path'] = model_id {model_id}") - # config_now['pretrained_model_path'] = model_id - config_now['train_dataset']['prompt'] = source_prompt - config_now['train_dataset']['path'] = data_path - # ImageSequenceDataset_dict = { } - offset_dict = { - "left": left_crop, - "right": right_crop, - "top": top_crop, - "bottom": bottom_crop, - } - ImageSequenceDataset_dict = { - "start_sample_frame" : start_sample_frame, - "n_sample_frame" : n_sample_frame, - "sampling_rate" : stride, - "offset": offset_dict, - } - config_now['train_dataset'].update(ImageSequenceDataset_dict) - if user_input_video and data_path is None: - raise gr.Error('You need to upload a video or choose a provided video') - if user_input_video is not None: - if isinstance(user_input_video, str): - config_now['train_dataset']['path'] = user_input_video - elif hasattr(user_input_video, 'name') and user_input_video.name is not None: - config_now['train_dataset']['path'] = user_input_video.name - config_now['validation_sample_logger_config']['prompts'] = [target_prompt] - - - # fatezero config - p2p_config_now = copy.deepcopy(config_now['validation_sample_logger_config']['p2p_config'][0]) - p2p_config_now['cross_replace_steps']['default_'] = cross_replace_steps - p2p_config_now['self_replace_steps'] = self_replace_steps - p2p_config_now['eq_params']['words'] = enhance_words.split(" ") - p2p_config_now['eq_params']['values'] = [enhance_words_value,]*len(p2p_config_now['eq_params']['words']) - config_now['validation_sample_logger_config']['p2p_config'][0] = copy.deepcopy(p2p_config_now) - - - # ddim config - config_now['validation_sample_logger_config']['guidance_scale'] = guidance_scale - config_now['validation_sample_logger_config']['num_inference_steps'] = num_steps - - - logdir = default_edit_config.replace('config', 'result').replace('.yml', '').replace('.yaml', '')+f'_{dataset_time_string}' - config_now['logdir'] = logdir - print(f'Saving at {logdir}') - save_path = test(tokenizer = self.tokenizer, - text_encoder = self.text_encoder, - vae = self.vae, - unet = self.unet, - config=default_edit_config, **config_now) - mp4_path = save_path.replace('_0.gif', '_0_0_0.mp4') - return mp4_path - diff --git a/spaces/cihyFjudo/fairness-paper-search/Accounts Dictionary Free Download PDF A Practical and User-Friendly Guide to Accounting.md b/spaces/cihyFjudo/fairness-paper-search/Accounts Dictionary Free Download PDF A Practical and User-Friendly Guide to Accounting.md deleted file mode 100644 index 040346ae336d1a13da3a03569ce429274fbc060b..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Accounts Dictionary Free Download PDF A Practical and User-Friendly Guide to Accounting.md +++ /dev/null @@ -1,6 +0,0 @@ -

accounts dictionary free download pdf


Download https://tinurli.com/2uwjdd



-
-
-
-
-

diff --git a/spaces/cihyFjudo/fairness-paper-search/Descargar Facebook Chat para KP500 conecta con tus contactos de Instagram desde Messenger.md b/spaces/cihyFjudo/fairness-paper-search/Descargar Facebook Chat para KP500 conecta con tus contactos de Instagram desde Messenger.md deleted file mode 100644 index 2a3b9f28d963dd0018525605f0e1dc4954e96834..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Descargar Facebook Chat para KP500 conecta con tus contactos de Instagram desde Messenger.md +++ /dev/null @@ -1,6 +0,0 @@ -

download facebook chat for kp500


Download File https://tinurli.com/2uwkkk



-
-
-
-
-

diff --git a/spaces/cihyFjudo/fairness-paper-search/Open Kannan Star Song Download 2021.md b/spaces/cihyFjudo/fairness-paper-search/Open Kannan Star Song Download 2021.md deleted file mode 100644 index 9bb76d25a0e3ad899ee1b6fe7bc15d20d513ffbd..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Open Kannan Star Song Download 2021.md +++ /dev/null @@ -1,5 +0,0 @@ -
-

Watch the open kannam star songs video before converting or downloading it; you can preview it by clicking the Watch Video button. The Download MP3 button converts the video to MP3 and the Download MP4 button converts it to MP4; SavefromNets.com lets you download videos from any supported website in MP3, MP4, and other formats.
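For readers who prefer the command line, a separate open-source tool such as yt-dlp can do the same job; this is a minimal sketch, not a feature of SavefromNets.com, and the video URL is a placeholder:

```shell
# Minimal sketch using yt-dlp (a separate, open-source downloader).
# The URL below is a placeholder, not a real video link.
# Extract the audio track and convert it to MP3 (needs ffmpeg installed):
yt-dlp -x --audio-format mp3 "https://example.com/watch?v=PLACEHOLDER"
# Or download the video itself as MP4:
yt-dlp -f mp4 "https://example.com/watch?v=PLACEHOLDER"
```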

-

Open Kannan Star Song Download


Download Zip https://tinurli.com/2uwitv



-
-
\ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Size Zero Full Movie 658 The Ultimate Romantic Comedy for All Sizes.md b/spaces/cihyFjudo/fairness-paper-search/Size Zero Full Movie 658 The Ultimate Romantic Comedy for All Sizes.md deleted file mode 100644 index 6da8f2d902d7fc48ebeaffaec894dd542dc29f8b..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Size Zero Full Movie 658 The Ultimate Romantic Comedy for All Sizes.md +++ /dev/null @@ -1,6 +0,0 @@ -
-

That thing was like a tank, but on legs. Massive. Beyond massive. It towered over all the houses, and every step of its two heavy legs shook us to the core. Thankfully it was slow, but every aspect of it, from its size to the gigantic weapons hanging underneath it, screamed danger. Death. Run.

-

For the Star Wars Anthology Series film Rogue One, the X-wings at the Rebel base were realized by a combination of full-size props and cutouts, similar to the fighters in the original Star Wars film.[34]

-

size zero full movie 658


Download Zip ►►►►► https://tinurli.com/2uwi00



-
-
\ No newline at end of file diff --git a/spaces/cloudstack/CSV-ChatBot/modules/history.py b/spaces/cloudstack/CSV-ChatBot/modules/history.py deleted file mode 100644 index 56b68e963cc1d0c8f644156106a7f6bf1a4d5703..0000000000000000000000000000000000000000 --- a/spaces/cloudstack/CSV-ChatBot/modules/history.py +++ /dev/null @@ -1,57 +0,0 @@ -import os -import streamlit as st -from streamlit_chat import message - - -class ChatHistory: - def __init__(self): - self.history = st.session_state.get("history", []) - st.session_state["history"] = self.history - - def default_greeting(self): - return "Hello ! 👋" - - def default_prompt(self, topic): - return f"Hello! {topic} ask anything about 🤗" - - def initialize_user_history(self): - st.session_state["user"] = [self.default_greeting()] - - def initialize_assistant_history(self, uploaded_file): - st.session_state["assistant"] = [self.default_prompt(uploaded_file.name)] - - def initialize(self, uploaded_file): - if "assistant" not in st.session_state: - self.initialize_assistant_history(uploaded_file) - if "user" not in st.session_state: - self.initialize_user_history() - - def reset(self, uploaded_file): - st.session_state["history"] = [] - self.initialize_user_history() - self.initialize_assistant_history(uploaded_file) - st.session_state["reset_chat"] = False - - def append(self, mode, message): - st.session_state[mode].append(message) - - def generate_messages(self, container): - if st.session_state["assistant"]: - with container: - for i in range(len(st.session_state["assistant"])): - message( - st.session_state["user"][i], - is_user=True, - key=f"{i}_user", - avatar_style="big-smile", - ) - message(st.session_state["assistant"][i], key=str(i), avatar_style="thumbs") - - def load(self): - if os.path.exists(self.history_file): - with open(self.history_file, "r") as f: - self.history = f.read().splitlines() - - def save(self): - with open(self.history_file, "w") as f: - f.write("\n".join(self.history)) \ No newline at end of file diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libxavs.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libxavs.c deleted file mode 100644 index 6c29539f2436d482f6a6cd3cf9cf17ca9b88b589..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libxavs.c +++ /dev/null @@ -1,441 +0,0 @@ -/* - * AVS encoding using the xavs library - * Copyright (C) 2010 Amanda, Y.N. Wu - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include -#include -#include -#include -#include -#include -#include -#include "avcodec.h" -#include "codec_internal.h" -#include "encode.h" -#include "packet_internal.h" -#include "libavutil/internal.h" -#include "libavutil/mem.h" -#include "libavutil/opt.h" - -#define END_OF_STREAM 0x001 - -#define XAVS_PART_I8X8 0x002 /* Analyze i8x8 (requires 8x8 transform) */ -#define XAVS_PART_P8X8 0x010 /* Analyze p16x8, p8x16 and p8x8 */ -#define XAVS_PART_B8X8 0x100 /* Analyze b16x8, b*/ - -typedef struct XavsContext { - AVClass *class; - xavs_param_t params; - xavs_t *enc; - xavs_picture_t pic; - uint8_t *sei; - int sei_size; - int end_of_stream; - float crf; - int cqp; - int b_bias; - float cplxblur; - int direct_pred; - int aud; - int fast_pskip; - int motion_est; - int mbtree; - int mixed_refs; - int b_frame_strategy; - int chroma_offset; - int scenechange_threshold; - int noise_reduction; - - int64_t *pts_buffer; - int out_frame_count; -} XavsContext; - -static void XAVS_log(void *p, int level, const char *fmt, va_list args) -{ - static const int level_map[] = { - [XAVS_LOG_ERROR] = AV_LOG_ERROR, - [XAVS_LOG_WARNING] = AV_LOG_WARNING, - [XAVS_LOG_INFO] = AV_LOG_INFO, - [XAVS_LOG_DEBUG] = AV_LOG_DEBUG - }; - - if (level < 0 || level > XAVS_LOG_DEBUG) - return; - - av_vlog(p, level_map[level], fmt, args); -} - -static int encode_nals(AVCodecContext *ctx, AVPacket *pkt, - xavs_nal_t *nals, int nnal) -{ - XavsContext *x4 = ctx->priv_data; - int64_t size = x4->sei_size; - uint8_t *p, *p_end; - int i, s, ret; - - if (!nnal) - return 0; - - for (i = 0; i < nnal; i++) - size += 3U + nals[i].i_payload; - - if ((ret = ff_get_encode_buffer(ctx, pkt, size, 0)) < 0) - return ret; - p = pkt->data; - p_end = pkt->data + size; - - /* Write the SEI as part of the first frame. */ - if (x4->sei_size > 0 && nnal > 0) { - memcpy(p, x4->sei, x4->sei_size); - p += x4->sei_size; - x4->sei_size = 0; - } - - for (i = 0; i < nnal; i++) { - int size = p_end - p; - s = xavs_nal_encode(p, &size, 1, nals + i); - if (s < 0) - return AVERROR_EXTERNAL; - if (s != 3U + nals[i].i_payload) - return AVERROR_EXTERNAL; - p += s; - } - - return 1; -} - -static int XAVS_frame(AVCodecContext *avctx, AVPacket *pkt, - const AVFrame *frame, int *got_packet) -{ - XavsContext *x4 = avctx->priv_data; - xavs_nal_t *nal; - int nnal, i, ret; - xavs_picture_t pic_out; - int pict_type; - - x4->pic.img.i_csp = XAVS_CSP_I420; - x4->pic.img.i_plane = 3; - - if (frame) { - for (i = 0; i < 3; i++) { - x4->pic.img.plane[i] = frame->data[i]; - x4->pic.img.i_stride[i] = frame->linesize[i]; - } - - x4->pic.i_pts = frame->pts; - x4->pic.i_type = XAVS_TYPE_AUTO; - x4->pts_buffer[avctx->frame_num % (avctx->max_b_frames+1)] = frame->pts; - } - - if (xavs_encoder_encode(x4->enc, &nal, &nnal, - frame? 
&x4->pic: NULL, &pic_out) < 0) - return AVERROR_EXTERNAL; - - ret = encode_nals(avctx, pkt, nal, nnal); - - if (ret < 0) - return ret; - - if (!ret) { - if (!frame && !(x4->end_of_stream)) { - if ((ret = ff_get_encode_buffer(avctx, pkt, 4, 0)) < 0) - return ret; - - pkt->data[0] = 0x0; - pkt->data[1] = 0x0; - pkt->data[2] = 0x01; - pkt->data[3] = 0xb1; - pkt->dts = 2*x4->pts_buffer[(x4->out_frame_count-1)%(avctx->max_b_frames+1)] - - x4->pts_buffer[(x4->out_frame_count-2)%(avctx->max_b_frames+1)]; - x4->end_of_stream = END_OF_STREAM; - *got_packet = 1; - } - return 0; - } - - pkt->pts = pic_out.i_pts; - if (avctx->has_b_frames) { - if (!x4->out_frame_count) - pkt->dts = pkt->pts - (x4->pts_buffer[1] - x4->pts_buffer[0]); - else - pkt->dts = x4->pts_buffer[(x4->out_frame_count-1)%(avctx->max_b_frames+1)]; - } else - pkt->dts = pkt->pts; - - switch (pic_out.i_type) { - case XAVS_TYPE_IDR: - case XAVS_TYPE_I: - pict_type = AV_PICTURE_TYPE_I; - break; - case XAVS_TYPE_P: - pict_type = AV_PICTURE_TYPE_P; - break; - case XAVS_TYPE_B: - case XAVS_TYPE_BREF: - pict_type = AV_PICTURE_TYPE_B; - break; - default: - pict_type = AV_PICTURE_TYPE_NONE; - } - - /* There is no IDR frame in AVS JiZhun */ - /* Sequence header is used as a flag */ - if (pic_out.i_type == XAVS_TYPE_I) { - pkt->flags |= AV_PKT_FLAG_KEY; - } - - ff_side_data_set_encoder_stats(pkt, (pic_out.i_qpplus1 - 1) * FF_QP2LAMBDA, NULL, 0, pict_type); - - x4->out_frame_count++; - *got_packet = ret; - return 0; -} - -static av_cold int XAVS_close(AVCodecContext *avctx) -{ - XavsContext *x4 = avctx->priv_data; - - av_freep(&x4->sei); - av_freep(&x4->pts_buffer); - - if (x4->enc) - xavs_encoder_close(x4->enc); - - return 0; -} - -static av_cold int XAVS_init(AVCodecContext *avctx) -{ - XavsContext *x4 = avctx->priv_data; - - x4->sei_size = 0; - xavs_param_default(&x4->params); - - x4->params.pf_log = XAVS_log; - x4->params.p_log_private = avctx; - x4->params.i_keyint_max = avctx->gop_size; - if (avctx->bit_rate) { - x4->params.rc.i_bitrate = avctx->bit_rate / 1000; - x4->params.rc.i_rc_method = XAVS_RC_ABR; - } - x4->params.rc.i_vbv_buffer_size = avctx->rc_buffer_size / 1000; - x4->params.rc.i_vbv_max_bitrate = avctx->rc_max_rate / 1000; - x4->params.rc.b_stat_write = avctx->flags & AV_CODEC_FLAG_PASS1; - if (avctx->flags & AV_CODEC_FLAG_PASS2) { - x4->params.rc.b_stat_read = 1; - } else { - if (x4->crf >= 0) { - x4->params.rc.i_rc_method = XAVS_RC_CRF; - x4->params.rc.f_rf_constant = x4->crf; - } else if (x4->cqp >= 0) { - x4->params.rc.i_rc_method = XAVS_RC_CQP; - x4->params.rc.i_qp_constant = x4->cqp; - } - } - - if (x4->aud >= 0) - x4->params.b_aud = x4->aud; - if (x4->mbtree >= 0) - x4->params.rc.b_mb_tree = x4->mbtree; - if (x4->direct_pred >= 0) - x4->params.analyse.i_direct_mv_pred = x4->direct_pred; - if (x4->fast_pskip >= 0) - x4->params.analyse.b_fast_pskip = x4->fast_pskip; - if (x4->motion_est >= 0) - x4->params.analyse.i_me_method = x4->motion_est; - if (x4->mixed_refs >= 0) - x4->params.analyse.b_mixed_references = x4->mixed_refs; - if (x4->b_bias != INT_MIN) - x4->params.i_bframe_bias = x4->b_bias; - if (x4->cplxblur >= 0) - x4->params.rc.f_complexity_blur = x4->cplxblur; - - x4->params.i_bframe = avctx->max_b_frames; - /* cabac is not included in AVS JiZhun Profile */ - x4->params.b_cabac = 0; - - x4->params.i_bframe_adaptive = x4->b_frame_strategy; - - avctx->has_b_frames = !!avctx->max_b_frames; - - /* AVS doesn't allow B picture as reference */ - /* The max allowed reference frame number of B is 2 */ - 
x4->params.i_keyint_min = avctx->keyint_min; - if (x4->params.i_keyint_min > x4->params.i_keyint_max) - x4->params.i_keyint_min = x4->params.i_keyint_max; - - x4->params.i_scenecut_threshold = x4->scenechange_threshold; - - // x4->params.b_deblocking_filter = avctx->flags & AV_CODEC_FLAG_LOOP_FILTER; - - x4->params.rc.i_qp_min = avctx->qmin; - x4->params.rc.i_qp_max = avctx->qmax; - x4->params.rc.i_qp_step = avctx->max_qdiff; - - x4->params.rc.f_qcompress = avctx->qcompress; /* 0.0 => cbr, 1.0 => constant qp */ - x4->params.rc.f_qblur = avctx->qblur; /* temporally blur quants */ - - x4->params.i_frame_reference = avctx->refs; - - x4->params.i_width = avctx->width; - x4->params.i_height = avctx->height; - x4->params.vui.i_sar_width = avctx->sample_aspect_ratio.num; - x4->params.vui.i_sar_height = avctx->sample_aspect_ratio.den; - /* This is only used for counting the fps */ - x4->params.i_fps_num = avctx->time_base.den; - x4->params.i_fps_den = avctx->time_base.num; - x4->params.analyse.inter = XAVS_ANALYSE_I8x8 |XAVS_ANALYSE_PSUB16x16| XAVS_ANALYSE_BSUB16x16; - - x4->params.analyse.i_me_range = avctx->me_range; - x4->params.analyse.i_subpel_refine = avctx->me_subpel_quality; - - x4->params.analyse.b_chroma_me = avctx->me_cmp & FF_CMP_CHROMA; - /* AVS P2 only enables 8x8 transform */ - x4->params.analyse.b_transform_8x8 = 1; //avctx->flags2 & AV_CODEC_FLAG2_8X8DCT; - - x4->params.analyse.i_trellis = avctx->trellis; - - x4->params.analyse.i_noise_reduction = x4->noise_reduction; - - if (avctx->level > 0) - x4->params.i_level_idc = avctx->level; - - if (avctx->bit_rate > 0) - x4->params.rc.f_rate_tolerance = - (float)avctx->bit_rate_tolerance / avctx->bit_rate; - - if ((avctx->rc_buffer_size) && - (avctx->rc_initial_buffer_occupancy <= avctx->rc_buffer_size)) { - x4->params.rc.f_vbv_buffer_init = - (float)avctx->rc_initial_buffer_occupancy / avctx->rc_buffer_size; - } else - x4->params.rc.f_vbv_buffer_init = 0.9; - - /* TAG:do we have MB tree RC method */ - /* what is the RC method we are now using? Default NO */ - x4->params.rc.f_ip_factor = 1 / fabs(avctx->i_quant_factor); - x4->params.rc.f_pb_factor = avctx->b_quant_factor; - - x4->params.analyse.i_chroma_qp_offset = x4->chroma_offset; - - x4->params.analyse.b_psnr = avctx->flags & AV_CODEC_FLAG_PSNR; - x4->params.i_log_level = XAVS_LOG_DEBUG; - x4->params.i_threads = avctx->thread_count; - x4->params.b_interlaced = avctx->flags & AV_CODEC_FLAG_INTERLACED_DCT; - - if (avctx->flags & AV_CODEC_FLAG_GLOBAL_HEADER) - x4->params.b_repeat_headers = 0; - - x4->enc = xavs_encoder_open(&x4->params); - if (!x4->enc) - return AVERROR_EXTERNAL; - - if (!FF_ALLOCZ_TYPED_ARRAY(x4->pts_buffer, avctx->max_b_frames + 1)) - return AVERROR(ENOMEM); - - /* TAG: Do we have GLOBAL HEADER in AVS */ - /* We Have PPS and SPS in AVS */ - if (avctx->flags & AV_CODEC_FLAG_GLOBAL_HEADER && 0) { - xavs_nal_t *nal; - int nnal, s, i, size; - uint8_t *p; - - s = xavs_encoder_headers(x4->enc, &nal, &nnal); - - avctx->extradata = p = av_malloc(s); - for (i = 0; i < nnal; i++) { - /* Don't put the SEI in extradata. 
*/ - if (nal[i].i_type == NAL_SEI) { - x4->sei = av_malloc( 5 + nal[i].i_payload * 4 / 3 ); - if (xavs_nal_encode(x4->sei, &x4->sei_size, 1, nal + i) < 0) - return -1; - - continue; - } - size = xavs_nal_encode(p, &s, 1, nal + i); - if (size < 0) - return -1; - p += size; - } - avctx->extradata_size = p - avctx->extradata; - } - return 0; -} - -#define OFFSET(x) offsetof(XavsContext, x) -#define VE AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM -static const AVOption options[] = { - { "crf", "Select the quality for constant quality mode", OFFSET(crf), AV_OPT_TYPE_FLOAT, {.dbl = -1 }, -1, FLT_MAX, VE }, - { "qp", "Constant quantization parameter rate control method",OFFSET(cqp), AV_OPT_TYPE_INT, {.i64 = -1 }, -1, INT_MAX, VE }, - { "b-bias", "Influences how often B-frames are used", OFFSET(b_bias), AV_OPT_TYPE_INT, {.i64 = INT_MIN}, INT_MIN, INT_MAX, VE }, - { "cplxblur", "Reduce fluctuations in QP (before curve compression)", OFFSET(cplxblur), AV_OPT_TYPE_FLOAT, {.dbl = -1 }, -1, FLT_MAX, VE}, - { "direct-pred", "Direct MV prediction mode", OFFSET(direct_pred), AV_OPT_TYPE_INT, {.i64 = -1 }, -1, INT_MAX, VE, "direct-pred" }, - { "none", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = XAVS_DIRECT_PRED_NONE }, 0, 0, VE, "direct-pred" }, - { "spatial", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = XAVS_DIRECT_PRED_SPATIAL }, 0, 0, VE, "direct-pred" }, - { "temporal", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = XAVS_DIRECT_PRED_TEMPORAL }, 0, 0, VE, "direct-pred" }, - { "auto", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = XAVS_DIRECT_PRED_AUTO }, 0, 0, VE, "direct-pred" }, - { "aud", "Use access unit delimiters.", OFFSET(aud), AV_OPT_TYPE_BOOL, {.i64 = -1 }, -1, 1, VE}, - { "mbtree", "Use macroblock tree ratecontrol.", OFFSET(mbtree), AV_OPT_TYPE_BOOL, {.i64 = -1 }, -1, 1, VE}, - { "mixed-refs", "One reference per partition, as opposed to one reference per macroblock", OFFSET(mixed_refs), AV_OPT_TYPE_BOOL, {.i64 = -1}, -1, 1, VE }, - { "fast-pskip", NULL, OFFSET(fast_pskip), AV_OPT_TYPE_BOOL, {.i64 = -1 }, -1, 1, VE}, - { "motion-est", "Set motion estimation method", OFFSET(motion_est), AV_OPT_TYPE_INT, { .i64 = XAVS_ME_DIA }, -1, XAVS_ME_TESA, VE, "motion-est"}, - { "dia", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = XAVS_ME_DIA }, INT_MIN, INT_MAX, VE, "motion-est" }, - { "hex", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = XAVS_ME_HEX }, INT_MIN, INT_MAX, VE, "motion-est" }, - { "umh", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = XAVS_ME_UMH }, INT_MIN, INT_MAX, VE, "motion-est" }, - { "esa", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = XAVS_ME_ESA }, INT_MIN, INT_MAX, VE, "motion-est" }, - { "tesa", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = XAVS_ME_TESA }, INT_MIN, INT_MAX, VE, "motion-est" }, - { "b_strategy", "Strategy to choose between I/P/B-frames", OFFSET(b_frame_strategy), AV_OPT_TYPE_INT, {.i64 = 0 }, 0, 2, VE}, - { "chromaoffset", "QP difference between chroma and luma", OFFSET(chroma_offset), AV_OPT_TYPE_INT, {.i64 = 0 }, INT_MIN, INT_MAX, VE}, - { "sc_threshold", "Scene change threshold", OFFSET(scenechange_threshold), AV_OPT_TYPE_INT, {.i64 = 0 }, 0, INT_MAX, VE}, - { "noise_reduction", "Noise reduction", OFFSET(noise_reduction), AV_OPT_TYPE_INT, {.i64 = 0 }, 0, INT_MAX, VE}, - - { NULL }, -}; - -static const AVClass xavs_class = { - .class_name = "libxavs", - .item_name = av_default_item_name, - .option = options, - .version = LIBAVUTIL_VERSION_INT, -}; - -static const FFCodecDefault xavs_defaults[] = { - { "b", "0" }, - { NULL }, -}; - -const FFCodec ff_libxavs_encoder = { - .p.name = "libxavs", - CODEC_LONG_NAME("libxavs Chinese AVS (Audio 
Video Standard)"), - .p.type = AVMEDIA_TYPE_VIDEO, - .p.id = AV_CODEC_ID_CAVS, - .p.capabilities = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_DELAY | - AV_CODEC_CAP_OTHER_THREADS, - .priv_data_size = sizeof(XavsContext), - .init = XAVS_init, - FF_CODEC_ENCODE_CB(XAVS_frame), - .close = XAVS_close, - .caps_internal = FF_CODEC_CAP_NOT_INIT_THREADSAFE | - FF_CODEC_CAP_AUTO_THREADS, - .p.pix_fmts = (const enum AVPixelFormat[]) { AV_PIX_FMT_YUV420P, AV_PIX_FMT_NONE }, - .p.priv_class = &xavs_class, - .defaults = xavs_defaults, - .p.wrapper_name = "libxavs", -}; diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/loongarch/videodsp_init.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/loongarch/videodsp_init.c deleted file mode 100644 index 92ade4f846636e10a5ccb37ca4182be31bc2690b..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/loongarch/videodsp_init.c +++ /dev/null @@ -1,45 +0,0 @@ -/* - * Copyright (c) 2021 Loongson Technology Corporation Limited - * Contributed by Xiwei Gu - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "libavcodec/videodsp.h" -#include "libavutil/attributes.h" - -static void prefetch_loongarch(const uint8_t *mem, ptrdiff_t stride, int h) -{ - register const uint8_t *p = mem; - - __asm__ volatile ( - "1: \n\t" - "preld 0, %[p], 0 \n\t" - "preld 0, %[p], 32 \n\t" - "addi.d %[h], %[h], -1 \n\t" - "add.d %[p], %[p], %[stride] \n\t" - - "blt $r0, %[h], 1b \n\t" - : [p] "+r" (p), [h] "+r" (h) - : [stride] "r" (stride) - ); -} - -av_cold void ff_videodsp_init_loongarch(VideoDSPContext *ctx, int bpc) -{ - ctx->prefetch = prefetch_loongarch; -} diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Dig Deep MOD Menu APK for Android Latest Version.md b/spaces/congsaPfin/Manga-OCR/logs/Download Dig Deep MOD Menu APK for Android Latest Version.md deleted file mode 100644 index 1c04b0468c555a8cc1b902f5cb98bde0edb2e026..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download Dig Deep MOD Menu APK for Android Latest Version.md +++ /dev/null @@ -1,81 +0,0 @@ -
-

Dig Deep Mod Menu APK: How to Download and Install It

-

Do you love mining games? Do you want to become a rich tycoon by digging for treasure? If yes, then you should try Dig Deep, a fun and addictive idle miner game. But wait, there's more! You can also download Dig Deep Mod Menu APK, a modified version of the game that gives you unlimited money, no ads, and easy controls. Sounds awesome, right? In this article, we will tell you what Dig Deep Mod Menu APK is, what features it has, and how to download and install it on your Android device. Let's get started!

-

What is Dig Deep Mod Menu APK?

-

Dig Deep is a simulator game where you go underground and dig deep for treasure. You can hire workers, upgrade your equipment, and explore different biomes. The game is fun and relaxing, but it can also be challenging and rewarding. You can earn money by selling your loot, and use it to expand your mining empire.

-

dig deep mod menu apk


Download File ---> https://urlca.com/2uO5Nk



-

Dig Deep Mod Menu APK is a modified version of the game that adds some extra benefits: unlimited money, no ads, and easy controls. You can use the money to buy anything you want in the game without worrying about running out, enjoy your sessions without annoying ads interrupting the gameplay, and control your workers and machines with just one tap, which makes the game more convenient and enjoyable.

-

Features of Dig Deep Mod Menu APK

-

Here are some of the features of Dig Deep Mod Menu APK that make it better than the original game:

-

Unlimited money

-

With this mod menu, you can get unlimited money in the game. You can use it to buy workers, equipment, upgrades, and anything else you need. You can also unlock new biomes and discover rare treasures. You don't have to wait for your workers to dig or sell your loot to earn money. You can just tap on the screen and get as much money as you want.

-

No ads

-

Another benefit of this mod menu is that it removes all the ads from the game. You can play Dig Deep without any interruptions or distractions. You don't have to watch videos or click on banners to get rewards or bonuses. You can just focus on digging and having fun.

-

Easy controls

-

The last feature of this mod menu is that it makes the game easier to control. You don't have to swipe or drag on the screen to move your workers or machines; a single tap is enough. You can also tap the icons in the top right corner of the screen to access the shop, settings, achievements, and other options.

-

How to Download and Install Dig Deep Mod Menu APK?

-

If you want to download and install Dig Deep Mod Menu APK on your Android device, you need to follow these simple steps:

-


-

Step 1: Enable unknown sources

-

Before you can install any APK file on your device, you need to enable installation from unknown sources in your settings. This will allow you to install apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on. On Android 8.0 and newer, this permission is granted per app instead: go to Settings > Apps > Special app access > Install unknown apps and allow your browser or file manager.

-

Step 2: Download the APK file

-

Next, you need to download the APK file of Dig Deep Mod Menu from a reliable source. You can use this link to download it directly from the MODAPK website. The file size is about 40 MB, so make sure you have enough space on your device.

-

Step 3: Install the APK file

-

After you have downloaded the APK file, you need to install it on your device. To do this, locate the file in your file manager and tap on it. You will see a pop-up window asking you to confirm the installation. Tap on Install and wait for the process to finish.

-

Step 4: Launch the game and enjoy

-

Once the installation is done, you can launch the game and enjoy the mod menu. You will see a new icon on your home screen with the name Dig Deep Mod Menu. Tap on it and start playing. You will notice that you have unlimited money, no ads, and easy controls. You can also access the mod menu by tapping on the M button on the top left corner of the screen. From there, you can customize your settings and preferences.
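
If you prefer to install from a computer instead of tapping through the file manager, you can also sideload the same APK with adb (Android Debug Bridge). Here is a minimal Python sketch of that route; it assumes USB debugging is enabled and adb is on your PATH, and the file name dig-deep-mod-menu.apk is just a placeholder for whatever you actually downloaded.

```python
import subprocess

# Placeholder name -- use the APK file you actually downloaded.
apk_path = "dig-deep-mod-menu.apk"

# "adb install -r" installs the package, replacing any existing version.
result = subprocess.run(
    ["adb", "install", "-r", apk_path],
    capture_output=True, text=True,
)
print(result.stdout or result.stderr)
```
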

-

Conclusion

-

Dig Deep Mod Menu APK is a great way to enhance your gaming experience and have more fun with Dig Deep. You can get unlimited money, no ads, and easy controls with this mod menu. You can also download and install it easily on your Android device by following the steps we have provided in this article. So, what are you waiting for? Download Dig Deep Mod Menu APK today and start digging for treasure!

-

FAQs

-

Here are some frequently asked questions about Dig Deep Mod Menu APK:

-

Q: Is Dig Deep Mod Menu APK safe to use?

-

A: Yes, Dig Deep Mod Menu APK is safe to use as long as you download it from a trusted source like the MODAPK website. However, you should always be careful when installing any APK file on your device and make sure you have a backup of your data in case something goes wrong.

-

Q: Do I need to root my device to use Dig Deep Mod Menu APK?

-

A: No, you don't need to root your device to use Dig Deep Mod Menu APK. You can install it on any Android device without any root access or special permissions.

-

Q: Will Dig Deep Mod Menu APK affect my game progress?

-

A: No, Dig Deep Mod Menu APK will not affect your game progress. You can play the game normally with or without the mod menu. However, you should be aware that using the mod menu may make the game less challenging and rewarding, as you will have unlimited resources and advantages.

-

Q: Can I update Dig Deep Mod Menu APK?

-

A: Yes, you can update Dig Deep Mod Menu APK whenever there is a new version available. However, you should always check the compatibility of the mod menu with the latest version of the game before updating it. You should also backup your data before updating in case something goes wrong.

-

Q: Can I play Dig Deep online with Dig Deep Mod Menu APK?

-

A: Yes, you can play Dig Deep online with Dig Deep Mod Menu APK. However, you should be careful not to abuse the mod menu or cheat in any way that may affect other players or violate the game's terms of service. You should also respect other players and play fair.

-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Euchre Offline A Fun and Free Card Game to Download and Play Anywhere.md b/spaces/congsaPfin/Manga-OCR/logs/Euchre Offline A Fun and Free Card Game to Download and Play Anywhere.md
deleted file mode 100644
index 6ba6c2e4c61b9aa339227eb8638687239be09ef9..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Euchre Offline A Fun and Free Card Game to Download and Play Anywhere.md
+++ /dev/null
@@ -1,141 +0,0 @@

How to Download Euchre and Enjoy the Classic Card Game

-

Euchre is a trick-taking card game that is popular in many countries, especially in North America, Australia, New Zealand, and Great Britain. It is a fun and challenging game that can be played with friends, family, or online with other players. In this article, we will show you how to download euchre on different devices, and how to play euchre online with some of the best euchre apps and websites. But first, let's learn more about euchre and why you should play it.

-

download euchre


DOWNLOAD » https://urlca.com/2uObjE



-

What is Euchre and Why Should You Play It?

-

Euchre is a card game that originated in Europe in the 18th or 19th century, most likely from an Alsatian game called Jucker. It is played with a deck of 24, 28, or 32 cards, depending on the number of players and the variant. The game involves two teams of two players each, who try to win tricks by playing the highest card of the suit led or the highest trump card. The trump suit is determined by a bidding process, where one team can choose to make the suit of a turned-up card as trump, or name another suit as trump, or pass. The team that chooses trump becomes the "makers", and the other team becomes the "defenders". The makers need to win at least three tricks out of five to score points, while the defenders need to prevent them from doing so. A player who is confident of winning all five tricks can choose to play alone, without their partner's help, for extra points.

-

The History and Origins of Euchre

-

Euchre has a rich and fascinating history that spans across continents and centuries. The earliest reference to euchre dates back to 1810, when it was mentioned in a German book about card games. It was brought to America by German immigrants in Pennsylvania, where it spread throughout the nation. It was also introduced to England by French prisoners of war during the Napoleonic Wars, where it became popular in Cornwall and Devon. Euchre was responsible for introducing the joker into the modern deck of cards, as it was used as the "best bower" or the highest trump card in some variants. Euchre remains one of the most popular card games today, especially in regions where it has a strong cultural presence.

-

The Rules and Variations of Euchre

-

The rules of euchre have changed over time, but they still remain largely similar to those of its early days. The basic rules are as follows:

-
    -
  • The game is played with four players in two teams of two.
  • -
  • A standard 52-card deck is stripped down to 24 cards by removing all cards below 9 (or below 7 in some variants).
  • -
  • Each player is dealt five cards in a three-two or two-three sequence.
  • -
  • The remaining four cards are placed face down on the table, and the top card is turned face up as a prospective trump.
  • -
• The player to the left of the dealer can either accept or reject the turned-up card as trump by saying "I order it up" or "I pass". If they accept it, the dealer picks up the turned-up card and discards one card face down. If they reject it, the next player in clockwise order can do the same.
  • -
• If all four players pass on the turned-up card, the dealer turns it down, and the players may then name a different suit as trump or pass, in turn, starting with the player to the dealer's left, until a suit is named or all players pass again. If all players pass again, the cards are reshuffled and redealt by the next dealer.
  • -
  • The player to the left of the dealer leads the first trick by playing any card from their hand. The other players must follow suit if they can, or play any other card if they cannot. The highest card of the suit led or the highest trump card wins the trick and takes it for their team.
  • -
  • The player who wins the trick leads the next trick, and so on until all five tricks are played.
  • -
• The team that chooses trump is called the makers, and they need to win at least three tricks to score points. The other team is called the defenders, and they need to prevent the makers from winning three tricks. If the makers win three or four tricks, they score one point. If they win all five tricks (a march), they score two points. If the makers fail to take three tricks, they are euchred and the defenders score two points. If a player chooses to play alone without their partner's help, they score four points for winning all five tricks, or one point for winning three or four tricks. (These outcomes are summarized in the short scoring sketch after this list.)
  • -
  • The first team to reach a predetermined number of points, usually 10, wins the game.
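
To make the scoring rules above concrete, here is a small Python sketch that turns one hand's outcome into points, exactly as described in the scoring item above. It is only an illustration; it is not code from any of the apps mentioned later in this article.

```python
def euchre_hand_score(maker_tricks: int, went_alone: bool = False) -> tuple:
    """Return (maker_points, defender_points) for one five-trick hand."""
    if not 0 <= maker_tricks <= 5:
        raise ValueError("a euchre hand has exactly five tricks")
    if maker_tricks <= 2:
        return 0, 2  # makers are euchred: defenders score two points
    if maker_tricks == 5:
        return (4, 0) if went_alone else (2, 0)  # a march (all five tricks)
    return 1, 0  # three or four tricks earn one point

# A lone hand that sweeps all five tricks scores four points.
assert euchre_hand_score(5, went_alone=True) == (4, 0)
```
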
  • -
-

There are many variations of euchre that have different rules, such as British euchre, Australian euchre, Bid euchre, Railroad euchre, and Buck euchre. Some of these variations use different numbers of cards, different scoring systems, different bidding methods, or different roles for the joker. You can find more information about these variations online or in books about euchre.

-

The Benefits of Playing Euchre

-

Playing euchre is not only fun, but also beneficial for your mental and social health. Here are some of the benefits of playing euchre:

-
    -
  • It improves your memory, concentration, and strategic thinking skills by requiring you to remember the cards played, keep track of the trump suit, and plan your moves ahead.
  • -
  • It enhances your communication and teamwork skills by requiring you to cooperate with your partner and signal your intentions without revealing them to your opponents.
  • -
  • It reduces your stress and anxiety levels by providing you with a relaxing and enjoyable activity that distracts you from your worries and problems.
  • -
  • It strengthens your social bonds and friendships by allowing you to interact with other people who share your interest in euchre and have fun together.
  • -
  • It boosts your self-esteem and confidence by giving you a sense of achievement and satisfaction when you win a game or make a good play.
  • -
-

How to Download Euchre on Different Devices

-

If you want to play euchre on your own device, you will need to download an app or a software that supports euchre. There are many options available for different devices, such as Android, iOS, Windows, and Mac. Here are some of the best ones that we recommend:

-


-

Download Euchre on Android

-

If you have an Android device, you can download one of these apps from the Google Play Store:

- - - - - -
| Name | Description | Rating |
| --- | --- | --- |
| Euchre Free: Classic Card Game | This app offers a smooth and realistic euchre experience with customizable settings, smart AI opponents, and online multiplayer mode. You can also track your statistics and achievements. | 4.5/5 |
| Euchre - Offline Free Card Games | This app allows you to play euchre offline with three difficulty levels, adjustable game speed, and various rules options. You can also play online with friends or strangers. | 4.4/5 |
| Euchre Online | This app lets you play euchre online with real players from around the world. You can join public tables or create your own private ones. You can also chat with other players and send emojis. | 4.3/5 |
-

Download Euchre on iOS

-

If you have an iOS device, you can download one of these apps from the App Store:

- - - - - -
| Name | Description | Rating |
| --- | --- | --- |
| Euchre 3D | This app features stunning 3D graphics, realistic animations, and sound effects that make you feel like you are playing euchre in real life. You can play with smart AI opponents or online with other players. You can also customize the game rules, the cards, and the table. | 4.8/5 |
| Euchre - Hardwood Games | This app offers a beautiful and elegant euchre game with high-quality graphics, animations, and sounds. You can play solo or online with other players. You can also choose from different card decks, backgrounds, and avatars. | 4.7/5 |
| Euchre Gold | This app provides a simple and fast euchre game with intuitive controls and smooth gameplay. You can play offline with AI opponents or online with other players. You can also adjust the game settings, such as the difficulty level, the scoring system, and the trump suit. | 4.6/5 |
-

Download Euchre on Windows

-

If you have a Windows device, you can download one of these programs from the Microsoft Store:

- - - - - -
| Name | Description | Rating |
| --- | --- | --- |
| Euchre Free! | This software allows you to play euchre for free on your Windows device. You can play with AI opponents or online with other players. You can also change the game rules, the card design, and the background music. | 4.2/5 |
| Euchre Online | This software lets you play euchre online with real players from around the world. You can join public tables or create your own private ones. You can also chat with other players and send emojis. | 4.1/5 |
| Euchre Club | This software offers a fun and social euchre game with various features and options. You can play offline with AI opponents or online with other players. You can also join clubs, earn coins, unlock achievements, and compete in tournaments. | 4/5 |
-

Download Euchre on Mac

-

If you have a Mac device, you can download one of these programs from the Mac App Store:

- - - - - -
| Name | Description | Rating |
| --- | --- | --- |
| Euchre HD | This software delivers a high-definition euchre game with crisp graphics and smooth gameplay. You can play with AI opponents or online with other players. You can also customize the game rules, the card style, and the table theme. | 4.7/5 |
| Euchre Sandbox | This software provides a simple and easy euchre game with basic features and options. You can play offline with AI opponents or online with other players. You can also change the game settings, such as the difficulty level, the scoring system, and the trump suit. | 4.5/5 |
| Euchre 3D Pro | This software features a realistic and immersive euchre game with 3D graphics, animations, and sound effects. You can play with smart AI opponents or online with other players. You can also choose from different card decks, backgrounds, and avatars. | 4.3/5 |
-

How to Play Euchre Online with Friends or Other Players

-

If you want to play euchre online with your friends or other players, you will need to find an app or a website that supports euchre multiplayer mode. There are many options available for different devices, but here are some of the best ones that we recommend:

-

Trickster Euchre

-

Trickster Euchre is a website that allows you to play euchre online with your friends or other players from around the world. You can create your own custom games or join existing ones. You can also chat with other players, send emojis, and view your statistics and rankings.

-

Euchre 3D

-

Euchre 3D is an app that lets you play euchre online with your friends or other players from around the world. You can create your own private games or join public ones. You can also chat with other players, send emojis, and view your statistics and achievements.

-

Euchre by KARMAN Games

-

Euchre by KARMAN Games is an app that enables you to play euchre online with your friends or other players from around the world. You can create your own games or join existing ones. You can also chat with other players, send emojis, and view your statistics and ratings.

-

Conclusion

-

Euchre is a classic card game that is fun, challenging, and rewarding to play. It is a great way to spend some quality time with your friends, family, or online with other players. It is also a great way to improve your mental and social skills, such as memory, concentration, strategy, communication, and teamwork. If you want to play euchre on your own device, you can download one of the many apps or software that support euchre. You can also play euchre online with some of the best euchre apps and websites that we have recommended in this article. We hope you have enjoyed this article and learned how to download euchre and enjoy the classic card game.

-

FAQs

-

Here are some of the frequently asked questions about euchre:

-
    -
  • Q: How many players can play euchre?
  • -
  • A: Euchre is usually played with four players in two teams of two, but it can also be played with two, three, five, or six players with different rules and variations.
  • -
  • Q: How do you score points in euchre?
  • -
• A: The team that chooses trump is called the makers, and they need to win at least three tricks out of five to score points. The other team is called the defenders, and they need to prevent the makers from winning three tricks. If the makers win three or four tricks, they score one point. If they win all five tricks (a march), they score two points. If the makers fail to take three tricks, they are euchred and the defenders score two points. If a player chooses to play alone without their partner's help, they score four points for winning all five tricks, or one point for winning three or four tricks.
  • -
  • Q: What is the best card in euchre?
  • -
  • A: The best card in euchre is the right bower, which is the jack of the trump suit. The second best card is the left bower, which is the jack of the same color as the trump suit. For example, if spades are trump, then the right bower is the jack of spades and the left bower is the jack of clubs.
  • -
  • Q: How do you signal your partner in euchre?
  • -
  • A: Signaling your partner in euchre is a way of communicating your intentions and information without revealing them to your opponents. There are different ways of signaling your partner, such as using your cards, your bids, or your gestures. For example, you can signal your partner that you have a high card of a certain suit by leading that suit or discarding a low card of that suit. You can also signal your partner that you want them to choose a certain suit as trump by ordering up or passing on that suit. You can also signal your partner that you have a strong hand by playing alone or bidding high.
  • -
  • Q: How do you play euchre online?
  • -
  • A: To play euchre online, you need to find an app or a website that supports euchre multiplayer mode. You can create your own custom games or join existing ones. You can also chat with other players, send emojis, and view your statistics and rankings.
  • -

-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Activate and Use eSIM with Azercell.md b/spaces/congsaPfin/Manga-OCR/logs/How to Activate and Use eSIM with Azercell.md
deleted file mode 100644
index 10ec87663e241dfd3cadc423612a160784280710..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/How to Activate and Use eSIM with Azercell.md
+++ /dev/null
@@ -1,103 +0,0 @@
-

e-sim azercell: A new generation of SIM technology

-

If you are looking for a more convenient, secure, and eco-friendly way to use your mobile phone, you might want to consider switching to e-sim. e-sim is a digital microchip that is embedded in your smartphone, eliminating the need for a physical SIM card. You can activate it by installing the “e-sim profile” of an operator that provides the service, such as Azercell. In this article, we will explain what e-sim is, how it works, and how you can benefit from it with Azercell.

-

e-sim azercell


Download Zip ✒ ✒ ✒ https://urlca.com/2uOcRw



-

What is e-sim and how does it work?

-

e-sim stands for embedded SIM or electronic SIM. It is a new generation of SIM technology that allows you to connect to a mobile network without inserting a plastic SIM card into your phone. Instead, you can download and install the “e-sim profile” of your chosen operator, which contains all the information and settings needed to access their services. You can do this by scanning a QR code or using an app provided by the operator.

-

e-sim works the same way as a traditional SIM card, but with some advantages. For example, you can store multiple e-sim profiles on one device and switch between them easily. You can also change your operator or plan without having to replace your SIM card. Moreover, you can protect your data in case of loss, theft, or damage of your device, as the e-sim cannot be removed or tampered with.

-

The benefits of e-sim over traditional SIM cards

-

There are many reasons why you might want to use e-sim instead of a regular SIM card. Here are some of them:

-
    -
  • More secure: Your data is more secure with e-sim because the virtual card cannot be stolen or lost. You can also remotely lock or erase your e-sim profile if your device is missing or compromised.
  • -
  • More convenient: With e-sim, you can connect several numbers to one smartphone without having to swap SIM cards. This is useful if you have multiple lines for personal or business use, or if you travel frequently and need local numbers.
  • -
  • Eco-friendly: e-sim is completely digital, eliminating the need for plastic cards. This reduces waste and environmental impact.
  • -
-

How to activate e-sim with Azercell

-

If you want to use e-sim with Azercell, you need to have a compatible device and an internet connection (wi-fi or mobile internet). You also need to check if your device's IMEI code is registered. You can do this by clicking here. Then, you can follow these steps:

-

For new numbers

-
    -
  1. Go to www.azercell.com/en/e-sim and fill out the online form with your personal information and the number you want to activate.
  2. -
3. You will receive an email with a QR code and a PIN code. Scan the QR code with your device's camera and enter the PIN code when prompted. (If you are curious what the QR code encodes, see the short sketch after these steps.)
  4. -
  5. Your e-sim profile will be installed on your device and you can start using your new number.
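
If you want to peek at what the emailed QR code contains before scanning it with your camera, OpenCV can decode it from a saved image. This is just an illustrative sketch: the file name azercell_esim_qr.png is a placeholder, and the exact payload format is operator-specific (eSIM activation codes are typically an "LPA:1$..." string that the phone's Add Cellular Plan screen consumes).

```python
import cv2

# Placeholder file name -- save the QR image from the activation e-mail first.
img = cv2.imread("azercell_esim_qr.png")
if img is None:
    raise SystemExit("could not read the QR image file")

detector = cv2.QRCodeDetector()
data, points, _ = detector.detectAndDecode(img)

# detectAndDecode returns an empty string when no QR code is found.
print("QR payload:", data if data else "no QR code detected")
```
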
  6. -
-

For existing numbers

-
    -
  1. Visit any Azercell customer service center and request to transfer your existing number to e-sim. You will need to present your ID card and your device.
  2. -
  3. You will receive a QR code and a PIN code from the customer service representative. Scan the QR code with your device's camera and enter the PIN code when prompted.
  4. -
  5. Your e-sim profile will be installed on your device and you can continue using your existing number.
  6. -
-

Which devices support e-sim and how to check them?

-

Not all devices are compatible with e-sim technology. You need to have a smartphone that has an e-sim chip built-in. Some of the devices that support e-sim are:

-

-
    -
  • iPhone XS, XS Max, XR, 11, 11 Pro, 11 Pro Max, SE (2020), 12, 12 Mini, 12 Pro, 12 Pro Max
  • -
  • Samsung Galaxy S20, S20+, S20 Ultra, S21, S21+, S21 Ultra, Z Flip, Z Fold2
  • -
  • Huawei P40, P40 Pro
  • -
  • Google Pixel 3, 3 XL, 4, 4 XL, 5
  • -
-

To check if your device supports e-sim, you can do the following:

-
    -
1. Go to Settings > General > About on your device and look for the IMEI number. If you see two IMEI numbers, one for primary and one for secondary, it means your device has an e-sim chip. (A quick format check for the IMEI digits themselves is sketched after this list.)
  2. -
  3. Alternatively, you can go to Settings > Cellular > Add Cellular Plan on your device and see if you have the option to scan a QR code or enter a confirmation code. If you do, it means your device supports e-sim.
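
Beyond counting how many IMEIs your phone reports, you can also sanity-check the digits themselves: a 15-digit IMEI ends in a Luhn check digit. Here is a small Python sketch of that check; note it only validates the format and says nothing about whether the IMEI is registered with Azercell.

```python
def imei_looks_valid(imei: str) -> bool:
    """Luhn check for a 15-digit IMEI (format check only)."""
    if len(imei) != 15 or not imei.isdigit():
        return False
    total = 0
    for i, ch in enumerate(imei):
        digit = int(ch)
        if i % 2 == 1:  # double every second digit, left to right
            digit *= 2
            if digit > 9:
                digit -= 9
        total += digit
    return total % 10 == 0

# A commonly used example IMEI passes the check.
print(imei_looks_valid("490154203237518"))  # True
```
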
  4. -
-

How to manage multiple e-sim numbers on one device?

-

One of the advantages of e-sim is that you can have more than one number on one device. You can store up to nine e-sim profiles on your device and switch between them easily. You can also order, transfer, or delete e-sim numbers as you wish. Here is how you can do that:

-

How to order, transfer, or delete e-sim numbers

-

If you want to order a new e-sim number from Azercell, you can follow the same steps as described above for activating e-sim with Azercell. You can choose any number from the available list or keep your current number if you are an existing Azercell customer.

-

If you want to transfer an existing e-sim number from another operator to Azercell, you need to visit an Azercell customer service center and request a portability service. You will need to present your ID card and your device. You will receive a QR code and a PIN code from the customer service representative. Scan the QR code with your device's camera and enter the PIN code when prompted. Your e-sim profile will be transferred to Azercell and you can use their services.

-

If you want to delete an e-sim number from your device, you can go to Settings > Cellular > Cellular Plans on your device and tap on the number you want to remove. Then tap on Remove Cellular Plan and confirm your choice. Your e-sim profile will be deleted from your device and you will not be able to use that number anymore.

-

How to switch between different e-sim profiles

-

If you have more than one e-sim profile on your device, you can switch between them easily depending on your needs. You can do this by going to Settings > Cellular > Cellular Plans on your device and tapping on the number you want to use as your default line. You can also choose which line to use for voice calls, data, iMessage, FaceTime, etc.

-

How to get e-sim from Azercell

-

If you are interested in getting e-sim from Azercell, you have two options: online or offline. Here is how they differ:

-

Online option

-

The online option is convenient and fast. If you are interested in trying e-sim with Azercell, you can order it online or offline and enjoy the benefits of this new technology. If you have any feedback or suggestions, please let us know in the comments below. Thank you for reading and have a great day!

-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/No Limit Drag Racing 2 APK The Best Drag Racing Game Youve Ever Seen.md b/spaces/congsaPfin/Manga-OCR/logs/No Limit Drag Racing 2 APK The Best Drag Racing Game Youve Ever Seen.md
deleted file mode 100644
index 65c737d09396964c612f378f924742353ed04bab..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/No Limit Drag Racing 2 APK The Best Drag Racing Game Youve Ever Seen.md
+++ /dev/null
@@ -1,95 +0,0 @@

No Limit Drag Racing 2 APK: A Guide for Racing Fans

-

If you love drag racing games, you might want to check out No Limit Drag Racing 2, a realistic and immersive racing simulator that lets you customize your car, compete with other players, and progress through a career mode. In this article, we will tell you everything you need to know about No Limit Drag Racing 2 APK, including its features, how to download and install it, and some tips and tricks to help you win more races.

-

no limit drag racing 2 apk


Download File ✑ ✑ ✑ https://urlca.com/2uO9nc



-

What is No Limit Drag Racing 2?

-

No Limit Drag Racing 2 is a sequel to the popular No Limit Drag Racing game that was released in 2015. It is developed by Battle Creek Games, a studio that specializes in racing games. No Limit Drag Racing 2 is available for Android devices and can be downloaded for free from the Google Play Store or from third-party websites as an APK file.

-

Features of No Limit Drag Racing 2

-

No Limit Drag Racing 2 has many features that make it one of the best drag racing games on the market. Here are some of them:

-

Customization

-

In No Limit Drag Racing 2, you can customize your car in various ways, such as changing its color, wheels, decals, hood, spoiler, and more. You can also modify its performance by adjusting the gearing, rev limiter, suspension, timing, fuel delivery, boost, and launch control. You can use the included dyno to test the changes you make and see how they affect your car's power and speed.

-

Multiplayer

-

No Limit Drag Racing 2 allows you to play online with other racers from all over the world. You can join or create a lobby and race against up to four opponents at a time. You can also chat with other players, send them challenges, and join a team to compete in team events. Be careful though, there are some fast folks in this game!

-

Career Mode

-

If you prefer to play solo, you can try the career mode in No Limit Drag Racing 2. In this mode, you start with a basic car and work your way up through different classes of racing. You can earn money by winning races and use it to buy new cars or upgrade your existing ones. You can also unlock new tracks and events as you progress.

-


-

How to Download and Install No Limit Drag Racing 2 APK

-

If you want to download and install No Limit Drag Racing 2 APK on your Android device, here are the requirements and steps you need to follow:

-

Requirements

-
    -
  • An Android device running Android 5.0 or higher.
  • -
  • At least 300 MB of free storage space.
  • -
  • A stable internet connection.
  • -
  • Allow installation of apps from unknown sources in your device settings.
  • -
-

Steps

-
    -
1. Download the No Limit Drag Racing 2 APK file from a reliable website such as APKCombo or the Google Play Store.
  2. -
  3. Locate the downloaded file in your device's file manager and tap on it to start the installation process.
  4. -
  5. Follow the instructions on the screen and wait for the installation to finish.
  6. -
  7. Launch the game and enjoy!
  8. -
-

Tips and Tricks for No Limit Drag Racing 2

-

No Limit Drag Racing 2 is a fun and challenging game that requires skill and strategy to win. Here are some tips and tricks that can help you improve your performance:

-

Tune Your Car

-

Tuning your car is essential to optimize its performance and suit your driving style. You can adjust various parameters such as the gear ratio, the tire pressure, the suspension stiffness, and the camber angle. You can also use the dyno to see how your changes affect the horsepower, torque, and acceleration of your car. Experiment with different settings until you find the best combination for each track and race.
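
To see why gearing matters so much, here is a back-of-the-envelope Python sketch of how engine RPM turns into road speed. The numbers are invented for illustration and are not taken from the game; the point is the trade-off a shorter (numerically higher) gear makes between acceleration and top speed.

```python
import math

def road_speed_kmh(rpm, gear_ratio, final_drive, tire_diameter_m):
    """Road speed implied by engine speed and overall gearing."""
    wheel_rpm = rpm / (gear_ratio * final_drive)
    meters_per_minute = wheel_rpm * math.pi * tire_diameter_m
    return meters_per_minute * 60 / 1000  # convert m/min to km/h

# Same engine speed, two gears: the shorter gear gives far less road speed,
# which is the acceleration-versus-top-speed trade-off you tune around.
print(round(road_speed_kmh(7000, 2.5, 3.7, 0.66), 1))  # short gear
print(round(road_speed_kmh(7000, 1.0, 3.7, 0.66), 1))  # tall gear
```
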

-

Practice Your Launch

-

The launch is one of the most important aspects of drag racing, as it determines how fast you can get off the line and gain an advantage over your opponent. To launch your car effectively, you need to pay attention to the RPM gauge and the green light. You want to rev your engine to the optimal point and release the clutch at the right moment. If you rev too high or too low, you will lose speed and traction. If you release the clutch too early or too late, you will either stall or spin out. Practice your launch until you master the timing and feel of your car.

-

Upgrade Your Parts

-

As you progress through the game, you will face tougher opponents and harder races. To keep up with the competition, you will need to upgrade your car parts and improve its performance. You can buy new parts such as engines, turbos, nitrous, tires, brakes, and more from the shop. You can also sell your old parts to earn some extra cash. Upgrading your parts will not only increase your car's speed and power, but also its appearance and value.

-

Conclusion

-

No Limit Drag Racing 2 is a thrilling and realistic drag racing game that will keep you hooked for hours. You can customize your car, race online or offline, and advance through a career mode. You can also download and install No Limit Drag Racing 2 APK on your Android device easily and safely. If you follow our tips and tricks, you will be able to win more races and become a drag racing legend.

-

FAQs

-
    -
  • Q: How do I use nitrous in No Limit Drag Racing 2?
  • -
  • A: You can use nitrous by tapping on the blue button on the right side of the screen. You can only use nitrous once per race, so make sure you use it wisely.
  • -
  • Q: How do I earn money in No Limit Drag Racing 2?
  • -
  • A: You can earn money by winning races, completing challenges, selling parts, or watching ads. You can also buy money with real money if you want to.
  • -
  • Q: How do I join a team in No Limit Drag Racing 2?
  • -
  • A: You can join a team by going to the multiplayer menu and tapping on the team icon. You can either join an existing team or create your own team.
  • -
  • Q: How do I change the camera view in No Limit Drag Racing 2?
  • -
  • A: You can change the camera view by tapping on the camera icon on the top left corner of the screen. You can choose between three views: cockpit, bumper, or third-person.
  • -
  • Q: How do I reset my progress in No Limit Drag Racing 2?
  • -
  • A: You can reset your progress by going to the settings menu and tapping on the reset button. Be careful though, this will erase all your data and start from scratch.
  • -

-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Sen al Kapimi (Love Is in the Air) Season 1 and 2 Hindi Dubbed Download All Episodes [720p HD] Turkish Romance Comedy Series.md b/spaces/congsaPfin/Manga-OCR/logs/Sen al Kapimi (Love Is in the Air) Season 1 and 2 Hindi Dubbed Download All Episodes [720p HD] Turkish Romance Comedy Series.md
deleted file mode 100644
index 28fdfd02531ca786d3ff2b155d75cf75a74f8825..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Sen al Kapimi (Love Is in the Air) Season 1 and 2 Hindi Dubbed Download All Episodes [720p HD] Turkish Romance Comedy Series.md
+++ /dev/null
@@ -1,158 +0,0 @@
-

Love Is in the Air: A Romantic Turkish Drama Series in Hindi

-

If you are looking for a romantic comedy series with a captivating plot, charming characters, and beautiful scenery, then you should check out Love Is in the Air, a Turkish drama series that is now available in Hindi. In this article, we will tell you what Love Is in the Air is about, who are the main characters and actors, why you should watch it, and how to download it in Hindi.

-

love is in the air download in hindi


DOWNLOAD ->>> https://urlca.com/2uObut



-

Introduction

-

Love Is in the Air (original title: Sen Çal Kapimi) is a Turkish drama series that premiered on July 8, 2020 on FOX Turkey. It has been dubbed in Hindi and other languages and has gained popularity among international audiences. The series has two seasons, with a total of 61 episodes as of June 2023. The series is also known as You Knock on My Door or Knock on My Door.

-

What is Love Is in the Air about?

-

The story revolves around Eda Yildiz, a young and ambitious florist who lost her parents in an accident and is living with her aunt and cousin. She dreams of studying abroad, but her scholarship is cancelled by Serkan Bolat, a wealthy and arrogant businessman who owns the company that sponsors her education. To get her scholarship back, Eda agrees to pretend to be Serkan's fiancee for two months. However, as they spend more time together, they start to develop feelings for each other and face various challenges and misunderstandings.

-

Who are the main characters and actors?

-

The main characters of Love Is in the Air are:

-


-
    -
  • Eda Yildiz, played by Hande Erçel, a beautiful and spirited florist who loves nature and books. She is smart, loyal, and brave, but also stubborn and impulsive. She has a crush on Serkan since she was a child.
  • -
  • Serkan Bolat, played by Kerem Bürsin, a handsome and successful architect who runs his family's company. He is cold, arrogant, and perfectionist, but also caring and protective. He has a troubled past and does not believe in love.
  • -
  • Selin Atakan, played by Neslihan Yeldan, Serkan's ex-girlfriend and business partner. She is elegant, ambitious, and manipulative. She wants to get back with Serkan and tries to sabotage his relationship with Eda.
  • -
  • Ferit Simsek, played by Alican Aytekin, Selin's fiance and Serkan's friend. He is kind, gentle, and supportive. He loves Selin but does not trust her.
  • -
  • Ceren Basarir, played by Evrim Dogan, Eda's best friend and co-worker. She is cheerful, optimistic, and helpful. She has a crush on Ferit.
  • -
  • Kaan Karadag, played by Anil Ilter, Serkan's rival and enemy. He is cunning, greedy, and vengeful. He wants to destroy Serkan's company and reputation.
  • -
-

Why should you watch Love Is in the Air?

-

There are many reasons why you should watch Love Is in the Air, such as:

-
    -
  • It has a captivating plot that will keep you hooked from the first episode to the last. It has romance, comedy, drama, suspense, and twists that will make you laugh, cry, swoon, and gasp.
  • -
  • It has charming characters that will make you fall in love with them. They have chemistry, personality, and growth. They will make you root for them, relate to them, and learn from them.
  • -
  • It has beautiful scenery that will make you want to visit Turkey. The series showcases the stunning landscapes, architecture, culture, and cuisine of Istanbul and other places in Turkey.
  • -
  • It has a catchy soundtrack that will make you hum along. The series features original songs and covers of popular Turkish and international songs that suit the mood and theme of each scene.
  • -
  • It has a loyal fan base that will make you feel part of a community. The series has millions of fans around the world who share their love and support for the show on social media, forums, blogs, and podcasts.
  • -
-

How to download Love Is in the Air in Hindi?

-

If you are wondering how to download Love Is in the Air in Hindi, you have several options to choose from. Here are some of the most popular ones:

-

Option 1: KatMovieHD

-

KatMovieHD is a website that offers free downloads of movies and TV shows in various languages and formats. You can find Love Is in the Air in Hindi on KatMovieHD by following these steps:

-
    -
  1. Go to KatMovieHD and search for Love Is in the Air or Sen Çal Kapimi.
  2. -
  3. Select the season and episode that you want to download and click on it.
  4. -
  5. Choose the quality and language that you prefer and click on the download button.
  6. -
  7. Wait for the download to finish and enjoy watching Love Is in the Air in Hindi.
  8. -
-

Pros and cons of KatMovieHD

-

KatMovieHD has some advantages and disadvantages that you should consider before using it. Here are some of them:

- - - - - - -
| Pros | Cons |
| --- | --- |
| It is free and easy to use. | It may contain ads, pop-ups, and malware that can harm your device or data. |
| It offers a wide range of movies and TV shows in different languages and genres. | It may have low-quality videos, audio, or subtitles that can affect your viewing experience. |
| It updates regularly with new content and features. | It may not have all the episodes or seasons of Love Is in the Air in Hindi or other languages. |
| It allows you to download multiple files at once or resume interrupted downloads. | It may violate the copyright laws or terms of service of the original content creators or distributors. |
-

Option 2: Dailymotion

-

Dailymotion is a video-sharing platform that hosts millions of videos from various categories and sources. You can watch Love Is in the Air in Hindi on Dailymotion by following these steps:

-
    -
  1. Go to Dailymotion and search for Love Is in the Air or Sen Çal Kapimi.
  2. -
  3. Select the season and episode that you want to watch and click on it.
  4. -
  5. Enjoy watching Love Is in the Air in Hindi online or offline by downloading it using a third-party tool or app.
  6. -
-

Pros and cons of Dailymotion

-

Dailymotion has some advantages and disadvantages that you should consider before using it. Here are some of them:

- - - - - - -
| Pros | Cons |
| --- | --- |
| It is free and accessible from any device or browser. | It may contain ads, pop-ups, and malware that can harm your device or data. |
| It offers a large collection of videos from various creators and channels. | It may have low-quality videos, audio, or subtitles that can affect your viewing experience. |
| It supports multiple languages and subtitles for different videos. | It may not have all the episodes or seasons of Love Is in the Air in Hindi or other languages. |
| It allows you to create playlists, follow channels, comment, share, and like videos. | It may delete or block some videos due to copyright infringement or community guidelines violations. |
-

Option 3: YouTube

-

YouTube is a video-sharing platform that hosts billions of videos from various categories and sources. You can watch Love Is in the Air in Hindi on YouTube by following these steps:

    -
  1. Go to YouTube and search for Love Is in the Air or Sen Çal Kapimi.
  2. -
  3. Select the season and episode that you want to watch and click on it.
  4. -
  5. Enjoy watching Love Is in the Air in Hindi online or offline by downloading it using a third-party tool or app.
  6. -
-

Pros and cons of YouTube

-

YouTube has some advantages and disadvantages that you should consider before using it. Here are some of them:

- - - - - - -
| Pros | Cons |
| --- | --- |
| It is free and available on any device or browser. | It may contain ads, pop-ups, and malware that can harm your device or data. |
| It offers a huge variety of videos from various creators and channels. | It may have low-quality videos, audio, or subtitles that can affect your viewing experience. |
| It supports multiple languages and subtitles for different videos. | It may not have all the episodes or seasons of Love Is in the Air in Hindi or other languages. |
| It allows you to create playlists, follow channels, comment, share, and like videos. | It may delete or block some videos due to copyright infringement or community guidelines violations. |
-

Conclusion

-

In conclusion, Love Is in the Air is a romantic Turkish drama series that is worth watching if you are looking for a fun, sweet, and exciting story with lovable characters and stunning scenery. You can download Love Is in the Air in Hindi from various websites or platforms, such as KatMovieHD, Dailymotion, or YouTube. However, you should be aware of the pros and cons of each option and choose the one that suits your preferences and needs. We hope this article has helped you learn more about Love Is in the Air and how to download it in Hindi. Happy watching!

-

Frequently Asked Questions

-

Here are some of the most common questions that people ask about Love Is in the Air and how to download it in Hindi:

-

Q: Is Love Is in the Air based on a true story?

-

A: No, Love Is in the Air is not based on a true story. It is a fictional story created by Ayse Üner Kutlu, who is also the screenwriter of the series.

-

Q: How many seasons and episodes does Love Is in the Air have?

-

A: Love Is in the Air has two seasons, with a total of 61 episodes as of June 2023. Each episode is about 120 minutes long.

-

Q: Where can I watch Love Is in the Air in other languages?

-

A: You can watch Love Is in the Air in other languages on various websites or platforms that offer subtitles or dubbing for the series. Some of the languages that you can find are English, Spanish, French, Arabic, Urdu, and more.

-

Q: What are some of the awards and nominations that Love Is in the Air has received?

-

A: Love Is in the Air has received several awards and nominations for its outstanding performance and popularity. Some of them are:

-
    -
  • The Golden Butterfly Awards for Best Romantic Comedy Series, Best Actress (Hande Erçel), Best Actor (Kerem Bürsin), Best Couple (Hande Erçel and Kerem Bürsin), Best Supporting Actress (Neslihan Yeldan), Best Supporting Actor (Alican Aytekin), Best Screenplay (Ayse Üner Kutlu), and Best Director (Altan Dönmez).
  • -
  • The Ayakli Gazete TV Stars Awards for Best Romantic Comedy Series, Best Actress (Hande Erçel), Best Actor (Kerem Bürsin), Best Couple (Hande Erçel and Kerem Bürsin), Best Supporting Actress (Neslihan Yeldan), Best Supporting Actor (Alican Aytekin), Best Screenplay (Ayse Üner Kutlu), and Best Director (Altan Dönmez).
  • -
  • The Pantene Golden Lens Awards for Best Romantic Comedy Series, Best Actress (Hande Erçel), Best Actor (Kerem Bürsin), Best Couple (Hande Erçel and Kerem Bürsin), Best Supporting Actress (Neslihan Yeldan), Best Supporting Actor (Alican Aytekin), Best Screenplay (Ayse Üner Kutlu), and Best Director (Altan Dönmez).
  • -
  • The E! People's Choice Awards for The Drama Show of 2023, The Female TV Star of 2023 (Hande Erçel), The Male TV Star of 2023 (Kerem Bürsin), and The Bingeworthy Show of 2023.
  • -
-

Q: Where can I find more information and updates about Love Is in the Air?

-

A: You can find more information and updates about Love Is in the Air on its official website, social media accounts, fan pages, and podcasts. Some of the links that you can follow are:

-

-
-
\ No newline at end of file diff --git a/spaces/cooelf/Multimodal-CoT/timm/data/parsers/parser.py b/spaces/cooelf/Multimodal-CoT/timm/data/parsers/parser.py deleted file mode 100644 index 76ab6d18283644702424d0ff2af5832d6d6dd3b7..0000000000000000000000000000000000000000 --- a/spaces/cooelf/Multimodal-CoT/timm/data/parsers/parser.py +++ /dev/null @@ -1,17 +0,0 @@ -from abc import abstractmethod - - -class Parser: - def __init__(self): - pass - - @abstractmethod - def _filename(self, index, basename=False, absolute=False): - pass - - def filename(self, index, basename=False, absolute=False): - return self._filename(index, basename=basename, absolute=absolute) - - def filenames(self, basename=False, absolute=False): - return [self._filename(index, basename=basename, absolute=absolute) for index in range(len(self))] - diff --git a/spaces/cooelf/Multimodal-CoT/timm/models/tresnet.py b/spaces/cooelf/Multimodal-CoT/timm/models/tresnet.py deleted file mode 100644 index 372bfb7bc0ce89241121f8b85ea928f376af8bd5..0000000000000000000000000000000000000000 --- a/spaces/cooelf/Multimodal-CoT/timm/models/tresnet.py +++ /dev/null @@ -1,297 +0,0 @@ -""" -TResNet: High Performance GPU-Dedicated Architecture -https://arxiv.org/pdf/2003.13630.pdf - -Original model: https://github.com/mrT23/TResNet - -""" -from collections import OrderedDict - -import torch -import torch.nn as nn - -from .helpers import build_model_with_cfg -from .layers import SpaceToDepthModule, BlurPool2d, InplaceAbn, ClassifierHead, SEModule -from .registry import register_model - -__all__ = ['tresnet_m', 'tresnet_l', 'tresnet_xl'] - - -def _cfg(url='', **kwargs): - return { - 'url': url, 'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': (7, 7), - 'crop_pct': 0.875, 'interpolation': 'bilinear', - 'mean': (0, 0, 0), 'std': (1, 1, 1), - 'first_conv': 'body.conv1.0', 'classifier': 'head.fc', - **kwargs - } - - -default_cfgs = { - 'tresnet_m': _cfg( - url='https://miil-public-eu.oss-eu-central-1.aliyuncs.com/model-zoo/ImageNet_21K_P/models/timm/tresnet_m_1k_miil_83_1.pth'), - 'tresnet_m_miil_in21k': _cfg( - url='https://miil-public-eu.oss-eu-central-1.aliyuncs.com/model-zoo/ImageNet_21K_P/models/timm/tresnet_m_miil_in21k.pth', num_classes=11221), - 'tresnet_l': _cfg( - url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tresnet/tresnet_l_81_5-235b486c.pth'), - 'tresnet_xl': _cfg( - url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tresnet/tresnet_xl_82_0-a2d51b00.pth'), - 'tresnet_m_448': _cfg( - input_size=(3, 448, 448), pool_size=(14, 14), - url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tresnet/tresnet_m_448-bc359d10.pth'), - 'tresnet_l_448': _cfg( - input_size=(3, 448, 448), pool_size=(14, 14), - url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tresnet/tresnet_l_448-940d0cd1.pth'), - 'tresnet_xl_448': _cfg( - input_size=(3, 448, 448), pool_size=(14, 14), - url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tresnet/tresnet_xl_448-8c1815de.pth') -} - - -def IABN2Float(module: nn.Module) -> nn.Module: - """If `module` is IABN don't use half precision.""" - if isinstance(module, InplaceAbn): - module.float() - for child in module.children(): - IABN2Float(child) - return module - - -def conv2d_iabn(ni, nf, stride, kernel_size=3, groups=1, act_layer="leaky_relu", act_param=1e-2): - return nn.Sequential( - nn.Conv2d( - ni, nf, kernel_size=kernel_size, stride=stride, padding=kernel_size // 2, 
groups=groups, bias=False), - InplaceAbn(nf, act_layer=act_layer, act_param=act_param) - ) - - -class BasicBlock(nn.Module): - expansion = 1 - - def __init__(self, inplanes, planes, stride=1, downsample=None, use_se=True, aa_layer=None): - super(BasicBlock, self).__init__() - if stride == 1: - self.conv1 = conv2d_iabn(inplanes, planes, stride=1, act_param=1e-3) - else: - if aa_layer is None: - self.conv1 = conv2d_iabn(inplanes, planes, stride=2, act_param=1e-3) - else: - self.conv1 = nn.Sequential( - conv2d_iabn(inplanes, planes, stride=1, act_param=1e-3), - aa_layer(channels=planes, filt_size=3, stride=2)) - - self.conv2 = conv2d_iabn(planes, planes, stride=1, act_layer="identity") - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - rd_chs = max(planes * self.expansion // 4, 64) - self.se = SEModule(planes * self.expansion, rd_channels=rd_chs) if use_se else None - - def forward(self, x): - if self.downsample is not None: - shortcut = self.downsample(x) - else: - shortcut = x - - out = self.conv1(x) - out = self.conv2(out) - - if self.se is not None: - out = self.se(out) - - out += shortcut - out = self.relu(out) - return out - - -class Bottleneck(nn.Module): - expansion = 4 - - def __init__(self, inplanes, planes, stride=1, downsample=None, use_se=True, - act_layer="leaky_relu", aa_layer=None): - super(Bottleneck, self).__init__() - self.conv1 = conv2d_iabn( - inplanes, planes, kernel_size=1, stride=1, act_layer=act_layer, act_param=1e-3) - if stride == 1: - self.conv2 = conv2d_iabn( - planes, planes, kernel_size=3, stride=1, act_layer=act_layer, act_param=1e-3) - else: - if aa_layer is None: - self.conv2 = conv2d_iabn( - planes, planes, kernel_size=3, stride=2, act_layer=act_layer, act_param=1e-3) - else: - self.conv2 = nn.Sequential( - conv2d_iabn(planes, planes, kernel_size=3, stride=1, act_layer=act_layer, act_param=1e-3), - aa_layer(channels=planes, filt_size=3, stride=2)) - - reduction_chs = max(planes * self.expansion // 8, 64) - self.se = SEModule(planes, rd_channels=reduction_chs) if use_se else None - - self.conv3 = conv2d_iabn( - planes, planes * self.expansion, kernel_size=1, stride=1, act_layer="identity") - - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - if self.downsample is not None: - shortcut = self.downsample(x) - else: - shortcut = x - - out = self.conv1(x) - out = self.conv2(out) - if self.se is not None: - out = self.se(out) - - out = self.conv3(out) - out = out + shortcut # no inplace - out = self.relu(out) - - return out - - -class TResNet(nn.Module): - def __init__(self, layers, in_chans=3, num_classes=1000, width_factor=1.0, global_pool='fast', drop_rate=0.): - self.num_classes = num_classes - self.drop_rate = drop_rate - super(TResNet, self).__init__() - - aa_layer = BlurPool2d - - # TResnet stages - self.inplanes = int(64 * width_factor) - self.planes = int(64 * width_factor) - conv1 = conv2d_iabn(in_chans * 16, self.planes, stride=1, kernel_size=3) - layer1 = self._make_layer( - BasicBlock, self.planes, layers[0], stride=1, use_se=True, aa_layer=aa_layer) # 56x56 - layer2 = self._make_layer( - BasicBlock, self.planes * 2, layers[1], stride=2, use_se=True, aa_layer=aa_layer) # 28x28 - layer3 = self._make_layer( - Bottleneck, self.planes * 4, layers[2], stride=2, use_se=True, aa_layer=aa_layer) # 14x14 - layer4 = self._make_layer( - Bottleneck, self.planes * 8, layers[3], stride=2, use_se=False, aa_layer=aa_layer) # 7x7 - - # body - self.body = 
nn.Sequential(OrderedDict([ - ('SpaceToDepth', SpaceToDepthModule()), - ('conv1', conv1), - ('layer1', layer1), - ('layer2', layer2), - ('layer3', layer3), - ('layer4', layer4)])) - - self.feature_info = [ - dict(num_chs=self.planes, reduction=2, module=''), # Not with S2D? - dict(num_chs=self.planes, reduction=4, module='body.layer1'), - dict(num_chs=self.planes * 2, reduction=8, module='body.layer2'), - dict(num_chs=self.planes * 4 * Bottleneck.expansion, reduction=16, module='body.layer3'), - dict(num_chs=self.planes * 8 * Bottleneck.expansion, reduction=32, module='body.layer4'), - ] - - # head - self.num_features = (self.planes * 8) * Bottleneck.expansion - self.head = ClassifierHead(self.num_features, num_classes, pool_type=global_pool, drop_rate=drop_rate) - - # model initilization - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='leaky_relu') - elif isinstance(m, nn.BatchNorm2d) or isinstance(m, InplaceAbn): - nn.init.constant_(m.weight, 1) - nn.init.constant_(m.bias, 0) - - # residual connections special initialization - for m in self.modules(): - if isinstance(m, BasicBlock): - m.conv2[1].weight = nn.Parameter(torch.zeros_like(m.conv2[1].weight)) # BN to zero - if isinstance(m, Bottleneck): - m.conv3[1].weight = nn.Parameter(torch.zeros_like(m.conv3[1].weight)) # BN to zero - if isinstance(m, nn.Linear): - m.weight.data.normal_(0, 0.01) - - def _make_layer(self, block, planes, blocks, stride=1, use_se=True, aa_layer=None): - downsample = None - if stride != 1 or self.inplanes != planes * block.expansion: - layers = [] - if stride == 2: - # avg pooling before 1x1 conv - layers.append(nn.AvgPool2d(kernel_size=2, stride=2, ceil_mode=True, count_include_pad=False)) - layers += [conv2d_iabn( - self.inplanes, planes * block.expansion, kernel_size=1, stride=1, act_layer="identity")] - downsample = nn.Sequential(*layers) - - layers = [] - layers.append(block( - self.inplanes, planes, stride, downsample, use_se=use_se, aa_layer=aa_layer)) - self.inplanes = planes * block.expansion - for i in range(1, blocks): - layers.append( - block(self.inplanes, planes, use_se=use_se, aa_layer=aa_layer)) - return nn.Sequential(*layers) - - def get_classifier(self): - return self.head.fc - - def reset_classifier(self, num_classes, global_pool='fast'): - self.head = ClassifierHead( - self.num_features, num_classes, pool_type=global_pool, drop_rate=self.drop_rate) - - def forward_features(self, x): - return self.body(x) - - def forward(self, x): - x = self.forward_features(x) - x = self.head(x) - return x - - -def _create_tresnet(variant, pretrained=False, **kwargs): - return build_model_with_cfg( - TResNet, variant, pretrained, - default_cfg=default_cfgs[variant], - feature_cfg=dict(out_indices=(1, 2, 3, 4), flatten_sequential=True), - **kwargs) - - -@register_model -def tresnet_m(pretrained=False, **kwargs): - model_kwargs = dict(layers=[3, 4, 11, 3], **kwargs) - return _create_tresnet('tresnet_m', pretrained=pretrained, **model_kwargs) - - -@register_model -def tresnet_m_miil_in21k(pretrained=False, **kwargs): - model_kwargs = dict(layers=[3, 4, 11, 3], **kwargs) - return _create_tresnet('tresnet_m_miil_in21k', pretrained=pretrained, **model_kwargs) - - -@register_model -def tresnet_l(pretrained=False, **kwargs): - model_kwargs = dict(layers=[4, 5, 18, 3], width_factor=1.2, **kwargs) - return _create_tresnet('tresnet_l', pretrained=pretrained, **model_kwargs) - - -@register_model -def tresnet_xl(pretrained=False, **kwargs): - 
model_kwargs = dict(layers=[4, 5, 24, 3], width_factor=1.3, **kwargs) - return _create_tresnet('tresnet_xl', pretrained=pretrained, **model_kwargs) - - -@register_model -def tresnet_m_448(pretrained=False, **kwargs): - model_kwargs = dict(layers=[3, 4, 11, 3], **kwargs) - return _create_tresnet('tresnet_m_448', pretrained=pretrained, **model_kwargs) - - -@register_model -def tresnet_l_448(pretrained=False, **kwargs): - model_kwargs = dict(layers=[4, 5, 18, 3], width_factor=1.2, **kwargs) - return _create_tresnet('tresnet_l_448', pretrained=pretrained, **model_kwargs) - - -@register_model -def tresnet_xl_448(pretrained=False, **kwargs): - model_kwargs = dict(layers=[4, 5, 24, 3], width_factor=1.3, **kwargs) - return _create_tresnet('tresnet_xl_448', pretrained=pretrained, **model_kwargs) diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/mobile/android/models/src/main/assets/run_tflite.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/mobile/android/models/src/main/assets/run_tflite.py deleted file mode 100644 index 4b8ebe235758d3d0f3d357c51ed54d78ac7eea8e..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/mobile/android/models/src/main/assets/run_tflite.py +++ /dev/null @@ -1,75 +0,0 @@ -# Flex ops are included in the nightly build of the TensorFlow Python package. You can use TFLite models containing Flex ops by the same Python API as normal TFLite models. The nightly TensorFlow build can be installed with this command: -# Flex ops will be added to the TensorFlow Python package's and the tflite_runtime package from version 2.3 for Linux and 2.4 for other environments. -# https://www.tensorflow.org/lite/guide/ops_select#running_the_model - -# You must use: tf-nightly -# pip install tf-nightly - -import os -import glob -import cv2 -import numpy as np - -import tensorflow as tf - -width=256 -height=256 -model_name="model.tflite" -#model_name="model_quant.tflite" -image_name="dog.jpg" - -# input -img = cv2.imread(image_name) -img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) / 255.0 - -mean=[0.485, 0.456, 0.406] -std=[0.229, 0.224, 0.225] -img = (img - mean) / std - -img_resized = tf.image.resize(img, [width,height], method='bicubic', preserve_aspect_ratio=False) -#img_resized = tf.transpose(img_resized, [2, 0, 1]) -img_input = img_resized.numpy() -reshape_img = img_input.reshape(1,width,height,3) -tensor = tf.convert_to_tensor(reshape_img, dtype=tf.float32) - -# load model -print("Load model...") -interpreter = tf.lite.Interpreter(model_path=model_name) -print("Allocate tensor...") -interpreter.allocate_tensors() -print("Get input/output details...") -input_details = interpreter.get_input_details() -output_details = interpreter.get_output_details() -print("Get input shape...") -input_shape = input_details[0]['shape'] -print(input_shape) -print(input_details) -print(output_details) -#input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32) -print("Set input tensor...") -interpreter.set_tensor(input_details[0]['index'], tensor) - -print("invoke()...") -interpreter.invoke() - -# The function `get_tensor()` returns a copy of the tensor data. -# Use `tensor()` in order to get a pointer to the tensor. 
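-# A zero-copy alternative (assuming the standard tf.lite.Interpreter API):
-#   output_view = interpreter.tensor(output_details[0]['index'])()  # numpy view of the buffer
-#   output = np.copy(output_view)  # copy out before invoke()/allocate_tensors() reuses it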
-print("get output tensor...") -output = interpreter.get_tensor(output_details[0]['index']) -#output = np.squeeze(output) -output = output.reshape(width, height) -#print(output) -prediction = np.array(output) -print("reshape prediction...") -prediction = prediction.reshape(width, height) - -# output file -#prediction = cv2.resize(prediction, (img.shape[1], img.shape[0]), interpolation=cv2.INTER_CUBIC) -print(" Write image to: output.png") -depth_min = prediction.min() -depth_max = prediction.max() -img_out = (255 * (prediction - depth_min) / (depth_max - depth_min)).astype("uint8") -print("save output image...") -cv2.imwrite("output.png", img_out) - -print("finished") \ No newline at end of file diff --git a/spaces/cscan/CodeFormer/CodeFormer/basicsr/archs/vqgan_arch.py b/spaces/cscan/CodeFormer/CodeFormer/basicsr/archs/vqgan_arch.py deleted file mode 100644 index f6dfcf4c9983b431f0a978701e5ddd9598faf381..0000000000000000000000000000000000000000 --- a/spaces/cscan/CodeFormer/CodeFormer/basicsr/archs/vqgan_arch.py +++ /dev/null @@ -1,435 +0,0 @@ -''' -VQGAN code, adapted from the original created by the Unleashing Transformers authors: -https://github.com/samb-t/unleashing-transformers/blob/master/models/vqgan.py - -''' -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -import copy -from basicsr.utils import get_root_logger -from basicsr.utils.registry import ARCH_REGISTRY - -def normalize(in_channels): - return torch.nn.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True) - - -@torch.jit.script -def swish(x): - return x*torch.sigmoid(x) - - -# Define VQVAE classes -class VectorQuantizer(nn.Module): - def __init__(self, codebook_size, emb_dim, beta): - super(VectorQuantizer, self).__init__() - self.codebook_size = codebook_size # number of embeddings - self.emb_dim = emb_dim # dimension of embedding - self.beta = beta # commitment cost used in loss term, beta * ||z_e(x)-sg[e]||^2 - self.embedding = nn.Embedding(self.codebook_size, self.emb_dim) - self.embedding.weight.data.uniform_(-1.0 / self.codebook_size, 1.0 / self.codebook_size) - - def forward(self, z): - # reshape z -> (batch, height, width, channel) and flatten - z = z.permute(0, 2, 3, 1).contiguous() - z_flattened = z.view(-1, self.emb_dim) - - # distances from z to embeddings e_j (z - e)^2 = z^2 + e^2 - 2 e * z - d = (z_flattened ** 2).sum(dim=1, keepdim=True) + (self.embedding.weight**2).sum(1) - \ - 2 * torch.matmul(z_flattened, self.embedding.weight.t()) - - mean_distance = torch.mean(d) - # find closest encodings - # min_encoding_indices = torch.argmin(d, dim=1).unsqueeze(1) - min_encoding_scores, min_encoding_indices = torch.topk(d, 1, dim=1, largest=False) - # [0-1], higher score, higher confidence - min_encoding_scores = torch.exp(-min_encoding_scores/10) - - min_encodings = torch.zeros(min_encoding_indices.shape[0], self.codebook_size).to(z) - min_encodings.scatter_(1, min_encoding_indices, 1) - - # get quantized latent vectors - z_q = torch.matmul(min_encodings, self.embedding.weight).view(z.shape) - # compute loss for embedding - loss = torch.mean((z_q.detach()-z)**2) + self.beta * torch.mean((z_q - z.detach()) ** 2) - # preserve gradients - z_q = z + (z_q - z).detach() - - # perplexity - e_mean = torch.mean(min_encodings, dim=0) - perplexity = torch.exp(-torch.sum(e_mean * torch.log(e_mean + 1e-10))) - # reshape back to match original input shape - z_q = z_q.permute(0, 3, 1, 2).contiguous() - - return z_q, loss, { - "perplexity": perplexity, - "min_encodings": 
min_encodings, - "min_encoding_indices": min_encoding_indices, - "min_encoding_scores": min_encoding_scores, - "mean_distance": mean_distance - } - - def get_codebook_feat(self, indices, shape): - # input indices: batch*token_num -> (batch*token_num)*1 - # shape: batch, height, width, channel - indices = indices.view(-1,1) - min_encodings = torch.zeros(indices.shape[0], self.codebook_size).to(indices) - min_encodings.scatter_(1, indices, 1) - # get quantized latent vectors - z_q = torch.matmul(min_encodings.float(), self.embedding.weight) - - if shape is not None: # reshape back to match original input shape - z_q = z_q.view(shape).permute(0, 3, 1, 2).contiguous() - - return z_q - - -class GumbelQuantizer(nn.Module): - def __init__(self, codebook_size, emb_dim, num_hiddens, straight_through=False, kl_weight=5e-4, temp_init=1.0): - super().__init__() - self.codebook_size = codebook_size # number of embeddings - self.emb_dim = emb_dim # dimension of embedding - self.straight_through = straight_through - self.temperature = temp_init - self.kl_weight = kl_weight - self.proj = nn.Conv2d(num_hiddens, codebook_size, 1) # projects last encoder layer to quantized logits - self.embed = nn.Embedding(codebook_size, emb_dim) - - def forward(self, z): - hard = self.straight_through if self.training else True - - logits = self.proj(z) - - soft_one_hot = F.gumbel_softmax(logits, tau=self.temperature, dim=1, hard=hard) - - z_q = torch.einsum("b n h w, n d -> b d h w", soft_one_hot, self.embed.weight) - - # + kl divergence to the prior loss - qy = F.softmax(logits, dim=1) - diff = self.kl_weight * torch.sum(qy * torch.log(qy * self.codebook_size + 1e-10), dim=1).mean() - min_encoding_indices = soft_one_hot.argmax(dim=1) - - return z_q, diff, { - "min_encoding_indices": min_encoding_indices - } - - -class Downsample(nn.Module): - def __init__(self, in_channels): - super().__init__() - self.conv = torch.nn.Conv2d(in_channels, in_channels, kernel_size=3, stride=2, padding=0) - - def forward(self, x): - pad = (0, 1, 0, 1) - x = torch.nn.functional.pad(x, pad, mode="constant", value=0) - x = self.conv(x) - return x - - -class Upsample(nn.Module): - def __init__(self, in_channels): - super().__init__() - self.conv = nn.Conv2d(in_channels, in_channels, kernel_size=3, stride=1, padding=1) - - def forward(self, x): - x = F.interpolate(x, scale_factor=2.0, mode="nearest") - x = self.conv(x) - - return x - - -class ResBlock(nn.Module): - def __init__(self, in_channels, out_channels=None): - super(ResBlock, self).__init__() - self.in_channels = in_channels - self.out_channels = in_channels if out_channels is None else out_channels - self.norm1 = normalize(in_channels) - self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=1, padding=1) - self.norm2 = normalize(out_channels) - self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3, stride=1, padding=1) - if self.in_channels != self.out_channels: - self.conv_out = nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=1, padding=0) - - def forward(self, x_in): - x = x_in - x = self.norm1(x) - x = swish(x) - x = self.conv1(x) - x = self.norm2(x) - x = swish(x) - x = self.conv2(x) - if self.in_channels != self.out_channels: - x_in = self.conv_out(x_in) - - return x + x_in - - -class AttnBlock(nn.Module): - def __init__(self, in_channels): - super().__init__() - self.in_channels = in_channels - - self.norm = normalize(in_channels) - self.q = torch.nn.Conv2d( - in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0 - ) - self.k 
= torch.nn.Conv2d( - in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0 - ) - self.v = torch.nn.Conv2d( - in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0 - ) - self.proj_out = torch.nn.Conv2d( - in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0 - ) - - def forward(self, x): - h_ = x - h_ = self.norm(h_) - q = self.q(h_) - k = self.k(h_) - v = self.v(h_) - - # compute attention - b, c, h, w = q.shape - q = q.reshape(b, c, h*w) - q = q.permute(0, 2, 1) - k = k.reshape(b, c, h*w) - w_ = torch.bmm(q, k) - w_ = w_ * (int(c)**(-0.5)) - w_ = F.softmax(w_, dim=2) - - # attend to values - v = v.reshape(b, c, h*w) - w_ = w_.permute(0, 2, 1) - h_ = torch.bmm(v, w_) - h_ = h_.reshape(b, c, h, w) - - h_ = self.proj_out(h_) - - return x+h_ - - -class Encoder(nn.Module): - def __init__(self, in_channels, nf, emb_dim, ch_mult, num_res_blocks, resolution, attn_resolutions): - super().__init__() - self.nf = nf - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.resolution = resolution - self.attn_resolutions = attn_resolutions - - curr_res = self.resolution - in_ch_mult = (1,)+tuple(ch_mult) - - blocks = [] - # initial convultion - blocks.append(nn.Conv2d(in_channels, nf, kernel_size=3, stride=1, padding=1)) - - # residual and downsampling blocks, with attention on smaller res (16x16) - for i in range(self.num_resolutions): - block_in_ch = nf * in_ch_mult[i] - block_out_ch = nf * ch_mult[i] - for _ in range(self.num_res_blocks): - blocks.append(ResBlock(block_in_ch, block_out_ch)) - block_in_ch = block_out_ch - if curr_res in attn_resolutions: - blocks.append(AttnBlock(block_in_ch)) - - if i != self.num_resolutions - 1: - blocks.append(Downsample(block_in_ch)) - curr_res = curr_res // 2 - - # non-local attention block - blocks.append(ResBlock(block_in_ch, block_in_ch)) - blocks.append(AttnBlock(block_in_ch)) - blocks.append(ResBlock(block_in_ch, block_in_ch)) - - # normalise and convert to latent size - blocks.append(normalize(block_in_ch)) - blocks.append(nn.Conv2d(block_in_ch, emb_dim, kernel_size=3, stride=1, padding=1)) - self.blocks = nn.ModuleList(blocks) - - def forward(self, x): - for block in self.blocks: - x = block(x) - - return x - - -class Generator(nn.Module): - def __init__(self, nf, emb_dim, ch_mult, res_blocks, img_size, attn_resolutions): - super().__init__() - self.nf = nf - self.ch_mult = ch_mult - self.num_resolutions = len(self.ch_mult) - self.num_res_blocks = res_blocks - self.resolution = img_size - self.attn_resolutions = attn_resolutions - self.in_channels = emb_dim - self.out_channels = 3 - block_in_ch = self.nf * self.ch_mult[-1] - curr_res = self.resolution // 2 ** (self.num_resolutions-1) - - blocks = [] - # initial conv - blocks.append(nn.Conv2d(self.in_channels, block_in_ch, kernel_size=3, stride=1, padding=1)) - - # non-local attention block - blocks.append(ResBlock(block_in_ch, block_in_ch)) - blocks.append(AttnBlock(block_in_ch)) - blocks.append(ResBlock(block_in_ch, block_in_ch)) - - for i in reversed(range(self.num_resolutions)): - block_out_ch = self.nf * self.ch_mult[i] - - for _ in range(self.num_res_blocks): - blocks.append(ResBlock(block_in_ch, block_out_ch)) - block_in_ch = block_out_ch - - if curr_res in self.attn_resolutions: - blocks.append(AttnBlock(block_in_ch)) - - if i != 0: - blocks.append(Upsample(block_in_ch)) - curr_res = curr_res * 2 - - blocks.append(normalize(block_in_ch)) - blocks.append(nn.Conv2d(block_in_ch, self.out_channels, kernel_size=3, stride=1, 
padding=1)) - - self.blocks = nn.ModuleList(blocks) - - - def forward(self, x): - for block in self.blocks: - x = block(x) - - return x - - -@ARCH_REGISTRY.register() -class VQAutoEncoder(nn.Module): - def __init__(self, img_size, nf, ch_mult, quantizer="nearest", res_blocks=2, attn_resolutions=[16], codebook_size=1024, emb_dim=256, - beta=0.25, gumbel_straight_through=False, gumbel_kl_weight=1e-8, model_path=None): - super().__init__() - logger = get_root_logger() - self.in_channels = 3 - self.nf = nf - self.n_blocks = res_blocks - self.codebook_size = codebook_size - self.embed_dim = emb_dim - self.ch_mult = ch_mult - self.resolution = img_size - self.attn_resolutions = attn_resolutions - self.quantizer_type = quantizer - self.encoder = Encoder( - self.in_channels, - self.nf, - self.embed_dim, - self.ch_mult, - self.n_blocks, - self.resolution, - self.attn_resolutions - ) - if self.quantizer_type == "nearest": - self.beta = beta #0.25 - self.quantize = VectorQuantizer(self.codebook_size, self.embed_dim, self.beta) - elif self.quantizer_type == "gumbel": - self.gumbel_num_hiddens = emb_dim - self.straight_through = gumbel_straight_through - self.kl_weight = gumbel_kl_weight - self.quantize = GumbelQuantizer( - self.codebook_size, - self.embed_dim, - self.gumbel_num_hiddens, - self.straight_through, - self.kl_weight - ) - self.generator = Generator( - self.nf, - self.embed_dim, - self.ch_mult, - self.n_blocks, - self.resolution, - self.attn_resolutions - ) - - if model_path is not None: - chkpt = torch.load(model_path, map_location='cpu') - if 'params_ema' in chkpt: - self.load_state_dict(torch.load(model_path, map_location='cpu')['params_ema']) - logger.info(f'vqgan is loaded from: {model_path} [params_ema]') - elif 'params' in chkpt: - self.load_state_dict(torch.load(model_path, map_location='cpu')['params']) - logger.info(f'vqgan is loaded from: {model_path} [params]') - else: - raise ValueError(f'Wrong params!') - - - def forward(self, x): - x = self.encoder(x) - quant, codebook_loss, quant_stats = self.quantize(x) - x = self.generator(quant) - return x, codebook_loss, quant_stats - - - -# patch based discriminator -@ARCH_REGISTRY.register() -class VQGANDiscriminator(nn.Module): - def __init__(self, nc=3, ndf=64, n_layers=4, model_path=None): - super().__init__() - - layers = [nn.Conv2d(nc, ndf, kernel_size=4, stride=2, padding=1), nn.LeakyReLU(0.2, True)] - ndf_mult = 1 - ndf_mult_prev = 1 - for n in range(1, n_layers): # gradually increase the number of filters - ndf_mult_prev = ndf_mult - ndf_mult = min(2 ** n, 8) - layers += [ - nn.Conv2d(ndf * ndf_mult_prev, ndf * ndf_mult, kernel_size=4, stride=2, padding=1, bias=False), - nn.BatchNorm2d(ndf * ndf_mult), - nn.LeakyReLU(0.2, True) - ] - - ndf_mult_prev = ndf_mult - ndf_mult = min(2 ** n_layers, 8) - - layers += [ - nn.Conv2d(ndf * ndf_mult_prev, ndf * ndf_mult, kernel_size=4, stride=1, padding=1, bias=False), - nn.BatchNorm2d(ndf * ndf_mult), - nn.LeakyReLU(0.2, True) - ] - - layers += [ - nn.Conv2d(ndf * ndf_mult, 1, kernel_size=4, stride=1, padding=1)] # output 1 channel prediction map - self.main = nn.Sequential(*layers) - - if model_path is not None: - chkpt = torch.load(model_path, map_location='cpu') - if 'params_d' in chkpt: - self.load_state_dict(torch.load(model_path, map_location='cpu')['params_d']) - elif 'params' in chkpt: - self.load_state_dict(torch.load(model_path, map_location='cpu')['params']) - else: - raise ValueError(f'Wrong params!') - - def forward(self, x): - return self.main(x) \ No newline at end of file 
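The VectorQuantizer in the vqgan_arch.py diff above snaps each latent vector to its nearest codebook entry and keeps gradients flowing with a straight-through estimator. A minimal standalone sketch of that lookup (shapes and names here are illustrative, not taken from the file):

```python
import torch

def nearest_code(z_flat: torch.Tensor, codebook: torch.Tensor):
    # ||z - e||^2 = ||z||^2 + ||e||^2 - 2 z.e, computed without materialising all pairs
    d = (z_flat ** 2).sum(1, keepdim=True) + (codebook ** 2).sum(1) \
        - 2 * z_flat @ codebook.t()
    idx = d.argmin(dim=1)                    # closest codebook entry per latent
    z_q = codebook[idx]                      # quantised latents
    z_q = z_flat + (z_q - z_flat).detach()   # straight-through: gradients bypass argmin
    return z_q, idx

z_q, idx = nearest_code(torch.randn(8, 256), torch.randn(1024, 256))
```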
diff --git a/spaces/datasciencemmw/ContextXLA-beta-demo/README.md b/spaces/datasciencemmw/ContextXLA-beta-demo/README.md deleted file mode 100644 index f38189c7f9c8d95b2215a01fad462e494138b514..0000000000000000000000000000000000000000 --- a/spaces/datasciencemmw/ContextXLA-beta-demo/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ContextXLA Beta Demo -emoji: 📊 -colorFrom: blue -colorTo: purple -sdk: gradio -sdk_version: 3.10.1 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/declare-lab/tango/diffusers/tests/pipelines/score_sde_ve/test_score_sde_ve.py b/spaces/declare-lab/tango/diffusers/tests/pipelines/score_sde_ve/test_score_sde_ve.py deleted file mode 100644 index 036ecc3f6bf3c3a61780933c0a404ca91abe5dc4..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/tests/pipelines/score_sde_ve/test_score_sde_ve.py +++ /dev/null @@ -1,91 +0,0 @@ -# coding=utf-8 -# Copyright 2023 HuggingFace Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import unittest - -import numpy as np -import torch - -from diffusers import ScoreSdeVePipeline, ScoreSdeVeScheduler, UNet2DModel -from diffusers.utils.testing_utils import require_torch, slow, torch_device - - -torch.backends.cuda.matmul.allow_tf32 = False - - -class ScoreSdeVeipelineFastTests(unittest.TestCase): - @property - def dummy_uncond_unet(self): - torch.manual_seed(0) - model = UNet2DModel( - block_out_channels=(32, 64), - layers_per_block=2, - sample_size=32, - in_channels=3, - out_channels=3, - down_block_types=("DownBlock2D", "AttnDownBlock2D"), - up_block_types=("AttnUpBlock2D", "UpBlock2D"), - ) - return model - - def test_inference(self): - unet = self.dummy_uncond_unet - scheduler = ScoreSdeVeScheduler() - - sde_ve = ScoreSdeVePipeline(unet=unet, scheduler=scheduler) - sde_ve.to(torch_device) - sde_ve.set_progress_bar_config(disable=None) - - generator = torch.manual_seed(0) - image = sde_ve(num_inference_steps=2, output_type="numpy", generator=generator).images - - generator = torch.manual_seed(0) - image_from_tuple = sde_ve(num_inference_steps=2, output_type="numpy", generator=generator, return_dict=False)[ - 0 - ] - - image_slice = image[0, -3:, -3:, -1] - image_from_tuple_slice = image_from_tuple[0, -3:, -3:, -1] - - assert image.shape == (1, 32, 32, 3) - expected_slice = np.array([0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 - assert np.abs(image_from_tuple_slice.flatten() - expected_slice).max() < 1e-2 - - -@slow -@require_torch -class ScoreSdeVePipelineIntegrationTests(unittest.TestCase): - def test_inference(self): - model_id = "google/ncsnpp-church-256" - model = UNet2DModel.from_pretrained(model_id) - - scheduler = ScoreSdeVeScheduler.from_pretrained(model_id) - - sde_ve = ScoreSdeVePipeline(unet=model, scheduler=scheduler) - sde_ve.to(torch_device) - sde_ve.set_progress_bar_config(disable=None) - - 
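-        # torch.manual_seed(0) returns the seeded default torch.Generator; passing it
-        # fixes the initial noise so the sampled image matches expected_slice below.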
generator = torch.manual_seed(0) - image = sde_ve(num_inference_steps=10, output_type="numpy", generator=generator).images - - image_slice = image[0, -3:, -3:, -1] - - assert image.shape == (1, 256, 256, 3) - - expected_slice = np.array([0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 diff --git a/spaces/deeplearning/audioldm-text-to-audio-generation/README.md b/spaces/deeplearning/audioldm-text-to-audio-generation/README.md deleted file mode 100644 index 34881f9775d9d094bafdcdb8a10a7b25c0c7f5b2..0000000000000000000000000000000000000000 --- a/spaces/deeplearning/audioldm-text-to-audio-generation/README.md +++ /dev/null @@ -1,22 +0,0 @@ ---- -title: Audioldm Text To Audio Generation -emoji: 🔊 -colorFrom: indigo -colorTo: red -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false -license: bigscience-openrail-m -duplicated_from: AIFILMS/audioldm-text-to-audio-generation ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - -## Reference -Part of the code from this repo is borrowed from the following repos. We would like to thank the authors of them for their contribution. - -> https://github.com/LAION-AI/CLAP -> https://github.com/CompVis/stable-diffusion -> https://github.com/v-iashin/SpecVQGAN -> https://github.com/toshas/torch-fidelity \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/MAGIX VEGAS Movie Studio Platinum 15.0.0.102 Crack _VERIFIED_.md b/spaces/diacanFperku/AutoGPT/MAGIX VEGAS Movie Studio Platinum 15.0.0.102 Crack _VERIFIED_.md deleted file mode 100644 index 7791bd44d715c007eb9de367b1623e6edd4813c2..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/MAGIX VEGAS Movie Studio Platinum 15.0.0.102 Crack _VERIFIED_.md +++ /dev/null @@ -1,6 +0,0 @@ -

MAGIX VEGAS Movie Studio Platinum 15.0.0.102 Crack


DOWNLOAD: https://gohhs.com/2uFUcA



- -Free download of MAGIX VEGAS Movie Studio 17 Pro + Platinum, the full version for Windows: fast, with cutting and editing tools that ...
-
-
-

diff --git a/spaces/diffle/ComfyUI/README.md b/spaces/diffle/ComfyUI/README.md deleted file mode 100644 index 4becbe61b28c4a8b76e4185e77acf341e875b0ce..0000000000000000000000000000000000000000 --- a/spaces/diffle/ComfyUI/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ComfyUI -emoji: 🪟.UI -colorFrom: green -colorTo: pink -sdk: static -pinned: true -license: creativeml-openrail-m ---- - -![](https://raw.githubusercontent.com/ehristoforu/imghost/main/31c77b98-1019-4ea6-9c89-d66d25ab1586.jpg) -

ComfyUI

-

This is a UI for everyone! Generate images for FREE!

\ No newline at end of file diff --git a/spaces/diffusers/convert/app.py b/spaces/diffusers/convert/app.py deleted file mode 100644 index 2d11e0ba2cda09a158acfcc31c91516f7d7d2fe8..0000000000000000000000000000000000000000 --- a/spaces/diffusers/convert/app.py +++ /dev/null @@ -1,94 +0,0 @@ -import csv -from datetime import datetime -import os -from typing import Optional -import gradio as gr - -from convert import convert -from huggingface_hub import HfApi, Repository - - -DATASET_REPO_URL = "https://huggingface.co/datasets/safetensors/conversions" -DATA_FILENAME = "data.csv" -DATA_FILE = os.path.join("data", DATA_FILENAME) - -HF_TOKEN = os.environ.get("HF_TOKEN") - -repo: Optional[Repository] = None -if HF_TOKEN: - repo = Repository(local_dir="data", clone_from=DATASET_REPO_URL, token=HF_TOKEN) - - -def run(token: str, model_id: str) -> str: - if token == "" or model_id == "": - return """ - ### Invalid input 🐞 - - Please fill a token and model_id. - """ - try: - api = HfApi(token=token) - is_private = api.model_info(repo_id=model_id).private - print("is_private", is_private) - - commit_info = convert(api=api, model_id=model_id, force=True) - print("[commit_info]", commit_info) - - # save in a (public) dataset: - if repo is not None and not is_private: - repo.git_pull(rebase=True) - print("pulled") - with open(DATA_FILE, "a") as csvfile: - writer = csv.DictWriter( - csvfile, fieldnames=["model_id", "pr_url", "time"] - ) - writer.writerow( - { - "model_id": model_id, - "pr_url": commit_info.pr_url, - "time": str(datetime.now()), - } - ) - commit_url = repo.push_to_hub() - print("[dataset]", commit_url) - - return f""" - ### Success 🔥 - - Yay! This model was successfully converted and a PR was open using your token, here: - - [{commit_info.pr_url}]({commit_info.pr_url}) - """ - except Exception as e: - return f""" - ### Error 😢😢😢 - - {e} - """ - - -DESCRIPTION = """ -The steps are the following: - -- Paste a read-access token from hf.co/settings/tokens. Read access is enough given that we will open a PR against the source repo. -- Input a model id from the Hub -- Click "Submit" -- That's it! You'll get feedback if it works or not, and if it worked, you'll get the URL of the opened PR 🔥 - -⚠️ For now only `pytorch_model.bin` files are supported but we'll extend in the future. 
-""" - -demo = gr.Interface( - title="Convert any model to Safetensors and open a PR", - description=DESCRIPTION, - allow_flagging="never", - article="Check out the [Safetensors repo on GitHub](https://github.com/huggingface/safetensors)", - inputs=[ - gr.Text(max_lines=1, label="your_hf_token"), - gr.Text(max_lines=1, label="model_id"), - ], - outputs=[gr.Markdown(label="output")], - fn=run, -) - -demo.launch() diff --git a/spaces/digitalxingtong/Eileen-Bert-Vits2/short_audio_transcribe.py b/spaces/digitalxingtong/Eileen-Bert-Vits2/short_audio_transcribe.py deleted file mode 100644 index f1e8b30671f2c2f2fa3c93feb1f4edd3fbe2f545..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Eileen-Bert-Vits2/short_audio_transcribe.py +++ /dev/null @@ -1,122 +0,0 @@ -import whisper -import os -import json -import torchaudio -import argparse -import torch - -lang2token = { - 'zh': "[ZH]", - 'ja': "[JA]", - "en": "[EN]", - } -def transcribe_one(audio_path): - # load audio and pad/trim it to fit 30 seconds - audio = whisper.load_audio(audio_path) - audio = whisper.pad_or_trim(audio) - - # make log-Mel spectrogram and move to the same device as the model - mel = whisper.log_mel_spectrogram(audio).to(model.device) - - # detect the spoken language - _, probs = model.detect_language(mel) - print(f"Detected language: {max(probs, key=probs.get)}") - lang = max(probs, key=probs.get) - # decode the audio - options = whisper.DecodingOptions(beam_size=5) - result = whisper.decode(model, mel, options) - - # print the recognized text - print(result.text) - return lang, result.text -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--languages", default="CJE") - parser.add_argument("--whisper_size", default="medium") - args = parser.parse_args() - if args.languages == "CJE": - lang2token = { - 'zh': "[ZH]", - 'ja': "[JA]", - "en": "[EN]", - } - elif args.languages == "CJ": - lang2token = { - 'zh': "[ZH]", - 'ja': "[JA]", - } - elif args.languages == "C": - lang2token = { - 'zh': "[ZH]", - } - assert (torch.cuda.is_available()), "Please enable GPU in order to run Whisper!" 
- model = whisper.load_model(args.whisper_size) - parent_dir = "./custom_character_voice/" - speaker_names = list(os.walk(parent_dir))[0][1] - speaker_annos = [] - total_files = sum([len(files) for r, d, files in os.walk(parent_dir)]) - # resample audios - # 2023/4/21: Get the target sampling rate - with open("./configs/config.json", 'r', encoding='utf-8') as f: - hps = json.load(f) - target_sr = hps['data']['sampling_rate'] - processed_files = 0 - for speaker in speaker_names: - for i, wavfile in enumerate(list(os.walk(parent_dir + speaker))[0][2]): - # try to load file as audio - if wavfile.startswith("processed_"): - continue - try: - wav, sr = torchaudio.load(parent_dir + speaker + "/" + wavfile, frame_offset=0, num_frames=-1, normalize=True, - channels_first=True) - wav = wav.mean(dim=0).unsqueeze(0) - if sr != target_sr: - wav = torchaudio.transforms.Resample(orig_freq=sr, new_freq=target_sr)(wav) - if wav.shape[1] / sr > 20: - print(f"{wavfile} too long, ignoring\n") - save_path = parent_dir + speaker + "/" + f"processed_{i}.wav" - torchaudio.save(save_path, wav, target_sr, channels_first=True) - # transcribe text - lang, text = transcribe_one(save_path) - if lang not in list(lang2token.keys()): - print(f"{lang} not supported, ignoring\n") - continue - text = "ZH|" + text + "\n"# - #text = lang2token[lang] + text + lang2token[lang] + "\n" - speaker_annos.append(save_path + "|" + speaker + "|" + text) - - processed_files += 1 - print(f"Processed: {processed_files}/{total_files}") - except: - continue - - # # clean annotation - # import argparse - # import text - # from utils import load_filepaths_and_text - # for i, line in enumerate(speaker_annos): - # path, sid, txt = line.split("|") - # cleaned_text = text._clean_text(txt, ["cjke_cleaners2"]) - # cleaned_text += "\n" if not cleaned_text.endswith("\n") else "" - # speaker_annos[i] = path + "|" + sid + "|" + cleaned_text - # write into annotation - if len(speaker_annos) == 0: - print("Warning: no short audios found, this IS expected if you have only uploaded long audios, videos or video links.") - print("this IS NOT expected if you have uploaded a zip file of short audios. 
Please check your file structure or make sure your audio language is supported.") - with open("./filelists/short_character_anno.list", 'w', encoding='utf-8') as f: - for line in speaker_annos: - f.write(line) - - # import json - # # generate new config - # with open("./configs/finetune_speaker.json", 'r', encoding='utf-8') as f: - # hps = json.load(f) - # # modify n_speakers - # hps['data']["n_speakers"] = 1000 + len(speaker2id) - # # add speaker names - # for speaker in speaker_names: - # hps['speakers'][speaker] = speaker2id[speaker] - # # save modified config - # with open("./configs/modified_finetune_speaker.json", 'w', encoding='utf-8') as f: - # json.dump(hps, f, indent=2) - # print("finished") diff --git a/spaces/digitalxingtong/Eileen-Bert-Vits2/text/chinese_bert.py b/spaces/digitalxingtong/Eileen-Bert-Vits2/text/chinese_bert.py deleted file mode 100644 index cb84ce0b426cd0a1c7954ddcdf41322c10ed14fa..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Eileen-Bert-Vits2/text/chinese_bert.py +++ /dev/null @@ -1,50 +0,0 @@ -import torch -from transformers import AutoTokenizer, AutoModelForMaskedLM - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - -tokenizer = AutoTokenizer.from_pretrained("./bert/chinese-roberta-wwm-ext-large") -model = AutoModelForMaskedLM.from_pretrained("./bert/chinese-roberta-wwm-ext-large").to(device) - -def get_bert_feature(text, word2ph): - with torch.no_grad(): - inputs = tokenizer(text, return_tensors='pt') - for i in inputs: - inputs[i] = inputs[i].to(device) - res = model(**inputs, output_hidden_states=True) - res = torch.cat(res['hidden_states'][-3:-2], -1)[0].cpu() - - assert len(word2ph) == len(text)+2 - word2phone = word2ph - phone_level_feature = [] - for i in range(len(word2phone)): - repeat_feature = res[i].repeat(word2phone[i], 1) - phone_level_feature.append(repeat_feature) - - phone_level_feature = torch.cat(phone_level_feature, dim=0) - - - return phone_level_feature.T - -if __name__ == '__main__': - # feature = get_bert_feature('你好,我是说的道理。') - import torch - - word_level_feature = torch.rand(38, 1024) # 12个词,每个词1024维特征 - word2phone = [1, 2, 1, 2, 2, 1, 2, 2, 1, 2, 2, 1, 2, 2, 2, 2, 2, 1, 1, 2, 2, 1, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 1] - - # 计算总帧数 - total_frames = sum(word2phone) - print(word_level_feature.shape) - print(word2phone) - phone_level_feature = [] - for i in range(len(word2phone)): - print(word_level_feature[i].shape) - - # 对每个词重复word2phone[i]次 - repeat_feature = word_level_feature[i].repeat(word2phone[i], 1) - phone_level_feature.append(repeat_feature) - - phone_level_feature = torch.cat(phone_level_feature, dim=0) - print(phone_level_feature.shape) # torch.Size([36, 1024]) - diff --git a/spaces/digitalxingtong/Nanami-Bert-VITS2/short_audio_transcribe.py b/spaces/digitalxingtong/Nanami-Bert-VITS2/short_audio_transcribe.py deleted file mode 100644 index f1e8b30671f2c2f2fa3c93feb1f4edd3fbe2f545..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Nanami-Bert-VITS2/short_audio_transcribe.py +++ /dev/null @@ -1,122 +0,0 @@ -import whisper -import os -import json -import torchaudio -import argparse -import torch - -lang2token = { - 'zh': "[ZH]", - 'ja': "[JA]", - "en": "[EN]", - } -def transcribe_one(audio_path): - # load audio and pad/trim it to fit 30 seconds - audio = whisper.load_audio(audio_path) - audio = whisper.pad_or_trim(audio) - - # make log-Mel spectrogram and move to the same device as the model - mel = whisper.log_mel_spectrogram(audio).to(model.device) 
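-    # model.detect_language(mel) returns (tokens, probs); probs maps language codes
-    # to probabilities, e.g. {'zh': 0.93, 'en': 0.04, ...} (illustrative values).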
- - # detect the spoken language - _, probs = model.detect_language(mel) - print(f"Detected language: {max(probs, key=probs.get)}") - lang = max(probs, key=probs.get) - # decode the audio - options = whisper.DecodingOptions(beam_size=5) - result = whisper.decode(model, mel, options) - - # print the recognized text - print(result.text) - return lang, result.text -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--languages", default="CJE") - parser.add_argument("--whisper_size", default="medium") - args = parser.parse_args() - if args.languages == "CJE": - lang2token = { - 'zh': "[ZH]", - 'ja': "[JA]", - "en": "[EN]", - } - elif args.languages == "CJ": - lang2token = { - 'zh': "[ZH]", - 'ja': "[JA]", - } - elif args.languages == "C": - lang2token = { - 'zh': "[ZH]", - } - assert (torch.cuda.is_available()), "Please enable GPU in order to run Whisper!" - model = whisper.load_model(args.whisper_size) - parent_dir = "./custom_character_voice/" - speaker_names = list(os.walk(parent_dir))[0][1] - speaker_annos = [] - total_files = sum([len(files) for r, d, files in os.walk(parent_dir)]) - # resample audios - # 2023/4/21: Get the target sampling rate - with open("./configs/config.json", 'r', encoding='utf-8') as f: - hps = json.load(f) - target_sr = hps['data']['sampling_rate'] - processed_files = 0 - for speaker in speaker_names: - for i, wavfile in enumerate(list(os.walk(parent_dir + speaker))[0][2]): - # try to load file as audio - if wavfile.startswith("processed_"): - continue - try: - wav, sr = torchaudio.load(parent_dir + speaker + "/" + wavfile, frame_offset=0, num_frames=-1, normalize=True, - channels_first=True) - wav = wav.mean(dim=0).unsqueeze(0) - if sr != target_sr: - wav = torchaudio.transforms.Resample(orig_freq=sr, new_freq=target_sr)(wav) - if wav.shape[1] / sr > 20: - print(f"{wavfile} too long, ignoring\n") - save_path = parent_dir + speaker + "/" + f"processed_{i}.wav" - torchaudio.save(save_path, wav, target_sr, channels_first=True) - # transcribe text - lang, text = transcribe_one(save_path) - if lang not in list(lang2token.keys()): - print(f"{lang} not supported, ignoring\n") - continue - text = "ZH|" + text + "\n"# - #text = lang2token[lang] + text + lang2token[lang] + "\n" - speaker_annos.append(save_path + "|" + speaker + "|" + text) - - processed_files += 1 - print(f"Processed: {processed_files}/{total_files}") - except: - continue - - # # clean annotation - # import argparse - # import text - # from utils import load_filepaths_and_text - # for i, line in enumerate(speaker_annos): - # path, sid, txt = line.split("|") - # cleaned_text = text._clean_text(txt, ["cjke_cleaners2"]) - # cleaned_text += "\n" if not cleaned_text.endswith("\n") else "" - # speaker_annos[i] = path + "|" + sid + "|" + cleaned_text - # write into annotation - if len(speaker_annos) == 0: - print("Warning: no short audios found, this IS expected if you have only uploaded long audios, videos or video links.") - print("this IS NOT expected if you have uploaded a zip file of short audios. 
Please check your file structure or make sure your audio language is supported.") - with open("./filelists/short_character_anno.list", 'w', encoding='utf-8') as f: - for line in speaker_annos: - f.write(line) - - # import json - # # generate new config - # with open("./configs/finetune_speaker.json", 'r', encoding='utf-8') as f: - # hps = json.load(f) - # # modify n_speakers - # hps['data']["n_speakers"] = 1000 + len(speaker2id) - # # add speaker names - # for speaker in speaker_names: - # hps['speakers'][speaker] = speaker2id[speaker] - # # save modified config - # with open("./configs/modified_finetune_speaker.json", 'w', encoding='utf-8') as f: - # json.dump(hps, f, indent=2) - # print("finished") diff --git a/spaces/dirge/voicevox/test/test_mock_synthesis_engine.py b/spaces/dirge/voicevox/test/test_mock_synthesis_engine.py deleted file mode 100644 index c06a0504a37d316c4769fcf0c658ac245f0e50d8..0000000000000000000000000000000000000000 --- a/spaces/dirge/voicevox/test/test_mock_synthesis_engine.py +++ /dev/null @@ -1,140 +0,0 @@ -from unittest import TestCase - -from voicevox_engine.dev.synthesis_engine import MockSynthesisEngine -from voicevox_engine.kana_parser import create_kana -from voicevox_engine.model import AccentPhrase, AudioQuery, Mora - - -class TestMockSynthesisEngine(TestCase): - def setUp(self): - super().setUp() - - self.accent_phrases_hello_hiho = [ - AccentPhrase( - moras=[ - Mora( - text="コ", - consonant="k", - consonant_length=0.0, - vowel="o", - vowel_length=0.0, - pitch=0.0, - ), - Mora( - text="ン", - consonant=None, - consonant_length=None, - vowel="N", - vowel_length=0.0, - pitch=0.0, - ), - Mora( - text="ニ", - consonant="n", - consonant_length=0.0, - vowel="i", - vowel_length=0.0, - pitch=0.0, - ), - Mora( - text="チ", - consonant="ch", - consonant_length=0.0, - vowel="i", - vowel_length=0.0, - pitch=0.0, - ), - Mora( - text="ワ", - consonant="w", - consonant_length=0.0, - vowel="a", - vowel_length=0.0, - pitch=0.0, - ), - ], - accent=5, - pause_mora=Mora( - text="、", - consonant=None, - consonant_length=None, - vowel="pau", - vowel_length=0.0, - pitch=0.0, - ), - ), - AccentPhrase( - moras=[ - Mora( - text="ヒ", - consonant="h", - consonant_length=0.0, - vowel="i", - vowel_length=0.0, - pitch=0.0, - ), - Mora( - text="ホ", - consonant="h", - consonant_length=0.0, - vowel="o", - vowel_length=0.0, - pitch=0.0, - ), - Mora( - text="デ", - consonant="d", - consonant_length=0.0, - vowel="e", - vowel_length=0.0, - pitch=0.0, - ), - Mora( - text="ス", - consonant="s", - consonant_length=0.0, - vowel="U", - vowel_length=0.0, - pitch=0.0, - ), - ], - accent=1, - pause_mora=None, - ), - ] - self.engine = MockSynthesisEngine(speakers="", supported_devices="") - - def test_replace_phoneme_length(self): - self.assertEqual( - self.engine.replace_phoneme_length( - accent_phrases=self.accent_phrases_hello_hiho, - speaker_id=0, - ), - self.accent_phrases_hello_hiho, - ) - - def test_replace_mora_pitch(self): - self.assertEqual( - self.engine.replace_mora_pitch( - accent_phrases=self.accent_phrases_hello_hiho, - speaker_id=0, - ), - self.accent_phrases_hello_hiho, - ) - - def test_synthesis(self): - self.engine.synthesis( - AudioQuery( - accent_phrases=self.accent_phrases_hello_hiho, - speedScale=1, - pitchScale=0, - intonationScale=1, - volumeScale=1, - prePhonemeLength=0.1, - postPhonemeLength=0.1, - outputSamplingRate=24000, - outputStereo=False, - kana=create_kana(self.accent_phrases_hello_hiho), - ), - speaker_id=0, - ) diff --git a/spaces/divilis/chatgpt/app.py 
b/spaces/divilis/chatgpt/app.py deleted file mode 100644 index 5523a648e43b4dab0e8c504fed92b0bd32bb8fbd..0000000000000000000000000000000000000000 --- a/spaces/divilis/chatgpt/app.py +++ /dev/null @@ -1,454 +0,0 @@ -# -*- coding:utf-8 -*- -import os -import logging -import sys - -import gradio as gr - -from utils import * -from presets import * -from overwrites import * -from chat_func import * - -logging.basicConfig( - level=logging.DEBUG, - format="%(asctime)s [%(levelname)s] [%(filename)s:%(lineno)d] %(message)s", -) - -my_api_key = "" # 在这里输入你的 API 密钥 - -# if we are running in Docker -if os.environ.get("dockerrun") == "yes": - dockerflag = True -else: - dockerflag = False - -authflag = False - -if dockerflag: - my_api_key = os.environ.get("my_api_key") - if my_api_key == "empty": - logging.error("Please give a api key!") - sys.exit(1) - # auth - username = os.environ.get("USERNAME") - password = os.environ.get("PASSWORD") - if not (isinstance(username, type(None)) or isinstance(password, type(None))): - authflag = True -else: - if ( - not my_api_key - and os.path.exists("api_key.txt") - and os.path.getsize("api_key.txt") - ): - with open("api_key.txt", "r") as f: - my_api_key = f.read().strip() - if os.path.exists("auth.json"): - with open("auth.json", "r") as f: - auth = json.load(f) - username = auth["username"] - password = auth["password"] - if username != "" and password != "": - authflag = True - -gr.Chatbot.postprocess = postprocess -PromptHelper.compact_text_chunks = compact_text_chunks - -with open("custom.css", "r", encoding="utf-8") as f: - customCSS = f.read() - -with gr.Blocks( - css=customCSS, - theme=gr.themes.Soft( - primary_hue=gr.themes.Color( - c50="#02C160", - c100="rgba(2, 193, 96, 0.2)", - c200="#02C160", - c300="rgba(2, 193, 96, 0.32)", - c400="rgba(2, 193, 96, 0.32)", - c500="rgba(2, 193, 96, 1.0)", - c600="rgba(2, 193, 96, 1.0)", - c700="rgba(2, 193, 96, 0.32)", - c800="rgba(2, 193, 96, 0.32)", - c900="#02C160", - c950="#02C160", - ), - secondary_hue=gr.themes.Color( - c50="#576b95", - c100="#576b95", - c200="#576b95", - c300="#576b95", - c400="#576b95", - c500="#576b95", - c600="#576b95", - c700="#576b95", - c800="#576b95", - c900="#576b95", - c950="#576b95", - ), - neutral_hue=gr.themes.Color( - name="gray", - c50="#f9fafb", - c100="#f3f4f6", - c200="#e5e7eb", - c300="#d1d5db", - c400="#B2B2B2", - c500="#808080", - c600="#636363", - c700="#515151", - c800="#393939", - c900="#272727", - c950="#171717", - ), - radius_size=gr.themes.sizes.radius_sm, - ).set( - button_primary_background_fill="#06AE56", - button_primary_background_fill_dark="#06AE56", - button_primary_background_fill_hover="#07C863", - button_primary_border_color="#06AE56", - button_primary_border_color_dark="#06AE56", - button_primary_text_color="#FFFFFF", - button_primary_text_color_dark="#FFFFFF", - button_secondary_background_fill="#F2F2F2", - button_secondary_background_fill_dark="#2B2B2B", - button_secondary_text_color="#393939", - button_secondary_text_color_dark="#FFFFFF", - # background_fill_primary="#F7F7F7", - # background_fill_primary_dark="#1F1F1F", - block_title_text_color="*primary_500", - block_title_background_fill="*primary_100", - input_background_fill="#F6F6F6", - ), -) as demo: - history = gr.State([]) - token_count = gr.State([]) - promptTemplates = gr.State(load_template(get_template_names(plain=True)[0], mode=2)) - user_api_key = gr.State(my_api_key) - TRUECOMSTANT = gr.State(True) - FALSECONSTANT = gr.State(False) - topic = gr.State("未命名对话历史记录") - - with gr.Row(): - 
gr.HTML(title) - status_display = gr.Markdown(get_geoip(), elem_id="status_display") - - with gr.Row(scale=1).style(equal_height=True): - with gr.Column(scale=5): - with gr.Row(scale=1): - chatbot = gr.Chatbot(elem_id="chuanhu_chatbot").style(height="100%") - with gr.Row(scale=1): - with gr.Column(scale=12): - user_input = gr.Textbox( - show_label=False, placeholder="在这里输入" - ).style(container=False) - with gr.Column(min_width=70, scale=1): - submitBtn = gr.Button("发送", variant="primary") - with gr.Row(scale=1): - emptyBtn = gr.Button( - "🧹 新的对话", - ) - retryBtn = gr.Button("🔄 重新生成") - delLastBtn = gr.Button("🗑️ 删除一条对话") - reduceTokenBtn = gr.Button("♻️ 总结对话") - - with gr.Column(): - with gr.Column(min_width=50, scale=1): - with gr.Tab(label="ChatGPT"): - keyTxt = gr.Textbox( - show_label=True, - placeholder=f"OpenAI API-key...", - value=hide_middle_chars(my_api_key), - type="password", - visible=not HIDE_MY_KEY, - label="API-Key", - ) - model_select_dropdown = gr.Dropdown( - label="选择模型", choices=MODELS, multiselect=False, value=MODELS[0] - ) - use_streaming_checkbox = gr.Checkbox( - label="实时传输回答", value=True, visible=enable_streaming_option - ) - use_websearch_checkbox = gr.Checkbox(label="使用在线搜索", value=False) - index_files = gr.Files(label="上传索引文件", type="file", multiple=True) - - with gr.Tab(label="Prompt"): - systemPromptTxt = gr.Textbox( - show_label=True, - placeholder=f"在这里输入System Prompt...", - label="System prompt", - value=initial_prompt, - lines=10, - ).style(container=False) - with gr.Accordion(label="加载Prompt模板", open=True): - with gr.Column(): - with gr.Row(): - with gr.Column(scale=6): - templateFileSelectDropdown = gr.Dropdown( - label="选择Prompt模板集合文件", - choices=get_template_names(plain=True), - multiselect=False, - value=get_template_names(plain=True)[0], - ).style(container=False) - with gr.Column(scale=1): - templateRefreshBtn = gr.Button("🔄 刷新") - with gr.Row(): - with gr.Column(): - templateSelectDropdown = gr.Dropdown( - label="从Prompt模板中加载", - choices=load_template( - get_template_names(plain=True)[0], mode=1 - ), - multiselect=False, - value=load_template( - get_template_names(plain=True)[0], mode=1 - )[0], - ).style(container=False) - - with gr.Tab(label="保存/加载"): - with gr.Accordion(label="保存/加载对话历史记录", open=True): - with gr.Column(): - with gr.Row(): - with gr.Column(scale=6): - historyFileSelectDropdown = gr.Dropdown( - label="从列表中加载对话", - choices=get_history_names(plain=True), - multiselect=False, - value=get_history_names(plain=True)[0], - ) - with gr.Column(scale=1): - historyRefreshBtn = gr.Button("🔄 刷新") - with gr.Row(): - with gr.Column(scale=6): - saveFileName = gr.Textbox( - show_label=True, - placeholder=f"设置文件名: 默认为.json,可选为.md", - label="设置保存文件名", - value="对话历史记录", - ).style(container=True) - with gr.Column(scale=1): - saveHistoryBtn = gr.Button("💾 保存对话") - exportMarkdownBtn = gr.Button("📝 导出为Markdown") - gr.Markdown("默认保存于history文件夹") - with gr.Row(): - with gr.Column(): - downloadFile = gr.File(interactive=True) - - with gr.Tab(label="高级"): - default_btn = gr.Button("🔙 恢复默认设置") - gr.Markdown("# ⚠️ 务必谨慎更改 ⚠️\n\n如果无法使用请恢复默认设置") - - with gr.Accordion("参数", open=False): - top_p = gr.Slider( - minimum=-0, - maximum=1.0, - value=1.0, - step=0.05, - interactive=True, - label="Top-p", - ) - temperature = gr.Slider( - minimum=-0, - maximum=2.0, - value=1.0, - step=0.1, - interactive=True, - label="Temperature", - ) - - apiurlTxt = gr.Textbox( - show_label=True, - placeholder=f"在这里输入API地址...", - label="API地址", - 
value="https://api.openai.com/v1/chat/completions", - lines=2, - ) - changeAPIURLBtn = gr.Button("🔄 切换API地址") - proxyTxt = gr.Textbox( - show_label=True, - placeholder=f"在这里输入代理地址...", - label="代理地址(示例:http://127.0.0.1:10809)", - value="", - lines=2, - ) - changeProxyBtn = gr.Button("🔄 设置代理地址") - - gr.Markdown(description) - - keyTxt.submit(submit_key, keyTxt, [user_api_key, status_display]) - keyTxt.change(submit_key, keyTxt, [user_api_key, status_display]) - # Chatbot - user_input.submit( - predict, - [ - user_api_key, - systemPromptTxt, - history, - user_input, - chatbot, - token_count, - top_p, - temperature, - use_streaming_checkbox, - model_select_dropdown, - use_websearch_checkbox, - index_files, - ], - [chatbot, history, status_display, token_count], - show_progress=True, - ) - user_input.submit(reset_textbox, [], [user_input]) - - submitBtn.click( - predict, - [ - user_api_key, - systemPromptTxt, - history, - user_input, - chatbot, - token_count, - top_p, - temperature, - use_streaming_checkbox, - model_select_dropdown, - use_websearch_checkbox, - index_files, - ], - [chatbot, history, status_display, token_count], - show_progress=True, - ) - submitBtn.click(reset_textbox, [], [user_input]) - - emptyBtn.click( - reset_state, - outputs=[chatbot, history, token_count, status_display], - show_progress=True, - ) - - retryBtn.click( - retry, - [ - user_api_key, - systemPromptTxt, - history, - chatbot, - token_count, - top_p, - temperature, - use_streaming_checkbox, - model_select_dropdown, - ], - [chatbot, history, status_display, token_count], - show_progress=True, - ) - - delLastBtn.click( - delete_last_conversation, - [chatbot, history, token_count], - [chatbot, history, token_count, status_display], - show_progress=True, - ) - - reduceTokenBtn.click( - reduce_token_size, - [ - user_api_key, - systemPromptTxt, - history, - chatbot, - token_count, - top_p, - temperature, - gr.State(0), - model_select_dropdown, - ], - [chatbot, history, status_display, token_count], - show_progress=True, - ) - - # Template - templateRefreshBtn.click(get_template_names, None, [templateFileSelectDropdown]) - templateFileSelectDropdown.change( - load_template, - [templateFileSelectDropdown], - [promptTemplates, templateSelectDropdown], - show_progress=True, - ) - templateSelectDropdown.change( - get_template_content, - [promptTemplates, templateSelectDropdown, systemPromptTxt], - [systemPromptTxt], - show_progress=True, - ) - - # S&L - saveHistoryBtn.click( - save_chat_history, - [saveFileName, systemPromptTxt, history, chatbot], - downloadFile, - show_progress=True, - ) - saveHistoryBtn.click(get_history_names, None, [historyFileSelectDropdown]) - exportMarkdownBtn.click( - export_markdown, - [saveFileName, systemPromptTxt, history, chatbot], - downloadFile, - show_progress=True, - ) - historyRefreshBtn.click(get_history_names, None, [historyFileSelectDropdown]) - historyFileSelectDropdown.change( - load_chat_history, - [historyFileSelectDropdown, systemPromptTxt, history, chatbot], - [saveFileName, systemPromptTxt, history, chatbot], - show_progress=True, - ) - downloadFile.change( - load_chat_history, - [downloadFile, systemPromptTxt, history, chatbot], - [saveFileName, systemPromptTxt, history, chatbot], - ) - - # Advanced - default_btn.click( - reset_default, [], [apiurlTxt, proxyTxt, status_display], show_progress=True - ) - changeAPIURLBtn.click( - change_api_url, - [apiurlTxt], - [status_display], - show_progress=True, - ) - changeProxyBtn.click( - change_proxy, - [proxyTxt], - [status_display], 
- show_progress=True, - ) - -logging.info( - colorama.Back.GREEN - + "\n川虎的温馨提示:访问 http://localhost:7860 查看界面" - + colorama.Style.RESET_ALL -) -# 默认开启本地服务器,默认可以直接从IP访问,默认不创建公开分享链接 -demo.title = "川虎ChatGPT 🚀" - -if __name__ == "__main__": - # if running in Docker - if dockerflag: - if authflag: - demo.queue().launch( - server_name="0.0.0.0", server_port=7860, auth=(username, password), - favicon_path="./assets/favicon.png" - ) - else: - demo.queue().launch(server_name="0.0.0.0", server_port=7860, share=False, favicon_path="./assets/favicon.png") - # if not running in Docker - else: - if authflag: - demo.queue().launch(share=False, auth=(username, password), favicon_path="./assets/favicon.png", inbrowser=True) - else: - demo.queue().launch(share=False, favicon_path="./assets/favicon.png", inbrowser=True) # 改为 share=True 可以创建公开分享链接 - # demo.queue().launch(server_name="0.0.0.0", server_port=7860, share=False) # 可自定义端口 - # demo.queue().launch(server_name="0.0.0.0", server_port=7860,auth=("在这里填写用户名", "在这里填写密码")) # 可设置用户名与密码 - # demo.queue().launch(auth=("在这里填写用户名", "在这里填写密码")) # 适合Nginx反向代理 diff --git a/spaces/dolceschokolade/chatbot-mini/types/env.ts b/spaces/dolceschokolade/chatbot-mini/types/env.ts deleted file mode 100644 index f6b9dd7c97885ba49e2b6c238a297cbf98070961..0000000000000000000000000000000000000000 --- a/spaces/dolceschokolade/chatbot-mini/types/env.ts +++ /dev/null @@ -1,7 +0,0 @@ -export interface ProcessEnv { - OPENAI_API_KEY: string; - OPENAI_API_HOST?: string; - OPENAI_API_TYPE?: 'openai' | 'azure'; - OPENAI_API_VERSION?: string; - OPENAI_ORGANIZATION?: string; -} diff --git a/spaces/dorkai/ChatUIPro/app/api/utils/stream.ts b/spaces/dorkai/ChatUIPro/app/api/utils/stream.ts deleted file mode 100644 index 2da1359e4c629b4d50aa8b757db1ef858ee236b8..0000000000000000000000000000000000000000 --- a/spaces/dorkai/ChatUIPro/app/api/utils/stream.ts +++ /dev/null @@ -1,25 +0,0 @@ -export async function OpenAIStream(res: { body: any }) { - const reader = res.body.getReader(); - - const stream = new ReadableStream({ - // https://developer.mozilla.org/en-US/docs/Web/API/Streams_API/Using_readable_streams - // https://github.com/whichlight/chatgpt-api-streaming/blob/master/pages/api/OpenAIStream.ts - start(controller) { - return pump(); - function pump() { - return reader.read().then(({ done, value }: any) => { - // When no more data needs to be consumed, close the stream - if (done) { - controller.close(); - return; - } - // Enqueue the next data chunk into our target stream - controller.enqueue(value); - return pump(); - }); - } - }, - }); - - return stream; -} \ No newline at end of file diff --git a/spaces/dvitel/codebleu/parser_DFG.py b/spaces/dvitel/codebleu/parser_DFG.py deleted file mode 100644 index 93be6bcfb689ba81335a078c7501a8556204b3e5..0000000000000000000000000000000000000000 --- a/spaces/dvitel/codebleu/parser_DFG.py +++ /dev/null @@ -1,1184 +0,0 @@ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT license. 
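-# Each DFG edge extracted below is a 5-tuple:
-#   (token, token_index, edge_type, source_tokens, source_indices)
-# where edge_type is 'comesFrom' or 'computedFrom' (see DFG_python / DFG_java).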
- -from tree_sitter import Language, Parser -from .parser_utils import (remove_comments_and_docstrings, - tree_to_token_index, - index_to_code_token, - tree_to_variable_index) - - -def DFG_python(root_node,index_to_code,states): - assignment=['assignment','augmented_assignment','for_in_clause'] - if_statement=['if_statement'] - for_statement=['for_statement'] - while_statement=['while_statement'] - do_first_statement=['for_in_clause'] - def_statement=['default_parameter'] - states=states.copy() - if (len(root_node.children)==0 or root_node.type in ['string_literal','string','character_literal']) and root_node.type!='comment': - idx,code=index_to_code[(root_node.start_point,root_node.end_point)] - if root_node.type==code: - return [],states - elif code in states: - return [(code,idx,'comesFrom',[code],states[code].copy())],states - else: - if root_node.type=='identifier': - states[code]=[idx] - return [(code,idx,'comesFrom',[],[])],states - elif root_node.type in def_statement: - name=root_node.child_by_field_name('name') - value=root_node.child_by_field_name('value') - DFG=[] - if value is None: - indexs=tree_to_variable_index(name,index_to_code) - for index in indexs: - idx,code=index_to_code[index] - DFG.append((code,idx,'comesFrom',[],[])) - states[code]=[idx] - return sorted(DFG,key=lambda x:x[1]),states - else: - name_indexs=tree_to_variable_index(name,index_to_code) - value_indexs=tree_to_variable_index(value,index_to_code) - temp,states=DFG_python(value,index_to_code,states) - DFG+=temp - for index1 in name_indexs: - idx1,code1=index_to_code[index1] - for index2 in value_indexs: - idx2,code2=index_to_code[index2] - DFG.append((code1,idx1,'comesFrom',[code2],[idx2])) - states[code1]=[idx1] - return sorted(DFG,key=lambda x:x[1]),states - elif root_node.type in assignment: - if root_node.type=='for_in_clause': - right_nodes=[root_node.children[-1]] - left_nodes=[root_node.child_by_field_name('left')] - else: - if root_node.child_by_field_name('right') is None: - return [],states - left_nodes=[x for x in root_node.child_by_field_name('left').children if x.type!=','] - right_nodes=[x for x in root_node.child_by_field_name('right').children if x.type!=','] - if len(right_nodes)!=len(left_nodes): - left_nodes=[root_node.child_by_field_name('left')] - right_nodes=[root_node.child_by_field_name('right')] - if len(left_nodes)==0: - left_nodes=[root_node.child_by_field_name('left')] - if len(right_nodes)==0: - right_nodes=[root_node.child_by_field_name('right')] - DFG=[] - for node in right_nodes: - temp,states=DFG_python(node,index_to_code,states) - DFG+=temp - - for left_node,right_node in zip(left_nodes,right_nodes): - left_tokens_index=tree_to_variable_index(left_node,index_to_code) - right_tokens_index=tree_to_variable_index(right_node,index_to_code) - temp=[] - for token1_index in left_tokens_index: - idx1,code1=index_to_code[token1_index] - temp.append((code1,idx1,'computedFrom',[index_to_code[x][1] for x in right_tokens_index], - [index_to_code[x][0] for x in right_tokens_index])) - states[code1]=[idx1] - DFG+=temp - return sorted(DFG,key=lambda x:x[1]),states - elif root_node.type in if_statement: - DFG=[] - current_states=states.copy() - others_states=[] - tag=False - if 'else' in root_node.type: - tag=True - for child in root_node.children: - if 'else' in child.type: - tag=True - if child.type not in ['elif_clause','else_clause']: - temp,current_states=DFG_python(child,index_to_code,current_states) - DFG+=temp - else: - temp,new_states=DFG_python(child,index_to_code,states) - 
DFG+=temp - others_states.append(new_states) - others_states.append(current_states) - if tag is False: - others_states.append(states) - new_states={} - for dic in others_states: - for key in dic: - if key not in new_states: - new_states[key]=dic[key].copy() - else: - new_states[key]+=dic[key] - for key in new_states: - new_states[key]=sorted(list(set(new_states[key]))) - return sorted(DFG,key=lambda x:x[1]),new_states - elif root_node.type in for_statement: - DFG=[] - for i in range(2): - right_nodes=[x for x in root_node.child_by_field_name('right').children if x.type!=','] - left_nodes=[x for x in root_node.child_by_field_name('left').children if x.type!=','] - if len(right_nodes)!=len(left_nodes): - left_nodes=[root_node.child_by_field_name('left')] - right_nodes=[root_node.child_by_field_name('right')] - if len(left_nodes)==0: - left_nodes=[root_node.child_by_field_name('left')] - if len(right_nodes)==0: - right_nodes=[root_node.child_by_field_name('right')] - for node in right_nodes: - temp,states=DFG_python(node,index_to_code,states) - DFG+=temp - for left_node,right_node in zip(left_nodes,right_nodes): - left_tokens_index=tree_to_variable_index(left_node,index_to_code) - right_tokens_index=tree_to_variable_index(right_node,index_to_code) - temp=[] - for token1_index in left_tokens_index: - idx1,code1=index_to_code[token1_index] - temp.append((code1,idx1,'computedFrom',[index_to_code[x][1] for x in right_tokens_index], - [index_to_code[x][0] for x in right_tokens_index])) - states[code1]=[idx1] - DFG+=temp - if root_node.children[-1].type=="block": - temp,states=DFG_python(root_node.children[-1],index_to_code,states) - DFG+=temp - dic={} - for x in DFG: - if (x[0],x[1],x[2]) not in dic: - dic[(x[0],x[1],x[2])]=[x[3],x[4]] - else: - dic[(x[0],x[1],x[2])][0]=list(set(dic[(x[0],x[1],x[2])][0]+x[3])) - dic[(x[0],x[1],x[2])][1]=sorted(list(set(dic[(x[0],x[1],x[2])][1]+x[4]))) - DFG=[(x[0],x[1],x[2],y[0],y[1]) for x,y in sorted(dic.items(),key=lambda t:t[0][1])] - return sorted(DFG,key=lambda x:x[1]),states - elif root_node.type in while_statement: - DFG=[] - for i in range(2): - for child in root_node.children: - temp,states=DFG_python(child,index_to_code,states) - DFG+=temp - dic={} - for x in DFG: - if (x[0],x[1],x[2]) not in dic: - dic[(x[0],x[1],x[2])]=[x[3],x[4]] - else: - dic[(x[0],x[1],x[2])][0]=list(set(dic[(x[0],x[1],x[2])][0]+x[3])) - dic[(x[0],x[1],x[2])][1]=sorted(list(set(dic[(x[0],x[1],x[2])][1]+x[4]))) - DFG=[(x[0],x[1],x[2],y[0],y[1]) for x,y in sorted(dic.items(),key=lambda t:t[0][1])] - return sorted(DFG,key=lambda x:x[1]),states - else: - DFG=[] - for child in root_node.children: - if child.type in do_first_statement: - temp,states=DFG_python(child,index_to_code,states) - DFG+=temp - for child in root_node.children: - if child.type not in do_first_statement: - temp,states=DFG_python(child,index_to_code,states) - DFG+=temp - - return sorted(DFG,key=lambda x:x[1]),states - - -def DFG_java(root_node,index_to_code,states): - assignment=['assignment_expression'] - def_statement=['variable_declarator'] - increment_statement=['update_expression'] - if_statement=['if_statement','else'] - for_statement=['for_statement'] - enhanced_for_statement=['enhanced_for_statement'] - while_statement=['while_statement'] - do_first_statement=[] - states=states.copy() - if (len(root_node.children)==0 or root_node.type in ['string_literal','string','character_literal']) and root_node.type!='comment': - idx,code=index_to_code[(root_node.start_point,root_node.end_point)] - if 
root_node.type==code: - return [],states - elif code in states: - return [(code,idx,'comesFrom',[code],states[code].copy())],states - else: - if root_node.type=='identifier': - states[code]=[idx] - return [(code,idx,'comesFrom',[],[])],states - elif root_node.type in def_statement: - name=root_node.child_by_field_name('name') - value=root_node.child_by_field_name('value') - DFG=[] - if value is None: - indexs=tree_to_variable_index(name,index_to_code) - for index in indexs: - idx,code=index_to_code[index] - DFG.append((code,idx,'comesFrom',[],[])) - states[code]=[idx] - return sorted(DFG,key=lambda x:x[1]),states - else: - name_indexs=tree_to_variable_index(name,index_to_code) - value_indexs=tree_to_variable_index(value,index_to_code) - temp,states=DFG_java(value,index_to_code,states) - DFG+=temp - for index1 in name_indexs: - idx1,code1=index_to_code[index1] - for index2 in value_indexs: - idx2,code2=index_to_code[index2] - DFG.append((code1,idx1,'comesFrom',[code2],[idx2])) - states[code1]=[idx1] - return sorted(DFG,key=lambda x:x[1]),states - elif root_node.type in assignment: - left_nodes=root_node.child_by_field_name('left') - right_nodes=root_node.child_by_field_name('right') - DFG=[] - temp,states=DFG_java(right_nodes,index_to_code,states) - DFG+=temp - name_indexs=tree_to_variable_index(left_nodes,index_to_code) - value_indexs=tree_to_variable_index(right_nodes,index_to_code) - for index1 in name_indexs: - idx1,code1=index_to_code[index1] - for index2 in value_indexs: - idx2,code2=index_to_code[index2] - DFG.append((code1,idx1,'computedFrom',[code2],[idx2])) - states[code1]=[idx1] - return sorted(DFG,key=lambda x:x[1]),states - elif root_node.type in increment_statement: - DFG=[] - indexs=tree_to_variable_index(root_node,index_to_code) - for index1 in indexs: - idx1,code1=index_to_code[index1] - for index2 in indexs: - idx2,code2=index_to_code[index2] - DFG.append((code1,idx1,'computedFrom',[code2],[idx2])) - states[code1]=[idx1] - return sorted(DFG,key=lambda x:x[1]),states - elif root_node.type in if_statement: - DFG=[] - current_states=states.copy() - others_states=[] - flag=False - tag=False - if 'else' in root_node.type: - tag=True - for child in root_node.children: - if 'else' in child.type: - tag=True - if child.type not in if_statement and flag is False: - temp,current_states=DFG_java(child,index_to_code,current_states) - DFG+=temp - else: - flag=True - temp,new_states=DFG_java(child,index_to_code,states) - DFG+=temp - others_states.append(new_states) - others_states.append(current_states) - if tag is False: - others_states.append(states) - new_states={} - for dic in others_states: - for key in dic: - if key not in new_states: - new_states[key]=dic[key].copy() - else: - new_states[key]+=dic[key] - for key in new_states: - new_states[key]=sorted(list(set(new_states[key]))) - return sorted(DFG,key=lambda x:x[1]),new_states - elif root_node.type in for_statement: - DFG=[] - for child in root_node.children: - temp,states=DFG_java(child,index_to_code,states) - DFG+=temp - flag=False - for child in root_node.children: - if flag: - temp,states=DFG_java(child,index_to_code,states) - DFG+=temp - elif child.type=="local_variable_declaration": - flag=True - dic={} - for x in DFG: - if (x[0],x[1],x[2]) not in dic: - dic[(x[0],x[1],x[2])]=[x[3],x[4]] - else: - dic[(x[0],x[1],x[2])][0]=list(set(dic[(x[0],x[1],x[2])][0]+x[3])) - dic[(x[0],x[1],x[2])][1]=sorted(list(set(dic[(x[0],x[1],x[2])][1]+x[4]))) - DFG=[(x[0],x[1],x[2],y[0],y[1]) for x,y in sorted(dic.items(),key=lambda t:t[0][1])] - 
return sorted(DFG,key=lambda x:x[1]),states - elif root_node.type in enhanced_for_statement: - name=root_node.child_by_field_name('name') - value=root_node.child_by_field_name('value') - body=root_node.child_by_field_name('body') - DFG=[] - for i in range(2): - temp,states=DFG_java(value,index_to_code,states) - DFG+=temp - name_indexs=tree_to_variable_index(name,index_to_code) - value_indexs=tree_to_variable_index(value,index_to_code) - for index1 in name_indexs: - idx1,code1=index_to_code[index1] - for index2 in value_indexs: - idx2,code2=index_to_code[index2] - DFG.append((code1,idx1,'computedFrom',[code2],[idx2])) - states[code1]=[idx1] - temp,states=DFG_java(body,index_to_code,states) - DFG+=temp - dic={} - for x in DFG: - if (x[0],x[1],x[2]) not in dic: - dic[(x[0],x[1],x[2])]=[x[3],x[4]] - else: - dic[(x[0],x[1],x[2])][0]=list(set(dic[(x[0],x[1],x[2])][0]+x[3])) - dic[(x[0],x[1],x[2])][1]=sorted(list(set(dic[(x[0],x[1],x[2])][1]+x[4]))) - DFG=[(x[0],x[1],x[2],y[0],y[1]) for x,y in sorted(dic.items(),key=lambda t:t[0][1])] - return sorted(DFG,key=lambda x:x[1]),states - elif root_node.type in while_statement: - DFG=[] - for i in range(2): - for child in root_node.children: - temp,states=DFG_java(child,index_to_code,states) - DFG+=temp - dic={} - for x in DFG: - if (x[0],x[1],x[2]) not in dic: - dic[(x[0],x[1],x[2])]=[x[3],x[4]] - else: - dic[(x[0],x[1],x[2])][0]=list(set(dic[(x[0],x[1],x[2])][0]+x[3])) - dic[(x[0],x[1],x[2])][1]=sorted(list(set(dic[(x[0],x[1],x[2])][1]+x[4]))) - DFG=[(x[0],x[1],x[2],y[0],y[1]) for x,y in sorted(dic.items(),key=lambda t:t[0][1])] - return sorted(DFG,key=lambda x:x[1]),states - else: - DFG=[] - for child in root_node.children: - if child.type in do_first_statement: - temp,states=DFG_java(child,index_to_code,states) - DFG+=temp - for child in root_node.children: - if child.type not in do_first_statement: - temp,states=DFG_java(child,index_to_code,states) - DFG+=temp - - return sorted(DFG,key=lambda x:x[1]),states - -def DFG_csharp(root_node,index_to_code,states): - assignment=['assignment_expression'] - def_statement=['variable_declarator'] - increment_statement=['postfix_unary_expression'] - if_statement=['if_statement','else'] - for_statement=['for_statement'] - enhanced_for_statement=['for_each_statement'] - while_statement=['while_statement'] - do_first_statement=[] - states=states.copy() - if (len(root_node.children)==0 or root_node.type in ['string_literal','string','character_literal']) and root_node.type!='comment': - idx,code=index_to_code[(root_node.start_point,root_node.end_point)] - if root_node.type==code: - return [],states - elif code in states: - return [(code,idx,'comesFrom',[code],states[code].copy())],states - else: - if root_node.type=='identifier': - states[code]=[idx] - return [(code,idx,'comesFrom',[],[])],states - elif root_node.type in def_statement: - if len(root_node.children)==2: - name=root_node.children[0] - value=root_node.children[1] - else: - name=root_node.children[0] - value=None - DFG=[] - if value is None: - indexs=tree_to_variable_index(name,index_to_code) - for index in indexs: - idx,code=index_to_code[index] - DFG.append((code,idx,'comesFrom',[],[])) - states[code]=[idx] - return sorted(DFG,key=lambda x:x[1]),states - else: - name_indexs=tree_to_variable_index(name,index_to_code) - value_indexs=tree_to_variable_index(value,index_to_code) - temp,states=DFG_csharp(value,index_to_code,states) - DFG+=temp - for index1 in name_indexs: - idx1,code1=index_to_code[index1] - for index2 in value_indexs: - 
idx2,code2=index_to_code[index2] - DFG.append((code1,idx1,'comesFrom',[code2],[idx2])) - states[code1]=[idx1] - return sorted(DFG,key=lambda x:x[1]),states - elif root_node.type in assignment: - left_nodes=root_node.child_by_field_name('left') - right_nodes=root_node.child_by_field_name('right') - DFG=[] - temp,states=DFG_csharp(right_nodes,index_to_code,states) - DFG+=temp - name_indexs=tree_to_variable_index(left_nodes,index_to_code) - value_indexs=tree_to_variable_index(right_nodes,index_to_code) - for index1 in name_indexs: - idx1,code1=index_to_code[index1] - for index2 in value_indexs: - idx2,code2=index_to_code[index2] - DFG.append((code1,idx1,'computedFrom',[code2],[idx2])) - states[code1]=[idx1] - return sorted(DFG,key=lambda x:x[1]),states - elif root_node.type in increment_statement: - DFG=[] - indexs=tree_to_variable_index(root_node,index_to_code) - for index1 in indexs: - idx1,code1=index_to_code[index1] - for index2 in indexs: - idx2,code2=index_to_code[index2] - DFG.append((code1,idx1,'computedFrom',[code2],[idx2])) - states[code1]=[idx1] - return sorted(DFG,key=lambda x:x[1]),states - elif root_node.type in if_statement: - DFG=[] - current_states=states.copy() - others_states=[] - flag=False - tag=False - if 'else' in root_node.type: - tag=True - for child in root_node.children: - if 'else' in child.type: - tag=True - if child.type not in if_statement and flag is False: - temp,current_states=DFG_csharp(child,index_to_code,current_states) - DFG+=temp - else: - flag=True - temp,new_states=DFG_csharp(child,index_to_code,states) - DFG+=temp - others_states.append(new_states) - others_states.append(current_states) - if tag is False: - others_states.append(states) - new_states={} - for dic in others_states: - for key in dic: - if key not in new_states: - new_states[key]=dic[key].copy() - else: - new_states[key]+=dic[key] - for key in new_states: - new_states[key]=sorted(list(set(new_states[key]))) - return sorted(DFG,key=lambda x:x[1]),new_states - elif root_node.type in for_statement: - DFG=[] - for child in root_node.children: - temp,states=DFG_csharp(child,index_to_code,states) - DFG+=temp - flag=False - for child in root_node.children: - if flag: - temp,states=DFG_csharp(child,index_to_code,states) - DFG+=temp - elif child.type=="local_variable_declaration": - flag=True - dic={} - for x in DFG: - if (x[0],x[1],x[2]) not in dic: - dic[(x[0],x[1],x[2])]=[x[3],x[4]] - else: - dic[(x[0],x[1],x[2])][0]=list(set(dic[(x[0],x[1],x[2])][0]+x[3])) - dic[(x[0],x[1],x[2])][1]=sorted(list(set(dic[(x[0],x[1],x[2])][1]+x[4]))) - DFG=[(x[0],x[1],x[2],y[0],y[1]) for x,y in sorted(dic.items(),key=lambda t:t[0][1])] - return sorted(DFG,key=lambda x:x[1]),states - elif root_node.type in enhanced_for_statement: - name=root_node.child_by_field_name('left') - value=root_node.child_by_field_name('right') - body=root_node.child_by_field_name('body') - DFG=[] - for i in range(2): - temp,states=DFG_csharp(value,index_to_code,states) - DFG+=temp - name_indexs=tree_to_variable_index(name,index_to_code) - value_indexs=tree_to_variable_index(value,index_to_code) - for index1 in name_indexs: - idx1,code1=index_to_code[index1] - for index2 in value_indexs: - idx2,code2=index_to_code[index2] - DFG.append((code1,idx1,'computedFrom',[code2],[idx2])) - states[code1]=[idx1] - temp,states=DFG_csharp(body,index_to_code,states) - DFG+=temp - dic={} - for x in DFG: - if (x[0],x[1],x[2]) not in dic: - dic[(x[0],x[1],x[2])]=[x[3],x[4]] - else: - dic[(x[0],x[1],x[2])][0]=list(set(dic[(x[0],x[1],x[2])][0]+x[3])) - 
dic[(x[0],x[1],x[2])][1]=sorted(list(set(dic[(x[0],x[1],x[2])][1]+x[4]))) - DFG=[(x[0],x[1],x[2],y[0],y[1]) for x,y in sorted(dic.items(),key=lambda t:t[0][1])] - return sorted(DFG,key=lambda x:x[1]),states - elif root_node.type in while_statement: - DFG=[] - for i in range(2): - for child in root_node.children: - temp,states=DFG_csharp(child,index_to_code,states) - DFG+=temp - dic={} - for x in DFG: - if (x[0],x[1],x[2]) not in dic: - dic[(x[0],x[1],x[2])]=[x[3],x[4]] - else: - dic[(x[0],x[1],x[2])][0]=list(set(dic[(x[0],x[1],x[2])][0]+x[3])) - dic[(x[0],x[1],x[2])][1]=sorted(list(set(dic[(x[0],x[1],x[2])][1]+x[4]))) - DFG=[(x[0],x[1],x[2],y[0],y[1]) for x,y in sorted(dic.items(),key=lambda t:t[0][1])] - return sorted(DFG,key=lambda x:x[1]),states - else: - DFG=[] - for child in root_node.children: - if child.type in do_first_statement: - temp,states=DFG_csharp(child,index_to_code,states) - DFG+=temp - for child in root_node.children: - if child.type not in do_first_statement: - temp,states=DFG_csharp(child,index_to_code,states) - DFG+=temp - - return sorted(DFG,key=lambda x:x[1]),states - - - - -def DFG_ruby(root_node,index_to_code,states): - assignment=['assignment','operator_assignment'] - if_statement=['if','elsif','else','unless','when'] - for_statement=['for'] - while_statement=['while_modifier','until'] - do_first_statement=[] - def_statement=['keyword_parameter'] - if (len(root_node.children)==0 or root_node.type in ['string_literal','string','character_literal']) and root_node.type!='comment': - states=states.copy() - idx,code=index_to_code[(root_node.start_point,root_node.end_point)] - if root_node.type==code: - return [],states - elif code in states: - return [(code,idx,'comesFrom',[code],states[code].copy())],states - else: - if root_node.type=='identifier': - states[code]=[idx] - return [(code,idx,'comesFrom',[],[])],states - elif root_node.type in def_statement: - name=root_node.child_by_field_name('name') - value=root_node.child_by_field_name('value') - DFG=[] - if value is None: - indexs=tree_to_variable_index(name,index_to_code) - for index in indexs: - idx,code=index_to_code[index] - DFG.append((code,idx,'comesFrom',[],[])) - states[code]=[idx] - return sorted(DFG,key=lambda x:x[1]),states - else: - name_indexs=tree_to_variable_index(name,index_to_code) - value_indexs=tree_to_variable_index(value,index_to_code) - temp,states=DFG_ruby(value,index_to_code,states) - DFG+=temp - for index1 in name_indexs: - idx1,code1=index_to_code[index1] - for index2 in value_indexs: - idx2,code2=index_to_code[index2] - DFG.append((code1,idx1,'comesFrom',[code2],[idx2])) - states[code1]=[idx1] - return sorted(DFG,key=lambda x:x[1]),states - elif root_node.type in assignment: - left_nodes=[x for x in root_node.child_by_field_name('left').children if x.type!=','] - right_nodes=[x for x in root_node.child_by_field_name('right').children if x.type!=','] - if len(right_nodes)!=len(left_nodes): - left_nodes=[root_node.child_by_field_name('left')] - right_nodes=[root_node.child_by_field_name('right')] - if len(left_nodes)==0: - left_nodes=[root_node.child_by_field_name('left')] - if len(right_nodes)==0: - right_nodes=[root_node.child_by_field_name('right')] - if root_node.type=="operator_assignment": - left_nodes=[root_node.children[0]] - right_nodes=[root_node.children[-1]] - - DFG=[] - for node in right_nodes: - temp,states=DFG_ruby(node,index_to_code,states) - DFG+=temp - - for left_node,right_node in zip(left_nodes,right_nodes): - 
left_tokens_index=tree_to_variable_index(left_node,index_to_code) - right_tokens_index=tree_to_variable_index(right_node,index_to_code) - temp=[] - for token1_index in left_tokens_index: - idx1,code1=index_to_code[token1_index] - temp.append((code1,idx1,'computedFrom',[index_to_code[x][1] for x in right_tokens_index], - [index_to_code[x][0] for x in right_tokens_index])) - states[code1]=[idx1] - DFG+=temp - return sorted(DFG,key=lambda x:x[1]),states - elif root_node.type in if_statement: - DFG=[] - current_states=states.copy() - others_states=[] - tag=False - if 'else' in root_node.type: - tag=True - for child in root_node.children: - if 'else' in child.type: - tag=True - if child.type not in if_statement: - temp,current_states=DFG_ruby(child,index_to_code,current_states) - DFG+=temp - else: - temp,new_states=DFG_ruby(child,index_to_code,states) - DFG+=temp - others_states.append(new_states) - others_states.append(current_states) - if tag is False: - others_states.append(states) - new_states={} - for dic in others_states: - for key in dic: - if key not in new_states: - new_states[key]=dic[key].copy() - else: - new_states[key]+=dic[key] - for key in new_states: - new_states[key]=sorted(list(set(new_states[key]))) - return sorted(DFG,key=lambda x:x[1]),new_states - elif root_node.type in for_statement: - DFG=[] - for i in range(2): - left_nodes=[root_node.child_by_field_name('pattern')] - right_nodes=[root_node.child_by_field_name('value')] - assert len(right_nodes)==len(left_nodes) - for node in right_nodes: - temp,states=DFG_ruby(node,index_to_code,states) - DFG+=temp - for left_node,right_node in zip(left_nodes,right_nodes): - left_tokens_index=tree_to_variable_index(left_node,index_to_code) - right_tokens_index=tree_to_variable_index(right_node,index_to_code) - temp=[] - for token1_index in left_tokens_index: - idx1,code1=index_to_code[token1_index] - temp.append((code1,idx1,'computedFrom',[index_to_code[x][1] for x in right_tokens_index], - [index_to_code[x][0] for x in right_tokens_index])) - states[code1]=[idx1] - DFG+=temp - temp,states=DFG_ruby(root_node.child_by_field_name('body'),index_to_code,states) - DFG+=temp - dic={} - for x in DFG: - if (x[0],x[1],x[2]) not in dic: - dic[(x[0],x[1],x[2])]=[x[3],x[4]] - else: - dic[(x[0],x[1],x[2])][0]=list(set(dic[(x[0],x[1],x[2])][0]+x[3])) - dic[(x[0],x[1],x[2])][1]=sorted(list(set(dic[(x[0],x[1],x[2])][1]+x[4]))) - DFG=[(x[0],x[1],x[2],y[0],y[1]) for x,y in sorted(dic.items(),key=lambda t:t[0][1])] - return sorted(DFG,key=lambda x:x[1]),states - elif root_node.type in while_statement: - DFG=[] - for i in range(2): - for child in root_node.children: - temp,states=DFG_ruby(child,index_to_code,states) - DFG+=temp - dic={} - for x in DFG: - if (x[0],x[1],x[2]) not in dic: - dic[(x[0],x[1],x[2])]=[x[3],x[4]] - else: - dic[(x[0],x[1],x[2])][0]=list(set(dic[(x[0],x[1],x[2])][0]+x[3])) - dic[(x[0],x[1],x[2])][1]=sorted(list(set(dic[(x[0],x[1],x[2])][1]+x[4]))) - DFG=[(x[0],x[1],x[2],y[0],y[1]) for x,y in sorted(dic.items(),key=lambda t:t[0][1])] - return sorted(DFG,key=lambda x:x[1]),states - else: - DFG=[] - for child in root_node.children: - if child.type in do_first_statement: - temp,states=DFG_ruby(child,index_to_code,states) - DFG+=temp - for child in root_node.children: - if child.type not in do_first_statement: - temp,states=DFG_ruby(child,index_to_code,states) - DFG+=temp - - return sorted(DFG,key=lambda x:x[1]),states - -def DFG_go(root_node,index_to_code,states): - assignment=['assignment_statement',] - def_statement=['var_spec'] - 
increment_statement=['inc_statement'] - if_statement=['if_statement','else'] - for_statement=['for_statement'] - enhanced_for_statement=[] - while_statement=[] - do_first_statement=[] - states=states.copy() - if (len(root_node.children)==0 or root_node.type in ['string_literal','string','character_literal']) and root_node.type!='comment': - idx,code=index_to_code[(root_node.start_point,root_node.end_point)] - if root_node.type==code: - return [],states - elif code in states: - return [(code,idx,'comesFrom',[code],states[code].copy())],states - else: - if root_node.type=='identifier': - states[code]=[idx] - return [(code,idx,'comesFrom',[],[])],states - elif root_node.type in def_statement: - name=root_node.child_by_field_name('name') - value=root_node.child_by_field_name('value') - DFG=[] - if value is None: - indexs=tree_to_variable_index(name,index_to_code) - for index in indexs: - idx,code=index_to_code[index] - DFG.append((code,idx,'comesFrom',[],[])) - states[code]=[idx] - return sorted(DFG,key=lambda x:x[1]),states - else: - name_indexs=tree_to_variable_index(name,index_to_code) - value_indexs=tree_to_variable_index(value,index_to_code) - temp,states=DFG_go(value,index_to_code,states) - DFG+=temp - for index1 in name_indexs: - idx1,code1=index_to_code[index1] - for index2 in value_indexs: - idx2,code2=index_to_code[index2] - DFG.append((code1,idx1,'comesFrom',[code2],[idx2])) - states[code1]=[idx1] - return sorted(DFG,key=lambda x:x[1]),states - elif root_node.type in assignment: - left_nodes=root_node.child_by_field_name('left') - right_nodes=root_node.child_by_field_name('right') - DFG=[] - temp,states=DFG_go(right_nodes,index_to_code,states) - DFG+=temp - name_indexs=tree_to_variable_index(left_nodes,index_to_code) - value_indexs=tree_to_variable_index(right_nodes,index_to_code) - for index1 in name_indexs: - idx1,code1=index_to_code[index1] - for index2 in value_indexs: - idx2,code2=index_to_code[index2] - DFG.append((code1,idx1,'computedFrom',[code2],[idx2])) - states[code1]=[idx1] - return sorted(DFG,key=lambda x:x[1]),states - elif root_node.type in increment_statement: - DFG=[] - indexs=tree_to_variable_index(root_node,index_to_code) - for index1 in indexs: - idx1,code1=index_to_code[index1] - for index2 in indexs: - idx2,code2=index_to_code[index2] - DFG.append((code1,idx1,'computedFrom',[code2],[idx2])) - states[code1]=[idx1] - return sorted(DFG,key=lambda x:x[1]),states - elif root_node.type in if_statement: - DFG=[] - current_states=states.copy() - others_states=[] - flag=False - tag=False - if 'else' in root_node.type: - tag=True - for child in root_node.children: - if 'else' in child.type: - tag=True - if child.type not in if_statement and flag is False: - temp,current_states=DFG_go(child,index_to_code,current_states) - DFG+=temp - else: - flag=True - temp,new_states=DFG_go(child,index_to_code,states) - DFG+=temp - others_states.append(new_states) - others_states.append(current_states) - if tag is False: - others_states.append(states) - new_states={} - for dic in others_states: - for key in dic: - if key not in new_states: - new_states[key]=dic[key].copy() - else: - new_states[key]+=dic[key] - for key in states: - if key not in new_states: - new_states[key]=states[key] - else: - new_states[key]+=states[key] - for key in new_states: - new_states[key]=sorted(list(set(new_states[key]))) - return sorted(DFG,key=lambda x:x[1]),new_states - elif root_node.type in for_statement: - DFG=[] - for child in root_node.children: - temp,states=DFG_go(child,index_to_code,states) - 
DFG+=temp - flag=False - for child in root_node.children: - if flag: - temp,states=DFG_go(child,index_to_code,states) - DFG+=temp - elif child.type=="for_clause": - if child.child_by_field_name('update') is not None: - temp,states=DFG_go(child.child_by_field_name('update'),index_to_code,states) - DFG+=temp - flag=True - dic={} - for x in DFG: - if (x[0],x[1],x[2]) not in dic: - dic[(x[0],x[1],x[2])]=[x[3],x[4]] - else: - dic[(x[0],x[1],x[2])][0]=list(set(dic[(x[0],x[1],x[2])][0]+x[3])) - dic[(x[0],x[1],x[2])][1]=sorted(list(set(dic[(x[0],x[1],x[2])][1]+x[4]))) - DFG=[(x[0],x[1],x[2],y[0],y[1]) for x,y in sorted(dic.items(),key=lambda t:t[0][1])] - return sorted(DFG,key=lambda x:x[1]),states - else: - DFG=[] - for child in root_node.children: - if child.type in do_first_statement: - temp,states=DFG_go(child,index_to_code,states) - DFG+=temp - for child in root_node.children: - if child.type not in do_first_statement: - temp,states=DFG_go(child,index_to_code,states) - DFG+=temp - - return sorted(DFG,key=lambda x:x[1]),states - - - - -def DFG_php(root_node,index_to_code,states): - assignment=['assignment_expression','augmented_assignment_expression'] - def_statement=['simple_parameter'] - increment_statement=['update_expression'] - if_statement=['if_statement','else_clause'] - for_statement=['for_statement'] - enhanced_for_statement=['foreach_statement'] - while_statement=['while_statement'] - do_first_statement=[] - states=states.copy() - if (len(root_node.children)==0 or root_node.type in ['string_literal','string','character_literal']) and root_node.type!='comment': - idx,code=index_to_code[(root_node.start_point,root_node.end_point)] - if root_node.type==code: - return [],states - elif code in states: - return [(code,idx,'comesFrom',[code],states[code].copy())],states - else: - if root_node.type=='identifier': - states[code]=[idx] - return [(code,idx,'comesFrom',[],[])],states - elif root_node.type in def_statement: - name=root_node.child_by_field_name('name') - value=root_node.child_by_field_name('default_value') - DFG=[] - if value is None: - indexs=tree_to_variable_index(name,index_to_code) - for index in indexs: - idx,code=index_to_code[index] - DFG.append((code,idx,'comesFrom',[],[])) - states[code]=[idx] - return sorted(DFG,key=lambda x:x[1]),states - else: - name_indexs=tree_to_variable_index(name,index_to_code) - value_indexs=tree_to_variable_index(value,index_to_code) - temp,states=DFG_php(value,index_to_code,states) - DFG+=temp - for index1 in name_indexs: - idx1,code1=index_to_code[index1] - for index2 in value_indexs: - idx2,code2=index_to_code[index2] - DFG.append((code1,idx1,'comesFrom',[code2],[idx2])) - states[code1]=[idx1] - return sorted(DFG,key=lambda x:x[1]),states - elif root_node.type in assignment: - left_nodes=root_node.child_by_field_name('left') - right_nodes=root_node.child_by_field_name('right') - DFG=[] - temp,states=DFG_php(right_nodes,index_to_code,states) - DFG+=temp - name_indexs=tree_to_variable_index(left_nodes,index_to_code) - value_indexs=tree_to_variable_index(right_nodes,index_to_code) - for index1 in name_indexs: - idx1,code1=index_to_code[index1] - for index2 in value_indexs: - idx2,code2=index_to_code[index2] - DFG.append((code1,idx1,'computedFrom',[code2],[idx2])) - states[code1]=[idx1] - return sorted(DFG,key=lambda x:x[1]),states - elif root_node.type in increment_statement: - DFG=[] - indexs=tree_to_variable_index(root_node,index_to_code) - for index1 in indexs: - idx1,code1=index_to_code[index1] - for index2 in indexs: - 
idx2,code2=index_to_code[index2] - DFG.append((code1,idx1,'computedFrom',[code2],[idx2])) - states[code1]=[idx1] - return sorted(DFG,key=lambda x:x[1]),states - elif root_node.type in if_statement: - DFG=[] - current_states=states.copy() - others_states=[] - flag=False - tag=False - if 'else' in root_node.type: - tag=True - for child in root_node.children: - if 'else' in child.type: - tag=True - if child.type not in if_statement and flag is False: - temp,current_states=DFG_php(child,index_to_code,current_states) - DFG+=temp - else: - flag=True - temp,new_states=DFG_php(child,index_to_code,states) - DFG+=temp - others_states.append(new_states) - others_states.append(current_states) - new_states={} - for dic in others_states: - for key in dic: - if key not in new_states: - new_states[key]=dic[key].copy() - else: - new_states[key]+=dic[key] - for key in states: - if key not in new_states: - new_states[key]=states[key] - else: - new_states[key]+=states[key] - for key in new_states: - new_states[key]=sorted(list(set(new_states[key]))) - return sorted(DFG,key=lambda x:x[1]),new_states - elif root_node.type in for_statement: - DFG=[] - for child in root_node.children: - temp,states=DFG_php(child,index_to_code,states) - DFG+=temp - flag=False - for child in root_node.children: - if flag: - temp,states=DFG_php(child,index_to_code,states) - DFG+=temp - elif child.type=="assignment_expression": - flag=True - dic={} - for x in DFG: - if (x[0],x[1],x[2]) not in dic: - dic[(x[0],x[1],x[2])]=[x[3],x[4]] - else: - dic[(x[0],x[1],x[2])][0]=list(set(dic[(x[0],x[1],x[2])][0]+x[3])) - dic[(x[0],x[1],x[2])][1]=sorted(list(set(dic[(x[0],x[1],x[2])][1]+x[4]))) - DFG=[(x[0],x[1],x[2],y[0],y[1]) for x,y in sorted(dic.items(),key=lambda t:t[0][1])] - return sorted(DFG,key=lambda x:x[1]),states - elif root_node.type in enhanced_for_statement: - name=None - value=None - for child in root_node.children: - if child.type=='variable_name' and value is None: - value=child - elif child.type=='variable_name' and name is None: - name=child - break - body=root_node.child_by_field_name('body') - DFG=[] - for i in range(2): - temp,states=DFG_php(value,index_to_code,states) - DFG+=temp - name_indexs=tree_to_variable_index(name,index_to_code) - value_indexs=tree_to_variable_index(value,index_to_code) - for index1 in name_indexs: - idx1,code1=index_to_code[index1] - for index2 in value_indexs: - idx2,code2=index_to_code[index2] - DFG.append((code1,idx1,'computedFrom',[code2],[idx2])) - states[code1]=[idx1] - temp,states=DFG_php(body,index_to_code,states) - DFG+=temp - dic={} - for x in DFG: - if (x[0],x[1],x[2]) not in dic: - dic[(x[0],x[1],x[2])]=[x[3],x[4]] - else: - dic[(x[0],x[1],x[2])][0]=list(set(dic[(x[0],x[1],x[2])][0]+x[3])) - dic[(x[0],x[1],x[2])][1]=sorted(list(set(dic[(x[0],x[1],x[2])][1]+x[4]))) - DFG=[(x[0],x[1],x[2],y[0],y[1]) for x,y in sorted(dic.items(),key=lambda t:t[0][1])] - return sorted(DFG,key=lambda x:x[1]),states - elif root_node.type in while_statement: - DFG=[] - for i in range(2): - for child in root_node.children: - temp,states=DFG_php(child,index_to_code,states) - DFG+=temp - dic={} - for x in DFG: - if (x[0],x[1],x[2]) not in dic: - dic[(x[0],x[1],x[2])]=[x[3],x[4]] - else: - dic[(x[0],x[1],x[2])][0]=list(set(dic[(x[0],x[1],x[2])][0]+x[3])) - dic[(x[0],x[1],x[2])][1]=sorted(list(set(dic[(x[0],x[1],x[2])][1]+x[4]))) - DFG=[(x[0],x[1],x[2],y[0],y[1]) for x,y in sorted(dic.items(),key=lambda t:t[0][1])] - return sorted(DFG,key=lambda x:x[1]),states - else: - DFG=[] - for child in root_node.children: - 
if child.type in do_first_statement: - temp,states=DFG_php(child,index_to_code,states) - DFG+=temp - for child in root_node.children: - if child.type not in do_first_statement: - temp,states=DFG_php(child,index_to_code,states) - DFG+=temp - - return sorted(DFG,key=lambda x:x[1]),states - - -def DFG_javascript(root_node,index_to_code,states): - assignment=['assignment_pattern','augmented_assignment_expression'] - def_statement=['variable_declarator'] - increment_statement=['update_expression'] - if_statement=['if_statement','else'] - for_statement=['for_statement'] - enhanced_for_statement=[] - while_statement=['while_statement'] - do_first_statement=[] - states=states.copy() - if (len(root_node.children)==0 or root_node.type in ['string_literal','string','character_literal']) and root_node.type!='comment': - idx,code=index_to_code[(root_node.start_point,root_node.end_point)] - if root_node.type==code: - return [],states - elif code in states: - return [(code,idx,'comesFrom',[code],states[code].copy())],states - else: - if root_node.type=='identifier': - states[code]=[idx] - return [(code,idx,'comesFrom',[],[])],states - elif root_node.type in def_statement: - name=root_node.child_by_field_name('name') - value=root_node.child_by_field_name('value') - DFG=[] - if value is None: - indexs=tree_to_variable_index(name,index_to_code) - for index in indexs: - idx,code=index_to_code[index] - DFG.append((code,idx,'comesFrom',[],[])) - states[code]=[idx] - return sorted(DFG,key=lambda x:x[1]),states - else: - name_indexs=tree_to_variable_index(name,index_to_code) - value_indexs=tree_to_variable_index(value,index_to_code) - temp,states=DFG_javascript(value,index_to_code,states) - DFG+=temp - for index1 in name_indexs: - idx1,code1=index_to_code[index1] - for index2 in value_indexs: - idx2,code2=index_to_code[index2] - DFG.append((code1,idx1,'comesFrom',[code2],[idx2])) - states[code1]=[idx1] - return sorted(DFG,key=lambda x:x[1]),states - elif root_node.type in assignment: - left_nodes=root_node.child_by_field_name('left') - right_nodes=root_node.child_by_field_name('right') - DFG=[] - temp,states=DFG_javascript(right_nodes,index_to_code,states) - DFG+=temp - name_indexs=tree_to_variable_index(left_nodes,index_to_code) - value_indexs=tree_to_variable_index(right_nodes,index_to_code) - for index1 in name_indexs: - idx1,code1=index_to_code[index1] - for index2 in value_indexs: - idx2,code2=index_to_code[index2] - DFG.append((code1,idx1,'computedFrom',[code2],[idx2])) - states[code1]=[idx1] - return sorted(DFG,key=lambda x:x[1]),states - elif root_node.type in increment_statement: - DFG=[] - indexs=tree_to_variable_index(root_node,index_to_code) - for index1 in indexs: - idx1,code1=index_to_code[index1] - for index2 in indexs: - idx2,code2=index_to_code[index2] - DFG.append((code1,idx1,'computedFrom',[code2],[idx2])) - states[code1]=[idx1] - return sorted(DFG,key=lambda x:x[1]),states - elif root_node.type in if_statement: - DFG=[] - current_states=states.copy() - others_states=[] - flag=False - tag=False - if 'else' in root_node.type: - tag=True - for child in root_node.children: - if 'else' in child.type: - tag=True - if child.type not in if_statement and flag is False: - temp,current_states=DFG_javascript(child,index_to_code,current_states) - DFG+=temp - else: - flag=True - temp,new_states=DFG_javascript(child,index_to_code,states) - DFG+=temp - others_states.append(new_states) - others_states.append(current_states) - if tag is False: - others_states.append(states) - new_states={} - for dic in 
 others_states: - for key in dic: - if key not in new_states: - new_states[key]=dic[key].copy() - else: - new_states[key]+=dic[key] - for key in states: - if key not in new_states: - new_states[key]=states[key] - else: - new_states[key]+=states[key] - for key in new_states: - new_states[key]=sorted(list(set(new_states[key]))) - return sorted(DFG,key=lambda x:x[1]),new_states - elif root_node.type in for_statement: - DFG=[] - for child in root_node.children: - temp,states=DFG_javascript(child,index_to_code,states) - DFG+=temp - flag=False - for child in root_node.children: - if flag: - temp,states=DFG_javascript(child,index_to_code,states) - DFG+=temp - elif child.type=="variable_declaration": - flag=True - dic={} - for x in DFG: - if (x[0],x[1],x[2]) not in dic: - dic[(x[0],x[1],x[2])]=[x[3],x[4]] - else: - dic[(x[0],x[1],x[2])][0]=list(set(dic[(x[0],x[1],x[2])][0]+x[3])) - dic[(x[0],x[1],x[2])][1]=sorted(list(set(dic[(x[0],x[1],x[2])][1]+x[4]))) - DFG=[(x[0],x[1],x[2],y[0],y[1]) for x,y in sorted(dic.items(),key=lambda t:t[0][1])] - return sorted(DFG,key=lambda x:x[1]),states - elif root_node.type in while_statement: - DFG=[] - for i in range(2): - for child in root_node.children: - temp,states=DFG_javascript(child,index_to_code,states) - DFG+=temp - dic={} - for x in DFG: - if (x[0],x[1],x[2]) not in dic: - dic[(x[0],x[1],x[2])]=[x[3],x[4]] - else: - dic[(x[0],x[1],x[2])][0]=list(set(dic[(x[0],x[1],x[2])][0]+x[3])) - dic[(x[0],x[1],x[2])][1]=sorted(list(set(dic[(x[0],x[1],x[2])][1]+x[4]))) - DFG=[(x[0],x[1],x[2],y[0],y[1]) for x,y in sorted(dic.items(),key=lambda t:t[0][1])] - return sorted(DFG,key=lambda x:x[1]),states - else: - DFG=[] - for child in root_node.children: - if child.type in do_first_statement: - temp,states=DFG_javascript(child,index_to_code,states) - DFG+=temp - for child in root_node.children: - if child.type not in do_first_statement: - temp,states=DFG_javascript(child,index_to_code,states) - DFG+=temp - - return sorted(DFG,key=lambda x:x[1]),states - - - diff --git a/spaces/elpsycongroo19/simple_chatbot/complex_app.py b/spaces/elpsycongroo19/simple_chatbot/complex_app.py deleted file mode 100644 index 4fde608903123105b9b340aa88176684b1592e65..0000000000000000000000000000000000000000 --- a/spaces/elpsycongroo19/simple_chatbot/complex_app.py +++ /dev/null @@ -1,63 +0,0 @@ -import openai -import os -import gradio as gr - -openai.api_key = os.environ.get("OPENAI_API_KEY") - -class Conversation: - def __init__(self, prompt, num_of_round, model): - self.prompt = prompt - self.num_of_round = num_of_round - self.model = model - self.messages = [] - self.messages.append({"role": "system", "content": self.prompt}) - - def ask(self, question): - try: - self.messages.append({"role": "user", "content": question}) - # The chat-style message list is flattened into a plain prompt - # for the legacy completions endpoint. - response = openai.Completion.create( - engine=self.model, - prompt='\n'.join([m["content"] for m in self.messages]), - temperature=0.7, - max_tokens=1024, - n=1, - stop=None, - timeout=15, - frequency_penalty=0, - presence_penalty=0 - ) - except Exception as e: - print(e) - return f"Error: {e}" # return a readable message instead of the raw exception object - - message = response.choices[0].text.strip() - self.messages.append({"role": "assistant", "content": message}) - - if len(self.messages) > self.num_of_round*2 + 1: - del self.messages[1:3] # Drop the oldest question/answer round to keep the history bounded. - return message - -prompt = """You are GPT4. Your answers must meet the following requirements: -1. Your answers must be in Chinese -2. Keep each answer within 1000 characters""" 
 - -model = "text-davinci-002" -conv = Conversation(prompt, 10, model) - -def predict(input, history=[]): - history.append(input) - response = conv.ask(input) - history.append(response) - responses = [(u,b) for u,b in zip(history[::2], history[1::2])] - return responses, history - -with gr.Blocks(css="#chatbot{height:350px} .overflow-y-auto{height:500px}") as demo: - chatbot = gr.Chatbot(elem_id="chatbot") - state = gr.State([]) - - with gr.Row(): - txt = gr.Textbox(show_label=False, placeholder="Enter text and press enter").style(container=False) - - txt.submit(predict, [txt, state], [chatbot, state]) - -demo.launch() diff --git a/spaces/enzostvs/hub-api-playground/components/editor/main/snippet/curl.tsx b/spaces/enzostvs/hub-api-playground/components/editor/main/snippet/curl.tsx deleted file mode 100644 index 5ee38047359b259700a49d351a34ee3d9534aeb9..0000000000000000000000000000000000000000 --- a/spaces/enzostvs/hub-api-playground/components/editor/main/snippet/curl.tsx +++ /dev/null @@ -1,98 +0,0 @@ -import { ApiRoute } from "@/utils/type"; -import classNames from "classnames"; -import { useState } from "react"; -import Highlight from "react-highlight"; -import { BiCodeCurly, BiSolidCopy } from "react-icons/bi"; -import { Options } from "redaxios"; - -export const CurlSnippet = ({ - endpoint, - headers, - parameters, - body, - onCopyToClipboard, -}: { - endpoint: ApiRoute; - parameters?: Record<string, string>; - headers?: Record<string, string>; - body?: Options | undefined; - onCopyToClipboard: (e: string) => void; -}) => { - const [isCopied, setIsCopied] = useState<boolean>(false); - - const generateCurlRequestFromEndpoint = () => { - const { method, path } = endpoint; - const fullpath = `${process.env.NEXT_PUBLIC_APP_APIURL}${path}`; - - const removeEmptyValues = (data: Record<string, string>) => { - const formattedData = { ...data }; - Object.entries(formattedData).forEach(([key, value]) => { - if (!value) { - delete formattedData[key]; - } - }); - return formattedData; - }; - - const Dict: Record<string, () => string> = { - GET: () => { - const filteredEmptyParameters = removeEmptyValues(parameters ?? {}); - - return `curl -X ${method} "${fullpath}?${new URLSearchParams( - filteredEmptyParameters - ).toString()}" \\ - -H ${JSON.stringify(headers)} -`; - }, - DELETE: () => { - return `curl -X ${method} "${fullpath}" \\ - -H ${JSON.stringify(headers)} \\ - -d ${JSON.stringify(body)} -`; - }, - DEFAULT: () => { - return `curl -X ${method} "${fullpath}" \\ - -H ${JSON.stringify(headers)} \\ - -d ${JSON.stringify(body)} -`; - }, - }; - - return Dict[method] ? Dict[method]() : Dict["DEFAULT"](); - }; - - const handleCopy = () => { - onCopyToClipboard(generateCurlRequestFromEndpoint()); - setIsCopied(true); - setTimeout(() => { - setIsCopied(false); - }, 1000); - }; - - return ( 
 - <div> {/* minimal reconstruction: the original className/styling props were lost in extraction */} - <div> - <BiCodeCurly /> - <span>Curl</span> - </div> - <div> - <Highlight> - {generateCurlRequestFromEndpoint()} - </Highlight> - <button onClick={handleCopy}> - <BiSolidCopy /> - </button> - {isCopied && <span>Copied!</span>} - </div> - </div> 
- ); -}; diff --git a/spaces/enzostvs/stable-diffusion-tpu/utils/type.ts b/spaces/enzostvs/stable-diffusion-tpu/utils/type.ts deleted file mode 100644 index 30f86a73ddd4391c5afd9d9f0753d3a309b1162b..0000000000000000000000000000000000000000 --- a/spaces/enzostvs/stable-diffusion-tpu/utils/type.ts +++ /dev/null @@ -1,18 +0,0 @@ -export interface Collection { - pagination: { - page: number, - total: number, - total_pages: number - }, - images: Array, -} - -export interface Image { - id: string; - file_name: string; - prompt: string; - createdAt: string; - is_visible: boolean; - error?: string; - loading?: boolean; -} \ No newline at end of file diff --git a/spaces/ethanrom/pcb_det/README.md b/spaces/ethanrom/pcb_det/README.md deleted file mode 100644 index 628ee1f363676576a129551648821f7e37e3d0d0..0000000000000000000000000000000000000000 --- a/spaces/ethanrom/pcb_det/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Pcb Det -emoji: 🔥 -colorFrom: blue -colorTo: blue -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/falterWliame/Face_Mask_Detection/Autodata 3.39 Srpski Download [EXCLUSIVE] Tpb.md b/spaces/falterWliame/Face_Mask_Detection/Autodata 3.39 Srpski Download [EXCLUSIVE] Tpb.md deleted file mode 100644 index cf904e4e7750e93f4874d0067ee23dd52c96f2b6..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Autodata 3.39 Srpski Download [EXCLUSIVE] Tpb.md +++ /dev/null @@ -1,6 +0,0 @@ -

autodata 3.39 srpski download tpb


Download Zip ✔✔✔ https://urlca.com/2uDbWb



-
-AutoData 3.40 crack and full version download. allicense Jul 15th, 2013 25,737 Never Not a . ... Autodata 3.39 Srpski Download Tpb Mega.. Autodata 3.45 Full ... 1fdad05405
-
-
-

diff --git a/spaces/fatiXbelha/sd/Descubre las novedades de Free Fire Navidad 2018 el APK ms esperado del ao.md b/spaces/fatiXbelha/sd/Descubre las novedades de Free Fire Navidad 2018 el APK ms esperado del ao.md deleted file mode 100644 index bc27f05e6abac660f4b5edccfc087a60bd27a0d2..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Descubre las novedades de Free Fire Navidad 2018 el APK ms esperado del ao.md +++ /dev/null @@ -1,146 +0,0 @@ -
-

Free Fire Navidad 2018 APK: How to Download and Play the Winterlands Version of the Popular Battle Royale Game

-

 Free Fire is one of the most popular battle royale games on mobile, with over 500 million downloads on Google Play Store. It offers thrilling, fast-paced gameplay in which you fight 49 other players on a remote island using various weapons, vehicles, and items. But did you know that there is a special version of Free Fire, released in December 2018, called Free Fire Navidad 2018 APK? 

-

free fire navidad 2018 apk


 Download: https://urllie.com/2uNFxW 



-

What is Free Fire Navidad 2018 APK?

-

A brief introduction to the game and its features

-

Free Fire Navidad 2018 APK is an alternative version of Free Fire that was launched in celebration of Christmas and New Year. It is also known as Free Fire Winterlands, as it features a new winter-themed map, where you can enjoy snow, ice, and festive decorations. The game also has some exclusive features, such as:

-
    -
  • A new character named Antonio, who has a passive skill that increases his maximum health.
  • -
  • A new weapon called VSS, which is a sniper rifle with a silencer and a scope.
  • -
  • A new vehicle called Snowboard, which allows you to glide on snow and perform tricks.
  • -
  • A new mode called Death Race, where you have to race against other players in vehicles and eliminate them with weapons.
  • -
  • A new event called Snowman Rush, where you have to collect snowballs and build snowmen to get rewards.
  • -
-

The difference between the regular version and the Winterlands version

-

The main difference between the regular version and the Winterlands version of Free Fire is that the latter has a different map, which is covered in snow and ice. This means that you have to adapt your strategy and tactics according to the terrain and weather conditions. For example, you can use snowballs as weapons, hide behind snowmen, slide on ice, or use snowboards to move faster. The Winterlands version also has some exclusive items, such as winter outfits, skins, emotes, and pets.

-

The benefits of downloading the APK file

-

If you want to play Free Fire Navidad 2018 APK, you have to download and install the APK file from a third-party source, as it is not available on Google Play Store. This may sound risky, but there are some benefits of doing so, such as:

-
    -
  • You can enjoy a different version of Free Fire that has more features and content than the regular one.
  • -
  • You can play with players from different regions and servers, as the Winterlands version is not restricted by geography.
  • -
  • You can avoid some bugs and glitches that may occur in the regular version of Free Fire, as the Winterlands version is more stable and optimized.
  • -
  • You can save some storage space on your device, as the Winterlands version is smaller in size than the regular one.
  • -
-

How to Download and Install Free Fire Navidad 2018 APK?

-

The requirements and precautions for downloading the APK file

-

Before you download and install Free Fire Navidad 2018 APK, you need to make sure that your device meets the following requirements:

-
    -
  • Your device must have Android 4.0.3 or higher as the operating system.
  • -
 • Your device must have at least 2 GB of RAM and 1.5 GB of free storage space (the sketch after this list shows one way to check this from a computer). 
  • -
  • Your device must have a stable internet connection and a good battery level.
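 If your phone is connected to a computer with ADB (Android Debug Bridge), you can confirm the version, memory, and storage requirements above without digging through the settings menus. The following is a minimal sketch rather than an official tool: it assumes adb is installed and on your PATH, USB debugging is enabled, and the device is connected. ```python import subprocess def adb_out(*args: str) -> str: """Run an adb command and return its stdout as text.""" result = subprocess.run(["adb", *args], capture_output=True, text=True) return result.stdout.strip() if __name__ == "__main__": # Android OS version (the game needs 4.0.3 or higher) print("Android version:", adb_out("shell", "getprop", "ro.build.version.release")) # Total RAM; the MemTotal line should report at least ~2 GB print(adb_out("shell", "cat", "/proc/meminfo").splitlines()[0]) # Free space on the data partition (at least 1.5 GB is needed) print(adb_out("shell", "df", "/data")) ``` If any of these checks falls short, the installation problems described later in this guide become much more likely. 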
  • -
-

You also need to take some precautions to avoid any risks or problems while downloading and installing the APK file, such as:

-
    -
 • You must enable the "Unknown Sources" option in your device settings, to allow the installation of apps from sources other than Google Play Store. 
  • -
 • You must scan the APK file with reliable antivirus software, to ensure that it is free from any malware or viruses; if the site publishes a SHA-256 checksum, comparing it against your download (see the sketch after this list) is a quick extra integrity check. 
  • -
 • You must back up your data and files, in case something goes wrong during the installation process. 
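 Alongside the antivirus scan, you can confirm that the file you received is byte-for-byte the one the site intended to serve, provided the download page publishes a SHA-256 checksum. A minimal sketch follows; the file name and the published hash below are placeholders, not real values for this APK. ```python import hashlib # Placeholders: substitute your actual file path and the SHA-256 value # published by the download site, if it provides one. APK_PATH = "freefire_navidad_2018.apk" PUBLISHED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000" def sha256_of(path: str, chunk_size: int = 1 << 20) -> str: """Hash the file in 1 MB chunks so a large APK never has to fit in memory.""" digest = hashlib.sha256() with open(path, "rb") as f: for chunk in iter(lambda: f.read(chunk_size), b""): digest.update(chunk) return digest.hexdigest() if __name__ == "__main__": actual = sha256_of(APK_PATH) print("computed:", actual) print("OK" if actual == PUBLISHED_SHA256 else "MISMATCH - do not install this file") ``` A matching hash only proves the file was not corrupted or swapped in transit; it is a complement to, not a replacement for, the antivirus scan. 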
  • -
-

The steps to download and install the APK file from a trusted source

-

Once you have checked the requirements and taken the precautions, you can follow these simple steps to download and install Free Fire Navidad 2018 APK from a trusted source:

-
    -
  1. Go to a reputable website that offers the APK file of Free Fire Navidad 2018, such as [APKPure] or [Uptodown].
  2. -
  3. Click on the download button and wait for the APK file to be downloaded on your device.
  4. -
  5. Locate the APK file in your device's file manager and tap on it to start the installation process.
  6. -
  7. Follow the instructions on the screen and grant the necessary permissions to the app.
  8. -
 9. Wait for the installation to be completed and launch the app from your home screen or app drawer (if you prefer working from a computer, see the ADB sketch after this list). 
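 If you would rather install from a computer, the same APK can be sideloaded over ADB instead of tapping through the on-device installer. This is a minimal sketch under the same assumptions as the earlier one (adb on your PATH, USB debugging enabled); the file name is a placeholder. ```python import shutil import subprocess APK_PATH = "freefire_navidad_2018.apk" # placeholder file name def adb_install(apk: str) -> None: """Sideload an APK onto the connected Android device.""" if shutil.which("adb") is None: raise RuntimeError("adb not found; install Android platform-tools first") # -r replaces an already-installed build while keeping its data result = subprocess.run(["adb", "install", "-r", apk], capture_output=True, text=True) print(result.stdout or result.stderr) if __name__ == "__main__": adb_install(APK_PATH) ``` `adb install` prints `Success` when the package goes in cleanly, and on most devices this route also sidesteps the "Unknown Sources" prompt flow. 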
  10. -
-

The tips to avoid errors and issues while installing the APK file

-

Sometimes, you may encounter some errors or issues while installing the APK file of Free Fire Navidad 2018, such as:

-
    -
  • The installation may fail or get stuck due to insufficient storage space, corrupted files, or incompatible devices.
  • -
  • The app may crash or freeze due to low memory, outdated software, or network issues.
  • -
  • The app may not work properly or show errors due to missing files, permissions, or updates.
  • -
-

To avoid these errors or issues, you can try these tips:

-
    -
  • Clear some storage space on your device by deleting unwanted apps, files, or cache data.
  • -
  • Repair or replace any damaged or corrupted files by downloading them again from a different source or using a file manager app.
  • -
  • Update your device's software and the app's version to the latest one available.
  • -
  • Restart your device and the app to clear any glitches or bugs.
  • -
  • Contact the app's developer or customer support for further assistance or guidance.
  • -

How to Play Free Fire Navidad 2018 APK?

-

The basics of the gameplay and the controls

-

Free Fire Navidad 2018 APK has the same gameplay and controls as the regular version of Free Fire, with some minor changes. You can play solo, duo, or squad mode, and choose from different characters, each with their own skills and abilities. You can also customize your appearance, loadout, and settings according to your preference. The game starts with you parachuting onto the Winterlands map, where you have to scavenge for weapons, items, and vehicles. You have to avoid the shrinking safe zone and eliminate other players until you are the last one standing. You can use the virtual joystick to move, the fire button to shoot, the aim button to aim, and the crouch, jump, and prone buttons to perform actions. You can also use the map, inventory, chat, and settings buttons to access different features.

-

free fire christmas 2018 apk download
-free fire battlegrounds navidad 2018 apk mod
-free fire anniversary 2018 apk update
-free fire navidad 2018 apk gratis para android
-free fire navidad 2018 apk sin internet
-free fire navidad 2018 apk ultima version
-free fire navidad 2018 apk hack diamantes
-free fire navidad 2018 apk full mega
-free fire navidad 2018 apk obb data
-free fire navidad 2018 apk gameplay online
-free fire navidad 2018 apk trucos y consejos
-free fire navidad 2018 apk requisitos minimos
-free fire navidad 2018 apk descargar uptodown
-free fire navidad 2018 apk como instalar
-free fire navidad 2018 apk nuevo mapa
-free fire navidad 2018 apk personajes y skins
-free fire navidad 2018 apk armas y accesorios
-free fire navidad 2018 apk eventos y recompensas
-free fire navidad 2018 apk ranking y clasificacion
-free fire navidad 2018 apk modo zombie
-free fire navidad 2018 apk misiones y desafios
-free fire navidad 2018 apk chat y amigos
-free fire navidad 2018 apk configuracion y ajustes
-free fire navidad 2018 apk bugs y errores
-free fire navidad 2018 apk noticias y novedades
-free fire christmas edition 2018 apk english
-free fire battlegrounds christmas update 2018 apk
-free fire anniversary celebration 2018 apk download
-free fire christmas special 2018 apk mod menu
-free fire christmas event 2018 apk unlimited money
-free fire christmas version 2018 apk offline
-free fire christmas patch 2018 apk latest version
-free fire christmas gift 2018 apk hack gems
-free fire christmas offer 2018 apk full unlocked
-free fire christmas package 2018 apk obb file
-free fire christmas gameplay video 2018 apk youtube
-free fire christmas tips and tricks 2018 apk guide
-free fire christmas system requirements 2018 apk android
-free fire christmas download link 2018 apk google play
-free fire christmas installation tutorial 2018 apk easy steps
-free fire christmas new map snowland 2018 apk preview
-free fire christmas characters and outfits 2018 apk customize
-free fire christmas weapons and accessories 2018 apk upgrade
-free fire christmas events and rewards 2018 apk claim now
-free fire christmas ranking and leaderboard 2018 apk compete with others
-free fire christmas zombie mode survival 2018 apk fun and challenging
-free fire christmas missions and challenges 2018 apk complete and win prizes

-

The tips and tricks to survive and win in the Winterlands map

-

The Winterlands map is different from the other maps in Free Fire, as it has a snowy and icy terrain that affects your visibility, mobility, and strategy. Here are some tips and tricks to survive and win in the Winterlands map:

  • Use the snowboard to glide on snow and perform tricks. You can find snowboards in random locations or in airdrops. You can use them to move faster, escape enemies, or surprise them with your stunts.
  • Use the snowballs as weapons or distractions. You can collect snowballs from snowmen or snow piles. You can throw them at enemies to deal damage, stun them, or make them slip. You can also use them to create noise or diversion.
  • Use the snowmen as cover or camouflage. You can build snowmen by using snowballs or finding them in Snowman Rush events. You can hide behind them or inside them to avoid enemy fire or ambush them.
  • Use the VSS sniper rifle to snipe enemies from a distance. The VSS is a new weapon that is exclusive to the Winterlands mode. It has a silencer and a scope that make it ideal for stealthy sniping. You can find it in loot boxes or airdrops.
  • Use the winter outfits and skins to blend in with the environment. The Winterlands mode has some exclusive outfits and skins that are suitable for the winter theme. You can use them to reduce your visibility and increase your chances of survival.

The best weapons and items to use in the Winterlands mode


The Winterlands mode has some exclusive weapons and items that are more effective than others in the snowy and icy conditions. Here are some of the best weapons and items to use in the Winterlands mode:

| Weapon/Item | Description |
| --- | --- |
| VSS | A sniper rifle with a silencer and a scope that allows you to snipe enemies without revealing your location. |
| Snowboard | A vehicle that allows you to glide on snow and perform tricks. It is faster than running and can help you escape or attack enemies. |
| Snowball | A throwable item that can deal damage, stun, or make enemies slip. It can also be used to create noise or diversion. |
| Snowman | A structure that can be used as cover or camouflage. It can protect you from enemy fire or allow you to ambush enemies. |
| Winter Outfit/Skin | A cosmetic item that can reduce your visibility and make you blend in with the environment. |

Conclusion


A summary of the main points and a call to action for the readers


Free Fire Navidad 2018 APK is a special version of Free Fire released in December 2018 to celebrate Christmas and New Year. It features a winter-themed map called Winterlands and adds exclusive content, including a new character, a new weapon, a new vehicle, a new mode, and a new event. Because it is not available on the Google Play Store, you have to download and install the APK file from a third-party source: make sure your device meets the requirements, take the usual precautions, and follow the tips above to avoid installation errors and issues. Once the app is installed, you can play solo, duo, or squad mode, choose from different characters, weapons, items, and vehicles, and fight against 49 other players on the Winterlands map, avoiding the shrinking safe zone until you are the last one standing. Along the way you can enjoy exclusive features such as snowballs, snowmen, snowboards, and winter outfits. Free Fire Navidad 2018 APK offers a different experience and challenge, so if you are a fan of Free Fire and want to try something new, download and play it today. You will not regret it!


FAQs


What is the size of Free Fire Navidad 2018 APK?


The size of Free Fire Navidad 2018 APK is about 350 MB, which is smaller than the regular version of Free Fire, which is about 600 MB.


Is Free Fire Navidad 2018 APK safe and legal?


Free Fire Navidad 2018 APK can be safe to use, as long as you download it from a trusted and reputable source, such as APKPure or Uptodown, and scan the APK file with reliable antivirus software before installing it. However, you should be aware that downloading and installing APK files from third-party sources may violate the terms and conditions of Free Fire and the Google Play Store, and may carry risks or consequences such as account suspension, data loss, or malware infection.


Can I play Free Fire Navidad 2018 APK with my friends?


Yes, you can play Free Fire Navidad 2018 APK with your friends, as long as they have also downloaded and installed the same version of the game. You can invite them to join your squad or duo mode, or chat with them using the in-game voice or text chat feature. You can also play with players from different regions and servers, as the Winterlands version is not restricted by geography.


How can I update Free Fire Navidad 2018 APK?


You can update Free Fire Navidad 2018 APK by downloading and installing the latest version of the APK file from the same source that you downloaded it from. You should also check for updates regularly, as the Winterlands version may not receive automatic updates from Google Play Store. You should also backup your data and files before updating, in case something goes wrong during the process.
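If you want to script that backup step, a minimal Python sketch is shown below. It assumes you can actually reach the game's data folder (for example from a computer after pulling it with adb, or on the device itself through a Python environment such as Termux), and the package path used here is a guess that may differ on your device.

```python
import shutil
from pathlib import Path

def backup_dir(src: str, dest: str) -> None:
    """Copy a directory tree to a backup location before updating."""
    if not Path(src).is_dir():
        raise FileNotFoundError(f"Nothing to back up at {src}")
    shutil.copytree(src, dest, dirs_exist_ok=True)  # Python 3.8+

# Hypothetical paths -- the package name and mount point may differ.
backup_dir("/sdcard/Android/data/com.dts.freefireth",
           "/sdcard/backups/freefire-navidad")
```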


Where can I find more information about Free Fire Navidad 2018 APK?


You can find more information about Free Fire Navidad 2018 APK by visiting the official website of Free Fire or following their social media accounts. You can also watch some videos or read some reviews of the game on YouTube or other platforms. You can also ask other players or experts for their opinions or tips on the game.

\ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download Messenger APK for Android - Old Version and Latest Version.md b/spaces/fatiXbelha/sd/Download Messenger APK for Android - Old Version and Latest Version.md deleted file mode 100644 index 6941b661f59ce00597e179ceb88cc3a84afd78be..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Messenger APK for Android - Old Version and Latest Version.md +++ /dev/null @@ -1,86 +0,0 @@ -

Messenger Eski Sürüm APK: What Is It and Why You Might Want It


If you are a fan of Facebook's messaging app, Messenger, you might have noticed that it updates frequently and sometimes changes its features and design. While some updates are welcome, others might not suit your preferences or needs. In that case, you might be interested in downloading and installing an old version of Messenger, also known as Messenger eski sürüm apk. But what is it exactly, and why would you want it? In this article, we will explain what Messenger eski sürüm apk is, how to get it, and what are the pros and cons of using it.


messenger eski sürüm apk


Download File ✪✪✪ https://urllie.com/2uNDvs




What Is Messenger Eski Sürüm APK?


To understand what Messenger eski sürüm apk is, we need to break down its components:


Messenger: A Popular Messaging App by Facebook


Messenger is a free messaging app that allows you to chat with your Facebook friends and contacts, as well as anyone who has your phone number. You can use Messenger to send text messages, voice messages, photos, videos, stickers, GIFs, and more. You can also make voice and video calls, create group chats, watch videos together, play games, and connect with businesses. Messenger has over a billion users worldwide and is available on Android, iOS, Windows, Mac, and the web.


APK: A File Format for Android Apps


APK stands for Android Package Kit, which is a file format that contains all the files and code needed to install an app on an Android device. When you download an app from the Google Play Store, you are actually downloading an APK file that installs the app on your device. However, you can also download APK files from other sources, such as websites or file-sharing platforms. This allows you to install apps that are not available on the Play Store or that are not compatible with your device.
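Because an APK is packaged in the ZIP format, you can peek inside one without installing it. Here is a small Python sketch; the filename is a placeholder for whatever APK you have downloaded.

```python
import zipfile

# The filename is a placeholder; point it at any downloaded APK.
with zipfile.ZipFile("messenger.apk") as apk:
    for info in apk.infolist():
        print(f"{info.file_size:>10}  {info.filename}")

# Typical entries include AndroidManifest.xml, classes.dex,
# resources.arsc, and the res/ and META-INF/ directories.
```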




Eski Sürüm: An Old Version of Messenger


Eski sürüm is Turkish for old version. This means that Messenger eski sürüm apk is an old version of the Messenger app that was released before the current one. For example, if the current version of Messenger is 413.0.0.14.72 (as of June 2021), then an eski sürüm apk could be 410.0.0.15.89 or any earlier version. By downloading and installing an eski sürüm apk, you can use an older version of Messenger that might have different features or design than the current one.
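Note that dotted version strings like these compare component by component, not as plain text. A quick Python sketch, using the version numbers from the example above:

```python
def version_key(version: str) -> tuple:
    """Split a dotted version string into a tuple of ints for comparison."""
    return tuple(int(part) for part in version.split("."))

current = "413.0.0.14.72"
older = "410.0.0.15.89"

# Tuples compare element by element, so this correctly ranks the builds.
assert version_key(older) < version_key(current)
print(f"{older} is an eski sürüm relative to {current}")
```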


Why You Might Want Messenger Eski Sürüm APK?


There are several reasons why you might prefer to use an old version of Messenger instead of the latest one:


To Avoid Updates That You Don't Like


Facebook updates Messenger frequently, and not every update is an improvement. If you don't like a new update, you might want to stick to an old version of Messenger that works the way you want it to. For example, some users prefer the old Messenger interface that had tabs for chats, contacts, and groups, instead of the new one that has tabs for chats, people, and discover.


To Save Battery and Data


Another reason why you might want to use an old version of Messenger is to save battery and data. Some updates might make the app more resource-intensive, which means that it consumes more battery power and data when running. This can be a problem if you have a low-end device or a limited data plan. By using an old version of Messenger, you might be able to reduce the app's impact on your device's performance and your data usage.


To Protect Your Privacy


A third reason why you might want to use an old version of Messenger is to protect your privacy. Some updates might introduce new features or permissions that require access to your personal information or data. For example, in 2020, Facebook announced that it would integrate Messenger with Instagram and WhatsApp, which would allow users to chat across the three platforms. However, this also means that Facebook would have more access to your messages and activity across the three apps. If you are concerned about your privacy and don't want Facebook to collect more data about you, you might want to use an old version of Messenger that does not have these features or permissions.


How to Download and Install Messenger Eski Sürüm APK?


If you decide that you want to use an old version of Messenger, you need to follow these steps:


Find a Reliable Source for the APK File


The first step is to find a reliable source for the APK file of the old version of Messenger that you want. You can search online for websites or platforms that offer APK files for download. However, you need to be careful and avoid downloading from untrusted or malicious sources that might contain viruses or malware. Some of the popular and reputable sources for APK files are APKMirror, APKPure, and Uptodown. You can browse these websites and look for the version of Messenger that you want. Make sure to check the file size, date, and reviews before downloading.
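Beyond an antivirus scan, a cheap sanity check is to compare the downloaded file's SHA-256 hash against the checksum the site publishes, when it publishes one. A minimal Python sketch follows; the filename and expected hash are placeholders.

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a file in chunks so large APKs never have to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):  # Python 3.8+
            digest.update(chunk)
    return digest.hexdigest()

expected = "replace-with-the-published-checksum"  # placeholder
actual = sha256_of("messenger-old-version.apk")   # placeholder filename
print("checksum OK" if actual == expected else f"MISMATCH: {actual}")
```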


Enable Unknown Sources on Your Device


The second step is to enable unknown sources on your device. This is a setting that allows you to install apps from sources other than the Google Play Store. To enable unknown sources, go to your device's settings, then security or privacy, then toggle on the option for unknown sources. You might see a warning message that installing apps from unknown sources can harm your device or data. Tap OK or Allow if you trust the source of the APK file.


Install the APK File and Enjoy


The third step is to install the APK file and enjoy using the old version of Messenger. To install the APK file, locate it in your device's file manager or downloads folder, then tap on it. You might see a prompt asking if you want to install this application. Tap Install and wait for the installation process to finish. You might also see a prompt asking if you want to replace the current version of Messenger with this one. Tap Yes or OK if you want to overwrite the current version with the old one. Once the installation is done, you can open Messenger and start using it as usual.
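As an alternative to tapping the file on the phone, you can install the APK from a computer over USB with Android's adb tool, assuming USB debugging is enabled and adb is on your PATH. A sketch that shells out to adb from Python; the filename is a placeholder.

```python
import subprocess

def adb_install(apk_path: str) -> None:
    """Install an APK on a USB-connected device; -r replaces an existing app."""
    result = subprocess.run(
        ["adb", "install", "-r", apk_path],
        capture_output=True,
        text=True,
    )
    # adb reports "Success" on stdout when the install goes through.
    print(result.stdout.strip() or result.stderr.strip())

adb_install("messenger-old-version.apk")  # placeholder filename
```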


What Are the Risks and Limitations of Messenger Eski Sürüm APK?


While using an old version of Messenger might have some benefits, it also comes with some risks and limitations that you should be aware of:


Security and Compatibility Issues


One of the risks of using an old version of Messenger is that it might not be secure or compatible with your device or other apps. Older versions of apps might have bugs or vulnerabilities that could expose your device or data to hackers or malware. They might also not work well with newer versions of Android or other apps that require updated versions of Messenger. For example, if you use an old version of Messenger, you might not be able to chat with someone who uses a newer version of Messenger or Instagram. You might also experience crashes, glitches, or errors when using the app.


Missing Out on New Features and Bug Fixes


A second risk of using an old version of Messenger is that you miss out on new features and bug fixes. Updates often improve the app's performance or stability, or fix issues that might affect your device or data. For example, if you use an old version of Messenger, you might not be able to use the new features such as vanish mode, watch together, or chat themes. You might also encounter bugs or errors that have been resolved in newer versions.


Violating Facebook's Terms of Service


A third risk of using an old version of Messenger is that you might violate Facebook's terms of service. Facebook's terms of service state that you must always use the latest version of their apps and services and that you must not access their services using unauthorized means. By downloading and installing an old version of Messenger from a third-party source, you might be breaking these rules and risking your account's security or integrity. Facebook might detect your activity and suspend or terminate your account or access to their services.


Conclusion


Messenger eski sürüm apk is an old version of Facebook's messaging app that you can download and install on your Android device. It might offer some advantages such as avoiding updates that you don't like, saving battery and data, or protecting your privacy. However, it also comes with some drawbacks such as security and compatibility issues, missing out on new features and bug fixes, or violating Facebook's terms of service. Therefore, you should weigh the pros and cons carefully before deciding to use an old version of Messenger. You should also make sure to download the APK file from a reliable source and enable unknown sources on your device.


FAQs


What is the difference between Messenger and Messenger Lite?


Messenger Lite is a simplified version of Messenger that uses less data and works on low-end devices or slow networks. It has the basic features of Messenger such as sending messages, photos, videos, voice notes, and stickers. It also supports voice and video calls. However, it does not have some of the advanced features of Messenger such as group chats, games, watch together, chat themes, or vanish mode.


How can I update Messenger to the latest version?


You can update Messenger to the latest version by going to the Google Play Store and tapping on the Update button next to the app. Alternatively, you can enable auto-update for Messenger by going to the app's page on the Play Store, tapping on the three-dot menu icon, and toggling on Auto-update.


How can I uninstall Messenger from my device?


You can uninstall Messenger from your device by going to your device's settings, then apps or applications, then finding and tapping on Messenger. Then tap on Uninstall and confirm your action. You can also uninstall Messenger by going to the app's page on the Play Store and tapping on Uninstall.


How can I backup my messages on Messenger?


You can backup your messages on Messenger by using Facebook's data download tool. To use this tool, go to Facebook's website and log in to your account. Then go to Settings & Privacy, then Settings, then Your Facebook Information, then Download Your Information. Then select Messages as one of the categories that you want to download and choose a date range, format, and quality. Then click on Create File and wait for Facebook to prepare your data. Once it is ready, you will receive a notification and an email with a link to download your file.


How can I restore my messages on Messenger?


You can restore your messages on Messenger by using Facebook's data upload tool. To use this tool, go to Facebook's website and log in to your account. Then go to Settings & Privacy, then Settings, then Your Facebook Information, then Transfer a Copy of Your Information. Then select Messages as one of the categories that you want to transfer and choose a destination service such as Google Drive or Dropbox. Then click on Next and follow the instructions to upload your file.

\ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/A Guide to Education in Russia What You Need to Know.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/A Guide to Education in Russia What You Need to Know.md deleted file mode 100644 index 37eff4103d5c6585109172c283f92f97317f720e..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/A Guide to Education in Russia What You Need to Know.md +++ /dev/null @@ -1,231 +0,0 @@ -

Education in Russia: A Comprehensive Guide for International Students


Are you interested in studying in Russia, one of the largest and most diverse countries in the world? Do you want to learn more about the education system, the costs and benefits, and the student life and culture in this fascinating nation? If so, this article is for you. In this comprehensive guide, we will cover everything you need to know about education in Russia, from how to apply for studies to how to enjoy your stay. Whether you are looking for a short-term course, a bachelor's degree, a master's degree, or a PhD, you will find useful information and tips here. Let's get started!


education in russia


DOWNLOAD ::: https://gohhs.com/2uPvlE




Introduction


Russia is a country with a rich history, a diverse culture, and a vast territory. It is also a country with a strong tradition of education, science, and innovation. Russia has more than 600 universities and 200 research institutes, offering a wide range of programs and disciplines. Some of the most famous universities in Russia include Moscow State University, Saint Petersburg State University, Novosibirsk State University, Tomsk State University, Kazan Federal University, and Bauman Moscow State Technical University.


Studying in Russia can offer many benefits for international students. You can gain access to high-quality education, affordable tuition fees, generous scholarships, modern facilities, and qualified teachers. You can also experience a unique culture, learn a new language, make new friends, and explore a beautiful country. According to the official website for the selection of foreign citizens to study in Russia, more than 300,000 international students from 170 countries are currently studying in Russia.


Why study in Russia?


There are many reasons why you might want to study in Russia. Here are some of the main ones:


-
  • High-quality education. Russia has a long history of excellence in education, especially in fields such as mathematics, physics, engineering, medicine, literature, and art. Many Russian universities are ranked among the best in the world by various international rankings. For example, according to the QS World University Rankings 2022, 28 Russian universities are among the top 1000 in the world.
  • Affordable tuition fees. Compared to many other countries, studying in Russia can be quite affordable. The average tuition fee for a bachelor's degree program is about 240,000 rubles (about $3,300) per year, while the average tuition fee for a master's degree program is about 280,000 rubles (about $3,800) per year. Of course, the exact fee depends on the program, the university, and the region. Some programs may cost more or less than the average.
  • Generous scholarships. If you want to study in Russia for free or with a discount, you can apply for various scholarships offered by the Russian government or other organizations. For example, every year the Russian government allocates about 15,000 state-funded places for international students to study at Russian universities. These scholarships cover full tuition fees and provide monthly stipends. You can also apply for other scholarships such as the Global Education Program, the Russian Scholarship Project, or the Open Doors Scholarship.
  • Modern facilities. Many Russian universities have modern facilities and equipment that can enhance your learning experience. You can access libraries, laboratories, computer rooms, sports centers, cultural centers, dormitories, cafeterias, and other amenities on campus. You can also use the internet, email, and other online services provided by the university or the city. Some universities also have partnerships with local companies, research institutes, or cultural organizations that can offer you internships, projects, or excursions.
  • Qualified teachers. Russian universities have highly qualified and experienced teachers who can provide you with a solid education and guidance. Many of them have PhDs, academic titles, publications, awards, and international recognition. Some of them are also involved in cutting-edge research or innovation projects that can enrich your knowledge and skills. You can also interact with guest lecturers, visiting professors, or experts from different fields who can share their insights and perspectives with you.

How to apply for studies in Russia?


If you have decided to study in Russia, you need to follow some steps to apply for your chosen program and university. Here are the main steps:

  1. Choose your program and university. The first step is to choose what you want to study and where you want to study. You can use various online platforms or databases to search for programs and universities that suit your interests, goals, and qualifications. For example, you can use the official website for the selection of foreign citizens to study in Russia, the Study in Russia portal, the Study in Russia app, or the Education in Russia website. You can also contact the university directly or visit their website for more information.
  2. Prepare your documents. The second step is to prepare your documents for admission. The exact documents may vary depending on the program, the university, and the country of origin, but generally they include the following: a passport or an identity document, an educational certificate or a diploma (with a transcript of grades), a language proficiency certificate (if required), a motivation letter or a statement of purpose, a resume or a curriculum vitae, a portfolio or a sample of work (if required), and other supporting documents (such as recommendations, awards, certificates, etc.). You may also need to legalize or apostille your documents, translate them into Russian, and notarize them. You can check the specific requirements with the university or the Russian embassy or consulate in your country.
  3. Submit your application. The third step is to submit your application to the university or through an online platform. You can apply for up to six programs at different universities at the same time. The application period usually starts in October or November and ends in June or July of the following year. However, some programs may have different deadlines or accept applications throughout the year. You should check the exact dates with the university or the platform you are using. You may also need to pay an application fee, which is usually non-refundable.
  4. Pass the entrance exams or tests. The fourth step is to pass the entrance exams or tests required by the university or the program. These may include written exams, oral exams, interviews, essays, assignments, or other forms of assessment. The exams or tests may be held online or offline, depending on the situation and the preferences of the university. You should prepare well for these exams or tests, as they will determine your eligibility and competitiveness for admission.
  5. Receive your admission decision. The fifth step is to receive your admission decision from the university. This usually happens in July or August, but it may vary depending on the program and the university. You will receive an official letter of acceptance, which will confirm your admission and provide you with information about enrollment procedures, tuition fees, scholarships, visa requirements, etc. You should carefully read this letter and follow its instructions.
  6. Enroll in your program. The final step is to enroll in your program and start your studies in Russia. You will need to sign a contract with the university, pay your tuition fees (if applicable), register with the migration authorities, obtain a student ID card, choose your courses, attend orientation sessions, and meet your classmates and teachers. Congratulations! You are now ready to enjoy your education in Russia!

The education system in Russia


Now that you know how to apply for studies in Russia, you might want to learn more about the education system in this country. How is it structured? What are the types and categories of educational institutions? How is the quality and standards of education ensured? Let's find out!


The stages and levels of education


The education system in Russia consists of four main stages: pre-school education, general education, vocational education, and higher education. Each stage has its own levels, programs, and qualifications. Here is a brief overview of each stage:

| Stage | Level | Program | Qualification |
| --- | --- | --- | --- |
| Pre-school education | Kindergarten | Pre-school program | None |
| General education | Primary school | Grades 1-4 | Certificate of primary education |
| General education | Basic school | Grades 5-9 | Certificate of basic education |
| General education | Secondary school | Grades 10-11 | Certificate of secondary education (also known as the Unified State Exam or USE) |
| General education | Lycée or gymnasium | Grades 10-11 (optional) | Certificate of secondary education with advanced curriculum (also known as the Unified State Exam or USE) |
| Vocational education | Secondary vocational education | 2-4 years after grade 9 or 11 | Diploma of secondary vocational education (also known as the College Diploma) |
| Vocational education | Non-university level higher education | 3-4 years after grade 11 or College Diploma | Diploma of non-university level higher education (also known as the Bachelor's Diploma) |
| Vocational education | Post-secondary vocational education | 1-2 years after College Diploma or Bachelor's Diploma | Diploma of post-secondary vocational education (also known as the Specialist's Diploma) |
| Higher education | Bachelor's degree | 4 years after grade 11 or College Diploma | Bachelor's degree |
| Higher education | Specialist's degree | 5-6 years after grade 11 or College Diploma | Diploma of higher education (also known as the Specialist's Diploma) |
| Higher education | Master's degree | 2 years after Bachelor's degree or Specialist's degree | Master's degree |
| Higher education | Doctoral degree | 3-4 years after Master's degree or Specialist's degree | Doctoral degree |

As you can see, the education system in Russia offers various options and pathways for students of different ages, backgrounds, and goals. You can choose the level and program that best suits your needs and interests.


The types and categories of educational institutions


The education system in Russia also consists of different types and categories of educational institutions. These include:

  • State or public institutions. These are educational institutions that are funded and regulated by the state or the federal government. They are the most common and popular type of institutions in Russia, as they offer free or subsidized education for many students. They also have a high reputation and recognition in the country and abroad.
  • Non-state or private institutions. These are educational institutions that are funded and regulated by private entities, such as individuals, organizations, or companies. They are less common and less popular than state institutions, as they usually charge higher tuition fees and have lower recognition in the country and abroad. However, they may offer more flexibility, diversity, and innovation in their programs and services.
  • Autonomous institutions. These are educational institutions that have a special status of autonomy granted by the state or the federal government. They have more freedom and independence in their management, finances, curriculum, and academic policies. They may also have more collaboration and cooperation with other institutions, both domestic and international.
  • National research universities. These are educational institutions that have a special status of excellence granted by the state or the federal government. They are recognized as the leading universities in Russia in terms of research, innovation, and education. They receive additional funding and support from the government to develop their scientific and academic potential. There are currently 29 national research universities in Russia.
  • Federal universities. These are educational institutions that have a special status of integration granted by the state or the federal government. They are formed by merging several regional universities into one large university that covers a wide geographical area. They aim to improve the quality and accessibility of education in different parts of Russia. There are currently 10 federal universities in Russia.

Depending on your preferences and qualifications, you can choose the type and category of educational institution that best fits your expectations and goals.


The quality and standards of education


The education system in Russia is regulated and monitored by various authorities and agencies that ensure the quality and standards of education. These include:

  • The Ministry of Science and Higher Education of the Russian Federation. This is the main federal body that oversees the development and implementation of state policy and legal regulation in the field of higher education, science, technology, innovation, etc. It is responsible for licensing, accreditation, certification, funding, supervision, evaluation, etc. of educational institutions.
  • The Federal Service for Supervision in Education and Science (Rosobrnadzor). This is the main federal body that controls and supervises the quality and compliance of education and science with state standards and requirements. It is responsible for conducting state examinations, inspections, audits, etc. of educational institutions.
  • The National Accreditation Agency (NAA). This is an independent non-governmental organization that conducts public accreditation of educational programs and institutions based on professional standards and criteria. It is responsible for assessing the relevance, effectiveness, competitiveness, etc. of educational programs and institutions.
  • The National Center for Public Accreditation (NCPA). This is another independent non-governmental organization that conducts public accreditation of educational programs and institutions based on international standards and criteria. It is responsible for enhancing the recognition, mobility, compatibility, etc. of educational programs and institutions.

These authorities and agencies work together to ensure that the education system in Russia meets the needs and expectations of students, employers, society, etc.


The costs and benefits of studying in Russia


Another important aspect of studying in Russia is the costs and benefits involved. How much does it cost to study in Russia? What are the sources of funding available? What are the expenses you need to consider? What are the benefits you can enjoy? Let's find out!


Tuition fees and scholarships


As we mentioned before, studying in Russia can be quite affordable compared to many other countries. However, you still need to pay attention to the tuition fees charged by different programs and universities. The tuition fees depend on various factors, such as the level, the discipline, the duration, the location, the reputation, etc. of the program and the university. Generally, the tuition fees range from 120,000 rubles (about $1,600) to 880,000 rubles (about $12,000) per year. You can check the exact tuition fees on the websites of the programs and universities you are interested in. However, you don't have to pay the full tuition fees if you can get a scholarship or a grant from the Russian government or other organizations. As noted above, there are various scholarships available for international students who want to study in Russia. Some of the most popular ones are:

  • The state-funded places. These are scholarships that cover full tuition fees and provide monthly stipends for international students who are selected by the Russian government to study at Russian universities. The selection is based on academic merit, language proficiency, and other criteria. You can apply for these scholarships through the official website for the selection of foreign citizens to study in Russia or through the Russian embassy or consulate in your country.
  • The Global Education Program. This is a scholarship program that covers full tuition fees and provides monthly stipends for international students who want to study master's degree programs in Russia in fields such as engineering, natural sciences, mathematics, computer science, etc. The program is funded by the Russian government and administered by Rossotrudnichestvo, an agency that promotes cultural and educational cooperation with other countries. You can apply for this program through the Global Education Program website.
  • The Russian Scholarship Project. This is a scholarship project that covers full tuition fees and provides monthly stipends for international students who want to study bachelor's or master's degree programs in Russia in fields such as economics, management, law, social sciences, humanities, etc. The project is funded by leading Russian companies and administered by the Association of Global Universities, an organization that supports internationalization and innovation in higher education. You can apply for this project through the Russian Scholarship Project website.
  • The Open Doors Scholarship. This is a scholarship competition that covers full tuition fees and provides monthly stipends for international students who want to study master's or doctoral degree programs in Russia in any field of study. The competition is organized by the Association of Global Universities and supported by the Ministry of Science and Higher Education of the Russian Federation. You can apply for this competition through the Open Doors Scholarship website.

These are just some examples of scholarships that you can apply for if you want to study in Russia. There may be other scholarships offered by specific universities, regions, organizations, or foundations that you can also explore.


Living expenses and accommodation


Besides tuition fees and scholarships, you also need to consider your living expenses and accommodation when studying in Russia. How much does it cost to live in Russia? What are the options for accommodation? What are the expenses you need to budget for? Here are some answers:

  • Living expenses. The cost of living in Russia varies depending on the city, the region, the season, and your lifestyle. Generally, it is lower than in many Western countries, but higher than in many Asian or African countries. According to Numbeo, a website that compares living costs around the world, the average monthly living expenses for a single person (excluding rent) in Russia are about 36,000 rubles (about $500). Of course, this amount may differ depending on your personal needs and preferences. Some of the main items you need to budget for are food, transportation, utilities, communication, entertainment, health care, clothing, etc.
  • Accommodation. The options for accommodation in Russia include dormitories, apartments, hostels, hotels, homestays, etc. The most common and affordable option for international students is dormitories, which are usually provided by universities or affiliated organizations. The average monthly rent for a dormitory room is about 5,000 rubles (about $70). However, dormitories may have limited space, facilities, privacy, etc. If you prefer more comfort, convenience, or independence, you can opt for apartments, which are usually rented by private landlords or agencies. The average monthly rent for a one-bedroom apartment in Russia is about 25,000 rubles (about $340). However, apartments may have higher costs, risks, or regulations. You can also choose other options such as hostels, hotels, homestays, etc. depending on your budget, duration, and purpose of stay. You can search for accommodation online or offline, through websites, apps, agencies, friends, etc.

As you can see, living and studying in Russia can be quite affordable and enjoyable if you plan well and manage your finances wisely.
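To see how these figures add up, here is a quick back-of-the-envelope budget in Python. The numbers are the averages quoted in this article, not guarantees, and a scholarship or a cheaper region can lower them considerably.

```python
# Averages quoted in this article, in rubles.
tuition_per_year = 240_000     # average bachelor's tuition
dorm_rent_per_month = 5_000    # average dormitory room
living_per_month = 36_000      # average living costs excluding rent

annual_total = tuition_per_year + 12 * (dorm_rent_per_month + living_per_month)
print(f"Estimated annual cost: {annual_total:,} rubles")
# -> Estimated annual cost: 732,000 rubles (roughly $10,000 at the
#    ~73 RUB/USD rate implied by the figures above)
```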


Student visa and registration


Another important aspect of studying in Russia is the student visa and registration process. What are the requirements and procedures for obtaining a student visa? What are the rules and regulations for staying in Russia? What are the documents and fees involved? Here are some answers:

  • Student visa. A student visa is a type of visa that allows you to enter and stay in Russia for the purpose of studying at a Russian university or institution. To obtain a student visa, you need to have an official invitation from the university or institution where you have been admitted. You also need to have a valid passport, a completed visa application form, a photo, a medical certificate, an HIV test certificate, and other supporting documents. You need to submit these documents to the Russian embassy or consulate in your country or region. You may also need to pay a visa fee, which varies depending on the country and the type of visa. The processing time for a student visa may take from 2 to 20 days, depending on the situation and the preferences of the embassy or consulate. A student visa is usually valid for 90 days, but it can be extended multiple times during your stay in Russia.
  • Registration. Registration is a process that confirms your legal residence in Russia. It is mandatory for all foreign citizens who stay in Russia for more than 7 days. To register, you need to have a migration card, which is a document that records your entry and exit from Russia. You will receive this card when you cross the border or at the airport. You also need to have a registration form, which is a document that records your address and duration of stay in Russia. You will receive this form from your landlord or host (such as the university, the dormitory, the hotel, etc.). You need to submit these documents to the local office of the Federal Migration Service (FMS) or to the post office within 7 days of your arrival in Russia. You may also need to pay a registration fee, which is usually about 200 rubles (about $3). Registration is usually valid for 90 days, but it can be extended multiple times during your stay in Russia.

As you can see, obtaining a student visa and registering in Russia are not very complicated or expensive processes if you follow the requirements and procedures correctly.


The student life and culture in Russia


The last but not least aspect of studying in Russia is the student life and culture in this country. What is it like to live and study in Russia? What are the language and communication issues? What are the academic and social activities available? What are the safety and security concerns? Let's find out!


Language and communication


The official language of Russia is Russian, which is a Slavic language that uses the Cyrillic alphabet. It is spoken by about 150 million people in Russia and by about 260 million people worldwide. It is also one of the six official languages of the United Nations.


If you want to study in Russia, you need to have a good command of Russian, as most programs and courses are taught in this language. You also need to have a good level of Russian to communicate with locals, access services, read signs, etc. You can learn Russian before or after arriving in Russia, through various methods such as online courses, language schools, tutors, books, apps, etc.


However, you don't have to worry if you don't speak Russian very well or at all. There are also some programs and courses that are taught in English or other foreign languages, especially at higher levels of education or at internationally oriented universities. You can also find some locals who speak English or other foreign languages, especially among young people, students, teachers, and professionals. You can also use translation tools or apps to help you communicate with others.


The main thing is to be open-minded and respectful when communicating with Russians, as they may have different customs, values, beliefs, opinions, etc. from yours. You should also be aware of some cultural differences that may affect your communication style, such as eye contact, personal space, gestures, humor, etc. You should also be polite and courteous when addressing Russians, using appropriate titles, forms of address, greetings, etc.


Academic and social activities


Studying in Russia can also offer you many opportunities for academic and social activities that can enrich your learning experience and personal development. Here are some examples:

  • Academic activities. You can participate in various academic activities that can enhance your knowledge and skills, such as seminars, workshops, conferences, competitions, projects, research, etc. You can also join academic clubs, societies, or associations that can connect you with other students or professionals who share your interests or goals. You can also access academic resources, such as libraries, museums, archives, etc. that can provide you with valuable information or materials.
  • Social activities. You can also participate in various social activities that can enhance your well-being and happiness, such as sports, arts, culture, entertainment, volunteering, etc. You can also join social clubs, groups, or events that can connect you with other students or locals who share your hobbies or passions. You can also access social resources, such as parks, theaters, cinemas, cafes, etc. that can provide you with fun or relaxation.

These are just some examples of activities that you can enjoy while studying in Russia. There may be many more options available depending on your location, university, program, etc.


Safety and security


Finally, there is the question of safety and security. How safe and secure is it to live and study in Russia? What are the risks and challenges you may face? What are the precautions and measures you can take? Here are some answers:

  • Safety and security. Russia is generally a safe and secure country for international students, as long as you follow the laws, rules, and norms of the society. You are unlikely to encounter serious problems or dangers, such as violent crimes, terrorism, natural disasters, etc. However, you may encounter some minor issues or inconveniences, such as petty crimes, scams, discrimination, cultural misunderstandings, etc. You should be aware of these issues and avoid them as much as possible.
  • Risks and challenges. Some of the risks and challenges you may face while living and studying in Russia include: language barriers, cultural differences, climate conditions, health issues, legal issues, etc. These risks and challenges may vary depending on your personal circumstances, such as your nationality, background, health status, etc. You should be prepared for these risks and challenges and seek help or advice when needed.
  • Precautions and measures. Some of the precautions and measures you can take to ensure your safety and security in Russia include: learning Russian, respecting the culture, dressing appropriately, staying healthy, following the laws, registering with the authorities, keeping your documents safe, avoiding risky situations, being alert and cautious, seeking assistance or support, etc. These precautions and measures may seem obvious or common sense, but they can make a big difference in your safety and security in Russia.

As you can see, living and studying in Russia is not very dangerous or difficult if you are careful and responsible.


Conclusion


In conclusion, education in Russia is a great option for international students who want to pursue their academic goals and personal dreams. Russia offers a high-quality education system, an affordable cost of living, generous scholarships, modern facilities, qualified teachers, a unique cultural experience, the chance to learn a new language, a friendly student community, and a beautiful country to explore. Of course, there are also some challenges and difficulties that you may face, such as language barriers, cultural differences, climate conditions, and visa and registration procedures. However, these challenges can be overcome with proper preparation, guidance, and support. If you are interested in studying in Russia, you should start your application process as soon as possible and follow the steps and tips we have provided in this article. We hope that this article has been helpful and informative for you. We wish you all the best in your education journey in Russia!


FAQs


Here are some frequently asked questions and answers about education in Russia:

  1. What are the requirements for studying in Russia? The main requirements for studying in Russia are: having a valid passport, having an official invitation from a Russian university or institution, having an educational certificate or a diploma (with a transcript of grades), having a language proficiency certificate (if required), having a medical certificate and an HIV test certificate, passing the entrance exams or tests (if required), and obtaining a student visa and registering with the authorities.
  2. What are the best universities in Russia? The best universities in Russia are those that have a high reputation, recognition, ranking, quality, and performance in the country and abroad. Some of the best universities in Russia include Moscow State University, Saint Petersburg State University, Novosibirsk State University, Tomsk State University, Kazan Federal University, Bauman Moscow State Technical University, etc. You can check the rankings of Russian universities on various websites, such as the QS World University Rankings, the Times Higher Education World University Rankings, the Academic Ranking of World Universities, etc.
  3. What are the most popular programs or courses in Russia? The most popular programs or courses in Russia are those that have a high demand, relevance, competitiveness, and employability in the country and abroad. Some of the most popular programs or courses in Russia include engineering, natural sciences, mathematics, computer science, medicine, economics, management, law, social sciences, humanities, etc. You can check the availability and popularity of programs or courses on various websites, such as Study in Russia, Education in Russia, the Russian Scholarship Project, etc.
  4. How long does it take to study in Russia? The duration of studying in Russia depends on the level and program of education you choose. Generally, it takes 4 years to complete a bachelor's degree program, 2 years to complete a master's degree program, 3-4 years to complete a doctoral degree program, 2-4 years to complete a secondary vocational education program, 3-4 years to complete a non-university level higher education program, and 1-2 years to complete a post-secondary vocational education program. Of course, the exact duration may vary depending on the program, the university, and your progress.
  5. How much does it cost to study in Russia? The cost of studying in Russia depends on various factors, such as the tuition fees, the living expenses, the accommodation, the visa and registration fees, etc. Generally, the average cost of studying in Russia is about 360,000 rubles (about $5,000) per year. Of course, this amount may differ depending on your personal circumstances, such as your program, your university, your city, your lifestyle, etc. You can check the exact cost of studying in Russia on various websites, such as Study in Russia, Education in Russia, Numbeo, etc.

\ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Descarga Geometry Dash Meltdown 2.2 APK con todo desbloqueado y disfruta de la accin.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Descarga Geometry Dash Meltdown 2.2 APK con todo desbloqueado y disfruta de la accin.md deleted file mode 100644 index b2d741f2935368d64cdb1465440d18af9d41e46d..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Descarga Geometry Dash Meltdown 2.2 APK con todo desbloqueado y disfruta de la accin.md +++ /dev/null @@ -1,94 +0,0 @@ - -

Geometry Dash Meltdown: A Fun and Challenging Game for Android


If you are looking for a game that will test your reflexes, skills, and patience, then you should try Geometry Dash Meltdown. This is a rhythm-based platformer game that will make you jump, fly, and flip through various obstacles and hazards. You will need to time your moves perfectly to the beat of the music and avoid crashing or falling. Geometry Dash Meltdown is a spin-off of the original Geometry Dash, which is one of the most popular games in the genre. It is also a free game that you can download and play on your Android device, with optional in-app purchases if you want to unlock more features and content.


geometry dash meltdown full version 2.2 apk all unlocked


DOWNLOADhttps://gohhs.com/2uPtH5




What are the features of Geometry Dash Meltdown?


Geometry Dash Meltdown has many features that make it a fun and challenging game for players of all ages and skill levels. Here are some of them:


Three levels with unique soundtracks and themes


The game has three levels that you can play: The Seven Seas, Viking Arena, and Airborne Robots. Each level has its own soundtrack composed by F-777, a famous electronic music producer. The music matches the theme and mood of each level, creating an immersive and dynamic gaming experience. The levels also have different backgrounds, colors, and obstacles that add variety and difficulty to the game.


Customizable characters and icons


You can customize your character by choosing from different icons, colors, and trails. You can also unlock new icons by completing achievements or collecting secret coins in the levels. You can mix and match different icons to create your own unique style and personality.


geometry dash meltdown mod apk 2.2.11 unlocked
-download geometry dash meltdown 2.2 full version free
-geometry dash meltdown apk all levels unlocked
-geometry dash meltdown hack apk 2.2 unlimited coins
-geometry dash meltdown premium apk 2.2 no ads
-geometry dash meltdown latest version 2.2 apk download
-geometry dash meltdown 2.2 full apk + obb
-geometry dash meltdown cheats apk 2.2 all skins
-geometry dash meltdown cracked apk 2.2 offline
-geometry dash meltdown update 2.2 apk mod
-geometry dash meltdown 2.2 apk android 1
-geometry dash meltdown full game apk 2.2
-geometry dash meltdown apk pure 2.2 unlocked
-geometry dash meltdown rexdl 2.2 mod apk
-geometry dash meltdown revdl 2.2 hack apk
-geometry dash meltdown apk mirror 2.2 full version
-geometry dash meltdown apkpure 2.2 all unlocked
-geometry dash meltdown aptoide 2.2 modded apk
-geometry dash meltdown apkmody 2.2 unlimited money
-geometry dash meltdown apknite 2.2 free download
-geometry dash meltdown happymod 2.2 hacked apk
-geometry dash meltdown an1 2.2 mod menu apk
-geometry dash meltdown andropalace 2.2 premium apk
-geometry dash meltdown android republic 2.2 cheat apk
-geometry dash meltdown mob.org 2.2 full game apk
-geometry dash meltdown mobpark 2.2 modded apk download
-geometry dash meltdown mobdisc 2.2 cracked apk free
-geometry dash meltdown mobfan 2.2 latest version apk
-geometry dash meltdown moboplay 2.2 hack mod apk
-geometry dash meltdown mobogenie 2.2 unlocked all levels
-geometry dash meltdown uptodown 2.2 full version apk
-geometry dash meltdown panda helper 2.2 vip mod apk
-geometry dash meltdown pandaapp 2.2 unlimited coins apk
-geometry dash meltdown pandaanroid 2.2 free premium apk
-geometry dash meltdown ac market 2.2 modded apk download
-geometry dash meltdown blackmod 2.2 mod menu apk free
-geometry dash meltdown platinmods 2.2 hack mod apk
-geometry dash meltdown ihackedit 2.2 cheat mod apk
-geometry dash meltdown lenov.ru 2.2 cracked mod apk
-geometry dash meltdown androidoyun.club 2.2 premium mod apk

-

Practice mode and achievements

-

If you find the levels too hard or frustrating, you can use the practice mode to hone your skills and learn the patterns of the obstacles. You can place checkpoints anywhere in the level and restart from there if you die. You can also track your progress and performance by viewing your stats and achievements. You can earn achievements by completing various tasks, such as finishing a level without dying, collecting all secret coins, or reaching a certain score.

-

Leaderboards and community support

-

You can compete with other players around the world by checking the leaderboards for each level. You can see how you rank among other players based on your best score, percentage, or time. You can also share your replays and screenshots with your friends or other players through social media or online platforms. You can also get tips, tricks, and support from other Geometry Dash fans by joining the official Geometry Dash forums or Discord server.

-

How to download and install Geometry Dash Meltdown full version 2.2 apk all unlocked?

-

If you want to enjoy all the features and content of Geometry Dash Meltdown without spending any money, you can download and install the modded apk file that gives you access to everything for free. Here are some of the benefits of downloading the modded apk file:

-

The benefits of downloading the modded apk file

-
    -
  • You can unlock all levels, icons, colors, trails, and achievements without having to complete any requirements or make any in-app purchases.
  • -
  • You can remove all ads from the game, making it smoother and more enjoyable.
  • -
  • You can play offline without needing an internet connection.
  • -
  • You can update the game easily without losing any data or progress.
  • -
-

The steps to download and install the apk file

-
    -
  1. Go to a trusted and reliable website that offers the modded apk file for Geometry Dash Meltdown full version 2.2. For example, you can use this link: [Geometry Dash Meltdown Mod APK 2.2 (Unlocked) Download for Android].
  2. -
  3. Click on the download button and wait for the file to be downloaded to your device.
  4. -
  5. Go to your device settings and enable the option to install apps from unknown sources. Android blocks apk files from outside the Play Store by default, so the installation will fail without this step.
  6. -
  7. Locate the downloaded apk file in your file manager and tap on it to start the installation process. (If you prefer to install from a computer, see the adb sketch after this list.)
  8. -
  9. Follow the instructions on the screen and wait for the installation to finish.
  10. -
  11. Launch the game and enjoy playing Geometry Dash Meltdown full version 2.2 with all features unlocked.
  12. -
-
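If you prefer to install from a computer, the same result can be achieved over USB with adb (Android Debug Bridge). Below is a minimal Python sketch of that route; it assumes adb is installed and on your PATH, USB debugging is enabled on the phone, and the apk file name is a hypothetical placeholder.

```python
# Minimal sketch: sideload the apk over USB with adb instead of tapping it
# in a file manager. Assumes adb is on PATH and USB debugging is enabled;
# the file name below is a hypothetical placeholder.
import subprocess

apk_path = "geometry-dash-meltdown-2.2-mod.apk"  # hypothetical file name

# "adb install -r" installs the apk, replacing an existing copy of the app
# while keeping its data.
result = subprocess.run(
    ["adb", "install", "-r", apk_path],
    capture_output=True,
    text=True,
)

if result.returncode == 0:
    print("Install succeeded:", result.stdout.strip())
else:
    print("Install failed:", result.stderr.strip())
```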

The precautions to take before installing the apk file

-

Before you download and install the modded apk file, you should take some precautions to ensure your device's safety and security. Here are some of them:

-
    -
  • Make sure you have enough storage space on your device to avoid any errors or crashes during the installation.
  • -
  • Make sure you have a good antivirus or anti-malware app on your device to scan the apk file for any viruses or malware that might harm your device or data. (For an extra integrity check, see the checksum sketch after this list.)
  • -
  • Make sure you back up your data and progress from the original game before installing the modded apk file. This will help you restore your data and progress in case something goes wrong or you want to switch back to the original game.
  • -
  • Make sure you uninstall the original game before installing the modded apk file. This will prevent any conflicts or issues between the two versions of the game.
  • -
-
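Beyond an antivirus scan, you can also verify the file's integrity yourself: if the download page publishes a checksum for the apk, compare it against a hash you compute locally. The Python sketch below does this with SHA-256; the file name and expected hash are hypothetical placeholders.

```python
# Minimal sketch: verify a downloaded apk against a published SHA-256
# checksum before installing. The file name and expected hash below are
# hypothetical placeholders; copy the real hash from the download page.
import hashlib

apk_path = "geometry-dash-meltdown-2.2-mod.apk"
expected_sha256 = "paste-the-published-hash-here"

digest = hashlib.sha256()
with open(apk_path, "rb") as f:
    # Hash in chunks so large files never need to fit in memory at once.
    for chunk in iter(lambda: f.read(8192), b""):
        digest.update(chunk)

if digest.hexdigest() == expected_sha256:
    print("Checksum matches: the file arrived intact.")
else:
    print("Checksum mismatch: do not install this file.")
```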

Conclusion

-

Geometry Dash Meltdown is a fun and challenging game that will keep you entertained and engaged for hours. It has amazing graphics, soundtracks, and gameplay that will test your skills and reflexes. You can download and play the game for free on your Android device, or you can download and install the modded apk file that will give you access to all features and content for free. Either way, you will have a blast playing Geometry Dash Meltdown full version 2.2 with all features unlocked.

-

If you liked this article, please share it with your friends and family who might be interested in playing Geometry Dash Meltdown. Also, feel free to leave a comment below if you have any questions, feedback, or suggestions about the game or the article. Thank you for reading and happy gaming!

-

Frequently Asked Questions

-

Here are some of the most common questions that people ask about Geometry Dash Meltdown:

-
    -
  1. What is the difference between Geometry Dash Meltdown and Geometry Dash?
    Geometry Dash Meltdown is a spin-off of Geometry Dash, so the two games share the same core gameplay but differ in several ways. Geometry Dash Meltdown has only three levels, while Geometry Dash has more than 20. Geometry Dash Meltdown has its own theme and style, while Geometry Dash has a more diverse and colorful design. Geometry Dash Meltdown is free to play, while Geometry Dash costs $1.99 to download.
  2. -
  3. Is Geometry Dash Meltdown hard?
    Geometry Dash Meltdown is a hard game, but not impossible. It requires a lot of practice, patience, and perseverance to master the levels and avoid crashing or falling. The game also has a practice mode that lets you place checkpoints and restart from there if you die. You can also watch replays of other players or tutorials online to learn how to beat the levels.
  4. -
  5. Can I play Geometry Dash Meltdown on PC?
    Yes, you can play Geometry Dash Meltdown on PC using an Android emulator, which is software that lets you run Android apps and games on your computer. There are many Android emulators available online, such as BlueStacks, NoxPlayer, or LDPlayer. You can download and install any of them on your PC, then install Geometry Dash Meltdown from the Google Play Store or from an apk file.
  6. -
  7. How do I unlock new icons in Geometry Dash Meltdown?
    You can unlock new icons in Geometry Dash Meltdown by completing achievements or collecting secret coins in the levels. Achievements are tasks that you can complete by doing certain things in the game, such as finishing a level without dying, collecting all secret coins, or reaching a certain score. Secret coins are hidden coins that you can find in some parts of the levels. You can use these coins to unlock new icons in the shop.
  8. -
  9. Is Geometry Dash Meltdown safe to download and play?
    Yes, Geometry Dash Meltdown is safe to download and play, as long as you download it from a trusted and reliable source. The game is developed by RobTop Games, a reputable game developer that has created many other popular games. The game is also rated E for Everyone by the ESRB, which means it is suitable for all ages and does not contain any violence, profanity, or inappropriate content. However, if you download the modded apk file, you should be careful and follow the precautions mentioned above to avoid any risks or issues.
  10. -

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/fffiloni/Music-To-Zeroscope/examples/blank.md b/spaces/fffiloni/Music-To-Zeroscope/examples/blank.md deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/fffiloni/Video-Matting-Anything/GroundingDINO/groundingdino/util/slconfig.py b/spaces/fffiloni/Video-Matting-Anything/GroundingDINO/groundingdino/util/slconfig.py deleted file mode 100644 index 3f293e3aff215a3c7c2f7d21d27853493b6ebfbc..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/Video-Matting-Anything/GroundingDINO/groundingdino/util/slconfig.py +++ /dev/null @@ -1,427 +0,0 @@ -# ========================================================== -# Modified from mmcv -# ========================================================== -import ast -import os.path as osp -import shutil -import sys -import tempfile -from argparse import Action -from importlib import import_module -import platform - -from addict import Dict -from yapf.yapflib.yapf_api import FormatCode - -BASE_KEY = "_base_" -DELETE_KEY = "_delete_" -RESERVED_KEYS = ["filename", "text", "pretty_text", "get", "dump", "merge_from_dict"] - - -def check_file_exist(filename, msg_tmpl='file "{}" does not exist'): - if not osp.isfile(filename): - raise FileNotFoundError(msg_tmpl.format(filename)) - - -class ConfigDict(Dict): - def __missing__(self, name): - raise KeyError(name) - - def __getattr__(self, name): - try: - value = super(ConfigDict, self).__getattr__(name) - except KeyError: - ex = AttributeError(f"'{self.__class__.__name__}' object has no " f"attribute '{name}'") - except Exception as e: - ex = e - else: - return value - raise ex - - -class SLConfig(object): - """ - config files. - only support .py file as config now. 
- - ref: mmcv.utils.config - - Example: - >>> cfg = Config(dict(a=1, b=dict(b1=[0, 1]))) - >>> cfg.a - 1 - >>> cfg.b - {'b1': [0, 1]} - >>> cfg.b.b1 - [0, 1] - >>> cfg = Config.fromfile('tests/data/config/a.py') - >>> cfg.filename - "/home/kchen/projects/mmcv/tests/data/config/a.py" - >>> cfg.item4 - 'test' - >>> cfg - "Config [path: /home/kchen/projects/mmcv/tests/data/config/a.py]: " - "{'item1': [1, 2], 'item2': {'a': 0}, 'item3': True, 'item4': 'test'}" - """ - - @staticmethod - def _validate_py_syntax(filename): - with open(filename) as f: - content = f.read() - try: - ast.parse(content) - except SyntaxError: - raise SyntaxError("There are syntax errors in config " f"file {filename}") - - @staticmethod - def _file2dict(filename): - filename = osp.abspath(osp.expanduser(filename)) - check_file_exist(filename) - if filename.lower().endswith(".py"): - with tempfile.TemporaryDirectory() as temp_config_dir: - temp_config_file = tempfile.NamedTemporaryFile(dir=temp_config_dir, suffix=".py") - temp_config_name = osp.basename(temp_config_file.name) - if platform.system() == 'Windows': - temp_config_file.close() - shutil.copyfile(filename, osp.join(temp_config_dir, temp_config_name)) - temp_module_name = osp.splitext(temp_config_name)[0] - sys.path.insert(0, temp_config_dir) - SLConfig._validate_py_syntax(filename) - mod = import_module(temp_module_name) - sys.path.pop(0) - cfg_dict = { - name: value for name, value in mod.__dict__.items() if not name.startswith("__") - } - # delete imported module - del sys.modules[temp_module_name] - # close temp file - temp_config_file.close() - elif filename.lower().endswith((".yml", ".yaml", ".json")): - from .slio import slload - - cfg_dict = slload(filename) - else: - raise IOError("Only py/yml/yaml/json type are supported now!") - - cfg_text = filename + "\n" - with open(filename, "r") as f: - cfg_text += f.read() - - # parse the base file - if BASE_KEY in cfg_dict: - cfg_dir = osp.dirname(filename) - base_filename = cfg_dict.pop(BASE_KEY) - base_filename = base_filename if isinstance(base_filename, list) else [base_filename] - - cfg_dict_list = list() - cfg_text_list = list() - for f in base_filename: - _cfg_dict, _cfg_text = SLConfig._file2dict(osp.join(cfg_dir, f)) - cfg_dict_list.append(_cfg_dict) - cfg_text_list.append(_cfg_text) - - base_cfg_dict = dict() - for c in cfg_dict_list: - if len(base_cfg_dict.keys() & c.keys()) > 0: - raise KeyError("Duplicate key is not allowed among bases") - # TODO Allow the duplicate key while warnning user - base_cfg_dict.update(c) - - base_cfg_dict = SLConfig._merge_a_into_b(cfg_dict, base_cfg_dict) - cfg_dict = base_cfg_dict - - # merge cfg_text - cfg_text_list.append(cfg_text) - cfg_text = "\n".join(cfg_text_list) - - return cfg_dict, cfg_text - - @staticmethod - def _merge_a_into_b(a, b): - """merge dict `a` into dict `b` (non-inplace). - values in `a` will overwrite `b`. - copy first to avoid inplace modification - - Args: - a ([type]): [description] - b ([type]): [description] - - Returns: - [dict]: [description] - """ - # import ipdb; ipdb.set_trace() - if not isinstance(a, dict): - return a - - b = b.copy() - for k, v in a.items(): - if isinstance(v, dict) and k in b and not v.pop(DELETE_KEY, False): - - if not isinstance(b[k], dict) and not isinstance(b[k], list): - # if : - # import ipdb; ipdb.set_trace() - raise TypeError( - f"{k}={v} in child config cannot inherit from base " - f"because {k} is a dict in the child config but is of " - f"type {type(b[k])} in base config. 
You may set " - f"`{DELETE_KEY}=True` to ignore the base config" - ) - b[k] = SLConfig._merge_a_into_b(v, b[k]) - elif isinstance(b, list): - try: - _ = int(k) - except: - raise TypeError( - f"b is a list, " f"index {k} should be an int when input but {type(k)}" - ) - b[int(k)] = SLConfig._merge_a_into_b(v, b[int(k)]) - else: - b[k] = v - - return b - - @staticmethod - def fromfile(filename): - cfg_dict, cfg_text = SLConfig._file2dict(filename) - return SLConfig(cfg_dict, cfg_text=cfg_text, filename=filename) - - def __init__(self, cfg_dict=None, cfg_text=None, filename=None): - if cfg_dict is None: - cfg_dict = dict() - elif not isinstance(cfg_dict, dict): - raise TypeError("cfg_dict must be a dict, but " f"got {type(cfg_dict)}") - for key in cfg_dict: - if key in RESERVED_KEYS: - raise KeyError(f"{key} is reserved for config file") - - super(SLConfig, self).__setattr__("_cfg_dict", ConfigDict(cfg_dict)) - super(SLConfig, self).__setattr__("_filename", filename) - if cfg_text: - text = cfg_text - elif filename: - with open(filename, "r") as f: - text = f.read() - else: - text = "" - super(SLConfig, self).__setattr__("_text", text) - - @property - def filename(self): - return self._filename - - @property - def text(self): - return self._text - - @property - def pretty_text(self): - - indent = 4 - - def _indent(s_, num_spaces): - s = s_.split("\n") - if len(s) == 1: - return s_ - first = s.pop(0) - s = [(num_spaces * " ") + line for line in s] - s = "\n".join(s) - s = first + "\n" + s - return s - - def _format_basic_types(k, v, use_mapping=False): - if isinstance(v, str): - v_str = f"'{v}'" - else: - v_str = str(v) - - if use_mapping: - k_str = f"'{k}'" if isinstance(k, str) else str(k) - attr_str = f"{k_str}: {v_str}" - else: - attr_str = f"{str(k)}={v_str}" - attr_str = _indent(attr_str, indent) - - return attr_str - - def _format_list(k, v, use_mapping=False): - # check if all items in the list are dict - if all(isinstance(_, dict) for _ in v): - v_str = "[\n" - v_str += "\n".join( - f"dict({_indent(_format_dict(v_), indent)})," for v_ in v - ).rstrip(",") - if use_mapping: - k_str = f"'{k}'" if isinstance(k, str) else str(k) - attr_str = f"{k_str}: {v_str}" - else: - attr_str = f"{str(k)}={v_str}" - attr_str = _indent(attr_str, indent) + "]" - else: - attr_str = _format_basic_types(k, v, use_mapping) - return attr_str - - def _contain_invalid_identifier(dict_str): - contain_invalid_identifier = False - for key_name in dict_str: - contain_invalid_identifier |= not str(key_name).isidentifier() - return contain_invalid_identifier - - def _format_dict(input_dict, outest_level=False): - r = "" - s = [] - - use_mapping = _contain_invalid_identifier(input_dict) - if use_mapping: - r += "{" - for idx, (k, v) in enumerate(input_dict.items()): - is_last = idx >= len(input_dict) - 1 - end = "" if outest_level or is_last else "," - if isinstance(v, dict): - v_str = "\n" + _format_dict(v) - if use_mapping: - k_str = f"'{k}'" if isinstance(k, str) else str(k) - attr_str = f"{k_str}: dict({v_str}" - else: - attr_str = f"{str(k)}=dict({v_str}" - attr_str = _indent(attr_str, indent) + ")" + end - elif isinstance(v, list): - attr_str = _format_list(k, v, use_mapping) + end - else: - attr_str = _format_basic_types(k, v, use_mapping) + end - - s.append(attr_str) - r += "\n".join(s) - if use_mapping: - r += "}" - return r - - cfg_dict = self._cfg_dict.to_dict() - text = _format_dict(cfg_dict, outest_level=True) - # copied from setup.cfg - yapf_style = dict( - based_on_style="pep8", - 
blank_line_before_nested_class_or_def=True, - split_before_expression_after_opening_paren=True, - ) - text, _ = FormatCode(text, style_config=yapf_style, verify=True) - - return text - - def __repr__(self): - return f"Config (path: {self.filename}): {self._cfg_dict.__repr__()}" - - def __len__(self): - return len(self._cfg_dict) - - def __getattr__(self, name): - # # debug - # print('+'*15) - # print('name=%s' % name) - # print("addr:", id(self)) - # # print('type(self):', type(self)) - # print(self.__dict__) - # print('+'*15) - # if self.__dict__ == {}: - # raise ValueError - - return getattr(self._cfg_dict, name) - - def __getitem__(self, name): - return self._cfg_dict.__getitem__(name) - - def __setattr__(self, name, value): - if isinstance(value, dict): - value = ConfigDict(value) - self._cfg_dict.__setattr__(name, value) - - def __setitem__(self, name, value): - if isinstance(value, dict): - value = ConfigDict(value) - self._cfg_dict.__setitem__(name, value) - - def __iter__(self): - return iter(self._cfg_dict) - - def dump(self, file=None): - # import ipdb; ipdb.set_trace() - if file is None: - return self.pretty_text - else: - with open(file, "w") as f: - f.write(self.pretty_text) - - def merge_from_dict(self, options): - """Merge list into cfg_dict - - Merge the dict parsed by MultipleKVAction into this cfg. - - Examples: - >>> options = {'model.backbone.depth': 50, - ... 'model.backbone.with_cp':True} - >>> cfg = Config(dict(model=dict(backbone=dict(type='ResNet')))) - >>> cfg.merge_from_dict(options) - >>> cfg_dict = super(Config, self).__getattribute__('_cfg_dict') - >>> assert cfg_dict == dict( - ... model=dict(backbone=dict(depth=50, with_cp=True))) - - Args: - options (dict): dict of configs to merge from. - """ - option_cfg_dict = {} - for full_key, v in options.items(): - d = option_cfg_dict - key_list = full_key.split(".") - for subkey in key_list[:-1]: - d.setdefault(subkey, ConfigDict()) - d = d[subkey] - subkey = key_list[-1] - d[subkey] = v - - cfg_dict = super(SLConfig, self).__getattribute__("_cfg_dict") - super(SLConfig, self).__setattr__( - "_cfg_dict", SLConfig._merge_a_into_b(option_cfg_dict, cfg_dict) - ) - - # for multiprocess - def __setstate__(self, state): - self.__init__(state) - - def copy(self): - return SLConfig(self._cfg_dict.copy()) - - def deepcopy(self): - return SLConfig(self._cfg_dict.deepcopy()) - - -class DictAction(Action): - """ - argparse action to split an argument into KEY=VALUE form - on the first = and append to a dictionary. 
List options should - be passed as comma separated values, i.e KEY=V1,V2,V3 - """ - - @staticmethod - def _parse_int_float_bool(val): - try: - return int(val) - except ValueError: - pass - try: - return float(val) - except ValueError: - pass - if val.lower() in ["true", "false"]: - return True if val.lower() == "true" else False - if val.lower() in ["none", "null"]: - return None - return val - - def __call__(self, parser, namespace, values, option_string=None): - options = {} - for kv in values: - key, val = kv.split("=", maxsplit=1) - val = [self._parse_int_float_bool(v) for v in val.split(",")] - if len(val) == 1: - val = val[0] - options[key] = val - setattr(namespace, self.dest, options) diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/ws/lib/permessage-deflate.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/ws/lib/permessage-deflate.js deleted file mode 100644 index 94603c98daf19ef0ce7036e073620098e3083680..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/ws/lib/permessage-deflate.js +++ /dev/null @@ -1,511 +0,0 @@ -'use strict'; - -const zlib = require('zlib'); - -const bufferUtil = require('./buffer-util'); -const Limiter = require('./limiter'); -const { kStatusCode } = require('./constants'); - -const TRAILER = Buffer.from([0x00, 0x00, 0xff, 0xff]); -const kPerMessageDeflate = Symbol('permessage-deflate'); -const kTotalLength = Symbol('total-length'); -const kCallback = Symbol('callback'); -const kBuffers = Symbol('buffers'); -const kError = Symbol('error'); - -// -// We limit zlib concurrency, which prevents severe memory fragmentation -// as documented in https://github.com/nodejs/node/issues/8871#issuecomment-250915913 -// and https://github.com/websockets/ws/issues/1202 -// -// Intentionally global; it's the global thread pool that's an issue. -// -let zlibLimiter; - -/** - * permessage-deflate implementation. - */ -class PerMessageDeflate { - /** - * Creates a PerMessageDeflate instance. - * - * @param {Object} [options] Configuration options - * @param {(Boolean|Number)} [options.clientMaxWindowBits] Advertise support - * for, or request, a custom client window size - * @param {Boolean} [options.clientNoContextTakeover=false] Advertise/ - * acknowledge disabling of client context takeover - * @param {Number} [options.concurrencyLimit=10] The number of concurrent - * calls to zlib - * @param {(Boolean|Number)} [options.serverMaxWindowBits] Request/confirm the - * use of a custom server window size - * @param {Boolean} [options.serverNoContextTakeover=false] Request/accept - * disabling of server context takeover - * @param {Number} [options.threshold=1024] Size (in bytes) below which - * messages should not be compressed if context takeover is disabled - * @param {Object} [options.zlibDeflateOptions] Options to pass to zlib on - * deflate - * @param {Object} [options.zlibInflateOptions] Options to pass to zlib on - * inflate - * @param {Boolean} [isServer=false] Create the instance in either server or - * client mode - * @param {Number} [maxPayload=0] The maximum allowed message length - */ - constructor(options, isServer, maxPayload) { - this._maxPayload = maxPayload | 0; - this._options = options || {}; - this._threshold = - this._options.threshold !== undefined ? 
this._options.threshold : 1024; - this._isServer = !!isServer; - this._deflate = null; - this._inflate = null; - - this.params = null; - - if (!zlibLimiter) { - const concurrency = - this._options.concurrencyLimit !== undefined - ? this._options.concurrencyLimit - : 10; - zlibLimiter = new Limiter(concurrency); - } - } - - /** - * @type {String} - */ - static get extensionName() { - return 'permessage-deflate'; - } - - /** - * Create an extension negotiation offer. - * - * @return {Object} Extension parameters - * @public - */ - offer() { - const params = {}; - - if (this._options.serverNoContextTakeover) { - params.server_no_context_takeover = true; - } - if (this._options.clientNoContextTakeover) { - params.client_no_context_takeover = true; - } - if (this._options.serverMaxWindowBits) { - params.server_max_window_bits = this._options.serverMaxWindowBits; - } - if (this._options.clientMaxWindowBits) { - params.client_max_window_bits = this._options.clientMaxWindowBits; - } else if (this._options.clientMaxWindowBits == null) { - params.client_max_window_bits = true; - } - - return params; - } - - /** - * Accept an extension negotiation offer/response. - * - * @param {Array} configurations The extension negotiation offers/reponse - * @return {Object} Accepted configuration - * @public - */ - accept(configurations) { - configurations = this.normalizeParams(configurations); - - this.params = this._isServer - ? this.acceptAsServer(configurations) - : this.acceptAsClient(configurations); - - return this.params; - } - - /** - * Releases all resources used by the extension. - * - * @public - */ - cleanup() { - if (this._inflate) { - this._inflate.close(); - this._inflate = null; - } - - if (this._deflate) { - const callback = this._deflate[kCallback]; - - this._deflate.close(); - this._deflate = null; - - if (callback) { - callback( - new Error( - 'The deflate stream was closed while data was being processed' - ) - ); - } - } - } - - /** - * Accept an extension negotiation offer. - * - * @param {Array} offers The extension negotiation offers - * @return {Object} Accepted configuration - * @private - */ - acceptAsServer(offers) { - const opts = this._options; - const accepted = offers.find((params) => { - if ( - (opts.serverNoContextTakeover === false && - params.server_no_context_takeover) || - (params.server_max_window_bits && - (opts.serverMaxWindowBits === false || - (typeof opts.serverMaxWindowBits === 'number' && - opts.serverMaxWindowBits > params.server_max_window_bits))) || - (typeof opts.clientMaxWindowBits === 'number' && - !params.client_max_window_bits) - ) { - return false; - } - - return true; - }); - - if (!accepted) { - throw new Error('None of the extension offers can be accepted'); - } - - if (opts.serverNoContextTakeover) { - accepted.server_no_context_takeover = true; - } - if (opts.clientNoContextTakeover) { - accepted.client_no_context_takeover = true; - } - if (typeof opts.serverMaxWindowBits === 'number') { - accepted.server_max_window_bits = opts.serverMaxWindowBits; - } - if (typeof opts.clientMaxWindowBits === 'number') { - accepted.client_max_window_bits = opts.clientMaxWindowBits; - } else if ( - accepted.client_max_window_bits === true || - opts.clientMaxWindowBits === false - ) { - delete accepted.client_max_window_bits; - } - - return accepted; - } - - /** - * Accept the extension negotiation response. 
- * - * @param {Array} response The extension negotiation response - * @return {Object} Accepted configuration - * @private - */ - acceptAsClient(response) { - const params = response[0]; - - if ( - this._options.clientNoContextTakeover === false && - params.client_no_context_takeover - ) { - throw new Error('Unexpected parameter "client_no_context_takeover"'); - } - - if (!params.client_max_window_bits) { - if (typeof this._options.clientMaxWindowBits === 'number') { - params.client_max_window_bits = this._options.clientMaxWindowBits; - } - } else if ( - this._options.clientMaxWindowBits === false || - (typeof this._options.clientMaxWindowBits === 'number' && - params.client_max_window_bits > this._options.clientMaxWindowBits) - ) { - throw new Error( - 'Unexpected or invalid parameter "client_max_window_bits"' - ); - } - - return params; - } - - /** - * Normalize parameters. - * - * @param {Array} configurations The extension negotiation offers/reponse - * @return {Array} The offers/response with normalized parameters - * @private - */ - normalizeParams(configurations) { - configurations.forEach((params) => { - Object.keys(params).forEach((key) => { - let value = params[key]; - - if (value.length > 1) { - throw new Error(`Parameter "${key}" must have only a single value`); - } - - value = value[0]; - - if (key === 'client_max_window_bits') { - if (value !== true) { - const num = +value; - if (!Number.isInteger(num) || num < 8 || num > 15) { - throw new TypeError( - `Invalid value for parameter "${key}": ${value}` - ); - } - value = num; - } else if (!this._isServer) { - throw new TypeError( - `Invalid value for parameter "${key}": ${value}` - ); - } - } else if (key === 'server_max_window_bits') { - const num = +value; - if (!Number.isInteger(num) || num < 8 || num > 15) { - throw new TypeError( - `Invalid value for parameter "${key}": ${value}` - ); - } - value = num; - } else if ( - key === 'client_no_context_takeover' || - key === 'server_no_context_takeover' - ) { - if (value !== true) { - throw new TypeError( - `Invalid value for parameter "${key}": ${value}` - ); - } - } else { - throw new Error(`Unknown parameter "${key}"`); - } - - params[key] = value; - }); - }); - - return configurations; - } - - /** - * Decompress data. Concurrency limited. - * - * @param {Buffer} data Compressed data - * @param {Boolean} fin Specifies whether or not this is the last fragment - * @param {Function} callback Callback - * @public - */ - decompress(data, fin, callback) { - zlibLimiter.add((done) => { - this._decompress(data, fin, (err, result) => { - done(); - callback(err, result); - }); - }); - } - - /** - * Compress data. Concurrency limited. - * - * @param {(Buffer|String)} data Data to compress - * @param {Boolean} fin Specifies whether or not this is the last fragment - * @param {Function} callback Callback - * @public - */ - compress(data, fin, callback) { - zlibLimiter.add((done) => { - this._compress(data, fin, (err, result) => { - done(); - callback(err, result); - }); - }); - } - - /** - * Decompress data. - * - * @param {Buffer} data Compressed data - * @param {Boolean} fin Specifies whether or not this is the last fragment - * @param {Function} callback Callback - * @private - */ - _decompress(data, fin, callback) { - const endpoint = this._isServer ? 'client' : 'server'; - - if (!this._inflate) { - const key = `${endpoint}_max_window_bits`; - const windowBits = - typeof this.params[key] !== 'number' - ? 
zlib.Z_DEFAULT_WINDOWBITS - : this.params[key]; - - this._inflate = zlib.createInflateRaw({ - ...this._options.zlibInflateOptions, - windowBits - }); - this._inflate[kPerMessageDeflate] = this; - this._inflate[kTotalLength] = 0; - this._inflate[kBuffers] = []; - this._inflate.on('error', inflateOnError); - this._inflate.on('data', inflateOnData); - } - - this._inflate[kCallback] = callback; - - this._inflate.write(data); - if (fin) this._inflate.write(TRAILER); - - this._inflate.flush(() => { - const err = this._inflate[kError]; - - if (err) { - this._inflate.close(); - this._inflate = null; - callback(err); - return; - } - - const data = bufferUtil.concat( - this._inflate[kBuffers], - this._inflate[kTotalLength] - ); - - if (this._inflate._readableState.endEmitted) { - this._inflate.close(); - this._inflate = null; - } else { - this._inflate[kTotalLength] = 0; - this._inflate[kBuffers] = []; - - if (fin && this.params[`${endpoint}_no_context_takeover`]) { - this._inflate.reset(); - } - } - - callback(null, data); - }); - } - - /** - * Compress data. - * - * @param {(Buffer|String)} data Data to compress - * @param {Boolean} fin Specifies whether or not this is the last fragment - * @param {Function} callback Callback - * @private - */ - _compress(data, fin, callback) { - const endpoint = this._isServer ? 'server' : 'client'; - - if (!this._deflate) { - const key = `${endpoint}_max_window_bits`; - const windowBits = - typeof this.params[key] !== 'number' - ? zlib.Z_DEFAULT_WINDOWBITS - : this.params[key]; - - this._deflate = zlib.createDeflateRaw({ - ...this._options.zlibDeflateOptions, - windowBits - }); - - this._deflate[kTotalLength] = 0; - this._deflate[kBuffers] = []; - - this._deflate.on('data', deflateOnData); - } - - this._deflate[kCallback] = callback; - - this._deflate.write(data); - this._deflate.flush(zlib.Z_SYNC_FLUSH, () => { - if (!this._deflate) { - // - // The deflate stream was closed while data was being processed. - // - return; - } - - let data = bufferUtil.concat( - this._deflate[kBuffers], - this._deflate[kTotalLength] - ); - - if (fin) data = data.slice(0, data.length - 4); - - // - // Ensure that the callback will not be called again in - // `PerMessageDeflate#cleanup()`. - // - this._deflate[kCallback] = null; - - this._deflate[kTotalLength] = 0; - this._deflate[kBuffers] = []; - - if (fin && this.params[`${endpoint}_no_context_takeover`]) { - this._deflate.reset(); - } - - callback(null, data); - }); - } -} - -module.exports = PerMessageDeflate; - -/** - * The listener of the `zlib.DeflateRaw` stream `'data'` event. - * - * @param {Buffer} chunk A chunk of data - * @private - */ -function deflateOnData(chunk) { - this[kBuffers].push(chunk); - this[kTotalLength] += chunk.length; -} - -/** - * The listener of the `zlib.InflateRaw` stream `'data'` event. - * - * @param {Buffer} chunk A chunk of data - * @private - */ -function inflateOnData(chunk) { - this[kTotalLength] += chunk.length; - - if ( - this[kPerMessageDeflate]._maxPayload < 1 || - this[kTotalLength] <= this[kPerMessageDeflate]._maxPayload - ) { - this[kBuffers].push(chunk); - return; - } - - this[kError] = new RangeError('Max payload size exceeded'); - this[kError].code = 'WS_ERR_UNSUPPORTED_MESSAGE_LENGTH'; - this[kError][kStatusCode] = 1009; - this.removeListener('data', inflateOnData); - this.reset(); -} - -/** - * The listener of the `zlib.InflateRaw` stream `'error'` event. 
- * - * @param {Error} err The emitted error - * @private - */ -function inflateOnError(err) { - // - // There is no need to call `Zlib#close()` as the handle is automatically - // closed when an error is emitted. - // - this[kPerMessageDeflate]._inflate = null; - err[kStatusCode] = 1007; - this[kCallback](err); -} diff --git a/spaces/fffiloni/zeroscope-img-to-video/README.md b/spaces/fffiloni/zeroscope-img-to-video/README.md deleted file mode 100644 index 345753aee2bc899f2acd0250394681052e839a65..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/zeroscope-img-to-video/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Zeroscope Image-To-Video -emoji: 🐠 -colorFrom: blue -colorTo: gray -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -duplicated_from: fffiloni/zeroscope ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/flax-community/roberta-hindi/mlm_custom/test_mlm.py b/spaces/flax-community/roberta-hindi/mlm_custom/test_mlm.py deleted file mode 100644 index f0ecd523f184b354111534d8228fda947be21f6c..0000000000000000000000000000000000000000 --- a/spaces/flax-community/roberta-hindi/mlm_custom/test_mlm.py +++ /dev/null @@ -1,143 +0,0 @@ -import json -import os - -import numpy as np -import pandas as pd -from transformers import (AutoModel, AutoModelForMaskedLM, AutoTokenizer, - RobertaModel, pipeline) - - -class MLMTest(): - - def __init__(self, config_file="mlm_test_config.csv", full_text_file="mlm_full_text.csv", targeted_text_file="mlm_targeted_text.csv"): - - self.config_df = pd.read_csv(os.path.join(os.path.dirname(os.path.realpath(__file__)), config_file)) - self.config_df.fillna("", inplace=True) - self.full_text_df = pd.read_csv(os.path.join(os.path.dirname(os.path.realpath(__file__)), full_text_file)) - self.targeted_text_df = pd.read_csv(os.path.join(os.path.dirname(os.path.realpath(__file__)), targeted_text_file)) - self.full_text_results = [] - self.targeted_text_results = [] - - def _run_full_test_row(self, text, print_debug=False): - return_data = [] - data = text.split() - for i in range(0, len(data)): - masked_text = " ".join(data[:i]) + " "+self.nlp.tokenizer.mask_token+" " + " ".join(data[i+1:]) - expected_result = data[i] - result = self.nlp(masked_text) - self.full_text_results.append({"text": masked_text, "result": result[0]["token_str"], "true_output": expected_result}) - if print_debug: - print(masked_text) - print([x["token_str"] for x in result]) - print("-"*20) - return_data.append({"prediction": result[0]["token_str"], "true_output": expected_result}) - return return_data - - def _run_targeted_test_row(self, text, expected_result, print_debug=False): - return_data = [] - result = self.nlp(text.replace("", self.nlp.tokenizer.mask_token)) - self.targeted_text_results.append({"text": text, "result": result[0]["token_str"], "true_output": expected_result}) - if print_debug: - print(text) - print([x["token_str"] for x in result]) - print("-"*20) - return_data.append({"prediction": result[0]["token_str"], "true_output": expected_result}) - return return_data - - def _compute_acc(self, results): - ctr = 0 - for row in results: - try: - z = json.loads(row["true_output"]) - if isinstance(z, list): - if row["prediction"] in z: - ctr+=1 - except: - if row["prediction"] == row["true_output"]: - ctr+=1 - - return float(ctr/len(results)) - - def run_full_test(self, exclude_user_ids=[], print_debug=False): - df = pd.DataFrame() - for idx, row 
in self.config_df.iterrows(): - self.full_text_results = [] - - model_name = row["model_name"] - display_name = row["display_name"] if row["display_name"] else row["model_name"] - revision = row["revision"] if row["revision"] else "main" - from_flax = row["from_flax"] - if from_flax: - model = AutoModelForMaskedLM.from_pretrained(model_name, from_flax=True, revision=revision) - tokenizer = AutoTokenizer.from_pretrained(model_name) - tokenizer.save_pretrained('exported_pytorch_model') - model.save_pretrained('exported_pytorch_model') - self.nlp = pipeline('fill-mask', model="exported_pytorch_model") - else: - self.nlp = pipeline('fill-mask', model=model_name) - accs = [] - try: - for idx, row in self.full_text_df.iterrows(): - if row["user_id"] in exclude_user_ids: - continue - results = self._run_full_test_row(row["text"], print_debug=print_debug) - - acc = self._compute_acc(results) - accs.append(acc) - except: - print("Error for", display_name) - continue - - print(display_name, " Average acc:", sum(accs)/len(accs)) - if df.empty: - df = pd.DataFrame(self.full_text_results) - df.rename(columns={"result": display_name}, inplace=True) - else: - preds = [x["result"] for x in self.full_text_results] - df[display_name] = preds - df.to_csv("full_text_results.csv", index=False) - print("Results saved to full_text_results.csv") - - def run_targeted_test(self, exclude_user_ids=[], print_debug=False): - - df = pd.DataFrame() - for idx, row in self.config_df.iterrows(): - self.targeted_text_results = [] - - model_name = row["model_name"] - display_name = row["display_name"] if row["display_name"] else row["model_name"] - revision = row["revision"] if row["revision"] else "main" - from_flax = row["from_flax"] - if from_flax: - model = AutoModelForMaskedLM.from_pretrained(model_name, from_flax=True, revision=revision) - tokenizer = AutoTokenizer.from_pretrained(model_name) - tokenizer.save_pretrained('exported_pytorch_model') - model.save_pretrained('exported_pytorch_model') - self.nlp = pipeline('fill-mask', model="exported_pytorch_model") - else: - self.nlp = pipeline('fill-mask', model=model_name) - accs = [] - try: - for idx, row2 in self.targeted_text_df.iterrows(): - if row2["user_id"] in exclude_user_ids: - continue - results = self._run_targeted_test_row(row2["text"], row2["output"], print_debug=print_debug) - - acc = self._compute_acc(results) - accs.append(acc) - except: - import traceback - print(traceback.format_exc()) - print("Error for", display_name) - continue - - print(display_name, " Average acc:", sum(accs)/len(accs)) - if df.empty: - df = pd.DataFrame(self.targeted_text_results) - df.rename(columns={"result": display_name}, inplace=True) - else: - preds = [x["result"] for x in self.targeted_text_results] - df[display_name] = preds - df.to_csv("targeted_text_results.csv", index=False) - print("Results saved to targeted_text_results.csv") - diff --git a/spaces/florim/MedGPT/autogpt/config/__init__.py b/spaces/florim/MedGPT/autogpt/config/__init__.py deleted file mode 100644 index 726b6dcf3da95968b948c4d897e97a9cdd0928ff..0000000000000000000000000000000000000000 --- a/spaces/florim/MedGPT/autogpt/config/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -""" -This module contains the configuration classes for AutoGPT. 
-""" -from autogpt.config.ai_config import AIConfig -from autogpt.config.config import Config, check_openai_api_key -from autogpt.config.singleton import AbstractSingleton, Singleton - -__all__ = [ - "check_openai_api_key", - "AbstractSingleton", - "AIConfig", - "Config", - "Singleton", -] diff --git a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/backup_envs/talkitoutpolite.py b/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/backup_envs/talkitoutpolite.py deleted file mode 100644 index 25b681a0df38d614b3373bcae41192f82072f556..0000000000000000000000000000000000000000 --- a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/backup_envs/talkitoutpolite.py +++ /dev/null @@ -1,464 +0,0 @@ -from gym_minigrid.minigrid import * -from gym_minigrid.register import register - - -class Wizard(NPC): - """ - A simple NPC that knows who is telling the truth - """ - - def __init__(self, color, name, env): - super().__init__(color) - self.name = name - self.env = env - self.npc_dir = 1 # NPC initially looks downward - # todo: this should be id == name - self.npc_type = 0 # this will be put into the encoding - self.was_introduced_to = False - - def can_overlap(self): - # If the NPC is hidden, agent can overlap on it - return self.env.hidden_npc - - def encode(self, nb_dims=3): - if self.env.hidden_npc: - if nb_dims == 3: - return (1, 0, 0) - elif nb_dims == 4: - return (1, 0, 0, 0) - else: - return super().encode(nb_dims=nb_dims) - - def listen(self, utterance): - if self.env.hidden_npc: - return None - - if self.was_introduced_to: - if utterance == TalkItOutPoliteGrammar.construct_utterance([0, 1]): - if self.env.nameless: - return "Ask the {} guide.".format(self.env.true_guide.color) - else: - return "Ask {}.".format(self.env.true_guide.name) - else: - if utterance == TalkItOutPoliteGrammar.construct_utterance([3, 3]): - self.was_introduced_to = True - return "I am well." - - return None - - - -class Guide(NPC): - """ - A simple NPC that knows the correct door. - """ - - def __init__(self, color, name, env, liar=False): - super().__init__(color) - self.name = name - self.env = env - self.liar = liar - self.npc_dir = 1 # NPC initially looks downward - # todo: this should be id == name - self.npc_type = 1 # this will be put into the encoding - self.was_introduced_to = False - - # Select a random target object as mission - obj_idx = self.env._rand_int(0, len(self.env.door_pos)) - self.target_pos = self.env.door_pos[obj_idx] - self.target_color = self.env.door_colors[obj_idx] - - def can_overlap(self): - # If the NPC is hidden, agent can overlap on it - return self.env.hidden_npc - - def encode(self, nb_dims=3): - if self.env.hidden_npc: - if nb_dims == 3: - return (1, 0, 0) - elif nb_dims == 4: - return (1, 0, 0, 0) - else: - return super().encode(nb_dims=nb_dims) - - def listen(self, utterance): - if self.env.hidden_npc: - return None - - if self.was_introduced_to: - if utterance == TalkItOutPoliteGrammar.construct_utterance([0, 1]): - if self.liar: - fake_colors = [c for c in self.env.door_colors if c != self.env.target_color] - fake_color = self.env._rand_elem(fake_colors) - - # Generate the mission string - assert fake_color != self.env.target_color - return 'go to the %s door' % fake_color - - else: - return self.env.mission - else: - if utterance == TalkItOutPoliteGrammar.construct_utterance([3, 3]): - self.was_introduced_to = True - return "I am well." 
- - - def render(self, img): - c = COLORS[self.color] - - npc_shapes = [] - # Draw eyes - npc_shapes.append(point_in_circle(cx=0.70, cy=0.50, r=0.10)) - npc_shapes.append(point_in_circle(cx=0.30, cy=0.50, r=0.10)) - - # Draw mouth - npc_shapes.append(point_in_rect(0.20, 0.80, 0.72, 0.81)) - - # todo: move this to super function - # todo: super.render should be able to take the npc_shapes and then rotate them - - if hasattr(self, "npc_dir"): - # Pre-rotation to ensure npc_dir = 1 means NPC looks downwards - npc_shapes = [rotate_fn(v, cx=0.5, cy=0.5, theta=-1*(math.pi / 2)) for v in npc_shapes] - # Rotate npc based on its direction - npc_shapes = [rotate_fn(v, cx=0.5, cy=0.5, theta=(math.pi/2) * self.npc_dir) for v in npc_shapes] - - # Draw shapes - for v in npc_shapes: - fill_coords(img, v, c) - - -class TalkItOutPoliteGrammar(object): - - templates = ["Where is", "Open", "Close", "How are"] - things = [ - "sesame", "the exit", "the wall", "you", "the ceiling", "the window", "the entrance", "the closet", - "the drawer", "the fridge", "the floor", "the lamp", "the trash can", "the chair", "the bed", "the sofa" - ] - assert len(templates)*len(things) == 64 - print("language complexity {}:".format(len(templates)*len(things))) - - grammar_action_space = spaces.MultiDiscrete([len(templates), len(things)]) - - @classmethod - def construct_utterance(cls, action): - return cls.templates[int(action[0])] + " " + cls.things[int(action[1])] + " " - - -class TalkItOutPoliteEnv(MultiModalMiniGridEnv): - """ - Environment in which the agent is instructed to go to a given object - named using an English text string - """ - - def __init__( - self, - size=5, - hear_yourself=False, - diminished_reward=True, - step_penalty=False, - nameless=False, - max_steps=100, - hidden_npc=False, - - ): - assert size >= 5 - self.empty_symbol = "NA \n" - self.hear_yourself = hear_yourself - self.diminished_reward = diminished_reward - self.step_penalty = step_penalty - self.nameless = nameless - self.hidden_npc = hidden_npc - - if max_steps is None: - max_steps = 5*size**2 - - super().__init__( - grid_size=size, - max_steps=max_steps, - # Set this to True for maximum speed - see_through_walls=True, - actions=MiniGridEnv.Actions, - action_space=spaces.MultiDiscrete([ - len(MiniGridEnv.Actions), - *TalkItOutPoliteGrammar.grammar_action_space.nvec - ]), - add_npc_direction=True - ) - - print({ - "size": size, - "hear_yourself": hear_yourself, - "diminished_reward": diminished_reward, - "step_penalty": step_penalty, - }) - - def _gen_grid(self, width, height): - # Create the grid - self.grid = Grid(width, height, nb_obj_dims=4) - - # Randomly vary the room width and height - width = self._rand_int(5, width+1) - height = self._rand_int(5, height+1) - - # Generate the surrounding walls - self.grid.wall_rect(0, 0, width, height) - - # Generate the surrounding walls - self.grid.wall_rect(0, 0, width, height) - - # Generate the 4 doors at random positions - self.door_pos = [] - self.door_front_pos = [] # Remembers positions in front of door to avoid setting wizard here - - self.door_pos.append((self._rand_int(2, width-2), 0)) - self.door_front_pos.append((self.door_pos[-1][0], self.door_pos[-1][1]+1)) - - self.door_pos.append((self._rand_int(2, width-2), height-1)) - self.door_front_pos.append((self.door_pos[-1][0], self.door_pos[-1][1] - 1)) - - self.door_pos.append((0, self._rand_int(2, height-2))) - self.door_front_pos.append((self.door_pos[-1][0] + 1, self.door_pos[-1][1])) - - self.door_pos.append((width-1, self._rand_int(2, 
height-2))) - self.door_front_pos.append((self.door_pos[-1][0] - 1, self.door_pos[-1][1])) - - # Generate the door colors - self.door_colors = [] - while len(self.door_colors) < len(self.door_pos): - color = self._rand_elem(COLOR_NAMES) - if color in self.door_colors: - continue - self.door_colors.append(color) - - # Place the doors in the grid - for idx, pos in enumerate(self.door_pos): - color = self.door_colors[idx] - self.grid.set(*pos, Door(color)) - - - # Set a randomly coloured WIZARD at a random position - color = self._rand_elem(COLOR_NAMES) - self.wizard = Wizard(color, "Gandalf", self) - - # Place it randomly, omitting front of door positions - self.place_obj(self.wizard, - size=(width, height), - reject_fn=lambda _, p: tuple(p) in self.door_front_pos) - - # add guides - GUIDE_NAMES = ["John", "Jack"] - - # Set a randomly coloured TRUE GUIDE at a random position - name = self._rand_elem(GUIDE_NAMES) - color = self._rand_elem(COLOR_NAMES) - self.true_guide = Guide(color, name, self, liar=False) - - # Place it randomly, omitting invalid positions - self.place_obj(self.true_guide, - size=(width, height), - # reject_fn=lambda _, p: tuple(p) in self.door_front_pos) - reject_fn=lambda _, p: tuple(p) in [*self.door_front_pos, tuple(self.wizard.cur_pos)]) - - # Set a randomly coloured FALSE GUIDE at a random position - name = self._rand_elem([n for n in GUIDE_NAMES if n != self.true_guide.name]) - - color = self._rand_elem([c for c in COLOR_NAMES if c != self.true_guide.color]) - - self.false_guide = Guide(color, name, self, liar=True) - - # Place it randomly, omitting invalid positions - self.place_obj(self.false_guide, - size=(width, height), - reject_fn=lambda _, p: tuple(p) in [ - *self.door_front_pos, tuple(self.wizard.cur_pos), tuple(self.true_guide.cur_pos)]) - - assert self.true_guide.name != self.false_guide.name - assert self.true_guide.color != self.false_guide.color - - # Randomize the agent's start position and orientation - self.place_agent(size=(width, height)) - - # Select a random target door - self.doorIdx = self._rand_int(0, len(self.door_pos)) - self.target_pos = self.door_pos[self.doorIdx] - self.target_color = self.door_colors[self.doorIdx] - - # Generate the mission string - self.mission = 'go to the %s door' % self.target_color - - # Dummy beginning string - self.beginning_string = "This is what you hear. 
\n" - self.utterance = self.beginning_string - - # utterance appended at the end of each step - self.utterance_history = "" - - # used for rendering - self.conversation = self.utterance - self.outcome_info = None - - def step(self, action): - p_action = action[0] - utterance_action = action[1:] - - # assert all nan or neither nan - assert len(set(np.isnan(utterance_action))) == 1 - - speak_flag = not all(np.isnan(utterance_action)) - - obs, reward, done, info = super().step(p_action) - - if speak_flag: - utterance = TalkItOutPoliteGrammar.construct_utterance(utterance_action) - if self.hear_yourself: - if self.nameless: - self.utterance += "{} \n".format(utterance) - else: - self.utterance += "YOU: {} \n".format(utterance) - - self.conversation += "YOU: {} \n".format(utterance) - - # check if near wizard - if self.wizard.is_near_agent(): - reply = self.wizard.listen(utterance) - - if reply: - if self.nameless: - self.utterance += "{} \n".format(reply) - else: - self.utterance += "{}: {} \n".format(self.wizard.name, reply) - - self.conversation += "{}: {} \n".format(self.wizard.name, reply) - - if self.true_guide.is_near_agent(): - reply = self.true_guide.listen(utterance) - - if reply: - if self.nameless: - self.utterance += "{} \n".format(reply) - else: - self.utterance += "{}: {} \n".format(self.true_guide.name, reply) - - self.conversation += "{}: {} \n".format(self.true_guide.name, reply) - - if self.false_guide.is_near_agent(): - reply = self.false_guide.listen(utterance) - - if reply: - if self.nameless: - self.utterance += "{} \n".format(reply) - else: - self.utterance += "{}: {} \n".format(self.false_guide.name, reply) - - self.conversation += "{}: {} \n".format(self.false_guide.name, reply) - - if utterance == TalkItOutPoliteGrammar.construct_utterance([1, 0]): - ax, ay = self.agent_pos - tx, ty = self.target_pos - - if (ax == tx and abs(ay - ty) == 1) or (ay == ty and abs(ax - tx) == 1): - reward = self._reward() - - for dx, dy in self.door_pos: - if (ax == dx and abs(ay - dy) == 1) or (ay == dy and abs(ax - dx) == 1): - # agent has chosen some door episode, regardless of if the door is correct the episode is over - done = True - - # Don't let the agent open any of the doors - if p_action == self.actions.toggle: - done = True - - if p_action == self.actions.done: - done = True - - # discount - if self.step_penalty: - reward = reward - 0.01 - - if self.hidden_npc: - # all npc are hidden - assert np.argwhere(obs['image'][:,:,0] == OBJECT_TO_IDX['npc']).size == 0 - assert "{}:".format(self.wizard.name) not in self.utterance - # assert "{}:".format(self.true_guide.name) not in self.utterance - # assert "{}:".format(self.false_guide.name) not in self.utterance - - # fill observation with text - self.append_existing_utterance_to_history() - obs = self.add_utterance_to_observation(obs) - self.reset_utterance() - - if done: - if reward > 0: - self.outcome_info = "SUCCESS: agent got {} reward \n".format(np.round(reward, 1)) - else: - self.outcome_info = "FAILURE: agent got {} reward \n".format(reward) - - return obs, reward, done, info - - def _reward(self): - if self.diminished_reward: - return super()._reward() - else: - return 1.0 - - def render(self, *args, **kwargs): - obs = super().render(*args, **kwargs) - - self.window.clear_text() # erase previous text - - self.window.set_caption(self.conversation, [ - "Gandalf:", - "Jack:", - "John:", - "Where is the exit", - "Open sesame", - ]) - - self.window.ax.set_title("correct door: {}".format(self.true_guide.target_color), loc="left", 
fontsize=10) - if self.outcome_info: - color = None - if "SUCCESS" in self.outcome_info: - color = "lime" - elif "FAILURE" in self.outcome_info: - color = "red" - self.window.add_text(*(0.01, 0.85, self.outcome_info), - **{'fontsize':15, 'color':color, 'weight':"bold"}) - - self.window.show_img(obs) # re-draw image to add changes to window - return obs - - -class TalkItOutPolite8x8Env(TalkItOutPoliteEnv): - def __init__(self, **kwargs): - super().__init__(size=8, max_steps=100, **kwargs) - - -class TalkItOutPolite6x6Env(TalkItOutPoliteEnv): - def __init__(self): - super().__init__(size=6, max_steps=100) - - -class TalkItOutPoliteNameless8x8Env(TalkItOutPoliteEnv): - def __init__(self): - super().__init__(size=8, max_steps=100, nameless=True) - -register( - id='MiniGrid-TalkItOutPolite-5x5-v0', - entry_point='gym_minigrid.envs:TalkItOutPoliteEnv' -) - -register( - id='MiniGrid-TalkItOutPolite-6x6-v0', - entry_point='gym_minigrid.envs:TalkItOutPolite6x6Env' -) - -register( - id='MiniGrid-TalkItOutPolite-8x8-v0', - entry_point='gym_minigrid.envs:TalkItOutPolite8x8Env' -) - -register( - id='MiniGrid-TalkItOutPoliteNameless-8x8-v0', - entry_point='gym_minigrid.envs:TalkItOutPoliteNameless8x8Env' -) \ No newline at end of file diff --git a/spaces/flowers-team/SocialAISchool/utils/tester.py b/spaces/flowers-team/SocialAISchool/utils/tester.py deleted file mode 100644 index a3b74742f32f1b20893747c9f69af20c43b0fba0..0000000000000000000000000000000000000000 --- a/spaces/flowers-team/SocialAISchool/utils/tester.py +++ /dev/null @@ -1,137 +0,0 @@ -import numpy as np -import utils -import os -import pickle -import torch - -class AgentWrap: - """ Handles action selection without gradient updates for proper testing """ - - def __init__(self, acmodel, preprocess_obss, device, num_envs=1, argmax=False): - - self.preprocess_obss = preprocess_obss - self.acmodel = acmodel - - self.device = device - self.argmax = argmax - self.num_envs = num_envs - - if self.acmodel.recurrent: - self.memories = torch.zeros(self.num_envs, self.acmodel.memory_size, device=self.device) - - def get_actions(self, obss): - preprocessed_obss = self.preprocess_obss(obss, device=self.device) - - with torch.no_grad(): - if self.acmodel.recurrent: - dist, _, self.memories = self.acmodel(preprocessed_obss, self.memories) - else: - dist, _ = self.acmodel(preprocessed_obss) - - if isinstance(dist, torch.distributions.Distribution): - if self.argmax: - actions = dist.probs.max(1, keepdim=True)[1] - else: - actions = dist.sample() - else: - if self.argmax: - actions = torch.stack([d.probs.max(1)[1] for d in dist], dim=1) - else: - actions = torch.stack([d.sample() for d in dist], dim=1) - return self.acmodel.construct_final_action(actions.cpu().numpy()) - - def get_action(self, obs): - return self.get_actions([obs])[0] - - def analyze_feedbacks(self, rewards, dones): - if self.acmodel.recurrent: - masks = 1 - torch.tensor(dones, dtype=torch.float, device=self.device).unsqueeze(1) - self.memories *= masks - - def analyze_feedback(self, reward, done): - return self.analyze_feedbacks([reward], [done]) - - -class Tester: - - def __init__(self, env_args, seed, episodes, save_path, acmodel, preprocess_obss, device): - - self.envs = [utils.make_env( - **env_args - ) for _ in range(episodes)] - self.seed = seed - self.episodes = episodes - self.ep_counter = 0 - self.savefile = save_path + "/testing_{}.pkl".format(self.envs[0].spec.id) - print("Testing log: ", self.savefile) - - self.stats_dict = {"test_rewards": [], "test_success_rates": [], 
"test_step_nb": []} - self.agent = AgentWrap(acmodel, preprocess_obss, device) - - def test_agent(self, num_frames): - self.agent.acmodel.eval() - - # set seed - # self.env.seed(self.seed) - # save test time (nb training steps) - self.stats_dict['test_step_nb'].append(num_frames) - - rewards = [] - success_rates = [] - - - # cols = [] - # s = "-".join([e.current_env.marble.color for e in self.envs]) - # print("s:", s) - - for episode in range(self.episodes): - # self.envs[episode].seed(self.seed) - self.envs[episode].seed(episode) - # print("current_seed", np.random.get_state()[1][0]) - - obs = self.envs[episode].reset() - - # cols.append(self.envs[episode].current_env.marble.color) - # cols.append(str(self.envs[episode].current_env.marble.cur_pos)) - - done = False - while not done: - action = self.agent.get_action(obs) - - obs, reward, done, info = self.envs[episode].step(action) - self.agent.analyze_feedback(reward, done) - - if done: - rewards.append(reward) - success_rates.append(info['success']) - break - - # from hashlib import md5 - # hash_string = "-".join(cols).encode() - - # print('hs:', hash_string[:20]) - # print("hash test envs:", md5(hash_string).hexdigest()) - - mean_rewards = np.array(rewards).mean() - mean_success_rates = np.array(success_rates).mean() - - self.stats_dict["test_rewards"].append(mean_rewards) - self.stats_dict["test_success_rates"].append(mean_success_rates) - - self.agent.acmodel.train() - return mean_success_rates, mean_rewards - - def load(self): - if os.path.isfile(self.savefile): - with open(self.savefile, 'rb') as f: - stats_dict_loaded = pickle.load(f) - - for k, v in stats_dict_loaded.items(): - self.stats_dict[k] = v - else: - raise ValueError(f"Save file {self.savefile} doesn't exist.") - - def dump(self): - with open(self.savefile, 'wb') as f: - pickle.dump(self.stats_dict, f) - diff --git a/spaces/gagan3012/summarization/src/models/hf_upload.py b/spaces/gagan3012/summarization/src/models/hf_upload.py deleted file mode 100644 index f8ce0ad05c0df5df6c615bc444fddf2ad2f098f1..0000000000000000000000000000000000000000 --- a/spaces/gagan3012/summarization/src/models/hf_upload.py +++ /dev/null @@ -1,48 +0,0 @@ -import shutil -from getpass import getpass -from os.path import join, dirname -from pathlib import Path -import yaml - -from model import Summarization -from huggingface_hub import HfApi, Repository - - -def upload(model_to_upload, model_name): - hf_username = input("Enter your HuggingFace username:") - hf_token = getpass("Enter your HuggingFace token:") - model_url = HfApi().create_repo(token=hf_token, name=model_name, exist_ok=True) - model_repo = Repository( - "./hf_model", - clone_from=model_url, - use_auth_token=hf_token, - git_email=f"{hf_username}@users.noreply.huggingface.co", - git_user=hf_username, - ) - - del hf_token - try: - readme_txt = open(join(dirname(__file__), "README.md"), encoding="utf8").read() - except Exception: - readme_txt = None - - (Path(model_repo.local_dir) / "README.md").write_text(readme_txt) - model_to_upload.save_model(Path(model_repo.local_dir)) - commit_url = model_repo.push_to_hub() - - print("Check out your model at:") - print(commit_url) - print(f"https://huggingface.co/{hf_username}/{model_name}") - - if Path("./hf_model").exists(): - shutil.rmtree("./hf_model") - - -if __name__ == "__main__": - with open("model_params.yml") as f: - params = yaml.safe_load(f) - - model = Summarization() - model.load_model(model_dir="./models") - - upload(model_to_upload=model, model_name=params["name"]) diff --git 
a/spaces/giswqs/Streamlit/apps/raster.py b/spaces/giswqs/Streamlit/apps/raster.py deleted file mode 100644 index 95ea27e231f6119424be781806551a78239e804a..0000000000000000000000000000000000000000 --- a/spaces/giswqs/Streamlit/apps/raster.py +++ /dev/null @@ -1,77 +0,0 @@ -import os -import leafmap.foliumap as leafmap -import streamlit as st -import palettable - - -@st.cache(allow_output_mutation=True) -def load_cog_list(): - print(os.getcwd()) - in_txt = os.path.join(os.getcwd(), "data/cog_files.txt") - with open(in_txt) as f: - return [line.strip() for line in f.readlines()[1:]] - - -@st.cache(allow_output_mutation=True) -def get_palettes(): - palettes = dir(palettable.matplotlib)[:-16] - return ["matplotlib." + p for p in palettes] - - -def app(): - - st.title("Visualize Raster Datasets") - st.markdown( - """ - An interactive web app for visualizing local raster datasets and Cloud Optimized GeoTIFF ([COG](https://www.cogeo.org)). The app was built using [streamlit](https://streamlit.io), [leafmap](https://leafmap.org), and [localtileserver](https://github.com/banesullivan/localtileserver). - - - """ - ) - - row1_col1, row1_col2 = st.columns([2, 1]) - - with row1_col1: - cog_list = load_cog_list() - cog = st.selectbox("Select a sample Cloud Opitmized GeoTIFF (COG)", cog_list) - - with row1_col2: - empty = st.empty() - - url = empty.text_input( - "Enter a HTTP URL to a Cloud Optimized GeoTIFF (COG)", - cog, - ) - - data = st.file_uploader("Upload a raster dataset", type=["tif", "img"]) - - if data: - url = empty.text_input( - "Enter a URL to a Cloud Optimized GeoTIFF (COG)", - "", - ) - - add_palette = st.checkbox("Add a color palette") - if add_palette: - palette = st.selectbox("Select a color palette", get_palettes()) - else: - palette = None - - submit = st.button("Submit") - - m = leafmap.Map(latlon_control=False) - - if submit: - if data or url: - try: - if data: - file_path = leafmap.save_data(data) - m.add_local_tile(file_path, palette=palette, debug=True) - elif url: - m.add_remote_tile(url, palette=palette, debug=True) - except Exception as e: - with row1_col2: - st.error("Work in progress. Try it again later.") - - with row1_col1: - m.to_streamlit() diff --git a/spaces/giswqs/Streamlit/multiapp.py b/spaces/giswqs/Streamlit/multiapp.py deleted file mode 100644 index 55c1f40dadb3c1c7082efbab873e9f846f2aebe0..0000000000000000000000000000000000000000 --- a/spaces/giswqs/Streamlit/multiapp.py +++ /dev/null @@ -1,81 +0,0 @@ -"""Frameworks for running multiple Streamlit applications as a single app. -""" -import streamlit as st - -# app_state = st.experimental_get_query_params() -# app_state = {k: v[0] if isinstance(v, list) else v for k, v in app_state.items()} # fetch the first item in each query string as we don't have multiple values for each query string key in this example - - -class MultiApp: - """Framework for combining multiple streamlit applications. - Usage: - def foo(): - st.title("Hello Foo") - def bar(): - st.title("Hello Bar") - app = MultiApp() - app.add_app("Foo", foo) - app.add_app("Bar", bar) - app.run() - It is also possible keep each application in a separate file. - import foo - import bar - app = MultiApp() - app.add_app("Foo", foo.app) - app.add_app("Bar", bar.app) - app.run() - """ - - def __init__(self): - self.apps = [] - - def add_app(self, title, func): - """Adds a new application. - Parameters - ---------- - func: - the python function to render this app. - title: - title of the app. Appears in the dropdown in the sidebar. 
- """ - self.apps.append({"title": title, "function": func}) - - def run(self): - app_state = st.experimental_get_query_params() - app_state = { - k: v[0] if isinstance(v, list) else v for k, v in app_state.items() - } # fetch the first item in each query string as we don't have multiple values for each query string key in this example - - # st.write('before', app_state) - - titles = [a["title"] for a in self.apps] - functions = [a["function"] for a in self.apps] - default_radio = titles.index(app_state["page"]) if "page" in app_state else 0 - - st.sidebar.title("Navigation") - - title = st.sidebar.radio("Go To", titles, index=default_radio, key="radio") - - app_state["page"] = st.session_state.radio - # st.write('after', app_state) - - st.experimental_set_query_params(**app_state) - # st.experimental_set_query_params(**st.session_state.to_dict()) - functions[titles.index(title)]() - - st.sidebar.title("Contribute") - st.sidebar.info( - "This is an open source project and you are very welcome to contribute your " - "comments, questions, resources and apps as " - "[issues](https://github.com/giswqs/streamlit-geospatial/issues) or " - "[pull requests](https://github.com/giswqs/streamlit-geospatial/pulls) " - "to the [source code](https://github.com/giswqs/streamlit-geospatial). " - ) - st.sidebar.title("About") - st.sidebar.info( - """ - This web [app](https://share.streamlit.io/giswqs/streamlit-geospatial/app.py) is maintained by [Qiusheng Wu](https://wetlands.io). You can follow me on social media: - [GitHub](https://github.com/giswqs) | [Twitter](https://twitter.com/giswqs) | [YouTube](https://www.youtube.com/c/QiushengWu) | [LinkedIn](https://www.linkedin.com/in/qiushengwu). - This web app URL: - """ - ) diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Cardfive 7.7 HOT Crack Key.rar.md b/spaces/gotiQspiryo/whisper-ui/examples/Cardfive 7.7 HOT Crack Key.rar.md deleted file mode 100644 index 0ce53723d7222d2c3f5cb6f129a4cba91f831d00..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Cardfive 7.7 HOT Crack Key.rar.md +++ /dev/null @@ -1,6 +0,0 @@ -

cardfive 7.7 crack key.rar


DOWNLOAD https://urlgoal.com/2uyLKb




diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Esko Artios CAD V12.0 Torrent.zi - Learn the Features and Benefits of this Powerful Software.md b/spaces/gotiQspiryo/whisper-ui/examples/Esko Artios CAD V12.0 Torrent.zi - Learn the Features and Benefits of this Powerful Software.md deleted file mode 100644 index 0952da1bdeda3da928119a9bf60e63e2254c0941..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Esko Artios CAD V12.0 Torrent.zi - Learn the Features and Benefits of this Powerful Software.md +++ /dev/null @@ -1,6 +0,0 @@ -

Esko Artios CAD V12.0 Torrent.zi


Download 🌟 https://urlgoal.com/2uyLQX



-

diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Homeopathic Medicine List And Uses Pdf Free A Practical Handbook for Self-Care and Wellness.md b/spaces/gotiQspiryo/whisper-ui/examples/Homeopathic Medicine List And Uses Pdf Free A Practical Handbook for Self-Care and Wellness.md deleted file mode 100644 index 7d40f73337543e1aeb8c4aa98404176fe13c2007..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Homeopathic Medicine List And Uses Pdf Free A Practical Handbook for Self-Care and Wellness.md +++ /dev/null @@ -1,10 +0,0 @@ -
-

Increasing numbers of medical colleges have started offering courses in alternative medicine. Accredited Naturopathic colleges and universities are increasing in number and popularity in the USA. They offer the most complete medical training in complementary medicines that is available today [4, 5]. In Britain, no conventional medical schools offer courses that teach the clinical practice of alternative medicine. However, alternative medicine is taught in several unconventional schools as part of their curriculum. Teaching is based mostly on theory and understanding of alternative medicine, with emphasis on being able to communicate with alternative medicine specialists.

-

People should be free to choose whatever method of healthcare they want, but with the stipulation that they must be informed as to the safety and efficacy of whatever method they choose. People who choose alternative medicine may think they are choosing a safe, effective medicine, while they may only be getting quack remedies. Grapefruit seed extract is an example of quackery, as multiple studies demonstrate that its universal antimicrobial effect is due to synthetic antimicrobial contamination [18, 19].

-

Homeopathic Medicine List And Uses Pdf Free


Download Zip » https://urlgoal.com/2uyMu0



-

Arizona, Connecticut, and Nevada are the only states with homeopathic licensing boards for doctors of medicine (holders of M.D. degrees) and doctors of osteopathic medicine (holders of D.O. degrees). In 15 states, a section of the naturopathic medical board examinations is on homeopathy.

-

Another important health coverage scheme is the Central Government Health Scheme, organized and run by the Ministry of Health and Family Welfare for current and retired central government employees and their dependents.11 There are no income or wage requirements to be eligible. Coverage includes health care services for allopathic, homeopathic, and alternative medicine treatments.12 Approximately 3.6 million beneficiaries were registered under this scheme as of late 2019.13 Similar schemes exist for railway and defense employees.

-

Primary care: Under the Health and Wellness Centres program, 150,000 subcenters (the lowest tier of the health system) across the country are being upgraded to provide comprehensive primary health care services, free essential medicines, and free diagnostic services. Nutritional support will also be provided to all beneficiaries with tuberculosis at a rate of INR 500 (USD 7) per month during treatment. Other primary health care providers include primary health centers (PHCs) and community health centers. No patient registration is required.

-

ENBREL is a medicine that affects your immune system. ENBREL can lower the ability of your immune system to fight infections. Serious infections have happened in patients taking ENBREL. These infections include tuberculosis (TB) and infections caused by viruses, fungi, or bacteria that have spread throughout the body. Some patients have died from these infections. Your healthcare provider should test you for TB before you take ENBREL and monitor you closely for TB before, during, and after ENBREL treatment, even if you have tested negative for TB.

\ No newline at end of file diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Mac Os Sierra Could Not Open Package For Expansion A Guide to Troubleshoot This Issue.md b/spaces/gotiQspiryo/whisper-ui/examples/Mac Os Sierra Could Not Open Package For Expansion A Guide to Troubleshoot This Issue.md deleted file mode 100644 index 30a39eef8c9f40096e1a356e3760ad88e568495c..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Mac Os Sierra Could Not Open Package For Expansion A Guide to Troubleshoot This Issue.md +++ /dev/null @@ -1,15 +0,0 @@ - -

If you right-click it and click Show Package Contents, you'll get a few files in a Contents folder. (Note: if you do not see Show Package Contents, you will need to open Terminal.app and run pkgutil --expand mystubbornpackage.pkg path/to/expand.)

-
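If you want to script that expansion step, the same pkgutil call can be driven from Python. This is a minimal sketch, assuming macOS with the bundled pkgutil on the PATH; the package name and output folder are placeholder paths:

```python
import subprocess

# Expand a flat .pkg into a folder so its contents can be inspected.
# "mystubbornpackage.pkg" and "expanded_pkg" are placeholders; pkgutil
# requires that the output folder does not already exist.
subprocess.run(
    ["pkgutil", "--expand", "mystubbornpackage.pkg", "expanded_pkg"],
    check=True,  # raise CalledProcessError if pkgutil reports failure
)
```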

Mac Os Sierra Could Not Open Package For Expansion


DOWNLOAD https://urlgoal.com/2uyMgV



-

Not only will it provide all the information you need in the app, it also installs a Quick Look plug-in, so just selecting the package file and hitting the space bar opens up a window with the most essential information.

-

The new iMac I am using to create a boot installer is running Mojave, so it won't let me open the Sierra install package because, I guess, it thinks I am going to install it on the Mac that has Mojave. How can I get round this, please?

-

Once I verified that I could get the same results using the SecUpd2019-005HighSierra.RecoveryHDUpdate.pkg installer package, I wrote a script (based on the original one I had found) to help automate the process of rebuilding a macOS Recovery volume or partition. For more details, please see below the jump.

-

This guide will help you open the package contents of a PKG file on Windows or Mac, depending on your operating system and file type. If there are errors during extraction, please comment below so we can offer assistance quickly!

-

-

Typinator shows variants of its "T" icon in the menu bar to indicate situations in which expansions do not work as expected. Typinator 5.0 or newer can explain the meaning of this icon: Click the triangle next to the "T" icon to open the menu and select the first command "What does this symbol mean?".

-

Note that the recommendation to use suggested packages conditionally in tests does also apply to packages used to manage test suites: a notorious example was testthat, which in version 1.0.0 contained illegal C++ code and hence could not be installed on standards-compliant platforms.

-

In very special cases packages may create binary files other than the shared objects/DLLs in the src directory. Such files will not be installed in a multi-architecture setting since R CMD INSTALL --libs-only is used to merge multiple sub-architectures and it only copies shared objects/DLLs. If a package wants to install other binaries (for example executable programs), it should provide an R script src/install.libs.R which will be run as part of the installation in the src build directory instead of copying the shared objects/DLLs. The script is run in a separate R environment containing the following variables: R_PACKAGE_NAME (the name of the package), R_PACKAGE_SOURCE (the path to the source directory of the package), R_PACKAGE_DIR (the path of the target installation directory of the package), R_ARCH (the arch-dependent part of the path, often empty), SHLIB_EXT (the extension of shared objects) and WINDOWS (TRUE on Windows, FALSE elsewhere). Something close to the default behavior could be replicated with the following src/install.libs.R file:

-

listing the classes and functions with methods respectively. Suppose we had two small packages A and B, with B using A. Then they could have NAMESPACE files

-

Many of the graphics devices are platform-specific: even X11() (aka x11()), which although emulated on Windows may not be available on a Unix-alike (and is not the preferred screen device on OS X). It is rarely necessary for package code or examples to open a new device, but if essential, use dev.new().

\ No newline at end of file diff --git a/spaces/gradio/HuBERT/docs/conf.py b/spaces/gradio/HuBERT/docs/conf.py deleted file mode 100644 index 440784bfae96c14e9050542b1b1921a75a3b4b27..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/docs/conf.py +++ /dev/null @@ -1,134 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- -# -# fairseq documentation build configuration file, created by -# sphinx-quickstart on Fri Aug 17 21:45:30 2018. -# -# This file is execfile()d with the current directory set to its -# containing dir. -# -# Note that not all possible configuration values are present in this -# autogenerated file. -# -# All configuration values have a default; values that are commented out -# serve to show the default. - -# If extensions (or modules to document with autodoc) are in another directory, -# add these directories to sys.path here. If the directory is relative to the -# documentation root, use os.path.abspath to make it absolute, like shown here. - -import os -import sys -from fairseq import __version__ - - -# source code directory, relative to this file, for sphinx-autobuild -sys.path.insert(0, os.path.abspath("..")) - -source_suffix = [".rst"] - -# -- General configuration ------------------------------------------------ - -# If your documentation needs a minimal Sphinx version, state it here. -# -# needs_sphinx = '1.0' - -# Add any Sphinx extension module names here, as strings. They can be -# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom -# ones. -extensions = [ - "sphinx.ext.autodoc", - "sphinx.ext.intersphinx", - "sphinx.ext.viewcode", - "sphinx.ext.napoleon", - "sphinxarg.ext", -] - -# Add any paths that contain templates here, relative to this directory. -templates_path = ["_templates"] - -# The master toctree document. -master_doc = "index" - -# General information about the project. -project = "fairseq" -copyright = "Facebook AI Research (FAIR)" -author = "Facebook AI Research (FAIR)" - -github_doc_root = "https://github.com/pytorch/fairseq/tree/master/docs/" - -# The version info for the project you're documenting, acts as replacement for -# |version| and |release|, also used in various other places throughout the -# built documents. -# -# The short X.Y version. -version = __version__ -# The full version, including alpha/beta/rc tags. -release = __version__ - -# The language for content autogenerated by Sphinx. Refer to documentation -# for a list of supported languages. -# -# This is also used if you do content translation via gettext catalogs. -# Usually you set "language" from the command line for these cases. -language = None - -# List of patterns, relative to source directory, that match files and -# directories to ignore when looking for source files. -# This patterns also effect to html_static_path and html_extra_path -exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"] - -# The name of the Pygments (syntax highlighting) style to use. -pygments_style = "sphinx" -highlight_language = "python" - -# If true, `todo` and `todoList` produce output, else they produce nothing. -todo_include_todos = False - - -# -- Options for HTML output ---------------------------------------------- - -# The theme to use for HTML and HTML Help pages. See the documentation for -# a list of builtin themes. -# -html_theme = "sphinx_rtd_theme" - -# Theme options are theme-specific and customize the look and feel of a theme -# further. For a list of options available for each theme, see the -# documentation. 
-# -# html_theme_options = {} - -# Add any paths that contain custom static files (such as style sheets) here, -# relative to this directory. They are copied after the builtin static files, -# so a file named "default.css" will overwrite the builtin "default.css". -html_static_path = ["_static"] - -html_context = { - "css_files": [ - "_static/theme_overrides.css", # override wide tables in RTD theme - ], -} - -# Custom sidebar templates, must be a dictionary that maps document names -# to template names. -# -# This is required for the alabaster theme -# refs: http://alabaster.readthedocs.io/en/latest/installation.html#sidebars -# html_sidebars = { -# '**': [ -# 'about.html', -# 'navigation.html', -# 'relations.html', # needs 'show_related': True theme option to display -# 'searchbox.html', -# 'donate.html', -# ] -# } - - -# Example configuration for intersphinx: refer to the Python standard library. -intersphinx_mapping = { - "numpy": ("http://docs.scipy.org/doc/numpy/", None), - "python": ("https://docs.python.org/", None), - "torch": ("https://pytorch.org/docs/master/", None), -} diff --git a/spaces/gulabpatel/Real-ESRGAN/tests/test_model.py b/spaces/gulabpatel/Real-ESRGAN/tests/test_model.py deleted file mode 100644 index c20bb1d56ed20222e929e9c94026f6ea383c6026..0000000000000000000000000000000000000000 --- a/spaces/gulabpatel/Real-ESRGAN/tests/test_model.py +++ /dev/null @@ -1,126 +0,0 @@ -import torch -import yaml -from basicsr.archs.rrdbnet_arch import RRDBNet -from basicsr.data.paired_image_dataset import PairedImageDataset -from basicsr.losses.losses import GANLoss, L1Loss, PerceptualLoss - -from realesrgan.archs.discriminator_arch import UNetDiscriminatorSN -from realesrgan.models.realesrgan_model import RealESRGANModel -from realesrgan.models.realesrnet_model import RealESRNetModel - - -def test_realesrnet_model(): - with open('tests/data/test_realesrnet_model.yml', mode='r') as f: - opt = yaml.load(f, Loader=yaml.FullLoader) - - # build model - model = RealESRNetModel(opt) - # test attributes - assert model.__class__.__name__ == 'RealESRNetModel' - assert isinstance(model.net_g, RRDBNet) - assert isinstance(model.cri_pix, L1Loss) - assert isinstance(model.optimizers[0], torch.optim.Adam) - - # prepare data - gt = torch.rand((1, 3, 32, 32), dtype=torch.float32) - kernel1 = torch.rand((1, 5, 5), dtype=torch.float32) - kernel2 = torch.rand((1, 5, 5), dtype=torch.float32) - sinc_kernel = torch.rand((1, 5, 5), dtype=torch.float32) - data = dict(gt=gt, kernel1=kernel1, kernel2=kernel2, sinc_kernel=sinc_kernel) - model.feed_data(data) - # check dequeue - model.feed_data(data) - # check data shape - assert model.lq.shape == (1, 3, 8, 8) - assert model.gt.shape == (1, 3, 32, 32) - - # change probability to test if-else - model.opt['gaussian_noise_prob'] = 0 - model.opt['gray_noise_prob'] = 0 - model.opt['second_blur_prob'] = 0 - model.opt['gaussian_noise_prob2'] = 0 - model.opt['gray_noise_prob2'] = 0 - model.feed_data(data) - # check data shape - assert model.lq.shape == (1, 3, 8, 8) - assert model.gt.shape == (1, 3, 32, 32) - - # ----------------- test nondist_validation -------------------- # - # construct dataloader - dataset_opt = dict( - name='Demo', - dataroot_gt='tests/data/gt', - dataroot_lq='tests/data/lq', - io_backend=dict(type='disk'), - scale=4, - phase='val') - dataset = PairedImageDataset(dataset_opt) - dataloader = torch.utils.data.DataLoader(dataset=dataset, batch_size=1, shuffle=False, num_workers=0) - assert model.is_train is True - 
model.nondist_validation(dataloader, 1, None, False) - assert model.is_train is True - - -def test_realesrgan_model(): - with open('tests/data/test_realesrgan_model.yml', mode='r') as f: - opt = yaml.load(f, Loader=yaml.FullLoader) - - # build model - model = RealESRGANModel(opt) - # test attributes - assert model.__class__.__name__ == 'RealESRGANModel' - assert isinstance(model.net_g, RRDBNet) # generator - assert isinstance(model.net_d, UNetDiscriminatorSN) # discriminator - assert isinstance(model.cri_pix, L1Loss) - assert isinstance(model.cri_perceptual, PerceptualLoss) - assert isinstance(model.cri_gan, GANLoss) - assert isinstance(model.optimizers[0], torch.optim.Adam) - assert isinstance(model.optimizers[1], torch.optim.Adam) - - # prepare data - gt = torch.rand((1, 3, 32, 32), dtype=torch.float32) - kernel1 = torch.rand((1, 5, 5), dtype=torch.float32) - kernel2 = torch.rand((1, 5, 5), dtype=torch.float32) - sinc_kernel = torch.rand((1, 5, 5), dtype=torch.float32) - data = dict(gt=gt, kernel1=kernel1, kernel2=kernel2, sinc_kernel=sinc_kernel) - model.feed_data(data) - # check dequeue - model.feed_data(data) - # check data shape - assert model.lq.shape == (1, 3, 8, 8) - assert model.gt.shape == (1, 3, 32, 32) - - # change probability to test if-else - model.opt['gaussian_noise_prob'] = 0 - model.opt['gray_noise_prob'] = 0 - model.opt['second_blur_prob'] = 0 - model.opt['gaussian_noise_prob2'] = 0 - model.opt['gray_noise_prob2'] = 0 - model.feed_data(data) - # check data shape - assert model.lq.shape == (1, 3, 8, 8) - assert model.gt.shape == (1, 3, 32, 32) - - # ----------------- test nondist_validation -------------------- # - # construct dataloader - dataset_opt = dict( - name='Demo', - dataroot_gt='tests/data/gt', - dataroot_lq='tests/data/lq', - io_backend=dict(type='disk'), - scale=4, - phase='val') - dataset = PairedImageDataset(dataset_opt) - dataloader = torch.utils.data.DataLoader(dataset=dataset, batch_size=1, shuffle=False, num_workers=0) - assert model.is_train is True - model.nondist_validation(dataloader, 1, None, False) - assert model.is_train is True - - # ----------------- test optimize_parameters -------------------- # - model.feed_data(data) - model.optimize_parameters(1) - assert model.output.shape == (1, 3, 32, 32) - assert isinstance(model.log_dict, dict) - # check returned keys - expected_keys = ['l_g_pix', 'l_g_percep', 'l_g_gan', 'l_d_real', 'out_d_real', 'l_d_fake', 'out_d_fake'] - assert set(expected_keys).issubset(set(model.log_dict.keys())) diff --git a/spaces/gwang-kim/DATID-3D/eg3d/torch_utils/persistence.py b/spaces/gwang-kim/DATID-3D/eg3d/torch_utils/persistence.py deleted file mode 100644 index 1abf9cbf2c92a631ab1ac22fc1b0b382e22a0af0..0000000000000000000000000000000000000000 --- a/spaces/gwang-kim/DATID-3D/eg3d/torch_utils/persistence.py +++ /dev/null @@ -1,253 +0,0 @@ -# SPDX-FileCopyrightText: Copyright (c) 2021-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# SPDX-License-Identifier: LicenseRef-NvidiaProprietary -# -# NVIDIA CORPORATION, its affiliates and licensors retain all intellectual -# property and proprietary rights in and to this material, related -# documentation and any modifications thereto. Any use, reproduction, -# disclosure or distribution of this material and related documentation -# without an express license agreement from NVIDIA CORPORATION or -# its affiliates is strictly prohibited. - -"""Facilities for pickling Python code alongside other data. 
- -The pickled code is automatically imported into a separate Python module -during unpickling. This way, any previously exported pickles will remain -usable even if the original code is no longer available, or if the current -version of the code is not consistent with what was originally pickled.""" - -import sys -import pickle -import io -import inspect -import copy -import uuid -import types -import dnnlib - -#---------------------------------------------------------------------------- - -_version = 6 # internal version number -_decorators = set() # {decorator_class, ...} -_import_hooks = [] # [hook_function, ...] -_module_to_src_dict = dict() # {module: src, ...} -_src_to_module_dict = dict() # {src: module, ...} - -#---------------------------------------------------------------------------- - -def persistent_class(orig_class): - r"""Class decorator that extends a given class to save its source code - when pickled. - - Example: - - from torch_utils import persistence - - @persistence.persistent_class - class MyNetwork(torch.nn.Module): - def __init__(self, num_inputs, num_outputs): - super().__init__() - self.fc = MyLayer(num_inputs, num_outputs) - ... - - @persistence.persistent_class - class MyLayer(torch.nn.Module): - ... - - When pickled, any instance of `MyNetwork` and `MyLayer` will save its - source code alongside other internal state (e.g., parameters, buffers, - and submodules). This way, any previously exported pickle will remain - usable even if the class definitions have been modified or are no - longer available. - - The decorator saves the source code of the entire Python module - containing the decorated class. It does *not* save the source code of - any imported modules. Thus, the imported modules must be available - during unpickling, also including `torch_utils.persistence` itself. - - It is ok to call functions defined in the same module from the - decorated class. However, if the decorated class depends on other - classes defined in the same module, they must be decorated as well. - This is illustrated in the above example in the case of `MyLayer`. - - It is also possible to employ the decorator just-in-time before - calling the constructor. For example: - - cls = MyLayer - if want_to_make_it_persistent: - cls = persistence.persistent_class(cls) - layer = cls(num_inputs, num_outputs) - - As an additional feature, the decorator also keeps track of the - arguments that were used to construct each instance of the decorated - class. The arguments can be queried via `obj.init_args` and - `obj.init_kwargs`, and they are automatically pickled alongside other - object state. 
A typical use case is to first unpickle a previous - instance of a persistent class, and then upgrade it to use the latest - version of the source code: - - with open('old_pickle.pkl', 'rb') as f: - old_net = pickle.load(f) - new_net = MyNetwork(*old_obj.init_args, **old_obj.init_kwargs) - misc.copy_params_and_buffers(old_net, new_net, require_all=True) - """ - assert isinstance(orig_class, type) - if is_persistent(orig_class): - return orig_class - - assert orig_class.__module__ in sys.modules - orig_module = sys.modules[orig_class.__module__] - orig_module_src = _module_to_src(orig_module) - - class Decorator(orig_class): - _orig_module_src = orig_module_src - _orig_class_name = orig_class.__name__ - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - self._init_args = copy.deepcopy(args) - self._init_kwargs = copy.deepcopy(kwargs) - assert orig_class.__name__ in orig_module.__dict__ - _check_pickleable(self.__reduce__()) - - @property - def init_args(self): - return copy.deepcopy(self._init_args) - - @property - def init_kwargs(self): - return dnnlib.EasyDict(copy.deepcopy(self._init_kwargs)) - - def __reduce__(self): - fields = list(super().__reduce__()) - fields += [None] * max(3 - len(fields), 0) - if fields[0] is not _reconstruct_persistent_obj: - meta = dict(type='class', version=_version, module_src=self._orig_module_src, class_name=self._orig_class_name, state=fields[2]) - fields[0] = _reconstruct_persistent_obj # reconstruct func - fields[1] = (meta,) # reconstruct args - fields[2] = None # state dict - return tuple(fields) - - Decorator.__name__ = orig_class.__name__ - _decorators.add(Decorator) - return Decorator - -#---------------------------------------------------------------------------- - -def is_persistent(obj): - r"""Test whether the given object or class is persistent, i.e., - whether it will save its source code when pickled. - """ - try: - if obj in _decorators: - return True - except TypeError: - pass - return type(obj) in _decorators # pylint: disable=unidiomatic-typecheck - -#---------------------------------------------------------------------------- - -def import_hook(hook): - r"""Register an import hook that is called whenever a persistent object - is being unpickled. A typical use case is to patch the pickled source - code to avoid errors and inconsistencies when the API of some imported - module has changed. - - The hook should have the following signature: - - hook(meta) -> modified meta - - `meta` is an instance of `dnnlib.EasyDict` with the following fields: - - type: Type of the persistent object, e.g. `'class'`. - version: Internal version number of `torch_utils.persistence`. - module_src Original source code of the Python module. - class_name: Class name in the original Python module. - state: Internal state of the object. - - Example: - - @persistence.import_hook - def wreck_my_network(meta): - if meta.class_name == 'MyNetwork': - print('MyNetwork is being imported. I will wreck it!') - meta.module_src = meta.module_src.replace("True", "False") - return meta - """ - assert callable(hook) - _import_hooks.append(hook) - -#---------------------------------------------------------------------------- - -def _reconstruct_persistent_obj(meta): - r"""Hook that is called internally by the `pickle` module to unpickle - a persistent object. 
- """ - meta = dnnlib.EasyDict(meta) - meta.state = dnnlib.EasyDict(meta.state) - for hook in _import_hooks: - meta = hook(meta) - assert meta is not None - - assert meta.version == _version - module = _src_to_module(meta.module_src) - - assert meta.type == 'class' - orig_class = module.__dict__[meta.class_name] - decorator_class = persistent_class(orig_class) - obj = decorator_class.__new__(decorator_class) - - setstate = getattr(obj, '__setstate__', None) - if callable(setstate): - setstate(meta.state) # pylint: disable=not-callable - else: - obj.__dict__.update(meta.state) - return obj - -#---------------------------------------------------------------------------- - -def _module_to_src(module): - r"""Query the source code of a given Python module. - """ - src = _module_to_src_dict.get(module, None) - if src is None: - src = inspect.getsource(module) - _module_to_src_dict[module] = src - _src_to_module_dict[src] = module - return src - -def _src_to_module(src): - r"""Get or create a Python module for the given source code. - """ - module = _src_to_module_dict.get(src, None) - if module is None: - module_name = "_imported_module_" + uuid.uuid4().hex - module = types.ModuleType(module_name) - sys.modules[module_name] = module - _module_to_src_dict[module] = src - _src_to_module_dict[src] = module - exec(src, module.__dict__) # pylint: disable=exec-used - return module - -#---------------------------------------------------------------------------- - -def _check_pickleable(obj): - r"""Check that the given object is pickleable, raising an exception if - it is not. This function is expected to be considerably more efficient - than actually pickling the object. - """ - def recurse(obj): - if isinstance(obj, (list, tuple, set)): - return [recurse(x) for x in obj] - if isinstance(obj, dict): - return [[recurse(x), recurse(y)] for x, y in obj.items()] - if isinstance(obj, (str, int, float, bool, bytes, bytearray)): - return None # Python primitive types are pickleable. - if f'{type(obj).__module__}.{type(obj).__name__}' in ['numpy.ndarray', 'torch.Tensor', 'torch.nn.parameter.Parameter']: - return None # NumPy arrays and PyTorch tensors are pickleable. - if is_persistent(obj): - return None # Persistent objects are pickleable, by virtue of the constructor check. - return obj - with io.BytesIO() as f: - pickle.dump(recurse(obj), f) - -#---------------------------------------------------------------------------- diff --git a/spaces/gwang-kim/DATID-3D/eg3d/training/volumetric_rendering/math_utils.py b/spaces/gwang-kim/DATID-3D/eg3d/training/volumetric_rendering/math_utils.py deleted file mode 100644 index 4cf9d2b811e0acbc7923bc9126e010b52cb1a8af..0000000000000000000000000000000000000000 --- a/spaces/gwang-kim/DATID-3D/eg3d/training/volumetric_rendering/math_utils.py +++ /dev/null @@ -1,118 +0,0 @@ -# MIT License - -# Copyright (c) 2022 Petr Kellnhofer - -# Permission is hereby granted, free of charge, to any person obtaining a copy -# of this software and associated documentation files (the "Software"), to deal -# in the Software without restriction, including without limitation the rights -# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -# copies of the Software, and to permit persons to whom the Software is -# furnished to do so, subject to the following conditions: - -# The above copyright notice and this permission notice shall be included in all -# copies or substantial portions of the Software. 
- -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. - -import torch - -def transform_vectors(matrix: torch.Tensor, vectors4: torch.Tensor) -> torch.Tensor: - """ - Left-multiplies MxM @ NxM. Returns NxM. - """ - res = torch.matmul(vectors4, matrix.T) - return res - - -def normalize_vecs(vectors: torch.Tensor) -> torch.Tensor: - """ - Normalize vector lengths. - """ - return vectors / (torch.norm(vectors, dim=-1, keepdim=True)) - -def torch_dot(x: torch.Tensor, y: torch.Tensor): - """ - Dot product of two tensors. - """ - return (x * y).sum(-1) - - -def get_ray_limits_box(rays_o: torch.Tensor, rays_d: torch.Tensor, box_side_length): - """ - Author: Petr Kellnhofer - Intersects rays with the [-1, 1] NDC volume. - Returns min and max distance of entry. - Returns -1 for no intersection. - https://www.scratchapixel.com/lessons/3d-basic-rendering/minimal-ray-tracer-rendering-simple-shapes/ray-box-intersection - """ - o_shape = rays_o.shape - rays_o = rays_o.detach().reshape(-1, 3) - rays_d = rays_d.detach().reshape(-1, 3) - - - bb_min = [-1*(box_side_length/2), -1*(box_side_length/2), -1*(box_side_length/2)] - bb_max = [1*(box_side_length/2), 1*(box_side_length/2), 1*(box_side_length/2)] - bounds = torch.tensor([bb_min, bb_max], dtype=rays_o.dtype, device=rays_o.device) - is_valid = torch.ones(rays_o.shape[:-1], dtype=bool, device=rays_o.device) - - # Precompute inverse for stability. - invdir = 1 / rays_d - sign = (invdir < 0).long() - - # Intersect with YZ plane. - tmin = (bounds.index_select(0, sign[..., 0])[..., 0] - rays_o[..., 0]) * invdir[..., 0] - tmax = (bounds.index_select(0, 1 - sign[..., 0])[..., 0] - rays_o[..., 0]) * invdir[..., 0] - - # Intersect with XZ plane. - tymin = (bounds.index_select(0, sign[..., 1])[..., 1] - rays_o[..., 1]) * invdir[..., 1] - tymax = (bounds.index_select(0, 1 - sign[..., 1])[..., 1] - rays_o[..., 1]) * invdir[..., 1] - - # Resolve parallel rays. - is_valid[torch.logical_or(tmin > tymax, tymin > tmax)] = False - - # Use the shortest intersection. - tmin = torch.max(tmin, tymin) - tmax = torch.min(tmax, tymax) - - # Intersect with XY plane. - tzmin = (bounds.index_select(0, sign[..., 2])[..., 2] - rays_o[..., 2]) * invdir[..., 2] - tzmax = (bounds.index_select(0, 1 - sign[..., 2])[..., 2] - rays_o[..., 2]) * invdir[..., 2] - - # Resolve parallel rays. - is_valid[torch.logical_or(tmin > tzmax, tzmin > tmax)] = False - - # Use the shortest intersection. - tmin = torch.max(tmin, tzmin) - tmax = torch.min(tmax, tzmax) - - # Mark invalid. - tmin[torch.logical_not(is_valid)] = -1 - tmax[torch.logical_not(is_valid)] = -2 - - return tmin.reshape(*o_shape[:-1], 1), tmax.reshape(*o_shape[:-1], 1) - - -def linspace(start: torch.Tensor, stop: torch.Tensor, num: int): - """ - Creates a tensor of shape [num, *start.shape] whose values are evenly spaced from start to end, inclusive. - Replicates but the multi-dimensional bahaviour of numpy.linspace in PyTorch. 
- """ - # create a tensor of 'num' steps from 0 to 1 - steps = torch.arange(num, dtype=torch.float32, device=start.device) / (num - 1) - - # reshape the 'steps' tensor to [-1, *([1]*start.ndim)] to allow for broadcastings - # - using 'steps.reshape([-1, *([1]*start.ndim)])' would be nice here but torchscript - # "cannot statically infer the expected size of a list in this contex", hence the code below - for i in range(start.ndim): - steps = steps.unsqueeze(-1) - - # the output starts at 'start' and increments until 'stop' in each dimension - out = start[None] + steps * (stop - start)[None] - - return out diff --git a/spaces/gwang-kim/DATID-3D/pose_estimation/models/arcface_torch/train.py b/spaces/gwang-kim/DATID-3D/pose_estimation/models/arcface_torch/train.py deleted file mode 100644 index 55eca2d0ad9463415970e09bccab8b722e496704..0000000000000000000000000000000000000000 --- a/spaces/gwang-kim/DATID-3D/pose_estimation/models/arcface_torch/train.py +++ /dev/null @@ -1,141 +0,0 @@ -import argparse -import logging -import os - -import torch -import torch.distributed as dist -import torch.nn.functional as F -import torch.utils.data.distributed -from torch.nn.utils import clip_grad_norm_ - -import losses -from backbones import get_model -from dataset import MXFaceDataset, SyntheticDataset, DataLoaderX -from partial_fc import PartialFC -from utils.utils_amp import MaxClipGradScaler -from utils.utils_callbacks import CallBackVerification, CallBackLogging, CallBackModelCheckpoint -from utils.utils_config import get_config -from utils.utils_logging import AverageMeter, init_logging - - -def main(args): - cfg = get_config(args.config) - try: - world_size = int(os.environ['WORLD_SIZE']) - rank = int(os.environ['RANK']) - dist.init_process_group('nccl') - except KeyError: - world_size = 1 - rank = 0 - dist.init_process_group(backend='nccl', init_method="tcp://127.0.0.1:12584", rank=rank, world_size=world_size) - - local_rank = args.local_rank - torch.cuda.set_device(local_rank) - os.makedirs(cfg.output, exist_ok=True) - init_logging(rank, cfg.output) - - if cfg.rec == "synthetic": - train_set = SyntheticDataset(local_rank=local_rank) - else: - train_set = MXFaceDataset(root_dir=cfg.rec, local_rank=local_rank) - - train_sampler = torch.utils.data.distributed.DistributedSampler(train_set, shuffle=True) - train_loader = DataLoaderX( - local_rank=local_rank, dataset=train_set, batch_size=cfg.batch_size, - sampler=train_sampler, num_workers=2, pin_memory=True, drop_last=True) - backbone = get_model(cfg.network, dropout=0.0, fp16=cfg.fp16, num_features=cfg.embedding_size).to(local_rank) - - if cfg.resume: - try: - backbone_pth = os.path.join(cfg.output, "backbone.pth") - backbone.load_state_dict(torch.load(backbone_pth, map_location=torch.device(local_rank))) - if rank == 0: - logging.info("backbone resume successfully!") - except (FileNotFoundError, KeyError, IndexError, RuntimeError): - if rank == 0: - logging.info("resume fail, backbone init successfully!") - - backbone = torch.nn.parallel.DistributedDataParallel( - module=backbone, broadcast_buffers=False, device_ids=[local_rank]) - backbone.train() - margin_softmax = losses.get_loss(cfg.loss) - module_partial_fc = PartialFC( - rank=rank, local_rank=local_rank, world_size=world_size, resume=cfg.resume, - batch_size=cfg.batch_size, margin_softmax=margin_softmax, num_classes=cfg.num_classes, - sample_rate=cfg.sample_rate, embedding_size=cfg.embedding_size, prefix=cfg.output) - - opt_backbone = torch.optim.SGD( - params=[{'params': 
backbone.parameters()}], - lr=cfg.lr / 512 * cfg.batch_size * world_size, - momentum=0.9, weight_decay=cfg.weight_decay) - opt_pfc = torch.optim.SGD( - params=[{'params': module_partial_fc.parameters()}], - lr=cfg.lr / 512 * cfg.batch_size * world_size, - momentum=0.9, weight_decay=cfg.weight_decay) - - num_image = len(train_set) - total_batch_size = cfg.batch_size * world_size - cfg.warmup_step = num_image // total_batch_size * cfg.warmup_epoch - cfg.total_step = num_image // total_batch_size * cfg.num_epoch - - def lr_step_func(current_step): - cfg.decay_step = [x * num_image // total_batch_size for x in cfg.decay_epoch] - if current_step < cfg.warmup_step: - return current_step / cfg.warmup_step - else: - return 0.1 ** len([m for m in cfg.decay_step if m <= current_step]) - - scheduler_backbone = torch.optim.lr_scheduler.LambdaLR( - optimizer=opt_backbone, lr_lambda=lr_step_func) - scheduler_pfc = torch.optim.lr_scheduler.LambdaLR( - optimizer=opt_pfc, lr_lambda=lr_step_func) - - for key, value in cfg.items(): - num_space = 25 - len(key) - logging.info(": " + key + " " * num_space + str(value)) - - val_target = cfg.val_targets - callback_verification = CallBackVerification(2000, rank, val_target, cfg.rec) - callback_logging = CallBackLogging(50, rank, cfg.total_step, cfg.batch_size, world_size, None) - callback_checkpoint = CallBackModelCheckpoint(rank, cfg.output) - - loss = AverageMeter() - start_epoch = 0 - global_step = 0 - grad_amp = MaxClipGradScaler(cfg.batch_size, 128 * cfg.batch_size, growth_interval=100) if cfg.fp16 else None - for epoch in range(start_epoch, cfg.num_epoch): - train_sampler.set_epoch(epoch) - for step, (img, label) in enumerate(train_loader): - global_step += 1 - features = F.normalize(backbone(img)) - x_grad, loss_v = module_partial_fc.forward_backward(label, features, opt_pfc) - if cfg.fp16: - features.backward(grad_amp.scale(x_grad)) - grad_amp.unscale_(opt_backbone) - clip_grad_norm_(backbone.parameters(), max_norm=5, norm_type=2) - grad_amp.step(opt_backbone) - grad_amp.update() - else: - features.backward(x_grad) - clip_grad_norm_(backbone.parameters(), max_norm=5, norm_type=2) - opt_backbone.step() - - opt_pfc.step() - module_partial_fc.update() - opt_backbone.zero_grad() - opt_pfc.zero_grad() - loss.update(loss_v, 1) - callback_logging(global_step, loss, epoch, cfg.fp16, scheduler_backbone.get_last_lr()[0], grad_amp) - callback_verification(global_step, backbone) - scheduler_backbone.step() - scheduler_pfc.step() - callback_checkpoint(global_step, backbone, module_partial_fc) - dist.destroy_process_group() - - -if __name__ == "__main__": - torch.backends.cudnn.benchmark = True - parser = argparse.ArgumentParser(description='PyTorch ArcFace Training') - parser.add_argument('config', type=str, help='py config file') - parser.add_argument('--local_rank', type=int, default=0, help='local_rank') - main(parser.parse_args()) diff --git a/spaces/h2oai/wave-tour/examples/meta_inline_script_callback.py b/spaces/h2oai/wave-tour/examples/meta_inline_script_callback.py deleted file mode 100644 index 3a32aa284097c27a6c1af61b46f55830f1c3ad31..0000000000000000000000000000000000000000 --- a/spaces/h2oai/wave-tour/examples/meta_inline_script_callback.py +++ /dev/null @@ -1,78 +0,0 @@ -# Meta / Inline Script / Callback -# Handle events from arbitrary Javascript -# --- -from h2o_wave import main, app, Q, ui - -# Define a function that emits an event from Javascript to Python -counter_onclick = ''' -function increment() { - // Emit an event to the app. 
- // All three arguments are arbitrary. - // Here, we use: - // - 'counter' to indicate the source of the event. - // - 'clicked' to indicate the type of event. - // - the third argument can be a string, number, boolean or any complex structure, like { foo: 'bar', qux: 42 } - // In Python, q.events.counter.clicked will be set to True. - wave.emit('counter', 'clicked', true); -} -''' - -# The HTML and CSS to create a custom button. -# Note that we've named the element 'counter', -# and called the increment() Javascript function when clicked. -counter_html = ''' - -

Click Me!

-''' - - -@app('/demo') -async def serve(q: Q): - # Track how many times the button has been clicked. - if q.client.count is None: - q.client.count = 0 - - if not q.client.initialized: - # Add our script to the page. - q.page['meta'] = ui.meta_card( - box='', - script=ui.inline_script( - # The Javascript code for this script. - content=counter_onclick, - # Execute this script only if the 'counter' element is available. - targets=['counter'], - ) - ) - q.page['form'] = ui.form_card( - box='1 1 2 2', - title='Counter', - items=[ - # Display our custom button. - ui.markup(content=counter_html), - ui.text(name='text', content=''), - ], - ) - q.client.initialized = True - else: - # Do we have an event from the counter? - if q.events.counter: - # Is 'clicked' True? - if q.events.counter.clicked: - # Increment the count. - q.client.count += 1 - # Display the latest count. - q.page['form'].text.content = f'You clicked {q.client.count} times.' - - await q.page.save() diff --git a/spaces/hhim8826/vits-ATR/commons.py b/spaces/hhim8826/vits-ATR/commons.py deleted file mode 100644 index 9ad0444b61cbadaa388619986c2889c707d873ce..0000000000000000000000000000000000000000 --- a/spaces/hhim8826/vits-ATR/commons.py +++ /dev/null @@ -1,161 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. 
* logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): 
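- # a bare tensor is wrapped in a list so the filtering below handles both cases uniformly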
- parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/loss_function/nnUNetTrainerV2_Loss_Dice.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/loss_function/nnUNetTrainerV2_Loss_Dice.py deleted file mode 100644 index 683e2813814fce7102427b853da080d517f62258..0000000000000000000000000000000000000000 --- a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/loss_function/nnUNetTrainerV2_Loss_Dice.py +++ /dev/null @@ -1,35 +0,0 @@ -# Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -from nnunet.training.network_training.nnUNetTrainerV2 import nnUNetTrainerV2 -from nnunet.training.loss_functions.dice_loss import SoftDiceLoss -from nnunet.utilities.nd_softmax import softmax_helper - - -class nnUNetTrainerV2_Loss_Dice(nnUNetTrainerV2): - def __init__(self, plans_file, fold, output_folder=None, dataset_directory=None, batch_dice=True, stage=None, - unpack_data=True, deterministic=True, fp16=False): - super().__init__(plans_file, fold, output_folder, dataset_directory, batch_dice, stage, unpack_data, - deterministic, fp16) - self.loss = SoftDiceLoss(**{'apply_nonlin': softmax_helper, 'batch_dice': self.batch_dice, 'smooth': 1e-5, 'do_bg': False}) - - -class nnUNetTrainerV2_Loss_DicewithBG(nnUNetTrainerV2): - def __init__(self, plans_file, fold, output_folder=None, dataset_directory=None, batch_dice=True, stage=None, - unpack_data=True, deterministic=True, fp16=False): - super().__init__(plans_file, fold, output_folder, dataset_directory, batch_dice, stage, unpack_data, - deterministic, fp16) - self.loss = SoftDiceLoss(**{'apply_nonlin': softmax_helper, 'batch_dice': self.batch_dice, 'smooth': 1e-5, 'do_bg': True}) - diff --git a/spaces/huggingchat/chat-ui/src/lib/stores/errors.ts b/spaces/huggingchat/chat-ui/src/lib/stores/errors.ts deleted file mode 100644 index 144b16faba7cb1f74e1b5d8451403ab340ed1638..0000000000000000000000000000000000000000 --- a/spaces/huggingchat/chat-ui/src/lib/stores/errors.ts +++ /dev/null @@ -1,9 +0,0 @@ -import { writable } from "svelte/store"; - -export const ERROR_MESSAGES = { - default: "Oops, something went wrong.", - authOnly: "You have to be logged in.", - rateLimited: "You are sending too many messages. 
Try again later.", -}; - -export const error = writable(null); diff --git a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/wf12m_flip_pfc01_filter04_r50.py b/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/wf12m_flip_pfc01_filter04_r50.py deleted file mode 100644 index 2c1018b7f0d0320678b33b212eed5751badf72ee..0000000000000000000000000000000000000000 --- a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/wf12m_flip_pfc01_filter04_r50.py +++ /dev/null @@ -1,28 +0,0 @@ -from easydict import EasyDict as edict - -# make training faster -# our RAM is 256G -# mount -t tmpfs -o size=140G tmpfs /train_tmp - -config = edict() -config.margin_list = (1.0, 0.0, 0.4) -config.network = "r50" -config.resume = False -config.output = None -config.embedding_size = 512 -config.sample_rate = 0.1 -config.interclass_filtering_threshold = 0.4 -config.fp16 = True -config.weight_decay = 5e-4 -config.batch_size = 128 -config.optimizer = "sgd" -config.lr = 0.1 -config.verbose = 2000 -config.dali = False - -config.rec = "/train_tmp/WebFace12M_FLIP40" -config.num_classes = 617970 -config.num_image = 12720066 -config.num_epoch = 20 -config.warmup_epoch = config.num_epoch // 10 -config.val_targets = [] diff --git a/spaces/iamironman4279/SadTalker/src/face3d/models/arcface_torch/configs/ms1mv3_r18.py b/spaces/iamironman4279/SadTalker/src/face3d/models/arcface_torch/configs/ms1mv3_r18.py deleted file mode 100644 index eb4e0d31f1aedf4590628d394e1606920fefb5c9..0000000000000000000000000000000000000000 --- a/spaces/iamironman4279/SadTalker/src/face3d/models/arcface_torch/configs/ms1mv3_r18.py +++ /dev/null @@ -1,26 +0,0 @@ -from easydict import EasyDict as edict - -# make training faster -# our RAM is 256G -# mount -t tmpfs -o size=140G tmpfs /train_tmp - -config = edict() -config.loss = "arcface" -config.network = "r18" -config.resume = False -config.output = None -config.embedding_size = 512 -config.sample_rate = 1.0 -config.fp16 = True -config.momentum = 0.9 -config.weight_decay = 5e-4 -config.batch_size = 128 -config.lr = 0.1 # batch size is 512 - -config.rec = "/train_tmp/ms1m-retinaface-t1" -config.num_classes = 93431 -config.num_image = 5179510 -config.num_epoch = 25 -config.warmup_epoch = -1 -config.decay_epoch = [10, 16, 22] -config.val_targets = ["lfw", "cfp_fp", "agedb_30"] diff --git a/spaces/inamXcontru/PoeticTTS/2 Yeh Kaisa Khiladi Movie Download Mp4 A Story of Luck and Life.md b/spaces/inamXcontru/PoeticTTS/2 Yeh Kaisa Khiladi Movie Download Mp4 A Story of Luck and Life.md deleted file mode 100644 index 5ca26ad6aa62ac4eb1e761edfae2f10fc238ae27..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/2 Yeh Kaisa Khiladi Movie Download Mp4 A Story of Luck and Life.md +++ /dev/null @@ -1,6 +0,0 @@ -

2 Yeh Kaisa Khiladi Movie Download Mp4


Download →→→ https://gohhs.com/2uz3bj



-
    
-
-
-

diff --git a/spaces/inamXcontru/PoeticTTS/DVDFab 11.0.1.0 !!EXCLUSIVE!! Crack.md b/spaces/inamXcontru/PoeticTTS/DVDFab 11.0.1.0 !!EXCLUSIVE!! Crack.md deleted file mode 100644 index dd0f843c9407f3aa634a6f74b0debbe15e344325..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/DVDFab 11.0.1.0 !!EXCLUSIVE!! Crack.md +++ /dev/null @@ -1,5 +0,0 @@ -
-


    DVDFab Serial Key unlocks feature-rich software for burning and copying CDs and DVDs, which are the hallmarks of this product. You can also edit video and recover lost confidential documents from a DVD.

    

-

DVDFab 11.0.1.0 Crack



    Download Zip https://gohhs.com/2uz2Ou

    



    
-
-
\ No newline at end of file diff --git a/spaces/inflaton/learn-ai/Makefile b/spaces/inflaton/learn-ai/Makefile deleted file mode 100644 index 096f43ea31d44c079aaa732ff71e880c3d5958c8..0000000000000000000000000000000000000000 --- a/spaces/inflaton/learn-ai/Makefile +++ /dev/null @@ -1,63 +0,0 @@ -.PHONY: start -start: - python app.py - -serve: -ifeq ("$(PORT)", "") - JINA_HIDE_SURVEY=1 TRANSFORMERS_OFFLINE=1 python -m lcserve deploy local server -else - JINA_HIDE_SURVEY=1 TRANSFORMERS_OFFLINE=1 python -m lcserve deploy local server --port=${PORT} -endif - -test: - python test.py - -test2: - python server.py - -chat: - python test.py chat - -chat2: - python unit_test.py chat - -unittest: - python unit_test.py $(TEST) - -tele: - python telegram_bot.py - -openllm: -ifeq ("$(PORT)", "") - openllm start llama --model-id meta-llama/Llama-2-7b-chat-hf -else - openllm start llama --model-id meta-llama/Llama-2-7b-chat-hf --port=${PORT} -endif - -openllm-cpu: - CUDA_VISIBLE_DEVICES="" openllm start llama --model-id meta-llama/Llama-2-7b-chat-hf - -ingest: - python ingest.py - -mlock: - @echo 'To set new value for mlock, please run: sudo prlimit --memlock=35413752832:35413752832 --pid $$$$' - prlimit --memlock - -.PHONY: format -format: - isort . - black . - -install: - pip install -U -r requirements.txt - pip show langchain transformers - -install-extra: - CXX=g++-11 CC=gcc-11 pip install -U -r requirements_extra.txt - pip show llama-cpp-python ctransformers - -install-extra-mac: - # brew install llvm libomp - CXX=/usr/local/opt/llvm/bin/clang++ CC=/usr/local/opt/llvm/bin/clang pip install -U -r requirements_extra.txt - pip show llama-cpp-python ctransformers diff --git a/spaces/inflaton/learn-ai/app.py b/spaces/inflaton/learn-ai/app.py deleted file mode 100644 index 667ed1f1891fb10339a27f89d6633c3b70c310e9..0000000000000000000000000000000000000000 --- a/spaces/inflaton/learn-ai/app.py +++ /dev/null @@ -1,213 +0,0 @@ -"""Main entrypoint for the app.""" -import os -import time -from queue import Queue -from timeit import default_timer as timer - -import gradio as gr -from anyio.from_thread import start_blocking_portal - -from app_modules.init import app_init -from app_modules.llm_chat_chain import ChatChain -from app_modules.utils import print_llm_response, remove_extra_spaces - -llm_loader, qa_chain = app_init() - -show_param_settings = os.environ.get("SHOW_PARAM_SETTINGS") == "true" -share_gradio_app = os.environ.get("SHARE_GRADIO_APP") == "true" -using_openai = os.environ.get("LLM_MODEL_TYPE") == "openai" -chat_with_llama_2 = ( - not using_openai and os.environ.get("USE_LLAMA_2_PROMPT_TEMPLATE") == "true" -) -chat_history_enabled = ( - not chat_with_llama_2 and os.environ.get("CHAT_HISTORY_ENABLED") == "true" -) - -model = ( - "OpenAI GPT-3.5" - if using_openai - else os.environ.get("HUGGINGFACE_MODEL_NAME_OR_PATH") -) -href = ( - "https://platform.openai.com/docs/models/gpt-3-5" - if using_openai - else f"https://huggingface.co/{model}" -) - -if chat_with_llama_2: - qa_chain = ChatChain(llm_loader) - name = "Llama-2" -else: - name = "AI Books" - -title = f"""

Chat with {name}

""" - -description_top = f"""\ -
-

Currently Running: {model}

-
-""" - -description = """\ -
-The demo is built on LangChain. -
-""" - -CONCURRENT_COUNT = 1 - - -def qa(chatbot): - user_msg = chatbot[-1][0] - q = Queue() - result = Queue() - job_done = object() - - def task(question, chat_history): - start = timer() - inputs = {"question": question} - if not chat_with_llama_2: - inputs["chat_history"] = chat_history - ret = qa_chain.call_chain(inputs, None, q) - end = timer() - - print(f"Completed in {end - start:.3f}s") - print_llm_response(ret) - - q.put(job_done) - result.put(ret) - - with start_blocking_portal() as portal: - chat_history = [] - if chat_history_enabled: - for i in range(len(chatbot) - 1): - element = chatbot[i] - item = (element[0] or "", element[1] or "") - chat_history.append(item) - - portal.start_task_soon(task, user_msg, chat_history) - - content = "" - count = 2 if len(chat_history) > 0 else 1 - - while count > 0: - while q.empty(): - print("nothing generated yet - retry in 0.5s") - time.sleep(0.5) - - for next_token in llm_loader.streamer: - if next_token is job_done: - break - content += next_token or "" - chatbot[-1][1] = remove_extra_spaces(content) - - if count == 1: - yield chatbot - - count -= 1 - - if not chat_with_llama_2: - chatbot[-1][1] += "\n\nSources:\n" - ret = result.get() - titles = [] - for doc in ret["source_documents"]: - page = doc.metadata["page"] + 1 - url = f"{doc.metadata['url']}#page={page}" - file_name = doc.metadata["source"].split("/")[-1] - title = f"{file_name} Page: {page}" - if title not in titles: - titles.append(title) - chatbot[-1][1] += f"1. [{title}]({url})\n" - - yield chatbot - - -with open("assets/custom.css", "r", encoding="utf-8") as f: - customCSS = f.read() - -with gr.Blocks(css=customCSS) as demo: - user_question = gr.State("") - with gr.Row(): - gr.HTML(title) - gr.Markdown(description_top) - with gr.Row().style(equal_height=True): - with gr.Column(scale=5): - with gr.Row(): - chatbot = gr.Chatbot(elem_id="inflaton_chatbot").style(height="100%") - with gr.Row(): - with gr.Column(scale=2): - user_input = gr.Textbox( - show_label=False, placeholder="Enter your question here" - ).style(container=False) - with gr.Column( - min_width=70, - ): - submitBtn = gr.Button("Send") - with gr.Column( - min_width=70, - ): - clearBtn = gr.Button("Clear") - if show_param_settings: - with gr.Column(): - with gr.Column( - min_width=50, - ): - with gr.Tab(label="Parameter Setting"): - gr.Markdown("# Parameters") - top_p = gr.Slider( - minimum=-0, - maximum=1.0, - value=0.95, - step=0.05, - # interactive=True, - label="Top-p", - ) - temperature = gr.Slider( - minimum=0.1, - maximum=2.0, - value=0, - step=0.1, - # interactive=True, - label="Temperature", - ) - max_new_tokens = gr.Slider( - minimum=0, - maximum=2048, - value=2048, - step=8, - # interactive=True, - label="Max Generation Tokens", - ) - max_context_length_tokens = gr.Slider( - minimum=0, - maximum=4096, - value=4096, - step=128, - # interactive=True, - label="Max Context Tokens", - ) - gr.Markdown(description) - - def chat(user_message, history): - return "", history + [[user_message, None]] - - user_input.submit( - chat, [user_input, chatbot], [user_input, chatbot], queue=True - ).then(qa, chatbot, chatbot) - - submitBtn.click( - chat, [user_input, chatbot], [user_input, chatbot], queue=True, api_name="chat" - ).then(qa, chatbot, chatbot) - - def reset(): - return "", [] - - clearBtn.click( - reset, - outputs=[user_input, chatbot], - show_progress=True, - api_name="reset", - ) - -demo.title = "Chat with AI Books" if chat_with_llama_2 else "Chat with Llama-2" 
-demo.queue(concurrency_count=CONCURRENT_COUNT).launch(share=share_gradio_app) diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Direct X 11 FULL[Offline]AryaN L33T[LittleFairyRG] Serial Key Keygen.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Direct X 11 FULL[Offline]AryaN L33T[LittleFairyRG] Serial Key Keygen.md deleted file mode 100644 index bf6414ff24a1e9cdabb45572ded2f6276b095888..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Direct X 11 FULL[Offline]AryaN L33T[LittleFairyRG] Serial Key Keygen.md +++ /dev/null @@ -1,6 +0,0 @@ -

Direct X 11 FULL~[Offline]~{AryaN L33T}[LittleFairyRG] Serial Key keygen


DOWNLOAD === https://urlin.us/2uEvum



- -Search Files in popular File Hosting Services Just enter a key phrase (e.g. ... INTEL C + + Plus Compiler 11 MAC Linux Windows x86 x64 DeGun ... Adobe Photoshop Lightroom - 5.2 [Intel/Serial] ... Calibre Companion v2.1.1 Full Final By bobiras2009 ... Android - only Paid - 0-day - 09 01 2014 AryaN L33T[LittleFairyRG] 4d29de3e1b
-
-
-

diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Intel Core I3 2310m Ethernet Driver UPDATED Download.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Intel Core I3 2310m Ethernet Driver UPDATED Download.md deleted file mode 100644 index 5889f6615a38b8ce9b8d028e90366a07574da3a7..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Intel Core I3 2310m Ethernet Driver UPDATED Download.md +++ /dev/null @@ -1,106 +0,0 @@ -## intel core i3 2310m ethernet driver download - - - - - - - - - -**Intel Core I3 2310m Ethernet Driver Download === [https://urluso.com/2typXD](https://urluso.com/2typXD)** - - - - - - - - - - - - Here is a possible title and article for your keyword: - -# How to Download and Install Intel Core i3-2310M Ethernet Driver - - - -If you have an Intel Core i3-2310M processor in your laptop or desktop, you may need to download and install the latest Ethernet driver to ensure a stable and fast internet connection. The Ethernet driver is a software component that enables your device to communicate with the network adapter and the router. Without it, you may experience slow or intermittent internet access, or even no connection at all. - - - -In this article, we will show you how to download and install the Intel Core i3-2310M Ethernet driver from the official Intel website. This is the safest and most reliable source of drivers for your processor, as they are tested and verified by Intel. You can also use other third-party websites or software tools to download drivers, but they may not be compatible or up-to-date with your device. - - - -## Steps to Download and Install Intel Core i3-2310M Ethernet Driver - - - -1. Go to the [Intel Core i3-2310M Processor Downloads page](https://www.intel.com/content/www/us/en/products/sku/52220/intel-core-i32310m-processor-3m-cache-2-10-ghz/downloads.html). - -2. Under the "Drivers" section, find the "Ethernet" category and click on "View All". - -3. Select the driver that matches your operating system and click on "Download". - -4. Save the file to a convenient location on your device. - -5. Double-click on the file to launch the installer. - -6. Follow the on-screen instructions to complete the installation. - -7. Restart your device if prompted. - - - -Congratulations! You have successfully downloaded and installed the Intel Core i3-2310M Ethernet driver. You should now be able to enjoy a smooth and fast internet connection on your device. - - - -## Troubleshooting Tips - - - -If you encounter any issues with downloading or installing the driver, here are some tips that may help: - - - -- Make sure you have a stable internet connection before downloading the driver. - -- Check if you have enough disk space on your device to store and install the driver. - -- Run a virus scan on your device to ensure there are no malware or corrupted files that may interfere with the installation. - -- Update your operating system and other drivers to the latest versions. - -- Contact Intel customer support or visit their online forums for further assistance. - - - -Here is a possible continuation of the article: - -## Benefits of Intel Core i3-2310M Processor - - - -The Intel Core i3-2310M processor is a dual-core processor that offers a balance of performance and power efficiency for your laptop or desktop. It has a base frequency of 2.10 GHz and a maximum turbo frequency of 2.90 GHz, which means it can handle multiple tasks and applications smoothly and quickly. 
It also supports Hyper-Threading Technology, which allows each core to run two threads simultaneously, increasing the processing capacity and responsiveness of your device. - - - -The Intel Core i3-2310M processor also features Intel HD Graphics 3000, which delivers stunning visuals and graphics for your games, movies, and photos. It supports DirectX 10.1, OpenGL 3.1, and OpenCL 1.1 standards, as well as Intel Quick Sync Video, which accelerates video encoding and decoding for faster and smoother media playback and editing. - - - -Another benefit of the Intel Core i3-2310M processor is its low power consumption and thermal design power (TDP) of 35 W, which means it can run longer on battery life and generate less heat and noise. It also supports Intel Smart Cache Technology, which allocates the cache memory dynamically between the cores according to the workload, improving the performance and efficiency of your device. - - - -With the Intel Core i3-2310M processor, you can enjoy a fast and reliable internet connection with the Ethernet driver, as well as a powerful and versatile computing experience for your everyday needs. - - dfd1c89656 - - - - - diff --git a/spaces/inreVtussa/clothingai/Examples/Boxedapp Sdk Crack [EXCLUSIVE].md b/spaces/inreVtussa/clothingai/Examples/Boxedapp Sdk Crack [EXCLUSIVE].md deleted file mode 100644 index 188a77aed430281ad15600d193243d98b46271aa..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Boxedapp Sdk Crack [EXCLUSIVE].md +++ /dev/null @@ -1,108 +0,0 @@ -
-

BoxedApp SDK Crack: A Powerful Tool for Creating Portable Applications

- -


    If you are a developer who wants to create applications that run without installation, without dependencies, and without modifying the system registry, you might be interested in BoxedApp SDK Crack. BoxedApp SDK Crack is a developer library that provides a set of functions for emulating a file system and a system registry for an application. Using these functions, you can create virtual files and fake registry entries, keys and values. You can launch processes directly from memory, use ActiveX without registration, and embed runtimes such as .NET, Flash and the VC++ redistributable.

    

- -


    BoxedApp SDK Crack allows you to create portable applications that can run from any device, such as a USB flash drive, a CD-ROM, or a network share. You can also create self-contained applications that run without any external files or components. You can embed all the DLL and content files and all the ActiveX and OCX components that your application uses into a single EXE file. BoxedApp SDK Crack doesn't unpack these files to disk, and it doesn't use temporary files either.

    

-

Boxedapp Sdk Crack


Download File 🆓 https://tiurll.com/2uCkRp



- -

How Does BoxedApp SDK Crack Work?

- -


    BoxedApp SDK Crack works by intercepting the system calls that an application makes to access the file system and the registry. It then redirects these calls to its own virtual file system and virtual registry, which are stored in memory. The application "thinks" that it is working with real files and registry entries, but in fact it is working with virtual ones.

    

- -


    For example, suppose your application uses a Flash ActiveX player to display a Flash movie or video. End users would need the Flash player ActiveX for your application to work properly. Also, keep in mind that the Flash player is not capable of loading files directly from memory. That exposes two major problems: first, you would have to install the Flash player ActiveX, and second, you would have to have the movie in a file.

    

- -


    BoxedApp SDK Crack solves these problems: you simply create a virtual file that contains the Flash movie, another virtual file that contains the Flash player ActiveX DLL, and virtual registry entries that point to that virtual file. That's it. Now the application "thinks" that the Flash player ActiveX is actually installed, so the Flash player works just as if the movie file were actually there.

    
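
    To make this concrete, here is a minimal sketch of what creating such a virtual file could look like from Python via ctypes. It is a sketch under stated assumptions, not a confirmed recipe: the DLL name and the exported function names (BoxedAppSDK_Init, BoxedAppSDK_CreateVirtualFileW) follow the vendor's documented naming but should be verified against the headers that ship with the SDK.

    ```python # Minimal sketch: create an in-memory "virtual" file that only this # process can see. BoxedAppSDK.dll, BoxedAppSDK_Init and # BoxedAppSDK_CreateVirtualFileW are assumed names -- check the SDK headers. import ctypes from ctypes import wintypes GENERIC_WRITE = 0x40000000 CREATE_ALWAYS = 2 sdk = ctypes.WinDLL("BoxedAppSDK.dll") # assumed DLL name sdk.BoxedAppSDK_CreateVirtualFileW.restype = wintypes.HANDLE sdk.BoxedAppSDK_Init() # start the virtualization layer # Mirrors CreateFileW, but nothing is ever written to disk. handle = sdk.BoxedAppSDK_CreateVirtualFileW( ctypes.c_wchar_p(r"C:\virtual\player.swf"), GENERIC_WRITE, 0, None, CREATE_ALWAYS, 0, None) kernel32 = ctypes.WinDLL("kernel32", use_last_error=True) kernel32.WriteFile.argtypes = [wintypes.HANDLE, ctypes.c_void_p, wintypes.DWORD, ctypes.POINTER(wintypes.DWORD), wintypes.LPVOID] kernel32.CloseHandle.argtypes = [wintypes.HANDLE] data = open("player.swf", "rb").read() # hypothetical local movie file written = wintypes.DWORD(0) kernel32.WriteFile(handle, data, len(data), ctypes.byref(written), None) kernel32.CloseHandle(handle) # Code in this process can now open C:\virtual\player.swf as a real file. ```

    The virtual registry side works the same way: entries created through the SDK are visible to the application but never touch the real registry.

    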

- -

What Are the Benefits of BoxedApp SDK Crack?

- -

BoxedApp SDK Crack has many benefits for developers and end users alike. Some of them are:

- -
    -
  • It simplifies the deployment process. You don't have to worry about installing or registering any components or files on the target system. You just need to copy your EXE file and run it.
  • -
  • It increases the compatibility and portability of your applications. You don't have to worry about different versions of Windows, different service packs, different runtimes, or different security settings. Your applications will run on any Windows system from Windows XP to Windows 10.
  • -
  • It protects your applications from reverse engineering and cracking. Since your applications are packed into a single EXE file, it is harder for hackers to analyze or modify them. You can also encrypt or compress your EXE file to make it even more secure.
  • -
  • It improves the performance and stability of your applications. Since your applications don't use any external files or components, they don't depend on their availability or integrity. They also don't create any temporary files or registry entries that could slow down or corrupt the system.
  • -
- -

How to Get BoxedApp SDK Crack?

- -

If you want to get BoxedApp SDK Crack, you can download it from our website. We offer you a full version of BoxedApp SDK Crack with a crack that will activate all its features. You don't need to pay anything or register anywhere. Just download BoxedApp SDK Crack and enjoy creating portable applications with ease.

- -

BoxedApp SDK Crack is a powerful tool for creating portable applications that can run without installation, without dependencies, and without modifying the system registry. It is easy to use and has many benefits for developers and end users alike. Download BoxedApp SDK Crack today and see for yourself what it can do for you.

-

-

What Are the Features of BoxedApp SDK Crack?

- -

BoxedApp SDK Crack has many features that make it a powerful and versatile tool for creating portable applications. Some of them are:

- -
    -
  • Samples for many popular languages: BoxedApp SDK Crack provides 100+ samples for C++, Delphi, C#, VB.Net and VB6. They show how to create a memory-based virtual file, register an ActiveX in the virtual registry, and launch an in-memory process.
  • 
  • -
  • Bindings: BoxedApp SDK Crack is available in several forms: DLL, static library, and .Net assembly. That's why anyone can build a single executable in C++, VB.Net, or C# that uses BoxedApp SDK Crack, so no external DLLs are required.
  • 
  • -
  • Virtual Processes: BoxedApp SDK Crack allows you to start a process based on a virtual executable file. Just create a virtual file, write the content of the exe file, and start it using any function: WinExec, CreateProcess, System.Diagnostics.Process.Start etc.
  • -
  • ActiveX and COM virtualization: BoxedApp SDK Crack enables you to register an ActiveX in the virtual registry and then let the application work as usual: it will "see" the required registry entries. At the same time, the real registry stays unmodified.
  • 
  • -
  • Assets Protection: BoxedApp SDK Crack helps you to protect your DLLs and files from reverse engineering and cracking. You can encrypt or compress your EXE file to make it more secure. You can also hide your files from being detected by antivirus software.
  • 
  • -
- -

How to Use BoxedApp SDK Crack?

- -

Using BoxedApp SDK Crack is easy and straightforward. You just need to follow these steps:

- -
    -
  1. Download BoxedApp SDK Crack from our website. We offer you a full version of BoxedApp SDK Crack with a crack that will activate all its features.
  2. -
  3. Install BoxedApp SDK Crack on your computer. You don't need to register or pay anything.
  4. -
  5. Create your application using your preferred programming language and IDE.
  6. -
  7. Add references to BoxedApp SDK Crack library or assembly in your project.
  8. -
  9. Use BoxedApp SDK Crack functions to create virtual files, registry entries, processes, etc.
  10. -
  11. Build your application into a single EXE file.
  12. -
  13. Run your application on any Windows system without installation or dependencies.
  14. -
- -

BoxedApp SDK Crack is a powerful tool for creating portable applications that can run without installation, without dependencies, and without modifying the system registry. It is easy to use and has many features for developers and end users alike. Download BoxedApp SDK Crack today and see for yourself what it can do for you.
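
    As a companion to the virtual-file sketch above, this is roughly what the "Virtual Processes" feature from the feature list (and the "use the SDK functions" step in the list above) might look like in practice. Again a hedged sketch: the SDK entry points are assumed names carried over from the previous example, and subprocess is used only because it calls CreateProcess under the hood.

    ```python # Sketch of launching a process from a virtual executable: write an EXE # into a virtual file, then start it like any ordinary program. # SDK names are assumptions, as in the previous sketch. import subprocess import ctypes from ctypes import wintypes GENERIC_WRITE = 0x40000000 CREATE_ALWAYS = 2 sdk = ctypes.WinDLL("BoxedAppSDK.dll") sdk.BoxedAppSDK_CreateVirtualFileW.restype = wintypes.HANDLE sdk.BoxedAppSDK_Init() handle = sdk.BoxedAppSDK_CreateVirtualFileW( ctypes.c_wchar_p(r"C:\virtual\tool.exe"), GENERIC_WRITE, 0, None, CREATE_ALWAYS, 0, None) kernel32 = ctypes.WinDLL("kernel32", use_last_error=True) kernel32.WriteFile.argtypes = [wintypes.HANDLE, ctypes.c_void_p, wintypes.DWORD, ctypes.POINTER(wintypes.DWORD), wintypes.LPVOID] kernel32.CloseHandle.argtypes = [wintypes.HANDLE] payload = open("tool.exe", "rb").read() # hypothetical embedded EXE written = wintypes.DWORD(0) kernel32.WriteFile(handle, payload, len(payload), ctypes.byref(written), None) kernel32.CloseHandle(handle) # WinExec or CreateProcess would work equally well on the virtual path. subprocess.Popen(r"C:\virtual\tool.exe") ```
    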

-

What Are the Reviews of BoxedApp SDK Crack?

- -

BoxedApp SDK Crack has received positive reviews from many users and developers who have used it to create portable applications. Here are some of the testimonials from satisfied customers:

- -
-

"BoxedApp SDK Crack is a great tool for creating portable applications that can run on any Windows system without installation or dependencies. It is easy to use and has many features that make it a powerful and versatile tool. I highly recommend it to anyone who wants to create portable applications with ease."

-- John Smith, Software Developer -
- -
-

"I have used BoxedApp SDK Crack to create a portable application that uses a Flash ActiveX player to display a Flash movie. It was very simple and fast to create a virtual file that contains the flash movie, another virtual file that contains the Flash player ActiveX DLL, and virtual registry entries that point to that virtual file. The application works perfectly on any Windows system without installing or registering anything. BoxedApp SDK Crack is amazing!"

-- Jane Doe, Flash Designer -
- -
-

"BoxedApp SDK Crack is a lifesaver for me. I have created a portable application that uses several DLL and files that I want to protect from reverse engineering and cracking. BoxedApp SDK Crack allows me to encrypt and compress my EXE file to make it more secure. It also hides my files from being detected by antivirus software. BoxedApp SDK Crack is the best tool for creating portable applications that are secure and reliable."

-- Bob Lee, Security Expert -
- -

 

What Are the Alternatives to BoxedApp SDK Crack?

- -


    BoxedApp SDK Crack is not the only tool that can create portable applications that run without installation or dependencies. Several other tools offer overlapping features and benefits. Here are some of them:

    

- -
    -
  • Enigma Virtual Box: Enigma Virtual Box is a freemium tool that provides a set of functions for emulating a file system and a system registry for an application. It can create virtual files, registry entries, processes, etc. It can also encrypt and compress the EXE file to make it more secure. However, it does not support ActiveX and COM virtualization, and it does not have samples for many popular languages.
  • -
  • VMware ThinApp: VMware ThinApp is a paid tool that provides a set of functions for creating portable applications that can run on any Windows system without installation or dependencies. It can create virtual files, registry entries, processes, etc. It can also isolate applications from each other and from the host system to prevent conflicts and improve security. However, it does not support ActiveX and COM virtualization, and it does not have samples for many popular languages.
  • -
  • JauntePE: JauntePE is a free and open source tool that provides a set of functions for creating portable applications that can run on any Windows system without installation or dependencies. It can create virtual files, registry entries, processes, etc. It can also hide files from being detected by antivirus software. However, it does not support ActiveX and COM virtualization, and it does not have samples for many popular languages.
  • -
- -


    These are some of the alternatives to BoxedApp SDK Crack that you can try if you want to create portable applications that run without installation or dependencies. However, none of them matches the full feature set of BoxedApp SDK Crack.

    

-

Conclusion

- -

BoxedApp SDK Crack is a developer library that provides a set of functions for emulating a file system and a system registry for an application. Using these functions, you can create virtual files, fake registry entries, keys and values. You can launch processes from the memory directly, use ActiveX without registration, embed runtimes like .Net, Flash and VC++ redistributable.

- -

BoxedApp SDK Crack allows you to create portable applications that can run without installation, without dependencies, and without modifying the system registry. It is easy to use and has many features and benefits for developers and end users alike. It also protects your applications from reverse engineering and cracking.

- -

If you want to get BoxedApp SDK Crack, you can download it from our website. We offer you a full version of BoxedApp SDK Crack with a crack that will activate all its features. You don't need to pay anything or register anywhere. Just download BoxedApp SDK Crack and enjoy creating portable applications with ease.

- -

BoxedApp SDK Crack is the best tool for creating portable applications that can run without installation or dependencies. It is the ultimate solution for creating portable applications that are secure, reliable, and compatible. Download BoxedApp SDK Crack today and see for yourself what it can do for you.

    
-
-
\ No newline at end of file diff --git a/spaces/inreVtussa/clothingai/Examples/Busy Accounting Software 3.5 Free Download.md b/spaces/inreVtussa/clothingai/Examples/Busy Accounting Software 3.5 Free Download.md deleted file mode 100644 index 9980af8fad58e55c9ba67647ca5d33b19fd3fb9b..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Busy Accounting Software 3.5 Free Download.md +++ /dev/null @@ -1,34 +0,0 @@ -

Busy Accounting Software 3.5 Free Download


DOWNLOAD >>> https://tiurll.com/2uCjly



- -After this upgrade, the capability to convert the existing database into a new format was enhanced. - -Journal-Utils News - -14. The software also provides a function called PDFTool to convert PDF files to XPS. As an ongoing service, A Xpress post is delivered at the end of each week with the latest accounting statements for your Xpress Quickbooks. - -Software download blog - -If the feature is missing, contact the company for support. The software enables your users to submit tax returns online, as well as generate W2s, 1099s, and POs. Other types of balance sheet accounts can be easily created. Unfortunately, I see several times on websites or forum posts that the company no longer supports this product. - -Visit the Xpress website to contact the company and learn more about this tool. The monthly service plan is the most cost effective way to manage your Xpress Quickbooks. Publisher Support We provide phone and email support to Xpress users. The software supports multiple company types, such as sole proprietorships, corporations, partnerships, and LLCs. - -Is the product still being supported? The software supports multiple company types, such as sole proprietorships, corporations, partnerships, and LLCs. Since the company doesn't offer any other supported software for Xpress, we don't recommend it. As a Certified QuickBooks ProAdvisor, I receive training, support, and a range of productivity and business management tools. - -Related Software - -Create and save your users' customized tax rates. Can the tool recover files without a backup? If your last backup was more than one month old, the tool won't be able to recover your files. - -If so, you can contact the company and see if the product is still being supported. - -By continuing to use this site, you agree to the use of cookies. - -Our user interface and search features are powered by Google. Support, price, download, and install help. View product documentation and search for customer reviews. - -What is the difference between the annual and the monthly service plans? Your account was successfully upgraded. - -The tool features a clean, user-friendly interface. In this report, I am going to talk about what you will find when you look at the tool's features. - -When the tool is installed on a computer, it requires access to the QuickBooks file and you will need a network administrator to set that up for you. Login or create an account to 4fefd39f24
-
-
-

diff --git a/spaces/ismot/8testi1/models/__init__.py b/spaces/ismot/8testi1/models/__init__.py deleted file mode 100644 index 84952a8167bc2975913a6def6b4f027d566552a9..0000000000000000000000000000000000000000 --- a/spaces/ismot/8testi1/models/__init__.py +++ /dev/null @@ -1 +0,0 @@ -# init \ No newline at end of file diff --git a/spaces/ivntl/MMS/vits/text/cleaners.py b/spaces/ivntl/MMS/vits/text/cleaners.py deleted file mode 100644 index 2658f667a7d59ca99a3e16ba0c157d2ab5d795eb..0000000000000000000000000000000000000000 --- a/spaces/ivntl/MMS/vits/text/cleaners.py +++ /dev/null @@ -1,100 +0,0 @@ -""" from https://github.com/keithito/tacotron """ - -''' -Cleaners are transformations that run over the input text at both training and eval time. - -Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners" -hyperparameter. Some cleaners are English-specific. You'll typically want to use: - 1. "english_cleaners" for English text - 2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using - the Unidecode library (https://pypi.python.org/pypi/Unidecode) - 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update - the symbols in symbols.py to match your data). -''' - -import re -from unidecode import unidecode -from phonemizer import phonemize - - -# Regular expression matching whitespace: -_whitespace_re = re.compile(r'\s+') - -# List of (regular expression, replacement) pairs for abbreviations: -_abbreviations = [(re.compile('\\b%s\\.' % x[0], re.IGNORECASE), x[1]) for x in [ - ('mrs', 'misess'), - ('mr', 'mister'), - ('dr', 'doctor'), - ('st', 'saint'), - ('co', 'company'), - ('jr', 'junior'), - ('maj', 'major'), - ('gen', 'general'), - ('drs', 'doctors'), - ('rev', 'reverend'), - ('lt', 'lieutenant'), - ('hon', 'honorable'), - ('sgt', 'sergeant'), - ('capt', 'captain'), - ('esq', 'esquire'), - ('ltd', 'limited'), - ('col', 'colonel'), - ('ft', 'fort'), -]] - - -def expand_abbreviations(text): - for regex, replacement in _abbreviations: - text = re.sub(regex, replacement, text) - return text - - -def expand_numbers(text): - return normalize_numbers(text) - - -def lowercase(text): - return text.lower() - - -def collapse_whitespace(text): - return re.sub(_whitespace_re, ' ', text) - - -def convert_to_ascii(text): - return unidecode(text) - - -def basic_cleaners(text): - '''Basic pipeline that lowercases and collapses whitespace without transliteration.''' - text = lowercase(text) - text = collapse_whitespace(text) - return text - - -def transliteration_cleaners(text): - '''Pipeline for non-English text that transliterates to ASCII.''' - text = convert_to_ascii(text) - text = lowercase(text) - text = collapse_whitespace(text) - return text - - -def english_cleaners(text): - '''Pipeline for English text, including abbreviation expansion.''' - text = convert_to_ascii(text) - text = lowercase(text) - text = expand_abbreviations(text) - phonemes = phonemize(text, language='en-us', backend='espeak', strip=True) - phonemes = collapse_whitespace(phonemes) - return phonemes - - -def english_cleaners2(text): - '''Pipeline for English text, including abbreviation expansion. 
+ punctuation + stress''' - text = convert_to_ascii(text) - text = lowercase(text) - text = expand_abbreviations(text) - phonemes = phonemize(text, language='en-us', backend='espeak', strip=True, preserve_punctuation=True, with_stress=True) - phonemes = collapse_whitespace(phonemes) - return phonemes diff --git a/spaces/jbilcke-hf/VideoQuest/src/app/games/flamenco.ts b/spaces/jbilcke-hf/VideoQuest/src/app/games/flamenco.ts deleted file mode 100644 index c58db6d691d3810714cf8b34a7288a0759391b35..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/VideoQuest/src/app/games/flamenco.ts +++ /dev/null @@ -1,80 +0,0 @@ -import { edu } from "@/lib/fonts" -import { Game } from "./types" -import { InventoryItem } from "../../types" - -const initialSituation = [ - `beautiful view of an art deco building in new york`, - `looking up`, - `entrance desk`, - `pigeon character`, - `day of the dead makeup`, - `artdeco bridge`, -].join(", ") - -const initialActionnables = [ - "sun", - "face", - "person", - "building", - "light", - "decoration", - "box", - "desk", - "gate", - "door" -] - -const inventory: InventoryItem[] = [ - { - name: "burger", - title: "Burger", - caption: "", - description: "I forgot to eat it." - }, - { - name: "chicken", - title: "Chicken", - caption: "", - description: "Well it does eggs, so yes it is useful!" - }, - { - name: "fishbone", - title: "Fishbone", - caption: "", - description: "Maybe I could pick some locks with it?" - }, - { - name: "tentacle", - title: "Tentacle", - caption: "", - description: "I found this strange tentacle.. this is evidence!" - }, -] - -export const game: Game = { - title: "Sad Flamenco", - type: "flamenco", - description: [ - "The game is a role playing adventure set in 1920 mexico, inspired by the Grim Fandango game, with mexican, art deco and aztec influences.", - "The player is Lenny, a travel agent from the world of the dead, who try to find customers to escort safely to heaven.", - "The player can click around to move to new scenes, find or activate artifacts.", - "They can also use objects from their inventory.", - ], - engines: [ - "cartesian_image", - "cartesian_video", - "spherical_image", - ], - className: edu.className, - initialSituation, - initialActionnables, - inventory, - getScenePrompt: (situation?: string) => [ - `photo of an artdeco scene`, - `grimfandango screenshot`, - `unreal engine`, - `1920 mexico`, - situation || initialSituation, - ] -} - diff --git a/spaces/jbilcke-hf/ai-clip-factory/src/components/ui/checkbox.tsx b/spaces/jbilcke-hf/ai-clip-factory/src/components/ui/checkbox.tsx deleted file mode 100644 index 5850485b9fecba303bdba1849e5a7b6329300af4..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/ai-clip-factory/src/components/ui/checkbox.tsx +++ /dev/null @@ -1,30 +0,0 @@ -"use client" - -import * as React from "react" -import * as CheckboxPrimitive from "@radix-ui/react-checkbox" -import { Check } from "lucide-react" - -import { cn } from "@/lib/utils" - -const Checkbox = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - - - - - -)) -Checkbox.displayName = CheckboxPrimitive.Root.displayName - -export { Checkbox } diff --git a/spaces/jlmarrugom/voice_fixer_app/voicefixer/tools/modules/pqmf.py b/spaces/jlmarrugom/voice_fixer_app/voicefixer/tools/modules/pqmf.py deleted file mode 100644 index 085eeddd7e9cea1e019e44dfcb8f2420f88c03be..0000000000000000000000000000000000000000 --- 
a/spaces/jlmarrugom/voice_fixer_app/voicefixer/tools/modules/pqmf.py +++ /dev/null @@ -1,116 +0,0 @@ -""" -@File : subband_util.py -@Contact : liu.8948@buckeyemail.osu.edu -@License : (C)Copyright 2020-2021 -@Modify Time @Author @Version @Desciption ------------- ------- -------- ----------- -2020/4/3 4:54 PM Haohe Liu 1.0 None -""" - -import torch -import torch.nn.functional as F -import torch.nn as nn -import numpy as np -import os.path as op -from scipy.io import loadmat - - -def load_mat2numpy(fname=""): - if len(fname) == 0: - return None - else: - return loadmat(fname) - - -class PQMF(nn.Module): - def __init__(self, N, M, project_root): - super().__init__() - self.N = N # nsubband - self.M = M # nfilter - try: - assert (N, M) in [(8, 64), (4, 64), (2, 64)] - except: - print("Warning:", N, "subbandand ", M, " filter is not supported") - self.pad_samples = 64 - self.name = str(N) + "_" + str(M) + ".mat" - self.ana_conv_filter = nn.Conv1d( - 1, out_channels=N, kernel_size=M, stride=N, bias=False - ) - data = load_mat2numpy( - op.join( - project_root, - "arnold_workspace/restorer/tools/pytorch/modules/filters/f_" - + self.name, - ) - ) - data = data["f"].astype(np.float32) / N - data = np.flipud(data.T).T - data = np.reshape(data, (N, 1, M)).copy() - dict_new = self.ana_conv_filter.state_dict().copy() - dict_new["weight"] = torch.from_numpy(data) - self.ana_pad = nn.ConstantPad1d((M - N, 0), 0) - self.ana_conv_filter.load_state_dict(dict_new) - - self.syn_pad = nn.ConstantPad1d((0, M // N - 1), 0) - self.syn_conv_filter = nn.Conv1d( - N, out_channels=N, kernel_size=M // N, stride=1, bias=False - ) - gk = load_mat2numpy( - op.join( - project_root, - "arnold_workspace/restorer/tools/pytorch/modules/filters/h_" - + self.name, - ) - ) - gk = gk["h"].astype(np.float32) - gk = np.transpose(np.reshape(gk, (N, M // N, N)), (1, 0, 2)) * N - gk = np.transpose(gk[::-1, :, :], (2, 1, 0)).copy() - dict_new = self.syn_conv_filter.state_dict().copy() - dict_new["weight"] = torch.from_numpy(gk) - self.syn_conv_filter.load_state_dict(dict_new) - - for param in self.parameters(): - param.requires_grad = False - - def __analysis_channel(self, inputs): - return self.ana_conv_filter(self.ana_pad(inputs)) - - def __systhesis_channel(self, inputs): - ret = self.syn_conv_filter(self.syn_pad(inputs)).permute(0, 2, 1) - return torch.reshape(ret, (ret.shape[0], 1, -1)) - - def analysis(self, inputs): - """ - :param inputs: [batchsize,channel,raw_wav],value:[0,1] - :return: - """ - inputs = F.pad(inputs, ((0, self.pad_samples))) - ret = None - for i in range(inputs.size()[1]): # channels - if ret is None: - ret = self.__analysis_channel(inputs[:, i : i + 1, :]) - else: - ret = torch.cat( - (ret, self.__analysis_channel(inputs[:, i : i + 1, :])), dim=1 - ) - return ret - - def synthesis(self, data): - """ - :param data: [batchsize,self.N*K,raw_wav_sub],value:[0,1] - :return: - """ - ret = None - # data = F.pad(data,((0,self.pad_samples//self.N))) - for i in range(data.size()[1]): # channels - if i % self.N == 0: - if ret is None: - ret = self.__systhesis_channel(data[:, i : i + self.N, :]) - else: - new = self.__systhesis_channel(data[:, i : i + self.N, :]) - ret = torch.cat((ret, new), dim=1) - ret = ret[..., : -self.pad_samples] - return ret - - def forward(self, inputs): - return self.ana_conv_filter(self.ana_pad(inputs)) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/ImageStat.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/ImageStat.py deleted 
file mode 100644 index b7ebddf066ab6eb115a79d6bc34e31ab0c1569bd..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/ImageStat.py +++ /dev/null @@ -1,148 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# global image statistics -# -# History: -# 1996-04-05 fl Created -# 1997-05-21 fl Added mask; added rms, var, stddev attributes -# 1997-08-05 fl Added median -# 1998-07-05 hk Fixed integer overflow error -# -# Notes: -# This class shows how to implement delayed evaluation of attributes. -# To get a certain value, simply access the corresponding attribute. -# The __getattr__ dispatcher takes care of the rest. -# -# Copyright (c) Secret Labs AB 1997. -# Copyright (c) Fredrik Lundh 1996-97. -# -# See the README file for information on usage and redistribution. -# - -import functools -import math -import operator - - -class Stat: - def __init__(self, image_or_list, mask=None): - try: - if mask: - self.h = image_or_list.histogram(mask) - else: - self.h = image_or_list.histogram() - except AttributeError: - self.h = image_or_list # assume it to be a histogram list - if not isinstance(self.h, list): - msg = "first argument must be image or list" - raise TypeError(msg) - self.bands = list(range(len(self.h) // 256)) - - def __getattr__(self, id): - """Calculate missing attribute""" - if id[:4] == "_get": - raise AttributeError(id) - # calculate missing attribute - v = getattr(self, "_get" + id)() - setattr(self, id, v) - return v - - def _getextrema(self): - """Get min/max values for each band in the image""" - - def minmax(histogram): - n = 255 - x = 0 - for i in range(256): - if histogram[i]: - n = min(n, i) - x = max(x, i) - return n, x # returns (255, 0) if there's no data in the histogram - - v = [] - for i in range(0, len(self.h), 256): - v.append(minmax(self.h[i:])) - return v - - def _getcount(self): - """Get total number of pixels in each layer""" - - v = [] - for i in range(0, len(self.h), 256): - v.append(functools.reduce(operator.add, self.h[i : i + 256])) - return v - - def _getsum(self): - """Get sum of all pixels in each layer""" - - v = [] - for i in range(0, len(self.h), 256): - layer_sum = 0.0 - for j in range(256): - layer_sum += j * self.h[i + j] - v.append(layer_sum) - return v - - def _getsum2(self): - """Get squared sum of all pixels in each layer""" - - v = [] - for i in range(0, len(self.h), 256): - sum2 = 0.0 - for j in range(256): - sum2 += (j**2) * float(self.h[i + j]) - v.append(sum2) - return v - - def _getmean(self): - """Get average pixel level for each layer""" - - v = [] - for i in self.bands: - v.append(self.sum[i] / self.count[i]) - return v - - def _getmedian(self): - """Get median pixel level for each layer""" - - v = [] - for i in self.bands: - s = 0 - half = self.count[i] // 2 - b = i * 256 - for j in range(256): - s = s + self.h[b + j] - if s > half: - break - v.append(j) - return v - - def _getrms(self): - """Get RMS for each layer""" - - v = [] - for i in self.bands: - v.append(math.sqrt(self.sum2[i] / self.count[i])) - return v - - def _getvar(self): - """Get variance for each layer""" - - v = [] - for i in self.bands: - n = self.count[i] - v.append((self.sum2[i] - (self.sum[i] ** 2.0) / n) / n) - return v - - def _getstddev(self): - """Get standard deviation for each layer""" - - v = [] - for i in self.bands: - v.append(math.sqrt(self.var[i])) - return v - - -Global = Stat # compatibility diff --git 
a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dateutil/parser/__init__.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dateutil/parser/__init__.py deleted file mode 100644 index d174b0e4dcc472999b75e55ebb88af320ae38081..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dateutil/parser/__init__.py +++ /dev/null @@ -1,61 +0,0 @@ -# -*- coding: utf-8 -*- -from ._parser import parse, parser, parserinfo, ParserError -from ._parser import DEFAULTPARSER, DEFAULTTZPARSER -from ._parser import UnknownTimezoneWarning - -from ._parser import __doc__ - -from .isoparser import isoparser, isoparse - -__all__ = ['parse', 'parser', 'parserinfo', - 'isoparse', 'isoparser', - 'ParserError', - 'UnknownTimezoneWarning'] - - -### -# Deprecate portions of the private interface so that downstream code that -# is improperly relying on it is given *some* notice. - - -def __deprecated_private_func(f): - from functools import wraps - import warnings - - msg = ('{name} is a private function and may break without warning, ' - 'it will be moved and or renamed in future versions.') - msg = msg.format(name=f.__name__) - - @wraps(f) - def deprecated_func(*args, **kwargs): - warnings.warn(msg, DeprecationWarning) - return f(*args, **kwargs) - - return deprecated_func - -def __deprecate_private_class(c): - import warnings - - msg = ('{name} is a private class and may break without warning, ' - 'it will be moved and or renamed in future versions.') - msg = msg.format(name=c.__name__) - - class private_class(c): - __doc__ = c.__doc__ - - def __init__(self, *args, **kwargs): - warnings.warn(msg, DeprecationWarning) - super(private_class, self).__init__(*args, **kwargs) - - private_class.__name__ = c.__name__ - - return private_class - - -from ._parser import _timelex, _resultbase -from ._parser import _tzparser, _parsetz - -_timelex = __deprecate_private_class(_timelex) -_tzparser = __deprecate_private_class(_tzparser) -_resultbase = __deprecate_private_class(_resultbase) -_parsetz = __deprecated_private_func(_parsetz) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dateutil/tz/win.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dateutil/tz/win.py deleted file mode 100644 index cde07ba792c40903f0c334839140173b39fd8124..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dateutil/tz/win.py +++ /dev/null @@ -1,370 +0,0 @@ -# -*- coding: utf-8 -*- -""" -This module provides an interface to the native time zone data on Windows, -including :py:class:`datetime.tzinfo` implementations. - -Attempting to import this module on a non-Windows platform will raise an -:py:obj:`ImportError`. -""" -# This code was originally contributed by Jeffrey Harris. -import datetime -import struct - -from six.moves import winreg -from six import text_type - -try: - import ctypes - from ctypes import wintypes -except ValueError: - # ValueError is raised on non-Windows systems for some horrible reason. 
- raise ImportError("Running tzwin on non-Windows system") - -from ._common import tzrangebase - -__all__ = ["tzwin", "tzwinlocal", "tzres"] - -ONEWEEK = datetime.timedelta(7) - -TZKEYNAMENT = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Time Zones" -TZKEYNAME9X = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Time Zones" -TZLOCALKEYNAME = r"SYSTEM\CurrentControlSet\Control\TimeZoneInformation" - - -def _settzkeyname(): - handle = winreg.ConnectRegistry(None, winreg.HKEY_LOCAL_MACHINE) - try: - winreg.OpenKey(handle, TZKEYNAMENT).Close() - TZKEYNAME = TZKEYNAMENT - except WindowsError: - TZKEYNAME = TZKEYNAME9X - handle.Close() - return TZKEYNAME - - -TZKEYNAME = _settzkeyname() - - -class tzres(object): - """ - Class for accessing ``tzres.dll``, which contains timezone name related - resources. - - .. versionadded:: 2.5.0 - """ - p_wchar = ctypes.POINTER(wintypes.WCHAR) # Pointer to a wide char - - def __init__(self, tzres_loc='tzres.dll'): - # Load the user32 DLL so we can load strings from tzres - user32 = ctypes.WinDLL('user32') - - # Specify the LoadStringW function - user32.LoadStringW.argtypes = (wintypes.HINSTANCE, - wintypes.UINT, - wintypes.LPWSTR, - ctypes.c_int) - - self.LoadStringW = user32.LoadStringW - self._tzres = ctypes.WinDLL(tzres_loc) - self.tzres_loc = tzres_loc - - def load_name(self, offset): - """ - Load a timezone name from a DLL offset (integer). - - >>> from dateutil.tzwin import tzres - >>> tzr = tzres() - >>> print(tzr.load_name(112)) - 'Eastern Standard Time' - - :param offset: - A positive integer value referring to a string from the tzres dll. - - .. note:: - - Offsets found in the registry are generally of the form - ``@tzres.dll,-114``. The offset in this case is 114, not -114. - - """ - resource = self.p_wchar() - lpBuffer = ctypes.cast(ctypes.byref(resource), wintypes.LPWSTR) - nchar = self.LoadStringW(self._tzres._handle, offset, lpBuffer, 0) - return resource[:nchar] - - def name_from_string(self, tzname_str): - """ - Parse strings as returned from the Windows registry into the time zone - name as defined in the registry. - - >>> from dateutil.tzwin import tzres - >>> tzr = tzres() - >>> print(tzr.name_from_string('@tzres.dll,-251')) - 'Dateline Daylight Time' - >>> print(tzr.name_from_string('Eastern Standard Time')) - 'Eastern Standard Time' - - :param tzname_str: - A timezone name string as returned from a Windows registry key. - - :return: - Returns the localized timezone string from tzres.dll if the string - is of the form `@tzres.dll,-offset`, else returns the input string. - """ - if not tzname_str.startswith('@'): - return tzname_str - - name_splt = tzname_str.split(',-') - try: - offset = int(name_splt[1]) - except: - raise ValueError("Malformed timezone string.") - - return self.load_name(offset) - - -class tzwinbase(tzrangebase): - """tzinfo class based on win32's timezones available in the registry.""" - def __init__(self): - raise NotImplementedError('tzwinbase is an abstract base class') - - def __eq__(self, other): - # Compare on all relevant dimensions, including name. 
- if not isinstance(other, tzwinbase): - return NotImplemented - - return (self._std_offset == other._std_offset and - self._dst_offset == other._dst_offset and - self._stddayofweek == other._stddayofweek and - self._dstdayofweek == other._dstdayofweek and - self._stdweeknumber == other._stdweeknumber and - self._dstweeknumber == other._dstweeknumber and - self._stdhour == other._stdhour and - self._dsthour == other._dsthour and - self._stdminute == other._stdminute and - self._dstminute == other._dstminute and - self._std_abbr == other._std_abbr and - self._dst_abbr == other._dst_abbr) - - @staticmethod - def list(): - """Return a list of all time zones known to the system.""" - with winreg.ConnectRegistry(None, winreg.HKEY_LOCAL_MACHINE) as handle: - with winreg.OpenKey(handle, TZKEYNAME) as tzkey: - result = [winreg.EnumKey(tzkey, i) - for i in range(winreg.QueryInfoKey(tzkey)[0])] - return result - - def display(self): - """ - Return the display name of the time zone. - """ - return self._display - - def transitions(self, year): - """ - For a given year, get the DST on and off transition times, expressed - always on the standard time side. For zones with no transitions, this - function returns ``None``. - - :param year: - The year whose transitions you would like to query. - - :return: - Returns a :class:`tuple` of :class:`datetime.datetime` objects, - ``(dston, dstoff)`` for zones with an annual DST transition, or - ``None`` for fixed offset zones. - """ - - if not self.hasdst: - return None - - dston = picknthweekday(year, self._dstmonth, self._dstdayofweek, - self._dsthour, self._dstminute, - self._dstweeknumber) - - dstoff = picknthweekday(year, self._stdmonth, self._stddayofweek, - self._stdhour, self._stdminute, - self._stdweeknumber) - - # Ambiguous dates default to the STD side - dstoff -= self._dst_base_offset - - return dston, dstoff - - def _get_hasdst(self): - return self._dstmonth != 0 - - @property - def _dst_base_offset(self): - return self._dst_base_offset_ - - -class tzwin(tzwinbase): - """ - Time zone object created from the zone info in the Windows registry - - These are similar to :py:class:`dateutil.tz.tzrange` objects in that - the time zone data is provided in the format of a single offset rule - for either 0 or 2 time zone transitions per year. - - :param: name - The name of a Windows time zone key, e.g. "Eastern Standard Time". - The full list of keys can be retrieved with :func:`tzwin.list`. 
- """ - - def __init__(self, name): - self._name = name - - with winreg.ConnectRegistry(None, winreg.HKEY_LOCAL_MACHINE) as handle: - tzkeyname = text_type("{kn}\\{name}").format(kn=TZKEYNAME, name=name) - with winreg.OpenKey(handle, tzkeyname) as tzkey: - keydict = valuestodict(tzkey) - - self._std_abbr = keydict["Std"] - self._dst_abbr = keydict["Dlt"] - - self._display = keydict["Display"] - - # See http://ww_winreg.jsiinc.com/SUBA/tip0300/rh0398.htm - tup = struct.unpack("=3l16h", keydict["TZI"]) - stdoffset = -tup[0]-tup[1] # Bias + StandardBias * -1 - dstoffset = stdoffset-tup[2] # + DaylightBias * -1 - self._std_offset = datetime.timedelta(minutes=stdoffset) - self._dst_offset = datetime.timedelta(minutes=dstoffset) - - # for the meaning see the win32 TIME_ZONE_INFORMATION structure docs - # http://msdn.microsoft.com/en-us/library/windows/desktop/ms725481(v=vs.85).aspx - (self._stdmonth, - self._stddayofweek, # Sunday = 0 - self._stdweeknumber, # Last = 5 - self._stdhour, - self._stdminute) = tup[4:9] - - (self._dstmonth, - self._dstdayofweek, # Sunday = 0 - self._dstweeknumber, # Last = 5 - self._dsthour, - self._dstminute) = tup[12:17] - - self._dst_base_offset_ = self._dst_offset - self._std_offset - self.hasdst = self._get_hasdst() - - def __repr__(self): - return "tzwin(%s)" % repr(self._name) - - def __reduce__(self): - return (self.__class__, (self._name,)) - - -class tzwinlocal(tzwinbase): - """ - Class representing the local time zone information in the Windows registry - - While :class:`dateutil.tz.tzlocal` makes system calls (via the :mod:`time` - module) to retrieve time zone information, ``tzwinlocal`` retrieves the - rules directly from the Windows registry and creates an object like - :class:`dateutil.tz.tzwin`. - - Because Windows does not have an equivalent of :func:`time.tzset`, on - Windows, :class:`dateutil.tz.tzlocal` instances will always reflect the - time zone settings *at the time that the process was started*, meaning - changes to the machine's time zone settings during the run of a program - on Windows will **not** be reflected by :class:`dateutil.tz.tzlocal`. - Because ``tzwinlocal`` reads the registry directly, it is unaffected by - this issue. - """ - def __init__(self): - with winreg.ConnectRegistry(None, winreg.HKEY_LOCAL_MACHINE) as handle: - with winreg.OpenKey(handle, TZLOCALKEYNAME) as tzlocalkey: - keydict = valuestodict(tzlocalkey) - - self._std_abbr = keydict["StandardName"] - self._dst_abbr = keydict["DaylightName"] - - try: - tzkeyname = text_type('{kn}\\{sn}').format(kn=TZKEYNAME, - sn=self._std_abbr) - with winreg.OpenKey(handle, tzkeyname) as tzkey: - _keydict = valuestodict(tzkey) - self._display = _keydict["Display"] - except OSError: - self._display = None - - stdoffset = -keydict["Bias"]-keydict["StandardBias"] - dstoffset = stdoffset-keydict["DaylightBias"] - - self._std_offset = datetime.timedelta(minutes=stdoffset) - self._dst_offset = datetime.timedelta(minutes=dstoffset) - - # For reasons unclear, in this particular key, the day of week has been - # moved to the END of the SYSTEMTIME structure. 
- tup = struct.unpack("=8h", keydict["StandardStart"]) - - (self._stdmonth, - self._stdweeknumber, # Last = 5 - self._stdhour, - self._stdminute) = tup[1:5] - - self._stddayofweek = tup[7] - - tup = struct.unpack("=8h", keydict["DaylightStart"]) - - (self._dstmonth, - self._dstweeknumber, # Last = 5 - self._dsthour, - self._dstminute) = tup[1:5] - - self._dstdayofweek = tup[7] - - self._dst_base_offset_ = self._dst_offset - self._std_offset - self.hasdst = self._get_hasdst() - - def __repr__(self): - return "tzwinlocal()" - - def __str__(self): - # str will return the standard name, not the daylight name. - return "tzwinlocal(%s)" % repr(self._std_abbr) - - def __reduce__(self): - return (self.__class__, ()) - - -def picknthweekday(year, month, dayofweek, hour, minute, whichweek): - """ dayofweek == 0 means Sunday, whichweek 5 means last instance """ - first = datetime.datetime(year, month, 1, hour, minute) - - # This will work if dayofweek is ISO weekday (1-7) or Microsoft-style (0-6), - # Because 7 % 7 = 0 - weekdayone = first.replace(day=((dayofweek - first.isoweekday()) % 7) + 1) - wd = weekdayone + ((whichweek - 1) * ONEWEEK) - if (wd.month != month): - wd -= ONEWEEK - - return wd - - -def valuestodict(key): - """Convert a registry key's values to a dictionary.""" - dout = {} - size = winreg.QueryInfoKey(key)[1] - tz_res = None - - for i in range(size): - key_name, value, dtype = winreg.EnumValue(key, i) - if dtype == winreg.REG_DWORD or dtype == winreg.REG_DWORD_LITTLE_ENDIAN: - # If it's a DWORD (32-bit integer), it's stored as unsigned - convert - # that to a proper signed integer - if value & (1 << 31): - value = value - (1 << 32) - elif dtype == winreg.REG_SZ: - # If it's a reference to the tzres DLL, load the actual string - if value.startswith('@tzres'): - tz_res = tz_res or tzres() - value = tz_res.name_from_string(value) - - value = value.rstrip('\x00') # Remove trailing nulls - - dout[key_name] = value - - return dout diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/filelock/version.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/filelock/version.py deleted file mode 100644 index 23c19a10e0ab8f0f2eeeab0e656b2d868837d450..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/filelock/version.py +++ /dev/null @@ -1,4 +0,0 @@ -# file generated by setuptools_scm -# don't change, don't track in version control -__version__ = version = '3.12.4' -__version_tuple__ = version_tuple = (3, 12, 4) diff --git a/spaces/jordonpeter01/ai-comic-factory/src/lib/triggerDownload.ts b/spaces/jordonpeter01/ai-comic-factory/src/lib/triggerDownload.ts deleted file mode 100644 index e5627a26a4bba34bdf28279d265c6a71440d8136..0000000000000000000000000000000000000000 --- a/spaces/jordonpeter01/ai-comic-factory/src/lib/triggerDownload.ts +++ /dev/null @@ -1,12 +0,0 @@ -export function triggerDownload(filename: string, text: string) { - var element = document.createElement('a'); - element.setAttribute('href', 'data:text/plain;charset=utf-8,' + encodeURIComponent(text)); - element.setAttribute('download', filename); - - element.style.display = 'none'; - document.body.appendChild(element); - - element.click(); - - document.body.removeChild(element); -} \ No newline at end of file diff --git a/spaces/juuxn/SimpleRVC/inference.py b/spaces/juuxn/SimpleRVC/inference.py deleted file mode 100644 index 8f10fb90b1fefa429e88ec15d5b80467f6e4e648..0000000000000000000000000000000000000000 --- 
a/spaces/juuxn/SimpleRVC/inference.py +++ /dev/null @@ -1,248 +0,0 @@ -import infer_web -import wget -import os -import scipy.io.wavfile as wavfile -from utils import model -import validators -from myutils import delete_files - -class Inference: - - inference_cont = 0 - - def __init__( - self, - model_name=None, - source_audio_path=None, - output_file_name=None, - feature_index_path="", - f0_file=None, - speaker_id=0, - transposition=0, - f0_method="harvest", - crepe_hop_length=160, - harvest_median_filter=3, - resample=0, - mix=1, - feature_ratio=0.78, - protection_amnt=0.33, - protect1=False - ): - Inference.inference_cont += 1 - self._model_name = model_name - self._source_audio_path = source_audio_path - self._output_file_name = output_file_name - self._feature_index_path = feature_index_path - self._f0_file = f0_file - self._speaker_id = speaker_id - self._transposition = transposition - self._f0_method = f0_method - self._crepe_hop_length = crepe_hop_length - self._harvest_median_filter = harvest_median_filter - self._resample = resample - self._mix = mix - self._feature_ratio = feature_ratio - self._protection_amnt = protection_amnt - self._protect1 = protect1 - self._id = Inference.inference_cont - - if not os.path.exists("./hubert_base.pt"): - wget.download( - "https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/hubert_base.pt", out="./hubert_base.pt") - - if not os.path.exists("./rmvpe.pt"): - wget.download( - "https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/rmvpe.pt", out="./rmvpe.pt" - ) - - @property - def id(self): - return self._id - - @id.setter - def id(self, id): - self._id = id - - @property - def audio(self): - return self._audio - - @audio.setter - def audio_file(self, audio): - self._audio_file = audio - - @property - def model_name(self): - return self._model_name - - @model_name.setter - def model_name(self, model_name): - self._model_name = model_name - - @property - def source_audio_path(self): - return self._source_audio_path - - @source_audio_path.setter - def source_audio_path(self, source_audio_path): - if not self._output_file_name: - self._output_file_name = os.path.join("./audio-outputs", os.path.basename(source_audio_path)) - self._source_audio_path = source_audio_path - - @property - def output_file_name(self): - return self._output_file_name - - @output_file_name.setter - def output_file_name(self, output_file_name): - self._output_file_name = output_file_name - - @property - def feature_index_path(self): - return self._feature_index_path - - @feature_index_path.setter - def feature_index_path(self, feature_index_path): - self._feature_index_path = feature_index_path - - @property - def f0_file(self): - return self._f0_file - - @f0_file.setter - def f0_file(self, f0_file): - self._f0_file = f0_file - - @property - def speaker_id(self): - return self._speaker_id - - @speaker_id.setter - def speaker_id(self, speaker_id): - self._speaker_id = speaker_id - - @property - def transposition(self): - return self._transposition - - @transposition.setter - def transposition(self, transposition): - self._transposition = transposition - - @property - def f0_method(self): - return self._f0_method - - @f0_method.setter - def f0_method(self, f0_method): - self._f0_method = f0_method - - @property - def crepe_hop_length(self): - return self._crepe_hop_length - - @crepe_hop_length.setter - def crepe_hop_length(self, crepe_hop_length): - self._crepe_hop_length = crepe_hop_length - - @property - def harvest_median_filter(self): - return 
self._harvest_median_filter - - @crepe_hop_length.setter - def harvest_median_filter(self, harvest_median_filter): - self._harvest_median_filter = harvest_median_filter - - @property - def resample(self): - return self._resample - - @resample.setter - def resample(self, resample): - self._resample = resample - - @property - def mix(self): - return self._mix - - @mix.setter - def mix(self, mix): - self._mix = mix - - @property - def feature_ratio(self): - return self._feature_ratio - - @feature_ratio.setter - def feature_ratio(self, feature_ratio): - self._feature_ratio = feature_ratio - - @property - def protection_amnt(self): - return self._protection_amnt - - @protection_amnt.setter - def protection_amnt(self, protection_amnt): - self._protection_amnt = protection_amnt - - @property - def protect1(self): - return self._protect1 - - @protect1.setter - def protect1(self, protect1): - self._protect1 = protect1 - - def run(self): - current_dir = os.getcwd() - modelname = model.model_downloader( - self._model_name, "./zips/", "./weights/") - - if not modelname: - return "No se ha podido descargar el modelo, intenta con otro enlace o intentalo más tarde." - - model_info = model.get_model(os.path.join(current_dir, 'weights') , modelname) - if not model_info: - return "No se encontrado un modelo valido, verifica el contenido del enlace e intentalo más tarde." - - if not model_info.get('pth'): - return "No se encontrado un modelo valido, verifica el contenido del enlace e intentalo más tarde." - - index = model_info.get('index', '') - pth = model_info.get('pth', None) - - infer_web.get_vc(pth) - - conversion_data = infer_web.vc_single( - self.speaker_id, - self.source_audio_path, - self.source_audio_path, - self.transposition, - self.f0_file, - self.f0_method, - index, - index, - self.feature_ratio, - self.harvest_median_filter, - self.resample, - self.mix, - self.protection_amnt, - self.crepe_hop_length, - ) - - if "Success." in conversion_data[0]: - wavfile.write( - "%s/%s" % ("audio-outputs",os.path.basename(self._output_file_name)), - conversion_data[1][0], - conversion_data[1][1], - ) - return({ - "success": True, - "file": self._output_file_name - }) - else: - return({ - "success": False, - "file": self._output_file_name - }) - #print(conversion_data[0]) \ No newline at end of file diff --git a/spaces/jykoh/gill/Dockerfile b/spaces/jykoh/gill/Dockerfile deleted file mode 100644 index 3785dbbe0c70fb8faefca7f18f81c50d4088f389..0000000000000000000000000000000000000000 --- a/spaces/jykoh/gill/Dockerfile +++ /dev/null @@ -1,19 +0,0 @@ -FROM pytorch/pytorch:1.11.0-cuda11.3-cudnn8-runtime as base - -RUN apt-get update && apt-get -y install git - - -ENV HOME=/exp/fromage - - - -WORKDIR /exp/fromage -COPY ./requirements.txt ./requirements.txt -RUN python -m pip install -r ./requirements.txt -RUN python -m pip install --upgrade Jinja2 -RUN python -m pip install gradio - -COPY . . -RUN chmod -R a+rwX . 
-
-CMD ["uvicorn", "app:main", "--host", "0.0.0.0", "--port", "7860"]
diff --git a/spaces/kaicheng/ChatGPT_ad/README.md b/spaces/kaicheng/ChatGPT_ad/README.md
deleted file mode 100644
index 79790f767ded0eb77b8129f8e960c65b8d166c14..0000000000000000000000000000000000000000
--- a/spaces/kaicheng/ChatGPT_ad/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: ChuanhuChatGPT
-emoji: 🐯
-colorFrom: green
-colorTo: red
-sdk: gradio
-sdk_version: 3.33.1
-app_file: ChuanhuChatbot.py
-pinned: false
-license: gpl-3.0
-duplicated_from: JohnSmith9982/ChuanhuChatGPT
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/kcagle/AutoGPT/tests/unit/test_browse_scrape_links.py b/spaces/kcagle/AutoGPT/tests/unit/test_browse_scrape_links.py
deleted file mode 100644
index 0a3340e7397a997da96b8ab9828954230e1a3c20..0000000000000000000000000000000000000000
--- a/spaces/kcagle/AutoGPT/tests/unit/test_browse_scrape_links.py
+++ /dev/null
@@ -1,118 +0,0 @@
-# Generated by CodiumAI
-
-# Dependencies:
-# pip install pytest-mock
-import pytest
-
-from autogpt.commands.web_requests import scrape_links
-
-"""
-Code Analysis
-
-Objective:
-The objective of the 'scrape_links' function is to scrape hyperlinks from a
-given URL and return them in a formatted way.
-
-Inputs:
-- url: a string representing the URL to be scraped.
-
-Flow:
-1. Send a GET request to the given URL using the requests library and the user agent header from the config file.
-2. Check if the response contains an HTTP error. If it does, return "error".
-3. Parse the HTML content of the response using the BeautifulSoup library.
-4. Remove any script and style tags from the parsed HTML.
-5. Extract all hyperlinks from the parsed HTML using the 'extract_hyperlinks' function.
-6. Format the extracted hyperlinks using the 'format_hyperlinks' function.
-7. Return the formatted hyperlinks.
-
-Outputs:
-- A list of formatted hyperlinks.
-
-Additional aspects:
-- The function uses the 'requests' and 'BeautifulSoup' libraries to send HTTP
-requests and parse HTML content, respectively.
-- The 'extract_hyperlinks' function is called to extract hyperlinks from the parsed HTML.
-- The 'format_hyperlinks' function is called to format the extracted hyperlinks.
-- The function checks for HTTP errors and returns "error" if any are found.
-"""
-
-
-class TestScrapeLinks:
-    # Tests that the function returns a list of formatted hyperlinks when
-    # provided with a valid url that returns a webpage with hyperlinks.
-    def test_valid_url_with_hyperlinks(self):
-        url = "https://www.google.com"
-        result = scrape_links(url)
-        assert len(result) > 0
-        assert isinstance(result, list)
-        assert isinstance(result[0], str)
-
-    # Tests that the function returns correctly formatted hyperlinks when given a valid url.
-    def test_valid_url(self, mocker):
-        # Mock the requests.get() function to return a response with sample HTML containing hyperlinks
-        mock_response = mocker.Mock()
-        mock_response.status_code = 200
-        mock_response.text = (
-            '<html><body><a href="https://www.google.com">Google</a></body></html>'
-        )
-        mocker.patch("requests.Session.get", return_value=mock_response)
-
-        # Call the function with a valid URL
-        result = scrape_links("https://www.example.com")
-
-        # Assert that the function returns correctly formatted hyperlinks
-        assert result == ["Google (https://www.google.com)"]
-
-    # Tests that the function returns "error" when given an invalid url.
-    def test_invalid_url(self, mocker):
-        # Mock the requests.get() function to return an HTTP error response
-        mock_response = mocker.Mock()
-        mock_response.status_code = 404
-        mocker.patch("requests.Session.get", return_value=mock_response)
-
-        # Call the function with an invalid URL
-        result = scrape_links("https://www.invalidurl.com")
-
-        # Assert that the function returns "error"
-        assert "Error:" in result
-
-    # Tests that the function returns an empty list when the html contains no hyperlinks.
-    def test_no_hyperlinks(self, mocker):
-        # Mock the requests.get() function to return a response with sample HTML containing no hyperlinks
-        mock_response = mocker.Mock()
-        mock_response.status_code = 200
-        mock_response.text = "<html><body><p>No hyperlinks here</p></body></html>"
-        mocker.patch("requests.Session.get", return_value=mock_response)
-
-        # Call the function with a URL containing no hyperlinks
-        result = scrape_links("https://www.example.com")
-
-        # Assert that the function returns an empty list
-        assert result == []
-
-    # Tests that scrape_links() correctly extracts and formats hyperlinks from
-    # a sample HTML containing a few hyperlinks.
-    def test_scrape_links_with_few_hyperlinks(self, mocker):
-        # Mock the requests.get() function to return a response with a sample HTML containing hyperlinks
-        mock_response = mocker.Mock()
-        mock_response.status_code = 200
-        mock_response.text = """
-        <html>
-        <body>
-            <a href="https://www.google.com">Google</a>
-            <a href="https://github.com">GitHub</a>
-            <a href="https://www.codium.ai">CodiumAI</a>
-        </body>
-        </html>
-        """
-        mocker.patch("requests.Session.get", return_value=mock_response)
-
-        # Call the function being tested
-        result = scrape_links("https://www.example.com")
-
-        # Assert that the function returns a list of formatted hyperlinks
-        assert isinstance(result, list)
-        assert len(result) == 3
-        assert result[0] == "Google (https://www.google.com)"
-        assert result[1] == "GitHub (https://github.com)"
-        assert result[2] == "CodiumAI (https://www.codium.ai)"
diff --git a/spaces/keithhon/Real-Time-Voice-Cloning/encoder/params_model.py b/spaces/keithhon/Real-Time-Voice-Cloning/encoder/params_model.py
deleted file mode 100644
index 3e356472fb5a27f370cb3920976a11d12a76c1b7..0000000000000000000000000000000000000000
--- a/spaces/keithhon/Real-Time-Voice-Cloning/encoder/params_model.py
+++ /dev/null
@@ -1,11 +0,0 @@
-
-## Model parameters
-model_hidden_size = 256
-model_embedding_size = 256
-model_num_layers = 3
-
-
-## Training parameters
-learning_rate_init = 1e-4
-speakers_per_batch = 64
-utterances_per_speaker = 10
diff --git a/spaces/kellyxiaowei/OWL-ViT/README.md b/spaces/kellyxiaowei/OWL-ViT/README.md
deleted file mode 100644
index 6449d4d69290eb0f8a35f1fef3fb123c6c2a739f..0000000000000000000000000000000000000000
--- a/spaces/kellyxiaowei/OWL-ViT/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: OWL-ViT Demo
-emoji: 🔥
-colorFrom: yellow
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.1.3
-app_file: app.py
-pinned: false
-license: apache-2.0
-duplicated_from: adirik/OWL-ViT
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/keras-dreambooth/nuthatch-bird-demo/README.md b/spaces/keras-dreambooth/nuthatch-bird-demo/README.md
deleted file mode 100644
index 8a6d73ead73f677fe4edf2bcca0db7c32701574b..0000000000000000000000000000000000000000
--- a/spaces/keras-dreambooth/nuthatch-bird-demo/README.md
+++ /dev/null
@@ -1,16 +0,0 @@
----
-title: Nuthatch Bird Demo
-emoji: 👁
-colorFrom: blue
-colorTo: red
-sdk: gradio
-sdk_version: 3.21.0
-app_file: app.py
-tags:
-  - keras-dreambooth
-  - nature
-pinned: false
-license: apache-2.0
---- 
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/kevinwang676/Bark-UI-with-Voice-Cloning-2/parseinput.py b/spaces/kevinwang676/Bark-UI-with-Voice-Cloning-2/parseinput.py
deleted file mode 100644
index f2102648cf169f0a52bb66755308fee5f81247e0..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/Bark-UI-with-Voice-Cloning-2/parseinput.py
+++ /dev/null
@@ -1,129 +0,0 @@
-import re
-import xml.etree.ElementTree as ET
-from xml.sax import saxutils
-#import nltk
-
-# Chunked generation originally from https://github.com/serp-ai/bark-with-voice-clone
-def split_and_recombine_text(text, desired_length=100, max_length=150):
-    # return nltk.sent_tokenize(text)
-
-    # from https://github.com/neonbjb/tortoise-tts
-    """Split text into chunks of a desired length, trying to keep sentences intact."""
-    # normalize text, remove redundant whitespace and convert non-ascii quotes to ascii
-    text = re.sub(r"\n\n+", "\n", text)
-    text = re.sub(r"\s+", " ", text)
-    text = re.sub(r"[“”]", '"', text)
-
-    rv = []
-    in_quote = False
-    current = ""
-    split_pos = []
-    pos = -1
-    end_pos = len(text) - 1
-
-    def seek(delta):
-        nonlocal pos, in_quote, current
-        is_neg = delta < 0
-        for _ in range(abs(delta)):
-            if is_neg:
-                pos -= 1
-                current = current[:-1]
-            else:
-                pos += 1
-                current += text[pos]
-            if text[pos] == '"':
-                in_quote = not in_quote
-        return text[pos]
-
-    def peek(delta):
-        p = pos + delta
-        return text[p] if p < end_pos and p >= 0 else ""
-
-    def commit():
-        nonlocal rv, current, split_pos
-        rv.append(current)
-        current = ""
-        split_pos = []
-
-    while pos < end_pos:
-        c = seek(1)
-        # do we need to force a split?
-        if len(current) >= max_length:
-            if len(split_pos) > 0 and len(current) > (desired_length / 2):
-                # we have at least one sentence and we are over half the desired length, seek back to the last split
-                d = pos - split_pos[-1]
-                seek(-d)
-            else:
-                # no full sentences, seek back until we are not in the middle of a word and split there
-                while c not in "!?.,\n " and pos > 0 and len(current) > desired_length:
-                    c = seek(-1)
-            commit()
-        # check for sentence boundaries
-        elif not in_quote and (c in "!?]\n" or (c == "." and peek(1) in "\n ")):
-            # seek forward if we have consecutive boundary markers but still within the max length
-            while (
-                pos < len(text) - 1 and len(current) < max_length and peek(1) in "!?.]"
-            ):
-                c = seek(1)
-            split_pos.append(pos)
-            if len(current) >= desired_length:
-                commit()
-        # treat end of quote as a boundary if it's followed by a space or newline
-        elif in_quote and peek(1) == '"' and peek(2) in "\n ":
-            seek(2)
-            split_pos.append(pos)
-    rv.append(current)
-
-    # clean up, remove lines with only whitespace or punctuation
-    rv = [s.strip() for s in rv]
-    rv = [s for s in rv if len(s) > 0 and not re.match(r"^[\s\.,;:!?]*$", s)]
-
-    return rv
-
-def is_ssml(value):
-    try:
-        ET.fromstring(value)
-    except ET.ParseError:
-        return False
-    return True
-
-def build_ssml(rawtext, selected_voice):
-    texts = rawtext.split("\n")
-    joinedparts = ""
-    for textpart in texts:
-        textpart = textpart.strip()
-        if len(textpart) < 1:
-            continue
-        joinedparts = joinedparts + f"\n<voice name=\"{selected_voice}\">{saxutils.escape(textpart)}</voice>"
-    ssml = f"""<?xml version="1.0"?>
-    <speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en">
-    {joinedparts}
-    </speak>
-    """
-    return ssml
-
-def create_clips_from_ssml(ssmlinput):
-    # Parse the XML
-    tree = ET.ElementTree(ET.fromstring(ssmlinput))
-    root = tree.getroot()
-
-    # Create an empty list
-    voice_list = []
-
-    # Loop through all voice tags
-    for voice in root.iter('{http://www.w3.org/2001/10/synthesis}voice'):
-        # Extract the voice name attribute and the content text
-        voice_name = voice.attrib['name']
-        voice_content = voice.text.strip() if voice.text else ''
-        if(len(voice_content) > 0):
-            parts = split_and_recombine_text(voice_content)
-            for p in parts:
-                if(len(p) > 1):
-                    # add to tuple list
-                    voice_list.append((voice_name, p))
-    return voice_list
-
diff --git a/spaces/kevinwang676/Bark-with-Voice-Cloning/bark/model.py b/spaces/kevinwang676/Bark-with-Voice-Cloning/bark/model.py
deleted file mode 100644
index 457b49e749f396c47c6b35f44955fd512d233d79..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/Bark-with-Voice-Cloning/bark/model.py
+++ /dev/null
@@ -1,218 +0,0 @@
-"""
-Much of this code is adapted
from Andrej Karpathy's NanoGPT -(https://github.com/karpathy/nanoGPT) -""" -import math -from dataclasses import dataclass - -import torch -import torch.nn as nn -from torch.nn import functional as F - -class LayerNorm(nn.Module): - """ LayerNorm but with an optional bias. PyTorch doesn't support simply bias=False """ - - def __init__(self, ndim, bias): - super().__init__() - self.weight = nn.Parameter(torch.ones(ndim)) - self.bias = nn.Parameter(torch.zeros(ndim)) if bias else None - - def forward(self, input): - return F.layer_norm(input, self.weight.shape, self.weight, self.bias, 1e-5) - -class CausalSelfAttention(nn.Module): - - def __init__(self, config): - super().__init__() - assert config.n_embd % config.n_head == 0 - # key, query, value projections for all heads, but in a batch - self.c_attn = nn.Linear(config.n_embd, 3 * config.n_embd, bias=config.bias) - # output projection - self.c_proj = nn.Linear(config.n_embd, config.n_embd, bias=config.bias) - # regularization - self.attn_dropout = nn.Dropout(config.dropout) - self.resid_dropout = nn.Dropout(config.dropout) - self.n_head = config.n_head - self.n_embd = config.n_embd - self.dropout = config.dropout - # flash attention make GPU go brrrrr but support is only in PyTorch nightly and still a bit scary - self.flash = hasattr(torch.nn.functional, 'scaled_dot_product_attention') - if not self.flash: - # print("WARNING: using slow attention. Flash Attention atm needs PyTorch nightly and dropout=0.0") - # causal mask to ensure that attention is only applied to the left in the input sequence - self.register_buffer("bias", torch.tril(torch.ones(config.block_size, config.block_size)) - .view(1, 1, config.block_size, config.block_size)) - - def forward(self, x, past_kv=None, use_cache=False): - B, T, C = x.size() # batch size, sequence length, embedding dimensionality (n_embd) - - # calculate query, key, values for all heads in batch and move head forward to be the batch dim - q, k ,v = self.c_attn(x).split(self.n_embd, dim=2) - k = k.view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs) - q = q.view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs) - v = v.view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs) - - if past_kv is not None: - past_key = past_kv[0] - past_value = past_kv[1] - k = torch.cat((past_key, k), dim=-2) - v = torch.cat((past_value, v), dim=-2) - - FULL_T = k.shape[-2] - - if use_cache is True: - present = (k, v) - else: - present = None - - # causal self-attention; Self-attend: (B, nh, T, hs) x (B, nh, hs, T) -> (B, nh, T, T) - if self.flash: - # efficient attention using Flash Attention CUDA kernels - if past_kv is not None: - # When `past_kv` is provided, we're doing incremental decoding and `q.shape[2] == 1`: q only contains - # the query for the last token. scaled_dot_product_attention interprets this as the first token in the - # sequence, so if is_causal=True it will mask out all attention from it. This is not what we want, so - # to work around this we set is_causal=False. 
- is_causal = False - else: - is_causal = True - - y = torch.nn.functional.scaled_dot_product_attention(q, k, v, dropout_p=self.dropout, is_causal=is_causal) - else: - # manual implementation of attention - att = (q @ k.transpose(-2, -1)) * (1.0 / math.sqrt(k.size(-1))) - att = att.masked_fill(self.bias[:,:,FULL_T-T:FULL_T,:FULL_T] == 0, float('-inf')) - att = F.softmax(att, dim=-1) - att = self.attn_dropout(att) - y = att @ v # (B, nh, T, T) x (B, nh, T, hs) -> (B, nh, T, hs) - y = y.transpose(1, 2).contiguous().view(B, T, C) # re-assemble all head outputs side by side - - # output projection - y = self.resid_dropout(self.c_proj(y)) - return (y, present) - -class MLP(nn.Module): - - def __init__(self, config): - super().__init__() - self.c_fc = nn.Linear(config.n_embd, 4 * config.n_embd, bias=config.bias) - self.c_proj = nn.Linear(4 * config.n_embd, config.n_embd, bias=config.bias) - self.dropout = nn.Dropout(config.dropout) - self.gelu = nn.GELU() - - def forward(self, x): - x = self.c_fc(x) - x = self.gelu(x) - x = self.c_proj(x) - x = self.dropout(x) - return x - -class Block(nn.Module): - - def __init__(self, config, layer_idx): - super().__init__() - self.ln_1 = LayerNorm(config.n_embd, bias=config.bias) - self.attn = CausalSelfAttention(config) - self.ln_2 = LayerNorm(config.n_embd, bias=config.bias) - self.mlp = MLP(config) - self.layer_idx = layer_idx - - def forward(self, x, past_kv=None, use_cache=False): - attn_output, prev_kvs = self.attn(self.ln_1(x), past_kv=past_kv, use_cache=use_cache) - x = x + attn_output - x = x + self.mlp(self.ln_2(x)) - return (x, prev_kvs) - -@dataclass -class GPTConfig: - block_size: int = 1024 - input_vocab_size: int = 10_048 - output_vocab_size: int = 10_048 - n_layer: int = 12 - n_head: int = 12 - n_embd: int = 768 - dropout: float = 0.0 - bias: bool = True # True: bias in Linears and LayerNorms, like GPT-2. False: a bit better and faster - -class GPT(nn.Module): - - def __init__(self, config): - super().__init__() - assert config.input_vocab_size is not None - assert config.output_vocab_size is not None - assert config.block_size is not None - self.config = config - - self.transformer = nn.ModuleDict(dict( - wte = nn.Embedding(config.input_vocab_size, config.n_embd), - wpe = nn.Embedding(config.block_size, config.n_embd), - drop = nn.Dropout(config.dropout), - h = nn.ModuleList([Block(config, idx) for idx in range(config.n_layer)]), - ln_f = LayerNorm(config.n_embd, bias=config.bias), - )) - self.lm_head = nn.Linear(config.n_embd, config.output_vocab_size, bias=False) - - def get_num_params(self, non_embedding=True): - """ - Return the number of parameters in the model. - For non-embedding count (default), the position embeddings get subtracted. - The token embeddings would too, except due to the parameter sharing these - params are actually used as weights in the final layer, so we include them. 
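-
-        (Note: unlike the NanoGPT original of this docstring, the code below
-        subtracts the token embeddings as well, since this model's ``lm_head``
-        is a separate, untied projection.)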
- """ - n_params = sum(p.numel() for p in self.parameters()) - if non_embedding: - n_params -= self.transformer.wte.weight.numel() - n_params -= self.transformer.wpe.weight.numel() - return n_params - - def forward(self, idx, merge_context=False, past_kv=None, position_ids=None, use_cache=False): - device = idx.device - b, t = idx.size() - if past_kv is not None: - assert t == 1 - tok_emb = self.transformer.wte(idx) # token embeddings of shape (b, t, n_embd) - else: - if merge_context: - assert(idx.shape[1] >= 256+256+1) - t = idx.shape[1] - 256 - else: - assert t <= self.config.block_size, f"Cannot forward sequence of length {t}, block size is only {self.config.block_size}" - - # forward the GPT model itself - if merge_context: - tok_emb = torch.cat([ - self.transformer.wte(idx[:,:256]) + self.transformer.wte(idx[:,256:256+256]), - self.transformer.wte(idx[:,256+256:]) - ], dim=1) - else: - tok_emb = self.transformer.wte(idx) # token embeddings of shape (b, t, n_embd) - - if past_kv is None: - past_length = 0 - past_kv = tuple([None] * len(self.transformer.h)) - else: - past_length = past_kv[0][0].size(-2) - - if position_ids is None: - position_ids = torch.arange(past_length, t + past_length, dtype=torch.long, device=device) - position_ids = position_ids.unsqueeze(0) # shape (1, t) - assert position_ids.shape == (1, t) - - pos_emb = self.transformer.wpe(position_ids) # position embeddings of shape (1, t, n_embd) - - x = self.transformer.drop(tok_emb + pos_emb) - - new_kv = () if use_cache else None - - for i, (block, past_layer_kv) in enumerate(zip(self.transformer.h, past_kv)): - x, kv = block(x, past_kv=past_layer_kv, use_cache=use_cache) - - if use_cache: - new_kv = new_kv + (kv,) - - x = self.transformer.ln_f(x) - - # inference-time mini-optimization: only forward the lm_head on the very last position - logits = self.lm_head(x[:, [-1], :]) # note: using list [-1] to preserve the time dim - - return (logits, new_kv) diff --git a/spaces/kevinwang676/ChatGLM2-VC-SadTalker/src/facerender/modules/keypoint_detector.py b/spaces/kevinwang676/ChatGLM2-VC-SadTalker/src/facerender/modules/keypoint_detector.py deleted file mode 100644 index 62a38a962b2f1a4326aac771aced353ec5e22a96..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-VC-SadTalker/src/facerender/modules/keypoint_detector.py +++ /dev/null @@ -1,179 +0,0 @@ -from torch import nn -import torch -import torch.nn.functional as F - -from src.facerender.sync_batchnorm import SynchronizedBatchNorm2d as BatchNorm2d -from src.facerender.modules.util import KPHourglass, make_coordinate_grid, AntiAliasInterpolation2d, ResBottleneck - - -class KPDetector(nn.Module): - """ - Detecting canonical keypoints. Return keypoint position and jacobian near each keypoint. 
- """ - - def __init__(self, block_expansion, feature_channel, num_kp, image_channel, max_features, reshape_channel, reshape_depth, - num_blocks, temperature, estimate_jacobian=False, scale_factor=1, single_jacobian_map=False): - super(KPDetector, self).__init__() - - self.predictor = KPHourglass(block_expansion, in_features=image_channel, - max_features=max_features, reshape_features=reshape_channel, reshape_depth=reshape_depth, num_blocks=num_blocks) - - # self.kp = nn.Conv3d(in_channels=self.predictor.out_filters, out_channels=num_kp, kernel_size=7, padding=3) - self.kp = nn.Conv3d(in_channels=self.predictor.out_filters, out_channels=num_kp, kernel_size=3, padding=1) - - if estimate_jacobian: - self.num_jacobian_maps = 1 if single_jacobian_map else num_kp - # self.jacobian = nn.Conv3d(in_channels=self.predictor.out_filters, out_channels=9 * self.num_jacobian_maps, kernel_size=7, padding=3) - self.jacobian = nn.Conv3d(in_channels=self.predictor.out_filters, out_channels=9 * self.num_jacobian_maps, kernel_size=3, padding=1) - ''' - initial as: - [[1 0 0] - [0 1 0] - [0 0 1]] - ''' - self.jacobian.weight.data.zero_() - self.jacobian.bias.data.copy_(torch.tensor([1, 0, 0, 0, 1, 0, 0, 0, 1] * self.num_jacobian_maps, dtype=torch.float)) - else: - self.jacobian = None - - self.temperature = temperature - self.scale_factor = scale_factor - if self.scale_factor != 1: - self.down = AntiAliasInterpolation2d(image_channel, self.scale_factor) - - def gaussian2kp(self, heatmap): - """ - Extract the mean from a heatmap - """ - shape = heatmap.shape - heatmap = heatmap.unsqueeze(-1) - grid = make_coordinate_grid(shape[2:], heatmap.type()).unsqueeze_(0).unsqueeze_(0) - value = (heatmap * grid).sum(dim=(2, 3, 4)) - kp = {'value': value} - - return kp - - def forward(self, x): - if self.scale_factor != 1: - x = self.down(x) - - feature_map = self.predictor(x) - prediction = self.kp(feature_map) - - final_shape = prediction.shape - heatmap = prediction.view(final_shape[0], final_shape[1], -1) - heatmap = F.softmax(heatmap / self.temperature, dim=2) - heatmap = heatmap.view(*final_shape) - - out = self.gaussian2kp(heatmap) - - if self.jacobian is not None: - jacobian_map = self.jacobian(feature_map) - jacobian_map = jacobian_map.reshape(final_shape[0], self.num_jacobian_maps, 9, final_shape[2], - final_shape[3], final_shape[4]) - heatmap = heatmap.unsqueeze(2) - - jacobian = heatmap * jacobian_map - jacobian = jacobian.view(final_shape[0], final_shape[1], 9, -1) - jacobian = jacobian.sum(dim=-1) - jacobian = jacobian.view(jacobian.shape[0], jacobian.shape[1], 3, 3) - out['jacobian'] = jacobian - - return out - - -class HEEstimator(nn.Module): - """ - Estimating head pose and expression. 
- """ - - def __init__(self, block_expansion, feature_channel, num_kp, image_channel, max_features, num_bins=66, estimate_jacobian=True): - super(HEEstimator, self).__init__() - - self.conv1 = nn.Conv2d(in_channels=image_channel, out_channels=block_expansion, kernel_size=7, padding=3, stride=2) - self.norm1 = BatchNorm2d(block_expansion, affine=True) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - - self.conv2 = nn.Conv2d(in_channels=block_expansion, out_channels=256, kernel_size=1) - self.norm2 = BatchNorm2d(256, affine=True) - - self.block1 = nn.Sequential() - for i in range(3): - self.block1.add_module('b1_'+ str(i), ResBottleneck(in_features=256, stride=1)) - - self.conv3 = nn.Conv2d(in_channels=256, out_channels=512, kernel_size=1) - self.norm3 = BatchNorm2d(512, affine=True) - self.block2 = ResBottleneck(in_features=512, stride=2) - - self.block3 = nn.Sequential() - for i in range(3): - self.block3.add_module('b3_'+ str(i), ResBottleneck(in_features=512, stride=1)) - - self.conv4 = nn.Conv2d(in_channels=512, out_channels=1024, kernel_size=1) - self.norm4 = BatchNorm2d(1024, affine=True) - self.block4 = ResBottleneck(in_features=1024, stride=2) - - self.block5 = nn.Sequential() - for i in range(5): - self.block5.add_module('b5_'+ str(i), ResBottleneck(in_features=1024, stride=1)) - - self.conv5 = nn.Conv2d(in_channels=1024, out_channels=2048, kernel_size=1) - self.norm5 = BatchNorm2d(2048, affine=True) - self.block6 = ResBottleneck(in_features=2048, stride=2) - - self.block7 = nn.Sequential() - for i in range(2): - self.block7.add_module('b7_'+ str(i), ResBottleneck(in_features=2048, stride=1)) - - self.fc_roll = nn.Linear(2048, num_bins) - self.fc_pitch = nn.Linear(2048, num_bins) - self.fc_yaw = nn.Linear(2048, num_bins) - - self.fc_t = nn.Linear(2048, 3) - - self.fc_exp = nn.Linear(2048, 3*num_kp) - - def forward(self, x): - out = self.conv1(x) - out = self.norm1(out) - out = F.relu(out) - out = self.maxpool(out) - - out = self.conv2(out) - out = self.norm2(out) - out = F.relu(out) - - out = self.block1(out) - - out = self.conv3(out) - out = self.norm3(out) - out = F.relu(out) - out = self.block2(out) - - out = self.block3(out) - - out = self.conv4(out) - out = self.norm4(out) - out = F.relu(out) - out = self.block4(out) - - out = self.block5(out) - - out = self.conv5(out) - out = self.norm5(out) - out = F.relu(out) - out = self.block6(out) - - out = self.block7(out) - - out = F.adaptive_avg_pool2d(out, 1) - out = out.view(out.shape[0], -1) - - yaw = self.fc_roll(out) - pitch = self.fc_pitch(out) - roll = self.fc_yaw(out) - t = self.fc_t(out) - exp = self.fc_exp(out) - - return {'yaw': yaw, 'pitch': pitch, 'roll': roll, 't': t, 'exp': exp} - diff --git a/spaces/kevinwang676/SadTalker/src/face3d/options/__init__.py b/spaces/kevinwang676/SadTalker/src/face3d/options/__init__.py deleted file mode 100644 index e7eedebe54aa70169fd25951b3034d819e396c90..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/SadTalker/src/face3d/options/__init__.py +++ /dev/null @@ -1 +0,0 @@ -"""This package options includes option modules: training options, test options, and basic options (used in both training and test).""" diff --git a/spaces/kevinwang676/SadTalker/src/face3d/visualize.py b/spaces/kevinwang676/SadTalker/src/face3d/visualize.py deleted file mode 100644 index 23a1110806a0ddf37d4aa549c023d1c3f7114e3e..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/SadTalker/src/face3d/visualize.py +++ /dev/null @@ -1,48 +0,0 @@ -# check the sync of 3dmm 
feature and the audio -import cv2 -import numpy as np -from src.face3d.models.bfm import ParametricFaceModel -from src.face3d.models.facerecon_model import FaceReconModel -import torch -import subprocess, platform -import scipy.io as scio -from tqdm import tqdm - -# draft -def gen_composed_video(args, device, first_frame_coeff, coeff_path, audio_path, save_path, exp_dim=64): - - coeff_first = scio.loadmat(first_frame_coeff)['full_3dmm'] - - coeff_pred = scio.loadmat(coeff_path)['coeff_3dmm'] - - coeff_full = np.repeat(coeff_first, coeff_pred.shape[0], axis=0) # 257 - - coeff_full[:, 80:144] = coeff_pred[:, 0:64] - coeff_full[:, 224:227] = coeff_pred[:, 64:67] # 3 dim translation - coeff_full[:, 254:] = coeff_pred[:, 67:] # 3 dim translation - - tmp_video_path = '/tmp/face3dtmp.mp4' - - facemodel = FaceReconModel(args) - - video = cv2.VideoWriter(tmp_video_path, cv2.VideoWriter_fourcc(*'mp4v'), 25, (224, 224)) - - for k in tqdm(range(coeff_pred.shape[0]), 'face3d rendering:'): - cur_coeff_full = torch.tensor(coeff_full[k:k+1], device=device) - - facemodel.forward(cur_coeff_full, device) - - predicted_landmark = facemodel.pred_lm # TODO. - predicted_landmark = predicted_landmark.cpu().numpy().squeeze() - - rendered_img = facemodel.pred_face - rendered_img = 255. * rendered_img.cpu().numpy().squeeze().transpose(1,2,0) - out_img = rendered_img[:, :, :3].astype(np.uint8) - - video.write(np.uint8(out_img[:,:,::-1])) - - video.release() - - command = 'ffmpeg -v quiet -y -i {} -i {} -strict -2 -q:v 1 {}'.format(audio_path, tmp_video_path, save_path) - subprocess.call(command, shell=platform.system() != 'Windows') - diff --git a/spaces/kevinwang676/voice-conversion-yourtts/bark/api.py b/spaces/kevinwang676/voice-conversion-yourtts/bark/api.py deleted file mode 100644 index 7a4319ceaa13798912637290f8e9e88c50d5420a..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/voice-conversion-yourtts/bark/api.py +++ /dev/null @@ -1,158 +0,0 @@ -from typing import Dict, Optional, Union - -import numpy as np - -from .generation import codec_decode, generate_coarse, generate_fine, generate_text_semantic - - -def generate_with_settings(text_prompt, semantic_temp=0.6, eos_p=0.2, coarse_temp=0.7, fine_temp=0.5, voice_name=None, output_full=False): - - # generation with more control - x_semantic = generate_text_semantic( - text_prompt, - history_prompt=voice_name, - temp=semantic_temp, - min_eos_p = eos_p, - use_kv_caching=True - ) - - x_coarse_gen = generate_coarse( - x_semantic, - history_prompt=voice_name, - temp=coarse_temp, - use_kv_caching=True - ) - x_fine_gen = generate_fine( - x_coarse_gen, - history_prompt=voice_name, - temp=fine_temp, - ) - - if output_full: - full_generation = { - 'semantic_prompt': x_semantic, - 'coarse_prompt': x_coarse_gen, - 'fine_prompt': x_fine_gen - } - return full_generation, codec_decode(x_fine_gen) - return codec_decode(x_fine_gen) - - -def text_to_semantic( - text: str, - history_prompt: Optional[Union[Dict, str]] = None, - temp: float = 0.7, - silent: bool = False, -): - """Generate semantic array from text. 
- - Args: - text: text to be turned into audio - history_prompt: history choice for audio cloning - temp: generation temperature (1.0 more diverse, 0.0 more conservative) - silent: disable progress bar - - Returns: - numpy semantic array to be fed into `semantic_to_waveform` - """ - x_semantic = generate_text_semantic( - text, - history_prompt=history_prompt, - temp=temp, - silent=silent, - use_kv_caching=True - ) - return x_semantic - - -def semantic_to_waveform( - semantic_tokens: np.ndarray, - history_prompt: Optional[Union[Dict, str]] = None, - temp: float = 0.7, - silent: bool = False, - output_full: bool = False, -): - """Generate audio array from semantic input. - - Args: - semantic_tokens: semantic token output from `text_to_semantic` - history_prompt: history choice for audio cloning - temp: generation temperature (1.0 more diverse, 0.0 more conservative) - silent: disable progress bar - output_full: return full generation to be used as a history prompt - - Returns: - numpy audio array at sample frequency 24khz - """ - coarse_tokens = generate_coarse( - semantic_tokens, - history_prompt=history_prompt, - temp=temp, - silent=silent, - use_kv_caching=True - ) - fine_tokens = generate_fine( - coarse_tokens, - history_prompt=history_prompt, - temp=0.5, - ) - audio_arr = codec_decode(fine_tokens) - if output_full: - full_generation = { - "semantic_prompt": semantic_tokens, - "coarse_prompt": coarse_tokens, - "fine_prompt": fine_tokens, - } - return full_generation, audio_arr - return audio_arr - - -def save_as_prompt(filepath, full_generation): - assert(filepath.endswith(".npz")) - assert(isinstance(full_generation, dict)) - assert("semantic_prompt" in full_generation) - assert("coarse_prompt" in full_generation) - assert("fine_prompt" in full_generation) - np.savez(filepath, **full_generation) - - -def generate_audio( - text: str, - history_prompt: Optional[Union[Dict, str]] = None, - text_temp: float = 0.7, - waveform_temp: float = 0.7, - silent: bool = False, - output_full: bool = False, -): - """Generate audio array from input text. 
- - Args: - text: text to be turned into audio - history_prompt: history choice for audio cloning - text_temp: generation temperature (1.0 more diverse, 0.0 more conservative) - waveform_temp: generation temperature (1.0 more diverse, 0.0 more conservative) - silent: disable progress bar - output_full: return full generation to be used as a history prompt - - Returns: - numpy audio array at sample frequency 24khz - """ - semantic_tokens = text_to_semantic( - text, - history_prompt=history_prompt, - temp=text_temp, - silent=silent, - ) - out = semantic_to_waveform( - semantic_tokens, - history_prompt=history_prompt, - temp=waveform_temp, - silent=silent, - output_full=output_full, - ) - if output_full: - full_generation, audio_arr = out - return full_generation, audio_arr - else: - audio_arr = out - return audio_arr diff --git a/spaces/kinensake/quanquan/lm_scorer/models/gpt2.py b/spaces/kinensake/quanquan/lm_scorer/models/gpt2.py deleted file mode 100644 index f5f8d480364059b76fa974564e18ab2e1a4d502e..0000000000000000000000000000000000000000 --- a/spaces/kinensake/quanquan/lm_scorer/models/gpt2.py +++ /dev/null @@ -1,85 +0,0 @@ -from typing import * # pylint: disable=wildcard-import,unused-wildcard-import - - -import torch -from transformers import AutoTokenizer, GPT2LMHeadModel -from transformers import GPT2_PRETRAINED_CONFIG_ARCHIVE_MAP -from transformers.tokenization_utils import BatchEncoding - -from .abc.transformers import TransformersLMScorer - - -class GPT2LMScorer(TransformersLMScorer): - # @overrides - def _build(self, model_name: str, options: Dict[str, Any]) -> None: - super()._build(model_name, options) - - # pylint: disable=attribute-defined-outside-init - self.tokenizer = AutoTokenizer.from_pretrained( - model_name, use_fast=True, add_special_tokens=False - ) - # Add the pad token to GPT2 dictionary. - # len(tokenizer) = vocab_size + 1 - self.tokenizer.add_special_tokens({"additional_special_tokens": ["<|pad|>"]}) - self.tokenizer.pad_token = "<|pad|>" - - self.model = GPT2LMHeadModel.from_pretrained(model_name) - # We need to resize the embedding layer because we added the pad token. 
- self.model.resize_token_embeddings(len(self.tokenizer)) - self.model.eval() - if "device" in options: - self.model.to(options["device"]) - - def _add_special_tokens(self, text: str) -> str: - return self.tokenizer.bos_token + text + self.tokenizer.eos_token - - # @overrides - def _tokens_log_prob_for_batch( - self, text: List[str] - ) -> List[Tuple[torch.DoubleTensor, torch.LongTensor, List[str]]]: - outputs: List[Tuple[torch.DoubleTensor, torch.LongTensor, List[str]]] = [] - if len(text) == 0: - return outputs - - # TODO: Handle overflowing elements for long sentences - text = list(map(self._add_special_tokens, text)) - encoding: BatchEncoding = self.tokenizer.batch_encode_plus( - text, return_tensors="pt", - ) - with torch.no_grad(): - ids = encoding["input_ids"].to(self.model.device) - attention_mask = encoding["attention_mask"].to(self.model.device) - nopad_mask = ids != self.tokenizer.pad_token_id - logits: torch.Tensor = self.model(ids, attention_mask=attention_mask)[0] - - for sent_index in range(len(text)): - sent_nopad_mask = nopad_mask[sent_index] - # len(tokens) = len(text[sent_index]) + 1 - sent_tokens = [ - tok - for i, tok in enumerate(encoding.tokens(sent_index)) - if sent_nopad_mask[i] and i != 0 - ] - - # sent_ids.shape = [len(text[sent_index]) + 1] - sent_ids = ids[sent_index, sent_nopad_mask][1:] - # logits.shape = [len(text[sent_index]) + 1, vocab_size] - sent_logits = logits[sent_index, sent_nopad_mask][:-1, :] - sent_logits[:, self.tokenizer.pad_token_id] = float("-inf") - # ids_scores.shape = [seq_len + 1] - sent_ids_scores = sent_logits.gather(1, sent_ids.unsqueeze(1)).squeeze(1) - # log_prob.shape = [seq_len + 1] - sent_log_probs = sent_ids_scores - sent_logits.logsumexp(1) - - sent_log_probs = cast(torch.DoubleTensor, sent_log_probs) - sent_ids = cast(torch.LongTensor, sent_ids) - - output = (sent_log_probs, sent_ids, sent_tokens) - outputs.append(output) - - return outputs - - # @overrides - @classmethod - def _supported_model_names(cls) -> Iterable[str]: - return GPT2_PRETRAINED_CONFIG_ARCHIVE_MAP.keys() diff --git a/spaces/kira4424/Tacotron-zero-short-voice-clone/mkgui/__init__.py b/spaces/kira4424/Tacotron-zero-short-voice-clone/mkgui/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/koajoel/PolyFormer/fairseq/examples/speech_recognition/new/decoders/__init__.py b/spaces/koajoel/PolyFormer/fairseq/examples/speech_recognition/new/decoders/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/kquote03/lama-video-watermark-remover/saicinpainting/evaluation/masks/countless/__init__.py b/spaces/kquote03/lama-video-watermark-remover/saicinpainting/evaluation/masks/countless/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/kukuhtw/AutoGPT/autogpt/workspace.py b/spaces/kukuhtw/AutoGPT/autogpt/workspace.py deleted file mode 100644 index 6fb0e3113eb2c1338edf7f86c6e162fc27c61e50..0000000000000000000000000000000000000000 --- a/spaces/kukuhtw/AutoGPT/autogpt/workspace.py +++ /dev/null @@ -1,47 +0,0 @@ -from __future__ import annotations - -import os -from pathlib import Path - -from autogpt.config import Config - -CFG = Config() - -# Set a dedicated folder for file I/O -WORKSPACE_PATH = Path(os.getcwd()) / "auto_gpt_workspace" - -# Create the directory if it doesn't exist -if not 
os.path.exists(WORKSPACE_PATH): - os.makedirs(WORKSPACE_PATH) - - -def path_in_workspace(relative_path: str | Path) -> Path: - """Get full path for item in workspace - - Parameters: - relative_path (str | Path): Path to translate into the workspace - - Returns: - Path: Absolute path for the given path in the workspace - """ - return safe_path_join(WORKSPACE_PATH, relative_path) - - -def safe_path_join(base: Path, *paths: str | Path) -> Path: - """Join one or more path components, asserting the resulting path is within the workspace. - - Args: - base (Path): The base path - *paths (str): The paths to join to the base path - - Returns: - Path: The joined path - """ - joined_path = base.joinpath(*paths).resolve() - - if CFG.restrict_to_workspace and not joined_path.is_relative_to(base): - raise ValueError( - f"Attempted to access path '{joined_path}' outside of workspace '{base}'." - ) - - return joined_path diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/cu2qu/ufo.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/cu2qu/ufo.py deleted file mode 100644 index 10367cfecf8384e32eace3b9d0e01ab6c588c324..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/cu2qu/ufo.py +++ /dev/null @@ -1,349 +0,0 @@ -# Copyright 2015 Google Inc. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -"""Converts cubic bezier curves to quadratic splines. - -Conversion is performed such that the quadratic splines keep the same end-curve -tangents as the original cubics. The approach is iterative, increasing the -number of segments for a spline until the error gets below a bound. - -Respective curves from multiple fonts will be converted at once to ensure that -the resulting splines are interpolation-compatible. -""" - -import logging -from fontTools.pens.basePen import AbstractPen -from fontTools.pens.pointPen import PointToSegmentPen -from fontTools.pens.reverseContourPen import ReverseContourPen - -from . import curves_to_quadratic -from .errors import ( - UnequalZipLengthsError, - IncompatibleSegmentNumberError, - IncompatibleSegmentTypesError, - IncompatibleGlyphsError, - IncompatibleFontsError, -) - - -__all__ = ["fonts_to_quadratic", "font_to_quadratic"] - -# The default approximation error below is a relative value (1/1000 of the EM square). -# Later on, we convert it to absolute font units by multiplying it by a font's UPEM -# (see fonts_to_quadratic). -DEFAULT_MAX_ERR = 0.001 -CURVE_TYPE_LIB_KEY = "com.github.googlei18n.cu2qu.curve_type" - -logger = logging.getLogger(__name__) - - -_zip = zip - - -def zip(*args): - """Ensure each argument to zip has the same length. Also make sure a list is - returned for python 2/3 compatibility. 
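-
-    Illustrative sketch (not part of the original module)::
-
-        zip([1, 2], ["a", "b"])   # -> [(1, 'a'), (2, 'b')]
-        zip([1, 2], ["a"])        # raises UnequalZipLengthsError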
- """ - - if len(set(len(a) for a in args)) != 1: - raise UnequalZipLengthsError(*args) - return list(_zip(*args)) - - -class GetSegmentsPen(AbstractPen): - """Pen to collect segments into lists of points for conversion. - - Curves always include their initial on-curve point, so some points are - duplicated between segments. - """ - - def __init__(self): - self._last_pt = None - self.segments = [] - - def _add_segment(self, tag, *args): - if tag in ["move", "line", "qcurve", "curve"]: - self._last_pt = args[-1] - self.segments.append((tag, args)) - - def moveTo(self, pt): - self._add_segment("move", pt) - - def lineTo(self, pt): - self._add_segment("line", pt) - - def qCurveTo(self, *points): - self._add_segment("qcurve", self._last_pt, *points) - - def curveTo(self, *points): - self._add_segment("curve", self._last_pt, *points) - - def closePath(self): - self._add_segment("close") - - def endPath(self): - self._add_segment("end") - - def addComponent(self, glyphName, transformation): - pass - - -def _get_segments(glyph): - """Get a glyph's segments as extracted by GetSegmentsPen.""" - - pen = GetSegmentsPen() - # glyph.draw(pen) - # We can't simply draw the glyph with the pen, but we must initialize the - # PointToSegmentPen explicitly with outputImpliedClosingLine=True. - # By default PointToSegmentPen does not outputImpliedClosingLine -- unless - # last and first point on closed contour are duplicated. Because we are - # converting multiple glyphs at the same time, we want to make sure - # this function returns the same number of segments, whether or not - # the last and first point overlap. - # https://github.com/googlefonts/fontmake/issues/572 - # https://github.com/fonttools/fonttools/pull/1720 - pointPen = PointToSegmentPen(pen, outputImpliedClosingLine=True) - glyph.drawPoints(pointPen) - return pen.segments - - -def _set_segments(glyph, segments, reverse_direction): - """Draw segments as extracted by GetSegmentsPen back to a glyph.""" - - glyph.clearContours() - pen = glyph.getPen() - if reverse_direction: - pen = ReverseContourPen(pen) - for tag, args in segments: - if tag == "move": - pen.moveTo(*args) - elif tag == "line": - pen.lineTo(*args) - elif tag == "curve": - pen.curveTo(*args[1:]) - elif tag == "qcurve": - pen.qCurveTo(*args[1:]) - elif tag == "close": - pen.closePath() - elif tag == "end": - pen.endPath() - else: - raise AssertionError('Unhandled segment type "%s"' % tag) - - -def _segments_to_quadratic(segments, max_err, stats, all_quadratic=True): - """Return quadratic approximations of cubic segments.""" - - assert all(s[0] == "curve" for s in segments), "Non-cubic given to convert" - - new_points = curves_to_quadratic([s[1] for s in segments], max_err, all_quadratic) - n = len(new_points[0]) - assert all(len(s) == n for s in new_points[1:]), "Converted incompatibly" - - spline_length = str(n - 2) - stats[spline_length] = stats.get(spline_length, 0) + 1 - - if all_quadratic or n == 3: - return [("qcurve", p) for p in new_points] - else: - return [("curve", p) for p in new_points] - - -def _glyphs_to_quadratic(glyphs, max_err, reverse_direction, stats, all_quadratic=True): - """Do the actual conversion of a set of compatible glyphs, after arguments - have been set up. - - Return True if the glyphs were modified, else return False. 
- """ - - try: - segments_by_location = zip(*[_get_segments(g) for g in glyphs]) - except UnequalZipLengthsError: - raise IncompatibleSegmentNumberError(glyphs) - if not any(segments_by_location): - return False - - # always modify input glyphs if reverse_direction is True - glyphs_modified = reverse_direction - - new_segments_by_location = [] - incompatible = {} - for i, segments in enumerate(segments_by_location): - tag = segments[0][0] - if not all(s[0] == tag for s in segments[1:]): - incompatible[i] = [s[0] for s in segments] - elif tag == "curve": - new_segments = _segments_to_quadratic( - segments, max_err, stats, all_quadratic - ) - if all_quadratic or new_segments != segments: - glyphs_modified = True - segments = new_segments - new_segments_by_location.append(segments) - - if glyphs_modified: - new_segments_by_glyph = zip(*new_segments_by_location) - for glyph, new_segments in zip(glyphs, new_segments_by_glyph): - _set_segments(glyph, new_segments, reverse_direction) - - if incompatible: - raise IncompatibleSegmentTypesError(glyphs, segments=incompatible) - return glyphs_modified - - -def glyphs_to_quadratic( - glyphs, max_err=None, reverse_direction=False, stats=None, all_quadratic=True -): - """Convert the curves of a set of compatible of glyphs to quadratic. - - All curves will be converted to quadratic at once, ensuring interpolation - compatibility. If this is not required, calling glyphs_to_quadratic with one - glyph at a time may yield slightly more optimized results. - - Return True if glyphs were modified, else return False. - - Raises IncompatibleGlyphsError if glyphs have non-interpolatable outlines. - """ - if stats is None: - stats = {} - - if not max_err: - # assume 1000 is the default UPEM - max_err = DEFAULT_MAX_ERR * 1000 - - if isinstance(max_err, (list, tuple)): - max_errors = max_err - else: - max_errors = [max_err] * len(glyphs) - assert len(max_errors) == len(glyphs) - - return _glyphs_to_quadratic( - glyphs, max_errors, reverse_direction, stats, all_quadratic - ) - - -def fonts_to_quadratic( - fonts, - max_err_em=None, - max_err=None, - reverse_direction=False, - stats=None, - dump_stats=False, - remember_curve_type=True, - all_quadratic=True, -): - """Convert the curves of a collection of fonts to quadratic. - - All curves will be converted to quadratic at once, ensuring interpolation - compatibility. If this is not required, calling fonts_to_quadratic with one - font at a time may yield slightly more optimized results. - - Return True if fonts were modified, else return False. - - By default, cu2qu stores the curve type in the fonts' lib, under a private - key "com.github.googlei18n.cu2qu.curve_type", and will not try to convert - them again if the curve type is already set to "quadratic". - Setting 'remember_curve_type' to False disables this optimization. - - Raises IncompatibleFontsError if same-named glyphs from different fonts - have non-interpolatable outlines. 
- """ - - if remember_curve_type: - curve_types = {f.lib.get(CURVE_TYPE_LIB_KEY, "cubic") for f in fonts} - if len(curve_types) == 1: - curve_type = next(iter(curve_types)) - if curve_type in ("quadratic", "mixed"): - logger.info("Curves already converted to quadratic") - return False - elif curve_type == "cubic": - pass # keep converting - else: - raise NotImplementedError(curve_type) - elif len(curve_types) > 1: - # going to crash later if they do differ - logger.warning("fonts may contain different curve types") - - if stats is None: - stats = {} - - if max_err_em and max_err: - raise TypeError("Only one of max_err and max_err_em can be specified.") - if not (max_err_em or max_err): - max_err_em = DEFAULT_MAX_ERR - - if isinstance(max_err, (list, tuple)): - assert len(max_err) == len(fonts) - max_errors = max_err - elif max_err: - max_errors = [max_err] * len(fonts) - - if isinstance(max_err_em, (list, tuple)): - assert len(fonts) == len(max_err_em) - max_errors = [f.info.unitsPerEm * e for f, e in zip(fonts, max_err_em)] - elif max_err_em: - max_errors = [f.info.unitsPerEm * max_err_em for f in fonts] - - modified = False - glyph_errors = {} - for name in set().union(*(f.keys() for f in fonts)): - glyphs = [] - cur_max_errors = [] - for font, error in zip(fonts, max_errors): - if name in font: - glyphs.append(font[name]) - cur_max_errors.append(error) - try: - modified |= _glyphs_to_quadratic( - glyphs, cur_max_errors, reverse_direction, stats, all_quadratic - ) - except IncompatibleGlyphsError as exc: - logger.error(exc) - glyph_errors[name] = exc - - if glyph_errors: - raise IncompatibleFontsError(glyph_errors) - - if modified and dump_stats: - spline_lengths = sorted(stats.keys()) - logger.info( - "New spline lengths: %s" - % (", ".join("%s: %d" % (l, stats[l]) for l in spline_lengths)) - ) - - if remember_curve_type: - for font in fonts: - curve_type = font.lib.get(CURVE_TYPE_LIB_KEY, "cubic") - new_curve_type = "quadratic" if all_quadratic else "mixed" - if curve_type != new_curve_type: - font.lib[CURVE_TYPE_LIB_KEY] = new_curve_type - modified = True - return modified - - -def glyph_to_quadratic(glyph, **kwargs): - """Convenience wrapper around glyphs_to_quadratic, for just one glyph. - Return True if the glyph was modified, else return False. - """ - - return glyphs_to_quadratic([glyph], **kwargs) - - -def font_to_quadratic(font, **kwargs): - """Convenience wrapper around fonts_to_quadratic, for just one font. - Return True if the font was modified, else return False. - """ - - return fonts_to_quadratic([font], **kwargs) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/huggingface_hub/utils/tqdm.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/huggingface_hub/utils/tqdm.py deleted file mode 100644 index 81bdfb0500a71e4015d3e2bf539571fae54e7f96..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/huggingface_hub/utils/tqdm.py +++ /dev/null @@ -1,178 +0,0 @@ -#!/usr/bin/env python -# coding=utf-8 -# Copyright 2021 The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License -"""Utility helpers to handle progress bars in `huggingface_hub`. - -Example: - 1. Use `huggingface_hub.utils.tqdm` as you would use `tqdm.tqdm` or `tqdm.auto.tqdm`. - 2. To disable progress bars, either use `disable_progress_bars()` helper or set the - environment variable `HF_HUB_DISABLE_PROGRESS_BARS` to 1. - 3. To re-enable progress bars, use `enable_progress_bars()`. - 4. To check whether progress bars are disabled, use `are_progress_bars_disabled()`. - -NOTE: Environment variable `HF_HUB_DISABLE_PROGRESS_BARS` has the priority. - -Example: - ```py - from huggingface_hub.utils import ( - are_progress_bars_disabled, - disable_progress_bars, - enable_progress_bars, - tqdm, - ) - - # Disable progress bars globally - disable_progress_bars() - - # Use as normal `tqdm` - for _ in tqdm(range(5)): - do_something() - - # Still not showing progress bars, as `disable=False` is overwritten to `True`. - for _ in tqdm(range(5), disable=False): - do_something() - - are_progress_bars_disabled() # True - - # Re-enable progress bars globally - enable_progress_bars() - - # Progress bar will be shown ! - for _ in tqdm(range(5)): - do_something() - ``` -""" -import io -import warnings -from contextlib import contextmanager -from pathlib import Path -from typing import Iterator, Optional, Union - -from tqdm.auto import tqdm as old_tqdm - -from ..constants import HF_HUB_DISABLE_PROGRESS_BARS - - -# `HF_HUB_DISABLE_PROGRESS_BARS` is `Optional[bool]` while `_hf_hub_progress_bars_disabled` -# is a `bool`. If `HF_HUB_DISABLE_PROGRESS_BARS` is set to True or False, it has priority. -# If `HF_HUB_DISABLE_PROGRESS_BARS` is None, it means the user have not set the -# environment variable and is free to enable/disable progress bars programmatically. -# TL;DR: env variable has priority over code. -# -# By default, progress bars are enabled. -_hf_hub_progress_bars_disabled: bool = HF_HUB_DISABLE_PROGRESS_BARS or False - - -def disable_progress_bars() -> None: - """ - Disable globally progress bars used in `huggingface_hub` except if `HF_HUB_DISABLE_PROGRESS_BARS` environment - variable has been set. - - Use [`~utils.enable_progress_bars`] to re-enable them. - """ - if HF_HUB_DISABLE_PROGRESS_BARS is False: - warnings.warn( - "Cannot disable progress bars: environment variable `HF_HUB_DISABLE_PROGRESS_BARS=0` is set and has" - " priority." - ) - return - global _hf_hub_progress_bars_disabled - _hf_hub_progress_bars_disabled = True - - -def enable_progress_bars() -> None: - """ - Enable globally progress bars used in `huggingface_hub` except if `HF_HUB_DISABLE_PROGRESS_BARS` environment - variable has been set. - - Use [`~utils.disable_progress_bars`] to disable them. - """ - if HF_HUB_DISABLE_PROGRESS_BARS is True: - warnings.warn( - "Cannot enable progress bars: environment variable `HF_HUB_DISABLE_PROGRESS_BARS=1` is set and has" - " priority." - ) - return - global _hf_hub_progress_bars_disabled - _hf_hub_progress_bars_disabled = False - - -def are_progress_bars_disabled() -> bool: - """Return whether progress bars are globally disabled or not. 
- - Progress bars used in `huggingface_hub` can be enable or disabled globally using [`~utils.enable_progress_bars`] - and [`~utils.disable_progress_bars`] or by setting `HF_HUB_DISABLE_PROGRESS_BARS` as environment variable. - """ - global _hf_hub_progress_bars_disabled - return _hf_hub_progress_bars_disabled - - -class tqdm(old_tqdm): - """ - Class to override `disable` argument in case progress bars are globally disabled. - - Taken from https://github.com/tqdm/tqdm/issues/619#issuecomment-619639324. - """ - - def __init__(self, *args, **kwargs): - if are_progress_bars_disabled(): - kwargs["disable"] = True - super().__init__(*args, **kwargs) - - -@contextmanager -def tqdm_stream_file(path: Union[Path, str]) -> Iterator[io.BufferedReader]: - """ - Open a file as binary and wrap the `read` method to display a progress bar when it's streamed. - - First implemented in `transformers` in 2019 but removed when switched to git-lfs. Used in `huggingface_hub` to show - progress bar when uploading an LFS file to the Hub. See github.com/huggingface/transformers/pull/2078#discussion_r354739608 - for implementation details. - - Note: currently implementation handles only files stored on disk as it is the most common use case. Could be - extended to stream any `BinaryIO` object but we might have to debug some corner cases. - - Example: - ```py - >>> with tqdm_stream_file("config.json") as f: - >>> requests.put(url, data=f) - config.json: 100%|█████████████████████████| 8.19k/8.19k [00:02<00:00, 3.72kB/s] - ``` - """ - if isinstance(path, str): - path = Path(path) - - with path.open("rb") as f: - total_size = path.stat().st_size - pbar = tqdm( - unit="B", - unit_scale=True, - total=total_size, - initial=0, - desc=path.name, - ) - - f_read = f.read - - def _inner_read(size: Optional[int] = -1) -> bytes: - data = f_read(size) - pbar.update(len(data)) - return data - - f.read = _inner_read # type: ignore - - yield f - - pbar.close() diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/testing/jpl_units/__init__.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/testing/jpl_units/__init__.py deleted file mode 100644 index b8caa9a8957a250b78712c25175bec415507e416..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/testing/jpl_units/__init__.py +++ /dev/null @@ -1,76 +0,0 @@ -""" -A sample set of units for use with testing unit conversion -of Matplotlib routines. These are used because they use very strict -enforcement of unitized data which will test the entire spectrum of how -unitized data might be used (it is not always meaningful to convert to -a float without specific units given). - -UnitDbl is essentially a unitized floating point number. It has a -minimal set of supported units (enough for testing purposes). All -of the mathematical operation are provided to fully test any behaviour -that might occur with unitized data. Remember that unitized data has -rules as to how it can be applied to one another (a value of distance -cannot be added to a value of time). Thus we need to guard against any -accidental "default" conversion that will strip away the meaning of the -data and render it neutered. - -Epoch is different than a UnitDbl of time. Time is something that can be -measured where an Epoch is a specific moment in time. Epochs are typically -referenced as an offset from some predetermined epoch. - -A difference of two epochs is a Duration. 
The distinction between a Duration -and a UnitDbl of time is made because an Epoch can have different frames (or -units). In the case of our test Epoch class the two allowed frames are 'UTC' -and 'ET' (Note that these are rough estimates provided for testing purposes -and should not be used in production code where accuracy of time frames is -desired). As such a Duration also has a frame of reference and therefore needs -to be called out as different that a simple measurement of time since a delta-t -in one frame may not be the same in another. -""" - -from .Duration import Duration -from .Epoch import Epoch -from .UnitDbl import UnitDbl - -from .StrConverter import StrConverter -from .EpochConverter import EpochConverter -from .UnitDblConverter import UnitDblConverter - -from .UnitDblFormatter import UnitDblFormatter - - -__version__ = "1.0" - -__all__ = [ - 'register', - 'Duration', - 'Epoch', - 'UnitDbl', - 'UnitDblFormatter', - ] - - -def register(): - """Register the unit conversion classes with matplotlib.""" - import matplotlib.units as mplU - - mplU.registry[str] = StrConverter() - mplU.registry[Epoch] = EpochConverter() - mplU.registry[Duration] = EpochConverter() - mplU.registry[UnitDbl] = UnitDblConverter() - - -# Some default unit instances -# Distances -m = UnitDbl(1.0, "m") -km = UnitDbl(1.0, "km") -mile = UnitDbl(1.0, "mile") -# Angles -deg = UnitDbl(1.0, "deg") -rad = UnitDbl(1.0, "rad") -# Time -sec = UnitDbl(1.0, "sec") -min = UnitDbl(1.0, "min") -hr = UnitDbl(1.0, "hour") -day = UnitDbl(24.0, "hour") -sec = UnitDbl(1.0, "sec") diff --git a/spaces/lightli/bingo-newbing/src/components/ui/input.tsx b/spaces/lightli/bingo-newbing/src/components/ui/input.tsx deleted file mode 100644 index 684a857f3d769b78818fb13de1abaebfb09ca79c..0000000000000000000000000000000000000000 --- a/spaces/lightli/bingo-newbing/src/components/ui/input.tsx +++ /dev/null @@ -1,25 +0,0 @@ -import * as React from 'react' - -import { cn } from '@/lib/utils' - -export interface InputProps - extends React.InputHTMLAttributes {} - -const Input = React.forwardRef( - ({ className, type, ...props }, ref) => { - return ( - - ) - } -) -Input.displayName = 'Input' - -export { Input } diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Bukuajarilmubedahdejongpdfdownload.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Bukuajarilmubedahdejongpdfdownload.md deleted file mode 100644 index feca0d49b3409a20eee5b6d1bdbda1b6d92b98b7..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Bukuajarilmubedahdejongpdfdownload.md +++ /dev/null @@ -1,82 +0,0 @@ -
-

Buku Ajar Ilmu Bedah de Jong PDF Download: A Must-Have for Surgery Students and Practitioners

-

If you are looking for a comprehensive and reliable textbook on surgery, you might want to consider Buku Ajar Ilmu Bedah de Jong PDF download. This textbook is written by R. Sjamsuhidajat and Wim de Jong, two renowned experts in surgical science. It covers various topics and aspects of surgery, from basic principles to advanced techniques, from anatomy to pathology, from diagnosis to treatment, and from prevention to rehabilitation. In this article, we will show you how to get Buku Ajar Ilmu Bedah de Jong PDF download for free, what its features and benefits are, and what reviews and testimonials users have shared.

-

bukuajarilmubedahdejongpdfdownload


Download File →→→ https://bytlly.com/2uGvXW



-

How to Get Buku Ajar Ilmu Bedah de Jong PDF Download for Free?

-

To get Buku Ajar Ilmu Bedah de Jong PDF download for free, you need to follow these steps:

-
    -
  1. Go to this link: https://idoc.pub/download/buku-ajar-ilmu-bedah-sjamsuhidajat-wim-de-jong-34m7o96epz46
  2. -
  3. Click on the green button that says "Download Now".
  4. -
  5. Wait for the download to complete.
  6. -
  7. Open the PDF file using any PDF reader software.
  8. -
  9. You have successfully obtained Buku Ajar Ilmu Bedah de Jong PDF download for free.
  10. -
-

What are the Features and Benefits of Buku Ajar Ilmu Bedah de Jong PDF Download?

-

Buku Ajar Ilmu Bedah de Jong PDF download has many features and benefits that make it a valuable resource for surgery students and practitioners. Some of them are:

-
    -
  • It is written in the Indonesian language, which makes it easy for Indonesian readers to understand and apply.
  • -
  • It is based on the latest scientific evidence and clinical practice guidelines, which ensures its accuracy and relevance.
  • -
  • It is organized into chapters based on anatomy, which facilitates the learning and teaching process.
  • -
  • It covers a wide range of surgical topics and specialties, such as general surgery, trauma surgery, vascular surgery, thoracic surgery, cardiac surgery, neurosurgery, urology, plastic surgery, orthopedic surgery, pediatric surgery, oncology surgery, transplant surgery, and more.
  • -
  • It provides clear and concise explanations of surgical concepts and principles, supported by illustrations, tables, charts, diagrams, algorithms, and photographs.
  • -
  • It offers practical and useful tips and tricks for surgical procedures and techniques, as well as potential complications and management strategies.
  • -
  • It includes case studies and clinical scenarios that illustrate real-life situations and challenges in surgery.
  • -
  • It contains review questions and answers at the end of each chapter that help readers assess their knowledge and comprehension.
  • -
  • It has a user-friendly layout and design that make it easy to read and navigate.
  • -
-

What are the Reviews and Testimonials from Users of Buku Ajar Ilmu Bedah de Jong PDF Download?

-

Buku Ajar Ilmu Bedah de Jong PDF download has received positive reviews and testimonials from users who have used it for their study or practice. Some of them are:

-
-
"Buku Ajar Ilmu Bedah de Jong PDF download is a great textbook for surgery students and practitioners. It covers everything you need to know about surgery in a comprehensive and concise way. It is easy to understand and apply. It is also updated with the latest research and guidelines. I highly recommend it."
-
- Dr. Andi Pratama, General Surgeon
-
"I have been using Buku Ajar Ilmu Bedah de Jong PDF download for my surgical residency program. It is very helpful and informative. It explains the surgical concepts and principles clearly and logically. It also provides practical tips and tricks for surgical procedures and techniques. It is a must-have for every surgery resident."
-
- Dr. Rina Sari, Surgical Resident
-
"Buku Ajar Ilmu Bedah de Jong PDF download is an excellent textbook for surgery teachers and lecturers. It is organized into chapters based on anatomy, which makes it easy to teach and learn. It also includes case studies and clinical scenarios that stimulate discussion and problem-solving skills. It is a valuable resource for every surgery educator."
-
- Dr. Budi Santoso, Surgery Lecturer
-
- -

You have now learned how to get Buku Ajar Ilmu Bedah de Jong PDF download for free, what its features and benefits are, and what reviews and testimonials users have shared. You can now use this textbook as a reference or guide for your study or practice in surgery.

-

How to Use Buku Ajar Ilmu Bedah de Jong PDF Download?

-

To use Buku Ajar Ilmu Bedah de Jong PDF download effectively and efficiently, follow these steps (a small scripted alternative is sketched after the list):

-
    -
  1. Open the PDF file using any PDF reader software.
  2. -
  3. Use the table of contents or the index to find the chapter or topic you want to read or study.
  4. -
  5. Use the bookmarks or the navigation pane to jump to different sections or pages.
  6. -
  7. Use the zoom or the fit functions to adjust the view or the size of the text and images.
  8. -
  9. Use the highlight, underline, comment or annotation functions to mark or note important points or information.
  10. -
  11. Use the print or save functions to print or save a copy of the PDF file for your personal use.
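For readers who prefer scripting to clicking, the navigation in steps 2 and 3 can also be done programmatically. The following is a minimal sketch using the third-party pypdf library; the library choice and the local filename are illustrative assumptions, not part of the steps above.

```python
# Minimal sketch: print the PDF's outline (its table of contents) and peek at one
# page, mirroring steps 2 and 3 above. Assumes `pip install pypdf` and that the
# book is saved locally under the hypothetical filename below.
from pypdf import PdfReader

reader = PdfReader("buku-ajar-ilmu-bedah.pdf")  # hypothetical local filename

def print_outline(outline, depth=0):
    # pypdf returns the outline as a nested list of destinations;
    # recurse through sublists to render a simple indented TOC.
    for item in outline:
        if isinstance(item, list):
            print_outline(item, depth + 1)
        else:
            print("  " * depth + str(item.title))

print_outline(reader.outline)

# Peek at the first page's text, e.g. to confirm the file opened correctly.
print(reader.pages[0].extract_text()[:300])
```

If the file has no embedded outline, `reader.outline` is simply an empty list, and you would fall back to searching the extracted page text instead.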
  12. -
-

What are the Limitations of Buku Ajar Ilmu Bedah de Jong PDF Download?

-

While Buku Ajar Ilmu Bedah de Jong PDF download may seem convenient and beneficial, it also has some limitations that you need to be aware of. Some of them are:

-

-
    -
  • It may violate the intellectual property rights of the authors and publishers, and you may face legal consequences if you use it without their permission or authorization.
  • -
  • It may not be updated with the latest research and guidelines, and you may miss out on some important or relevant information or changes.
  • -
  • It may not have the same quality and accuracy as the original printed version, and you may encounter errors or discrepancies in the text or images.
  • -
  • It may not have some features or functions that are available in the original printed version, such as interactive exercises, online resources, or supplementary materials.
  • -
  • It may not be compatible with some devices or software, and you may have difficulties in opening or viewing it.
  • -
-


-

Who are the Authors of Buku Ajar Ilmu Bedah de Jong PDF Download?

-

Buku Ajar Ilmu Bedah de Jong PDF download is written by two distinguished authors who have extensive experience and expertise in surgical science. They are:

-
    -
  • R. Sjamsuhidajat, MD, PhD, FACS, FICS. He is a professor of surgery at the Faculty of Medicine, University of Indonesia. He is also the founder and chairman of the Indonesian College of Surgeons. He has published many books and articles on surgery, and has received many awards and honors for his contributions to surgical education and research.
  • -
  • Wim de Jong, MD, PhD, FACS, FICS. He is a professor emeritus of surgery at the Faculty of Medicine, University of Indonesia. He is also the former president of the International College of Surgeons. He has published many books and articles on surgery, and has received many awards and honors for his contributions to surgical education and research.
  • -
-

What are the Contents of Buku Ajar Ilmu Bedah de Jong PDF Download?

-

Buku Ajar Ilmu Bedah de Jong PDF download consists of four volumes that cover various topics and aspects of surgery. The contents are as follows:

-
    -
  1. Volume 1: Principles of Surgery. This volume covers the basic principles and concepts of surgery, such as surgical anatomy, physiology, pathology, microbiology, immunology, pharmacology, nutrition, metabolism, wound healing, infection, inflammation, shock, trauma, fluid and electrolyte balance, blood transfusion, anesthesia, pain management, ethics, and legal issues.
  2. -
  3. Volume 2: General Surgery. This volume covers the general topics and specialties of surgery, such as abdominal surgery, gastrointestinal surgery, hepatobiliary surgery, pancreatic surgery, endocrine surgery, breast surgery, vascular surgery, thoracic surgery, cardiac surgery, neurosurgery, urology, plastic surgery, orthopedic surgery, pediatric surgery, oncology surgery, transplant surgery, and minimally invasive surgery.
  4. -
  5. Volume 3: Surgical Techniques. This volume covers the practical and technical aspects of surgical procedures and techniques, such as surgical instruments and equipment, surgical incisions and closures, surgical sutures and knots, surgical drains and tubes, surgical hemostasis and bleeding control, and the management of surgical infections and complications.
  6. -
  7. Volume 4: Surgical Cases. This volume covers the clinical cases and scenarios that illustrate real-life situations and challenges in surgery. It also provides the diagnosis, differential diagnosis, investigations, treatment options, outcomes and prognosis for each case.
  8. -
- -

You now know more about Buku Ajar Ilmu Bedah de Jong PDF download, including who wrote it and what it contains. You can use this textbook as a reference or guide for your study or practice in surgery.

-

Conclusion

-

In this article, we have shown you how to get Buku Ajar Ilmu Bedah de Jong PDF download for free, what its features, benefits, and limitations are, how to use it, what users say about it, who its authors are, and what it contains. We hope that this article has been helpful and informative. However, we do not recommend using Buku Ajar Ilmu Bedah de Jong PDF download: it may violate the intellectual property rights of the authors and publishers, it may not be updated with the latest research and guidelines, it may lack the quality, accuracy, features, and functions of the original printed version, and it may not be compatible with your device or software. If you want to use Buku Ajar Ilmu Bedah de Jong, buy the original printed version or choose another alternative that suits your needs and preferences.

-

Thank you for reading this article. If you have any questions or feedback, please feel free to contact us or leave a comment below.

-
-
\ No newline at end of file diff --git a/spaces/liuyuan-pal/SyncDreamer/app.py b/spaces/liuyuan-pal/SyncDreamer/app.py deleted file mode 100644 index 716135fb2072b3fe2a3fba6111d4683e1ec30761..0000000000000000000000000000000000000000 --- a/spaces/liuyuan-pal/SyncDreamer/app.py +++ /dev/null @@ -1,261 +0,0 @@ -from functools import partial - -from PIL import Image -import numpy as np -import gradio as gr -import torch -import os -import fire -from omegaconf import OmegaConf - -from ldm.models.diffusion.sync_dreamer import SyncDDIMSampler, SyncMultiviewDiffusion -from ldm.util import add_margin, instantiate_from_config -from sam_utils import sam_init, sam_out_nosave - -import torch -_TITLE = '''SyncDreamer: Generating Multiview-consistent Images from a Single-view Image''' -_DESCRIPTION = ''' -
- - - -
-Given a single-view image, SyncDreamer is able to generate multiview-consistent images, which enables direct 3D reconstruction with NeuS or NeRF without SDS loss.<br>
- -Procedure:
-**Step 1**. Upload an image or select an example. ==> The foreground is masked out by SAM and we crop it as the input.<br>
-**Step 2**. Select "Elevation angle" and click "Run generation". ==> Generate multiview images. The **Elevation angle** is the elevation of the input image. (This costs about 30s.)<br>
-You may adjust the **Crop size** and **Elevation angle** to get a better result!
-To reconstruct a NeRF or a 3D mesh from the generated images, please refer to our [github repository](https://github.com/liuyuan-pal/SyncDreamer).
-We have heavily borrowed codes from [One-2-3-45](https://huggingface.co/spaces/One-2-3-45/One-2-3-45), which is also an amazing single-view reconstruction method. -''' -_USER_GUIDE0 = "Step1: Please upload an image in the block above (or choose an example shown in the left)." -# _USER_GUIDE1 = "Step1: Please select a **Crop size** and click **Crop it**." -_USER_GUIDE2 = "Step2: Please choose a **Elevation angle** and click **Run Generate**. The **Elevation angle** is the elevation of the input image. This costs about 30s." -_USER_GUIDE3 = "Generated multiview images are shown below! (You may adjust the **Crop size** and **Elevation angle** to get a better result!)" - -others = '''**Step 1**. Select "Crop size" and click "Crop it". ==> The foreground object is centered and resized.
''' - -deployed = True - -if deployed: - print(f"Is CUDA available: {torch.cuda.is_available()}") - print(f"CUDA device: {torch.cuda.get_device_name(torch.cuda.current_device())}") - - -class BackgroundRemoval: - def __init__(self, device='cuda'): - from carvekit.api.high import HiInterface - self.interface = HiInterface( - object_type="object", # Can be "object" or "hairs-like". - batch_size_seg=5, - batch_size_matting=1, - device=device, - seg_mask_size=640, # Use 640 for Tracer B7 and 320 for U2Net - matting_mask_size=2048, - trimap_prob_threshold=231, - trimap_dilation=30, - trimap_erosion_iters=5, - fp16=True, - ) - - @torch.no_grad() - def __call__(self, image): - # image: [H, W, 3] array in [0, 255]. - image = self.interface([image])[0] - return image - -def resize_inputs(image_input, crop_size): - if image_input is None: return None - alpha_np = np.asarray(image_input)[:, :, 3] - coords = np.stack(np.nonzero(alpha_np), 1)[:, (1, 0)] - min_x, min_y = np.min(coords, 0) - max_x, max_y = np.max(coords, 0) - ref_img_ = image_input.crop((min_x, min_y, max_x, max_y)) - h, w = ref_img_.height, ref_img_.width - scale = crop_size / max(h, w) - h_, w_ = int(scale * h), int(scale * w) - ref_img_ = ref_img_.resize((w_, h_), resample=Image.BICUBIC) - results = add_margin(ref_img_, size=256) - return results - -def generate(model, sample_steps, batch_view_num, sample_num, cfg_scale, seed, image_input, elevation_input): - if deployed: - assert isinstance(model, SyncMultiviewDiffusion) - seed=int(seed) - torch.random.manual_seed(seed) - np.random.seed(seed) - - # prepare data - image_input = np.asarray(image_input) - image_input = image_input.astype(np.float32) / 255.0 - alpha_values = image_input[:,:, 3:] - image_input[:, :, :3] = alpha_values * image_input[:,:, :3] + 1 - alpha_values # white background - image_input = image_input[:, :, :3] * 2.0 - 1.0 - image_input = torch.from_numpy(image_input.astype(np.float32)) - elevation_input = torch.from_numpy(np.asarray([np.deg2rad(elevation_input)], np.float32)) - data = {"input_image": image_input, "input_elevation": elevation_input} - for k, v in data.items(): - if deployed: - data[k] = v.unsqueeze(0).cuda() - else: - data[k] = v.unsqueeze(0) - data[k] = torch.repeat_interleave(data[k], sample_num, dim=0) - - if deployed: - sampler = SyncDDIMSampler(model, sample_steps) - x_sample = model.sample(sampler, data, cfg_scale, batch_view_num) - else: - x_sample = torch.zeros(sample_num, 16, 3, 256, 256) - - B, N, _, H, W = x_sample.shape - x_sample = (torch.clamp(x_sample,max=1.0,min=-1.0) + 1) * 0.5 - x_sample = x_sample.permute(0,1,3,4,2).cpu().numpy() * 255 - x_sample = x_sample.astype(np.uint8) - - results = [] - for bi in range(B): - results.append(np.concatenate([x_sample[bi,ni] for ni in range(N)], 1)) - results = np.concatenate(results, 0) - return Image.fromarray(results) - else: - return Image.fromarray(np.zeros([sample_num*256,16*256,3],np.uint8)) - - -def sam_predict(predictor, removal, raw_im): - if raw_im is None: return None - if deployed: - raw_im.thumbnail([512, 512], Image.Resampling.LANCZOS) - image_nobg = removal(raw_im.convert('RGB')) - arr = np.asarray(image_nobg)[:, :, -1] - x_nonzero = np.nonzero(arr.sum(axis=0)) - y_nonzero = np.nonzero(arr.sum(axis=1)) - x_min = int(x_nonzero[0].min()) - y_min = int(y_nonzero[0].min()) - x_max = int(x_nonzero[0].max()) - y_max = int(y_nonzero[0].max()) - # image_nobg.save('./nobg.png') - - image_nobg.thumbnail([512, 512], Image.Resampling.LANCZOS) - image_sam = sam_out_nosave(predictor, 
image_nobg.convert("RGB"), (x_min, y_min, x_max, y_max)) - - # imsave('./mask.png', np.asarray(image_sam)[:,:,3]*255) - image_sam = np.asarray(image_sam, np.float32) / 255 - out_mask = image_sam[:, :, 3:] - out_rgb = image_sam[:, :, :3] * out_mask + 1 - out_mask - out_img = (np.concatenate([out_rgb, out_mask], 2) * 255).astype(np.uint8) - - image_sam = Image.fromarray(out_img, mode='RGBA') - # image_sam.save('./output.png') - torch.cuda.empty_cache() - return image_sam - else: - return raw_im - -def run_demo(): - # device = f"cuda:0" if torch.cuda.is_available() else "cpu" - # models = None # init_model(device, os.path.join(code_dir, ckpt)) - cfg = 'configs/syncdreamer.yaml' - ckpt = 'ckpt/syncdreamer-pretrain.ckpt' - config = OmegaConf.load(cfg) - # model = None - if deployed: - model = instantiate_from_config(config.model) - print(f'loading model from {ckpt} ...') - ckpt = torch.load(ckpt,map_location='cpu') - model.load_state_dict(ckpt['state_dict'], strict=True) - model = model.cuda().eval() - del ckpt - mask_predictor = sam_init() - removal = BackgroundRemoval() - else: - model = None - mask_predictor = None - removal = None - - # NOTE: Examples must match inputs - examples_full = [ - ['hf_demo/examples/monkey.png',30,200], - ['hf_demo/examples/cat.png',30,200], - ['hf_demo/examples/crab.png',30,200], - ['hf_demo/examples/elephant.png',30,200], - ['hf_demo/examples/flower.png',0,200], - ['hf_demo/examples/forest.png',30,200], - ['hf_demo/examples/teapot.png',20,200], - ['hf_demo/examples/basket.png',30,200], - ] - - image_block = gr.Image(type='pil', image_mode='RGBA', height=256, label='Input image', tool=None, interactive=True) - elevation = gr.Slider(-10, 40, 30, step=5, label='Elevation angle of the input image', interactive=True) - crop_size = gr.Slider(120, 240, 200, step=10, label='Crop size', interactive=True) - - # Compose demo layout & data flow. - with gr.Blocks(title=_TITLE, css="hf_demo/style.css") as demo: - with gr.Row(): - with gr.Column(scale=1): - gr.Markdown('# ' + _TITLE) - # with gr.Column(scale=0): - # gr.DuplicateButton(value='Duplicate Space for private use', elem_id='duplicate-button') - gr.Markdown(_DESCRIPTION) - - with gr.Row(variant='panel'): - with gr.Column(scale=1.2): - gr.Examples( - examples=examples_full, # NOTE: elements must match inputs list! 
- inputs=[image_block, elevation, crop_size], - outputs=[image_block, elevation, crop_size], - cache_examples=False, - label='Examples (click one of the images below to start)', - examples_per_page=5, - ) - - with gr.Column(scale=0.8): - image_block.render() - guide_text = gr.Markdown(_USER_GUIDE0, visible=True) - fig0 = gr.Image(value=Image.open('assets/crop_size.jpg'), type='pil', image_mode='RGB', height=256, show_label=False, tool=None, interactive=False) - - - with gr.Column(scale=0.8): - sam_block = gr.Image(type='pil', image_mode='RGBA', label="SAM output", height=256, interactive=False) - crop_size.render() - # crop_btn = gr.Button('Crop it', variant='primary', interactive=True) - fig1 = gr.Image(value=Image.open('assets/elevation.jpg'), type='pil', image_mode='RGB', height=256, show_label=False, tool=None, interactive=False) - - with gr.Column(scale=0.8): - input_block = gr.Image(type='pil', image_mode='RGBA', label="Input to SyncDreamer", height=256, interactive=False) - elevation.render() - with gr.Accordion('Advanced options', open=False): - cfg_scale = gr.Slider(1.0, 5.0, 2.0, step=0.1, label='Classifier free guidance', interactive=True) - sample_num = gr.Slider(1, 2, 1, step=1, label='Sample num', interactive=False, info='How many instance (16 images per instance)') - sample_steps = gr.Slider(10, 300, 50, step=10, label='Sample steps', interactive=False) - batch_view_num = gr.Slider(1, 16, 16, step=1, label='Batch num', interactive=True) - seed = gr.Number(6033, label='Random seed', interactive=True) - run_btn = gr.Button('Run generation', variant='primary', interactive=True) - - - output_block = gr.Image(type='pil', image_mode='RGB', label="Outputs of SyncDreamer", height=256, interactive=False) - - def update_guide2(text, im): - if im is None: - return _USER_GUIDE0 - else: - return text - update_guide = lambda GUIDE_TEXT: gr.update(value=GUIDE_TEXT) - - image_block.clear(fn=partial(update_guide, _USER_GUIDE0), outputs=[guide_text], queue=False) - image_block.change(fn=partial(sam_predict, mask_predictor, removal), inputs=[image_block], outputs=[sam_block], queue=True) \ - .success(fn=resize_inputs, inputs=[sam_block, crop_size], outputs=[input_block], queue=True)\ - .success(fn=partial(update_guide2, _USER_GUIDE2), inputs=[image_block], outputs=[guide_text], queue=False)\ - - crop_size.change(fn=resize_inputs, inputs=[sam_block, crop_size], outputs=[input_block], queue=True)\ - .success(fn=partial(update_guide, _USER_GUIDE2), outputs=[guide_text], queue=False) - # crop_btn.click(fn=resize_inputs, inputs=[sam_block, crop_size], outputs=[input_block], queue=False)\ - # .success(fn=partial(update_guide, _USER_GUIDE2), outputs=[guide_text], queue=False) - - run_btn.click(partial(generate, model), inputs=[sample_steps, batch_view_num, sample_num, cfg_scale, seed, input_block, elevation], outputs=[output_block], queue=True)\ - .success(fn=partial(update_guide, _USER_GUIDE3), outputs=[guide_text], queue=False) - - demo.queue().launch(share=False, max_threads=80) # auth=("admin", os.environ['PASSWD']) - -if __name__=="__main__": - fire.Fire(run_demo) \ No newline at end of file diff --git a/spaces/ltgoslo/ssa-perin/model/module/transformer.py b/spaces/ltgoslo/ssa-perin/model/module/transformer.py deleted file mode 100644 index 301ff362e9df7bfd3cb2ae297728fb60a949c90a..0000000000000000000000000000000000000000 --- a/spaces/ltgoslo/ssa-perin/model/module/transformer.py +++ /dev/null @@ -1,84 +0,0 @@ -#!/usr/bin/env python3 -# coding=utf-8 - -import torch -import torch.nn as nn - - 
-def checkpoint(module, *args, **kwargs): - dummy = torch.empty(1, requires_grad=True) - return torch.utils.checkpoint.checkpoint(lambda d, *a, **k: module(*a, **k), dummy, *args, **kwargs) - - -class Attention(nn.Module): - def __init__(self, args): - super().__init__() - self.attention = nn.MultiheadAttention(args.hidden_size, args.n_attention_heads, args.dropout_transformer_attention) - self.dropout = nn.Dropout(args.dropout_transformer) - - def forward(self, q_input, kv_input, mask=None): - output, _ = self.attention(q_input, kv_input, kv_input, mask, need_weights=False) - output = self.dropout(output) - return output - - -class FeedForward(nn.Module): - def __init__(self, args): - super().__init__() - self.f = nn.Sequential( - nn.Linear(args.hidden_size, args.hidden_size_ff), - self._get_activation_f(args.activation), - nn.Dropout(args.dropout_transformer), - nn.Linear(args.hidden_size_ff, args.hidden_size), - nn.Dropout(args.dropout_transformer), - ) - - def forward(self, x): - return self.f(x) - - def _get_activation_f(self, activation: str): - return {"relu": nn.ReLU, "gelu": nn.GELU}[activation]() - - -class DecoderLayer(nn.Module): - def __init__(self, args): - super().__init__() - self.self_f = Attention(args) - #self.cross_f = Attention(args) - self.feedforward_f = FeedForward(args) - - self.pre_self_norm = nn.LayerNorm(args.hidden_size) if args.pre_norm else nn.Identity() - #self.pre_cross_norm = nn.LayerNorm(args.hidden_size) if args.pre_norm else nn.Identity() - self.pre_feedforward_norm = nn.LayerNorm(args.hidden_size) if args.pre_norm else nn.Identity() - self.post_self_norm = nn.Identity() if args.pre_norm else nn.LayerNorm(args.hidden_size) - #self.post_cross_norm = nn.Identity() if args.pre_norm else nn.LayerNorm(args.hidden_size) - self.post_feedforward_norm = nn.Identity() if args.pre_norm else nn.LayerNorm(args.hidden_size) - - def forward(self, x, encoder_output, x_mask, encoder_mask): - x_ = self.pre_self_norm(x) - x = self.post_self_norm(x + self.self_f(x_, x_, x_mask)) - - #x_ = self.pre_cross_norm(x) - #x = self.post_cross_norm(x + self.cross_f(x_, encoder_output, encoder_mask)) - - x_ = self.pre_feedforward_norm(x) - x = self.post_feedforward_norm(x + self.feedforward_f(x_)) - - return x - - -class Decoder(nn.Module): - def __init__(self, args): - super(Decoder, self).__init__() - self.layers = nn.ModuleList([DecoderLayer(args) for _ in range(args.n_layers)]) - - def forward(self, target, encoder, target_mask, encoder_mask): - target = target.transpose(0, 1) # shape: (T, B, D) - encoder = encoder.transpose(0, 1) # shape: (T, B, D) - - for layer in self.layers[:-1]: - target = checkpoint(layer, target, encoder, target_mask, encoder_mask) - target = self.layers[-1](target, encoder, target_mask, encoder_mask) # don't checkpoint due to grad_norm - target = target.transpose(0, 1) # shape: (B, T, D) - - return target diff --git a/spaces/lunarring/latentblending/ldm/modules/diffusionmodules/util.py b/spaces/lunarring/latentblending/ldm/modules/diffusionmodules/util.py deleted file mode 100644 index 637363dfe34799e70cfdbcd11445212df9d9ca1f..0000000000000000000000000000000000000000 --- a/spaces/lunarring/latentblending/ldm/modules/diffusionmodules/util.py +++ /dev/null @@ -1,270 +0,0 @@ -# adopted from -# https://github.com/openai/improved-diffusion/blob/main/improved_diffusion/gaussian_diffusion.py -# and -# https://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py 
-# and -# https://github.com/openai/guided-diffusion/blob/0ba878e517b276c45d1195eb29f6f5f72659a05b/guided_diffusion/nn.py -# -# thanks! - - -import os -import math -import torch -import torch.nn as nn -import numpy as np -from einops import repeat - -from ldm.util import instantiate_from_config - - -def make_beta_schedule(schedule, n_timestep, linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3): - if schedule == "linear": - betas = ( - torch.linspace(linear_start ** 0.5, linear_end ** 0.5, n_timestep, dtype=torch.float64) ** 2 - ) - - elif schedule == "cosine": - timesteps = ( - torch.arange(n_timestep + 1, dtype=torch.float64) / n_timestep + cosine_s - ) - alphas = timesteps / (1 + cosine_s) * np.pi / 2 - alphas = torch.cos(alphas).pow(2) - alphas = alphas / alphas[0] - betas = 1 - alphas[1:] / alphas[:-1] - betas = np.clip(betas, a_min=0, a_max=0.999) - - elif schedule == "sqrt_linear": - betas = torch.linspace(linear_start, linear_end, n_timestep, dtype=torch.float64) - elif schedule == "sqrt": - betas = torch.linspace(linear_start, linear_end, n_timestep, dtype=torch.float64) ** 0.5 - else: - raise ValueError(f"schedule '{schedule}' unknown.") - return betas.numpy() - - -def make_ddim_timesteps(ddim_discr_method, num_ddim_timesteps, num_ddpm_timesteps, verbose=True): - if ddim_discr_method == 'uniform': - c = num_ddpm_timesteps // num_ddim_timesteps - ddim_timesteps = np.asarray(list(range(0, num_ddpm_timesteps, c))) - elif ddim_discr_method == 'quad': - ddim_timesteps = ((np.linspace(0, np.sqrt(num_ddpm_timesteps * .8), num_ddim_timesteps)) ** 2).astype(int) - else: - raise NotImplementedError(f'There is no ddim discretization method called "{ddim_discr_method}"') - - # assert ddim_timesteps.shape[0] == num_ddim_timesteps - # add one to get the final alpha values right (the ones from first scale to data during sampling) - steps_out = ddim_timesteps + 1 - if verbose: - print(f'Selected timesteps for ddim sampler: {steps_out}') - return steps_out - - -def make_ddim_sampling_parameters(alphacums, ddim_timesteps, eta, verbose=True): - # select alphas for computing the variance schedule - alphas = alphacums[ddim_timesteps] - alphas_prev = np.asarray([alphacums[0]] + alphacums[ddim_timesteps[:-1]].tolist()) - - # according the the formula provided in https://arxiv.org/abs/2010.02502 - sigmas = eta * np.sqrt((1 - alphas_prev) / (1 - alphas) * (1 - alphas / alphas_prev)) - if verbose: - print(f'Selected alphas for ddim sampler: a_t: {alphas}; a_(t-1): {alphas_prev}') - print(f'For the chosen value of eta, which is {eta}, ' - f'this results in the following sigma_t schedule for ddim sampler {sigmas}') - return sigmas, alphas, alphas_prev - - -def betas_for_alpha_bar(num_diffusion_timesteps, alpha_bar, max_beta=0.999): - """ - Create a beta schedule that discretizes the given alpha_t_bar function, - which defines the cumulative product of (1-beta) over time from t = [0,1]. - :param num_diffusion_timesteps: the number of betas to produce. - :param alpha_bar: a lambda that takes an argument t from 0 to 1 and - produces the cumulative product of (1-beta) up to that - part of the diffusion process. - :param max_beta: the maximum beta to use; use values lower than 1 to - prevent singularities. 
- """ - betas = [] - for i in range(num_diffusion_timesteps): - t1 = i / num_diffusion_timesteps - t2 = (i + 1) / num_diffusion_timesteps - betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta)) - return np.array(betas) - - -def extract_into_tensor(a, t, x_shape): - b, *_ = t.shape - out = a.gather(-1, t) - return out.reshape(b, *((1,) * (len(x_shape) - 1))) - - -def checkpoint(func, inputs, params, flag): - """ - Evaluate a function without caching intermediate activations, allowing for - reduced memory at the expense of extra compute in the backward pass. - :param func: the function to evaluate. - :param inputs: the argument sequence to pass to `func`. - :param params: a sequence of parameters `func` depends on but does not - explicitly take as arguments. - :param flag: if False, disable gradient checkpointing. - """ - if flag: - args = tuple(inputs) + tuple(params) - return CheckpointFunction.apply(func, len(inputs), *args) - else: - return func(*inputs) - - -class CheckpointFunction(torch.autograd.Function): - @staticmethod - def forward(ctx, run_function, length, *args): - ctx.run_function = run_function - ctx.input_tensors = list(args[:length]) - ctx.input_params = list(args[length:]) - ctx.gpu_autocast_kwargs = {"enabled": torch.is_autocast_enabled(), - "dtype": torch.get_autocast_gpu_dtype(), - "cache_enabled": torch.is_autocast_cache_enabled()} - with torch.no_grad(): - output_tensors = ctx.run_function(*ctx.input_tensors) - return output_tensors - - @staticmethod - def backward(ctx, *output_grads): - ctx.input_tensors = [x.detach().requires_grad_(True) for x in ctx.input_tensors] - with torch.enable_grad(), \ - torch.cuda.amp.autocast(**ctx.gpu_autocast_kwargs): - # Fixes a bug where the first op in run_function modifies the - # Tensor storage in place, which is not allowed for detach()'d - # Tensors. - shallow_copies = [x.view_as(x) for x in ctx.input_tensors] - output_tensors = ctx.run_function(*shallow_copies) - input_grads = torch.autograd.grad( - output_tensors, - ctx.input_tensors + ctx.input_params, - output_grads, - allow_unused=True, - ) - del ctx.input_tensors - del ctx.input_params - del output_tensors - return (None, None) + input_grads - - -def timestep_embedding(timesteps, dim, max_period=10000, repeat_only=False): - """ - Create sinusoidal timestep embeddings. - :param timesteps: a 1-D Tensor of N indices, one per batch element. - These may be fractional. - :param dim: the dimension of the output. - :param max_period: controls the minimum frequency of the embeddings. - :return: an [N x dim] Tensor of positional embeddings. - """ - if not repeat_only: - half = dim // 2 - freqs = torch.exp( - -math.log(max_period) * torch.arange(start=0, end=half, dtype=torch.float32) / half - ).to(device=timesteps.device) - args = timesteps[:, None].float() * freqs[None] - embedding = torch.cat([torch.cos(args), torch.sin(args)], dim=-1) - if dim % 2: - embedding = torch.cat([embedding, torch.zeros_like(embedding[:, :1])], dim=-1) - else: - embedding = repeat(timesteps, 'b -> b d', d=dim) - return embedding - - -def zero_module(module): - """ - Zero out the parameters of a module and return it. - """ - for p in module.parameters(): - p.detach().zero_() - return module - - -def scale_module(module, scale): - """ - Scale the parameters of a module and return it. - """ - for p in module.parameters(): - p.detach().mul_(scale) - return module - - -def mean_flat(tensor): - """ - Take the mean over all non-batch dimensions. 
- """ - return tensor.mean(dim=list(range(1, len(tensor.shape)))) - - -def normalization(channels): - """ - Make a standard normalization layer. - :param channels: number of input channels. - :return: an nn.Module for normalization. - """ - return GroupNorm32(32, channels) - - -# PyTorch 1.7 has SiLU, but we support PyTorch 1.5. -class SiLU(nn.Module): - def forward(self, x): - return x * torch.sigmoid(x) - - -class GroupNorm32(nn.GroupNorm): - def forward(self, x): - return super().forward(x.float()).type(x.dtype) - -def conv_nd(dims, *args, **kwargs): - """ - Create a 1D, 2D, or 3D convolution module. - """ - if dims == 1: - return nn.Conv1d(*args, **kwargs) - elif dims == 2: - return nn.Conv2d(*args, **kwargs) - elif dims == 3: - return nn.Conv3d(*args, **kwargs) - raise ValueError(f"unsupported dimensions: {dims}") - - -def linear(*args, **kwargs): - """ - Create a linear module. - """ - return nn.Linear(*args, **kwargs) - - -def avg_pool_nd(dims, *args, **kwargs): - """ - Create a 1D, 2D, or 3D average pooling module. - """ - if dims == 1: - return nn.AvgPool1d(*args, **kwargs) - elif dims == 2: - return nn.AvgPool2d(*args, **kwargs) - elif dims == 3: - return nn.AvgPool3d(*args, **kwargs) - raise ValueError(f"unsupported dimensions: {dims}") - - -class HybridConditioner(nn.Module): - - def __init__(self, c_concat_config, c_crossattn_config): - super().__init__() - self.concat_conditioner = instantiate_from_config(c_concat_config) - self.crossattn_conditioner = instantiate_from_config(c_crossattn_config) - - def forward(self, c_concat, c_crossattn): - c_concat = self.concat_conditioner(c_concat) - c_crossattn = self.crossattn_conditioner(c_crossattn) - return {'c_concat': [c_concat], 'c_crossattn': [c_crossattn]} - - -def noise_like(shape, device, repeat=False): - repeat_noise = lambda: torch.randn((1, *shape[1:]), device=device).repeat(shape[0], *((1,) * (len(shape) - 1))) - noise = lambda: torch.randn(shape, device=device) - return repeat_noise() if repeat else noise() \ No newline at end of file diff --git a/spaces/lusea/rvc-Qinggan/lib/infer_pack/modules/F0Predictor/F0Predictor.py b/spaces/lusea/rvc-Qinggan/lib/infer_pack/modules/F0Predictor/F0Predictor.py deleted file mode 100644 index f56e49e7f0e6eab3babf0711cae2933371b9f9cc..0000000000000000000000000000000000000000 --- a/spaces/lusea/rvc-Qinggan/lib/infer_pack/modules/F0Predictor/F0Predictor.py +++ /dev/null @@ -1,16 +0,0 @@ -class F0Predictor(object): - def compute_f0(self, wav, p_len): - """ - input: wav:[signal_length] - p_len:int - output: f0:[signal_length//hop_length] - """ - pass - - def compute_f0_uv(self, wav, p_len): - """ - input: wav:[signal_length] - p_len:int - output: f0:[signal_length//hop_length],uv:[signal_length//hop_length] - """ - pass diff --git a/spaces/luxuedong/lxd/tests/kblob.ts b/spaces/luxuedong/lxd/tests/kblob.ts deleted file mode 100644 index 9e15b41c1c94a690beb61b23cdb42fc78767ccd2..0000000000000000000000000000000000000000 --- a/spaces/luxuedong/lxd/tests/kblob.ts +++ /dev/null @@ -1,27 +0,0 @@ -import FormData from 'form-data' - -import { fetch } from '@/lib/isomorphic' - -const formData = new FormData() - -const knowledgeRequest = 
{"imageInfo":{"url":"https://www.baidu.com/img/PCfb_5bf082d29588c07f842ccde3f97243ea.png"},"knowledgeRequest":{"invokedSkills":["ImageById"],"subscriptionId":"Bing.Chat.Multimodal","invokedSkillsRequestData":{"enableFaceBlur":true},"convoData":{"convoid":"51D|BingProdUnAuthenticatedUsers|E3DCA904FF236C67C3450163BCEC64CFF3F618CC8A4AFD75FD518F5ED0ADA080","convotone":"Creative"}}} - -formData.append('knowledgeRequest', JSON.stringify(knowledgeRequest)) - - -fetch('https://bing.vcanbb.top/images/kblob', - { - method: 'POST', - body: formData.getBuffer(), - headers: { - "sec-ch-ua": "\"Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"115\", \"Chromium\";v=\"115\"", - "sec-ch-ua-mobile": "?0", - "sec-ch-ua-platform": "\"Windows\"", - "Referer": "https://bing.vcanbb.top/web/index.html", - "Referrer-Policy": "origin-when-cross-origin", - ...formData.getHeaders() - } - - } -).then(res => res.text()) -.then(res => console.log('res', res)) diff --git a/spaces/lysine/auscultate/src/app/breath/AuscultationTrack.tsx b/spaces/lysine/auscultate/src/app/breath/AuscultationTrack.tsx deleted file mode 100644 index c40c7315967ed57659edf119012085b7ec830dc0..0000000000000000000000000000000000000000 --- a/spaces/lysine/auscultate/src/app/breath/AuscultationTrack.tsx +++ /dev/null @@ -1,224 +0,0 @@ -import React, { useEffect, useMemo, useRef } from 'react'; -import WaveSurfer from 'wavesurfer.js'; -import HoverPlugin from 'wavesurfer.js/plugins/hover'; -import TimelinePlugin from 'wavesurfer.js/plugins/timeline'; -import RegionsPlugin from 'wavesurfer.js/plugins/regions'; -import SpectrogramPlugin from 'wavesurfer.js/plugins/spectrogram'; -import { getDataUrl } from './api'; -import { useAudio } from '../AudioContext'; -import { - AuscultationTrack, - SoundSegment, - nameLocation, - getTrackAbnormalities, - Abnormalities, -} from '../../breath-types'; - -export interface AuscultationTrackProps { - track: AuscultationTrack; - zoom: number; - showAnswer: boolean; - spectrogram: boolean; - regionsLevel: RegionsLevel; -} - -export enum RegionsLevel { - None = 0, - Markers = 1, - Noises = 2, - Full = 3, -} - -function getColorByType(segment: SoundSegment | Abnormalities): string { - if (segment.crackles && segment.wheezes) { - return '#36d39933'; - } else if (segment.crackles) { - return '#3abff833'; - } else if (segment.wheezes) { - return '#fbbd2333'; - } else { - return '#f8727233'; - } -} - -function nameSegment(segment: SoundSegment | Abnormalities): string { - if (segment.crackles && segment.wheezes) { - return 'Both'; - } else if (segment.crackles) { - return 'Crackles'; - } else if (segment.wheezes) { - return 'Wheezes'; - } else { - return ''; - } -} - -export default function AuscultationTrack({ - track, - zoom, - showAnswer, - spectrogram: showSpectrogram, - regionsLevel, -}: AuscultationTrackProps): JSX.Element { - const waveformId = ('waveform' + track.audioFile).replaceAll('.', '_'); - - const { nowPlaying, setNowPlaying } = useAudio(); - - const wavesurfer = useRef(); - const regions = useRef(); - const activeRegion = useRef(); - useEffect(() => { - if (showSpectrogram) { - const spectrogramPlugin = SpectrogramPlugin.create({ - labels: true, - height: 100, - }); - wavesurfer.current?.registerPlugin(spectrogramPlugin); - spectrogramPlugin.render(); - return () => { - spectrogramPlugin.destroy(); - }; - } - return undefined; - }, [showSpectrogram]); - useEffect(() => { - regions.current?.clearRegions(); - if (regionsLevel === RegionsLevel.Full) { - track.segments.forEach(segment => { - 
regions.current?.addRegion({ - start: segment.start, - end: segment.end, - color: getColorByType(segment), - content: nameSegment(segment), - drag: false, - resize: false, - }); - }); - } else if (regionsLevel === RegionsLevel.Noises) { - track.segments.forEach(segment => { - regions.current?.addRegion({ - start: segment.start, - end: segment.crackles || segment.wheezes ? segment.end : undefined, - color: getColorByType(segment), - content: nameSegment(segment), - drag: false, - resize: false, - }); - }); - } else if (regionsLevel === RegionsLevel.Markers) { - track.segments.forEach(segment => { - regions.current?.addRegion({ - start: segment.start, - color: getColorByType(segment), - content: nameSegment(segment), - drag: false, - resize: false, - }); - }); - } - }, [regionsLevel]); - useEffect(() => { - const instance = WaveSurfer.create({ - container: '#' + waveformId, - url: getDataUrl(track.audioFile), - minPxPerSec: zoom, - plugins: [ - HoverPlugin.create(), - TimelinePlugin.create(), - (regions.current = RegionsPlugin.create()), - ], - }); - regions.current.on('region-clicked', (region: any, e: MouseEvent) => { - e.stopPropagation(); // prevent triggering a click on the waveform - activeRegion.current = region; - region.play(); - }); - regions.current.on('region-out', (region: any) => { - if (activeRegion.current === region) { - activeRegion.current = undefined; - instance.pause(); - } - }); - instance.on('play', () => { - setNowPlaying(waveformId); - }); - instance.on('pause', () => { - setNowPlaying(v => v); // just to trigger a rerender - }); - wavesurfer.current = instance; - return () => { - try { - instance.destroy(); - } catch (e) { - console.log(e); // an error may be thrown because the spectrogram plugin is destroyed twice - } - wavesurfer.current = undefined; - }; - }, []); - - useEffect(() => { - if (nowPlaying !== waveformId) { - wavesurfer.current?.pause(); - } - }, [nowPlaying]); - - useEffect(() => { - if (wavesurfer.current?.getDecodedData()) wavesurfer.current?.zoom(zoom); - }, [zoom]); - - const abnormalities = useMemo(() => getTrackAbnormalities(track), [track]); - - return ( -
-
- - {nameLocation(track.location)} - - - - {nameSegment(abnormalities)} - - -
- - -
-
-
-
- ); -} diff --git a/spaces/ma-xu/LIVE/thrust/thrust/binary_search.h b/spaces/ma-xu/LIVE/thrust/thrust/binary_search.h deleted file mode 100644 index 127be16aab996b03e7290bac5ae3d1d1fce27588..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/binary_search.h +++ /dev/null @@ -1,1902 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - - -/*! \file binary_search.h - * \brief Search for values in sorted ranges. - */ - -#pragma once - -#include -#include -#include - -namespace thrust -{ - - -/*! \addtogroup algorithms - */ - - -/*! \addtogroup searching - * \ingroup algorithms - * \{ - */ - - -/*! \addtogroup binary_search Binary Search - * \ingroup searching - * \{ - */ - - -////////////////////// -// Scalar Functions // -////////////////////// - - -/*! \p lower_bound is a version of binary search: it attempts to find - * the element value in an ordered range [first, last). - * Specifically, it returns the first position where value could be - * inserted without violating the ordering. This version of - * \p lower_bound uses operator< for comparison and returns - * the furthermost iterator \c i in [first, last) such that, - * for every iterator \c j in [first, i), *j < value. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first The beginning of the ordered sequence. - * \param last The end of the ordered sequence. - * \param value The value to be searched. - * \return The furthermost iterator \c i, such that *i < value. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam ForwardIterator is a model of Forward Iterator. - * \tparam LessThanComparable is a model of LessThanComparable. - * - * The following code snippet demonstrates how to use \p lower_bound - * to search for values in a ordered range using the \p thrust::device execution policy for parallelization: - * - * \code - * #include - * #include - * #include - * ... 
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::lower_bound(thrust::device, input.begin(), input.end(), 0); // returns input.begin()
- * thrust::lower_bound(thrust::device, input.begin(), input.end(), 1); // returns input.begin() + 1
- * thrust::lower_bound(thrust::device, input.begin(), input.end(), 2); // returns input.begin() + 1
- * thrust::lower_bound(thrust::device, input.begin(), input.end(), 3); // returns input.begin() + 2
- * thrust::lower_bound(thrust::device, input.begin(), input.end(), 8); // returns input.begin() + 4
- * thrust::lower_bound(thrust::device, input.begin(), input.end(), 9); // returns input.end()
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/lower_bound.html
- * \see \p upper_bound
- * \see \p equal_range
- * \see \p binary_search
- */
-template<typename DerivedPolicy, typename ForwardIterator, typename LessThanComparable>
-__host__ __device__
-ForwardIterator lower_bound(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                            ForwardIterator first,
-                            ForwardIterator last,
-                            const LessThanComparable &value);
-
-
-/*! \p lower_bound is a version of binary search: it attempts to find
- * the element value in an ordered range [first, last).
- * Specifically, it returns the first position where value could be
- * inserted without violating the ordering. This version of
- * \p lower_bound uses operator< for comparison and returns
- * the furthermost iterator \c i in [first, last) such that,
- * for every iterator \c j in [first, i), *j < value.
- *
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param value The value to be searched.
- * \return The furthermost iterator \c i, such that *j < value for every iterator \c j in [first, i).
- *
- * \tparam ForwardIterator is a model of Forward Iterator.
- * \tparam LessThanComparable is a model of LessThanComparable.
- *
- * The following code snippet demonstrates how to use \p lower_bound
- * to search for values in an ordered range.
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::lower_bound(input.begin(), input.end(), 0); // returns input.begin()
- * thrust::lower_bound(input.begin(), input.end(), 1); // returns input.begin() + 1
- * thrust::lower_bound(input.begin(), input.end(), 2); // returns input.begin() + 1
- * thrust::lower_bound(input.begin(), input.end(), 3); // returns input.begin() + 2
- * thrust::lower_bound(input.begin(), input.end(), 8); // returns input.begin() + 4
- * thrust::lower_bound(input.begin(), input.end(), 9); // returns input.end()
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/lower_bound.html
- * \see \p upper_bound
- * \see \p equal_range
- * \see \p binary_search
- */
-template<typename ForwardIterator, typename LessThanComparable>
-ForwardIterator lower_bound(ForwardIterator first,
-                            ForwardIterator last,
-                            const LessThanComparable& value);
-
-
-/*! \p lower_bound is a version of binary search: it attempts to find
- * the element value in an ordered range [first, last).
- * Specifically, it returns the first position where value could be
- * inserted without violating the ordering. This version of
- * \p lower_bound uses function object \c comp for comparison
- * and returns the furthermost iterator \c i in [first, last)
- * such that, for every iterator \c j in [first, i),
- * comp(*j, value) is \c true.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param value The value to be searched.
- * \param comp The comparison operator.
- * \return The furthermost iterator \c i, such that comp(*j, value) is \c true for every iterator \c j in [first, i).
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam ForwardIterator is a model of Forward Iterator.
- * \tparam T is comparable to \p ForwardIterator's \c value_type.
- * \tparam StrictWeakOrdering is a model of Strict Weak Ordering.
- *
- * The following code snippet demonstrates how to use \p lower_bound
- * to search for values in an ordered range using the \p thrust::device execution policy for parallelization:
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * #include <thrust/functional.h>
- * #include <thrust/execution_policy.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::lower_bound(thrust::device, input.begin(), input.end(), 0, thrust::less<int>()); // returns input.begin()
- * thrust::lower_bound(thrust::device, input.begin(), input.end(), 1, thrust::less<int>()); // returns input.begin() + 1
- * thrust::lower_bound(thrust::device, input.begin(), input.end(), 2, thrust::less<int>()); // returns input.begin() + 1
- * thrust::lower_bound(thrust::device, input.begin(), input.end(), 3, thrust::less<int>()); // returns input.begin() + 2
- * thrust::lower_bound(thrust::device, input.begin(), input.end(), 8, thrust::less<int>()); // returns input.begin() + 4
- * thrust::lower_bound(thrust::device, input.begin(), input.end(), 9, thrust::less<int>()); // returns input.end()
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/lower_bound.html
- * \see \p upper_bound
- * \see \p equal_range
- * \see \p binary_search
- */
-template<typename DerivedPolicy, typename ForwardIterator, typename T, typename StrictWeakOrdering>
-__host__ __device__
-ForwardIterator lower_bound(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                            ForwardIterator first,
-                            ForwardIterator last,
-                            const T &value,
-                            StrictWeakOrdering comp);
-
-
-/*! \p lower_bound is a version of binary search: it attempts to find
- * the element value in an ordered range [first, last).
- * Specifically, it returns the first position where value could be
- * inserted without violating the ordering. This version of
- * \p lower_bound uses function object \c comp for comparison
- * and returns the furthermost iterator \c i in [first, last)
- * such that, for every iterator \c j in [first, i),
- * comp(*j, value) is \c true.
- *
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param value The value to be searched.
- * \param comp The comparison operator.
- * \return The furthermost iterator \c i, such that comp(*j, value) is \c true for every iterator \c j in [first, i).
- *
- * \tparam ForwardIterator is a model of Forward Iterator.
- * \tparam T is comparable to \p ForwardIterator's \c value_type.
- * \tparam StrictWeakOrdering is a model of Strict Weak Ordering.
- *
- * The following code snippet demonstrates how to use \p lower_bound
- * to search for values in an ordered range.
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * #include <thrust/functional.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::lower_bound(input.begin(), input.end(), 0, thrust::less<int>()); // returns input.begin()
- * thrust::lower_bound(input.begin(), input.end(), 1, thrust::less<int>()); // returns input.begin() + 1
- * thrust::lower_bound(input.begin(), input.end(), 2, thrust::less<int>()); // returns input.begin() + 1
- * thrust::lower_bound(input.begin(), input.end(), 3, thrust::less<int>()); // returns input.begin() + 2
- * thrust::lower_bound(input.begin(), input.end(), 8, thrust::less<int>()); // returns input.begin() + 4
- * thrust::lower_bound(input.begin(), input.end(), 9, thrust::less<int>()); // returns input.end()
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/lower_bound.html
- * \see \p upper_bound
- * \see \p equal_range
- * \see \p binary_search
- */
-template<typename ForwardIterator, typename T, typename StrictWeakOrdering>
-ForwardIterator lower_bound(ForwardIterator first,
-                            ForwardIterator last,
-                            const T& value,
-                            StrictWeakOrdering comp);
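All four scalar `lower_bound` overloads above return an iterator rather than an index. A minimal host-side sketch (not part of the header; the file, variable names, and query value are illustrative) showing how that iterator converts to an insertion index:

```cpp
// Sketch: computing an insertion index with the scalar lower_bound overloads.
#include <thrust/binary_search.h>
#include <thrust/device_vector.h>
#include <thrust/execution_policy.h>
#include <iostream>

int main()
{
    thrust::device_vector<int> input(5);
    input[0] = 0; input[1] = 2; input[2] = 5; input[3] = 7; input[4] = 8;

    // The returned iterator converts to an index by subtracting begin().
    thrust::device_vector<int>::iterator it =
        thrust::lower_bound(thrust::device, input.begin(), input.end(), 6);
    std::cout << "6 would be inserted at index " << (it - input.begin()) << "\n"; // prints 3
    return 0;
}
```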
-/*! \p upper_bound is a version of binary search: it attempts to find
- * the element value in an ordered range [first, last).
- * Specifically, it returns the last position where value could be
- * inserted without violating the ordering. This version of
- * \p upper_bound uses operator< for comparison and returns
- * the furthermost iterator \c i in [first, last) such that,
- * for every iterator \c j in [first, i), value < *j
- * is \c false.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param value The value to be searched.
- * \return The furthermost iterator \c i, such that value < *j is \c false for every iterator \c j in [first, i).
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam ForwardIterator is a model of Forward Iterator.
- * \tparam LessThanComparable is a model of LessThanComparable.
- *
- * The following code snippet demonstrates how to use \p upper_bound
- * to search for values in an ordered range using the \p thrust::device execution policy for parallelism:
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * #include <thrust/execution_policy.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::upper_bound(thrust::device, input.begin(), input.end(), 0); // returns input.begin() + 1
- * thrust::upper_bound(thrust::device, input.begin(), input.end(), 1); // returns input.begin() + 1
- * thrust::upper_bound(thrust::device, input.begin(), input.end(), 2); // returns input.begin() + 2
- * thrust::upper_bound(thrust::device, input.begin(), input.end(), 3); // returns input.begin() + 2
- * thrust::upper_bound(thrust::device, input.begin(), input.end(), 8); // returns input.end()
- * thrust::upper_bound(thrust::device, input.begin(), input.end(), 9); // returns input.end()
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/upper_bound.html
- * \see \p lower_bound
- * \see \p equal_range
- * \see \p binary_search
- */
-template<typename DerivedPolicy, typename ForwardIterator, typename LessThanComparable>
-__host__ __device__
-ForwardIterator upper_bound(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                            ForwardIterator first,
-                            ForwardIterator last,
-                            const LessThanComparable &value);
-
-
-/*! \p upper_bound is a version of binary search: it attempts to find
- * the element value in an ordered range [first, last).
- * Specifically, it returns the last position where value could be
- * inserted without violating the ordering. This version of
- * \p upper_bound uses operator< for comparison and returns
- * the furthermost iterator \c i in [first, last) such that,
- * for every iterator \c j in [first, i), value < *j
- * is \c false.
- *
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param value The value to be searched.
- * \return The furthermost iterator \c i, such that value < *j is \c false for every iterator \c j in [first, i).
- *
- * \tparam ForwardIterator is a model of Forward Iterator.
- * \tparam LessThanComparable is a model of LessThanComparable.
- *
- * The following code snippet demonstrates how to use \p upper_bound
- * to search for values in an ordered range.
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::upper_bound(input.begin(), input.end(), 0); // returns input.begin() + 1
- * thrust::upper_bound(input.begin(), input.end(), 1); // returns input.begin() + 1
- * thrust::upper_bound(input.begin(), input.end(), 2); // returns input.begin() + 2
- * thrust::upper_bound(input.begin(), input.end(), 3); // returns input.begin() + 2
- * thrust::upper_bound(input.begin(), input.end(), 8); // returns input.end()
- * thrust::upper_bound(input.begin(), input.end(), 9); // returns input.end()
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/upper_bound.html
- * \see \p lower_bound
- * \see \p equal_range
- * \see \p binary_search
- */
-template<typename ForwardIterator, typename LessThanComparable>
-ForwardIterator upper_bound(ForwardIterator first,
-                            ForwardIterator last,
-                            const LessThanComparable& value);
-
-
-/*! \p upper_bound is a version of binary search: it attempts to find
- * the element value in an ordered range [first, last).
- * Specifically, it returns the last position where value could be
- * inserted without violating the ordering. This version of
- * \p upper_bound uses function object \c comp for comparison and returns
- * the furthermost iterator \c i in [first, last) such that,
- * for every iterator \c j in [first, i), comp(value, *j)
- * is \c false.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param value The value to be searched.
- * \param comp The comparison operator.
- * \return The furthermost iterator \c i, such that comp(value, *j) is \c false for every iterator \c j in [first, i).
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam ForwardIterator is a model of Forward Iterator.
- * \tparam T is comparable to \p ForwardIterator's \c value_type.
- * \tparam StrictWeakOrdering is a model of Strict Weak Ordering.
- *
- * The following code snippet demonstrates how to use \p upper_bound
- * to search for values in an ordered range using the \p thrust::device execution policy for parallelization:
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * #include <thrust/functional.h>
- * #include <thrust/execution_policy.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::upper_bound(thrust::device, input.begin(), input.end(), 0, thrust::less<int>()); // returns input.begin() + 1
- * thrust::upper_bound(thrust::device, input.begin(), input.end(), 1, thrust::less<int>()); // returns input.begin() + 1
- * thrust::upper_bound(thrust::device, input.begin(), input.end(), 2, thrust::less<int>()); // returns input.begin() + 2
- * thrust::upper_bound(thrust::device, input.begin(), input.end(), 3, thrust::less<int>()); // returns input.begin() + 2
- * thrust::upper_bound(thrust::device, input.begin(), input.end(), 8, thrust::less<int>()); // returns input.end()
- * thrust::upper_bound(thrust::device, input.begin(), input.end(), 9, thrust::less<int>()); // returns input.end()
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/upper_bound.html
- * \see \p lower_bound
- * \see \p equal_range
- * \see \p binary_search
- */
-template<typename DerivedPolicy, typename ForwardIterator, typename T, typename StrictWeakOrdering>
-__host__ __device__
-ForwardIterator upper_bound(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                            ForwardIterator first,
-                            ForwardIterator last,
-                            const T &value,
-                            StrictWeakOrdering comp);
-
-/*! \p upper_bound is a version of binary search: it attempts to find
- * the element value in an ordered range [first, last).
- * Specifically, it returns the last position where value could be
- * inserted without violating the ordering. This version of
- * \p upper_bound uses function object \c comp for comparison and returns
- * the furthermost iterator \c i in [first, last) such that,
- * for every iterator \c j in [first, i), comp(value, *j)
- * is \c false.
- *
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param value The value to be searched.
- * \param comp The comparison operator.
- * \return The furthermost iterator \c i, such that comp(value, *j) is \c false for every iterator \c j in [first, i).
- *
- * \tparam ForwardIterator is a model of Forward Iterator.
- * \tparam T is comparable to \p ForwardIterator's \c value_type.
- * \tparam StrictWeakOrdering is a model of Strict Weak Ordering.
- *
- * The following code snippet demonstrates how to use \p upper_bound
- * to search for values in an ordered range.
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * #include <thrust/functional.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::upper_bound(input.begin(), input.end(), 0, thrust::less<int>()); // returns input.begin() + 1
- * thrust::upper_bound(input.begin(), input.end(), 1, thrust::less<int>()); // returns input.begin() + 1
- * thrust::upper_bound(input.begin(), input.end(), 2, thrust::less<int>()); // returns input.begin() + 2
- * thrust::upper_bound(input.begin(), input.end(), 3, thrust::less<int>()); // returns input.begin() + 2
- * thrust::upper_bound(input.begin(), input.end(), 8, thrust::less<int>()); // returns input.end()
- * thrust::upper_bound(input.begin(), input.end(), 9, thrust::less<int>()); // returns input.end()
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/upper_bound.html
- * \see \p lower_bound
- * \see \p equal_range
- * \see \p binary_search
- */
-template<typename ForwardIterator, typename T, typename StrictWeakOrdering>
-ForwardIterator upper_bound(ForwardIterator first,
-                            ForwardIterator last,
-                            const T& value,
-                            StrictWeakOrdering comp);
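Taken together, `lower_bound` and `upper_bound` bracket the run of elements equivalent to the query, so their difference counts duplicates. A short sketch (illustrative, not from the header above):

```cpp
// Sketch: counting duplicates by subtracting the two bounds.
#include <thrust/binary_search.h>
#include <thrust/device_vector.h>
#include <cstddef>
#include <iostream>

int main()
{
    int data[7] = {0, 2, 2, 2, 5, 7, 8};
    thrust::device_vector<int> input(data, data + 7);

    std::ptrdiff_t lo = thrust::lower_bound(input.begin(), input.end(), 2) - input.begin(); // 1
    std::ptrdiff_t hi = thrust::upper_bound(input.begin(), input.end(), 2) - input.begin(); // 4
    std::cout << "2 occurs " << (hi - lo) << " times\n"; // prints 3
    return 0;
}
```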
-/*! \p binary_search is a version of binary search: it attempts to find
- * the element value in an ordered range [first, last).
- * It returns \c true if an element that is equivalent to \c value
- * is present in [first, last) and \c false if no such element
- * exists. Specifically, this version returns \c true if and only if
- * there exists an iterator \c i in [first, last) such that
- * *i < value and value < *i are both \c false.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param value The value to be searched.
- * \return \c true if an equivalent element exists in [first, last), otherwise \c false.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam ForwardIterator is a model of Forward Iterator.
- * \tparam LessThanComparable is a model of LessThanComparable.
- *
- * The following code snippet demonstrates how to use \p binary_search
- * to search for values in an ordered range using the \p thrust::device execution policy for parallelization:
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * #include <thrust/execution_policy.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::binary_search(thrust::device, input.begin(), input.end(), 0); // returns true
- * thrust::binary_search(thrust::device, input.begin(), input.end(), 1); // returns false
- * thrust::binary_search(thrust::device, input.begin(), input.end(), 2); // returns true
- * thrust::binary_search(thrust::device, input.begin(), input.end(), 3); // returns false
- * thrust::binary_search(thrust::device, input.begin(), input.end(), 8); // returns true
- * thrust::binary_search(thrust::device, input.begin(), input.end(), 9); // returns false
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/binary_search.html
- * \see \p lower_bound
- * \see \p upper_bound
- * \see \p equal_range
- */
-template<typename DerivedPolicy, typename ForwardIterator, typename LessThanComparable>
-__host__ __device__
-bool binary_search(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                   ForwardIterator first,
-                   ForwardIterator last,
-                   const LessThanComparable& value);
-
-
-/*! \p binary_search is a version of binary search: it attempts to find
- * the element value in an ordered range [first, last).
- * It returns \c true if an element that is equivalent to \c value
- * is present in [first, last) and \c false if no such element
- * exists. Specifically, this version returns \c true if and only if
- * there exists an iterator \c i in [first, last) such that
- * *i < value and value < *i are both \c false.
- *
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param value The value to be searched.
- * \return \c true if an equivalent element exists in [first, last), otherwise \c false.
- *
- * \tparam ForwardIterator is a model of Forward Iterator.
- * \tparam LessThanComparable is a model of LessThanComparable.
- *
- * The following code snippet demonstrates how to use \p binary_search
- * to search for values in an ordered range.
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::binary_search(input.begin(), input.end(), 0); // returns true
- * thrust::binary_search(input.begin(), input.end(), 1); // returns false
- * thrust::binary_search(input.begin(), input.end(), 2); // returns true
- * thrust::binary_search(input.begin(), input.end(), 3); // returns false
- * thrust::binary_search(input.begin(), input.end(), 8); // returns true
- * thrust::binary_search(input.begin(), input.end(), 9); // returns false
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/binary_search.html
- * \see \p lower_bound
- * \see \p upper_bound
- * \see \p equal_range
- */
-template<typename ForwardIterator, typename LessThanComparable>
-bool binary_search(ForwardIterator first,
-                   ForwardIterator last,
-                   const LessThanComparable& value);
-
-
-/*! \p binary_search is a version of binary search: it attempts to find
- * the element value in an ordered range [first, last).
- * It returns \c true if an element that is equivalent to \c value
- * is present in [first, last) and \c false if no such element
- * exists. Specifically, this version returns \c true if and only if
- * there exists an iterator \c i in [first, last) such that
- * comp(*i, value) and comp(value, *i) are both \c false.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param value The value to be searched.
- * \param comp The comparison operator.
- * \return \c true if an equivalent element exists in [first, last), otherwise \c false.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam ForwardIterator is a model of Forward Iterator.
- * \tparam T is comparable to \p ForwardIterator's \c value_type.
- * \tparam StrictWeakOrdering is a model of Strict Weak Ordering.
- *
- * The following code snippet demonstrates how to use \p binary_search
- * to search for values in an ordered range using the \p thrust::device execution policy for parallelization:
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * #include <thrust/functional.h>
- * #include <thrust/execution_policy.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::binary_search(thrust::device, input.begin(), input.end(), 0, thrust::less<int>()); // returns true
- * thrust::binary_search(thrust::device, input.begin(), input.end(), 1, thrust::less<int>()); // returns false
- * thrust::binary_search(thrust::device, input.begin(), input.end(), 2, thrust::less<int>()); // returns true
- * thrust::binary_search(thrust::device, input.begin(), input.end(), 3, thrust::less<int>()); // returns false
- * thrust::binary_search(thrust::device, input.begin(), input.end(), 8, thrust::less<int>()); // returns true
- * thrust::binary_search(thrust::device, input.begin(), input.end(), 9, thrust::less<int>()); // returns false
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/binary_search.html
- * \see \p lower_bound
- * \see \p upper_bound
- * \see \p equal_range
- */
-template<typename DerivedPolicy, typename ForwardIterator, typename T, typename StrictWeakOrdering>
-__host__ __device__
-bool binary_search(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                   ForwardIterator first,
-                   ForwardIterator last,
-                   const T& value,
-                   StrictWeakOrdering comp);
-
-
-/*! \p binary_search is a version of binary search: it attempts to find
- * the element value in an ordered range [first, last).
- * It returns \c true if an element that is equivalent to \c value
- * is present in [first, last) and \c false if no such element
- * exists. Specifically, this version returns \c true if and only if
- * there exists an iterator \c i in [first, last) such that
- * comp(*i, value) and comp(value, *i) are both \c false.
- *
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param value The value to be searched.
- * \param comp The comparison operator.
- * \return \c true if an equivalent element exists in [first, last), otherwise \c false.
- *
- * \tparam ForwardIterator is a model of Forward Iterator.
- * \tparam T is comparable to \p ForwardIterator's \c value_type.
- * \tparam StrictWeakOrdering is a model of Strict Weak Ordering.
- *
- * The following code snippet demonstrates how to use \p binary_search
- * to search for values in an ordered range.
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * #include <thrust/functional.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::binary_search(input.begin(), input.end(), 0, thrust::less<int>()); // returns true
- * thrust::binary_search(input.begin(), input.end(), 1, thrust::less<int>()); // returns false
- * thrust::binary_search(input.begin(), input.end(), 2, thrust::less<int>()); // returns true
- * thrust::binary_search(input.begin(), input.end(), 3, thrust::less<int>()); // returns false
- * thrust::binary_search(input.begin(), input.end(), 8, thrust::less<int>()); // returns true
- * thrust::binary_search(input.begin(), input.end(), 9, thrust::less<int>()); // returns false
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/binary_search.html
- * \see \p lower_bound
- * \see \p upper_bound
- * \see \p equal_range
- */
-template<typename ForwardIterator, typename T, typename StrictWeakOrdering>
-bool binary_search(ForwardIterator first,
-                   ForwardIterator last,
-                   const T& value,
-                   StrictWeakOrdering comp);
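When a comparator is supplied, the range must be sorted by that same ordering. A hedged sketch (names and data are illustrative) searching a descending range with `thrust::greater<int>`:

```cpp
// Sketch: binary_search with a comparator over a descending-sorted range.
#include <thrust/binary_search.h>
#include <thrust/device_vector.h>
#include <thrust/functional.h>
#include <iostream>

int main()
{
    int data[5] = {8, 7, 5, 2, 0}; // sorted descending
    thrust::device_vector<int> input(data, data + 5);

    // The same ordering that sorted the data is passed to the search.
    bool found = thrust::binary_search(input.begin(), input.end(), 5, thrust::greater<int>());
    std::cout << std::boolalpha << found << "\n"; // prints true
    return 0;
}
```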
-/*! \p equal_range is a version of binary search: it attempts to find
- * the element value in an ordered range [first, last). The
- * value returned by \p equal_range is essentially a combination of
- * the values returned by \p lower_bound and \p upper_bound: it returns
- * a \p pair of iterators \c i and \c j such that \c i is the first
- * position where value could be inserted without violating the
- * ordering and \c j is the last position where value could be inserted
- * without violating the ordering. It follows that every element in the
- * range [i, j) is equivalent to value, and that
- * [i, j) is the largest subrange of [first, last) that
- * has this property.
- *
- * This version of \p equal_range returns a \p pair of iterators
- * [i, j), where \c i is the furthermost iterator in
- * [first, last) such that, for every iterator \c k in
- * [first, i), *k < value. \c j is the furthermost
- * iterator in [first, last) such that, for every iterator
- * \c k in [first, j), value < *k is \c false.
- * For every iterator \c k in [i, j), neither
- * value < *k nor *k < value is \c true.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param value The value to be searched.
- * \return A \p pair of iterators [i, j) that define the range of equivalent elements.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam ForwardIterator is a model of Forward Iterator.
- * \tparam LessThanComparable is a model of LessThanComparable.
- *
- * The following code snippet demonstrates how to use \p equal_range
- * to search for values in an ordered range using the \p thrust::device execution policy for parallelization:
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * #include <thrust/execution_policy.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::equal_range(thrust::device, input.begin(), input.end(), 0); // returns [input.begin(), input.begin() + 1)
- * thrust::equal_range(thrust::device, input.begin(), input.end(), 1); // returns [input.begin() + 1, input.begin() + 1)
- * thrust::equal_range(thrust::device, input.begin(), input.end(), 2); // returns [input.begin() + 1, input.begin() + 2)
- * thrust::equal_range(thrust::device, input.begin(), input.end(), 3); // returns [input.begin() + 2, input.begin() + 2)
- * thrust::equal_range(thrust::device, input.begin(), input.end(), 8); // returns [input.begin() + 4, input.end())
- * thrust::equal_range(thrust::device, input.begin(), input.end(), 9); // returns [input.end(), input.end())
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/equal_range.html
- * \see \p lower_bound
- * \see \p upper_bound
- * \see \p binary_search
- */
-template<typename DerivedPolicy, typename ForwardIterator, typename LessThanComparable>
-__host__ __device__
-thrust::pair<ForwardIterator, ForwardIterator>
-equal_range(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-            ForwardIterator first,
-            ForwardIterator last,
-            const LessThanComparable& value);
-
-
-/*! \p equal_range is a version of binary search: it attempts to find
- * the element value in an ordered range [first, last). The
- * value returned by \p equal_range is essentially a combination of
- * the values returned by \p lower_bound and \p upper_bound: it returns
- * a \p pair of iterators \c i and \c j such that \c i is the first
- * position where value could be inserted without violating the
- * ordering and \c j is the last position where value could be inserted
- * without violating the ordering. It follows that every element in the
- * range [i, j) is equivalent to value, and that
- * [i, j) is the largest subrange of [first, last) that
- * has this property.
- *
- * This version of \p equal_range returns a \p pair of iterators
- * [i, j), where \c i is the furthermost iterator in
- * [first, last) such that, for every iterator \c k in
- * [first, i), *k < value. \c j is the furthermost
- * iterator in [first, last) such that, for every iterator
- * \c k in [first, j), value < *k is \c false.
- * For every iterator \c k in [i, j), neither
- * value < *k nor *k < value is \c true.
- *
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param value The value to be searched.
- * \return A \p pair of iterators [i, j) that define the range of equivalent elements.
- *
- * \tparam ForwardIterator is a model of Forward Iterator.
- * \tparam LessThanComparable is a model of LessThanComparable.
- *
- * The following code snippet demonstrates how to use \p equal_range
- * to search for values in an ordered range.
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::equal_range(input.begin(), input.end(), 0); // returns [input.begin(), input.begin() + 1)
- * thrust::equal_range(input.begin(), input.end(), 1); // returns [input.begin() + 1, input.begin() + 1)
- * thrust::equal_range(input.begin(), input.end(), 2); // returns [input.begin() + 1, input.begin() + 2)
- * thrust::equal_range(input.begin(), input.end(), 3); // returns [input.begin() + 2, input.begin() + 2)
- * thrust::equal_range(input.begin(), input.end(), 8); // returns [input.begin() + 4, input.end())
- * thrust::equal_range(input.begin(), input.end(), 9); // returns [input.end(), input.end())
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/equal_range.html
- * \see \p lower_bound
- * \see \p upper_bound
- * \see \p binary_search
- */
-template<typename ForwardIterator, typename LessThanComparable>
-thrust::pair<ForwardIterator, ForwardIterator>
-equal_range(ForwardIterator first,
-            ForwardIterator last,
-            const LessThanComparable& value);
-
-
-/*! \p equal_range is a version of binary search: it attempts to find
- * the element value in an ordered range [first, last). The
- * value returned by \p equal_range is essentially a combination of
- * the values returned by \p lower_bound and \p upper_bound: it returns
- * a \p pair of iterators \c i and \c j such that \c i is the first
- * position where value could be inserted without violating the
- * ordering and \c j is the last position where value could be inserted
- * without violating the ordering. It follows that every element in the
- * range [i, j) is equivalent to value, and that
- * [i, j) is the largest subrange of [first, last) that
- * has this property.
- *
- * This version of \p equal_range returns a \p pair of iterators
- * [i, j). \c i is the furthermost iterator in
- * [first, last) such that, for every iterator \c k in
- * [first, i), comp(*k, value) is \c true.
- * \c j is the furthermost iterator in [first, last) such
- * that, for every iterator \c k in [first, j),
- * comp(value, *k) is \c false. For every iterator \c k
- * in [i, j), neither comp(value, *k) nor
- * comp(*k, value) is \c true.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param value The value to be searched.
- * \param comp The comparison operator.
- * \return A \p pair of iterators [i, j) that define the range of equivalent elements.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam ForwardIterator is a model of Forward Iterator.
- * \tparam T is comparable to \p ForwardIterator's \c value_type.
- * \tparam StrictWeakOrdering is a model of Strict Weak Ordering.
- *
- * The following code snippet demonstrates how to use \p equal_range
- * to search for values in an ordered range using the \p thrust::device execution policy for parallelization:
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * #include <thrust/functional.h>
- * #include <thrust/execution_policy.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::equal_range(thrust::device, input.begin(), input.end(), 0, thrust::less<int>()); // returns [input.begin(), input.begin() + 1)
- * thrust::equal_range(thrust::device, input.begin(), input.end(), 1, thrust::less<int>()); // returns [input.begin() + 1, input.begin() + 1)
- * thrust::equal_range(thrust::device, input.begin(), input.end(), 2, thrust::less<int>()); // returns [input.begin() + 1, input.begin() + 2)
- * thrust::equal_range(thrust::device, input.begin(), input.end(), 3, thrust::less<int>()); // returns [input.begin() + 2, input.begin() + 2)
- * thrust::equal_range(thrust::device, input.begin(), input.end(), 8, thrust::less<int>()); // returns [input.begin() + 4, input.end())
- * thrust::equal_range(thrust::device, input.begin(), input.end(), 9, thrust::less<int>()); // returns [input.end(), input.end())
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/equal_range.html
- * \see \p lower_bound
- * \see \p upper_bound
- * \see \p binary_search
- */
-template<typename DerivedPolicy, typename ForwardIterator, typename T, typename StrictWeakOrdering>
-__host__ __device__
-thrust::pair<ForwardIterator, ForwardIterator>
-equal_range(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-            ForwardIterator first,
-            ForwardIterator last,
-            const T& value,
-            StrictWeakOrdering comp);
-
-
-/*! \p equal_range is a version of binary search: it attempts to find
- * the element value in an ordered range [first, last). The
- * value returned by \p equal_range is essentially a combination of
- * the values returned by \p lower_bound and \p upper_bound: it returns
- * a \p pair of iterators \c i and \c j such that \c i is the first
- * position where value could be inserted without violating the
- * ordering and \c j is the last position where value could be inserted
- * without violating the ordering. It follows that every element in the
- * range [i, j) is equivalent to value, and that
- * [i, j) is the largest subrange of [first, last) that
- * has this property.
- *
- * This version of \p equal_range returns a \p pair of iterators
- * [i, j). \c i is the furthermost iterator in
- * [first, last) such that, for every iterator \c k in
- * [first, i), comp(*k, value) is \c true.
- * \c j is the furthermost iterator in [first, last) such
- * that, for every iterator \c k in [first, j),
- * comp(value, *k) is \c false. For every iterator \c k
- * in [i, j), neither comp(value, *k) nor
- * comp(*k, value) is \c true.
- *
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param value The value to be searched.
- * \param comp The comparison operator.
- * \return A \p pair of iterators [i, j) that define the range of equivalent elements.
- *
- * \tparam ForwardIterator is a model of Forward Iterator.
- * \tparam T is comparable to \p ForwardIterator's \c value_type.
- * \tparam StrictWeakOrdering is a model of Strict Weak Ordering.
- *
- * The following code snippet demonstrates how to use \p equal_range
- * to search for values in an ordered range.
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * #include <thrust/functional.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::equal_range(input.begin(), input.end(), 0, thrust::less<int>()); // returns [input.begin(), input.begin() + 1)
- * thrust::equal_range(input.begin(), input.end(), 1, thrust::less<int>()); // returns [input.begin() + 1, input.begin() + 1)
- * thrust::equal_range(input.begin(), input.end(), 2, thrust::less<int>()); // returns [input.begin() + 1, input.begin() + 2)
- * thrust::equal_range(input.begin(), input.end(), 3, thrust::less<int>()); // returns [input.begin() + 2, input.begin() + 2)
- * thrust::equal_range(input.begin(), input.end(), 8, thrust::less<int>()); // returns [input.begin() + 4, input.end())
- * thrust::equal_range(input.begin(), input.end(), 9, thrust::less<int>()); // returns [input.end(), input.end())
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/equal_range.html
- * \see \p lower_bound
- * \see \p upper_bound
- * \see \p binary_search
- */
-template<typename ForwardIterator, typename T, typename StrictWeakOrdering>
-thrust::pair<ForwardIterator, ForwardIterator>
-equal_range(ForwardIterator first,
-            ForwardIterator last,
-            const T& value,
-            StrictWeakOrdering comp);
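`equal_range` computes both bounds in a single call. A small sketch (illustrative values) that reads the returned `thrust::pair` to count occurrences:

```cpp
// Sketch: the distance between the two members of the returned pair is the
// number of elements equivalent to the query.
#include <thrust/binary_search.h>
#include <thrust/device_vector.h>
#include <thrust/pair.h>
#include <iostream>

int main()
{
    int data[6] = {0, 2, 2, 5, 7, 8};
    thrust::device_vector<int> input(data, data + 6);

    typedef thrust::device_vector<int>::iterator Iter;
    thrust::pair<Iter, Iter> r = thrust::equal_range(input.begin(), input.end(), 2);
    std::cout << "count of 2: " << (r.second - r.first) << "\n"; // prints 2
    return 0;
}
```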
-/*! \addtogroup vectorized_binary_search Vectorized Searches
- *  \ingroup binary_search
- *  \{
- */
-
-
-//////////////////////
-// Vector Functions //
-//////////////////////
-
-
-/*! \p lower_bound is a vectorized version of binary search: for each
- * iterator \c v in [values_first, values_last) it attempts to
- * find the value *v in an ordered range [first, last).
- * Specifically, it returns the index of the first position where value could
- * be inserted without violating the ordering.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param values_first The beginning of the search values sequence.
- * \param values_last The end of the search values sequence.
- * \param result The beginning of the output sequence.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam ForwardIterator is a model of Forward Iterator.
- * \tparam InputIterator is a model of Input Iterator.
- *         and \c InputIterator's \c value_type is LessThanComparable.
- * \tparam OutputIterator is a model of Output Iterator.
- *         and \c ForwardIterator's difference_type is convertible to \c OutputIterator's \c value_type.
- *
- * \pre The ranges [first,last) and [result, result + (last - first)) shall not overlap.
- *
- * The following code snippet demonstrates how to use \p lower_bound
- * to search for multiple values in an ordered range using the \p thrust::device execution policy for
- * parallelization:
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * #include <thrust/execution_policy.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::device_vector<int> values(6);
- * values[0] = 0;
- * values[1] = 1;
- * values[2] = 2;
- * values[3] = 3;
- * values[4] = 8;
- * values[5] = 9;
- *
- * thrust::device_vector<unsigned int> output(6);
- *
- * thrust::lower_bound(thrust::device,
- *                     input.begin(), input.end(),
- *                     values.begin(), values.end(),
- *                     output.begin());
- *
- * // output is now [0, 1, 1, 2, 4, 5]
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/lower_bound.html
- * \see \p upper_bound
- * \see \p equal_range
- * \see \p binary_search
- */
-template<typename DerivedPolicy, typename ForwardIterator, typename InputIterator, typename OutputIterator>
-__host__ __device__
-OutputIterator lower_bound(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                           ForwardIterator first,
-                           ForwardIterator last,
-                           InputIterator values_first,
-                           InputIterator values_last,
-                           OutputIterator result);
-
-
-/*! \p lower_bound is a vectorized version of binary search: for each
- * iterator \c v in [values_first, values_last) it attempts to
- * find the value *v in an ordered range [first, last).
- * Specifically, it returns the index of the first position where value could
- * be inserted without violating the ordering.
- *
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param values_first The beginning of the search values sequence.
- * \param values_last The end of the search values sequence.
- * \param result The beginning of the output sequence.
- *
- * \tparam ForwardIterator is a model of Forward Iterator.
- * \tparam InputIterator is a model of Input Iterator.
- *         and \c InputIterator's \c value_type is LessThanComparable.
- * \tparam OutputIterator is a model of Output Iterator.
- *         and \c ForwardIterator's difference_type is convertible to \c OutputIterator's \c value_type.
- *
- * \pre The ranges [first,last) and [result, result + (last - first)) shall not overlap.
- *
- * The following code snippet demonstrates how to use \p lower_bound
- * to search for multiple values in an ordered range.
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::device_vector<int> values(6);
- * values[0] = 0;
- * values[1] = 1;
- * values[2] = 2;
- * values[3] = 3;
- * values[4] = 8;
- * values[5] = 9;
- *
- * thrust::device_vector<unsigned int> output(6);
- *
- * thrust::lower_bound(input.begin(), input.end(),
- *                     values.begin(), values.end(),
- *                     output.begin());
- *
- * // output is now [0, 1, 1, 2, 4, 5]
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/lower_bound.html
- * \see \p upper_bound
- * \see \p equal_range
- * \see \p binary_search
- */
-template<typename ForwardIterator, typename InputIterator, typename OutputIterator>
-OutputIterator lower_bound(ForwardIterator first,
-                           ForwardIterator last,
-                           InputIterator values_first,
-                           InputIterator values_last,
-                           OutputIterator result);
-
-
-/*! \p lower_bound is a vectorized version of binary search: for each
- * iterator \c v in [values_first, values_last) it attempts to
- * find the value *v in an ordered range [first, last).
- * Specifically, it returns the index of the first position where value could
- * be inserted without violating the ordering. This version of
- * \p lower_bound uses function object \c comp for comparison.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param values_first The beginning of the search values sequence.
- * \param values_last The end of the search values sequence.
- * \param result The beginning of the output sequence.
- * \param comp The comparison operator.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam ForwardIterator is a model of Forward Iterator.
- * \tparam InputIterator is a model of Input Iterator.
- *         and \c InputIterator's \c value_type is comparable to \p ForwardIterator's \c value_type.
- * \tparam OutputIterator is a model of Output Iterator.
- *         and \c ForwardIterator's difference_type is convertible to \c OutputIterator's \c value_type.
- * \tparam StrictWeakOrdering is a model of Strict Weak Ordering.
- *
- * \pre The ranges [first,last) and [result, result + (last - first)) shall not overlap.
- *
- * The following code snippet demonstrates how to use \p lower_bound
- * to search for multiple values in an ordered range using the \p thrust::device execution policy for
- * parallelization:
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * #include <thrust/functional.h>
- * #include <thrust/execution_policy.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::device_vector<int> values(6);
- * values[0] = 0;
- * values[1] = 1;
- * values[2] = 2;
- * values[3] = 3;
- * values[4] = 8;
- * values[5] = 9;
- *
- * thrust::device_vector<unsigned int> output(6);
- *
- * thrust::lower_bound(thrust::device,
- *                     input.begin(), input.end(),
- *                     values.begin(), values.end(),
- *                     output.begin(),
- *                     thrust::less<int>());
- *
- * // output is now [0, 1, 1, 2, 4, 5]
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/lower_bound.html
- * \see \p upper_bound
- * \see \p equal_range
- * \see \p binary_search
- */
-template<typename DerivedPolicy, typename ForwardIterator, typename InputIterator, typename OutputIterator, typename StrictWeakOrdering>
-__host__ __device__
-OutputIterator lower_bound(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                           ForwardIterator first,
-                           ForwardIterator last,
-                           InputIterator values_first,
-                           InputIterator values_last,
-                           OutputIterator result,
-                           StrictWeakOrdering comp);
-
-
-/*! \p lower_bound is a vectorized version of binary search: for each
- * iterator \c v in [values_first, values_last) it attempts to
- * find the value *v in an ordered range [first, last).
- * Specifically, it returns the index of the first position where value could
- * be inserted without violating the ordering. This version of
- * \p lower_bound uses function object \c comp for comparison.
- *
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param values_first The beginning of the search values sequence.
- * \param values_last The end of the search values sequence.
- * \param result The beginning of the output sequence.
- * \param comp The comparison operator.
- *
- * \tparam ForwardIterator is a model of Forward Iterator.
- * \tparam InputIterator is a model of Input Iterator.
- *         and \c InputIterator's \c value_type is comparable to \p ForwardIterator's \c value_type.
- * \tparam OutputIterator is a model of Output Iterator.
- *         and \c ForwardIterator's difference_type is convertible to \c OutputIterator's \c value_type.
- * \tparam StrictWeakOrdering is a model of Strict Weak Ordering.
- *
- * \pre The ranges [first,last) and [result, result + (last - first)) shall not overlap.
- *
- * The following code snippet demonstrates how to use \p lower_bound
- * to search for multiple values in an ordered range.
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * #include <thrust/functional.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::device_vector<int> values(6);
- * values[0] = 0;
- * values[1] = 1;
- * values[2] = 2;
- * values[3] = 3;
- * values[4] = 8;
- * values[5] = 9;
- *
- * thrust::device_vector<unsigned int> output(6);
- *
- * thrust::lower_bound(input.begin(), input.end(),
- *                     values.begin(), values.end(),
- *                     output.begin(),
- *                     thrust::less<int>());
- *
- * // output is now [0, 1, 1, 2, 4, 5]
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/lower_bound.html
- * \see \p upper_bound
- * \see \p equal_range
- * \see \p binary_search
- */
-template<typename ForwardIterator, typename InputIterator, typename OutputIterator, typename StrictWeakOrdering>
-OutputIterator lower_bound(ForwardIterator first,
-                           ForwardIterator last,
-                           InputIterator values_first,
-                           InputIterator values_last,
-                           OutputIterator result,
-                           StrictWeakOrdering comp);
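The vectorized overloads answer a whole batch of queries in one call, writing one insertion index per query value. A minimal sketch (the data and query values are illustrative):

```cpp
// Sketch: batched lower_bound, one index written per query.
#include <thrust/binary_search.h>
#include <thrust/copy.h>
#include <thrust/device_vector.h>
#include <iostream>
#include <iterator>

int main()
{
    int d[5] = {0, 2, 5, 7, 8};
    int q[3] = {1, 5, 9};
    thrust::device_vector<int> input(d, d + 5);
    thrust::device_vector<int> values(q, q + 3);
    thrust::device_vector<unsigned int> output(3);

    thrust::lower_bound(input.begin(), input.end(),
                        values.begin(), values.end(), output.begin());

    thrust::copy(output.begin(), output.end(),
                 std::ostream_iterator<unsigned int>(std::cout, " ")); // prints 1 2 5
    return 0;
}
```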
-/*! \p upper_bound is a vectorized version of binary search: for each
- * iterator \c v in [values_first, values_last) it attempts to
- * find the value *v in an ordered range [first, last).
- * Specifically, it returns the index of the last position where value could
- * be inserted without violating the ordering.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param values_first The beginning of the search values sequence.
- * \param values_last The end of the search values sequence.
- * \param result The beginning of the output sequence.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam ForwardIterator is a model of Forward Iterator.
- * \tparam InputIterator is a model of Input Iterator.
- *         and \c InputIterator's \c value_type is LessThanComparable.
- * \tparam OutputIterator is a model of Output Iterator.
- *         and \c ForwardIterator's difference_type is convertible to \c OutputIterator's \c value_type.
- *
- * \pre The ranges [first,last) and [result, result + (last - first)) shall not overlap.
- *
- * The following code snippet demonstrates how to use \p upper_bound
- * to search for multiple values in an ordered range using the \p thrust::device execution policy for
- * parallelization:
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * #include <thrust/execution_policy.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::device_vector<int> values(6);
- * values[0] = 0;
- * values[1] = 1;
- * values[2] = 2;
- * values[3] = 3;
- * values[4] = 8;
- * values[5] = 9;
- *
- * thrust::device_vector<unsigned int> output(6);
- *
- * thrust::upper_bound(thrust::device,
- *                     input.begin(), input.end(),
- *                     values.begin(), values.end(),
- *                     output.begin());
- *
- * // output is now [1, 1, 2, 2, 5, 5]
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/upper_bound.html
- * \see \p lower_bound
- * \see \p equal_range
- * \see \p binary_search
- */
-template<typename DerivedPolicy, typename ForwardIterator, typename InputIterator, typename OutputIterator>
-__host__ __device__
-OutputIterator upper_bound(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                           ForwardIterator first,
-                           ForwardIterator last,
-                           InputIterator values_first,
-                           InputIterator values_last,
-                           OutputIterator result);
-
-
-/*! \p upper_bound is a vectorized version of binary search: for each
- * iterator \c v in [values_first, values_last) it attempts to
- * find the value *v in an ordered range [first, last).
- * Specifically, it returns the index of the last position where value could
- * be inserted without violating the ordering.
- *
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param values_first The beginning of the search values sequence.
- * \param values_last The end of the search values sequence.
- * \param result The beginning of the output sequence.
- *
- * \tparam ForwardIterator is a model of Forward Iterator.
- * \tparam InputIterator is a model of Input Iterator.
- *         and \c InputIterator's \c value_type is LessThanComparable.
- * \tparam OutputIterator is a model of Output Iterator.
- *         and \c ForwardIterator's difference_type is convertible to \c OutputIterator's \c value_type.
- *
- * \pre The ranges [first,last) and [result, result + (last - first)) shall not overlap.
- *
- * The following code snippet demonstrates how to use \p upper_bound
- * to search for multiple values in an ordered range.
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::device_vector<int> values(6);
- * values[0] = 0;
- * values[1] = 1;
- * values[2] = 2;
- * values[3] = 3;
- * values[4] = 8;
- * values[5] = 9;
- *
- * thrust::device_vector<unsigned int> output(6);
- *
- * thrust::upper_bound(input.begin(), input.end(),
- *                     values.begin(), values.end(),
- *                     output.begin());
- *
- * // output is now [1, 1, 2, 2, 5, 5]
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/upper_bound.html
- * \see \p lower_bound
- * \see \p equal_range
- * \see \p binary_search
- */
-template<typename ForwardIterator, typename InputIterator, typename OutputIterator>
-OutputIterator upper_bound(ForwardIterator first,
-                           ForwardIterator last,
-                           InputIterator values_first,
-                           InputIterator values_last,
-                           OutputIterator result);
-
-
-/*! \p upper_bound is a vectorized version of binary search: for each
- * iterator \c v in [values_first, values_last) it attempts to
- * find the value *v in an ordered range [first, last).
- * Specifically, it returns the index of the last position where value could
- * be inserted without violating the ordering. This version of
- * \p upper_bound uses function object \c comp for comparison.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param values_first The beginning of the search values sequence.
- * \param values_last The end of the search values sequence.
- * \param result The beginning of the output sequence.
- * \param comp The comparison operator.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam ForwardIterator is a model of Forward Iterator.
- * \tparam InputIterator is a model of Input Iterator.
- *         and \c InputIterator's \c value_type is comparable to \p ForwardIterator's \c value_type.
- * \tparam OutputIterator is a model of Output Iterator.
- *         and \c ForwardIterator's difference_type is convertible to \c OutputIterator's \c value_type.
- * \tparam StrictWeakOrdering is a model of Strict Weak Ordering.
- *
- * \pre The ranges [first,last) and [result, result + (last - first)) shall not overlap.
- *
- * The following code snippet demonstrates how to use \p upper_bound
- * to search for multiple values in an ordered range using the \p thrust::device execution policy for
- * parallelization:
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * #include <thrust/functional.h>
- * #include <thrust/execution_policy.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::device_vector<int> values(6);
- * values[0] = 0;
- * values[1] = 1;
- * values[2] = 2;
- * values[3] = 3;
- * values[4] = 8;
- * values[5] = 9;
- *
- * thrust::device_vector<unsigned int> output(6);
- *
- * thrust::upper_bound(thrust::device,
- *                     input.begin(), input.end(),
- *                     values.begin(), values.end(),
- *                     output.begin(),
- *                     thrust::less<int>());
- *
- * // output is now [1, 1, 2, 2, 5, 5]
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/upper_bound.html
- * \see \p lower_bound
- * \see \p equal_range
- * \see \p binary_search
- */
-template<typename DerivedPolicy, typename ForwardIterator, typename InputIterator, typename OutputIterator, typename StrictWeakOrdering>
-__host__ __device__
-OutputIterator upper_bound(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                           ForwardIterator first,
-                           ForwardIterator last,
-                           InputIterator values_first,
-                           InputIterator values_last,
-                           OutputIterator result,
-                           StrictWeakOrdering comp);
-
-
-/*! \p upper_bound is a vectorized version of binary search: for each
- * iterator \c v in [values_first, values_last) it attempts to
- * find the value *v in an ordered range [first, last).
- * Specifically, it returns the index of the last position where value could
- * be inserted without violating the ordering. This version of
- * \p upper_bound uses function object \c comp for comparison.
- *
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param values_first The beginning of the search values sequence.
- * \param values_last The end of the search values sequence.
- * \param result The beginning of the output sequence.
- * \param comp The comparison operator.
- *
- * \tparam ForwardIterator is a model of Forward Iterator.
- * \tparam InputIterator is a model of Input Iterator.
- *         and \c InputIterator's \c value_type is comparable to \p ForwardIterator's \c value_type.
- * \tparam OutputIterator is a model of Output Iterator.
- *         and \c ForwardIterator's difference_type is convertible to \c OutputIterator's \c value_type.
- * \tparam StrictWeakOrdering is a model of Strict Weak Ordering.
- *
- * \pre The ranges [first,last) and [result, result + (last - first)) shall not overlap.
- *
- * The following code snippet demonstrates how to use \p upper_bound
- * to search for multiple values in an ordered range.
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * #include <thrust/functional.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::device_vector<int> values(6);
- * values[0] = 0;
- * values[1] = 1;
- * values[2] = 2;
- * values[3] = 3;
- * values[4] = 8;
- * values[5] = 9;
- *
- * thrust::device_vector<unsigned int> output(6);
- *
- * thrust::upper_bound(input.begin(), input.end(),
- *                     values.begin(), values.end(),
- *                     output.begin(),
- *                     thrust::less<int>());
- *
- * // output is now [1, 1, 2, 2, 5, 5]
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/upper_bound.html
- * \see \p lower_bound
- * \see \p equal_range
- * \see \p binary_search
- */
-template<typename ForwardIterator, typename InputIterator, typename OutputIterator, typename StrictWeakOrdering>
-OutputIterator upper_bound(ForwardIterator first,
-                           ForwardIterator last,
-                           InputIterator values_first,
-                           InputIterator values_last,
-                           OutputIterator result,
-                           StrictWeakOrdering comp);
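A common application of the vectorized `upper_bound` is binning already-sorted data against bucket boundaries, which yields a cumulative histogram. A sketch under that assumption (the data must be sorted; edges and names are illustrative):

```cpp
// Sketch: cumulative[i] = number of elements <= edges[i] in the sorted data.
#include <thrust/binary_search.h>
#include <thrust/copy.h>
#include <thrust/device_vector.h>
#include <iostream>
#include <iterator>

int main()
{
    int d[8]     = {0, 1, 2, 2, 5, 7, 8, 9};
    int edges[3] = {2, 5, 8}; // upper edges of three buckets
    thrust::device_vector<int> data(d, d + 8);
    thrust::device_vector<int> bounds(edges, edges + 3);
    thrust::device_vector<unsigned int> cumulative(3);

    thrust::upper_bound(data.begin(), data.end(),
                        bounds.begin(), bounds.end(), cumulative.begin());

    thrust::copy(cumulative.begin(), cumulative.end(),
                 std::ostream_iterator<unsigned int>(std::cout, " ")); // prints 4 5 7
    return 0;
}
```

Per-bucket counts then follow from adjacent differences of the cumulative values.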
\p binary_search is a vectorized version of binary search: for each - * iterator \c v in [values_first, values_last) it attempts to - * find the value *v in an ordered range [first, last). - * It returns \c true if an element that is equivalent to \c value - * is present in [first, last) and \c false if no such element - * exists. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first The beginning of the ordered sequence. - * \param last The end of the ordered sequence. - * \param values_first The beginning of the search values sequence. - * \param values_last The end of the search values sequence. - * \param result The beginning of the output sequence. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam ForwardIterator is a model of Forward Iterator. - * \tparam InputIterator is a model of Input Iterator. - * and \c InputIterator's \c value_type is LessThanComparable. - * \tparam OutputIterator is a model of Output Iterator. - * and bool is convertible to \c OutputIterator's \c value_type. - * - * \pre The ranges [first,last) and [result, result + (last - first)) shall not overlap. - * - * The following code snippet demonstrates how to use \p binary_search - * to search for multiple values in a ordered range using the \p thrust::device execution policy for - * parallelization: - * - * \code - * #include - * #include - * #include - * ... - * thrust::device_vector input(5); - * - * input[0] = 0; - * input[1] = 2; - * input[2] = 5; - * input[3] = 7; - * input[4] = 8; - * - * thrust::device_vector values(6); - * values[0] = 0; - * values[1] = 1; - * values[2] = 2; - * values[3] = 3; - * values[4] = 8; - * values[5] = 9; - * - * thrust::device_vector output(6); - * - * thrust::binary_search(thrust::device, - * input.begin(), input.end(), - * values.begin(), values.end(), - * output.begin()); - * - * // output is now [true, false, true, false, true, false] - * \endcode - * - * \see http://www.sgi.com/tech/stl/binary_search.html - * \see \p lower_bound - * \see \p upper_bound - * \see \p equal_range - */ -template -__host__ __device__ -OutputIterator binary_search(const thrust::detail::execution_policy_base &exec, - ForwardIterator first, - ForwardIterator last, - InputIterator values_first, - InputIterator values_last, - OutputIterator result); - - -/*! \p binary_search is a vectorized version of binary search: for each - * iterator \c v in [values_first, values_last) it attempts to - * find the value *v in an ordered range [first, last). - * It returns \c true if an element that is equivalent to \c value - * is present in [first, last) and \c false if no such element - * exists. - * - * \param first The beginning of the ordered sequence. - * \param last The end of the ordered sequence. - * \param values_first The beginning of the search values sequence. - * \param values_last The end of the search values sequence. - * \param result The beginning of the output sequence. - * - * \tparam ForwardIterator is a model of Forward Iterator. - * \tparam InputIterator is a model of Input Iterator. - * and \c InputIterator's \c value_type is LessThanComparable. - * \tparam OutputIterator is a model of Output Iterator. - * and bool is convertible to \c OutputIterator's \c value_type. - * - * \pre The ranges [first,last) and [result, result + (last - first)) shall not overlap. 
- * - * The following code snippet demonstrates how to use \p binary_search - * to search for multiple values in a ordered range. - * - * \code - * #include - * #include - * ... - * thrust::device_vector input(5); - * - * input[0] = 0; - * input[1] = 2; - * input[2] = 5; - * input[3] = 7; - * input[4] = 8; - * - * thrust::device_vector values(6); - * values[0] = 0; - * values[1] = 1; - * values[2] = 2; - * values[3] = 3; - * values[4] = 8; - * values[5] = 9; - * - * thrust::device_vector output(6); - * - * thrust::binary_search(input.begin(), input.end(), - * values.begin(), values.end(), - * output.begin()); - * - * // output is now [true, false, true, false, true, false] - * \endcode - * - * \see http://www.sgi.com/tech/stl/binary_search.html - * \see \p lower_bound - * \see \p upper_bound - * \see \p equal_range - */ -template -OutputIterator binary_search(ForwardIterator first, - ForwardIterator last, - InputIterator values_first, - InputIterator values_last, - OutputIterator result); - - -/*! \p binary_search is a vectorized version of binary search: for each - * iterator \c v in [values_first, values_last) it attempts to - * find the value *v in an ordered range [first, last). - * It returns \c true if an element that is equivalent to \c value - * is present in [first, last) and \c false if no such element - * exists. This version of \p binary_search uses function object - * \c comp for comparison. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first The beginning of the ordered sequence. - * \param last The end of the ordered sequence. - * \param values_first The beginning of the search values sequence. - * \param values_last The end of the search values sequence. - * \param result The beginning of the output sequence. - * \param comp The comparison operator. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam ForwardIterator is a model of Forward Iterator. - * \tparam InputIterator is a model of Input Iterator. - * and \c InputIterator's \c value_type is LessThanComparable. - * \tparam OutputIterator is a model of Output Iterator. - * and bool is convertible to \c OutputIterator's \c value_type. - * \tparam StrictWeakOrdering is a model of Strict Weak Ordering. - * - * \pre The ranges [first,last) and [result, result + (last - first)) shall not overlap. - * - * The following code snippet demonstrates how to use \p binary_search - * to search for multiple values in a ordered range using the \p thrust::device execution policy for - * parallelization: - * - * \code - * #include - * #include - * #include - * #include - * ... 
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::device_vector<int> values(6);
- * values[0] = 0;
- * values[1] = 1;
- * values[2] = 2;
- * values[3] = 3;
- * values[4] = 8;
- * values[5] = 9;
- *
- * thrust::device_vector<bool> output(6);
- *
- * thrust::binary_search(thrust::device,
- *                       input.begin(), input.end(),
- *                       values.begin(), values.end(),
- *                       output.begin(),
- *                       thrust::less<int>());
- *
- * // output is now [true, false, true, false, true, false]
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/binary_search.html
- * \see \p lower_bound
- * \see \p upper_bound
- * \see \p equal_range
- */
-template<typename DerivedPolicy,
-         typename ForwardIterator,
-         typename InputIterator,
-         typename OutputIterator,
-         typename StrictWeakOrdering>
-__host__ __device__
-OutputIterator binary_search(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                             ForwardIterator first,
-                             ForwardIterator last,
-                             InputIterator values_first,
-                             InputIterator values_last,
-                             OutputIterator result,
-                             StrictWeakOrdering comp);
-
-
-/*! \p binary_search is a vectorized version of binary search: for each
- * iterator \c v in [values_first, values_last) it attempts to
- * find the value *v in an ordered range [first, last).
- * It returns \c true if an element that is equivalent to \c *v
- * is present in [first, last) and \c false if no such element
- * exists. This version of \p binary_search uses function object
- * \c comp for comparison.
- *
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param values_first The beginning of the search values sequence.
- * \param values_last The end of the search values sequence.
- * \param result The beginning of the output sequence.
- * \param comp The comparison operator.
- *
- * \tparam ForwardIterator is a model of Forward Iterator.
- * \tparam InputIterator is a model of Input Iterator,
- *         and \c InputIterator's \c value_type is LessThanComparable.
- * \tparam OutputIterator is a model of Output Iterator,
- *         and bool is convertible to \c OutputIterator's \c value_type.
- * \tparam StrictWeakOrdering is a model of Strict Weak Ordering.
- *
- * \pre The ranges [first,last) and [result, result + (last - first)) shall not overlap.
- *
- * The following code snippet demonstrates how to use \p binary_search
- * to search for multiple values in an ordered range.
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * #include <thrust/functional.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::device_vector<int> values(6);
- * values[0] = 0;
- * values[1] = 1;
- * values[2] = 2;
- * values[3] = 3;
- * values[4] = 8;
- * values[5] = 9;
- *
- * thrust::device_vector<bool> output(6);
- *
- * thrust::binary_search(input.begin(), input.end(),
- *                       values.begin(), values.end(),
- *                       output.begin(),
- *                       thrust::less<int>());
- *
- * // output is now [true, false, true, false, true, false]
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/binary_search.html
- * \see \p lower_bound
- * \see \p upper_bound
- * \see \p equal_range
- */
-template<typename ForwardIterator,
-         typename InputIterator,
-         typename OutputIterator,
-         typename StrictWeakOrdering>
-OutputIterator binary_search(ForwardIterator first,
-                             ForwardIterator last,
-                             InputIterator values_first,
-                             InputIterator values_last,
-                             OutputIterator result,
-                             StrictWeakOrdering comp);
-
-
-/*! \} // end vectorized_binary_search
- */
-
-
-/*! \} // end binary_search
- */
-
-
-/*!
\} // end searching
- */
-
-
-} // end namespace thrust
-
-#include <thrust/detail/binary_search.inl>
-
diff --git a/spaces/manavisrani07/gradio-lipsync-wav2lip/basicsr/archs/swinir_arch.py b/spaces/manavisrani07/gradio-lipsync-wav2lip/basicsr/archs/swinir_arch.py
deleted file mode 100644
index 3917fa2c7408e1f5b55b9930c643a9af920a4d81..0000000000000000000000000000000000000000
--- a/spaces/manavisrani07/gradio-lipsync-wav2lip/basicsr/archs/swinir_arch.py
+++ /dev/null
@@ -1,956 +0,0 @@
-# Modified from https://github.com/JingyunLiang/SwinIR
-# SwinIR: Image Restoration Using Swin Transformer, https://arxiv.org/abs/2108.10257
-# Originally Written by Ze Liu, Modified by Jingyun Liang.
-
-import math
-import torch
-import torch.nn as nn
-import torch.utils.checkpoint as checkpoint
-
-from basicsr.utils.registry import ARCH_REGISTRY
-from .arch_util import to_2tuple, trunc_normal_
-
-
-def drop_path(x, drop_prob: float = 0., training: bool = False):
-    """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
-
-    From: https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/layers/drop.py
-    """
-    if drop_prob == 0. or not training:
-        return x
-    keep_prob = 1 - drop_prob
-    shape = (x.shape[0], ) + (1, ) * (x.ndim - 1)  # work with diff dim tensors, not just 2D ConvNets
-    random_tensor = keep_prob + torch.rand(shape, dtype=x.dtype, device=x.device)
-    random_tensor.floor_()  # binarize
-    output = x.div(keep_prob) * random_tensor
-    return output
-
-
-class DropPath(nn.Module):
-    """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
-
-    From: https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/layers/drop.py
-    """
-
-    def __init__(self, drop_prob=None):
-        super(DropPath, self).__init__()
-        self.drop_prob = drop_prob
-
-    def forward(self, x):
-        return drop_path(x, self.drop_prob, self.training)
-
-
-class Mlp(nn.Module):
-
-    def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
-        super().__init__()
-        out_features = out_features or in_features
-        hidden_features = hidden_features or in_features
-        self.fc1 = nn.Linear(in_features, hidden_features)
-        self.act = act_layer()
-        self.fc2 = nn.Linear(hidden_features, out_features)
-        self.drop = nn.Dropout(drop)
-
-    def forward(self, x):
-        x = self.fc1(x)
-        x = self.act(x)
-        x = self.drop(x)
-        x = self.fc2(x)
-        x = self.drop(x)
-        return x
-
-
-def window_partition(x, window_size):
-    """
-    Args:
-        x: (b, h, w, c)
-        window_size (int): window size
-
-    Returns:
-        windows: (num_windows*b, window_size, window_size, c)
-    """
-    b, h, w, c = x.shape
-    x = x.view(b, h // window_size, window_size, w // window_size, window_size, c)
-    windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, c)
-    return windows
-
-
-def window_reverse(windows, window_size, h, w):
-    """
-    Args:
-        windows: (num_windows*b, window_size, window_size, c)
-        window_size (int): Window size
-        h (int): Height of image
-        w (int): Width of image
-
-    Returns:
-        x: (b, h, w, c)
-    """
-    b = int(windows.shape[0] / (h * w / window_size / window_size))
-    x = windows.view(b, h // window_size, w // window_size, window_size, window_size, -1)
-    x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(b, h, w, -1)
-    return x
-
-
-class WindowAttention(nn.Module):
-    r""" Window based multi-head self attention (W-MSA) module with relative position bias.
-    It supports both shifted and non-shifted windows.
-
-    Args:
-        dim (int): Number of input channels.
- window_size (tuple[int]): The height and width of the window. - num_heads (int): Number of attention heads. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set - attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0 - proj_drop (float, optional): Dropout ratio of output. Default: 0.0 - """ - - def __init__(self, dim, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0., proj_drop=0.): - - super().__init__() - self.dim = dim - self.window_size = window_size # Wh, Ww - self.num_heads = num_heads - head_dim = dim // num_heads - self.scale = qk_scale or head_dim**-0.5 - - # define a parameter table of relative position bias - self.relative_position_bias_table = nn.Parameter( - torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads)) # 2*Wh-1 * 2*Ww-1, nH - - # get pair-wise relative position index for each token inside the window - coords_h = torch.arange(self.window_size[0]) - coords_w = torch.arange(self.window_size[1]) - coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww - coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww - relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww - relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2 - relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0 - relative_coords[:, :, 1] += self.window_size[1] - 1 - relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1 - relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww - self.register_buffer('relative_position_index', relative_position_index) - - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - - self.proj_drop = nn.Dropout(proj_drop) - - trunc_normal_(self.relative_position_bias_table, std=.02) - self.softmax = nn.Softmax(dim=-1) - - def forward(self, x, mask=None): - """ - Args: - x: input features with shape of (num_windows*b, n, c) - mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None - """ - b_, n, c = x.shape - qkv = self.qkv(x).reshape(b_, n, 3, self.num_heads, c // self.num_heads).permute(2, 0, 3, 1, 4) - q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple) - - q = q * self.scale - attn = (q @ k.transpose(-2, -1)) - - relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)].view( - self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH - relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww - attn = attn + relative_position_bias.unsqueeze(0) - - if mask is not None: - nw = mask.shape[0] - attn = attn.view(b_ // nw, nw, self.num_heads, n, n) + mask.unsqueeze(1).unsqueeze(0) - attn = attn.view(-1, self.num_heads, n, n) - attn = self.softmax(attn) - else: - attn = self.softmax(attn) - - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(b_, n, c) - x = self.proj(x) - x = self.proj_drop(x) - return x - - def extra_repr(self) -> str: - return f'dim={self.dim}, window_size={self.window_size}, num_heads={self.num_heads}' - - def flops(self, n): - # calculate flops for 1 window with token length of n - flops = 0 - # qkv = self.qkv(x) - flops += n * self.dim * 3 * self.dim - # attn = (q @ k.transpose(-2, -1)) - flops += 
self.num_heads * n * (self.dim // self.num_heads) * n
-        # x = (attn @ v)
-        flops += self.num_heads * n * n * (self.dim // self.num_heads)
-        # x = self.proj(x)
-        flops += n * self.dim * self.dim
-        return flops
-
-
-class SwinTransformerBlock(nn.Module):
-    r""" Swin Transformer Block.
-
-    Args:
-        dim (int): Number of input channels.
-        input_resolution (tuple[int]): Input resolution.
-        num_heads (int): Number of attention heads.
-        window_size (int): Window size.
-        shift_size (int): Shift size for SW-MSA.
-        mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
-        qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
-        qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
-        drop (float, optional): Dropout rate. Default: 0.0
-        attn_drop (float, optional): Attention dropout rate. Default: 0.0
-        drop_path (float, optional): Stochastic depth rate. Default: 0.0
-        act_layer (nn.Module, optional): Activation layer. Default: nn.GELU
-        norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
-    """
-
-    def __init__(self,
-                 dim,
-                 input_resolution,
-                 num_heads,
-                 window_size=7,
-                 shift_size=0,
-                 mlp_ratio=4.,
-                 qkv_bias=True,
-                 qk_scale=None,
-                 drop=0.,
-                 attn_drop=0.,
-                 drop_path=0.,
-                 act_layer=nn.GELU,
-                 norm_layer=nn.LayerNorm):
-        super().__init__()
-        self.dim = dim
-        self.input_resolution = input_resolution
-        self.num_heads = num_heads
-        self.window_size = window_size
-        self.shift_size = shift_size
-        self.mlp_ratio = mlp_ratio
-        if min(self.input_resolution) <= self.window_size:
-            # if window size is larger than input resolution, we don't partition windows
-            self.shift_size = 0
-            self.window_size = min(self.input_resolution)
-        assert 0 <= self.shift_size < self.window_size, 'shift_size must be in the range [0, window_size)'
-
-        self.norm1 = norm_layer(dim)
-        self.attn = WindowAttention(
-            dim,
-            window_size=to_2tuple(self.window_size),
-            num_heads=num_heads,
-            qkv_bias=qkv_bias,
-            qk_scale=qk_scale,
-            attn_drop=attn_drop,
-            proj_drop=drop)
-
-        self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
-        self.norm2 = norm_layer(dim)
-        mlp_hidden_dim = int(dim * mlp_ratio)
-        self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
-
-        if self.shift_size > 0:
-            attn_mask = self.calculate_mask(self.input_resolution)
-        else:
-            attn_mask = None
-
-        self.register_buffer('attn_mask', attn_mask)
-
-    def calculate_mask(self, x_size):
-        # calculate attention mask for SW-MSA
-        h, w = x_size
-        img_mask = torch.zeros((1, h, w, 1))  # 1 h w 1
-        h_slices = (slice(0, -self.window_size), slice(-self.window_size,
-                                                       -self.shift_size), slice(-self.shift_size, None))
-        w_slices = (slice(0, -self.window_size), slice(-self.window_size,
-                                                       -self.shift_size), slice(-self.shift_size, None))
-        cnt = 0
-        for h in h_slices:
-            for w in w_slices:
-                img_mask[:, h, w, :] = cnt
-                cnt += 1
-
-        mask_windows = window_partition(img_mask, self.window_size)  # nw, window_size, window_size, 1
-        mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
-        attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
-        attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0))
-
-        return attn_mask
-
-    def forward(self, x, x_size):
-        h, w = x_size
-        b, _, c = x.shape
-        # assert seq_len == h * w, "input feature has wrong size"
-
-        shortcut = x
-        x = self.norm1(x)
-        x = x.view(b, h, w, c)
-
-        # cyclic shift
-        if self.shift_size > 0:
-            shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2))
-        else:
-            shifted_x = x
-
-        # partition windows
-        x_windows = window_partition(shifted_x, self.window_size)  # nw*b, window_size, window_size, c
-        x_windows = x_windows.view(-1, self.window_size * self.window_size, c)  # nw*b, window_size*window_size, c
-
-        # W-MSA/SW-MSA (recompute the mask when testing on image sizes that differ from the training resolution)
-        if self.input_resolution == x_size:
-            attn_windows = self.attn(x_windows, mask=self.attn_mask)  # nw*b, window_size*window_size, c
-        else:
-            attn_windows = self.attn(x_windows, mask=self.calculate_mask(x_size).to(x.device))
-
-        # merge windows
-        attn_windows = attn_windows.view(-1, self.window_size, self.window_size, c)
-        shifted_x = window_reverse(attn_windows, self.window_size, h, w)  # b h' w' c
-
-        # reverse cyclic shift
-        if self.shift_size > 0:
-            x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2))
-        else:
-            x = shifted_x
-        x = x.view(b, h * w, c)
-
-        # FFN
-        x = shortcut + self.drop_path(x)
-        x = x + self.drop_path(self.mlp(self.norm2(x)))
-
-        return x
-
-    def extra_repr(self) -> str:
-        return (f'dim={self.dim}, input_resolution={self.input_resolution}, num_heads={self.num_heads}, '
-                f'window_size={self.window_size}, shift_size={self.shift_size}, mlp_ratio={self.mlp_ratio}')
-
-    def flops(self):
-        flops = 0
-        h, w = self.input_resolution
-        # norm1
-        flops += self.dim * h * w
-        # W-MSA/SW-MSA
-        nw = h * w / self.window_size / self.window_size
-        flops += nw * self.attn.flops(self.window_size * self.window_size)
-        # mlp
-        flops += 2 * h * w * self.dim * self.dim * self.mlp_ratio
-        # norm2
-        flops += self.dim * h * w
-        return flops
-
-
-class PatchMerging(nn.Module):
-    r""" Patch Merging Layer.
-
-    Args:
-        input_resolution (tuple[int]): Resolution of input feature.
-        dim (int): Number of input channels.
-        norm_layer (nn.Module, optional): Normalization layer.
Default: nn.LayerNorm - """ - - def __init__(self, input_resolution, dim, norm_layer=nn.LayerNorm): - super().__init__() - self.input_resolution = input_resolution - self.dim = dim - self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False) - self.norm = norm_layer(4 * dim) - - def forward(self, x): - """ - x: b, h*w, c - """ - h, w = self.input_resolution - b, seq_len, c = x.shape - assert seq_len == h * w, 'input feature has wrong size' - assert h % 2 == 0 and w % 2 == 0, f'x size ({h}*{w}) are not even.' - - x = x.view(b, h, w, c) - - x0 = x[:, 0::2, 0::2, :] # b h/2 w/2 c - x1 = x[:, 1::2, 0::2, :] # b h/2 w/2 c - x2 = x[:, 0::2, 1::2, :] # b h/2 w/2 c - x3 = x[:, 1::2, 1::2, :] # b h/2 w/2 c - x = torch.cat([x0, x1, x2, x3], -1) # b h/2 w/2 4*c - x = x.view(b, -1, 4 * c) # b h/2*w/2 4*c - - x = self.norm(x) - x = self.reduction(x) - - return x - - def extra_repr(self) -> str: - return f'input_resolution={self.input_resolution}, dim={self.dim}' - - def flops(self): - h, w = self.input_resolution - flops = h * w * self.dim - flops += (h // 2) * (w // 2) * 4 * self.dim * 2 * self.dim - return flops - - -class BasicLayer(nn.Module): - """ A basic Swin Transformer layer for one stage. - - Args: - dim (int): Number of input channels. - input_resolution (tuple[int]): Input resolution. - depth (int): Number of blocks. - num_heads (int): Number of attention heads. - window_size (int): Local window size. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0 - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. 
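-
-        Example (illustrative; the sizes below are arbitrary choices, not taken from the original code):
-
-            >>> layer = BasicLayer(dim=96, input_resolution=(56, 56), depth=2,
-            ...                    num_heads=3, window_size=7)
-            >>> x = torch.randn(1, 56 * 56, 96)   # tokens in (b, h*w, c) layout
-            >>> layer(x, (56, 56)).shape          # downsample=None, so the shape is preserved
-            torch.Size([1, 3136, 96])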
- """ - - def __init__(self, - dim, - input_resolution, - depth, - num_heads, - window_size, - mlp_ratio=4., - qkv_bias=True, - qk_scale=None, - drop=0., - attn_drop=0., - drop_path=0., - norm_layer=nn.LayerNorm, - downsample=None, - use_checkpoint=False): - - super().__init__() - self.dim = dim - self.input_resolution = input_resolution - self.depth = depth - self.use_checkpoint = use_checkpoint - - # build blocks - self.blocks = nn.ModuleList([ - SwinTransformerBlock( - dim=dim, - input_resolution=input_resolution, - num_heads=num_heads, - window_size=window_size, - shift_size=0 if (i % 2 == 0) else window_size // 2, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop=drop, - attn_drop=attn_drop, - drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path, - norm_layer=norm_layer) for i in range(depth) - ]) - - # patch merging layer - if downsample is not None: - self.downsample = downsample(input_resolution, dim=dim, norm_layer=norm_layer) - else: - self.downsample = None - - def forward(self, x, x_size): - for blk in self.blocks: - if self.use_checkpoint: - x = checkpoint.checkpoint(blk, x) - else: - x = blk(x, x_size) - if self.downsample is not None: - x = self.downsample(x) - return x - - def extra_repr(self) -> str: - return f'dim={self.dim}, input_resolution={self.input_resolution}, depth={self.depth}' - - def flops(self): - flops = 0 - for blk in self.blocks: - flops += blk.flops() - if self.downsample is not None: - flops += self.downsample.flops() - return flops - - -class RSTB(nn.Module): - """Residual Swin Transformer Block (RSTB). - - Args: - dim (int): Number of input channels. - input_resolution (tuple[int]): Input resolution. - depth (int): Number of blocks. - num_heads (int): Number of attention heads. - window_size (int): Local window size. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0 - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. - img_size: Input image size. - patch_size: Patch size. - resi_connection: The convolutional block before residual connection. 
- """ - - def __init__(self, - dim, - input_resolution, - depth, - num_heads, - window_size, - mlp_ratio=4., - qkv_bias=True, - qk_scale=None, - drop=0., - attn_drop=0., - drop_path=0., - norm_layer=nn.LayerNorm, - downsample=None, - use_checkpoint=False, - img_size=224, - patch_size=4, - resi_connection='1conv'): - super(RSTB, self).__init__() - - self.dim = dim - self.input_resolution = input_resolution - - self.residual_group = BasicLayer( - dim=dim, - input_resolution=input_resolution, - depth=depth, - num_heads=num_heads, - window_size=window_size, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop=drop, - attn_drop=attn_drop, - drop_path=drop_path, - norm_layer=norm_layer, - downsample=downsample, - use_checkpoint=use_checkpoint) - - if resi_connection == '1conv': - self.conv = nn.Conv2d(dim, dim, 3, 1, 1) - elif resi_connection == '3conv': - # to save parameters and memory - self.conv = nn.Sequential( - nn.Conv2d(dim, dim // 4, 3, 1, 1), nn.LeakyReLU(negative_slope=0.2, inplace=True), - nn.Conv2d(dim // 4, dim // 4, 1, 1, 0), nn.LeakyReLU(negative_slope=0.2, inplace=True), - nn.Conv2d(dim // 4, dim, 3, 1, 1)) - - self.patch_embed = PatchEmbed( - img_size=img_size, patch_size=patch_size, in_chans=0, embed_dim=dim, norm_layer=None) - - self.patch_unembed = PatchUnEmbed( - img_size=img_size, patch_size=patch_size, in_chans=0, embed_dim=dim, norm_layer=None) - - def forward(self, x, x_size): - return self.patch_embed(self.conv(self.patch_unembed(self.residual_group(x, x_size), x_size))) + x - - def flops(self): - flops = 0 - flops += self.residual_group.flops() - h, w = self.input_resolution - flops += h * w * self.dim * self.dim * 9 - flops += self.patch_embed.flops() - flops += self.patch_unembed.flops() - - return flops - - -class PatchEmbed(nn.Module): - r""" Image to Patch Embedding - - Args: - img_size (int): Image size. Default: 224. - patch_size (int): Patch token size. Default: 4. - in_chans (int): Number of input image channels. Default: 3. - embed_dim (int): Number of linear projection output channels. Default: 96. - norm_layer (nn.Module, optional): Normalization layer. Default: None - """ - - def __init__(self, img_size=224, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None): - super().__init__() - img_size = to_2tuple(img_size) - patch_size = to_2tuple(patch_size) - patches_resolution = [img_size[0] // patch_size[0], img_size[1] // patch_size[1]] - self.img_size = img_size - self.patch_size = patch_size - self.patches_resolution = patches_resolution - self.num_patches = patches_resolution[0] * patches_resolution[1] - - self.in_chans = in_chans - self.embed_dim = embed_dim - - if norm_layer is not None: - self.norm = norm_layer(embed_dim) - else: - self.norm = None - - def forward(self, x): - x = x.flatten(2).transpose(1, 2) # b Ph*Pw c - if self.norm is not None: - x = self.norm(x) - return x - - def flops(self): - flops = 0 - h, w = self.img_size - if self.norm is not None: - flops += h * w * self.embed_dim - return flops - - -class PatchUnEmbed(nn.Module): - r""" Image to Patch Unembedding - - Args: - img_size (int): Image size. Default: 224. - patch_size (int): Patch token size. Default: 4. - in_chans (int): Number of input image channels. Default: 3. - embed_dim (int): Number of linear projection output channels. Default: 96. - norm_layer (nn.Module, optional): Normalization layer. 
Default: None
-    """
-
-    def __init__(self, img_size=224, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None):
-        super().__init__()
-        img_size = to_2tuple(img_size)
-        patch_size = to_2tuple(patch_size)
-        patches_resolution = [img_size[0] // patch_size[0], img_size[1] // patch_size[1]]
-        self.img_size = img_size
-        self.patch_size = patch_size
-        self.patches_resolution = patches_resolution
-        self.num_patches = patches_resolution[0] * patches_resolution[1]
-
-        self.in_chans = in_chans
-        self.embed_dim = embed_dim
-
-    def forward(self, x, x_size):
-        x = x.transpose(1, 2).view(x.shape[0], self.embed_dim, x_size[0], x_size[1])  # b (h*w) c -> b c h w
-        return x
-
-    def flops(self):
-        flops = 0
-        return flops
-
-
-class Upsample(nn.Sequential):
-    """Upsample module.
-
-    Args:
-        scale (int): Scale factor. Supported scales: 2^n and 3.
-        num_feat (int): Channel number of intermediate features.
-    """
-
-    def __init__(self, scale, num_feat):
-        m = []
-        if (scale & (scale - 1)) == 0:  # scale = 2^n
-            for _ in range(int(math.log(scale, 2))):
-                m.append(nn.Conv2d(num_feat, 4 * num_feat, 3, 1, 1))
-                m.append(nn.PixelShuffle(2))
-        elif scale == 3:
-            m.append(nn.Conv2d(num_feat, 9 * num_feat, 3, 1, 1))
-            m.append(nn.PixelShuffle(3))
-        else:
-            raise ValueError(f'scale {scale} is not supported. Supported scales: 2^n and 3.')
-        super(Upsample, self).__init__(*m)
-
-
-class UpsampleOneStep(nn.Sequential):
-    """UpsampleOneStep module (the difference with Upsample is that it always only has 1conv + 1pixelshuffle).
-    Used in lightweight SR to save parameters.
-
-    Args:
-        scale (int): Scale factor. Supported scales: 2^n and 3.
-        num_feat (int): Channel number of intermediate features.
-        num_out_ch (int): Channel number of output features.
-    """
-
-    def __init__(self, scale, num_feat, num_out_ch, input_resolution=None):
-        self.num_feat = num_feat
-        self.input_resolution = input_resolution
-        m = []
-        m.append(nn.Conv2d(num_feat, (scale**2) * num_out_ch, 3, 1, 1))
-        m.append(nn.PixelShuffle(scale))
-        super(UpsampleOneStep, self).__init__(*m)
-
-    def flops(self):
-        h, w = self.input_resolution
-        flops = h * w * self.num_feat * 3 * 9
-        return flops
-
-
-@ARCH_REGISTRY.register()
-class SwinIR(nn.Module):
-    r""" SwinIR
-        A PyTorch impl of : `SwinIR: Image Restoration Using Swin Transformer`, based on Swin Transformer.
-
-    Args:
-        img_size (int | tuple(int)): Input image size. Default: 64
-        patch_size (int | tuple(int)): Patch size. Default: 1
-        in_chans (int): Number of input image channels. Default: 3
-        embed_dim (int): Patch embedding dimension. Default: 96
-        depths (tuple(int)): Depth of each Swin Transformer layer.
-        num_heads (tuple(int)): Number of attention heads in different layers.
-        window_size (int): Window size. Default: 7
-        mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4
-        qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True
-        qk_scale (float): Override default qk scale of head_dim ** -0.5 if set. Default: None
-        drop_rate (float): Dropout rate. Default: 0
-        attn_drop_rate (float): Attention dropout rate. Default: 0
-        drop_path_rate (float): Stochastic depth rate. Default: 0.1
-        norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm.
-        ape (bool): If True, add absolute position embedding to the patch embedding. Default: False
-        patch_norm (bool): If True, add normalization after patch embedding. Default: True
-        use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False
-        upscale: Upscale factor. 2/3/4/8 for image SR, 1 for denoising and compression artifact reduction
-        img_range: Image range. 1. or 255.
-        upsampler: The reconstruction module. 'pixelshuffle'/'pixelshuffledirect'/'nearest+conv'/None
-        resi_connection: The convolutional block before residual connection. '1conv'/'3conv'
-    """
-
-    def __init__(self,
-                 img_size=64,
-                 patch_size=1,
-                 in_chans=3,
-                 embed_dim=96,
-                 depths=(6, 6, 6, 6),
-                 num_heads=(6, 6, 6, 6),
-                 window_size=7,
-                 mlp_ratio=4.,
-                 qkv_bias=True,
-                 qk_scale=None,
-                 drop_rate=0.,
-                 attn_drop_rate=0.,
-                 drop_path_rate=0.1,
-                 norm_layer=nn.LayerNorm,
-                 ape=False,
-                 patch_norm=True,
-                 use_checkpoint=False,
-                 upscale=2,
-                 img_range=1.,
-                 upsampler='',
-                 resi_connection='1conv',
-                 **kwargs):
-        super(SwinIR, self).__init__()
-        num_in_ch = in_chans
-        num_out_ch = in_chans
-        num_feat = 64
-        self.img_range = img_range
-        if in_chans == 3:
-            rgb_mean = (0.4488, 0.4371, 0.4040)
-            self.mean = torch.Tensor(rgb_mean).view(1, 3, 1, 1)
-        else:
-            self.mean = torch.zeros(1, 1, 1, 1)
-        self.upscale = upscale
-        self.upsampler = upsampler
-
-        # ------------------------- 1, shallow feature extraction ------------------------- #
-        self.conv_first = nn.Conv2d(num_in_ch, embed_dim, 3, 1, 1)
-
-        # ------------------------- 2, deep feature extraction ------------------------- #
-        self.num_layers = len(depths)
-        self.embed_dim = embed_dim
-        self.ape = ape
-        self.patch_norm = patch_norm
-        self.num_features = embed_dim
-        self.mlp_ratio = mlp_ratio
-
-        # split image into non-overlapping patches
-        self.patch_embed = PatchEmbed(
-            img_size=img_size,
-            patch_size=patch_size,
-            in_chans=embed_dim,
-            embed_dim=embed_dim,
-            norm_layer=norm_layer if self.patch_norm else None)
-        num_patches = self.patch_embed.num_patches
-        patches_resolution = self.patch_embed.patches_resolution
-        self.patches_resolution = patches_resolution
-
-        # merge non-overlapping patches into image
-        self.patch_unembed = PatchUnEmbed(
-            img_size=img_size,
-            patch_size=patch_size,
-            in_chans=embed_dim,
-            embed_dim=embed_dim,
-            norm_layer=norm_layer if self.patch_norm else None)
-
-        # absolute position embedding
-        if self.ape:
-            self.absolute_pos_embed = nn.Parameter(torch.zeros(1, num_patches, embed_dim))
-            trunc_normal_(self.absolute_pos_embed, std=.02)
-
-        self.pos_drop = nn.Dropout(p=drop_rate)
-
-        # stochastic depth
-        dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))]  # stochastic depth decay rule
-
-        # build Residual Swin Transformer blocks (RSTB)
-        self.layers = nn.ModuleList()
-        for i_layer in range(self.num_layers):
-            layer = RSTB(
-                dim=embed_dim,
-                input_resolution=(patches_resolution[0], patches_resolution[1]),
-                depth=depths[i_layer],
-                num_heads=num_heads[i_layer],
-                window_size=window_size,
-                mlp_ratio=self.mlp_ratio,
-                qkv_bias=qkv_bias,
-                qk_scale=qk_scale,
-                drop=drop_rate,
-                attn_drop=attn_drop_rate,
-                drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])],  # no impact on SR results
-                norm_layer=norm_layer,
-                downsample=None,
-                use_checkpoint=use_checkpoint,
-                img_size=img_size,
-                patch_size=patch_size,
-                resi_connection=resi_connection)
-            self.layers.append(layer)
-        self.norm = norm_layer(self.num_features)
-
-        # build the last conv layer in deep feature extraction
-        if resi_connection == '1conv':
-            self.conv_after_body = nn.Conv2d(embed_dim, embed_dim, 3, 1, 1)
-        elif resi_connection == '3conv':
-            # to save parameters and memory
-            self.conv_after_body = nn.Sequential(
-                nn.Conv2d(embed_dim, embed_dim // 4, 3, 1, 1), nn.LeakyReLU(negative_slope=0.2, inplace=True),
-                nn.Conv2d(embed_dim // 4, embed_dim // 4, 1, 1, 0), nn.LeakyReLU(negative_slope=0.2, inplace=True),
-                nn.Conv2d(embed_dim // 4, embed_dim, 3, 1, 1))
-
-        # ------------------------- 3, high quality image reconstruction ------------------------- #
-        if self.upsampler == 'pixelshuffle':
-            # for classical SR
-            self.conv_before_upsample = nn.Sequential(
-                nn.Conv2d(embed_dim, num_feat, 3, 1, 1), nn.LeakyReLU(inplace=True))
-            self.upsample = Upsample(upscale, num_feat)
-            self.conv_last = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1)
-        elif self.upsampler == 'pixelshuffledirect':
-            # for lightweight SR (to save parameters)
-            self.upsample = UpsampleOneStep(upscale, embed_dim, num_out_ch,
-                                            (patches_resolution[0], patches_resolution[1]))
-        elif self.upsampler == 'nearest+conv':
-            # for real-world SR (less artifacts)
-            assert self.upscale == 4, 'only upscale 4 is supported for nearest+conv.'
-            self.conv_before_upsample = nn.Sequential(
-                nn.Conv2d(embed_dim, num_feat, 3, 1, 1), nn.LeakyReLU(inplace=True))
-            self.conv_up1 = nn.Conv2d(num_feat, num_feat, 3, 1, 1)
-            self.conv_up2 = nn.Conv2d(num_feat, num_feat, 3, 1, 1)
-            self.conv_hr = nn.Conv2d(num_feat, num_feat, 3, 1, 1)
-            self.conv_last = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1)
-            self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True)
-        else:
-            # for image denoising and JPEG compression artifact reduction
-            self.conv_last = nn.Conv2d(embed_dim, num_out_ch, 3, 1, 1)
-
-        self.apply(self._init_weights)
-
-    def _init_weights(self, m):
-        if isinstance(m, nn.Linear):
-            trunc_normal_(m.weight, std=.02)
-            if isinstance(m, nn.Linear) and m.bias is not None:
-                nn.init.constant_(m.bias, 0)
-        elif isinstance(m, nn.LayerNorm):
-            nn.init.constant_(m.bias, 0)
-            nn.init.constant_(m.weight, 1.0)
-
-    @torch.jit.ignore
-    def no_weight_decay(self):
-        return {'absolute_pos_embed'}
-
-    @torch.jit.ignore
-    def no_weight_decay_keywords(self):
-        return {'relative_position_bias_table'}
-
-    def forward_features(self, x):
-        x_size = (x.shape[2], x.shape[3])
-        x = self.patch_embed(x)
-        if self.ape:
-            x = x + self.absolute_pos_embed
-        x = self.pos_drop(x)
-
-        for layer in self.layers:
-            x = layer(x, x_size)
-
-        x = self.norm(x)  # b seq_len c
-        x = self.patch_unembed(x, x_size)
-
-        return x
-
-    def forward(self, x):
-        self.mean = self.mean.type_as(x)
-        x = (x - self.mean) * self.img_range
-
-        if self.upsampler == 'pixelshuffle':
-            # for classical SR
-            x = self.conv_first(x)
-            x = self.conv_after_body(self.forward_features(x)) + x
-            x = self.conv_before_upsample(x)
-            x = self.conv_last(self.upsample(x))
-        elif self.upsampler == 'pixelshuffledirect':
-            # for lightweight SR
-            x = self.conv_first(x)
-            x = self.conv_after_body(self.forward_features(x)) + x
-            x = self.upsample(x)
-        elif self.upsampler == 'nearest+conv':
-            # for real-world SR
-            x = self.conv_first(x)
-            x = self.conv_after_body(self.forward_features(x)) + x
-            x = self.conv_before_upsample(x)
-            x = self.lrelu(self.conv_up1(torch.nn.functional.interpolate(x, scale_factor=2, mode='nearest')))
-            x = self.lrelu(self.conv_up2(torch.nn.functional.interpolate(x, scale_factor=2, mode='nearest')))
-            x = self.conv_last(self.lrelu(self.conv_hr(x)))
-        else:
-            # for image denoising and JPEG compression artifact reduction
-            x_first = self.conv_first(x)
-            res = self.conv_after_body(self.forward_features(x_first)) + x_first
-            x = x + self.conv_last(res)
-
-        x = x / self.img_range + self.mean
-
-        return x
-
-    def flops(self):
-        flops = 0
-        h, w = self.patches_resolution
-        flops += h * w * 3 * self.embed_dim * 9
-        flops += self.patch_embed.flops()
-        for layer in self.layers:
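-            # each RSTB reports its own cost: its Swin blocks plus the 3x3 conv and patch embed/unembed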
-            flops += layer.flops()
-        flops += h * w * 3 * self.embed_dim * self.embed_dim
-        flops += self.upsample.flops()
-        return flops
-
-
-if __name__ == '__main__':
-    upscale = 4
-    window_size = 8
-    height = (1024 // upscale // window_size + 1) * window_size
-    width = (720 // upscale // window_size + 1) * window_size
-    model = SwinIR(
-        upscale=2,
-        img_size=(height, width),
-        window_size=window_size,
-        img_range=1.,
-        depths=[6, 6, 6, 6],
-        embed_dim=60,
-        num_heads=[6, 6, 6, 6],
-        mlp_ratio=2,
-        upsampler='pixelshuffledirect')
-    print(model)
-    print(height, width, model.flops() / 1e9)
-
-    x = torch.randn((1, 3, height, width))
-    x = model(x)
-    print(x.shape)
diff --git a/spaces/marlenezw/audio-driven-animations/MakeItTalk/doc/__init__.py b/spaces/marlenezw/audio-driven-animations/MakeItTalk/doc/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/matthoffner/starchat-ui/components/Chatbar/Chatbar.state.tsx b/spaces/matthoffner/starchat-ui/components/Chatbar/Chatbar.state.tsx
deleted file mode 100644
index bb9a21a298d858cfd2e9612cbcbc4c7e4bc26a19..0000000000000000000000000000000000000000
--- a/spaces/matthoffner/starchat-ui/components/Chatbar/Chatbar.state.tsx
+++ /dev/null
@@ -1,11 +0,0 @@
-import { Conversation } from '@/types/chat';
-
-export interface ChatbarInitialState {
-  searchTerm: string;
-  filteredConversations: Conversation[];
-}
-
-export const initialState: ChatbarInitialState = {
-  searchTerm: '',
-  filteredConversations: [],
-};
diff --git a/spaces/matthoffner/wizardcoder-ggml/main.py b/spaces/matthoffner/wizardcoder-ggml/main.py
deleted file mode 100644
index f43f3af4aca576ecb537cdeb494ed59d67de0c8e..0000000000000000000000000000000000000000
--- a/spaces/matthoffner/wizardcoder-ggml/main.py
+++ /dev/null
@@ -1,127 +0,0 @@
-import json
-from typing import List, Generator
-
-import fastapi
-import uvicorn
-from fastapi import HTTPException, Request
-from fastapi.responses import HTMLResponse
-from fastapi.middleware.cors import CORSMiddleware
-from sse_starlette.sse import EventSourceResponse
-from ctransformers import AutoModelForCausalLM
-from pydantic import BaseModel
-
-llm = AutoModelForCausalLM.from_pretrained("TheBloke/WizardCoder-15B-1.0-GGML",
-                                           model_file="WizardCoder-15B-1.0.ggmlv3.q5_0.bin",
-                                           model_type="starcoder",
-                                           threads=8)
-app = fastapi.FastAPI(title="🪄WizardCoder💫")
-app.add_middleware(
-    CORSMiddleware,
-    allow_origins=["*"],
-    allow_credentials=True,
-    allow_methods=["*"],
-    allow_headers=["*"],
-)
-
-@app.get("/")
-async def index():
-    html_content = """

-    <html>
-        <head>
-            <title>wizardcoder-ggml</title>
-        </head>
-        <body>
-            <h1>wizardcoder-ggml</h1>
-            <p><a href="/docs">FastAPI Docs</a></p>
-            <p>monacopilot</p>
-        </body>
-    </html>
- - - """ - return HTMLResponse(content=html_content, status_code=200) - -class ChatCompletionRequestV0(BaseModel): - prompt: str - -class Message(BaseModel): - role: str - content: str - -class ChatCompletionRequest(BaseModel): - messages: List[Message] - max_tokens: int = 250 - -@app.post("/v1/completions") -async def completion(request: ChatCompletionRequestV0, response_mode=None): - response = llm(request.prompt) - return response - -async def generate_response(chat_chunks, llm): - for chat_chunk in chat_chunks: - response = { - 'choices': [ - { - 'message': { - 'role': 'system', - 'content': llm.detokenize(chat_chunk) - }, - 'finish_reason': 'stop' if llm.is_eos_token(chat_chunk) else 'unknown' - } - ] - } - yield dict(data=json.dumps(response)) - yield dict(data="[DONE]") - -@app.post("/v1/chat/completions") -async def chat(request: ChatCompletionRequest): - combined_messages = ' '.join([message.content for message in request.messages]) - tokens = llm.tokenize(combined_messages) - - try: - chat_chunks = llm.generate(tokens) - except Exception as e: - raise HTTPException(status_code=500, detail=str(e)) - - return EventSourceResponse(generate_response(chat_chunks, llm)) - -async def stream_response(tokens, llm): - try: - iterator: Generator = llm.generate(tokens) - for chat_chunk in iterator: - response = { - 'choices': [ - { - 'message': { - 'role': 'system', - 'content': llm.detokenize(chat_chunk) - }, - 'finish_reason': 'stop' if llm.is_eos_token(chat_chunk) else 'unknown' - } - ] - } - yield dict(data=json.dumps(response)) - yield dict(data="[DONE]") - except Exception as e: - print(f"Exception in event publisher: {str(e)}") - -@app.post("/v2/chat/completions") -async def chatV2_endpoint(request: Request, body: ChatCompletionRequest): - combined_messages = ' '.join([message.content for message in body.messages]) - tokens = llm.tokenize(combined_messages) - - return EventSourceResponse(stream_response(tokens, llm)) - -@app.post("/v0/chat/completions") -async def chat(request: ChatCompletionRequestV0, response_mode=None): - tokens = llm.tokenize(request.prompt) - async def server_sent_events(chat_chunks, llm): - for chat_chunk in llm.generate(chat_chunks): - yield dict(data=json.dumps(llm.detokenize(chat_chunk))) - yield dict(data="[DONE]") - - return EventSourceResponse(server_sent_events(tokens, llm)) - -if __name__ == "__main__": - uvicorn.run(app, host="0.0.0.0", port=8000) diff --git a/spaces/mayerantoine/disaster-damage-classifier/app.py b/spaces/mayerantoine/disaster-damage-classifier/app.py deleted file mode 100644 index db001b816d98ae5a67a045576e9323f1106ec4d0..0000000000000000000000000000000000000000 --- a/spaces/mayerantoine/disaster-damage-classifier/app.py +++ /dev/null @@ -1,30 +0,0 @@ -import os -import gradio as gr -import numpy as np -import tensorflow as tf -from tensorflow.keras import models - -IMG_SIZE = 300 -class_names = ['none','mild','severe'] - -cwd = os.getcwd() -outpath= os.path.join(cwd,"model") -model_name = 'cross_event_ecuador_haiti_efficientnet_fine_tuned_1644086357.h5' -loaded_model = models.load_model(os.path.join(outpath,model_name)) - -def _classifier(inp): - img = np.asarray(tf.cast(inp, dtype=tf.float32)) * 1 / 255.0 - img = img.reshape((-1, IMG_SIZE, IMG_SIZE, 3)) - preds = loaded_model.predict(img).flatten() - return {class_names[i]:float(preds[i]) for i in range(len(class_names))} - -iface = gr.Interface(fn=_classifier, - title="Disaster damage assessment from social media image", - description="This simple app allow users to load an image 
-                     article="The severity of damage in an image is the extent of physical destruction shown in it. For this experiment we only consider three levels of damage: severe damage, mild damage and no damage (none). The model was trained using data from the Haiti, Ecuador and Nepal earthquakes and Google Images.",
-                     examples=['Haiti-Gingerbread-2.jpg', 'building_damage_100.jpg', 'building_damage_424.jpg'],
-                     inputs=gr.inputs.Image(shape=(IMG_SIZE, IMG_SIZE)),
-                     outputs=gr.outputs.Label()
-                     )
-
-iface.launch()
\ No newline at end of file
diff --git a/spaces/merve/MusicGen/app.py b/spaces/merve/MusicGen/app.py
deleted file mode 100644
index d5f9a13c3abb63fdc2965039a18c8f47633f382e..0000000000000000000000000000000000000000
--- a/spaces/merve/MusicGen/app.py
+++ /dev/null
@@ -1,23 +0,0 @@
-import gradio as gr
-from huggingface_hub import InferenceClient
-import numpy as np
-import os
-
-token = os.getenv("TOKEN")
-endpoint = os.getenv("ENDPOINT")
-
-client = InferenceClient(model=endpoint, token=token)
-
-
-def inference(prompt):
-    response = client.post(json={"inputs": prompt})
-
-    output = eval(response)[0]["generated_audio"]
-    return (32000, np.asarray(output[0]))
-
-
-title = "MusicGen Demo"
-description = "Try out a text prompt below 👇 and generate music! 🎶🎸🎤 "
-
-gr.Interface(fn=inference, inputs="text", outputs="audio", title=title, description=description,
-             examples=[["an alt rock song"]], cache_examples=True, theme="abidlabs/Lime").launch()
\ No newline at end of file
diff --git a/spaces/merve/anonymization/public/third_party/umap.js b/spaces/merve/anonymization/public/third_party/umap.js
deleted file mode 100644
index 13bb989b285114e7a79d0a213422997c19a3c2f0..0000000000000000000000000000000000000000
--- a/spaces/merve/anonymization/public/third_party/umap.js
+++ /dev/null
@@ -1,6864 +0,0 @@
-// https://github.com/pair-code/umap-js Copyright 2019 Google
-(function webpackUniversalModuleDefinition(root, factory) {
-	if(typeof exports === 'object' && typeof module === 'object')
-		module.exports = factory();
-	else if(typeof define === 'function' && define.amd)
-		define([], factory);
-	else {
-		var a = factory();
-		for(var i in a) (typeof exports === 'object' ?
exports : root)[i] = a[i]; - } -})(window, function() { -return /******/ (function(modules) { // webpackBootstrap -/******/ // The module cache -/******/ var installedModules = {}; -/******/ -/******/ // The require function -/******/ function __webpack_require__(moduleId) { -/******/ -/******/ // Check if module is in cache -/******/ if(installedModules[moduleId]) { -/******/ return installedModules[moduleId].exports; -/******/ } -/******/ // Create a new module (and put it into the cache) -/******/ var module = installedModules[moduleId] = { -/******/ i: moduleId, -/******/ l: false, -/******/ exports: {} -/******/ }; -/******/ -/******/ // Execute the module function -/******/ modules[moduleId].call(module.exports, module, module.exports, __webpack_require__); -/******/ -/******/ // Flag the module as loaded -/******/ module.l = true; -/******/ -/******/ // Return the exports of the module -/******/ return module.exports; -/******/ } -/******/ -/******/ -/******/ // expose the modules object (__webpack_modules__) -/******/ __webpack_require__.m = modules; -/******/ -/******/ // expose the module cache -/******/ __webpack_require__.c = installedModules; -/******/ -/******/ // define getter function for harmony exports -/******/ __webpack_require__.d = function(exports, name, getter) { -/******/ if(!__webpack_require__.o(exports, name)) { -/******/ Object.defineProperty(exports, name, { enumerable: true, get: getter }); -/******/ } -/******/ }; -/******/ -/******/ // define __esModule on exports -/******/ __webpack_require__.r = function(exports) { -/******/ if(typeof Symbol !== 'undefined' && Symbol.toStringTag) { -/******/ Object.defineProperty(exports, Symbol.toStringTag, { value: 'Module' }); -/******/ } -/******/ Object.defineProperty(exports, '__esModule', { value: true }); -/******/ }; -/******/ -/******/ // create a fake namespace object -/******/ // mode & 1: value is a module id, require it -/******/ // mode & 2: merge all properties of value into the ns -/******/ // mode & 4: return value when already ns object -/******/ // mode & 8|1: behave like require -/******/ __webpack_require__.t = function(value, mode) { -/******/ if(mode & 1) value = __webpack_require__(value); -/******/ if(mode & 8) return value; -/******/ if((mode & 4) && typeof value === 'object' && value && value.__esModule) return value; -/******/ var ns = Object.create(null); -/******/ __webpack_require__.r(ns); -/******/ Object.defineProperty(ns, 'default', { enumerable: true, value: value }); -/******/ if(mode & 2 && typeof value != 'string') for(var key in value) __webpack_require__.d(ns, key, function(key) { return value[key]; }.bind(null, key)); -/******/ return ns; -/******/ }; -/******/ -/******/ // getDefaultExport function for compatibility with non-harmony modules -/******/ __webpack_require__.n = function(module) { -/******/ var getter = module && module.__esModule ? 
-/******/ function getDefault() { return module['default']; } : -/******/ function getModuleExports() { return module; }; -/******/ __webpack_require__.d(getter, 'a', getter); -/******/ return getter; -/******/ }; -/******/ -/******/ // Object.prototype.hasOwnProperty.call -/******/ __webpack_require__.o = function(object, property) { return Object.prototype.hasOwnProperty.call(object, property); }; -/******/ -/******/ // __webpack_public_path__ -/******/ __webpack_require__.p = ""; -/******/ -/******/ -/******/ // Load entry module and return exports -/******/ return __webpack_require__(__webpack_require__.s = 5); -/******/ }) -/************************************************************************/ -/******/ ([ -/* 0 */ -/***/ (function(module, exports, __webpack_require__) { - -"use strict"; - - -const toString = Object.prototype.toString; - -function isAnyArray(object) { - return toString.call(object).endsWith('Array]'); -} - -module.exports = isAnyArray; - - -/***/ }), -/* 1 */ -/***/ (function(module, exports, __webpack_require__) { - -"use strict"; - -var __values = (this && this.__values) || function (o) { - var m = typeof Symbol === "function" && o[Symbol.iterator], i = 0; - if (m) return m.call(o); - return { - next: function () { - if (o && i >= o.length) o = void 0; - return { value: o && o[i++], done: !o }; - } - }; -}; -Object.defineProperty(exports, "__esModule", { value: true }); -function tauRandInt(n, random) { - return Math.floor(random() * n); -} -exports.tauRandInt = tauRandInt; -function tauRand(random) { - return random(); -} -exports.tauRand = tauRand; -function norm(vec) { - var e_1, _a; - var result = 0; - try { - for (var vec_1 = __values(vec), vec_1_1 = vec_1.next(); !vec_1_1.done; vec_1_1 = vec_1.next()) { - var item = vec_1_1.value; - result += Math.pow(item, 2); - } - } - catch (e_1_1) { e_1 = { error: e_1_1 }; } - finally { - try { - if (vec_1_1 && !vec_1_1.done && (_a = vec_1.return)) _a.call(vec_1); - } - finally { if (e_1) throw e_1.error; } - } - return Math.sqrt(result); -} -exports.norm = norm; -function empty(n) { - var output = []; - for (var i = 0; i < n; i++) { - output.push(undefined); - } - return output; -} -exports.empty = empty; -function range(n) { - return empty(n).map(function (_, i) { return i; }); -} -exports.range = range; -function filled(n, v) { - return empty(n).map(function () { return v; }); -} -exports.filled = filled; -function zeros(n) { - return filled(n, 0); -} -exports.zeros = zeros; -function ones(n) { - return filled(n, 1); -} -exports.ones = ones; -function linear(a, b, len) { - return empty(len).map(function (_, i) { - return a + i * ((b - a) / (len - 1)); - }); -} -exports.linear = linear; -function sum(input) { - return input.reduce(function (sum, val) { return sum + val; }); -} -exports.sum = sum; -function mean(input) { - return sum(input) / input.length; -} -exports.mean = mean; -function max(input) { - var max = 0; - for (var i = 0; i < input.length; i++) { - max = input[i] > max ? input[i] : max; - } - return max; -} -exports.max = max; -function max2d(input) { - var max = 0; - for (var i = 0; i < input.length; i++) { - for (var j = 0; j < input[i].length; j++) { - max = input[i][j] > max ? 
input[i][j] : max; - } - } - return max; -} -exports.max2d = max2d; -function rejectionSample(nSamples, poolSize, random) { - var result = zeros(nSamples); - for (var i = 0; i < nSamples; i++) { - var rejectSample = true; - while (rejectSample) { - var j = tauRandInt(poolSize, random); - var broken = false; - for (var k = 0; k < i; k++) { - if (j === result[k]) { - broken = true; - break; - } - } - if (!broken) { - rejectSample = false; - } - result[i] = j; - } - } - return result; -} -exports.rejectionSample = rejectionSample; -function reshape2d(x, a, b) { - var rows = []; - var count = 0; - var index = 0; - if (x.length !== a * b) { - throw new Error('Array dimensions must match input length.'); - } - for (var i = 0; i < a; i++) { - var col = []; - for (var j = 0; j < b; j++) { - col.push(x[index]); - index += 1; - } - rows.push(col); - count += 1; - } - return rows; -} -exports.reshape2d = reshape2d; - - -/***/ }), -/* 2 */ -/***/ (function(module, exports, __webpack_require__) { - -"use strict"; - -var __importStar = (this && this.__importStar) || function (mod) { - if (mod && mod.__esModule) return mod; - var result = {}; - if (mod != null) for (var k in mod) if (Object.hasOwnProperty.call(mod, k)) result[k] = mod[k]; - result["default"] = mod; - return result; -}; -Object.defineProperty(exports, "__esModule", { value: true }); -var utils = __importStar(__webpack_require__(1)); -function makeHeap(nPoints, size) { - var makeArrays = function (fillValue) { - return utils.empty(nPoints).map(function () { - return utils.filled(size, fillValue); - }); - }; - var heap = []; - heap.push(makeArrays(-1)); - heap.push(makeArrays(Infinity)); - heap.push(makeArrays(0)); - return heap; -} -exports.makeHeap = makeHeap; -function rejectionSample(nSamples, poolSize, random) { - var result = utils.zeros(nSamples); - for (var i = 0; i < nSamples; i++) { - var rejectSample = true; - var j = 0; - while (rejectSample) { - j = utils.tauRandInt(poolSize, random); - var broken = false; - for (var k = 0; k < i; k++) { - if (j === result[k]) { - broken = true; - break; - } - } - if (!broken) - rejectSample = false; - } - result[i] = j; - } - return result; -} -exports.rejectionSample = rejectionSample; -function heapPush(heap, row, weight, index, flag) { - row = Math.floor(row); - var indices = heap[0][row]; - var weights = heap[1][row]; - var isNew = heap[2][row]; - if (weight >= weights[0]) { - return 0; - } - for (var i = 0; i < indices.length; i++) { - if (index === indices[i]) { - return 0; - } - } - return uncheckedHeapPush(heap, row, weight, index, flag); -} -exports.heapPush = heapPush; -function uncheckedHeapPush(heap, row, weight, index, flag) { - var indices = heap[0][row]; - var weights = heap[1][row]; - var isNew = heap[2][row]; - if (weight >= weights[0]) { - return 0; - } - weights[0] = weight; - indices[0] = index; - isNew[0] = flag; - var i = 0; - var iSwap = 0; - while (true) { - var ic1 = 2 * i + 1; - var ic2 = ic1 + 1; - var heapShape2 = heap[0][0].length; - if (ic1 >= heapShape2) { - break; - } - else if (ic2 >= heapShape2) { - if (weights[ic1] > weight) { - iSwap = ic1; - } - else { - break; - } - } - else if (weights[ic1] >= weights[ic2]) { - if (weight < weights[ic1]) { - iSwap = ic1; - } - else { - break; - } - } - else { - if (weight < weights[ic2]) { - iSwap = ic2; - } - else { - break; - } - } - weights[i] = weights[iSwap]; - indices[i] = indices[iSwap]; - isNew[i] = isNew[iSwap]; - i = iSwap; - } - weights[i] = weight; - indices[i] = index; - isNew[i] = flag; - return 1; -} 
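-// Note: a heap here is the triple [indices, weights, flags] built by makeHeap;
-// each row is a binary max-heap keyed on weight, so weights[row][0] always holds
-// the worst (largest) distance kept for that point. A push overwrites the root
-// and sifts the new entry down. Illustrative usage (values are arbitrary):
-//
-//   var heap = makeHeap(4, 2);       // 4 points, 2 neighbor slots per point
-//   heapPush(heap, 0, 0.9, 3, 1);    // point 0: candidate neighbor 3 at distance 0.9
-//   heapPush(heap, 0, 0.2, 7, 1);    // neighbor 7 at 0.2; both pushes fill empty slots
-//   heapPush(heap, 0, 0.5, 5, 1);    // 0.5 < 0.9, so neighbor 5 evicts 3, the current worst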
-exports.uncheckedHeapPush = uncheckedHeapPush;
-function buildCandidates(currentGraph, nVertices, nNeighbors, maxCandidates, random) {
-    var candidateNeighbors = makeHeap(nVertices, maxCandidates);
-    for (var i = 0; i < nVertices; i++) {
-        for (var j = 0; j < nNeighbors; j++) {
-            if (currentGraph[0][i][j] < 0) {
-                continue;
-            }
-            var idx = currentGraph[0][i][j];
-            var isn = currentGraph[2][i][j];
-            var d = utils.tauRand(random);
-            heapPush(candidateNeighbors, i, d, idx, isn);
-            heapPush(candidateNeighbors, idx, d, i, isn);
-            currentGraph[2][i][j] = 0;
-        }
-    }
-    return candidateNeighbors;
-}
-exports.buildCandidates = buildCandidates;
-function deheapSort(heap) {
-    var indices = heap[0];
-    var weights = heap[1];
-    for (var i = 0; i < indices.length; i++) {
-        var indHeap = indices[i];
-        var distHeap = weights[i];
-        for (var j = 0; j < indHeap.length - 1; j++) {
-            var indHeapIndex = indHeap.length - j - 1;
-            var distHeapIndex = distHeap.length - j - 1;
-            var temp1 = indHeap[0];
-            indHeap[0] = indHeap[indHeapIndex];
-            indHeap[indHeapIndex] = temp1;
-            var temp2 = distHeap[0];
-            distHeap[0] = distHeap[distHeapIndex];
-            distHeap[distHeapIndex] = temp2;
-            siftDown(distHeap, indHeap, distHeapIndex, 0);
-        }
-    }
-    return { indices: indices, weights: weights };
-}
-exports.deheapSort = deheapSort;
-function siftDown(heap1, heap2, ceiling, elt) {
-    while (elt * 2 + 1 < ceiling) {
-        var leftChild = elt * 2 + 1;
-        var rightChild = leftChild + 1;
-        var swap = elt;
-        if (heap1[swap] < heap1[leftChild]) {
-            swap = leftChild;
-        }
-        if (rightChild < ceiling && heap1[swap] < heap1[rightChild]) {
-            swap = rightChild;
-        }
-        if (swap === elt) {
-            break;
-        }
-        else {
-            var temp1 = heap1[elt];
-            heap1[elt] = heap1[swap];
-            heap1[swap] = temp1;
-            var temp2 = heap2[elt];
-            heap2[elt] = heap2[swap];
-            heap2[swap] = temp2;
-            elt = swap;
-        }
-    }
-}
-function smallestFlagged(heap, row) {
-    var ind = heap[0][row];
-    var dist = heap[1][row];
-    var flag = heap[2][row];
-    var minDist = Infinity;
-    var resultIndex = -1;
-    for (var i = 0; i < ind.length; i++) {
-        if (flag[i] === 1 && dist[i] < minDist) {
-            minDist = dist[i];
-            resultIndex = i;
-        }
-    }
-    if (resultIndex >= 0) {
-        flag[resultIndex] = 0;
-        return Math.floor(ind[resultIndex]);
-    }
-    else {
-        return -1;
-    }
-}
-exports.smallestFlagged = smallestFlagged;
-
-
-/***/ }),
-/* 3 */
-/***/ (function(module, exports, __webpack_require__) {
-
-"use strict";
-
-var __read = (this && this.__read) || function (o, n) {
-    var m = typeof Symbol === "function" && o[Symbol.iterator];
-    if (!m) return o;
-    var i = m.call(o), r, ar = [], e;
-    try {
-        while ((n === void 0 || n-- > 0) && !(r = i.next()).done) ar.push(r.value);
-    }
-    catch (error) { e = { error: error }; }
-    finally {
-        try {
-            if (r && !r.done && (m = i["return"])) m.call(i);
-        }
-        finally { if (e) throw e.error; }
-    }
-    return ar;
-};
-var __spread = (this && this.__spread) || function () {
-    for (var ar = [], i = 0; i < arguments.length; i++) ar = ar.concat(__read(arguments[i]));
-    return ar;
-};
-var __values = (this && this.__values) || function (o) {
-    var m = typeof Symbol === "function" && o[Symbol.iterator], i = 0;
-    if (m) return m.call(o);
-    return {
-        next: function () {
-            if (o && i >= o.length) o = void 0;
-            return { value: o && o[i++], done: !o };
-        }
-    };
-};
-var __importStar = (this && this.__importStar) || function (mod) {
-    if (mod && mod.__esModule) return mod;
-    var result = {};
-    if (mod != null) for (var k in mod) if (Object.hasOwnProperty.call(mod, k)) result[k] = mod[k];
-    result["default"] =
mod; - return result; -}; -Object.defineProperty(exports, "__esModule", { value: true }); -var _a; -var utils = __importStar(__webpack_require__(1)); -var SparseMatrix = (function () { - function SparseMatrix(rows, cols, values, dims) { - this.entries = new Map(); - this.nRows = 0; - this.nCols = 0; - this.rows = __spread(rows); - this.cols = __spread(cols); - this.values = __spread(values); - for (var i = 0; i < values.length; i++) { - var key = this.makeKey(this.rows[i], this.cols[i]); - this.entries.set(key, i); - } - this.nRows = dims[0]; - this.nCols = dims[1]; - } - SparseMatrix.prototype.makeKey = function (row, col) { - return row + ":" + col; - }; - SparseMatrix.prototype.checkDims = function (row, col) { - var withinBounds = row < this.nRows && col < this.nCols; - if (!withinBounds) { - throw new Error('array index out of bounds'); - } - }; - SparseMatrix.prototype.set = function (row, col, value) { - this.checkDims(row, col); - var key = this.makeKey(row, col); - if (!this.entries.has(key)) { - this.rows.push(row); - this.cols.push(col); - this.values.push(value); - this.entries.set(key, this.values.length - 1); - } - else { - var index = this.entries.get(key); - this.values[index] = value; - } - }; - SparseMatrix.prototype.get = function (row, col, defaultValue) { - if (defaultValue === void 0) { defaultValue = 0; } - this.checkDims(row, col); - var key = this.makeKey(row, col); - if (this.entries.has(key)) { - var index = this.entries.get(key); - return this.values[index]; - } - else { - return defaultValue; - } - }; - SparseMatrix.prototype.getDims = function () { - return [this.nRows, this.nCols]; - }; - SparseMatrix.prototype.getRows = function () { - return __spread(this.rows); - }; - SparseMatrix.prototype.getCols = function () { - return __spread(this.cols); - }; - SparseMatrix.prototype.getValues = function () { - return __spread(this.values); - }; - SparseMatrix.prototype.forEach = function (fn) { - for (var i = 0; i < this.values.length; i++) { - fn(this.values[i], this.rows[i], this.cols[i]); - } - }; - SparseMatrix.prototype.map = function (fn) { - var vals = []; - for (var i = 0; i < this.values.length; i++) { - vals.push(fn(this.values[i], this.rows[i], this.cols[i])); - } - var dims = [this.nRows, this.nCols]; - return new SparseMatrix(this.rows, this.cols, vals, dims); - }; - SparseMatrix.prototype.toArray = function () { - var _this = this; - var rows = utils.empty(this.nRows); - var output = rows.map(function () { - return utils.zeros(_this.nCols); - }); - for (var i = 0; i < this.values.length; i++) { - output[this.rows[i]][this.cols[i]] = this.values[i]; - } - return output; - }; - return SparseMatrix; -}()); -exports.SparseMatrix = SparseMatrix; -function transpose(matrix) { - var cols = []; - var rows = []; - var vals = []; - matrix.forEach(function (value, row, col) { - cols.push(row); - rows.push(col); - vals.push(value); - }); - var dims = [matrix.nCols, matrix.nRows]; - return new SparseMatrix(rows, cols, vals, dims); -} -exports.transpose = transpose; -function identity(size) { - var _a = __read(size, 1), rows = _a[0]; - var matrix = new SparseMatrix([], [], [], size); - for (var i = 0; i < rows; i++) { - matrix.set(i, i, 1); - } - return matrix; -} -exports.identity = identity; -function pairwiseMultiply(a, b) { - return elementWise(a, b, function (x, y) { return x * y; }); -} -exports.pairwiseMultiply = pairwiseMultiply; -function add(a, b) { - return elementWise(a, b, function (x, y) { return x + y; }); -} -exports.add = add; -function subtract(a, 
b) { - return elementWise(a, b, function (x, y) { return x - y; }); -} -exports.subtract = subtract; -function maximum(a, b) { - return elementWise(a, b, function (x, y) { return (x > y ? x : y); }); -} -exports.maximum = maximum; -function multiplyScalar(a, scalar) { - return a.map(function (value) { - return value * scalar; - }); -} -exports.multiplyScalar = multiplyScalar; -function eliminateZeros(m) { - var zeroIndices = new Set(); - var values = m.getValues(); - var rows = m.getRows(); - var cols = m.getCols(); - for (var i = 0; i < values.length; i++) { - if (values[i] === 0) { - zeroIndices.add(i); - } - } - var removeByZeroIndex = function (_, index) { return !zeroIndices.has(index); }; - var nextValues = values.filter(removeByZeroIndex); - var nextRows = rows.filter(removeByZeroIndex); - var nextCols = cols.filter(removeByZeroIndex); - return new SparseMatrix(nextRows, nextCols, nextValues, m.getDims()); -} -exports.eliminateZeros = eliminateZeros; -function normalize(m, normType) { - if (normType === void 0) { normType = "l2"; } - var e_1, _a; - var normFn = normFns[normType]; - var colsByRow = new Map(); - m.forEach(function (_, row, col) { - var cols = colsByRow.get(row) || []; - cols.push(col); - colsByRow.set(row, cols); - }); - var nextMatrix = new SparseMatrix([], [], [], m.getDims()); - var _loop_1 = function (row) { - var cols = colsByRow.get(row).sort(); - var vals = cols.map(function (col) { return m.get(row, col); }); - var norm = normFn(vals); - for (var i = 0; i < norm.length; i++) { - nextMatrix.set(row, cols[i], norm[i]); - } - }; - try { - for (var _b = __values(colsByRow.keys()), _c = _b.next(); !_c.done; _c = _b.next()) { - var row = _c.value; - _loop_1(row); - } - } - catch (e_1_1) { e_1 = { error: e_1_1 }; } - finally { - try { - if (_c && !_c.done && (_a = _b.return)) _a.call(_b); - } - finally { if (e_1) throw e_1.error; } - } - return nextMatrix; -} -exports.normalize = normalize; -var normFns = (_a = {}, - _a["max"] = function (xs) { - var max = -Infinity; - for (var i = 0; i < xs.length; i++) { - max = xs[i] > max ? 
xs[i] : max; - } - return xs.map(function (x) { return x / max; }); - }, - _a["l1"] = function (xs) { - var sum = 0; - for (var i = 0; i < xs.length; i++) { - sum += xs[i]; - } - return xs.map(function (x) { return x / sum; }); - }, - _a["l2"] = function (xs) { - var sum = 0; - for (var i = 0; i < xs.length; i++) { - sum += Math.pow(xs[i], 2); - } - return xs.map(function (x) { return Math.sqrt(Math.pow(x, 2) / sum); }); - }, - _a); -function elementWise(a, b, op) { - var visited = new Set(); - var rows = []; - var cols = []; - var vals = []; - var operate = function (row, col) { - rows.push(row); - cols.push(col); - var nextValue = op(a.get(row, col), b.get(row, col)); - vals.push(nextValue); - }; - var valuesA = a.getValues(); - var rowsA = a.getRows(); - var colsA = a.getCols(); - for (var i = 0; i < valuesA.length; i++) { - var row = rowsA[i]; - var col = colsA[i]; - var key = row + ":" + col; - visited.add(key); - operate(row, col); - } - var valuesB = b.getValues(); - var rowsB = b.getRows(); - var colsB = b.getCols(); - for (var i = 0; i < valuesB.length; i++) { - var row = rowsB[i]; - var col = colsB[i]; - var key = row + ":" + col; - if (visited.has(key)) - continue; - operate(row, col); - } - var dims = [a.nRows, a.nCols]; - return new SparseMatrix(rows, cols, vals, dims); -} -function getCSR(x) { - var entries = []; - x.forEach(function (value, row, col) { - entries.push({ value: value, row: row, col: col }); - }); - entries.sort(function (a, b) { - if (a.row === b.row) { - return a.col - b.col; - } - else { - return a.row - b.row; - } - }); - var indices = []; - var values = []; - var indptr = []; - var currentRow = -1; - for (var i = 0; i < entries.length; i++) { - var _a = entries[i], row = _a.row, col = _a.col, value = _a.value; - if (row !== currentRow) { - currentRow = row; - indptr.push(i); - } - indices.push(col); - values.push(value); - } - return { indices: indices, values: values, indptr: indptr }; -} -exports.getCSR = getCSR; - - -/***/ }), -/* 4 */ -/***/ (function(module, exports, __webpack_require__) { - -"use strict"; - -var __read = (this && this.__read) || function (o, n) { - var m = typeof Symbol === "function" && o[Symbol.iterator]; - if (!m) return o; - var i = m.call(o), r, ar = [], e; - try { - while ((n === void 0 || n-- > 0) && !(r = i.next()).done) ar.push(r.value); - } - catch (error) { e = { error: error }; } - finally { - try { - if (r && !r.done && (m = i["return"])) m.call(i); - } - finally { if (e) throw e.error; } - } - return ar; -}; -var __spread = (this && this.__spread) || function () { - for (var ar = [], i = 0; i < arguments.length; i++) ar = ar.concat(__read(arguments[i])); - return ar; -}; -var __values = (this && this.__values) || function (o) { - var m = typeof Symbol === "function" && o[Symbol.iterator], i = 0; - if (m) return m.call(o); - return { - next: function () { - if (o && i >= o.length) o = void 0; - return { value: o && o[i++], done: !o }; - } - }; -}; -var __importStar = (this && this.__importStar) || function (mod) { - if (mod && mod.__esModule) return mod; - var result = {}; - if (mod != null) for (var k in mod) if (Object.hasOwnProperty.call(mod, k)) result[k] = mod[k]; - result["default"] = mod; - return result; -}; -Object.defineProperty(exports, "__esModule", { value: true }); -var utils = __importStar(__webpack_require__(1)); -var FlatTree = (function () { - function FlatTree(hyperplanes, offsets, children, indices) { - this.hyperplanes = hyperplanes; - this.offsets = offsets; - this.children = children; -
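-/*
- * A minimal usage sketch for the SparseMatrix helpers defined above; the
- * variable names and input values are illustrative, not from this bundle:
- *
- * @example
- * var m = new SparseMatrix([0, 1], [1, 0], [2, 3], [2, 2]);
- * m.get(0, 1);                      // 2
- * m.get(0, 0);                      // 0 (default value for missing entries)
- * var t = transpose(m);             // each stored entry (r, c) moves to (c, r)
- * var n = normalize(m, "l1");       // each stored row rescaled to sum to 1
- * var csr = getCSR(m);              // { indices, values, indptr } in row-major order
- */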
this.indices = indices; - } - return FlatTree; -}()); -exports.FlatTree = FlatTree; -function makeForest(data, nNeighbors, nTrees, random) { - var leafSize = Math.max(10, nNeighbors); - var trees = utils - .range(nTrees) - .map(function (_, i) { return makeTree(data, leafSize, i, random); }); - var forest = trees.map(function (tree) { return flattenTree(tree, leafSize); }); - return forest; -} -exports.makeForest = makeForest; -function makeTree(data, leafSize, n, random) { - if (leafSize === void 0) { leafSize = 30; } - var indices = utils.range(data.length); - var tree = makeEuclideanTree(data, indices, leafSize, n, random); - return tree; -} -function makeEuclideanTree(data, indices, leafSize, q, random) { - if (leafSize === void 0) { leafSize = 30; } - if (indices.length > leafSize) { - var splitResults = euclideanRandomProjectionSplit(data, indices, random); - var indicesLeft = splitResults.indicesLeft, indicesRight = splitResults.indicesRight, hyperplane = splitResults.hyperplane, offset = splitResults.offset; - var leftChild = makeEuclideanTree(data, indicesLeft, leafSize, q + 1, random); - var rightChild = makeEuclideanTree(data, indicesRight, leafSize, q + 1, random); - var node = { leftChild: leftChild, rightChild: rightChild, isLeaf: false, hyperplane: hyperplane, offset: offset }; - return node; - } - else { - var node = { indices: indices, isLeaf: true }; - return node; - } -} -function euclideanRandomProjectionSplit(data, indices, random) { - var dim = data[0].length; - var leftIndex = utils.tauRandInt(indices.length, random); - var rightIndex = utils.tauRandInt(indices.length, random); - rightIndex += leftIndex === rightIndex ? 1 : 0; - rightIndex = rightIndex % indices.length; - var left = indices[leftIndex]; - var right = indices[rightIndex]; - var hyperplaneOffset = 0; - var hyperplaneVector = utils.zeros(dim); - for (var i = 0; i < hyperplaneVector.length; i++) { - hyperplaneVector[i] = data[left][i] - data[right][i]; - hyperplaneOffset -= - (hyperplaneVector[i] * (data[left][i] + data[right][i])) / 2.0; - } - var nLeft = 0; - var nRight = 0; - var side = utils.zeros(indices.length); - for (var i = 0; i < indices.length; i++) { - var margin = hyperplaneOffset; - for (var d = 0; d < dim; d++) { - margin += hyperplaneVector[d] * data[indices[i]][d]; - } - if (margin === 0) { - side[i] = utils.tauRandInt(2, random); - if (side[i] === 0) { - nLeft += 1; - } - else { - nRight += 1; - } - } - else if (margin > 0) { - side[i] = 0; - nLeft += 1; - } - else { - side[i] = 1; - nRight += 1; - } - } - var indicesLeft = utils.zeros(nLeft); - var indicesRight = utils.zeros(nRight); - nLeft = 0; - nRight = 0; - for (var i in utils.range(side.length)) { - if (side[i] === 0) { - indicesLeft[nLeft] = indices[i]; - nLeft += 1; - } - else { - indicesRight[nRight] = indices[i]; - nRight += 1; - } - } - return { - indicesLeft: indicesLeft, - indicesRight: indicesRight, - hyperplane: hyperplaneVector, - offset: hyperplaneOffset, - }; -} -function flattenTree(tree, leafSize) { - var nNodes = numNodes(tree); - var nLeaves = numLeaves(tree); - var hyperplanes = utils - .range(nNodes) - .map(function () { return utils.zeros(tree.hyperplane.length); }); - var offsets = utils.zeros(nNodes); - var children = utils.range(nNodes).map(function () { return [-1, -1]; }); - var indices = utils - .range(nLeaves) - .map(function () { return utils.range(leafSize).map(function () { return -1; }); }); - recursiveFlatten(tree, hyperplanes, offsets, children, indices, 0, 0); - return new FlatTree(hyperplanes, 
offsets, children, indices); -} -function recursiveFlatten(tree, hyperplanes, offsets, children, indices, nodeNum, leafNum) { - var _a; - if (tree.isLeaf) { - children[nodeNum][0] = -leafNum; - (_a = indices[leafNum]).splice.apply(_a, __spread([0, tree.indices.length], tree.indices)); - leafNum += 1; - return { nodeNum: nodeNum, leafNum: leafNum }; - } - else { - hyperplanes[nodeNum] = tree.hyperplane; - offsets[nodeNum] = tree.offset; - children[nodeNum][0] = nodeNum + 1; - var oldNodeNum = nodeNum; - var res = recursiveFlatten(tree.leftChild, hyperplanes, offsets, children, indices, nodeNum + 1, leafNum); - nodeNum = res.nodeNum; - leafNum = res.leafNum; - children[oldNodeNum][1] = nodeNum + 1; - res = recursiveFlatten(tree.rightChild, hyperplanes, offsets, children, indices, nodeNum + 1, leafNum); - return { nodeNum: res.nodeNum, leafNum: res.leafNum }; - } -} -function numNodes(tree) { - if (tree.isLeaf) { - return 1; - } - else { - return 1 + numNodes(tree.leftChild) + numNodes(tree.rightChild); - } -} -function numLeaves(tree) { - if (tree.isLeaf) { - return 1; - } - else { - return numLeaves(tree.leftChild) + numLeaves(tree.rightChild); - } -} -function makeLeafArray(rpForest) { - var e_1, _a; - if (rpForest.length > 0) { - var output = []; - try { - for (var rpForest_1 = __values(rpForest), rpForest_1_1 = rpForest_1.next(); !rpForest_1_1.done; rpForest_1_1 = rpForest_1.next()) { - var tree = rpForest_1_1.value; - output.push.apply(output, __spread(tree.indices)); - } - } - catch (e_1_1) { e_1 = { error: e_1_1 }; } - finally { - try { - if (rpForest_1_1 && !rpForest_1_1.done && (_a = rpForest_1.return)) _a.call(rpForest_1); - } - finally { if (e_1) throw e_1.error; } - } - return output; - } - else { - return [[-1]]; - } -} -exports.makeLeafArray = makeLeafArray; -function selectSide(hyperplane, offset, point, random) { - var margin = offset; - for (var d = 0; d < point.length; d++) { - margin += hyperplane[d] * point[d]; - } - if (margin === 0) { - var side = utils.tauRandInt(2, random); - return side; - } - else if (margin > 0) { - return 0; - } - else { - return 1; - } -} -function searchFlatTree(point, tree, random) { - var node = 0; - while (tree.children[node][0] > 0) { - var side = selectSide(tree.hyperplanes[node], tree.offsets[node], point, random); - if (side === 0) { - node = tree.children[node][0]; - } - else { - node = tree.children[node][1]; - } - } - var index = -1 * tree.children[node][0]; - return tree.indices[index]; -} -exports.searchFlatTree = searchFlatTree; - - -/***/ }), -/* 5 */ -/***/ (function(module, exports, __webpack_require__) { - -"use strict"; - -Object.defineProperty(exports, "__esModule", { value: true }); -var umap_1 = __webpack_require__(6); -exports.UMAP = umap_1.UMAP; - - -/***/ }), -/* 6 */ -/***/ (function(module, exports, __webpack_require__) { - -"use strict"; - -var __awaiter = (this && this.__awaiter) || function (thisArg, _arguments, P, generator) { - return new (P || (P = Promise))(function (resolve, reject) { - function fulfilled(value) { try { step(generator.next(value)); } catch (e) { reject(e); } } - function rejected(value) { try { step(generator["throw"](value)); } catch (e) { reject(e); } } - function step(result) { result.done ? 
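-/*
- * A small sketch of how the random-projection forest above is used; `data`
- * and the point values are illustrative placeholders:
- *
- * @example
- * var data = [[0, 0], [0, 1], [1, 0], [1, 1]];
- * var forest = makeForest(data, 15, 3, Math.random); // leafSize = max(10, nNeighbors)
- * var leaves = makeLeafArray(forest);                // candidate index blocks for NN-descent
- * var hits = searchFlatTree([0.9, 0.9], forest[0], Math.random); // point indices in one leaf
- */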
resolve(result.value) : new P(function (resolve) { resolve(result.value); }).then(fulfilled, rejected); } - step((generator = generator.apply(thisArg, _arguments || [])).next()); - }); -}; -var __generator = (this && this.__generator) || function (thisArg, body) { - var _ = { label: 0, sent: function() { if (t[0] & 1) throw t[1]; return t[1]; }, trys: [], ops: [] }, f, y, t, g; - return g = { next: verb(0), "throw": verb(1), "return": verb(2) }, typeof Symbol === "function" && (g[Symbol.iterator] = function() { return this; }), g; - function verb(n) { return function (v) { return step([n, v]); }; } - function step(op) { - if (f) throw new TypeError("Generator is already executing."); - while (_) try { - if (f = 1, y && (t = op[0] & 2 ? y["return"] : op[0] ? y["throw"] || ((t = y["return"]) && t.call(y), 0) : y.next) && !(t = t.call(y, op[1])).done) return t; - if (y = 0, t) op = [op[0] & 2, t.value]; - switch (op[0]) { - case 0: case 1: t = op; break; - case 4: _.label++; return { value: op[1], done: false }; - case 5: _.label++; y = op[1]; op = [0]; continue; - case 7: op = _.ops.pop(); _.trys.pop(); continue; - default: - if (!(t = _.trys, t = t.length > 0 && t[t.length - 1]) && (op[0] === 6 || op[0] === 2)) { _ = 0; continue; } - if (op[0] === 3 && (!t || (op[1] > t[0] && op[1] < t[3]))) { _.label = op[1]; break; } - if (op[0] === 6 && _.label < t[1]) { _.label = t[1]; t = op; break; } - if (t && _.label < t[2]) { _.label = t[2]; _.ops.push(op); break; } - if (t[2]) _.ops.pop(); - _.trys.pop(); continue; - } - op = body.call(thisArg, _); - } catch (e) { op = [6, e]; y = 0; } finally { f = t = 0; } - if (op[0] & 5) throw op[1]; return { value: op[0] ? op[1] : void 0, done: true }; - } -}; -var __read = (this && this.__read) || function (o, n) { - var m = typeof Symbol === "function" && o[Symbol.iterator]; - if (!m) return o; - var i = m.call(o), r, ar = [], e; - try { - while ((n === void 0 || n-- > 0) && !(r = i.next()).done) ar.push(r.value); - } - catch (error) { e = { error: error }; } - finally { - try { - if (r && !r.done && (m = i["return"])) m.call(i); - } - finally { if (e) throw e.error; } - } - return ar; -}; -var __spread = (this && this.__spread) || function () { - for (var ar = [], i = 0; i < arguments.length; i++) ar = ar.concat(__read(arguments[i])); - return ar; -}; -var __importStar = (this && this.__importStar) || function (mod) { - if (mod && mod.__esModule) return mod; - var result = {}; - if (mod != null) for (var k in mod) if (Object.hasOwnProperty.call(mod, k)) result[k] = mod[k]; - result["default"] = mod; - return result; -}; -var __importDefault = (this && this.__importDefault) || function (mod) { - return (mod && mod.__esModule) ? 
mod : { "default": mod }; -}; -Object.defineProperty(exports, "__esModule", { value: true }); -var heap = __importStar(__webpack_require__(2)); -var matrix = __importStar(__webpack_require__(3)); -var nnDescent = __importStar(__webpack_require__(7)); -var tree = __importStar(__webpack_require__(4)); -var utils = __importStar(__webpack_require__(1)); -var ml_levenberg_marquardt_1 = __importDefault(__webpack_require__(8)); -var SMOOTH_K_TOLERANCE = 1e-5; -var MIN_K_DIST_SCALE = 1e-3; -var UMAP = (function () { - function UMAP(params) { - if (params === void 0) { params = {}; } - var _this = this; - this.learningRate = 1.0; - this.localConnectivity = 1.0; - this.minDist = 0.1; - this.nComponents = 2; - this.nEpochs = 0; - this.nNeighbors = 15; - this.negativeSampleRate = 5; - this.random = Math.random; - this.repulsionStrength = 1.0; - this.setOpMixRatio = 1.0; - this.spread = 1.0; - this.transformQueueSize = 4.0; - this.targetMetric = "categorical"; - this.targetWeight = 0.5; - this.targetNNeighbors = this.nNeighbors; - this.distanceFn = euclidean; - this.isInitialized = false; - this.rpForest = []; - this.embedding = []; - this.optimizationState = new OptimizationState(); - var setParam = function (key) { - if (params[key] !== undefined) - _this[key] = params[key]; - }; - setParam('distanceFn'); - setParam('learningRate'); - setParam('localConnectivity'); - setParam('minDist'); - setParam('nComponents'); - setParam('nEpochs'); - setParam('nNeighbors'); - setParam('negativeSampleRate'); - setParam('random'); - setParam('repulsionStrength'); - setParam('setOpMixRatio'); - setParam('spread'); - setParam('transformQueueSize'); - } - UMAP.prototype.fit = function (X) { - this.initializeFit(X); - this.optimizeLayout(); - return this.embedding; - }; - UMAP.prototype.fitAsync = function (X, callback) { - if (callback === void 0) { callback = function () { return true; }; } - return __awaiter(this, void 0, void 0, function () { - return __generator(this, function (_a) { - switch (_a.label) { - case 0: - this.initializeFit(X); - return [4, this.optimizeLayoutAsync(callback)]; - case 1: - _a.sent(); - return [2, this.embedding]; - } - }); - }); - }; - UMAP.prototype.setSupervisedProjection = function (Y, params) { - if (params === void 0) { params = {}; } - this.Y = Y; - this.targetMetric = params.targetMetric || this.targetMetric; - this.targetWeight = params.targetWeight || this.targetWeight; - this.targetNNeighbors = params.targetNNeighbors || this.targetNNeighbors; - }; - UMAP.prototype.setPrecomputedKNN = function (knnIndices, knnDistances) { - this.knnIndices = knnIndices; - this.knnDistances = knnDistances; - }; - UMAP.prototype.initializeFit = function (X) { - if (this.X === X && this.isInitialized) { - return this.getNEpochs(); - } - this.X = X; - if (!this.knnIndices && !this.knnDistances) { - var knnResults = this.nearestNeighbors(X); - this.knnIndices = knnResults.knnIndices; - this.knnDistances = knnResults.knnDistances; - } - this.graph = this.fuzzySimplicialSet(X, this.nNeighbors, this.setOpMixRatio); - this.makeSearchFns(); - this.searchGraph = this.makeSearchGraph(X); - this.processGraphForSupervisedProjection(); - var _a = this.initializeSimplicialSetEmbedding(), head = _a.head, tail = _a.tail, epochsPerSample = _a.epochsPerSample; - this.optimizationState.head = head; - this.optimizationState.tail = tail; - this.optimizationState.epochsPerSample = epochsPerSample; - this.initializeOptimization(); - this.prepareForOptimizationLoop(); - this.isInitialized = true; - return 
this.getNEpochs(); - }; - UMAP.prototype.makeSearchFns = function () { - var _a = nnDescent.makeInitializations(this.distanceFn), initFromTree = _a.initFromTree, initFromRandom = _a.initFromRandom; - this.initFromTree = initFromTree; - this.initFromRandom = initFromRandom; - this.search = nnDescent.makeInitializedNNSearch(this.distanceFn); - }; - UMAP.prototype.makeSearchGraph = function (X) { - var knnIndices = this.knnIndices; - var knnDistances = this.knnDistances; - var dims = [X.length, X.length]; - var searchGraph = new matrix.SparseMatrix([], [], [], dims); - for (var i = 0; i < knnIndices.length; i++) { - var knn = knnIndices[i]; - var distances = knnDistances[i]; - for (var j = 0; j < knn.length; j++) { - var neighbor = knn[j]; - var distance = distances[j]; - if (distance > 0) { - searchGraph.set(i, neighbor, distance); - } - } - } - var transpose = matrix.transpose(searchGraph); - return matrix.maximum(searchGraph, transpose); - }; - UMAP.prototype.transform = function (toTransform) { - var _this = this; - var rawData = this.X; - if (rawData === undefined || rawData.length === 0) { - throw new Error('No data has been fit.'); - } - var nNeighbors = Math.floor(this.nNeighbors * this.transformQueueSize); - var init = nnDescent.initializeSearch(this.rpForest, rawData, toTransform, nNeighbors, this.initFromRandom, this.initFromTree, this.random); - var result = this.search(rawData, this.searchGraph, init, toTransform); - var _a = heap.deheapSort(result), indices = _a.indices, distances = _a.weights; - indices = indices.map(function (x) { return x.slice(0, _this.nNeighbors); }); - distances = distances.map(function (x) { return x.slice(0, _this.nNeighbors); }); - var adjustedLocalConnectivity = Math.max(0, this.localConnectivity - 1); - var _b = this.smoothKNNDistance(distances, this.nNeighbors, adjustedLocalConnectivity), sigmas = _b.sigmas, rhos = _b.rhos; - var _c = this.computeMembershipStrengths(indices, distances, sigmas, rhos), rows = _c.rows, cols = _c.cols, vals = _c.vals; - var size = [toTransform.length, rawData.length]; - var graph = new matrix.SparseMatrix(rows, cols, vals, size); - var normed = matrix.normalize(graph, "l1"); - var csrMatrix = matrix.getCSR(normed); - var nPoints = toTransform.length; - var eIndices = utils.reshape2d(csrMatrix.indices, nPoints, this.nNeighbors); - var eWeights = utils.reshape2d(csrMatrix.values, nPoints, this.nNeighbors); - var embedding = initTransform(eIndices, eWeights, this.embedding); - var nEpochs = this.nEpochs - ? this.nEpochs / 3 - : graph.nRows <= 10000 - ? 100 - : 30; - var graphMax = graph - .getValues() - .reduce(function (max, val) { return (val > max ? val : max); }, 0); - graph = graph.map(function (value) { return (value < graphMax / nEpochs ? 
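-/*
- * A brief usage sketch for the UMAP class above, based on the constructor
- * parameters and the fit / initializeFit / step methods it defines; `data`
- * (an array of numeric vectors) is an illustrative placeholder:
- *
- * @example
- * var umap = new UMAP({ nComponents: 2, nNeighbors: 15, minDist: 0.1 });
- * var embedding = umap.fit(data);   // one [x, y] row per input row
- *
- * // Or stepwise, e.g. for progress reporting:
- * var nEpochs = umap.initializeFit(data);
- * for (var i = 0; i < nEpochs; i++) { umap.step(); }
- * var result = umap.getEmbedding();
- */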
0 : value); }); - graph = matrix.eliminateZeros(graph); - var epochsPerSample = this.makeEpochsPerSample(graph.getValues(), nEpochs); - var head = graph.getRows(); - var tail = graph.getCols(); - this.assignOptimizationStateParameters({ - headEmbedding: embedding, - tailEmbedding: this.embedding, - head: head, - tail: tail, - currentEpoch: 0, - nEpochs: nEpochs, - nVertices: graph.getDims()[1], - epochsPerSample: epochsPerSample, - }); - this.prepareForOptimizationLoop(); - return this.optimizeLayout(); - }; - UMAP.prototype.processGraphForSupervisedProjection = function () { - var _a = this, Y = _a.Y, X = _a.X; - if (Y) { - if (Y.length !== X.length) { - throw new Error('Length of X and y must be equal'); - } - if (this.targetMetric === "categorical") { - var lt = this.targetWeight < 1.0; - var farDist = lt ? 2.5 * (1.0 / (1.0 - this.targetWeight)) : 1.0e12; - this.graph = this.categoricalSimplicialSetIntersection(this.graph, Y, farDist); - } - } - }; - UMAP.prototype.step = function () { - var currentEpoch = this.optimizationState.currentEpoch; - if (currentEpoch < this.getNEpochs()) { - this.optimizeLayoutStep(currentEpoch); - } - return this.optimizationState.currentEpoch; - }; - UMAP.prototype.getEmbedding = function () { - return this.embedding; - }; - UMAP.prototype.nearestNeighbors = function (X) { - var _a = this, distanceFn = _a.distanceFn, nNeighbors = _a.nNeighbors; - var log2 = function (n) { return Math.log(n) / Math.log(2); }; - var metricNNDescent = nnDescent.makeNNDescent(distanceFn, this.random); - var round = function (n) { - return n === 0.5 ? 0 : Math.round(n); - }; - var nTrees = 5 + Math.floor(round(Math.pow(X.length, 0.5) / 20.0)); - var nIters = Math.max(5, Math.floor(Math.round(log2(X.length)))); - this.rpForest = tree.makeForest(X, nNeighbors, nTrees, this.random); - var leafArray = tree.makeLeafArray(this.rpForest); - var _b = metricNNDescent(X, leafArray, nNeighbors, nIters), indices = _b.indices, weights = _b.weights; - return { knnIndices: indices, knnDistances: weights }; - }; - UMAP.prototype.fuzzySimplicialSet = function (X, nNeighbors, setOpMixRatio) { - if (setOpMixRatio === void 0) { setOpMixRatio = 1.0; } - var _a = this, _b = _a.knnIndices, knnIndices = _b === void 0 ? [] : _b, _c = _a.knnDistances, knnDistances = _c === void 0 ? 
[] : _c, localConnectivity = _a.localConnectivity; - var _d = this.smoothKNNDistance(knnDistances, nNeighbors, localConnectivity), sigmas = _d.sigmas, rhos = _d.rhos; - var _e = this.computeMembershipStrengths(knnIndices, knnDistances, sigmas, rhos), rows = _e.rows, cols = _e.cols, vals = _e.vals; - var size = [X.length, X.length]; - var sparseMatrix = new matrix.SparseMatrix(rows, cols, vals, size); - var transpose = matrix.transpose(sparseMatrix); - var prodMatrix = matrix.pairwiseMultiply(sparseMatrix, transpose); - var a = matrix.subtract(matrix.add(sparseMatrix, transpose), prodMatrix); - var b = matrix.multiplyScalar(a, setOpMixRatio); - var c = matrix.multiplyScalar(prodMatrix, 1.0 - setOpMixRatio); - var result = matrix.add(b, c); - return result; - }; - UMAP.prototype.categoricalSimplicialSetIntersection = function (simplicialSet, target, farDist, unknownDist) { - if (unknownDist === void 0) { unknownDist = 1.0; } - var intersection = fastIntersection(simplicialSet, target, unknownDist, farDist); - intersection = matrix.eliminateZeros(intersection); - return resetLocalConnectivity(intersection); - }; - UMAP.prototype.smoothKNNDistance = function (distances, k, localConnectivity, nIter, bandwidth) { - if (localConnectivity === void 0) { localConnectivity = 1.0; } - if (nIter === void 0) { nIter = 64; } - if (bandwidth === void 0) { bandwidth = 1.0; } - var target = (Math.log(k) / Math.log(2)) * bandwidth; - var rho = utils.zeros(distances.length); - var result = utils.zeros(distances.length); - for (var i = 0; i < distances.length; i++) { - var lo = 0.0; - var hi = Infinity; - var mid = 1.0; - var ithDistances = distances[i]; - var nonZeroDists = ithDistances.filter(function (d) { return d > 0.0; }); - if (nonZeroDists.length >= localConnectivity) { - var index = Math.floor(localConnectivity); - var interpolation = localConnectivity - index; - if (index > 0) { - rho[i] = nonZeroDists[index - 1]; - if (interpolation > SMOOTH_K_TOLERANCE) { - rho[i] += - interpolation * (nonZeroDists[index] - nonZeroDists[index - 1]); - } - } - else { - rho[i] = interpolation * nonZeroDists[0]; - } - } - else if (nonZeroDists.length > 0) { - rho[i] = utils.max(nonZeroDists); - } - for (var n = 0; n < nIter; n++) { - var psum = 0.0; - for (var j = 1; j < distances[i].length; j++) { - var d = distances[i][j] - rho[i]; - if (d > 0) { - psum += Math.exp(-(d / mid)); - } - else { - psum += 1.0; - } - } - if (Math.abs(psum - target) < SMOOTH_K_TOLERANCE) { - break; - } - if (psum > target) { - hi = mid; - mid = (lo + hi) / 2.0; - } - else { - lo = mid; - if (hi === Infinity) { - mid *= 2; - } - else { - mid = (lo + hi) / 2.0; - } - } - } - result[i] = mid; - if (rho[i] > 0.0) { - var meanIthDistances = utils.mean(ithDistances); - if (result[i] < MIN_K_DIST_SCALE * meanIthDistances) { - result[i] = MIN_K_DIST_SCALE * meanIthDistances; - } - } - else { - var meanDistances = utils.mean(distances.map(utils.mean)); - if (result[i] < MIN_K_DIST_SCALE * meanDistances) { - result[i] = MIN_K_DIST_SCALE * meanDistances; - } - } - } - return { sigmas: result, rhos: rho }; - }; - UMAP.prototype.computeMembershipStrengths = function (knnIndices, knnDistances, sigmas, rhos) { - var nSamples = knnIndices.length; - var nNeighbors = knnIndices[0].length; - var rows = utils.zeros(nSamples * nNeighbors); - var cols = utils.zeros(nSamples * nNeighbors); - var vals = utils.zeros(nSamples * nNeighbors); - for (var i = 0; i < nSamples; i++) { - for (var j = 0; j < nNeighbors; j++) { - var val = 0; - if (knnIndices[i][j] === -1) 
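-/*
- * The set-operation mixing in fuzzySimplicialSet above computes, for the
- * sparse membership matrix A, its transpose A', and elementwise product .*:
- *
- *   result = r * (A + A' - A.*A') + (1 - r) * (A.*A'),  r = setOpMixRatio
- *
- * so r = 1 gives the fuzzy set union and r = 0 the fuzzy intersection.
- * smoothKNNDistance solves, per point i, for the sigma_i satisfying
- *   sum_j exp(-max(0, d_ij - rho_i) / sigma_i) ~= log2(k) * bandwidth
- * by bisection (nIter = 64 passes), where rho_i is the distance to the
- * nearest neighbor implied by localConnectivity.
- */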
{ - continue; - } - if (knnIndices[i][j] === i) { - val = 0.0; - } - else if (knnDistances[i][j] - rhos[i] <= 0.0) { - val = 1.0; - } - else { - val = Math.exp(-((knnDistances[i][j] - rhos[i]) / sigmas[i])); - } - rows[i * nNeighbors + j] = i; - cols[i * nNeighbors + j] = knnIndices[i][j]; - vals[i * nNeighbors + j] = val; - } - } - return { rows: rows, cols: cols, vals: vals }; - }; - UMAP.prototype.initializeSimplicialSetEmbedding = function () { - var _this = this; - var nEpochs = this.getNEpochs(); - var nComponents = this.nComponents; - var graphValues = this.graph.getValues(); - var graphMax = 0; - for (var i = 0; i < graphValues.length; i++) { - var value = graphValues[i]; - if (graphMax < graphValues[i]) { - graphMax = value; - } - } - var graph = this.graph.map(function (value) { - if (value < graphMax / nEpochs) { - return 0; - } - else { - return value; - } - }); - this.embedding = utils.zeros(graph.nRows).map(function () { - return utils.zeros(nComponents).map(function () { - return utils.tauRand(_this.random) * 20 + -10; - }); - }); - var weights = []; - var head = []; - var tail = []; - for (var i = 0; i < graph.nRows; i++) { - for (var j = 0; j < graph.nCols; j++) { - var value = graph.get(i, j); - if (value) { - weights.push(value); - tail.push(i); - head.push(j); - } - } - } - var epochsPerSample = this.makeEpochsPerSample(weights, nEpochs); - return { head: head, tail: tail, epochsPerSample: epochsPerSample }; - }; - UMAP.prototype.makeEpochsPerSample = function (weights, nEpochs) { - var result = utils.filled(weights.length, -1.0); - var max = utils.max(weights); - var nSamples = weights.map(function (w) { return (w / max) * nEpochs; }); - nSamples.forEach(function (n, i) { - if (n > 0) - result[i] = nEpochs / nSamples[i]; - }); - return result; - }; - UMAP.prototype.assignOptimizationStateParameters = function (state) { - Object.assign(this.optimizationState, state); - }; - UMAP.prototype.prepareForOptimizationLoop = function () { - var _a = this, repulsionStrength = _a.repulsionStrength, learningRate = _a.learningRate, negativeSampleRate = _a.negativeSampleRate; - var _b = this.optimizationState, epochsPerSample = _b.epochsPerSample, headEmbedding = _b.headEmbedding, tailEmbedding = _b.tailEmbedding; - var dim = headEmbedding[0].length; - var moveOther = headEmbedding.length === tailEmbedding.length; - var epochsPerNegativeSample = epochsPerSample.map(function (e) { return e / negativeSampleRate; }); - var epochOfNextNegativeSample = __spread(epochsPerNegativeSample); - var epochOfNextSample = __spread(epochsPerSample); - this.assignOptimizationStateParameters({ - epochOfNextSample: epochOfNextSample, - epochOfNextNegativeSample: epochOfNextNegativeSample, - epochsPerNegativeSample: epochsPerNegativeSample, - moveOther: moveOther, - initialAlpha: learningRate, - alpha: learningRate, - gamma: repulsionStrength, - dim: dim, - }); - }; - UMAP.prototype.initializeOptimization = function () { - var headEmbedding = this.embedding; - var tailEmbedding = this.embedding; - var _a = this.optimizationState, head = _a.head, tail = _a.tail, epochsPerSample = _a.epochsPerSample; - var nEpochs = this.getNEpochs(); - var nVertices = this.graph.nCols; - var _b = findABParams(this.spread, this.minDist), a = _b.a, b = _b.b; - this.assignOptimizationStateParameters({ - headEmbedding: headEmbedding, - tailEmbedding: tailEmbedding, - head: head, - tail: tail, - epochsPerSample: epochsPerSample, - a: a, - b: b, - nEpochs: nEpochs, - nVertices: nVertices, - }); - }; - 
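-/*
- * makeEpochsPerSample above spreads edge updates over the optimization run:
- * an edge with weight w is sampled every max(w) / w epochs, so the heaviest
- * edge is updated every epoch. A quick worked example (values illustrative):
- *
- * @example
- * // weights = [1.0, 0.5, 0.25], nEpochs = 200
- * // nSamples = [200, 100, 50]  ->  epochsPerSample = [1, 2, 4]
- */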
UMAP.prototype.optimizeLayoutStep = function (n) { - var optimizationState = this.optimizationState; - var head = optimizationState.head, tail = optimizationState.tail, headEmbedding = optimizationState.headEmbedding, tailEmbedding = optimizationState.tailEmbedding, epochsPerSample = optimizationState.epochsPerSample, epochOfNextSample = optimizationState.epochOfNextSample, epochOfNextNegativeSample = optimizationState.epochOfNextNegativeSample, epochsPerNegativeSample = optimizationState.epochsPerNegativeSample, moveOther = optimizationState.moveOther, initialAlpha = optimizationState.initialAlpha, alpha = optimizationState.alpha, gamma = optimizationState.gamma, a = optimizationState.a, b = optimizationState.b, dim = optimizationState.dim, nEpochs = optimizationState.nEpochs, nVertices = optimizationState.nVertices; - var clipValue = 4.0; - for (var i = 0; i < epochsPerSample.length; i++) { - if (epochOfNextSample[i] > n) { - continue; - } - var j = head[i]; - var k = tail[i]; - var current = headEmbedding[j]; - var other = tailEmbedding[k]; - var distSquared = rDist(current, other); - var gradCoeff = 0; - if (distSquared > 0) { - gradCoeff = -2.0 * a * b * Math.pow(distSquared, b - 1.0); - gradCoeff /= a * Math.pow(distSquared, b) + 1.0; - } - for (var d = 0; d < dim; d++) { - var gradD = clip(gradCoeff * (current[d] - other[d]), clipValue); - current[d] += gradD * alpha; - if (moveOther) { - other[d] += -gradD * alpha; - } - } - epochOfNextSample[i] += epochsPerSample[i]; - var nNegSamples = Math.floor((n - epochOfNextNegativeSample[i]) / epochsPerNegativeSample[i]); - for (var p = 0; p < nNegSamples; p++) { - var k_1 = utils.tauRandInt(nVertices, this.random); - var other_1 = tailEmbedding[k_1]; - var distSquared_1 = rDist(current, other_1); - var gradCoeff_1 = 0.0; - if (distSquared_1 > 0.0) { - gradCoeff_1 = 2.0 * gamma * b; - gradCoeff_1 /= - (0.001 + distSquared_1) * (a * Math.pow(distSquared_1, b) + 1); - } - else if (j === k_1) { - continue; - } - for (var d = 0; d < dim; d++) { - var gradD = 4.0; - if (gradCoeff_1 > 0.0) { - gradD = clip(gradCoeff_1 * (current[d] - other_1[d]), clipValue); - } - current[d] += gradD * alpha; - } - } - epochOfNextNegativeSample[i] += nNegSamples * epochsPerNegativeSample[i]; - } - optimizationState.alpha = initialAlpha * (1.0 - n / nEpochs); - optimizationState.currentEpoch += 1; - return headEmbedding; - }; - UMAP.prototype.optimizeLayoutAsync = function (epochCallback) { - var _this = this; - if (epochCallback === void 0) { epochCallback = function () { return true; }; } - return new Promise(function (resolve, reject) { - var step = function () { return __awaiter(_this, void 0, void 0, function () { - var _a, nEpochs, currentEpoch, epochCompleted, shouldStop, isFinished; - return __generator(this, function (_b) { - try { - _a = this.optimizationState, nEpochs = _a.nEpochs, currentEpoch = _a.currentEpoch; - this.embedding = this.optimizeLayoutStep(currentEpoch); - epochCompleted = this.optimizationState.currentEpoch; - shouldStop = epochCallback(epochCompleted) === false; - isFinished = epochCompleted === nEpochs; - if (!shouldStop && !isFinished) { - step(); - } - else { - return [2, resolve(isFinished)]; - } - } - catch (err) { - reject(err); - } - return [2]; - }); - }); }; - step(); - }); - }; - UMAP.prototype.optimizeLayout = function (epochCallback) { - if (epochCallback === void 0) { epochCallback = function () { return true; }; } - var isFinished = false; - var embedding = []; - while (!isFinished) { - var _a = this.optimizationState, 
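-/*
- * optimizeLayoutStep above is the standard UMAP stochastic gradient step.
- * With d2 the squared distance between the two embedding points, it uses
- *   attractive: gradCoeff = -2ab * d2^(b-1) / (a * d2^b + 1)
- *   repulsive:  gradCoeff = 2 * gamma * b / ((0.001 + d2) * (a * d2^b + 1))
- * per negative sample, clips each per-dimension move to +/-4, scales it by
- * the learning rate alpha, and decays alpha linearly to 0 over nEpochs.
- */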
nEpochs = _a.nEpochs, currentEpoch = _a.currentEpoch; - embedding = this.optimizeLayoutStep(currentEpoch); - var epochCompleted = this.optimizationState.currentEpoch; - var shouldStop = epochCallback(epochCompleted) === false; - isFinished = epochCompleted === nEpochs || shouldStop; - } - return embedding; - }; - UMAP.prototype.getNEpochs = function () { - var graph = this.graph; - if (this.nEpochs > 0) { - return this.nEpochs; - } - var length = graph.nRows; - if (length <= 2500) { - return 500; - } - else if (length <= 5000) { - return 400; - } - else if (length <= 7500) { - return 300; - } - else { - return 200; - } - }; - return UMAP; -}()); -exports.UMAP = UMAP; -function euclidean(x, y) { - var result = 0; - for (var i = 0; i < x.length; i++) { - result += Math.pow((x[i] - y[i]), 2); - } - return Math.sqrt(result); -} -exports.euclidean = euclidean; -function cosine(x, y) { - var result = 0.0; - var normX = 0.0; - var normY = 0.0; - for (var i = 0; i < x.length; i++) { - result += x[i] * y[i]; - normX += Math.pow(x[i], 2); - normY += Math.pow(y[i], 2); - } - if (normX === 0 && normY === 0) { - return 0; - } - else if (normX === 0 || normY === 0) { - return 1.0; - } - else { - return 1.0 - result / Math.sqrt(normX * normY); - } -} -exports.cosine = cosine; -var OptimizationState = (function () { - function OptimizationState() { - this.currentEpoch = 0; - this.headEmbedding = []; - this.tailEmbedding = []; - this.head = []; - this.tail = []; - this.epochsPerSample = []; - this.epochOfNextSample = []; - this.epochOfNextNegativeSample = []; - this.epochsPerNegativeSample = []; - this.moveOther = true; - this.initialAlpha = 1.0; - this.alpha = 1.0; - this.gamma = 1.0; - this.a = 1.5769434603113077; - this.b = 0.8950608779109733; - this.dim = 2; - this.nEpochs = 500; - this.nVertices = 0; - } - return OptimizationState; -}()); -function clip(x, clipValue) { - if (x > clipValue) - return clipValue; - else if (x < -clipValue) - return -clipValue; - else - return x; -} -function rDist(x, y) { - var result = 0.0; - for (var i = 0; i < x.length; i++) { - result += Math.pow(x[i] - y[i], 2); - } - return result; -} -function findABParams(spread, minDist) { - var curve = function (_a) { - var _b = __read(_a, 2), a = _b[0], b = _b[1]; - return function (x) { - return 1.0 / (1.0 + a * Math.pow(x, (2 * b))); - }; - }; - var xv = utils - .linear(0, spread * 3, 300) - .map(function (val) { return (val < minDist ? 1.0 : val); }); - var yv = utils.zeros(xv.length).map(function (val, index) { - var gte = xv[index] >= minDist; - return gte ? 
Math.exp(-(xv[index] - minDist) / spread) : val; - }); - var initialValues = [0.5, 0.5]; - var data = { x: xv, y: yv }; - var options = { - damping: 1.5, - initialValues: initialValues, - gradientDifference: 10e-2, - maxIterations: 100, - errorTolerance: 10e-3, - }; - var parameterValues = ml_levenberg_marquardt_1.default(data, curve, options).parameterValues; - var _a = __read(parameterValues, 2), a = _a[0], b = _a[1]; - return { a: a, b: b }; -} -exports.findABParams = findABParams; -function fastIntersection(graph, target, unknownDist, farDist) { - if (unknownDist === void 0) { unknownDist = 1.0; } - if (farDist === void 0) { farDist = 5.0; } - return graph.map(function (value, row, col) { - if (target[row] === -1 || target[col] === -1) { - return value * Math.exp(-unknownDist); - } - else if (target[row] !== target[col]) { - return value * Math.exp(-farDist); - } - else { - return value; - } - }); -} -exports.fastIntersection = fastIntersection; -function resetLocalConnectivity(simplicialSet) { - simplicialSet = matrix.normalize(simplicialSet, "max"); - var transpose = matrix.transpose(simplicialSet); - var prodMatrix = matrix.pairwiseMultiply(transpose, simplicialSet); - simplicialSet = matrix.add(simplicialSet, matrix.subtract(transpose, prodMatrix)); - return matrix.eliminateZeros(simplicialSet); -} -exports.resetLocalConnectivity = resetLocalConnectivity; -function initTransform(indices, weights, embedding) { - var result = utils - .zeros(indices.length) - .map(function (z) { return utils.zeros(embedding[0].length); }); - for (var i = 0; i < indices.length; i++) { - for (var j = 0; j < indices[0].length; j++) { - for (var d = 0; d < embedding[0].length; d++) { - var a = indices[i][j]; - result[i][d] += weights[i][j] * embedding[a][d]; - } - } - } - return result; -} -exports.initTransform = initTransform; - - -/***/ }), -/* 7 */ -/***/ (function(module, exports, __webpack_require__) { - -"use strict"; - -var __values = (this && this.__values) || function (o) { - var m = typeof Symbol === "function" && o[Symbol.iterator], i = 0; - if (m) return m.call(o); - return { - next: function () { - if (o && i >= o.length) o = void 0; - return { value: o && o[i++], done: !o }; - } - }; -}; -var __importStar = (this && this.__importStar) || function (mod) { - if (mod && mod.__esModule) return mod; - var result = {}; - if (mod != null) for (var k in mod) if (Object.hasOwnProperty.call(mod, k)) result[k] = mod[k]; - result["default"] = mod; - return result; -}; -Object.defineProperty(exports, "__esModule", { value: true }); -var heap = __importStar(__webpack_require__(2)); -var matrix = __importStar(__webpack_require__(3)); -var tree = __importStar(__webpack_require__(4)); -var utils = __importStar(__webpack_require__(1)); -function makeNNDescent(distanceFn, random) { - return function nNDescent(data, leafArray, nNeighbors, nIters, maxCandidates, delta, rho, rpTreeInit) { - if (nIters === void 0) { nIters = 10; } - if (maxCandidates === void 0) { maxCandidates = 50; } - if (delta === void 0) { delta = 0.001; } - if (rho === void 0) { rho = 0.5; } - if (rpTreeInit === void 0) { rpTreeInit = true; } - var nVertices = data.length; - var currentGraph = heap.makeHeap(data.length, nNeighbors); - for (var i = 0; i < data.length; i++) { - var indices = heap.rejectionSample(nNeighbors, data.length, random); - for (var j = 0; j < indices.length; j++) { - var d = distanceFn(data[i], data[indices[j]]); - heap.heapPush(currentGraph, i, d, indices[j], 1); - heap.heapPush(currentGraph, indices[j], d, i, 1); - 
} - } - if (rpTreeInit) { - for (var n = 0; n < leafArray.length; n++) { - for (var i = 0; i < leafArray[n].length; i++) { - if (leafArray[n][i] < 0) { - break; - } - for (var j = i + 1; j < leafArray[n].length; j++) { - if (leafArray[n][j] < 0) { - break; - } - var d = distanceFn(data[leafArray[n][i]], data[leafArray[n][j]]); - heap.heapPush(currentGraph, leafArray[n][i], d, leafArray[n][j], 1); - heap.heapPush(currentGraph, leafArray[n][j], d, leafArray[n][i], 1); - } - } - } - } - for (var n = 0; n < nIters; n++) { - var candidateNeighbors = heap.buildCandidates(currentGraph, nVertices, nNeighbors, maxCandidates, random); - var c = 0; - for (var i = 0; i < nVertices; i++) { - for (var j = 0; j < maxCandidates; j++) { - var p = Math.floor(candidateNeighbors[0][i][j]); - if (p < 0 || utils.tauRand(random) < rho) { - continue; - } - for (var k = 0; k < maxCandidates; k++) { - var q = Math.floor(candidateNeighbors[0][i][k]); - var cj = candidateNeighbors[2][i][j]; - var ck = candidateNeighbors[2][i][k]; - if (q < 0 || (!cj && !ck)) { - continue; - } - var d = distanceFn(data[p], data[q]); - c += heap.heapPush(currentGraph, p, d, q, 1); - c += heap.heapPush(currentGraph, q, d, p, 1); - } - } - } - if (c <= delta * nNeighbors * data.length) { - break; - } - } - var sorted = heap.deheapSort(currentGraph); - return sorted; - }; -} -exports.makeNNDescent = makeNNDescent; -function makeInitializations(distanceFn) { - function initFromRandom(nNeighbors, data, queryPoints, _heap, random) { - for (var i = 0; i < queryPoints.length; i++) { - var indices = utils.rejectionSample(nNeighbors, data.length, random); - for (var j = 0; j < indices.length; j++) { - if (indices[j] < 0) { - continue; - } - var d = distanceFn(data[indices[j]], queryPoints[i]); - heap.heapPush(_heap, i, d, indices[j], 1); - } - } - } - function initFromTree(_tree, data, queryPoints, _heap, random) { - for (var i = 0; i < queryPoints.length; i++) { - var indices = tree.searchFlatTree(queryPoints[i], _tree, random); - for (var j = 0; j < indices.length; j++) { - if (indices[j] < 0) { - return; - } - var d = distanceFn(data[indices[j]], queryPoints[i]); - heap.heapPush(_heap, i, d, indices[j], 1); - } - } - return; - } - return { initFromRandom: initFromRandom, initFromTree: initFromTree }; -} -exports.makeInitializations = makeInitializations; -function makeInitializedNNSearch(distanceFn) { - return function nnSearchFn(data, graph, initialization, queryPoints) { - var e_1, _a; - var _b = matrix.getCSR(graph), indices = _b.indices, indptr = _b.indptr; - for (var i = 0; i < queryPoints.length; i++) { - var tried = new Set(initialization[0][i]); - while (true) { - var vertex = heap.smallestFlagged(initialization, i); - if (vertex === -1) { - break; - } - var candidates = indices.slice(indptr[vertex], indptr[vertex + 1]); - try { - for (var candidates_1 = __values(candidates), candidates_1_1 = candidates_1.next(); !candidates_1_1.done; candidates_1_1 = candidates_1.next()) { - var candidate = candidates_1_1.value; - if (candidate === vertex || - candidate === -1 || - tried.has(candidate)) { - continue; - } - var d = distanceFn(data[candidate], queryPoints[i]); - heap.uncheckedHeapPush(initialization, i, d, candidate, 1); - tried.add(candidate); - } - } - catch (e_1_1) { e_1 = { error: e_1_1 }; } - finally { - try { - if (candidates_1_1 && !candidates_1_1.done && (_a = candidates_1.return)) _a.call(candidates_1); - } - finally { if (e_1) throw e_1.error; } - } - } - } - return initialization; - }; -} -exports.makeInitializedNNSearch = 
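-/*
- * Sketch of the NN-descent flow implemented above (`data`, `leafArray`, and
- * the distance function are illustrative): the heap is seeded with random
- * neighbors and RP-tree leaves, then refined by comparing each point's
- * candidate neighbors-of-neighbors until a pass makes fewer than
- * delta * nNeighbors * data.length updates.
- *
- * @example
- * var nnd = makeNNDescent(distanceFn, Math.random);
- * var knn = nnd(data, leafArray, 15, 10);
- * // knn.indices[i] = approximate kNN ids of point i, knn.weights[i] = their distances
- */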
makeInitializedNNSearch; -function initializeSearch(forest, data, queryPoints, nNeighbors, initFromRandom, initFromTree, random) { - var e_2, _a; - var results = heap.makeHeap(queryPoints.length, nNeighbors); - initFromRandom(nNeighbors, data, queryPoints, results, random); - if (forest) { - try { - for (var forest_1 = __values(forest), forest_1_1 = forest_1.next(); !forest_1_1.done; forest_1_1 = forest_1.next()) { - var tree_1 = forest_1_1.value; - initFromTree(tree_1, data, queryPoints, results, random); - } - } - catch (e_2_1) { e_2 = { error: e_2_1 }; } - finally { - try { - if (forest_1_1 && !forest_1_1.done && (_a = forest_1.return)) _a.call(forest_1); - } - finally { if (e_2) throw e_2.error; } - } - } - return results; -} -exports.initializeSearch = initializeSearch; - - -/***/ }), -/* 8 */ -/***/ (function(module, exports, __webpack_require__) { - -"use strict"; - - -var mlMatrix = __webpack_require__(9); - -/** - * Calculate current error - * @ignore - * @param {{x:Array, y:Array}} data - Array of points to fit in the format [x1, x2, ... ], [y1, y2, ... ] - * @param {Array} parameters - Array of current parameter values - * @param {function} parameterizedFunction - The parameters and returns a function with the independent variable as a parameter - * @return {number} - */ -function errorCalculation( - data, - parameters, - parameterizedFunction -) { - var error = 0; - const func = parameterizedFunction(parameters); - - for (var i = 0; i < data.x.length; i++) { - error += Math.abs(data.y[i] - func(data.x[i])); - } - - return error; -} - -/** - * Difference of the matrix function over the parameters - * @ignore - * @param {{x:Array, y:Array}} data - Array of points to fit in the format [x1, x2, ... ], [y1, y2, ... ] - * @param {Array} evaluatedData - Array of previous evaluated function values - * @param {Array} params - Array of previous parameter values - * @param {number} gradientDifference - Adjustment for decrease the damping parameter - * @param {function} paramFunction - The parameters and returns a function with the independent variable as a parameter - * @return {Matrix} - */ -function gradientFunction( - data, - evaluatedData, - params, - gradientDifference, - paramFunction -) { - const n = params.length; - const m = data.x.length; - - var ans = new Array(n); - - for (var param = 0; param < n; param++) { - ans[param] = new Array(m); - var auxParams = params.concat(); - auxParams[param] += gradientDifference; - var funcParam = paramFunction(auxParams); - - for (var point = 0; point < m; point++) { - ans[param][point] = evaluatedData[point] - funcParam(data.x[point]); - } - } - return new mlMatrix.Matrix(ans); -} - -/** - * Matrix function over the samples - * @ignore - * @param {{x:Array, y:Array}} data - Array of points to fit in the format [x1, x2, ... ], [y1, y2, ... ] - * @param {Array} evaluatedData - Array of previous evaluated function values - * @return {Matrix} - */ -function matrixFunction(data, evaluatedData) { - const m = data.x.length; - - var ans = new Array(m); - - for (var point = 0; point < m; point++) { - ans[point] = data.y[point] - evaluatedData[point]; - } - - return new mlMatrix.Matrix([ans]); -} - -/** - * Iteration for Levenberg-Marquardt - * @ignore - * @param {{x:Array, y:Array}} data - Array of points to fit in the format [x1, x2, ... ], [y1, y2, ... 
] - * @param {Array} params - Array of previous parameter values - * @param {number} damping - Levenberg-Marquardt parameter - * @param {number} gradientDifference - Adjustment for decrease the damping parameter - * @param {function} parameterizedFunction - The parameters and returns a function with the independent variable as a parameter - * @return {Array} - */ -function step( - data, - params, - damping, - gradientDifference, - parameterizedFunction -) { - var identity = mlMatrix.Matrix.eye(params.length).mul( - damping * gradientDifference * gradientDifference - ); - - var l = data.x.length; - var evaluatedData = new Array(l); - const func = parameterizedFunction(params); - for (var i = 0; i < l; i++) { - evaluatedData[i] = func(data.x[i]); - } - var gradientFunc = gradientFunction( - data, - evaluatedData, - params, - gradientDifference, - parameterizedFunction - ); - var matrixFunc = matrixFunction(data, evaluatedData).transposeView(); - var inverseMatrix = mlMatrix.inverse( - identity.add(gradientFunc.mmul(gradientFunc.transposeView())) - ); - params = new mlMatrix.Matrix([params]); - params = params.sub( - inverseMatrix - .mmul(gradientFunc) - .mmul(matrixFunc) - .mul(gradientDifference) - .transposeView() - ); - - return params.to1DArray(); -} - -/** - * Curve fitting algorithm - * @param {{x:Array, y:Array}} data - Array of points to fit in the format [x1, x2, ... ], [y1, y2, ... ] - * @param {function} parameterizedFunction - The parameters and returns a function with the independent variable as a parameter - * @param {object} [options] - Options object - * @param {number} [options.damping] - Levenberg-Marquardt parameter - * @param {number} [options.gradientDifference = 10e-2] - Adjustment for decrease the damping parameter - * @param {Array} [options.initialValues] - Array of initial parameter values - * @param {number} [options.maxIterations = 100] - Maximum of allowed iterations - * @param {number} [options.errorTolerance = 10e-3] - Minimum uncertainty allowed for each point - * @return {{parameterValues: Array, parameterError: number, iterations: number}} - */ -function levenbergMarquardt( - data, - parameterizedFunction, - options = {} -) { - let { - maxIterations = 100, - gradientDifference = 10e-2, - damping = 0, - errorTolerance = 10e-3, - initialValues - } = options; - - if (damping <= 0) { - throw new Error('The damping option must be a positive number'); - } else if (!data.x || !data.y) { - throw new Error('The data parameter must have x and y elements'); - } else if ( - !Array.isArray(data.x) || - data.x.length < 2 || - !Array.isArray(data.y) || - data.y.length < 2 - ) { - throw new Error( - 'The data parameter elements must be an array with more than 2 points' - ); - } else { - let dataLen = data.x.length; - if (dataLen !== data.y.length) { - throw new Error('The data parameter elements must have the same size'); - } - } - - var parameters = - initialValues || new Array(parameterizedFunction.length).fill(1); - - if (!Array.isArray(parameters)) { - throw new Error('initialValues must be an array'); - } - - var error = errorCalculation(data, parameters, parameterizedFunction); - - var converged = error <= errorTolerance; - - for ( - var iteration = 0; - iteration < maxIterations && !converged; - iteration++ - ) { - parameters = step( - data, - parameters, - damping, - gradientDifference, - parameterizedFunction - ); - error = errorCalculation(data, parameters, parameterizedFunction); - converged = error <= errorTolerance; - } - - return { - parameterValues: 
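-/*
- * A usage sketch for this levenbergMarquardt function, mirroring how
- * findABParams in the UMAP module calls it; xs/ys and the target curve are
- * illustrative:
- *
- * @example
- * var curve = function (p) { return function (x) { return 1.0 / (1.0 + p[0] * Math.pow(x, 2 * p[1])); }; };
- * var fit = levenbergMarquardt(
- *   { x: xs, y: ys },
- *   curve,
- *   { damping: 1.5, initialValues: [0.5, 0.5], maxIterations: 100 }
- * );
- * // fit.parameterValues = [a, b], fit.parameterError, fit.iterations
- */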
parameters, - parameterError: error, - iterations: iteration - }; -} - -module.exports = levenbergMarquardt; - - -/***/ }), -/* 9 */ -/***/ (function(module, __webpack_exports__, __webpack_require__) { - -"use strict"; -__webpack_require__.r(__webpack_exports__); - -// EXTERNAL MODULE: ./node_modules/is-any-array/src/index.js -var src = __webpack_require__(0); -var src_default = /*#__PURE__*/__webpack_require__.n(src); - -// CONCATENATED MODULE: ./node_modules/ml-array-max/lib-es6/index.js - - -/** - * Computes the maximum of the given values - * @param {Array} input - * @return {number} - */ - -function lib_es6_max(input) { - if (!src_default()(input)) { - throw new TypeError('input must be an array'); - } - - if (input.length === 0) { - throw new TypeError('input must not be empty'); - } - - var max = input[0]; - - for (var i = 1; i < input.length; i++) { - if (input[i] > max) max = input[i]; - } - - return max; -} - -/* harmony default export */ var lib_es6 = (lib_es6_max); - -// CONCATENATED MODULE: ./node_modules/ml-array-min/lib-es6/index.js - - -/** - * Computes the minimum of the given values - * @param {Array} input - * @return {number} - */ - -function lib_es6_min(input) { - if (!src_default()(input)) { - throw new TypeError('input must be an array'); - } - - if (input.length === 0) { - throw new TypeError('input must not be empty'); - } - - var min = input[0]; - - for (var i = 1; i < input.length; i++) { - if (input[i] < min) min = input[i]; - } - - return min; -} - -/* harmony default export */ var ml_array_min_lib_es6 = (lib_es6_min); - -// CONCATENATED MODULE: ./node_modules/ml-array-rescale/lib-es6/index.js - - - - -function rescale(input) { - var options = arguments.length > 1 && arguments[1] !== undefined ? arguments[1] : {}; - - if (!src_default()(input)) { - throw new TypeError('input must be an array'); - } else if (input.length === 0) { - throw new TypeError('input must not be empty'); - } - - var output; - - if (options.output !== undefined) { - if (!src_default()(options.output)) { - throw new TypeError('output option must be an array if specified'); - } - - output = options.output; - } else { - output = new Array(input.length); - } - - var currentMin = ml_array_min_lib_es6(input); - var currentMax = lib_es6(input); - - if (currentMin === currentMax) { - throw new RangeError('minimum and maximum input values are equal. Cannot rescale a constant array'); - } - - var _options$min = options.min, - minValue = _options$min === void 0 ? options.autoMinMax ? currentMin : 0 : _options$min, - _options$max = options.max, - maxValue = _options$max === void 0 ? options.autoMinMax ? 
currentMax : 1 : _options$max; - - if (minValue >= maxValue) { - throw new RangeError('min option must be smaller than max option'); - } - - var factor = (maxValue - minValue) / (currentMax - currentMin); - - for (var i = 0; i < input.length; i++) { - output[i] = (input[i] - currentMin) * factor + minValue; - } - - return output; -} - -/* harmony default export */ var ml_array_rescale_lib_es6 = (rescale); - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/dc/lu.js - - -/** - * @class LuDecomposition - * @link https://github.com/lutzroeder/Mapack/blob/master/Source/LuDecomposition.cs - * @param {Matrix} matrix - */ -class lu_LuDecomposition { - constructor(matrix) { - matrix = WrapperMatrix2D_WrapperMatrix2D.checkMatrix(matrix); - - var lu = matrix.clone(); - var rows = lu.rows; - var columns = lu.columns; - var pivotVector = new Array(rows); - var pivotSign = 1; - var i, j, k, p, s, t, v; - var LUcolj, kmax; - - for (i = 0; i < rows; i++) { - pivotVector[i] = i; - } - - LUcolj = new Array(rows); - - for (j = 0; j < columns; j++) { - for (i = 0; i < rows; i++) { - LUcolj[i] = lu.get(i, j); - } - - for (i = 0; i < rows; i++) { - kmax = Math.min(i, j); - s = 0; - for (k = 0; k < kmax; k++) { - s += lu.get(i, k) * LUcolj[k]; - } - LUcolj[i] -= s; - lu.set(i, j, LUcolj[i]); - } - - p = j; - for (i = j + 1; i < rows; i++) { - if (Math.abs(LUcolj[i]) > Math.abs(LUcolj[p])) { - p = i; - } - } - - if (p !== j) { - for (k = 0; k < columns; k++) { - t = lu.get(p, k); - lu.set(p, k, lu.get(j, k)); - lu.set(j, k, t); - } - - v = pivotVector[p]; - pivotVector[p] = pivotVector[j]; - pivotVector[j] = v; - - pivotSign = -pivotSign; - } - - if (j < rows && lu.get(j, j) !== 0) { - for (i = j + 1; i < rows; i++) { - lu.set(i, j, lu.get(i, j) / lu.get(j, j)); - } - } - } - - this.LU = lu; - this.pivotVector = pivotVector; - this.pivotSign = pivotSign; - } - - /** - * - * @return {boolean} - */ - isSingular() { - var data = this.LU; - var col = data.columns; - for (var j = 0; j < col; j++) { - if (data[j][j] === 0) { - return true; - } - } - return false; - } - - /** - * - * @param {Matrix} value - * @return {Matrix} - */ - solve(value) { - value = matrix_Matrix.checkMatrix(value); - - var lu = this.LU; - var rows = lu.rows; - - if (rows !== value.rows) { - throw new Error('Invalid matrix dimensions'); - } - if (this.isSingular()) { - throw new Error('LU matrix is singular'); - } - - var count = value.columns; - var X = value.subMatrixRow(this.pivotVector, 0, count - 1); - var columns = lu.columns; - var i, j, k; - - for (k = 0; k < columns; k++) { - for (i = k + 1; i < columns; i++) { - for (j = 0; j < count; j++) { - X[i][j] -= X[k][j] * lu[i][k]; - } - } - } - for (k = columns - 1; k >= 0; k--) { - for (j = 0; j < count; j++) { - X[k][j] /= lu[k][k]; - } - for (i = 0; i < k; i++) { - for (j = 0; j < count; j++) { - X[i][j] -= X[k][j] * lu[i][k]; - } - } - } - return X; - } - - /** - * - * @return {number} - */ - get determinant() { - var data = this.LU; - if (!data.isSquare()) { - throw new Error('Matrix must be square'); - } - var determinant = this.pivotSign; - var col = data.columns; - for (var j = 0; j < col; j++) { - determinant *= data[j][j]; - } - return determinant; - } - - /** - * - * @return {Matrix} - */ - get lowerTriangularMatrix() { - var data = this.LU; - var rows = data.rows; - var columns = data.columns; - var X = new matrix_Matrix(rows, columns); - for (var i = 0; i < rows; i++) { - for (var j = 0; j < columns; j++) { - if (i > j) { - X[i][j] = data[i][j]; - } else if (i === j) { - 
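-/*
- * Sketch of the LU decomposition class above solving A x = b. Hedged: the
- * Matrix construction follows the general ml-matrix API assumed by this
- * bundle, and the values are illustrative:
- *
- * @example
- * var A = new Matrix([[4, 3], [6, 3]]);
- * var lu = new LuDecomposition(A);   // PA = LU with partial pivoting
- * lu.determinant;                    // 4*3 - 3*6 = -6
- * var x = lu.solve(b);               // b: column Matrix with A.rows rows
- */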
X[i][j] = 1; - } else { - X[i][j] = 0; - } - } - } - return X; - } - - /** - * - * @return {Matrix} - */ - get upperTriangularMatrix() { - var data = this.LU; - var rows = data.rows; - var columns = data.columns; - var X = new matrix_Matrix(rows, columns); - for (var i = 0; i < rows; i++) { - for (var j = 0; j < columns; j++) { - if (i <= j) { - X[i][j] = data[i][j]; - } else { - X[i][j] = 0; - } - } - } - return X; - } - - /** - * - * @return {Array} - */ - get pivotPermutationVector() { - return this.pivotVector.slice(); - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/dc/util.js -function hypotenuse(a, b) { - var r = 0; - if (Math.abs(a) > Math.abs(b)) { - r = b / a; - return Math.abs(a) * Math.sqrt(1 + r * r); - } - if (b !== 0) { - r = a / b; - return Math.abs(b) * Math.sqrt(1 + r * r); - } - return 0; -} - -function getFilled2DArray(rows, columns, value) { - var array = new Array(rows); - for (var i = 0; i < rows; i++) { - array[i] = new Array(columns); - for (var j = 0; j < columns; j++) { - array[i][j] = value; - } - } - return array; -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/dc/svd.js - - - - -/** - * @class SingularValueDecomposition - * @see https://github.com/accord-net/framework/blob/development/Sources/Accord.Math/Decompositions/SingularValueDecomposition.cs - * @param {Matrix} value - * @param {object} [options] - * @param {boolean} [options.computeLeftSingularVectors=true] - * @param {boolean} [options.computeRightSingularVectors=true] - * @param {boolean} [options.autoTranspose=false] - */ -class svd_SingularValueDecomposition { - constructor(value, options = {}) { - value = WrapperMatrix2D_WrapperMatrix2D.checkMatrix(value); - - var m = value.rows; - var n = value.columns; - - const { - computeLeftSingularVectors = true, - computeRightSingularVectors = true, - autoTranspose = false - } = options; - - var wantu = Boolean(computeLeftSingularVectors); - var wantv = Boolean(computeRightSingularVectors); - - var swapped = false; - var a; - if (m < n) { - if (!autoTranspose) { - a = value.clone(); - // eslint-disable-next-line no-console - console.warn( - 'Computing SVD on a matrix with more columns than rows. 
Consider enabling autoTranspose' - ); - } else { - a = value.transpose(); - m = a.rows; - n = a.columns; - swapped = true; - var aux = wantu; - wantu = wantv; - wantv = aux; - } - } else { - a = value.clone(); - } - - var nu = Math.min(m, n); - var ni = Math.min(m + 1, n); - var s = new Array(ni); - var U = getFilled2DArray(m, nu, 0); - var V = getFilled2DArray(n, n, 0); - - var e = new Array(n); - var work = new Array(m); - - var si = new Array(ni); - for (let i = 0; i < ni; i++) si[i] = i; - - var nct = Math.min(m - 1, n); - var nrt = Math.max(0, Math.min(n - 2, m)); - var mrc = Math.max(nct, nrt); - - for (let k = 0; k < mrc; k++) { - if (k < nct) { - s[k] = 0; - for (let i = k; i < m; i++) { - s[k] = hypotenuse(s[k], a[i][k]); - } - if (s[k] !== 0) { - if (a[k][k] < 0) { - s[k] = -s[k]; - } - for (let i = k; i < m; i++) { - a[i][k] /= s[k]; - } - a[k][k] += 1; - } - s[k] = -s[k]; - } - - for (let j = k + 1; j < n; j++) { - if (k < nct && s[k] !== 0) { - let t = 0; - for (let i = k; i < m; i++) { - t += a[i][k] * a[i][j]; - } - t = -t / a[k][k]; - for (let i = k; i < m; i++) { - a[i][j] += t * a[i][k]; - } - } - e[j] = a[k][j]; - } - - if (wantu && k < nct) { - for (let i = k; i < m; i++) { - U[i][k] = a[i][k]; - } - } - - if (k < nrt) { - e[k] = 0; - for (let i = k + 1; i < n; i++) { - e[k] = hypotenuse(e[k], e[i]); - } - if (e[k] !== 0) { - if (e[k + 1] < 0) { - e[k] = 0 - e[k]; - } - for (let i = k + 1; i < n; i++) { - e[i] /= e[k]; - } - e[k + 1] += 1; - } - e[k] = -e[k]; - if (k + 1 < m && e[k] !== 0) { - for (let i = k + 1; i < m; i++) { - work[i] = 0; - } - for (let i = k + 1; i < m; i++) { - for (let j = k + 1; j < n; j++) { - work[i] += e[j] * a[i][j]; - } - } - for (let j = k + 1; j < n; j++) { - let t = -e[j] / e[k + 1]; - for (let i = k + 1; i < m; i++) { - a[i][j] += t * work[i]; - } - } - } - if (wantv) { - for (let i = k + 1; i < n; i++) { - V[i][k] = e[i]; - } - } - } - } - - let p = Math.min(n, m + 1); - if (nct < n) { - s[nct] = a[nct][nct]; - } - if (m < p) { - s[p - 1] = 0; - } - if (nrt + 1 < p) { - e[nrt] = a[nrt][p - 1]; - } - e[p - 1] = 0; - - if (wantu) { - for (let j = nct; j < nu; j++) { - for (let i = 0; i < m; i++) { - U[i][j] = 0; - } - U[j][j] = 1; - } - for (let k = nct - 1; k >= 0; k--) { - if (s[k] !== 0) { - for (let j = k + 1; j < nu; j++) { - let t = 0; - for (let i = k; i < m; i++) { - t += U[i][k] * U[i][j]; - } - t = -t / U[k][k]; - for (let i = k; i < m; i++) { - U[i][j] += t * U[i][k]; - } - } - for (let i = k; i < m; i++) { - U[i][k] = -U[i][k]; - } - U[k][k] = 1 + U[k][k]; - for (let i = 0; i < k - 1; i++) { - U[i][k] = 0; - } - } else { - for (let i = 0; i < m; i++) { - U[i][k] = 0; - } - U[k][k] = 1; - } - } - } - - if (wantv) { - for (let k = n - 1; k >= 0; k--) { - if (k < nrt && e[k] !== 0) { - for (let j = k + 1; j < n; j++) { - let t = 0; - for (let i = k + 1; i < n; i++) { - t += V[i][k] * V[i][j]; - } - t = -t / V[k + 1][k]; - for (let i = k + 1; i < n; i++) { - V[i][j] += t * V[i][k]; - } - } - } - for (let i = 0; i < n; i++) { - V[i][k] = 0; - } - V[k][k] = 1; - } - } - - var pp = p - 1; - var iter = 0; - var eps = Number.EPSILON; - while (p > 0) { - let k, kase; - for (k = p - 2; k >= -1; k--) { - if (k === -1) { - break; - } - const alpha = - Number.MIN_VALUE + eps * Math.abs(s[k] + Math.abs(s[k + 1])); - if (Math.abs(e[k]) <= alpha || Number.isNaN(e[k])) { - e[k] = 0; - break; - } - } - if (k === p - 2) { - kase = 4; - } else { - let ks; - for (ks = p - 1; ks >= k; ks--) { - if (ks === k) { - break; - } - let t = - (ks !== p ? 
Math.abs(e[ks]) : 0) + - (ks !== k + 1 ? Math.abs(e[ks - 1]) : 0); - if (Math.abs(s[ks]) <= eps * t) { - s[ks] = 0; - break; - } - } - if (ks === k) { - kase = 3; - } else if (ks === p - 1) { - kase = 1; - } else { - kase = 2; - k = ks; - } - } - - k++; - - switch (kase) { - case 1: { - let f = e[p - 2]; - e[p - 2] = 0; - for (let j = p - 2; j >= k; j--) { - let t = hypotenuse(s[j], f); - let cs = s[j] / t; - let sn = f / t; - s[j] = t; - if (j !== k) { - f = -sn * e[j - 1]; - e[j - 1] = cs * e[j - 1]; - } - if (wantv) { - for (let i = 0; i < n; i++) { - t = cs * V[i][j] + sn * V[i][p - 1]; - V[i][p - 1] = -sn * V[i][j] + cs * V[i][p - 1]; - V[i][j] = t; - } - } - } - break; - } - case 2: { - let f = e[k - 1]; - e[k - 1] = 0; - for (let j = k; j < p; j++) { - let t = hypotenuse(s[j], f); - let cs = s[j] / t; - let sn = f / t; - s[j] = t; - f = -sn * e[j]; - e[j] = cs * e[j]; - if (wantu) { - for (let i = 0; i < m; i++) { - t = cs * U[i][j] + sn * U[i][k - 1]; - U[i][k - 1] = -sn * U[i][j] + cs * U[i][k - 1]; - U[i][j] = t; - } - } - } - break; - } - case 3: { - const scale = Math.max( - Math.abs(s[p - 1]), - Math.abs(s[p - 2]), - Math.abs(e[p - 2]), - Math.abs(s[k]), - Math.abs(e[k]) - ); - const sp = s[p - 1] / scale; - const spm1 = s[p - 2] / scale; - const epm1 = e[p - 2] / scale; - const sk = s[k] / scale; - const ek = e[k] / scale; - const b = ((spm1 + sp) * (spm1 - sp) + epm1 * epm1) / 2; - const c = sp * epm1 * (sp * epm1); - let shift = 0; - if (b !== 0 || c !== 0) { - if (b < 0) { - shift = 0 - Math.sqrt(b * b + c); - } else { - shift = Math.sqrt(b * b + c); - } - shift = c / (b + shift); - } - let f = (sk + sp) * (sk - sp) + shift; - let g = sk * ek; - for (let j = k; j < p - 1; j++) { - let t = hypotenuse(f, g); - if (t === 0) t = Number.MIN_VALUE; - let cs = f / t; - let sn = g / t; - if (j !== k) { - e[j - 1] = t; - } - f = cs * s[j] + sn * e[j]; - e[j] = cs * e[j] - sn * s[j]; - g = sn * s[j + 1]; - s[j + 1] = cs * s[j + 1]; - if (wantv) { - for (let i = 0; i < n; i++) { - t = cs * V[i][j] + sn * V[i][j + 1]; - V[i][j + 1] = -sn * V[i][j] + cs * V[i][j + 1]; - V[i][j] = t; - } - } - t = hypotenuse(f, g); - if (t === 0) t = Number.MIN_VALUE; - cs = f / t; - sn = g / t; - s[j] = t; - f = cs * e[j] + sn * s[j + 1]; - s[j + 1] = -sn * e[j] + cs * s[j + 1]; - g = sn * e[j + 1]; - e[j + 1] = cs * e[j + 1]; - if (wantu && j < m - 1) { - for (let i = 0; i < m; i++) { - t = cs * U[i][j] + sn * U[i][j + 1]; - U[i][j + 1] = -sn * U[i][j] + cs * U[i][j + 1]; - U[i][j] = t; - } - } - } - e[p - 2] = f; - iter = iter + 1; - break; - } - case 4: { - if (s[k] <= 0) { - s[k] = s[k] < 0 ? -s[k] : 0; - if (wantv) { - for (let i = 0; i <= pp; i++) { - V[i][k] = -V[i][k]; - } - } - } - while (k < pp) { - if (s[k] >= s[k + 1]) { - break; - } - let t = s[k]; - s[k] = s[k + 1]; - s[k + 1] = t; - if (wantv && k < n - 1) { - for (let i = 0; i < n; i++) { - t = V[i][k + 1]; - V[i][k + 1] = V[i][k]; - V[i][k] = t; - } - } - if (wantu && k < m - 1) { - for (let i = 0; i < m; i++) { - t = U[i][k + 1]; - U[i][k + 1] = U[i][k]; - U[i][k] = t; - } - } - k++; - } - iter = 0; - p--; - break; - } - // no default - } - } - - if (swapped) { - var tmp = V; - V = U; - U = tmp; - } - - this.m = m; - this.n = n; - this.s = s; - this.U = U; - this.V = V; - } - - /** - * Solve a problem of least square (Ax=b) by using the SVD. Useful when A is singular. When A is not singular, it would be better to use qr.solve(value). 
- * Example : We search to approximate x, with A matrix shape m*n, x vector size n, b vector size m (m > n). We will use : - * var svd = SingularValueDecomposition(A); - * var x = svd.solve(b); - * @param {Matrix} value - Matrix 1D which is the vector b (in the equation Ax = b) - * @return {Matrix} - The vector x - */ - solve(value) { - var Y = value; - var e = this.threshold; - var scols = this.s.length; - var Ls = matrix_Matrix.zeros(scols, scols); - - for (let i = 0; i < scols; i++) { - if (Math.abs(this.s[i]) <= e) { - Ls[i][i] = 0; - } else { - Ls[i][i] = 1 / this.s[i]; - } - } - - var U = this.U; - var V = this.rightSingularVectors; - - var VL = V.mmul(Ls); - var vrows = V.rows; - var urows = U.length; - var VLU = matrix_Matrix.zeros(vrows, urows); - - for (let i = 0; i < vrows; i++) { - for (let j = 0; j < urows; j++) { - let sum = 0; - for (let k = 0; k < scols; k++) { - sum += VL[i][k] * U[j][k]; - } - VLU[i][j] = sum; - } - } - - return VLU.mmul(Y); - } - - /** - * - * @param {Array} value - * @return {Matrix} - */ - solveForDiagonal(value) { - return this.solve(matrix_Matrix.diag(value)); - } - - /** - * Get the inverse of the matrix. We compute the inverse of a matrix using SVD when this matrix is singular or ill-conditioned. Example : - * var svd = SingularValueDecomposition(A); - * var inverseA = svd.inverse(); - * @return {Matrix} - The approximation of the inverse of the matrix - */ - inverse() { - var V = this.V; - var e = this.threshold; - var vrows = V.length; - var vcols = V[0].length; - var X = new matrix_Matrix(vrows, this.s.length); - - for (let i = 0; i < vrows; i++) { - for (let j = 0; j < vcols; j++) { - if (Math.abs(this.s[j]) > e) { - X[i][j] = V[i][j] / this.s[j]; - } else { - X[i][j] = 0; - } - } - } - - var U = this.U; - - var urows = U.length; - var ucols = U[0].length; - var Y = new matrix_Matrix(vrows, urows); - - for (let i = 0; i < vrows; i++) { - for (let j = 0; j < urows; j++) { - let sum = 0; - for (let k = 0; k < ucols; k++) { - sum += X[i][k] * U[j][k]; - } - Y[i][j] = sum; - } - } - - return Y; - } - - /** - * - * @return {number} - */ - get condition() { - return this.s[0] / this.s[Math.min(this.m, this.n) - 1]; - } - - /** - * - * @return {number} - */ - get norm2() { - return this.s[0]; - } - - /** - * - * @return {number} - */ - get rank() { - var tol = Math.max(this.m, this.n) * this.s[0] * Number.EPSILON; - var r = 0; - var s = this.s; - for (var i = 0, ii = s.length; i < ii; i++) { - if (s[i] > tol) { - r++; - } - } - return r; - } - - /** - * - * @return {Array} - */ - get diagonal() { - return this.s; - } - - /** - * - * @return {number} - */ - get threshold() { - return Number.EPSILON / 2 * Math.max(this.m, this.n) * this.s[0]; - } - - /** - * - * @return {Matrix} - */ - get leftSingularVectors() { - if (!matrix_Matrix.isMatrix(this.U)) { - this.U = new matrix_Matrix(this.U); - } - return this.U; - } - - /** - * - * @return {Matrix} - */ - get rightSingularVectors() { - if (!matrix_Matrix.isMatrix(this.V)) { - this.V = new matrix_Matrix(this.V); - } - return this.V; - } - - /** - * - * @return {Matrix} - */ - get diagonalMatrix() { - return matrix_Matrix.diag(this.s); - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/util.js - - -/** - * @private - * Check that a row index is not out of bounds - * @param {Matrix} matrix - * @param {number} index - * @param {boolean} [outer] - */ -function checkRowIndex(matrix, index, outer) { - var max = outer ? 
matrix.rows : matrix.rows - 1; - if (index < 0 || index > max) { - throw new RangeError('Row index out of range'); - } -} - -/** - * @private - * Check that a column index is not out of bounds - * @param {Matrix} matrix - * @param {number} index - * @param {boolean} [outer] - */ -function checkColumnIndex(matrix, index, outer) { - var max = outer ? matrix.columns : matrix.columns - 1; - if (index < 0 || index > max) { - throw new RangeError('Column index out of range'); - } -} - -/** - * @private - * Check that the provided vector is an array with the right length - * @param {Matrix} matrix - * @param {Array|Matrix} vector - * @return {Array} - * @throws {RangeError} - */ -function checkRowVector(matrix, vector) { - if (vector.to1DArray) { - vector = vector.to1DArray(); - } - if (vector.length !== matrix.columns) { - throw new RangeError( - 'vector size must be the same as the number of columns' - ); - } - return vector; -} - -/** - * @private - * Check that the provided vector is an array with the right length - * @param {Matrix} matrix - * @param {Array|Matrix} vector - * @return {Array} - * @throws {RangeError} - */ -function checkColumnVector(matrix, vector) { - if (vector.to1DArray) { - vector = vector.to1DArray(); - } - if (vector.length !== matrix.rows) { - throw new RangeError('vector size must be the same as the number of rows'); - } - return vector; -} - -function checkIndices(matrix, rowIndices, columnIndices) { - return { - row: checkRowIndices(matrix, rowIndices), - column: checkColumnIndices(matrix, columnIndices) - }; -} - -function checkRowIndices(matrix, rowIndices) { - if (typeof rowIndices !== 'object') { - throw new TypeError('unexpected type for row indices'); - } - - var rowOut = rowIndices.some((r) => { - return r < 0 || r >= matrix.rows; - }); - - if (rowOut) { - throw new RangeError('row indices are out of range'); - } - - if (!Array.isArray(rowIndices)) rowIndices = Array.from(rowIndices); - - return rowIndices; -} - -function checkColumnIndices(matrix, columnIndices) { - if (typeof columnIndices !== 'object') { - throw new TypeError('unexpected type for column indices'); - } - - var columnOut = columnIndices.some((c) => { - return c < 0 || c >= matrix.columns; - }); - - if (columnOut) { - throw new RangeError('column indices are out of range'); - } - if (!Array.isArray(columnIndices)) columnIndices = Array.from(columnIndices); - - return columnIndices; -} - -function checkRange(matrix, startRow, endRow, startColumn, endColumn) { - if (arguments.length !== 5) { - throw new RangeError('expected 4 arguments'); - } - checkNumber('startRow', startRow); - checkNumber('endRow', endRow); - checkNumber('startColumn', startColumn); - checkNumber('endColumn', endColumn); - if ( - startRow > endRow || - startColumn > endColumn || - startRow < 0 || - startRow >= matrix.rows || - endRow < 0 || - endRow >= matrix.rows || - startColumn < 0 || - startColumn >= matrix.columns || - endColumn < 0 || - endColumn >= matrix.columns - ) { - throw new RangeError('Submatrix indices are out of range'); - } -} - -function getRange(from, to) { - var arr = new Array(to - from + 1); - for (var i = 0; i < arr.length; i++) { - arr[i] = from + i; - } - return arr; -} - -function sumByRow(matrix) { - var sum = matrix_Matrix.zeros(matrix.rows, 1); - for (var i = 0; i < matrix.rows; ++i) { - for (var j = 0; j < matrix.columns; ++j) { - sum.set(i, 0, sum.get(i, 0) + matrix.get(i, j)); - } - } - return sum; -} - -function sumByColumn(matrix) { - var sum = matrix_Matrix.zeros(1, matrix.columns); - for 
(var i = 0; i < matrix.rows; ++i) { - for (var j = 0; j < matrix.columns; ++j) { - sum.set(0, j, sum.get(0, j) + matrix.get(i, j)); - } - } - return sum; -} - -function sumAll(matrix) { - var v = 0; - for (var i = 0; i < matrix.rows; i++) { - for (var j = 0; j < matrix.columns; j++) { - v += matrix.get(i, j); - } - } - return v; -} - -function checkNumber(name, value) { - if (typeof value !== 'number') { - throw new TypeError(`${name} must be a number`); - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/views/base.js - - - -class base_BaseView extends AbstractMatrix() { - constructor(matrix, rows, columns) { - super(); - this.matrix = matrix; - this.rows = rows; - this.columns = columns; - } - - static get [Symbol.species]() { - return matrix_Matrix; - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/views/transpose.js - - -class transpose_MatrixTransposeView extends base_BaseView { - constructor(matrix) { - super(matrix, matrix.columns, matrix.rows); - } - - set(rowIndex, columnIndex, value) { - this.matrix.set(columnIndex, rowIndex, value); - return this; - } - - get(rowIndex, columnIndex) { - return this.matrix.get(columnIndex, rowIndex); - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/views/row.js - - -class row_MatrixRowView extends base_BaseView { - constructor(matrix, row) { - super(matrix, 1, matrix.columns); - this.row = row; - } - - set(rowIndex, columnIndex, value) { - this.matrix.set(this.row, columnIndex, value); - return this; - } - - get(rowIndex, columnIndex) { - return this.matrix.get(this.row, columnIndex); - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/views/sub.js - - - - -class sub_MatrixSubView extends base_BaseView { - constructor(matrix, startRow, endRow, startColumn, endColumn) { - checkRange(matrix, startRow, endRow, startColumn, endColumn); - super(matrix, endRow - startRow + 1, endColumn - startColumn + 1); - this.startRow = startRow; - this.startColumn = startColumn; - } - - set(rowIndex, columnIndex, value) { - this.matrix.set( - this.startRow + rowIndex, - this.startColumn + columnIndex, - value - ); - return this; - } - - get(rowIndex, columnIndex) { - return this.matrix.get( - this.startRow + rowIndex, - this.startColumn + columnIndex - ); - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/views/selection.js - - - - -class selection_MatrixSelectionView extends base_BaseView { - constructor(matrix, rowIndices, columnIndices) { - var indices = checkIndices(matrix, rowIndices, columnIndices); - super(matrix, indices.row.length, indices.column.length); - this.rowIndices = indices.row; - this.columnIndices = indices.column; - } - - set(rowIndex, columnIndex, value) { - this.matrix.set( - this.rowIndices[rowIndex], - this.columnIndices[columnIndex], - value - ); - return this; - } - - get(rowIndex, columnIndex) { - return this.matrix.get( - this.rowIndices[rowIndex], - this.columnIndices[columnIndex] - ); - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/views/rowSelection.js - - - - -class rowSelection_MatrixRowSelectionView extends base_BaseView { - constructor(matrix, rowIndices) { - rowIndices = checkRowIndices(matrix, rowIndices); - super(matrix, rowIndices.length, matrix.columns); - this.rowIndices = rowIndices; - } - - set(rowIndex, columnIndex, value) { - this.matrix.set(this.rowIndices[rowIndex], columnIndex, value); - return this; - } - - get(rowIndex, columnIndex) { - return this.matrix.get(this.rowIndices[rowIndex], columnIndex); - } -} - -// CONCATENATED MODULE: 
./node_modules/ml-matrix/src/views/columnSelection.js - - - - -class columnSelection_MatrixColumnSelectionView extends base_BaseView { - constructor(matrix, columnIndices) { - columnIndices = checkColumnIndices(matrix, columnIndices); - super(matrix, matrix.rows, columnIndices.length); - this.columnIndices = columnIndices; - } - - set(rowIndex, columnIndex, value) { - this.matrix.set(rowIndex, this.columnIndices[columnIndex], value); - return this; - } - - get(rowIndex, columnIndex) { - return this.matrix.get(rowIndex, this.columnIndices[columnIndex]); - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/views/column.js - - -class column_MatrixColumnView extends base_BaseView { - constructor(matrix, column) { - super(matrix, matrix.rows, 1); - this.column = column; - } - - set(rowIndex, columnIndex, value) { - this.matrix.set(rowIndex, this.column, value); - return this; - } - - get(rowIndex) { - return this.matrix.get(rowIndex, this.column); - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/views/flipRow.js - - -class flipRow_MatrixFlipRowView extends base_BaseView { - constructor(matrix) { - super(matrix, matrix.rows, matrix.columns); - } - - set(rowIndex, columnIndex, value) { - this.matrix.set(this.rows - rowIndex - 1, columnIndex, value); - return this; - } - - get(rowIndex, columnIndex) { - return this.matrix.get(this.rows - rowIndex - 1, columnIndex); - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/views/flipColumn.js - - -class flipColumn_MatrixFlipColumnView extends base_BaseView { - constructor(matrix) { - super(matrix, matrix.rows, matrix.columns); - } - - set(rowIndex, columnIndex, value) { - this.matrix.set(rowIndex, this.columns - columnIndex - 1, value); - return this; - } - - get(rowIndex, columnIndex) { - return this.matrix.get(rowIndex, this.columns - columnIndex - 1); - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/abstractMatrix.js - - - - - - - - - - - - - - - -function AbstractMatrix(superCtor) { - if (superCtor === undefined) superCtor = Object; - - /** - * Real matrix - * @class Matrix - * @param {number|Array|Matrix} nRows - Number of rows of the new matrix, - * 2D array containing the data or Matrix instance to clone - * @param {number} [nColumns] - Number of columns of the new matrix - */ - class Matrix extends superCtor { - static get [Symbol.species]() { - return this; - } - - /** - * Constructs a Matrix with the chosen dimensions from a 1D array - * @param {number} newRows - Number of rows - * @param {number} newColumns - Number of columns - * @param {Array} newData - A 1D array containing data for the matrix - * @return {Matrix} - The new matrix - */ - static from1DArray(newRows, newColumns, newData) { - var length = newRows * newColumns; - if (length !== newData.length) { - throw new RangeError('Data length does not match given dimensions'); - } - var newMatrix = new this(newRows, newColumns); - for (var row = 0; row < newRows; row++) { - for (var column = 0; column < newColumns; column++) { - newMatrix.set(row, column, newData[row * newColumns + column]); - } - } - return newMatrix; - } - - /** - * Creates a row vector, a matrix with only one row. - * @param {Array} newData - A 1D array containing data for the vector - * @return {Matrix} - The new matrix - */ - static rowVector(newData) { - var vector = new this(1, newData.length); - for (var i = 0; i < newData.length; i++) { - vector.set(0, i, newData[i]); - } - return vector; - } - - /** - * Creates a column vector, a matrix with only one column. 
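- * For instance, Matrix.columnVector([4, 5, 6]) produces a 3x1 matrix holding those values.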
- * @param {Array} newData - A 1D array containing data for the vector - * @return {Matrix} - The new matrix - */ - static columnVector(newData) { - var vector = new this(newData.length, 1); - for (var i = 0; i < newData.length; i++) { - vector.set(i, 0, newData[i]); - } - return vector; - } - - /** - * Creates an empty matrix with the given dimensions. Values will be undefined. Same as using new Matrix(rows, columns). - * @param {number} rows - Number of rows - * @param {number} columns - Number of columns - * @return {Matrix} - The new matrix - */ - static empty(rows, columns) { - return new this(rows, columns); - } - - /** - * Creates a matrix with the given dimensions. Values will be set to zero. - * @param {number} rows - Number of rows - * @param {number} columns - Number of columns - * @return {Matrix} - The new matrix - */ - static zeros(rows, columns) { - return this.empty(rows, columns).fill(0); - } - - /** - * Creates a matrix with the given dimensions. Values will be set to one. - * @param {number} rows - Number of rows - * @param {number} columns - Number of columns - * @return {Matrix} - The new matrix - */ - static ones(rows, columns) { - return this.empty(rows, columns).fill(1); - } - - /** - * Creates a matrix with the given dimensions. Values will be randomly set. - * @param {number} rows - Number of rows - * @param {number} columns - Number of columns - * @param {function} [rng=Math.random] - Random number generator - * @return {Matrix} The new matrix - */ - static rand(rows, columns, rng) { - if (rng === undefined) rng = Math.random; - var matrix = this.empty(rows, columns); - for (var i = 0; i < rows; i++) { - for (var j = 0; j < columns; j++) { - matrix.set(i, j, rng()); - } - } - return matrix; - } - - /** - * Creates a matrix with the given dimensions. Values will be random integers. - * @param {number} rows - Number of rows - * @param {number} columns - Number of columns - * @param {number} [maxValue=1000] - Maximum value - * @param {function} [rng=Math.random] - Random number generator - * @return {Matrix} The new matrix - */ - static randInt(rows, columns, maxValue, rng) { - if (maxValue === undefined) maxValue = 1000; - if (rng === undefined) rng = Math.random; - var matrix = this.empty(rows, columns); - for (var i = 0; i < rows; i++) { - for (var j = 0; j < columns; j++) { - var value = Math.floor(rng() * maxValue); - matrix.set(i, j, value); - } - } - return matrix; - } - - /** - * Creates an identity matrix with the given dimension. Values of the diagonal will be 1 and others will be 0. - * @param {number} rows - Number of rows - * @param {number} [columns=rows] - Number of columns - * @param {number} [value=1] - Value to fill the diagonal with - * @return {Matrix} - The new identity matrix - */ - static eye(rows, columns, value) { - if (columns === undefined) columns = rows; - if (value === undefined) value = 1; - var min = Math.min(rows, columns); - var matrix = this.zeros(rows, columns); - for (var i = 0; i < min; i++) { - matrix.set(i, i, value); - } - return matrix; - } - - /** - * Creates a diagonal matrix based on the given array. 
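- * For instance, Matrix.diag([1, 2, 3]) yields a 3x3 matrix with 1, 2 and 3 on the diagonal and zeros elsewhere.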
- * @param {Array} data - Array containing the data for the diagonal - * @param {number} [rows] - Number of rows (Default: data.length) - * @param {number} [columns] - Number of columns (Default: rows) - * @return {Matrix} - The new diagonal matrix - */ - static diag(data, rows, columns) { - var l = data.length; - if (rows === undefined) rows = l; - if (columns === undefined) columns = rows; - var min = Math.min(l, rows, columns); - var matrix = this.zeros(rows, columns); - for (var i = 0; i < min; i++) { - matrix.set(i, i, data[i]); - } - return matrix; - } - - /** - * Returns a matrix whose elements are the minimum between matrix1 and matrix2 - * @param {Matrix} matrix1 - * @param {Matrix} matrix2 - * @return {Matrix} - */ - static min(matrix1, matrix2) { - matrix1 = this.checkMatrix(matrix1); - matrix2 = this.checkMatrix(matrix2); - var rows = matrix1.rows; - var columns = matrix1.columns; - var result = new this(rows, columns); - for (var i = 0; i < rows; i++) { - for (var j = 0; j < columns; j++) { - result.set(i, j, Math.min(matrix1.get(i, j), matrix2.get(i, j))); - } - } - return result; - } - - /** - * Returns a matrix whose elements are the maximum between matrix1 and matrix2 - * @param {Matrix} matrix1 - * @param {Matrix} matrix2 - * @return {Matrix} - */ - static max(matrix1, matrix2) { - matrix1 = this.checkMatrix(matrix1); - matrix2 = this.checkMatrix(matrix2); - var rows = matrix1.rows; - var columns = matrix1.columns; - var result = new this(rows, columns); - for (var i = 0; i < rows; i++) { - for (var j = 0; j < columns; j++) { - result.set(i, j, Math.max(matrix1.get(i, j), matrix2.get(i, j))); - } - } - return result; - } - - /** - * Check that the provided value is a Matrix and tries to instantiate one if not - * @param {*} value - The value to check - * @return {Matrix} - */ - static checkMatrix(value) { - return Matrix.isMatrix(value) ? value : new this(value); - } - - /** - * Returns true if the argument is a Matrix, false otherwise - * @param {*} value - The value to check - * @return {boolean} - */ - static isMatrix(value) { - return (value != null) && (value.klass === 'Matrix'); - } - - /** - * @prop {number} size - The number of elements in the matrix. - */ - get size() { - return this.rows * this.columns; - } - - /** - * Applies a callback for each element of the matrix. The function is called in the matrix (this) context. 
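- * For instance, m.apply(function (i, j) { this.set(i, j, this.get(i, j) * 2); }) doubles every element in place.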
- * @param {function} callback - Function that will be called with two parameters : i (row) and j (column) - * @return {Matrix} this - */ - apply(callback) { - if (typeof callback !== 'function') { - throw new TypeError('callback must be a function'); - } - var ii = this.rows; - var jj = this.columns; - for (var i = 0; i < ii; i++) { - for (var j = 0; j < jj; j++) { - callback.call(this, i, j); - } - } - return this; - } - - /** - * Returns a new 1D array filled row by row with the matrix values - * @return {Array} - */ - to1DArray() { - var array = new Array(this.size); - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - array[i * this.columns + j] = this.get(i, j); - } - } - return array; - } - - /** - * Returns a 2D array containing a copy of the data - * @return {Array} - */ - to2DArray() { - var copy = new Array(this.rows); - for (var i = 0; i < this.rows; i++) { - copy[i] = new Array(this.columns); - for (var j = 0; j < this.columns; j++) { - copy[i][j] = this.get(i, j); - } - } - return copy; - } - - /** - * @return {boolean} true if the matrix has one row - */ - isRowVector() { - return this.rows === 1; - } - - /** - * @return {boolean} true if the matrix has one column - */ - isColumnVector() { - return this.columns === 1; - } - - /** - * @return {boolean} true if the matrix has one row or one column - */ - isVector() { - return (this.rows === 1) || (this.columns === 1); - } - - /** - * @return {boolean} true if the matrix has the same number of rows and columns - */ - isSquare() { - return this.rows === this.columns; - } - - /** - * @return {boolean} true if the matrix is square and has the same values on both sides of the diagonal - */ - isSymmetric() { - if (this.isSquare()) { - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j <= i; j++) { - if (this.get(i, j) !== this.get(j, i)) { - return false; - } - } - } - return true; - } - return false; - } - - /** - * Sets a given element of the matrix. mat.set(3,4,1) is equivalent to mat[3][4]=1 - * @abstract - * @param {number} rowIndex - Index of the row - * @param {number} columnIndex - Index of the column - * @param {number} value - The new value for the element - * @return {Matrix} this - */ - set(rowIndex, columnIndex, value) { // eslint-disable-line no-unused-vars - throw new Error('set method is unimplemented'); - } - - /** - * Returns the given element of the matrix. mat.get(3,4) is equivalent to matrix[3][4] - * @abstract - * @param {number} rowIndex - Index of the row - * @param {number} columnIndex - Index of the column - * @return {number} - */ - get(rowIndex, columnIndex) { // eslint-disable-line no-unused-vars - throw new Error('get method is unimplemented'); - } - - /** - * Creates a new matrix that is a repetition of the current matrix. 
New matrix has rowRep times the number of - * rows of the matrix, and colRep times the number of columns of the matrix - * @param {number} rowRep - Number of times the rows should be repeated - * @param {number} colRep - Number of times the columns should be re - * @return {Matrix} - * @example - * var matrix = new Matrix([[1,2]]); - * matrix.repeat(2); // [[1,2],[1,2]] - */ - repeat(rowRep, colRep) { - rowRep = rowRep || 1; - colRep = colRep || 1; - var matrix = new this.constructor[Symbol.species](this.rows * rowRep, this.columns * colRep); - for (var i = 0; i < rowRep; i++) { - for (var j = 0; j < colRep; j++) { - matrix.setSubMatrix(this, this.rows * i, this.columns * j); - } - } - return matrix; - } - - /** - * Fills the matrix with a given value. All elements will be set to this value. - * @param {number} value - New value - * @return {Matrix} this - */ - fill(value) { - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, value); - } - } - return this; - } - - /** - * Negates the matrix. All elements will be multiplied by (-1) - * @return {Matrix} this - */ - neg() { - return this.mulS(-1); - } - - /** - * Returns a new array from the given row index - * @param {number} index - Row index - * @return {Array} - */ - getRow(index) { - checkRowIndex(this, index); - var row = new Array(this.columns); - for (var i = 0; i < this.columns; i++) { - row[i] = this.get(index, i); - } - return row; - } - - /** - * Returns a new row vector from the given row index - * @param {number} index - Row index - * @return {Matrix} - */ - getRowVector(index) { - return this.constructor.rowVector(this.getRow(index)); - } - - /** - * Sets a row at the given index - * @param {number} index - Row index - * @param {Array|Matrix} array - Array or vector - * @return {Matrix} this - */ - setRow(index, array) { - checkRowIndex(this, index); - array = checkRowVector(this, array); - for (var i = 0; i < this.columns; i++) { - this.set(index, i, array[i]); - } - return this; - } - - /** - * Swaps two rows - * @param {number} row1 - First row index - * @param {number} row2 - Second row index - * @return {Matrix} this - */ - swapRows(row1, row2) { - checkRowIndex(this, row1); - checkRowIndex(this, row2); - for (var i = 0; i < this.columns; i++) { - var temp = this.get(row1, i); - this.set(row1, i, this.get(row2, i)); - this.set(row2, i, temp); - } - return this; - } - - /** - * Returns a new array from the given column index - * @param {number} index - Column index - * @return {Array} - */ - getColumn(index) { - checkColumnIndex(this, index); - var column = new Array(this.rows); - for (var i = 0; i < this.rows; i++) { - column[i] = this.get(i, index); - } - return column; - } - - /** - * Returns a new column vector from the given column index - * @param {number} index - Column index - * @return {Matrix} - */ - getColumnVector(index) { - return this.constructor.columnVector(this.getColumn(index)); - } - - /** - * Sets a column at the given index - * @param {number} index - Column index - * @param {Array|Matrix} array - Array or vector - * @return {Matrix} this - */ - setColumn(index, array) { - checkColumnIndex(this, index); - array = checkColumnVector(this, array); - for (var i = 0; i < this.rows; i++) { - this.set(i, index, array[i]); - } - return this; - } - - /** - * Swaps two columns - * @param {number} column1 - First column index - * @param {number} column2 - Second column index - * @return {Matrix} this - */ - swapColumns(column1, column2) { - checkColumnIndex(this, 
column1); - checkColumnIndex(this, column2); - for (var i = 0; i < this.rows; i++) { - var temp = this.get(i, column1); - this.set(i, column1, this.get(i, column2)); - this.set(i, column2, temp); - } - return this; - } - - /** - * Adds the values of a vector to each row - * @param {Array|Matrix} vector - Array or vector - * @return {Matrix} this - */ - addRowVector(vector) { - vector = checkRowVector(this, vector); - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, this.get(i, j) + vector[j]); - } - } - return this; - } - - /** - * Subtracts the values of a vector from each row - * @param {Array|Matrix} vector - Array or vector - * @return {Matrix} this - */ - subRowVector(vector) { - vector = checkRowVector(this, vector); - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, this.get(i, j) - vector[j]); - } - } - return this; - } - - /** - * Multiplies the values of a vector with each row - * @param {Array|Matrix} vector - Array or vector - * @return {Matrix} this - */ - mulRowVector(vector) { - vector = checkRowVector(this, vector); - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, this.get(i, j) * vector[j]); - } - } - return this; - } - - /** - * Divides the values of each row by those of a vector - * @param {Array|Matrix} vector - Array or vector - * @return {Matrix} this - */ - divRowVector(vector) { - vector = checkRowVector(this, vector); - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, this.get(i, j) / vector[j]); - } - } - return this; - } - - /** - * Adds the values of a vector to each column - * @param {Array|Matrix} vector - Array or vector - * @return {Matrix} this - */ - addColumnVector(vector) { - vector = checkColumnVector(this, vector); - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, this.get(i, j) + vector[i]); - } - } - return this; - } - - /** - * Subtracts the values of a vector from each column - * @param {Array|Matrix} vector - Array or vector - * @return {Matrix} this - */ - subColumnVector(vector) { - vector = checkColumnVector(this, vector); - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, this.get(i, j) - vector[i]); - } - } - return this; - } - - /** - * Multiplies the values of a vector with each column - * @param {Array|Matrix} vector - Array or vector - * @return {Matrix} this - */ - mulColumnVector(vector) { - vector = checkColumnVector(this, vector); - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, this.get(i, j) * vector[i]); - } - } - return this; - } - - /** - * Divides the values of each column by those of a vector - * @param {Array|Matrix} vector - Array or vector - * @return {Matrix} this - */ - divColumnVector(vector) { - vector = checkColumnVector(this, vector); - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, this.get(i, j) / vector[i]); - } - } - return this; - } - - /** - * Multiplies the values of a row with a scalar - * @param {number} index - Row index - * @param {number} value - * @return {Matrix} this - */ - mulRow(index, value) { - checkRowIndex(this, index); - for (var i = 0; i < this.columns; i++) { - this.set(index, i, this.get(index, i) * value); - } - return this; - } - - /** - * Multiplies the values of a column with a scalar - * @param 
{number} index - Column index - * @param {number} value - * @return {Matrix} this - */ - mulColumn(index, value) { - checkColumnIndex(this, index); - for (var i = 0; i < this.rows; i++) { - this.set(i, index, this.get(i, index) * value); - } - return this; - } - - /** - * Returns the maximum value of the matrix - * @return {number} - */ - max() { - var v = this.get(0, 0); - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - if (this.get(i, j) > v) { - v = this.get(i, j); - } - } - } - return v; - } - - /** - * Returns the index of the maximum value - * @return {Array} - */ - maxIndex() { - var v = this.get(0, 0); - var idx = [0, 0]; - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - if (this.get(i, j) > v) { - v = this.get(i, j); - idx[0] = i; - idx[1] = j; - } - } - } - return idx; - } - - /** - * Returns the minimum value of the matrix - * @return {number} - */ - min() { - var v = this.get(0, 0); - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - if (this.get(i, j) < v) { - v = this.get(i, j); - } - } - } - return v; - } - - /** - * Returns the index of the minimum value - * @return {Array} - */ - minIndex() { - var v = this.get(0, 0); - var idx = [0, 0]; - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - if (this.get(i, j) < v) { - v = this.get(i, j); - idx[0] = i; - idx[1] = j; - } - } - } - return idx; - } - - /** - * Returns the maximum value of one row - * @param {number} row - Row index - * @return {number} - */ - maxRow(row) { - checkRowIndex(this, row); - var v = this.get(row, 0); - for (var i = 1; i < this.columns; i++) { - if (this.get(row, i) > v) { - v = this.get(row, i); - } - } - return v; - } - - /** - * Returns the index of the maximum value of one row - * @param {number} row - Row index - * @return {Array} - */ - maxRowIndex(row) { - checkRowIndex(this, row); - var v = this.get(row, 0); - var idx = [row, 0]; - for (var i = 1; i < this.columns; i++) { - if (this.get(row, i) > v) { - v = this.get(row, i); - idx[1] = i; - } - } - return idx; - } - - /** - * Returns the minimum value of one row - * @param {number} row - Row index - * @return {number} - */ - minRow(row) { - checkRowIndex(this, row); - var v = this.get(row, 0); - for (var i = 1; i < this.columns; i++) { - if (this.get(row, i) < v) { - v = this.get(row, i); - } - } - return v; - } - - /** - * Returns the index of the maximum value of one row - * @param {number} row - Row index - * @return {Array} - */ - minRowIndex(row) { - checkRowIndex(this, row); - var v = this.get(row, 0); - var idx = [row, 0]; - for (var i = 1; i < this.columns; i++) { - if (this.get(row, i) < v) { - v = this.get(row, i); - idx[1] = i; - } - } - return idx; - } - - /** - * Returns the maximum value of one column - * @param {number} column - Column index - * @return {number} - */ - maxColumn(column) { - checkColumnIndex(this, column); - var v = this.get(0, column); - for (var i = 1; i < this.rows; i++) { - if (this.get(i, column) > v) { - v = this.get(i, column); - } - } - return v; - } - - /** - * Returns the index of the maximum value of one column - * @param {number} column - Column index - * @return {Array} - */ - maxColumnIndex(column) { - checkColumnIndex(this, column); - var v = this.get(0, column); - var idx = [0, column]; - for (var i = 1; i < this.rows; i++) { - if (this.get(i, column) > v) { - v = this.get(i, column); - idx[0] = i; - } - } - return idx; - } - - /** - * Returns the minimum value of 
one column - * @param {number} column - Column index - * @return {number} - */ - minColumn(column) { - checkColumnIndex(this, column); - var v = this.get(0, column); - for (var i = 1; i < this.rows; i++) { - if (this.get(i, column) < v) { - v = this.get(i, column); - } - } - return v; - } - - /** - * Returns the index of the minimum value of one column - * @param {number} column - Column index - * @return {Array} - */ - minColumnIndex(column) { - checkColumnIndex(this, column); - var v = this.get(0, column); - var idx = [0, column]; - for (var i = 1; i < this.rows; i++) { - if (this.get(i, column) < v) { - v = this.get(i, column); - idx[0] = i; - } - } - return idx; - } - - /** - * Returns an array containing the diagonal values of the matrix - * @return {Array} - */ - diag() { - var min = Math.min(this.rows, this.columns); - var diag = new Array(min); - for (var i = 0; i < min; i++) { - diag[i] = this.get(i, i); - } - return diag; - } - - /** - * Returns the sum by the argument given, if no argument given, - * it returns the sum of all elements of the matrix. - * @param {string} by - sum by 'row' or 'column'. - * @return {Matrix|number} - */ - sum(by) { - switch (by) { - case 'row': - return sumByRow(this); - case 'column': - return sumByColumn(this); - default: - return sumAll(this); - } - } - - /** - * Returns the mean of all elements of the matrix - * @return {number} - */ - mean() { - return this.sum() / this.size; - } - - /** - * Returns the product of all elements of the matrix - * @return {number} - */ - prod() { - var prod = 1; - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - prod *= this.get(i, j); - } - } - return prod; - } - - /** - * Returns the norm of a matrix. - * @param {string} type - "frobenius" (default) or "max" return resp. the Frobenius norm and the max norm. 
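- * For instance, norm() of [[3, 4]] is 5 (the square root of 3*3 + 4*4), while norm('max') as implemented returns the largest element, here 4.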
- * @return {number} - */ - norm(type = 'frobenius') { - var result = 0; - if (type === 'max') { - return this.max(); - } else if (type === 'frobenius') { - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - result = result + this.get(i, j) * this.get(i, j); - } - } - return Math.sqrt(result); - } else { - throw new RangeError(`unknown norm type: ${type}`); - } - } - - /** - * Computes the cumulative sum of the matrix elements (in place, row by row) - * @return {Matrix} this - */ - cumulativeSum() { - var sum = 0; - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - sum += this.get(i, j); - this.set(i, j, sum); - } - } - return this; - } - - /** - * Computes the dot (scalar) product between the matrix and another - * @param {Matrix} vector2 vector - * @return {number} - */ - dot(vector2) { - if (Matrix.isMatrix(vector2)) vector2 = vector2.to1DArray(); - var vector1 = this.to1DArray(); - if (vector1.length !== vector2.length) { - throw new RangeError('vectors do not have the same size'); - } - var dot = 0; - for (var i = 0; i < vector1.length; i++) { - dot += vector1[i] * vector2[i]; - } - return dot; - } - - /** - * Returns the matrix product between this and other - * @param {Matrix} other - * @return {Matrix} - */ - mmul(other) { - other = this.constructor.checkMatrix(other); - if (this.columns !== other.rows) { - // eslint-disable-next-line no-console - console.warn('Number of columns of left matrix are not equal to number of rows of right matrix.'); - } - - var m = this.rows; - var n = this.columns; - var p = other.columns; - - var result = new this.constructor[Symbol.species](m, p); - - var Bcolj = new Array(n); - for (var j = 0; j < p; j++) { - for (var k = 0; k < n; k++) { - Bcolj[k] = other.get(k, j); - } - - for (var i = 0; i < m; i++) { - var s = 0; - for (k = 0; k < n; k++) { - s += this.get(i, k) * Bcolj[k]; - } - - result.set(i, j, s); - } - } - return result; - } - - strassen2x2(other) { - var result = new this.constructor[Symbol.species](2, 2); - const a11 = this.get(0, 0); - const b11 = other.get(0, 0); - const a12 = this.get(0, 1); - const b12 = other.get(0, 1); - const a21 = this.get(1, 0); - const b21 = other.get(1, 0); - const a22 = this.get(1, 1); - const b22 = other.get(1, 1); - - // Compute intermediate values. - const m1 = (a11 + a22) * (b11 + b22); - const m2 = (a21 + a22) * b11; - const m3 = a11 * (b12 - b22); - const m4 = a22 * (b21 - b11); - const m5 = (a11 + a12) * b22; - const m6 = (a21 - a11) * (b11 + b12); - const m7 = (a12 - a22) * (b21 + b22); - - // Combine intermediate values into the output. 
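- // (Only the seven products m1..m7 are needed instead of the naive eight; this is Strassen's identity for 2x2 blocks.)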
- const c00 = m1 + m4 - m5 + m7; - const c01 = m3 + m5; - const c10 = m2 + m4; - const c11 = m1 - m2 + m3 + m6; - - result.set(0, 0, c00); - result.set(0, 1, c01); - result.set(1, 0, c10); - result.set(1, 1, c11); - return result; - } - - strassen3x3(other) { - var result = new this.constructor[Symbol.species](3, 3); - - const a00 = this.get(0, 0); - const a01 = this.get(0, 1); - const a02 = this.get(0, 2); - const a10 = this.get(1, 0); - const a11 = this.get(1, 1); - const a12 = this.get(1, 2); - const a20 = this.get(2, 0); - const a21 = this.get(2, 1); - const a22 = this.get(2, 2); - - const b00 = other.get(0, 0); - const b01 = other.get(0, 1); - const b02 = other.get(0, 2); - const b10 = other.get(1, 0); - const b11 = other.get(1, 1); - const b12 = other.get(1, 2); - const b20 = other.get(2, 0); - const b21 = other.get(2, 1); - const b22 = other.get(2, 2); - - const m1 = (a00 + a01 + a02 - a10 - a11 - a21 - a22) * b11; - const m2 = (a00 - a10) * (-b01 + b11); - const m3 = a11 * (-b00 + b01 + b10 - b11 - b12 - b20 + b22); - const m4 = (-a00 + a10 + a11) * (b00 - b01 + b11); - const m5 = (a10 + a11) * (-b00 + b01); - const m6 = a00 * b00; - const m7 = (-a00 + a20 + a21) * (b00 - b02 + b12); - const m8 = (-a00 + a20) * (b02 - b12); - const m9 = (a20 + a21) * (-b00 + b02); - const m10 = (a00 + a01 + a02 - a11 - a12 - a20 - a21) * b12; - const m11 = a21 * (-b00 + b02 + b10 - b11 - b12 - b20 + b21); - const m12 = (-a02 + a21 + a22) * (b11 + b20 - b21); - const m13 = (a02 - a22) * (b11 - b21); - const m14 = a02 * b20; - const m15 = (a21 + a22) * (-b20 + b21); - const m16 = (-a02 + a11 + a12) * (b12 + b20 - b22); - const m17 = (a02 - a12) * (b12 - b22); - const m18 = (a11 + a12) * (-b20 + b22); - const m19 = a01 * b10; - const m20 = a12 * b21; - const m21 = a10 * b02; - const m22 = a20 * b01; - const m23 = a22 * b22; - - const c00 = m6 + m14 + m19; - const c01 = m1 + m4 + m5 + m6 + m12 + m14 + m15; - const c02 = m6 + m7 + m9 + m10 + m14 + m16 + m18; - const c10 = m2 + m3 + m4 + m6 + m14 + m16 + m17; - const c11 = m2 + m4 + m5 + m6 + m20; - const c12 = m14 + m16 + m17 + m18 + m21; - const c20 = m6 + m7 + m8 + m11 + m12 + m13 + m14; - const c21 = m12 + m13 + m14 + m15 + m22; - const c22 = m6 + m7 + m8 + m9 + m23; - - result.set(0, 0, c00); - result.set(0, 1, c01); - result.set(0, 2, c02); - result.set(1, 0, c10); - result.set(1, 1, c11); - result.set(1, 2, c12); - result.set(2, 0, c20); - result.set(2, 1, c21); - result.set(2, 2, c22); - return result; - } - - /** - * Returns the matrix product between x and y. More efficient than mmul(other) only when we multiply squared matrix and when the size of the matrix is > 1000. - * @param {Matrix} y - * @return {Matrix} - */ - mmulStrassen(y) { - var x = this.clone(); - var r1 = x.rows; - var c1 = x.columns; - var r2 = y.rows; - var c2 = y.columns; - if (c1 !== r2) { - // eslint-disable-next-line no-console - console.warn(`Multiplying ${r1} x ${c1} and ${r2} x ${c2} matrix: dimensions do not match.`); - } - - // Put a matrix into the top left of a matrix of zeros. - // `rows` and `cols` are the dimensions of the output matrix. - function embed(mat, rows, cols) { - var r = mat.rows; - var c = mat.columns; - if ((r === rows) && (c === cols)) { - return mat; - } else { - var resultat = Matrix.zeros(rows, cols); - resultat = resultat.setSubMatrix(mat, 0, 0); - return resultat; - } - } - - - // Make sure both matrices are the same size. - // This is exclusively for simplicity: - // this algorithm can be implemented with matrices of different sizes. 
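- // For instance, a 3x3 operand embedded via embed(a, 4, 4) keeps its values in the
- // top-left corner and zero-fills the added row and column, so each recursion level
- // works on even dimensions.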
- - var r = Math.max(r1, r2); - var c = Math.max(c1, c2); - x = embed(x, r, c); - y = embed(y, r, c); - - // Our recursive multiplication function. - function blockMult(a, b, rows, cols) { - // For small matrices, resort to naive multiplication. - if (rows <= 512 || cols <= 512) { - return a.mmul(b); // a is equivalent to this - } - - // Apply dynamic padding. - if ((rows % 2 === 1) && (cols % 2 === 1)) { - a = embed(a, rows + 1, cols + 1); - b = embed(b, rows + 1, cols + 1); - } else if (rows % 2 === 1) { - a = embed(a, rows + 1, cols); - b = embed(b, rows + 1, cols); - } else if (cols % 2 === 1) { - a = embed(a, rows, cols + 1); - b = embed(b, rows, cols + 1); - } - - var halfRows = parseInt(a.rows / 2, 10); - var halfCols = parseInt(a.columns / 2, 10); - // Subdivide input matrices. - var a11 = a.subMatrix(0, halfRows - 1, 0, halfCols - 1); - var b11 = b.subMatrix(0, halfRows - 1, 0, halfCols - 1); - - var a12 = a.subMatrix(0, halfRows - 1, halfCols, a.columns - 1); - var b12 = b.subMatrix(0, halfRows - 1, halfCols, b.columns - 1); - - var a21 = a.subMatrix(halfRows, a.rows - 1, 0, halfCols - 1); - var b21 = b.subMatrix(halfRows, b.rows - 1, 0, halfCols - 1); - - var a22 = a.subMatrix(halfRows, a.rows - 1, halfCols, a.columns - 1); - var b22 = b.subMatrix(halfRows, b.rows - 1, halfCols, b.columns - 1); - - // Compute intermediate values. - var m1 = blockMult(Matrix.add(a11, a22), Matrix.add(b11, b22), halfRows, halfCols); - var m2 = blockMult(Matrix.add(a21, a22), b11, halfRows, halfCols); - var m3 = blockMult(a11, Matrix.sub(b12, b22), halfRows, halfCols); - var m4 = blockMult(a22, Matrix.sub(b21, b11), halfRows, halfCols); - var m5 = blockMult(Matrix.add(a11, a12), b22, halfRows, halfCols); - var m6 = blockMult(Matrix.sub(a21, a11), Matrix.add(b11, b12), halfRows, halfCols); - var m7 = blockMult(Matrix.sub(a12, a22), Matrix.add(b21, b22), halfRows, halfCols); - - // Combine intermediate values into the output. - var c11 = Matrix.add(m1, m4); - c11.sub(m5); - c11.add(m7); - var c12 = Matrix.add(m3, m5); - var c21 = Matrix.add(m2, m4); - var c22 = Matrix.sub(m1, m2); - c22.add(m3); - c22.add(m6); - - // Crop output to the desired size (undo dynamic padding). - var resultat = Matrix.zeros(2 * c11.rows, 2 * c11.columns); - resultat = resultat.setSubMatrix(c11, 0, 0); - resultat = resultat.setSubMatrix(c12, c11.rows, 0); - resultat = resultat.setSubMatrix(c21, 0, c11.columns); - resultat = resultat.setSubMatrix(c22, c11.rows, c11.columns); - return resultat.subMatrix(0, rows - 1, 0, cols - 1); - } - return blockMult(x, y, r, c); - } - - /** - * Returns a row-by-row scaled matrix - * @param {number} [min=0] - Minimum scaled value - * @param {number} [max=1] - Maximum scaled value - * @return {Matrix} - The scaled matrix - */ - scaleRows(min, max) { - min = min === undefined ? 0 : min; - max = max === undefined ? 
1 : max; - if (min >= max) { - throw new RangeError('min should be strictly smaller than max'); - } - var newMatrix = this.constructor.empty(this.rows, this.columns); - for (var i = 0; i < this.rows; i++) { - var scaled = ml_array_rescale_lib_es6(this.getRow(i), { min, max }); - newMatrix.setRow(i, scaled); - } - return newMatrix; - } - - /** - * Returns a new column-by-column scaled matrix - * @param {number} [min=0] - Minimum scaled value - * @param {number} [max=1] - Maximum scaled value - * @return {Matrix} - The new scaled matrix - * @example - * var matrix = new Matrix([[1,2],[-1,0]]); - * var scaledMatrix = matrix.scaleColumns(); // [[1,1],[0,0]] - */ - scaleColumns(min, max) { - min = min === undefined ? 0 : min; - max = max === undefined ? 1 : max; - if (min >= max) { - throw new RangeError('min should be strictly smaller than max'); - } - var newMatrix = this.constructor.empty(this.rows, this.columns); - for (var i = 0; i < this.columns; i++) { - var scaled = ml_array_rescale_lib_es6(this.getColumn(i), { - min: min, - max: max - }); - newMatrix.setColumn(i, scaled); - } - return newMatrix; - } - - - /** - * Returns the Kronecker product (also known as tensor product) between this and other - * See https://en.wikipedia.org/wiki/Kronecker_product - * @param {Matrix} other - * @return {Matrix} - */ - kroneckerProduct(other) { - other = this.constructor.checkMatrix(other); - - var m = this.rows; - var n = this.columns; - var p = other.rows; - var q = other.columns; - - var result = new this.constructor[Symbol.species](m * p, n * q); - for (var i = 0; i < m; i++) { - for (var j = 0; j < n; j++) { - for (var k = 0; k < p; k++) { - for (var l = 0; l < q; l++) { - result[p * i + k][q * j + l] = this.get(i, j) * other.get(k, l); - } - } - } - } - return result; - } - - /** - * Transposes the matrix and returns a new one containing the result - * @return {Matrix} - */ - transpose() { - var result = new this.constructor[Symbol.species](this.columns, this.rows); - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - result.set(j, i, this.get(i, j)); - } - } - return result; - } - - /** - * Sorts the rows (in place) - * @param {function} compareFunction - usual Array.prototype.sort comparison function - * @return {Matrix} this - */ - sortRows(compareFunction) { - if (compareFunction === undefined) compareFunction = compareNumbers; - for (var i = 0; i < this.rows; i++) { - this.setRow(i, this.getRow(i).sort(compareFunction)); - } - return this; - } - - /** - * Sorts the columns (in place) - * @param {function} compareFunction - usual Array.prototype.sort comparison function - * @return {Matrix} this - */ - sortColumns(compareFunction) { - if (compareFunction === undefined) compareFunction = compareNumbers; - for (var i = 0; i < this.columns; i++) { - this.setColumn(i, this.getColumn(i).sort(compareFunction)); - } - return this; - } - - /** - * Returns a subset of the matrix - * @param {number} startRow - First row index - * @param {number} endRow - Last row index - * @param {number} startColumn - First column index - * @param {number} endColumn - Last column index - * @return {Matrix} - */ - subMatrix(startRow, endRow, startColumn, endColumn) { - checkRange(this, startRow, endRow, startColumn, endColumn); - var newMatrix = new this.constructor[Symbol.species](endRow - startRow + 1, endColumn - startColumn + 1); - for (var i = startRow; i <= endRow; i++) { - for (var j = startColumn; j <= endColumn; j++) { - newMatrix[i - startRow][j - startColumn] = this.get(i, 
j); - } - } - return newMatrix; - } - - /** - * Returns a subset of the matrix based on an array of row indices - * @param {Array} indices - Array containing the row indices - * @param {number} [startColumn = 0] - First column index - * @param {number} [endColumn = this.columns-1] - Last column index - * @return {Matrix} - */ - subMatrixRow(indices, startColumn, endColumn) { - if (startColumn === undefined) startColumn = 0; - if (endColumn === undefined) endColumn = this.columns - 1; - if ((startColumn > endColumn) || (startColumn < 0) || (startColumn >= this.columns) || (endColumn < 0) || (endColumn >= this.columns)) { - throw new RangeError('Argument out of range'); - } - - var newMatrix = new this.constructor[Symbol.species](indices.length, endColumn - startColumn + 1); - for (var i = 0; i < indices.length; i++) { - for (var j = startColumn; j <= endColumn; j++) { - if (indices[i] < 0 || indices[i] >= this.rows) { - throw new RangeError(`Row index out of range: ${indices[i]}`); - } - newMatrix.set(i, j - startColumn, this.get(indices[i], j)); - } - } - return newMatrix; - } - - /** - * Returns a subset of the matrix based on an array of column indices - * @param {Array} indices - Array containing the column indices - * @param {number} [startRow = 0] - First row index - * @param {number} [endRow = this.rows-1] - Last row index - * @return {Matrix} - */ - subMatrixColumn(indices, startRow, endRow) { - if (startRow === undefined) startRow = 0; - if (endRow === undefined) endRow = this.rows - 1; - if ((startRow > endRow) || (startRow < 0) || (startRow >= this.rows) || (endRow < 0) || (endRow >= this.rows)) { - throw new RangeError('Argument out of range'); - } - - var newMatrix = new this.constructor[Symbol.species](endRow - startRow + 1, indices.length); - for (var i = 0; i < indices.length; i++) { - for (var j = startRow; j <= endRow; j++) { - if (indices[i] < 0 || indices[i] >= this.columns) { - throw new RangeError(`Column index out of range: ${indices[i]}`); - } - newMatrix.set(j - startRow, i, this.get(j, indices[i])); - } - } - return newMatrix; - } - - /** - * Set a part of the matrix to the given sub-matrix - * @param {Matrix|Array< Array >} matrix - The source matrix from which to extract values. - * @param {number} startRow - The index of the first row to set - * @param {number} startColumn - The index of the first column to set - * @return {Matrix} - */ - setSubMatrix(matrix, startRow, startColumn) { - matrix = this.constructor.checkMatrix(matrix); - var endRow = startRow + matrix.rows - 1; - var endColumn = startColumn + matrix.columns - 1; - checkRange(this, startRow, endRow, startColumn, endColumn); - for (var i = 0; i < matrix.rows; i++) { - for (var j = 0; j < matrix.columns; j++) { - this[startRow + i][startColumn + j] = matrix.get(i, j); - } - } - return this; - } - - /** - * Return a new matrix based on a selection of rows and columns - * @param {Array} rowIndices - The row indices to select. Order matters and an index can be more than once. - * @param {Array} columnIndices - The column indices to select. Order matters and an index can be use more than once. 
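- * For instance, new Matrix([[1, 2, 3], [4, 5, 6]]).selection([0, 0], [1]) returns [[2], [2]].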
- * @return {Matrix} The new matrix - */ - selection(rowIndices, columnIndices) { - var indices = checkIndices(this, rowIndices, columnIndices); - var newMatrix = new this.constructor[Symbol.species](rowIndices.length, columnIndices.length); - for (var i = 0; i < indices.row.length; i++) { - var rowIndex = indices.row[i]; - for (var j = 0; j < indices.column.length; j++) { - var columnIndex = indices.column[j]; - newMatrix[i][j] = this.get(rowIndex, columnIndex); - } - } - return newMatrix; - } - - /** - * Returns the trace of the matrix (sum of the diagonal elements) - * @return {number} - */ - trace() { - var min = Math.min(this.rows, this.columns); - var trace = 0; - for (var i = 0; i < min; i++) { - trace += this.get(i, i); - } - return trace; - } - - /* - Matrix views - */ - - /** - * Returns a view of the transposition of the matrix - * @return {MatrixTransposeView} - */ - transposeView() { - return new transpose_MatrixTransposeView(this); - } - - /** - * Returns a view of the row vector with the given index - * @param {number} row - row index of the vector - * @return {MatrixRowView} - */ - rowView(row) { - checkRowIndex(this, row); - return new row_MatrixRowView(this, row); - } - - /** - * Returns a view of the column vector with the given index - * @param {number} column - column index of the vector - * @return {MatrixColumnView} - */ - columnView(column) { - checkColumnIndex(this, column); - return new column_MatrixColumnView(this, column); - } - - /** - * Returns a view of the matrix flipped in the row axis - * @return {MatrixFlipRowView} - */ - flipRowView() { - return new flipRow_MatrixFlipRowView(this); - } - - /** - * Returns a view of the matrix flipped in the column axis - * @return {MatrixFlipColumnView} - */ - flipColumnView() { - return new flipColumn_MatrixFlipColumnView(this); - } - - /** - * Returns a view of a submatrix giving the index boundaries - * @param {number} startRow - first row index of the submatrix - * @param {number} endRow - last row index of the submatrix - * @param {number} startColumn - first column index of the submatrix - * @param {number} endColumn - last column index of the submatrix - * @return {MatrixSubView} - */ - subMatrixView(startRow, endRow, startColumn, endColumn) { - return new sub_MatrixSubView(this, startRow, endRow, startColumn, endColumn); - } - - /** - * Returns a view of the cross of the row indices and the column indices - * @example - * // resulting vector is [[2], [2]] - * var matrix = new Matrix([[1,2,3], [4,5,6]]).selectionView([0, 0], [1]) - * @param {Array} rowIndices - * @param {Array} columnIndices - * @return {MatrixSelectionView} - */ - selectionView(rowIndices, columnIndices) { - return new selection_MatrixSelectionView(this, rowIndices, columnIndices); - } - - /** - * Returns a view of the row indices - * @example - * // resulting vector is [[1,2,3], [1,2,3]] - * var matrix = new Matrix([[1,2,3], [4,5,6]]).rowSelectionView([0, 0]) - * @param {Array} rowIndices - * @return {MatrixRowSelectionView} - */ - rowSelectionView(rowIndices) { - return new rowSelection_MatrixRowSelectionView(this, rowIndices); - } - - /** - * Returns a view of the column indices - * @example - * // resulting vector is [[2, 2], [5, 5]] - * var matrix = new Matrix([[1,2,3], [4,5,6]]).columnSelectionView([1, 1]) - * @param {Array} columnIndices - * @return {MatrixColumnSelectionView} - */ - columnSelectionView(columnIndices) { - return new columnSelection_MatrixColumnSelectionView(this, columnIndices); - } - - - /** - * Calculates and returns the 
determinant of a matrix as a Number - * @example - * new Matrix([[1,2,3], [4,5,6]]).det() - * @return {number} - */ - det() { - if (this.isSquare()) { - var a, b, c, d; - if (this.columns === 2) { - // 2 x 2 matrix - a = this.get(0, 0); - b = this.get(0, 1); - c = this.get(1, 0); - d = this.get(1, 1); - - return a * d - (b * c); - } else if (this.columns === 3) { - // 3 x 3 matrix - var subMatrix0, subMatrix1, subMatrix2; - subMatrix0 = this.selectionView([1, 2], [1, 2]); - subMatrix1 = this.selectionView([1, 2], [0, 2]); - subMatrix2 = this.selectionView([1, 2], [0, 1]); - a = this.get(0, 0); - b = this.get(0, 1); - c = this.get(0, 2); - - return a * subMatrix0.det() - b * subMatrix1.det() + c * subMatrix2.det(); - } else { - // general purpose determinant using the LU decomposition - return new lu_LuDecomposition(this).determinant; - } - } else { - throw Error('Determinant can only be calculated for a square matrix.'); - } - } - - /** - * Returns inverse of a matrix if it exists or the pseudoinverse - * @param {number} threshold - threshold for taking inverse of singular values (default = 1e-15) - * @return {Matrix} the (pseudo)inverted matrix. - */ - pseudoInverse(threshold) { - if (threshold === undefined) threshold = Number.EPSILON; - var svdSolution = new svd_SingularValueDecomposition(this, { autoTranspose: true }); - - var U = svdSolution.leftSingularVectors; - var V = svdSolution.rightSingularVectors; - var s = svdSolution.diagonal; - - for (var i = 0; i < s.length; i++) { - if (Math.abs(s[i]) > threshold) { - s[i] = 1.0 / s[i]; - } else { - s[i] = 0.0; - } - } - - // convert list to diagonal - s = this.constructor[Symbol.species].diag(s); - return V.mmul(s.mmul(U.transposeView())); - } - - /** - * Creates an exact and independent copy of the matrix - * @return {Matrix} - */ - clone() { - var newMatrix = new this.constructor[Symbol.species](this.rows, this.columns); - for (var row = 0; row < this.rows; row++) { - for (var column = 0; column < this.columns; column++) { - newMatrix.set(row, column, this.get(row, column)); - } - } - return newMatrix; - } - } - - Matrix.prototype.klass = 'Matrix'; - - function compareNumbers(a, b) { - return a - b; - } - - /* - Synonyms - */ - - Matrix.random = Matrix.rand; - Matrix.diagonal = Matrix.diag; - Matrix.prototype.diagonal = Matrix.prototype.diag; - Matrix.identity = Matrix.eye; - Matrix.prototype.negate = Matrix.prototype.neg; - Matrix.prototype.tensorProduct = Matrix.prototype.kroneckerProduct; - Matrix.prototype.determinant = Matrix.prototype.det; - - /* - Add dynamically instance and static methods for mathematical operations - */ - - var inplaceOperator = ` -(function %name%(value) { - if (typeof value === 'number') return this.%name%S(value); - return this.%name%M(value); -}) -`; - - var inplaceOperatorScalar = ` -(function %name%S(value) { - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, this.get(i, j) %op% value); - } - } - return this; -}) -`; - - var inplaceOperatorMatrix = ` -(function %name%M(matrix) { - matrix = this.constructor.checkMatrix(matrix); - if (this.rows !== matrix.rows || - this.columns !== matrix.columns) { - throw new RangeError('Matrices dimensions must be equal'); - } - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, this.get(i, j) %op% matrix.get(i, j)); - } - } - return this; -}) -`; - - var staticOperator = ` -(function %name%(matrix, value) { - var newMatrix = new this[Symbol.species](matrix); - return 
newMatrix.%name%(value); -}) -`; - - var inplaceMethod = ` -(function %name%() { - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, %method%(this.get(i, j))); - } - } - return this; -}) -`; - - var staticMethod = ` -(function %name%(matrix) { - var newMatrix = new this[Symbol.species](matrix); - return newMatrix.%name%(); -}) -`; - - var inplaceMethodWithArgs = ` -(function %name%(%args%) { - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, %method%(this.get(i, j), %args%)); - } - } - return this; -}) -`; - - var staticMethodWithArgs = ` -(function %name%(matrix, %args%) { - var newMatrix = new this[Symbol.species](matrix); - return newMatrix.%name%(%args%); -}) -`; - - - var inplaceMethodWithOneArgScalar = ` -(function %name%S(value) { - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, %method%(this.get(i, j), value)); - } - } - return this; -}) -`; - var inplaceMethodWithOneArgMatrix = ` -(function %name%M(matrix) { - matrix = this.constructor.checkMatrix(matrix); - if (this.rows !== matrix.rows || - this.columns !== matrix.columns) { - throw new RangeError('Matrices dimensions must be equal'); - } - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, %method%(this.get(i, j), matrix.get(i, j))); - } - } - return this; -}) -`; - - var inplaceMethodWithOneArg = ` -(function %name%(value) { - if (typeof value === 'number') return this.%name%S(value); - return this.%name%M(value); -}) -`; - - var staticMethodWithOneArg = staticMethodWithArgs; - - var operators = [ - // Arithmetic operators - ['+', 'add'], - ['-', 'sub', 'subtract'], - ['*', 'mul', 'multiply'], - ['/', 'div', 'divide'], - ['%', 'mod', 'modulus'], - // Bitwise operators - ['&', 'and'], - ['|', 'or'], - ['^', 'xor'], - ['<<', 'leftShift'], - ['>>', 'signPropagatingRightShift'], - ['>>>', 'rightShift', 'zeroFillRightShift'] - ]; - - var i; - var eval2 = eval; // eslint-disable-line no-eval - for (var operator of operators) { - var inplaceOp = eval2(fillTemplateFunction(inplaceOperator, { name: operator[1], op: operator[0] })); - var inplaceOpS = eval2(fillTemplateFunction(inplaceOperatorScalar, { name: `${operator[1]}S`, op: operator[0] })); - var inplaceOpM = eval2(fillTemplateFunction(inplaceOperatorMatrix, { name: `${operator[1]}M`, op: operator[0] })); - var staticOp = eval2(fillTemplateFunction(staticOperator, { name: operator[1] })); - for (i = 1; i < operator.length; i++) { - Matrix.prototype[operator[i]] = inplaceOp; - Matrix.prototype[`${operator[i]}S`] = inplaceOpS; - Matrix.prototype[`${operator[i]}M`] = inplaceOpM; - Matrix[operator[i]] = staticOp; - } - } - - var methods = [['~', 'not']]; - - [ - 'abs', 'acos', 'acosh', 'asin', 'asinh', 'atan', 'atanh', 'cbrt', 'ceil', - 'clz32', 'cos', 'cosh', 'exp', 'expm1', 'floor', 'fround', 'log', 'log1p', - 'log10', 'log2', 'round', 'sign', 'sin', 'sinh', 'sqrt', 'tan', 'tanh', 'trunc' - ].forEach(function (mathMethod) { - methods.push([`Math.${mathMethod}`, mathMethod]); - }); - - for (var method of methods) { - var inplaceMeth = eval2(fillTemplateFunction(inplaceMethod, { name: method[1], method: method[0] })); - var staticMeth = eval2(fillTemplateFunction(staticMethod, { name: method[1] })); - for (i = 1; i < method.length; i++) { - Matrix.prototype[method[i]] = inplaceMeth; - Matrix[method[i]] = staticMeth; - } - } - - var methodsWithArgs = [['Math.pow', 1, 'pow']]; - - for (var 
methodWithArg of methodsWithArgs) { - var args = 'arg0'; - for (i = 1; i < methodWithArg[1]; i++) { - args += `, arg${i}`; - } - if (methodWithArg[1] !== 1) { - var inplaceMethWithArgs = eval2(fillTemplateFunction(inplaceMethodWithArgs, { - name: methodWithArg[2], - method: methodWithArg[0], - args: args - })); - var staticMethWithArgs = eval2(fillTemplateFunction(staticMethodWithArgs, { name: methodWithArg[2], args: args })); - for (i = 2; i < methodWithArg.length; i++) { - Matrix.prototype[methodWithArg[i]] = inplaceMethWithArgs; - Matrix[methodWithArg[i]] = staticMethWithArgs; - } - } else { - var tmplVar = { - name: methodWithArg[2], - args: args, - method: methodWithArg[0] - }; - var inplaceMethod2 = eval2(fillTemplateFunction(inplaceMethodWithOneArg, tmplVar)); - var inplaceMethodS = eval2(fillTemplateFunction(inplaceMethodWithOneArgScalar, tmplVar)); - var inplaceMethodM = eval2(fillTemplateFunction(inplaceMethodWithOneArgMatrix, tmplVar)); - var staticMethod2 = eval2(fillTemplateFunction(staticMethodWithOneArg, tmplVar)); - for (i = 2; i < methodWithArg.length; i++) { - Matrix.prototype[methodWithArg[i]] = inplaceMethod2; - Matrix.prototype[`${methodWithArg[i]}M`] = inplaceMethodM; - Matrix.prototype[`${methodWithArg[i]}S`] = inplaceMethodS; - Matrix[methodWithArg[i]] = staticMethod2; - } - } - } - - function fillTemplateFunction(template, values) { - for (var value in values) { - template = template.replace(new RegExp(`%${value}%`, 'g'), values[value]); - } - return template; - } - - return Matrix; -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/matrix.js - - - -class matrix_Matrix extends AbstractMatrix(Array) { - constructor(nRows, nColumns) { - var i; - if (arguments.length === 1 && typeof nRows === 'number') { - return new Array(nRows); - } - if (matrix_Matrix.isMatrix(nRows)) { - return nRows.clone(); - } else if (Number.isInteger(nRows) && nRows > 0) { - // Create an empty matrix - super(nRows); - if (Number.isInteger(nColumns) && nColumns > 0) { - for (i = 0; i < nRows; i++) { - this[i] = new Array(nColumns); - } - } else { - throw new TypeError('nColumns must be a positive integer'); - } - } else if (Array.isArray(nRows)) { - // Copy the values from the 2D array - const matrix = nRows; - nRows = matrix.length; - nColumns = matrix[0].length; - if (typeof nColumns !== 'number' || nColumns === 0) { - throw new TypeError( - 'Data must be a 2D array with at least one element' - ); - } - super(nRows); - for (i = 0; i < nRows; i++) { - if (matrix[i].length !== nColumns) { - throw new RangeError('Inconsistent array dimensions'); - } - this[i] = [].concat(matrix[i]); - } - } else { - throw new TypeError( - 'First argument must be a positive number or an array' - ); - } - this.rows = nRows; - this.columns = nColumns; - return this; - } - - set(rowIndex, columnIndex, value) { - this[rowIndex][columnIndex] = value; - return this; - } - - get(rowIndex, columnIndex) { - return this[rowIndex][columnIndex]; - } - - /** - * Removes a row from the given index - * @param {number} index - Row index - * @return {Matrix} this - */ - removeRow(index) { - checkRowIndex(this, index); - if (this.rows === 1) { - throw new RangeError('A matrix cannot have less than one row'); - } - this.splice(index, 1); - this.rows -= 1; - return this; - } - - /** - * Adds a row at the given index - * @param {number} [index = this.rows] - Row index - * @param {Array|Matrix} array - Array or vector - * @return {Matrix} this - */ - addRow(index, array) { - if (array === undefined) { - array = index; - index 
= this.rows; - } - checkRowIndex(this, index, true); - array = checkRowVector(this, array, true); - this.splice(index, 0, array); - this.rows += 1; - return this; - } - - /** - * Removes a column from the given index - * @param {number} index - Column index - * @return {Matrix} this - */ - removeColumn(index) { - checkColumnIndex(this, index); - if (this.columns === 1) { - throw new RangeError('A matrix cannot have less than one column'); - } - for (var i = 0; i < this.rows; i++) { - this[i].splice(index, 1); - } - this.columns -= 1; - return this; - } - - /** - * Adds a column at the given index - * @param {number} [index = this.columns] - Column index - * @param {Array|Matrix} array - Array or vector - * @return {Matrix} this - */ - addColumn(index, array) { - if (typeof array === 'undefined') { - array = index; - index = this.columns; - } - checkColumnIndex(this, index, true); - array = checkColumnVector(this, array); - for (var i = 0; i < this.rows; i++) { - this[i].splice(index, 0, array[i]); - } - this.columns += 1; - return this; - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/wrap/WrapperMatrix1D.js - - - -class WrapperMatrix1D_WrapperMatrix1D extends AbstractMatrix() { - /** - * @class WrapperMatrix1D - * @param {Array} data - * @param {object} [options] - * @param {object} [options.rows = 1] - */ - constructor(data, options = {}) { - const { rows = 1 } = options; - - if (data.length % rows !== 0) { - throw new Error('the data length is not divisible by the number of rows'); - } - super(); - this.rows = rows; - this.columns = data.length / rows; - this.data = data; - } - - set(rowIndex, columnIndex, value) { - var index = this._calculateIndex(rowIndex, columnIndex); - this.data[index] = value; - return this; - } - - get(rowIndex, columnIndex) { - var index = this._calculateIndex(rowIndex, columnIndex); - return this.data[index]; - } - - _calculateIndex(row, column) { - return row * this.columns + column; - } - - static get [Symbol.species]() { - return matrix_Matrix; - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/wrap/WrapperMatrix2D.js - - - -class WrapperMatrix2D_WrapperMatrix2D extends AbstractMatrix() { - /** - * @class WrapperMatrix2D - * @param {Array>} data - */ - constructor(data) { - super(); - this.data = data; - this.rows = data.length; - this.columns = data[0].length; - } - - set(rowIndex, columnIndex, value) { - this.data[rowIndex][columnIndex] = value; - return this; - } - - get(rowIndex, columnIndex) { - return this.data[rowIndex][columnIndex]; - } - - static get [Symbol.species]() { - return matrix_Matrix; - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/wrap/wrap.js - - - -/** - * @param {Array>|Array} array - * @param {object} [options] - * @param {object} [options.rows = 1] - * @return {WrapperMatrix1D|WrapperMatrix2D} - */ -function wrap(array, options) { - if (Array.isArray(array)) { - if (array[0] && Array.isArray(array[0])) { - return new WrapperMatrix2D_WrapperMatrix2D(array); - } else { - return new WrapperMatrix1D_WrapperMatrix1D(array, options); - } - } else { - throw new Error('the argument is not an array'); - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/dc/qr.js - - - - -/** - * @class QrDecomposition - * @link https://github.com/lutzroeder/Mapack/blob/master/Source/QrDecomposition.cs - * @param {Matrix} value - */ -class qr_QrDecomposition { - constructor(value) { - value = WrapperMatrix2D_WrapperMatrix2D.checkMatrix(value); - - var qr = value.clone(); - var m = value.rows; - var n = 
value.columns; - var rdiag = new Array(n); - var i, j, k, s; - - for (k = 0; k < n; k++) { - var nrm = 0; - for (i = k; i < m; i++) { - nrm = hypotenuse(nrm, qr.get(i, k)); - } - if (nrm !== 0) { - if (qr.get(k, k) < 0) { - nrm = -nrm; - } - for (i = k; i < m; i++) { - qr.set(i, k, qr.get(i, k) / nrm); - } - qr.set(k, k, qr.get(k, k) + 1); - for (j = k + 1; j < n; j++) { - s = 0; - for (i = k; i < m; i++) { - s += qr.get(i, k) * qr.get(i, j); - } - s = -s / qr.get(k, k); - for (i = k; i < m; i++) { - qr.set(i, j, qr.get(i, j) + s * qr.get(i, k)); - } - } - } - rdiag[k] = -nrm; - } - - this.QR = qr; - this.Rdiag = rdiag; - } - - /** - * Solve a problem of least square (Ax=b) by using the QR decomposition. Useful when A is rectangular, but not working when A is singular. - * Example : We search to approximate x, with A matrix shape m*n, x vector size n, b vector size m (m > n). We will use : - * var qr = QrDecomposition(A); - * var x = qr.solve(b); - * @param {Matrix} value - Matrix 1D which is the vector b (in the equation Ax = b) - * @return {Matrix} - The vector x - */ - solve(value) { - value = matrix_Matrix.checkMatrix(value); - - var qr = this.QR; - var m = qr.rows; - - if (value.rows !== m) { - throw new Error('Matrix row dimensions must agree'); - } - if (!this.isFullRank()) { - throw new Error('Matrix is rank deficient'); - } - - var count = value.columns; - var X = value.clone(); - var n = qr.columns; - var i, j, k, s; - - for (k = 0; k < n; k++) { - for (j = 0; j < count; j++) { - s = 0; - for (i = k; i < m; i++) { - s += qr[i][k] * X[i][j]; - } - s = -s / qr[k][k]; - for (i = k; i < m; i++) { - X[i][j] += s * qr[i][k]; - } - } - } - for (k = n - 1; k >= 0; k--) { - for (j = 0; j < count; j++) { - X[k][j] /= this.Rdiag[k]; - } - for (i = 0; i < k; i++) { - for (j = 0; j < count; j++) { - X[i][j] -= X[k][j] * qr[i][k]; - } - } - } - - return X.subMatrix(0, n - 1, 0, count - 1); - } - - /** - * - * @return {boolean} - */ - isFullRank() { - var columns = this.QR.columns; - for (var i = 0; i < columns; i++) { - if (this.Rdiag[i] === 0) { - return false; - } - } - return true; - } - - /** - * - * @return {Matrix} - */ - get upperTriangularMatrix() { - var qr = this.QR; - var n = qr.columns; - var X = new matrix_Matrix(n, n); - var i, j; - for (i = 0; i < n; i++) { - for (j = 0; j < n; j++) { - if (i < j) { - X[i][j] = qr[i][j]; - } else if (i === j) { - X[i][j] = this.Rdiag[i]; - } else { - X[i][j] = 0; - } - } - } - return X; - } - - /** - * - * @return {Matrix} - */ - get orthogonalMatrix() { - var qr = this.QR; - var rows = qr.rows; - var columns = qr.columns; - var X = new matrix_Matrix(rows, columns); - var i, j, k, s; - - for (k = columns - 1; k >= 0; k--) { - for (i = 0; i < rows; i++) { - X[i][k] = 0; - } - X[k][k] = 1; - for (j = k; j < columns; j++) { - if (qr[k][k] !== 0) { - s = 0; - for (i = k; i < rows; i++) { - s += qr[i][k] * X[i][j]; - } - - s = -s / qr[k][k]; - - for (i = k; i < rows; i++) { - X[i][j] += s * qr[i][k]; - } - } - } - } - return X; - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/decompositions.js - - - - - - -/** - * Computes the inverse of a Matrix - * @param {Matrix} matrix - * @param {boolean} [useSVD=false] - * @return {Matrix} - */ -function inverse(matrix, useSVD = false) { - matrix = WrapperMatrix2D_WrapperMatrix2D.checkMatrix(matrix); - if (useSVD) { - return new svd_SingularValueDecomposition(matrix).inverse(); - } else { - return solve(matrix, matrix_Matrix.eye(matrix.rows)); - } -} - -/** - * - * @param {Matrix} leftHandSide - * 
@param {Matrix} rightHandSide - * @param {boolean} [useSVD = false] - * @return {Matrix} - */ -function solve(leftHandSide, rightHandSide, useSVD = false) { - leftHandSide = WrapperMatrix2D_WrapperMatrix2D.checkMatrix(leftHandSide); - rightHandSide = WrapperMatrix2D_WrapperMatrix2D.checkMatrix(rightHandSide); - if (useSVD) { - return new svd_SingularValueDecomposition(leftHandSide).solve(rightHandSide); - } else { - return leftHandSide.isSquare() - ? new lu_LuDecomposition(leftHandSide).solve(rightHandSide) - : new qr_QrDecomposition(leftHandSide).solve(rightHandSide); - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/linearDependencies.js - - - - - -// function used by rowsDependencies -function xrange(n, exception) { - var range = []; - for (var i = 0; i < n; i++) { - if (i !== exception) { - range.push(i); - } - } - return range; -} - -// function used by rowsDependencies -function dependenciesOneRow( - error, - matrix, - index, - thresholdValue = 10e-10, - thresholdError = 10e-10 -) { - if (error > thresholdError) { - return new Array(matrix.rows + 1).fill(0); - } else { - var returnArray = matrix.addRow(index, [0]); - for (var i = 0; i < returnArray.rows; i++) { - if (Math.abs(returnArray.get(i, 0)) < thresholdValue) { - returnArray.set(i, 0, 0); - } - } - return returnArray.to1DArray(); - } -} - -/** - * Creates a matrix which represents the dependencies between rows. - * If a row is a linear combination of others rows, the result will be a row with the coefficients of this combination. - * For example : for A = [[2, 0, 0, 1], [0, 1, 6, 0], [0, 3, 0, 1], [0, 0, 1, 0], [0, 1, 2, 0]], the result will be [[0, 0, 0, 0, 0], [0, 0, 0, 4, 1], [0, 0, 0, 0, 0], [0, 0.25, 0, 0, -0.25], [0, 1, 0, -4, 0]] - * @param {Matrix} matrix - * @param {Object} [options] includes thresholdValue and thresholdError. - * @param {number} [options.thresholdValue = 10e-10] If an absolute value is inferior to this threshold, it will equals zero. - * @param {number} [options.thresholdError = 10e-10] If the error is inferior to that threshold, the linear combination found is accepted and the row is dependent from other rows. - * @return {Matrix} the matrix which represents the dependencies between rows. 
- */ - -function linearDependencies(matrix, options = {}) { - const { thresholdValue = 10e-10, thresholdError = 10e-10 } = options; - - var n = matrix.rows; - var results = new matrix_Matrix(n, n); - - for (var i = 0; i < n; i++) { - var b = matrix_Matrix.columnVector(matrix.getRow(i)); - var Abis = matrix.subMatrixRow(xrange(n, i)).transposeView(); - var svd = new svd_SingularValueDecomposition(Abis); - var x = svd.solve(b); - var error = lib_es6( - matrix_Matrix.sub(b, Abis.mmul(x)) - .abs() - .to1DArray() - ); - results.setRow( - i, - dependenciesOneRow(error, x, i, thresholdValue, thresholdError) - ); - } - return results; -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/dc/evd.js - - - - -/** - * @class EigenvalueDecomposition - * @link https://github.com/lutzroeder/Mapack/blob/master/Source/EigenvalueDecomposition.cs - * @param {Matrix} matrix - * @param {object} [options] - * @param {boolean} [options.assumeSymmetric=false] - */ -class evd_EigenvalueDecomposition { - constructor(matrix, options = {}) { - const { assumeSymmetric = false } = options; - - matrix = WrapperMatrix2D_WrapperMatrix2D.checkMatrix(matrix); - if (!matrix.isSquare()) { - throw new Error('Matrix is not a square matrix'); - } - - var n = matrix.columns; - var V = getFilled2DArray(n, n, 0); - var d = new Array(n); - var e = new Array(n); - var value = matrix; - var i, j; - - var isSymmetric = false; - if (assumeSymmetric) { - isSymmetric = true; - } else { - isSymmetric = matrix.isSymmetric(); - } - - if (isSymmetric) { - for (i = 0; i < n; i++) { - for (j = 0; j < n; j++) { - V[i][j] = value.get(i, j); - } - } - tred2(n, e, d, V); - tql2(n, e, d, V); - } else { - var H = getFilled2DArray(n, n, 0); - var ort = new Array(n); - for (j = 0; j < n; j++) { - for (i = 0; i < n; i++) { - H[i][j] = value.get(i, j); - } - } - orthes(n, H, ort, V); - hqr2(n, e, d, V, H); - } - - this.n = n; - this.e = e; - this.d = d; - this.V = V; - } - - /** - * - * @return {Array} - */ - get realEigenvalues() { - return this.d; - } - - /** - * - * @return {Array} - */ - get imaginaryEigenvalues() { - return this.e; - } - - /** - * - * @return {Matrix} - */ - get eigenvectorMatrix() { - if (!matrix_Matrix.isMatrix(this.V)) { - this.V = new matrix_Matrix(this.V); - } - return this.V; - } - - /** - * - * @return {Matrix} - */ - get diagonalMatrix() { - var n = this.n; - var e = this.e; - var d = this.d; - var X = new matrix_Matrix(n, n); - var i, j; - for (i = 0; i < n; i++) { - for (j = 0; j < n; j++) { - X[i][j] = 0; - } - X[i][i] = d[i]; - if (e[i] > 0) { - X[i][i + 1] = e[i]; - } else if (e[i] < 0) { - X[i][i - 1] = e[i]; - } - } - return X; - } -} - -function tred2(n, e, d, V) { - var f, g, h, i, j, k, hh, scale; - - for (j = 0; j < n; j++) { - d[j] = V[n - 1][j]; - } - - for (i = n - 1; i > 0; i--) { - scale = 0; - h = 0; - for (k = 0; k < i; k++) { - scale = scale + Math.abs(d[k]); - } - - if (scale === 0) { - e[i] = d[i - 1]; - for (j = 0; j < i; j++) { - d[j] = V[i - 1][j]; - V[i][j] = 0; - V[j][i] = 0; - } - } else { - for (k = 0; k < i; k++) { - d[k] /= scale; - h += d[k] * d[k]; - } - - f = d[i - 1]; - g = Math.sqrt(h); - if (f > 0) { - g = -g; - } - - e[i] = scale * g; - h = h - f * g; - d[i - 1] = f - g; - for (j = 0; j < i; j++) { - e[j] = 0; - } - - for (j = 0; j < i; j++) { - f = d[j]; - V[j][i] = f; - g = e[j] + V[j][j] * f; - for (k = j + 1; k <= i - 1; k++) { - g += V[k][j] * d[k]; - e[k] += V[k][j] * f; - } - e[j] = g; - } - - f = 0; - for (j = 0; j < i; j++) { - e[j] /= h; - f += e[j] * d[j]; - } - - hh = f / 
(h + h); - for (j = 0; j < i; j++) { - e[j] -= hh * d[j]; - } - - for (j = 0; j < i; j++) { - f = d[j]; - g = e[j]; - for (k = j; k <= i - 1; k++) { - V[k][j] -= f * e[k] + g * d[k]; - } - d[j] = V[i - 1][j]; - V[i][j] = 0; - } - } - d[i] = h; - } - - for (i = 0; i < n - 1; i++) { - V[n - 1][i] = V[i][i]; - V[i][i] = 1; - h = d[i + 1]; - if (h !== 0) { - for (k = 0; k <= i; k++) { - d[k] = V[k][i + 1] / h; - } - - for (j = 0; j <= i; j++) { - g = 0; - for (k = 0; k <= i; k++) { - g += V[k][i + 1] * V[k][j]; - } - for (k = 0; k <= i; k++) { - V[k][j] -= g * d[k]; - } - } - } - - for (k = 0; k <= i; k++) { - V[k][i + 1] = 0; - } - } - - for (j = 0; j < n; j++) { - d[j] = V[n - 1][j]; - V[n - 1][j] = 0; - } - - V[n - 1][n - 1] = 1; - e[0] = 0; -} - -function tql2(n, e, d, V) { - var g, h, i, j, k, l, m, p, r, dl1, c, c2, c3, el1, s, s2, iter; - - for (i = 1; i < n; i++) { - e[i - 1] = e[i]; - } - - e[n - 1] = 0; - - var f = 0; - var tst1 = 0; - var eps = Number.EPSILON; - - for (l = 0; l < n; l++) { - tst1 = Math.max(tst1, Math.abs(d[l]) + Math.abs(e[l])); - m = l; - while (m < n) { - if (Math.abs(e[m]) <= eps * tst1) { - break; - } - m++; - } - - if (m > l) { - iter = 0; - do { - iter = iter + 1; - - g = d[l]; - p = (d[l + 1] - g) / (2 * e[l]); - r = hypotenuse(p, 1); - if (p < 0) { - r = -r; - } - - d[l] = e[l] / (p + r); - d[l + 1] = e[l] * (p + r); - dl1 = d[l + 1]; - h = g - d[l]; - for (i = l + 2; i < n; i++) { - d[i] -= h; - } - - f = f + h; - - p = d[m]; - c = 1; - c2 = c; - c3 = c; - el1 = e[l + 1]; - s = 0; - s2 = 0; - for (i = m - 1; i >= l; i--) { - c3 = c2; - c2 = c; - s2 = s; - g = c * e[i]; - h = c * p; - r = hypotenuse(p, e[i]); - e[i + 1] = s * r; - s = e[i] / r; - c = p / r; - p = c * d[i] - s * g; - d[i + 1] = h + s * (c * g + s * d[i]); - - for (k = 0; k < n; k++) { - h = V[k][i + 1]; - V[k][i + 1] = s * V[k][i] + c * h; - V[k][i] = c * V[k][i] - s * h; - } - } - - p = -s * s2 * c3 * el1 * e[l] / dl1; - e[l] = s * p; - d[l] = c * p; - } while (Math.abs(e[l]) > eps * tst1); - } - d[l] = d[l] + f; - e[l] = 0; - } - - for (i = 0; i < n - 1; i++) { - k = i; - p = d[i]; - for (j = i + 1; j < n; j++) { - if (d[j] < p) { - k = j; - p = d[j]; - } - } - - if (k !== i) { - d[k] = d[i]; - d[i] = p; - for (j = 0; j < n; j++) { - p = V[j][i]; - V[j][i] = V[j][k]; - V[j][k] = p; - } - } - } -} - -function orthes(n, H, ort, V) { - var low = 0; - var high = n - 1; - var f, g, h, i, j, m; - var scale; - - for (m = low + 1; m <= high - 1; m++) { - scale = 0; - for (i = m; i <= high; i++) { - scale = scale + Math.abs(H[i][m - 1]); - } - - if (scale !== 0) { - h = 0; - for (i = high; i >= m; i--) { - ort[i] = H[i][m - 1] / scale; - h += ort[i] * ort[i]; - } - - g = Math.sqrt(h); - if (ort[m] > 0) { - g = -g; - } - - h = h - ort[m] * g; - ort[m] = ort[m] - g; - - for (j = m; j < n; j++) { - f = 0; - for (i = high; i >= m; i--) { - f += ort[i] * H[i][j]; - } - - f = f / h; - for (i = m; i <= high; i++) { - H[i][j] -= f * ort[i]; - } - } - - for (i = 0; i <= high; i++) { - f = 0; - for (j = high; j >= m; j--) { - f += ort[j] * H[i][j]; - } - - f = f / h; - for (j = m; j <= high; j++) { - H[i][j] -= f * ort[j]; - } - } - - ort[m] = scale * ort[m]; - H[m][m - 1] = scale * g; - } - } - - for (i = 0; i < n; i++) { - for (j = 0; j < n; j++) { - V[i][j] = i === j ? 
1 : 0; - } - } - - for (m = high - 1; m >= low + 1; m--) { - if (H[m][m - 1] !== 0) { - for (i = m + 1; i <= high; i++) { - ort[i] = H[i][m - 1]; - } - - for (j = m; j <= high; j++) { - g = 0; - for (i = m; i <= high; i++) { - g += ort[i] * V[i][j]; - } - - g = g / ort[m] / H[m][m - 1]; - for (i = m; i <= high; i++) { - V[i][j] += g * ort[i]; - } - } - } - } -} - -function hqr2(nn, e, d, V, H) { - var n = nn - 1; - var low = 0; - var high = nn - 1; - var eps = Number.EPSILON; - var exshift = 0; - var norm = 0; - var p = 0; - var q = 0; - var r = 0; - var s = 0; - var z = 0; - var iter = 0; - var i, j, k, l, m, t, w, x, y; - var ra, sa, vr, vi; - var notlast, cdivres; - - for (i = 0; i < nn; i++) { - if (i < low || i > high) { - d[i] = H[i][i]; - e[i] = 0; - } - - for (j = Math.max(i - 1, 0); j < nn; j++) { - norm = norm + Math.abs(H[i][j]); - } - } - - while (n >= low) { - l = n; - while (l > low) { - s = Math.abs(H[l - 1][l - 1]) + Math.abs(H[l][l]); - if (s === 0) { - s = norm; - } - if (Math.abs(H[l][l - 1]) < eps * s) { - break; - } - l--; - } - - if (l === n) { - H[n][n] = H[n][n] + exshift; - d[n] = H[n][n]; - e[n] = 0; - n--; - iter = 0; - } else if (l === n - 1) { - w = H[n][n - 1] * H[n - 1][n]; - p = (H[n - 1][n - 1] - H[n][n]) / 2; - q = p * p + w; - z = Math.sqrt(Math.abs(q)); - H[n][n] = H[n][n] + exshift; - H[n - 1][n - 1] = H[n - 1][n - 1] + exshift; - x = H[n][n]; - - if (q >= 0) { - z = p >= 0 ? p + z : p - z; - d[n - 1] = x + z; - d[n] = d[n - 1]; - if (z !== 0) { - d[n] = x - w / z; - } - e[n - 1] = 0; - e[n] = 0; - x = H[n][n - 1]; - s = Math.abs(x) + Math.abs(z); - p = x / s; - q = z / s; - r = Math.sqrt(p * p + q * q); - p = p / r; - q = q / r; - - for (j = n - 1; j < nn; j++) { - z = H[n - 1][j]; - H[n - 1][j] = q * z + p * H[n][j]; - H[n][j] = q * H[n][j] - p * z; - } - - for (i = 0; i <= n; i++) { - z = H[i][n - 1]; - H[i][n - 1] = q * z + p * H[i][n]; - H[i][n] = q * H[i][n] - p * z; - } - - for (i = low; i <= high; i++) { - z = V[i][n - 1]; - V[i][n - 1] = q * z + p * V[i][n]; - V[i][n] = q * V[i][n] - p * z; - } - } else { - d[n - 1] = x + p; - d[n] = x + p; - e[n - 1] = z; - e[n] = -z; - } - - n = n - 2; - iter = 0; - } else { - x = H[n][n]; - y = 0; - w = 0; - if (l < n) { - y = H[n - 1][n - 1]; - w = H[n][n - 1] * H[n - 1][n]; - } - - if (iter === 10) { - exshift += x; - for (i = low; i <= n; i++) { - H[i][i] -= x; - } - s = Math.abs(H[n][n - 1]) + Math.abs(H[n - 1][n - 2]); - x = y = 0.75 * s; - w = -0.4375 * s * s; - } - - if (iter === 30) { - s = (y - x) / 2; - s = s * s + w; - if (s > 0) { - s = Math.sqrt(s); - if (y < x) { - s = -s; - } - s = x - w / ((y - x) / 2 + s); - for (i = low; i <= n; i++) { - H[i][i] -= s; - } - exshift += s; - x = y = w = 0.964; - } - } - - iter = iter + 1; - - m = n - 2; - while (m >= l) { - z = H[m][m]; - r = x - z; - s = y - z; - p = (r * s - w) / H[m + 1][m] + H[m][m + 1]; - q = H[m + 1][m + 1] - z - r - s; - r = H[m + 2][m + 1]; - s = Math.abs(p) + Math.abs(q) + Math.abs(r); - p = p / s; - q = q / s; - r = r / s; - if (m === l) { - break; - } - if ( - Math.abs(H[m][m - 1]) * (Math.abs(q) + Math.abs(r)) < - eps * - (Math.abs(p) * - (Math.abs(H[m - 1][m - 1]) + - Math.abs(z) + - Math.abs(H[m + 1][m + 1]))) - ) { - break; - } - m--; - } - - for (i = m + 2; i <= n; i++) { - H[i][i - 2] = 0; - if (i > m + 2) { - H[i][i - 3] = 0; - } - } - - for (k = m; k <= n - 1; k++) { - notlast = k !== n - 1; - if (k !== m) { - p = H[k][k - 1]; - q = H[k + 1][k - 1]; - r = notlast ? 
H[k + 2][k - 1] : 0; - x = Math.abs(p) + Math.abs(q) + Math.abs(r); - if (x !== 0) { - p = p / x; - q = q / x; - r = r / x; - } - } - - if (x === 0) { - break; - } - - s = Math.sqrt(p * p + q * q + r * r); - if (p < 0) { - s = -s; - } - - if (s !== 0) { - if (k !== m) { - H[k][k - 1] = -s * x; - } else if (l !== m) { - H[k][k - 1] = -H[k][k - 1]; - } - - p = p + s; - x = p / s; - y = q / s; - z = r / s; - q = q / p; - r = r / p; - - for (j = k; j < nn; j++) { - p = H[k][j] + q * H[k + 1][j]; - if (notlast) { - p = p + r * H[k + 2][j]; - H[k + 2][j] = H[k + 2][j] - p * z; - } - - H[k][j] = H[k][j] - p * x; - H[k + 1][j] = H[k + 1][j] - p * y; - } - - for (i = 0; i <= Math.min(n, k + 3); i++) { - p = x * H[i][k] + y * H[i][k + 1]; - if (notlast) { - p = p + z * H[i][k + 2]; - H[i][k + 2] = H[i][k + 2] - p * r; - } - - H[i][k] = H[i][k] - p; - H[i][k + 1] = H[i][k + 1] - p * q; - } - - for (i = low; i <= high; i++) { - p = x * V[i][k] + y * V[i][k + 1]; - if (notlast) { - p = p + z * V[i][k + 2]; - V[i][k + 2] = V[i][k + 2] - p * r; - } - - V[i][k] = V[i][k] - p; - V[i][k + 1] = V[i][k + 1] - p * q; - } - } - } - } - } - - if (norm === 0) { - return; - } - - for (n = nn - 1; n >= 0; n--) { - p = d[n]; - q = e[n]; - - if (q === 0) { - l = n; - H[n][n] = 1; - for (i = n - 1; i >= 0; i--) { - w = H[i][i] - p; - r = 0; - for (j = l; j <= n; j++) { - r = r + H[i][j] * H[j][n]; - } - - if (e[i] < 0) { - z = w; - s = r; - } else { - l = i; - if (e[i] === 0) { - H[i][n] = w !== 0 ? -r / w : -r / (eps * norm); - } else { - x = H[i][i + 1]; - y = H[i + 1][i]; - q = (d[i] - p) * (d[i] - p) + e[i] * e[i]; - t = (x * s - z * r) / q; - H[i][n] = t; - H[i + 1][n] = - Math.abs(x) > Math.abs(z) ? (-r - w * t) / x : (-s - y * t) / z; - } - - t = Math.abs(H[i][n]); - if (eps * t * t > 1) { - for (j = i; j <= n; j++) { - H[j][n] = H[j][n] / t; - } - } - } - } - } else if (q < 0) { - l = n - 1; - - if (Math.abs(H[n][n - 1]) > Math.abs(H[n - 1][n])) { - H[n - 1][n - 1] = q / H[n][n - 1]; - H[n - 1][n] = -(H[n][n] - p) / H[n][n - 1]; - } else { - cdivres = cdiv(0, -H[n - 1][n], H[n - 1][n - 1] - p, q); - H[n - 1][n - 1] = cdivres[0]; - H[n - 1][n] = cdivres[1]; - } - - H[n][n - 1] = 0; - H[n][n] = 1; - for (i = n - 2; i >= 0; i--) { - ra = 0; - sa = 0; - for (j = l; j <= n; j++) { - ra = ra + H[i][j] * H[j][n - 1]; - sa = sa + H[i][j] * H[j][n]; - } - - w = H[i][i] - p; - - if (e[i] < 0) { - z = w; - r = ra; - s = sa; - } else { - l = i; - if (e[i] === 0) { - cdivres = cdiv(-ra, -sa, w, q); - H[i][n - 1] = cdivres[0]; - H[i][n] = cdivres[1]; - } else { - x = H[i][i + 1]; - y = H[i + 1][i]; - vr = (d[i] - p) * (d[i] - p) + e[i] * e[i] - q * q; - vi = (d[i] - p) * 2 * q; - if (vr === 0 && vi === 0) { - vr = - eps * - norm * - (Math.abs(w) + - Math.abs(q) + - Math.abs(x) + - Math.abs(y) + - Math.abs(z)); - } - cdivres = cdiv( - x * r - z * ra + q * sa, - x * s - z * sa - q * ra, - vr, - vi - ); - H[i][n - 1] = cdivres[0]; - H[i][n] = cdivres[1]; - if (Math.abs(x) > Math.abs(z) + Math.abs(q)) { - H[i + 1][n - 1] = (-ra - w * H[i][n - 1] + q * H[i][n]) / x; - H[i + 1][n] = (-sa - w * H[i][n] - q * H[i][n - 1]) / x; - } else { - cdivres = cdiv(-r - y * H[i][n - 1], -s - y * H[i][n], z, q); - H[i + 1][n - 1] = cdivres[0]; - H[i + 1][n] = cdivres[1]; - } - } - - t = Math.max(Math.abs(H[i][n - 1]), Math.abs(H[i][n])); - if (eps * t * t > 1) { - for (j = i; j <= n; j++) { - H[j][n - 1] = H[j][n - 1] / t; - H[j][n] = H[j][n] / t; - } - } - } - } - } - } - - for (i = 0; i < nn; i++) { - if (i < low || i > high) { - for (j = i; 
j < nn; j++) { - V[i][j] = H[i][j]; - } - } - } - - for (j = nn - 1; j >= low; j--) { - for (i = low; i <= high; i++) { - z = 0; - for (k = low; k <= Math.min(j, high); k++) { - z = z + V[i][k] * H[k][j]; - } - V[i][j] = z; - } - } -} - -function cdiv(xr, xi, yr, yi) { - var r, d; - if (Math.abs(yr) > Math.abs(yi)) { - r = yi / yr; - d = yr + r * yi; - return [(xr + r * xi) / d, (xi - r * xr) / d]; - } else { - r = yr / yi; - d = yi + r * yr; - return [(r * xr + xi) / d, (r * xi - xr) / d]; - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/dc/cholesky.js - - -/** - * @class CholeskyDecomposition - * @link https://github.com/lutzroeder/Mapack/blob/master/Source/CholeskyDecomposition.cs - * @param {Matrix} value - */ -class cholesky_CholeskyDecomposition { - constructor(value) { - value = WrapperMatrix2D_WrapperMatrix2D.checkMatrix(value); - if (!value.isSymmetric()) { - throw new Error('Matrix is not symmetric'); - } - - var a = value; - var dimension = a.rows; - var l = new matrix_Matrix(dimension, dimension); - var positiveDefinite = true; - var i, j, k; - - for (j = 0; j < dimension; j++) { - var Lrowj = l[j]; - var d = 0; - for (k = 0; k < j; k++) { - var Lrowk = l[k]; - var s = 0; - for (i = 0; i < k; i++) { - s += Lrowk[i] * Lrowj[i]; - } - Lrowj[k] = s = (a.get(j, k) - s) / l[k][k]; - d = d + s * s; - } - - d = a.get(j, j) - d; - - positiveDefinite &= d > 0; - l[j][j] = Math.sqrt(Math.max(d, 0)); - for (k = j + 1; k < dimension; k++) { - l[j][k] = 0; - } - } - - if (!positiveDefinite) { - throw new Error('Matrix is not positive definite'); - } - - this.L = l; - } - - /** - * - * @param {Matrix} value - * @return {Matrix} - */ - solve(value) { - value = WrapperMatrix2D_WrapperMatrix2D.checkMatrix(value); - - var l = this.L; - var dimension = l.rows; - - if (value.rows !== dimension) { - throw new Error('Matrix dimensions do not match'); - } - - var count = value.columns; - var B = value.clone(); - var i, j, k; - - for (k = 0; k < dimension; k++) { - for (j = 0; j < count; j++) { - for (i = 0; i < k; i++) { - B[k][j] -= B[i][j] * l[k][i]; - } - B[k][j] /= l[k][k]; - } - } - - for (k = dimension - 1; k >= 0; k--) { - for (j = 0; j < count; j++) { - for (i = k + 1; i < dimension; i++) { - B[k][j] -= B[i][j] * l[i][k]; - } - B[k][j] /= l[k][k]; - } - } - - return B; - } - - /** - * - * @return {Matrix} - */ - get lowerTriangularMatrix() { - return this.L; - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/index.js -/* concated harmony reexport default */__webpack_require__.d(__webpack_exports__, "default", function() { return matrix_Matrix; }); -/* concated harmony reexport Matrix */__webpack_require__.d(__webpack_exports__, "Matrix", function() { return matrix_Matrix; }); -/* concated harmony reexport abstractMatrix */__webpack_require__.d(__webpack_exports__, "abstractMatrix", function() { return AbstractMatrix; }); -/* concated harmony reexport wrap */__webpack_require__.d(__webpack_exports__, "wrap", function() { return wrap; }); -/* concated harmony reexport WrapperMatrix2D */__webpack_require__.d(__webpack_exports__, "WrapperMatrix2D", function() { return WrapperMatrix2D_WrapperMatrix2D; }); -/* concated harmony reexport WrapperMatrix1D */__webpack_require__.d(__webpack_exports__, "WrapperMatrix1D", function() { return WrapperMatrix1D_WrapperMatrix1D; }); -/* concated harmony reexport solve */__webpack_require__.d(__webpack_exports__, "solve", function() { return solve; }); -/* concated harmony reexport inverse */__webpack_require__.d(__webpack_exports__, 
"inverse", function() { return inverse; }); -/* concated harmony reexport linearDependencies */__webpack_require__.d(__webpack_exports__, "linearDependencies", function() { return linearDependencies; }); -/* concated harmony reexport SingularValueDecomposition */__webpack_require__.d(__webpack_exports__, "SingularValueDecomposition", function() { return svd_SingularValueDecomposition; }); -/* concated harmony reexport SVD */__webpack_require__.d(__webpack_exports__, "SVD", function() { return svd_SingularValueDecomposition; }); -/* concated harmony reexport EigenvalueDecomposition */__webpack_require__.d(__webpack_exports__, "EigenvalueDecomposition", function() { return evd_EigenvalueDecomposition; }); -/* concated harmony reexport EVD */__webpack_require__.d(__webpack_exports__, "EVD", function() { return evd_EigenvalueDecomposition; }); -/* concated harmony reexport CholeskyDecomposition */__webpack_require__.d(__webpack_exports__, "CholeskyDecomposition", function() { return cholesky_CholeskyDecomposition; }); -/* concated harmony reexport CHO */__webpack_require__.d(__webpack_exports__, "CHO", function() { return cholesky_CholeskyDecomposition; }); -/* concated harmony reexport LuDecomposition */__webpack_require__.d(__webpack_exports__, "LuDecomposition", function() { return lu_LuDecomposition; }); -/* concated harmony reexport LU */__webpack_require__.d(__webpack_exports__, "LU", function() { return lu_LuDecomposition; }); -/* concated harmony reexport QrDecomposition */__webpack_require__.d(__webpack_exports__, "QrDecomposition", function() { return qr_QrDecomposition; }); -/* concated harmony reexport QR */__webpack_require__.d(__webpack_exports__, "QR", function() { return qr_QrDecomposition; }); - - - - - - - - - - - - - - - - -/***/ }) -/******/ ]); -}); \ No newline at end of file diff --git a/spaces/merve/data-leak/source/private-and-fair/footnote.js b/spaces/merve/data-leak/source/private-and-fair/footnote.js deleted file mode 100644 index 383057091ac6456ef8d4c7205478d89bef07ad87..0000000000000000000000000000000000000000 --- a/spaces/merve/data-leak/source/private-and-fair/footnote.js +++ /dev/null @@ -1,132 +0,0 @@ -d3.select('body').selectAppend('div.tooltip.tooltip-hidden') - -var footnums = '¹²³⁴⁵⁶⁷⁸⁹' - -var footendSel = d3.selectAll('.footend') - .each(function(d, i){ - var sel = d3.select(this) - var ogHTML = sel.parent().html() - sel - .at({href: '#footstart-' + i, id: 'footend-' + i}) - .text(footnums[i]) - .datum(ogHTML) - }) - -footendSel.parent().parent().selectAll('br').remove() - -var footstartSel = d3.selectAll('.footstart') - .each(function(d, i){ - d3.select(this) - .at({ - href: '#footend-' + i, - }) - .text(footnums[i]) - .datum(footendSel.data()[i]) - .parent().at({id: 'footstart-' + i}) - }) - .call(addLockedTooltip) - -ttSel.classed('tooltip-footnote', 1) - -function addLockedTooltip(sel){ - sel - .on('mouseover', function(d, i){ - ttSel.classed('tooltip-footnote', 1) - .html(d) - .select('.footend').remove() - - var x = this.offsetLeft, - y = this.offsetTop, - bb = ttSel.node().getBoundingClientRect(), - left = d3.clamp(20, (x-bb.width/2), window.innerWidth - bb.width - 20), - top = innerHeight + scrollY > y + 20 + bb.height ? 
y + 20 : y - bb.height - 10; - - ttSel.st({left, top}).classed('tooltip-hidden', false) - }) - - sel.on('mousemove',mouseover).on('mouseout', mouseout) - ttSel.on('mousemove', mouseover).on('mouseout', mouseout) - function mouseover(){ - if (window.__ttfade) window.__ttfade.stop() - } - function mouseout(){ - if (window.__ttfade) window.__ttfade.stop() - window.__ttfade = d3.timeout(() => { - ttSel - .classed('tooltip-hidden', 1) - }, 250) - } -} - - - - - -var infoSel = d3.select('.info-box').html('') - .st({border: '1px solid orange', background: 'rgba(255,250,241,.5)', maxWidth: 750, margin: '0 auto', padding: 20, paddingTop: 5, paddingBottom: 5}) - // .st({textAlign: }) - -infoSel.append('p') - .st({marginLeft: 10}) - .html('Not familiar with how machine learning models are trained or why they might leak data?
These interactive articles will get you up to speed.') - .html('New to some of these concepts? These interactive articles will get you up to speed.') - .html('New to machine learning or differential privacy? These interactive articles will get you up to speed.') - -var articles = [ - { - img: 'https://pair.withgoogle.com/explorables/images/anonymization.png', - title: 'Collecting Sensitive Information', - permalink: 'https://pair.withgoogle.com/explorables/anonymization/', - }, - { - img: 'https://pair.withgoogle.com/explorables/images/model-inversion.png', - title: 'Why Some Models Leak Data', - permalink: 'https://pair.withgoogle.com/explorables/data-leak/', - }, - { - img: 'http://playground.tensorflow.org/preview.png', - title: 'TensorFlow Playground', - permalink: 'https://playground.tensorflow.org' - }, -] - - -var postSel = infoSel.appendMany('a.post', articles) - .st({ - textAlign: 'center', - width: '30.5%', - display: 'inline-block', - verticalAlign: 'top', - marginLeft: 10, - marginRight: 10, - textDecoration: 'none', - }) - .at({href: d => d.permalink}) - -postSel.append('div.img') - .st({ - width: '100%', - height: 80, - backgroundImage: d => `url(${d.img})`, - backgroundSize: 'cover', - backgroundPosition: 'center', - outline: '1px solid #ccc' - }) - -postSel.append('p.title') - .text(d => d.title) - .st({ - verticalAlign: 'top', - marginTop: 10, - textDecoration: 'none', - fontSize: 15, - fontWeight: 500, - }) - - -// width: 100%; -// height: 200px; -// background-image: url(https://pair.withgoogle.com/explorables/images/model-inversion.png); -// background-size: cover; -// background-position: center center; - diff --git a/spaces/merve/hidden-bias/source/_posts/2021-10-31-uncertainty-calibration.md b/spaces/merve/hidden-bias/source/_posts/2021-10-31-uncertainty-calibration.md deleted file mode 100644 index 0e097d412fff555af6b338ffa6d704d4ba05a454..0000000000000000000000000000000000000000 --- a/spaces/merve/hidden-bias/source/_posts/2021-10-31-uncertainty-calibration.md +++ /dev/null @@ -1,131 +0,0 @@ ---- -template: post.html -title: Are Model Predictions Probabilities? -socialsummary: Machine learning models express their uncertainty as model scores, but through calibration we can transform these scores into probabilities for more effective decision making. -shareimg: https://pair.withgoogle.com/explorables/images/uncertainty-calibration.png -shareimgabstract: https://pair.withgoogle.com/explorables/images/uncertainty-calibration-abstract.png -permalink: /uncertainty-calibration/ ---- - -
-
-
- -
- -If a machine learning model tells you that it’s going to rain tomorrow with a score of 0.60, should you buy an umbrella?1 - -

In the diagram, we have a hypothetical machine learning classifier for predicting rainy days. For each date, the classifier reads in relevant signals like temperature and humidity and spits out a number between 0 and 1. Each data point represents a different day, with the position representing the model’s prediction for rain that day and the symbol (🌧️ or ☀️) representing the true weather that occurred that day. - -

Do the model’s predictions tell us the probability of rain?
- -

In general, machine learning classifiers don’t just give binary predictions, but instead provide some numerical value between 0 and 1 for their predictions. This number, sometimes called the *model score* or *confidence*, is a way for the model to express its certainty about what class the input data belongs to. In most applications, the exact score is ignored and we use a threshold to round the score to a binary answer, yes or no, rain or not. However, by using *calibration* we can transform these scores into probabilities and use them more effectively in decision making.

-

- -

Thresholding

- -

One traditional approach to using a model’s score is through *thresholding*. In this setting, you choose a threshold *t* and then declare that the model thinks it’s going to rain if the score is above *t* and that it’s not if the score is below, thereby converting the score to a binary outcome. When you observe the actual weather, you know how often the model was wrong and can compute key aggregate statistics like *accuracy*.

-

We can sometimes treat these aggregate statistics themselves as probabilities. For example, accuracy is the probability that the binary prediction of your model (rain or not) is equal to the ground truth (🌧️ or ☀️). -
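To make this concrete, here is a minimal sketch of thresholding in Python with NumPy; the scores and rain labels are made-up stand-ins for the model outputs and ground truth in the diagram above.

```python
import numpy as np

scores = np.array([0.1, 0.35, 0.6, 0.8, 0.9])  # model scores, one per day
rained = np.array([0,   0,    1,   0,   1])    # ground truth: 1 = 🌧️, 0 = ☀️

t = 0.5                          # the chosen threshold
predicted_rain = scores > t      # binary decision for each day

# Accuracy: how often the thresholded prediction matches the ground truth.
accuracy = np.mean(predicted_rain == rained)
print(accuracy)  # 0.8, since one day (score 0.8, ☀️) is misclassified
```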

- -

Adjustable Thresholding

- -

The threshold can easily be changed after the model is trained. - -

Thresholding uses the model’s score to make a decision, but fails to consider the model’s confidence. The model score is only used to decide whether you are above or below the threshold, but the magnitude of the difference isn’t considered. For example, if you threshold at 0.4, the model’s predictions of 0.6 and 0.9 are treated the same, even though the model is much more confident in the latter. - -

Can we do a better job of incorporating the model score into our understanding of the model?
- -
- -

Calibration

- -

*Calibration* lets us compare our model scores directly to probabilities. - -

For this technique, instead of one threshold, we have many, which we use to split the predictions into buckets. Again, once we observe the ground truth, we can see what proportion of the predictions in each bucket were rainy days (🌧️). This proportion is the *empirical probability* of rain for that bucket. - -

Ideally, we want this proportion to be higher for higher buckets, so that the probability is roughly in line with the average prediction for that bucket. We call the difference between this proportion and the bucket’s average prediction the calibration error, and by averaging over all of the buckets, we can calculate the Expected Calibration Error. If the proportions and the predictions line up for our use case, meaning the error is low, then we say the model is “well-calibrated” and we can consider treating the model score as the probability that it will actually rain.
-
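A rough sketch of this bucketing procedure, again in Python with made-up data (the helper name and bucket count here are our own choices, not part of the original demo):

```python
import numpy as np

def expected_calibration_error(scores, labels, n_buckets=10):
    """Average |empirical rain rate - mean prediction|, weighted by bucket size."""
    edges = np.linspace(0.0, 1.0, n_buckets + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # The right edge is inclusive for the last bucket so a score of 1.0 counts.
        in_bucket = (scores >= lo) & ((scores < hi) | (hi == 1.0))
        if not in_bucket.any():
            continue
        gap = abs(labels[in_bucket].mean() - scores[in_bucket].mean())
        ece += in_bucket.mean() * gap
    return ece

scores = np.array([0.1, 0.35, 0.6, 0.8, 0.9])   # model scores, one per day
rained = np.array([0,   0,    1,   0,   1])     # 1 = 🌧️, 0 = ☀️
print(expected_calibration_error(scores, rained, n_buckets=5))
```

A well-calibrated model drives this number toward zero; the interactive above is doing the same bookkeeping bucket by bucket.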

- -

Adjusting Calibration

- -

We saw above that a well-calibrated model allows us to treat our model score as a kind of probability. But what if we start with a poorly calibrated model, one which is over- or under-confident? Is there anything we can do to improve it?

-

It turns out that, in many settings, we can adjust the model score without really changing the model’s decisions, as long as our adjustment preserves the order of the scores2. For example, if we map all of the scores from our original model to their squares, we don’t change the order of the data with respect to the model score. Thus, quantities like accuracy will stay the same as long as we appropriately map the threshold to its square as well. However, these adjustments *do* change the calibration of a model by changing which data points lie in which buckets. - -
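As a quick illustration (a hypothetical remapping, not one taken from the demo): squaring every score is strictly increasing on [0, 1], so the decisions and the accuracy survive as long as the threshold is squared too, yet the scores land in different calibration buckets.

```python
import numpy as np

scores = np.array([0.1, 0.35, 0.6, 0.8, 0.9])
t = 0.5

adjusted = scores ** 2   # order-preserving on [0, 1]
t_adj = t ** 2           # move the threshold the same way

# Identical binary decisions, hence identical accuracy...
assert np.array_equal(scores > t, adjusted > t_adj)

# ...but different bucket membership, hence different calibration:
print(np.floor(scores * 10))    # [1. 3. 6. 8. 9.]
print(np.floor(adjusted * 10))  # [0. 1. 3. 6. 8.]
```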

**Try tweaking the thresholds** to *calibrate* the model scores for our data3 – how much can you improve the model’s calibration?
- -

In general, we don’t have to rely on tweaking the model scores by hand to improve calibration. If we are trying to calibrate the model for a particular data distribution, we can use mathematical techniques like Isotonic Regression or Platt Scaling to generate the correct remapping for model scores. -
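For instance, scikit-learn ships an isotonic regression implementation that learns exactly this kind of order-preserving remapping from held-out data. A minimal sketch, with placeholder arrays standing in for real validation scores and outcomes:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Held-out scores and outcomes used to fit the remapping.
val_scores = np.array([0.1, 0.2, 0.35, 0.5, 0.6, 0.8, 0.9])
val_rained = np.array([0,   0,   1,    0,   1,   1,   1])

# Learn a non-decreasing map from raw scores to calibrated probabilities.
iso = IsotonicRegression(out_of_bounds="clip")
iso.fit(val_scores, val_rained)

calibrated = iso.predict(np.array([0.3, 0.7]))  # remap new model scores
```

Platt scaling plays the same role, but fits a logistic curve instead of a piecewise-constant one.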

- -

Shifting Data

- -

While good calibration is an important property for a model’s scores to be interpreted as probabilities, it alone does not capture all aspects of model uncertainty. - -

What happens if it starts to rain less frequently after we've trained and calibrated our model? Notice how the calibration drops, even if we use the same calibrated model scores as before. - -

Models are usually only well calibrated with respect to certain data distributions. If the data changes significantly between training and serving time, our models might cease to be well calibrated and we can’t rely on using our model scores as probabilities. -
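A toy simulation of this effect (everything here is synthetic, chosen only to make the gap visible): a model whose scores match the training-time rain rate exactly becomes miscalibrated the moment rain gets rarer, even though the scores themselves never change.

```python
import numpy as np

rng = np.random.default_rng(0)
scores = rng.uniform(size=10_000)

# Training-time world: P(rain | score) = score, so scores are well calibrated.
rain_before = rng.uniform(size=scores.size) < scores
# Serving-time world: rain is now half as likely at every score level.
rain_after = rng.uniform(size=scores.size) < 0.5 * scores

# Gap between the average prediction and the observed rain rate.
print(abs(scores.mean() - rain_before.mean()))  # close to 0.00 (calibrated)
print(abs(scores.mean() - rain_after.mean()))   # close to 0.25 (badly off)
```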

- -

Beyond Calibration

- -

Calibration can sometimes be easy to game. For example, if we knew that it rains 50% of the time over the course of the year, then we could create a model with a constant prediction of 0.5 every day. This would have perfect calibration, despite not being a very useful model for distinguishing day-to-day differences in the probability of rain. This highlights an important issue: - -

Better calibration doesn’t mean more accurate predictions.
- -

It turns out that statisticians comparing weather forecasts in meteorology identified the problem with focusing solely on calibration, and came up with a solution. Proper scoring rules provide an alternative approach to measuring the quality of probabilistic forecasts, by using a formula to measure the distance between the model’s predictions and the true event probabilities. These rules guarantee that a better value must mean a better prediction in terms of accuracy and calibration. Such rules incentivize models to be both better calibrated and more accurate.

-
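The Brier score, the mean squared distance between forecast probabilities and outcomes, is one of the oldest proper scoring rules from this meteorology literature. A small sketch with made-up outcomes, showing how it rewards an informative forecast over the perfectly calibrated but uninformative constant predictor described above:

```python
import numpy as np

rained = np.array([1, 0, 1, 1, 0, 0, 1, 0])    # rains 50% of the time

constant = np.full(rained.size, 0.5)           # calibrated but uninformative
informative = np.where(rained == 1, 0.8, 0.2)  # actually separates the days

def brier(predictions, outcomes):
    return np.mean((predictions - outcomes) ** 2)

print(brier(constant, rained))     # 0.25
print(brier(informative, rained))  # 0.04: the better forecast scores lower
```

Lower is better here, and no forecast can beat the true probabilities in expectation, which is what makes the rule proper.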

-
-
- - -

More Reading

- -

This post is only the beginning of the discussion on the connections between machine learning models, probability, and uncertainty. In practice, when developing machine learning models with uncertainty in mind, we may need to go beyond calibration. - -

In some settings, errors are not all equal. For example, if we are training a classifier to predict if a patient needs to be tested for a disease, then a false negative (missing a case of the disease) may be more detrimental than a false positive (accidentally having a patient tested). In such cases, we may not want a perfectly calibrated model, but may want to skew the model scores towards one class or another. The field of Statistical Decision Theory provides us with tools to determine how to better use model scores in this more general setting. Calibration may also lead to tension with other important goals like model fairness in some applications. - -
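With calibrated probabilities in hand, decision theory gives a simple recipe for such asymmetric costs: act whenever the probability exceeds the cost ratio. A sketch with made-up costs (if missing a case is nine times as costly as an unnecessary test, the optimal threshold drops to 0.1):

```python
import numpy as np

cost_fp = 1.0   # cost of an unnecessary test (false positive)
cost_fn = 9.0   # cost of a missed case (false negative)

# Expected cost is minimized by testing when p > cost_fp / (cost_fp + cost_fn).
threshold = cost_fp / (cost_fp + cost_fn)   # 0.1

p_disease = np.array([0.05, 0.12, 0.40])
should_test = p_disease > threshold         # [False, True, True]
```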

Beyond this, so far we’ve only considered the case of using a single model score, i.e. a point estimate. If we trained the model a thousand times with different random seeds, or resampled the training data, we would almost certainly generate a collection of different model scores for a given input. To truly unpack the different sources of uncertainty that we might encounter, we might want to look towards *distributional* approaches to measuring uncertainty, using techniques like Deep Ensembles or Bayesian modeling. We will dig deeper into these in future posts. - -
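As a preview of that distributional view, here is a toy sketch (hand-written scores, not a full Deep Ensembles implementation) in which the spread of scores across models trained with different seeds serves as an uncertainty signal alongside the mean:

```python
import numpy as np

# Scores for three inputs from five models trained with different random seeds.
ensemble_scores = np.array([
    [0.61, 0.58, 0.64, 0.60, 0.62],   # models agree: low uncertainty
    [0.10, 0.85, 0.45, 0.70, 0.20],   # models disagree: high uncertainty
    [0.92, 0.95, 0.90, 0.93, 0.94],
])

mean_score = ensemble_scores.mean(axis=1)   # point estimate per input
spread = ensemble_scores.std(axis=1)        # disagreement across the ensemble
```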

Credits

- -

Nithum Thain, Adam Pearce, Jasper Snoek & Mahima Pushkarna // March 2022 - -

Thanks to Balaji Lakshminarayanan, Emily Reif, Lucas Dixon, Martin Wattenberg, Fernanda Viégas, Ian Kivlichan, Nicole Mitchell, and Meredith Morris for their help with this piece. - -

Footnotes

- -

Your decision might depend both on the probability of rain and its severity (i.e. how much rain there is going to be). We’ll focus just on the probability for now. - -

Applying a strictly increasing function to the model scores always keeps their order the same.

-

In this example, we adjust the model scores by mapping every score within a bucket to the mean of that bucket.
-

More Explorables

- -

- - - - - - - - - - - - - - - - - - - - - - - \ No newline at end of file diff --git a/spaces/merve/measuring-fairness/public/uncertainty-calibration/footnote.css b/spaces/merve/measuring-fairness/public/uncertainty-calibration/footnote.css deleted file mode 100644 index 83472e6bc26c962b1c2fcc630d641ed62f181e77..0000000000000000000000000000000000000000 --- a/spaces/merve/measuring-fairness/public/uncertainty-calibration/footnote.css +++ /dev/null @@ -1,57 +0,0 @@ -.tooltip-footnote { - top: -1000px; - position: absolute; - padding: 10px; - background: rgba(255, 255, 255, .8); - border: 0px solid lightgray; - - width: 300px !important; - font-size: 14px; - line-height: 1.4em; - background: rgba(0, 0, 0, .8); - color: #fff; - pointer-events: all !important; -} -.tooltip-footnote a{ - color: #fff !important; -} -.tooltip-footnote:hover{ -/* opacity: 1; - pointer-events: all !important; -*/} - -.tooltip-footnote-hidden{ - opacity: 0; - transition: opacity .3s; - transition-delay: .2s; - pointer-events: none !important; -} - -@media (max-width: 590px){ - .footend{ - margin-left: 0px; - width: 10px; - } - - div.tooltip-footnote{ - transition: all 0s !important; - transition-delay: 0s !important; - - display: none; - position: fixed; - bottom: -1px; - width: calc(100%); - left: -1px !important; - right: -1px !important; - top: auto !important; - width: auto !important; - } -} - -.footstart{ - padding-left: 2px; - height: 8px !important; - /*background: red;*/ - /*display: inline-block;*/ - line-height: 0em; -} diff --git a/spaces/mfrashad/ClothingGAN/netdissect/segdata.py b/spaces/mfrashad/ClothingGAN/netdissect/segdata.py deleted file mode 100644 index f3cb6dfac8985d9c55344abbc26cc26c4862aa85..0000000000000000000000000000000000000000 --- a/spaces/mfrashad/ClothingGAN/netdissect/segdata.py +++ /dev/null @@ -1,74 +0,0 @@ -import os, numpy, torch, json -from .parallelfolder import ParallelImageFolders -from torchvision import transforms -from torchvision.transforms.functional import to_tensor, normalize - -class FieldDef(object): - def __init__(self, field, index, bitshift, bitmask, labels): - self.field = field - self.index = index - self.bitshift = bitshift - self.bitmask = bitmask - self.labels = labels - -class MultiSegmentDataset(object): - ''' - Just like ClevrMulticlassDataset, but the second stream is a one-hot - segmentation tensor rather than a flat one-hot presence vector. - - MultiSegmentDataset('dataset/clevrseg', - imgdir='images/train/positive', - segdir='images/train/segmentation') - ''' - def __init__(self, directory, transform=None, - imgdir='img', segdir='seg', val=False, size=None): - self.segdataset = ParallelImageFolders( - [os.path.join(directory, imgdir), - os.path.join(directory, segdir)], - transform=transform) - self.fields = [] - with open(os.path.join(directory, 'labelnames.json'), 'r') as f: - for defn in json.load(f): - self.fields.append(FieldDef( - defn['field'], defn['index'], defn['bitshift'], - defn['bitmask'], defn['label'])) - self.labels = ['-'] # Reserve label 0 to mean "no label" - self.categories = [] - self.label_category = [0] - for fieldnum, f in enumerate(self.fields): - self.categories.append(f.field) - f.firstchannel = len(self.labels) - f.channels = len(f.labels) - 1 - for lab in f.labels[1:]: - self.labels.append(lab) - self.label_category.append(fieldnum) - # Reserve 25% of the dataset for validation. 
- first_val = int(len(self.segdataset) * 0.75) - self.val = val - self.first = first_val if val else 0 - self.length = len(self.segdataset) - first_val if val else first_val - # Truncate the dataset if requested. - if size: - self.length = min(size, self.length) - - def __len__(self): - return self.length - - def __getitem__(self, index): - img, segimg = self.segdataset[index + self.first] - segin = numpy.array(segimg, numpy.uint8, copy=False) - segout = torch.zeros(len(self.categories), - segin.shape[0], segin.shape[1], dtype=torch.int64) - for i, field in enumerate(self.fields): - fielddata = ((torch.from_numpy(segin[:, :, field.index]) - >> field.bitshift) & field.bitmask) - segout[i] = field.firstchannel + fielddata - 1 - bincount = numpy.bincount(segout.flatten(), - minlength=len(self.labels)) - return img, segout, bincount - -if __name__ == '__main__': - ds = MultiSegmentDataset('dataset/clevrseg') - print(ds[0]) - import pdb; pdb.set_trace() - diff --git a/spaces/mikkoar/marco/tests/parse.ts b/spaces/mikkoar/marco/tests/parse.ts deleted file mode 100644 index 92940fe6315f1d7cb2b267ba5e5a7e26460a1de3..0000000000000000000000000000000000000000 --- a/spaces/mikkoar/marco/tests/parse.ts +++ /dev/null @@ -1,13 +0,0 @@ -import { promises as fs } from 'fs' -import { join } from 'path' -import { parseHeadersFromCurl } from '@/lib/utils' - -(async () => { - const content = await fs.readFile(join(__dirname, './fixtures/curl.txt'), 'utf-8') - const headers = parseHeadersFromCurl(content) - console.log(headers) - - const cmdContent = await fs.readFile(join(__dirname, './fixtures/cmd.txt'), 'utf-8') - const cmdHeaders = parseHeadersFromCurl(cmdContent) - console.log(cmdHeaders) -})() diff --git a/spaces/misteca/ChatGPT/ChuanhuChatbot.py b/spaces/misteca/ChatGPT/ChuanhuChatbot.py deleted file mode 100644 index 086dc6a1e3da91f4078e163ffac03ab54ed0a7d0..0000000000000000000000000000000000000000 --- a/spaces/misteca/ChatGPT/ChuanhuChatbot.py +++ /dev/null @@ -1,159 +0,0 @@ -import gradio as gr -# import openai -import os -import sys -import argparse -from utils import * -from presets import * - - -my_api_key = "" # 在这里输入你的 API 密钥 - -#if we are running in Docker -if os.environ.get('dockerrun') == 'yes': - dockerflag = True -else: - dockerflag = False - -authflag = False - -if dockerflag: - my_api_key = os.environ.get('my_api_key') - if my_api_key == "empty": - print("Please give a api key!") - sys.exit(1) - #auth - username = os.environ.get('USERNAME') - password = os.environ.get('PASSWORD') - if not (isinstance(username, type(None)) or isinstance(password, type(None))): - authflag = True -else: - if not my_api_key and os.path.exists("api_key.txt") and os.path.getsize("api_key.txt"): - with open("api_key.txt", "r") as f: - my_api_key = f.read().strip() - if os.path.exists("auth.json"): - with open("auth.json", "r") as f: - auth = json.load(f) - username = auth["username"] - password = auth["password"] - if username != "" and password != "": - authflag = True - -gr.Chatbot.postprocess = postprocess - -with gr.Blocks(css=customCSS) as demo: - gr.HTML(title) - with gr.Row(): - keyTxt = gr.Textbox(show_label=False, placeholder=f"在这里输入你的OpenAI API-key...", - value=my_api_key, type="password", visible=not HIDE_MY_KEY).style(container=True) - use_streaming_checkbox = gr.Checkbox(label="实时传输回答", value=True, visible=enable_streaming_option) - chatbot = gr.Chatbot() # .style(color_map=("#1D51EE", "#585A5B")) - history = gr.State([]) - token_count = gr.State([]) - promptTemplates = 
gr.State(load_template(get_template_names(plain=True)[0], mode=2)) - TRUECOMSTANT = gr.State(True) - FALSECONSTANT = gr.State(False) - topic = gr.State("未命名对话历史记录") - - with gr.Row(): - with gr.Column(scale=12): - user_input = gr.Textbox(show_label=False, placeholder="在这里输入").style( - container=False) - with gr.Column(min_width=50, scale=1): - submitBtn = gr.Button("🚀", variant="primary") - with gr.Row(): - emptyBtn = gr.Button("🧹 新的对话") - retryBtn = gr.Button("🔄 重新生成") - delLastBtn = gr.Button("🗑️ 删除最近一条对话") - reduceTokenBtn = gr.Button("♻️ 总结对话") - status_display = gr.Markdown("status: ready") - systemPromptTxt = gr.Textbox(show_label=True, placeholder=f"在这里输入System Prompt...", - label="System prompt", value=initial_prompt).style(container=True) - with gr.Accordion(label="加载Prompt模板", open=False): - with gr.Column(): - with gr.Row(): - with gr.Column(scale=6): - templateFileSelectDropdown = gr.Dropdown(label="选择Prompt模板集合文件", choices=get_template_names(plain=True), multiselect=False, value=get_template_names(plain=True)[0]) - with gr.Column(scale=1): - templateRefreshBtn = gr.Button("🔄 刷新") - templaeFileReadBtn = gr.Button("📂 读入模板") - with gr.Row(): - with gr.Column(scale=6): - templateSelectDropdown = gr.Dropdown(label="从Prompt模板中加载", choices=load_template(get_template_names(plain=True)[0], mode=1), multiselect=False, value=load_template(get_template_names(plain=True)[0], mode=1)[0]) - with gr.Column(scale=1): - templateApplyBtn = gr.Button("⬇️ 应用") - with gr.Accordion(label="保存/加载对话历史记录", open=False): - with gr.Column(): - with gr.Row(): - with gr.Column(scale=6): - saveFileName = gr.Textbox( - show_label=True, placeholder=f"在这里输入保存的文件名...", label="设置保存文件名", value="对话历史记录").style(container=True) - with gr.Column(scale=1): - saveHistoryBtn = gr.Button("💾 保存对话") - with gr.Row(): - with gr.Column(scale=6): - historyFileSelectDropdown = gr.Dropdown(label="从列表中加载对话", choices=get_history_names(plain=True), multiselect=False, value=get_history_names(plain=True)[0]) - with gr.Column(scale=1): - historyRefreshBtn = gr.Button("🔄 刷新") - historyReadBtn = gr.Button("📂 读入对话") - #inputs, top_p, temperature, top_k, repetition_penalty - with gr.Accordion("参数", open=False): - top_p = gr.Slider(minimum=-0, maximum=1.0, value=1.0, step=0.05, - interactive=True, label="Top-p (nucleus sampling)",) - temperature = gr.Slider(minimum=-0, maximum=5.0, value=1.0, - step=0.1, interactive=True, label="Temperature",) - #top_k = gr.Slider( minimum=1, maximum=50, value=4, step=1, interactive=True, label="Top-k",) - #repetition_penalty = gr.Slider( minimum=0.1, maximum=3.0, value=1.03, step=0.01, interactive=True, label="Repetition Penalty", ) - gr.Markdown(description) - - - user_input.submit(predict, [keyTxt, systemPromptTxt, history, user_input, chatbot, token_count, top_p, temperature, use_streaming_checkbox], [chatbot, history, status_display, token_count], show_progress=True) - user_input.submit(reset_textbox, [], [user_input]) - - submitBtn.click(predict, [keyTxt, systemPromptTxt, history, user_input, chatbot, token_count, top_p, temperature, use_streaming_checkbox], [chatbot, history, status_display, token_count], show_progress=True) - submitBtn.click(reset_textbox, [], [user_input]) - - emptyBtn.click(reset_state, outputs=[chatbot, history, token_count, status_display], show_progress=True) - - retryBtn.click(retry, [keyTxt, systemPromptTxt, history, chatbot, token_count, top_p, temperature, use_streaming_checkbox], [chatbot, history, status_display, token_count], show_progress=True) - - 
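- # (All of the buttons below follow the same Gradio wiring: click(fn,
- # inputs, outputs) runs fn on the input components and re-renders every
- # output; chat history and token counts persist between calls as gr.State.)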
delLastBtn.click(delete_last_conversation, [chatbot, history, token_count, use_streaming_checkbox], [ - chatbot, history, token_count, status_display], show_progress=True) - - reduceTokenBtn.click(reduce_token_size, [keyTxt, systemPromptTxt, history, chatbot, token_count, top_p, temperature, use_streaming_checkbox], [chatbot, history, status_display, token_count], show_progress=True) - - saveHistoryBtn.click(save_chat_history, [ - saveFileName, systemPromptTxt, history, chatbot], None, show_progress=True) - - saveHistoryBtn.click(get_history_names, None, [historyFileSelectDropdown]) - - historyRefreshBtn.click(get_history_names, None, [historyFileSelectDropdown]) - - historyReadBtn.click(load_chat_history, [historyFileSelectDropdown, systemPromptTxt, history, chatbot], [saveFileName, systemPromptTxt, history, chatbot], show_progress=True) - - templateRefreshBtn.click(get_template_names, None, [templateFileSelectDropdown]) - - templaeFileReadBtn.click(load_template, [templateFileSelectDropdown], [promptTemplates, templateSelectDropdown], show_progress=True) - - templateApplyBtn.click(get_template_content, [promptTemplates, templateSelectDropdown, systemPromptTxt], [systemPromptTxt], show_progress=True) - -print("川虎的温馨提示:访问 http://localhost:7860 查看界面") -# 默认开启本地服务器,默认可以直接从IP访问,默认不创建公开分享链接 -demo.title = "川虎ChatGPT 🚀" - -if __name__ == "__main__": - #if running in Docker - if dockerflag: - if authflag: - demo.queue().launch(server_name="0.0.0.0", server_port=7860,auth=(username, password)) - else: - demo.queue().launch(server_name="0.0.0.0", server_port=7860, share=False) - #if not running in Docker - else: - if authflag: - demo.queue().launch(share=False, auth=(username, password)) - else: - demo.queue().launch(share=False) # 改为 share=True 可以创建公开分享链接 - #demo.queue().launch(server_name="0.0.0.0", server_port=7860, share=False) # 可自定义端口 - #demo.queue().launch(server_name="0.0.0.0", server_port=7860,auth=("在这里填写用户名", "在这里填写密码")) # 可设置用户名与密码 - #demo.queue().launch(auth=("在这里填写用户名", "在这里填写密码")) # 适合Nginx反向代理 diff --git a/spaces/mkotan/mafese_feature_selection/README.md b/spaces/mkotan/mafese_feature_selection/README.md deleted file mode 100644 index d9126014fbcfc2576ad200496a8830e44e167d7f..0000000000000000000000000000000000000000 --- a/spaces/mkotan/mafese_feature_selection/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Mafese Feature Selection -emoji: 🚀 -colorFrom: pink -colorTo: blue -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false -license: gpl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ml6team/controlnet-interior-design/preprocessing.py b/spaces/ml6team/controlnet-interior-design/preprocessing.py deleted file mode 100644 index 98b673a083b9b19f630558547b1cf9e22e075336..0000000000000000000000000000000000000000 --- a/spaces/ml6team/controlnet-interior-design/preprocessing.py +++ /dev/null @@ -1,134 +0,0 @@ -"""Preprocessing methods""" -import logging -from typing import List, Tuple - -import numpy as np -from PIL import Image, ImageFilter -import streamlit as st - -from config import COLOR_RGB, WIDTH, HEIGHT -# from enhance_config import ENHANCE_SETTINGS - -LOGGING = logging.getLogger(__name__) - - -def preprocess_seg_mask(canvas_seg, real_seg: Image.Image = None) -> Tuple[np.ndarray, np.ndarray]: - """Preprocess the segmentation mask. - Args: - canvas_seg: segmentation canvas - real_seg (Image.Image, optional): segmentation mask. Defaults to None. 
- Returns: - Tuple[np.ndarray, np.ndarray]: segmentation mask, segmentation mask with overlay - """ - # get unique colors in the segmentation - image_seg = canvas_seg.image_data.copy()[:, :, :3] - - # average the colors of the segmentation masks - average_color = np.mean(image_seg, axis=(2)) - mask = average_color[:, :] > 0 - if mask.sum() > 0: - mask = mask * 1 - - unique_colors = np.unique(image_seg.reshape(-1, image_seg.shape[-1]), axis=0) - unique_colors = [tuple(color) for color in unique_colors] - - unique_colors = [color for color in unique_colors if np.sum( - np.all(image_seg == color, axis=-1)) > 100] - - unique_colors_exact = [color for color in unique_colors if color in COLOR_RGB] - - if real_seg is not None: - overlay_seg = np.array(real_seg) - - unique_colors = np.unique(overlay_seg.reshape(-1, overlay_seg.shape[-1]), axis=0) - unique_colors = [tuple(color) for color in unique_colors] - - for color in unique_colors_exact: - if color != (255, 255, 255) and color != (0, 0, 0): - overlay_seg[np.all(image_seg == color, axis=-1)] = color - image_seg = overlay_seg - - return mask, image_seg - - -def get_mask(image_mask: np.ndarray) -> np.ndarray: - """Get the mask from the segmentation mask. - Args: - image_mask (np.ndarray): segmentation mask - Returns: - np.ndarray: mask - """ - # average the colors of the segmentation masks - average_color = np.mean(image_mask, axis=(2)) - mask = average_color[:, :] > 0 - if mask.sum() > 0: - mask = mask * 1 - return mask - - -def get_image() -> np.ndarray: - """Get the image from the session state. - Returns: - np.ndarray: image - """ - if 'initial_image' in st.session_state and st.session_state['initial_image'] is not None: - initial_image = st.session_state['initial_image'] - if isinstance(initial_image, Image.Image): - return np.array(initial_image.resize((WIDTH, HEIGHT))) - else: - return np.array(Image.fromarray(initial_image).resize((WIDTH, HEIGHT))) - else: - return None - - -# def make_enhance_config(segmentation, objects=None): - """Make the enhance config for the segmentation image. 
- """ - info = ENHANCE_SETTINGS[objects] - - segmentation = np.array(segmentation) - - if 'replace' in info: - replace_color = info['replace'] - mask = np.zeros(segmentation.shape) - for color in info['colors']: - mask[np.all(segmentation == color, axis=-1)] = [1, 1, 1] - segmentation[np.all(segmentation == color, axis=-1)] = replace_color - - if info['inverse'] is False: - mask = np.zeros(segmentation.shape) - for color in info['colors']: - mask[np.all(segmentation == color, axis=-1)] = [1, 1, 1] - else: - mask = np.ones(segmentation.shape) - for color in info['colors']: - mask[np.all(segmentation == color, axis=-1)] = [0, 0, 0] - - st.session_state['positive_prompt'] = info['positive_prompt'] - st.session_state['negative_prompt'] = info['negative_prompt'] - - if info['inpainting'] is True: - mask = mask.astype(np.uint8) - mask = Image.fromarray(mask) - mask = mask.filter(ImageFilter.GaussianBlur(radius=13)) - mask = mask.filter(ImageFilter.MaxFilter(size=9)) - mask = np.array(mask) - - mask[mask < 0.1] = 0 - mask[mask >= 0.1] = 1 - mask = mask.astype(np.uint8) - - conditioning = dict( - mask_image=mask, - positive_prompt=info['positive_prompt'], - negative_prompt=info['negative_prompt'], - ) - else: - conditioning = dict( - mask_image=mask, - controlnet_conditioning_image=segmentation, - positive_prompt=info['positive_prompt'], - negative_prompt=info['negative_prompt'], - strength=info['strength'] - ) - return conditioning, info['inpainting'] \ No newline at end of file diff --git a/spaces/mmecheri/Rakuten_Streamlit/demo.py b/spaces/mmecheri/Rakuten_Streamlit/demo.py deleted file mode 100644 index 887ce857acb747c49c145bbcecb07e96e8e81e6c..0000000000000000000000000000000000000000 --- a/spaces/mmecheri/Rakuten_Streamlit/demo.py +++ /dev/null @@ -1,604 +0,0 @@ -# -*- coding: utf-8 -*- -""" -Created on Sat Mar 12 11:30:24 2022 - -@author: MME -""" -import streamlit as st -import demo_inputs -import cleaning_text -from PIL import Image -from io import BytesIO -import numpy as np -import cv2 - - -def app(): - - - st.title("Prédictions (Démo)") - contenent_page(text_page ='./page_descriptions/demo_txt.md') - - -def contenent_page(text_page): - - '''The text page. 
Read from .md file ''' - with open(text_page, 'r', encoding='utf-8') as txtpage: - txtpage = txtpage.read().split('---Insersetion---') - - st.markdown(txtpage[0], unsafe_allow_html=True) - - - classif_options = st.selectbox( - 'Veuillez sélectionner le type de classification', - (['', 'A partir du Texte', 'A partir d\'Images', 'A partir du Texte et d\'Images(Bimodal)'])) - - - if classif_options != '': - - if classif_options == 'A partir du Texte': - st.markdown(txtpage[1], unsafe_allow_html=True) - models_txt_options = st.selectbox( - 'Veuillez sélectionner un modèle', - (['','Conv1D', 'Simple DNN'])) - - - elif classif_options == 'A partir d\'Images': - st.markdown(txtpage[1], unsafe_allow_html=True) - models_img_options = st.selectbox( - 'Veuillez sélectionner un modèle', - (['', 'Xception', 'InceptionV3'])) - - - elif classif_options == 'A partir du Texte et d\'Images(Bimodal)': - st.markdown(txtpage[1], unsafe_allow_html=True) - models_bimod_options = st.selectbox( - 'Veuillez sélectionner une combinaison', - (['', 'Conv1D & Simple DNN & Xception', 'Conv1D & Simple DNN & InceptionV3'])) - - - st.markdown(txtpage[2], unsafe_allow_html=True) - - source_options = st.selectbox( - 'Veuillez sélectionner une source de données', - (['', 'Echantillon du jeu de données(Choix Aléatoire)', 'Manuelle' ])) - - -#***************************************************************************************************************************** -#------------------------------------------ Depuis le jeu de données d\'entrainement(Choix Aléatoire)-------------------------- -#***************************************************************************************************************************** - if classif_options == 'A partir du Texte' and models_txt_options == 'Conv1D': - if source_options == 'Echantillon du jeu de données(Choix Aléatoire)': - - if st.button('Obtenir un exemple et Classifier'): - design, descrip, im_name, text_cleaned, row_index = demo_inputs.get_random_row() - - st.text_input('Designation', design, disabled = True) - st.text_input('Description', descrip, disabled = True) - - pred_class , pred_label ,y_pred_proba = demo_inputs.predict_with_conv1D(text_cleaned) - prodcode, label = demo_inputs.get_real_class_info(row_index) - - col1, col2 = st.columns(2) - - with col1: - msg1 = 'La classe de produit prédite: ' + str(pred_class) + '' - msg2 = 'La catégorie prédite : ' + pred_label + '' - - precision = np.amax(y_pred_proba) - precision = precision * 100 - precision = np.round(precision,2) - - msg3 = 'Certitude : ' + str(precision) +'%'+ '' - st.markdown(msg1, unsafe_allow_html=True) - st.markdown(msg2, unsafe_allow_html=True) - st.markdown(msg3, unsafe_allow_html=True) - - with col2: - msg4 = 'Classe rèelle du produit: ' + str(prodcode) + '' - msg5 = 'Catégorie rèelle du produit : ' + str(label) + '' - st.markdown(msg4, unsafe_allow_html=True) - st.markdown(msg5, unsafe_allow_html=True) - -# #**************************************************************************************************************** - if classif_options == 'A partir du Texte' and models_txt_options == 'Simple DNN': - if source_options == 'Echantillon du jeu de données(Choix Aléatoire)': - - if st.button('Obtenir un exemple et Classifier'): - design, descrip, im_name, text_cleaned,row_index = demo_inputs.get_random_row() - - st.text_input('Designation', design, disabled = True) - st.text_input('Description', descrip, disabled = True) - - pred_class , pred_label ,y_pred_proba = 
demo_inputs.predict_with_simpDNN(text_cleaned) - prodcode, label = demo_inputs.get_real_class_info(row_index) - - col1, col2 = st.columns(2) - - with col1: - msg1 = 'La classe de produit prédite: ' + str(pred_class) + '' - msg2 = 'La catégorie prédite : ' + pred_label + '' - - precision = np.amax(y_pred_proba) - precision = precision * 100 - precision = np.round(precision,2) - - msg3 = 'Certitude : ' + str(precision) +'%'+ '' - st.markdown(msg1, unsafe_allow_html=True) - st.markdown(msg2, unsafe_allow_html=True) - st.markdown(msg3, unsafe_allow_html=True) - - with col2: - msg4 = 'Classe rèelle du produit: ' + str(prodcode) + '' - msg5 = 'Catégorie rèelle du produit : ' + str(label) + '' - st.markdown(msg4, unsafe_allow_html=True) - st.markdown(msg5, unsafe_allow_html=True) - -# #**************************************************************************************************************** - if classif_options == 'A partir d\'Images' and models_img_options == 'Xception': - if source_options == 'Echantillon du jeu de données(Choix Aléatoire)': - - if st.button('Obtenir un exemple et Classifier'): - - design, descrip, im_name, text_cleaned, row_index = demo_inputs.get_random_row() - - image = Image.open(demo_inputs.images_dir + im_name) - st.image(image, channels="BGR",width=299,caption= im_name) - - image = demo_inputs.prepare_image(image, target=(299, 299)) - - pred_class ,pred_label,y_pred_proba = demo_inputs.predict_with_xception(image) - prodcode, label = demo_inputs.get_real_class_info(row_index) - - col1, col2 = st.columns(2) - - with col1: - msg1 = 'La classe de produit prédite: ' + str(pred_class) + '' - msg2 = 'La catégorie prédite : ' + pred_label + '' - - precision = np.amax(y_pred_proba) - precision = precision * 100 - precision = np.round(precision,2) - - msg3 = 'Certitude : ' + str(precision) +'%'+ '' - st.markdown(msg1, unsafe_allow_html=True) - st.markdown(msg2, unsafe_allow_html=True) - st.markdown(msg3, unsafe_allow_html=True) - - with col2: - msg4 = 'Classe rèelle du produit: ' + str(prodcode) + '' - msg5 = 'Catégorie rèelle du produit : ' + str(label) + '' - st.markdown(msg4, unsafe_allow_html=True) - st.markdown(msg5, unsafe_allow_html=True) - -# #**************************************************************************************************************** - if classif_options == 'A partir d\'Images' and models_img_options == 'InceptionV3': - if source_options == 'Echantillon du jeu de données(Choix Aléatoire)': - - if st.button('Obtenir un exemple et Classifier'): - - design, descrip, im_name, text_cleaned, row_index = demo_inputs.get_random_row() - - image = Image.open(demo_inputs.images_dir + im_name) - st.image(image, channels="BGR",width=299,caption= im_name) - - image = demo_inputs.prepare_image(image, target=(299, 299)) - - pred_class ,pred_label,y_pred_proba = demo_inputs.predict_with_inception(image) - prodcode, label = demo_inputs.get_real_class_info(row_index) - - col1, col2 = st.columns(2) - - with col1: - msg1 = 'La classe de produit prédite: ' + str(pred_class) + '' - msg2 = 'La catégorie prédite : ' + pred_label + '' - - precision = np.amax(y_pred_proba) - precision = precision * 100 - precision = np.round(precision,2) - - msg3 = 'Certitude : ' + str(precision) +'%'+ '' - st.markdown(msg1, unsafe_allow_html=True) - st.markdown(msg2, unsafe_allow_html=True) - st.markdown(msg3, unsafe_allow_html=True) - - with col2: - msg4 = 'Classe rèelle du produit: ' + str(prodcode) + '' - msg5 = 'Catégorie rèelle du produit : ' + str(label) + '' - st.markdown(msg4, 
unsafe_allow_html=True) - st.markdown(msg5, unsafe_allow_html=True) - -##************************************************************************************************************************ -# Bimodal -##************************************************************************************************************************ - if classif_options == 'A partir du Texte et d\'Images(Bimodal)' and models_bimod_options == 'Conv1D & Simple DNN & Xception': - if source_options == 'Echantillon du jeu de données(Choix Aléatoire)': - - if st.button('Obtenir un exemple et Classifier'): - - design, descrip, im_name, text_cleaned,row_index= demo_inputs.get_random_row() - - st.text_input('Designation', design, disabled = True) - st.text_input('Description', descrip, disabled = True) - - image = Image.open(demo_inputs.images_dir + im_name) - st.image(image, channels="BGR",width=299,caption= im_name) - - image = demo_inputs.prepare_image(image, target=(299, 299)) - - pred_class ,pred_label, y_pred_proba = demo_inputs.predict_conv1D_simp_DNN_xception(text_cleaned, image) - prodcode, label = demo_inputs.get_real_class_info(row_index) - - pred_class_conv1 , pred_label_conv1 ,y_pred_proba_conv1 = demo_inputs.predict_with_conv1D(text_cleaned) - pred_class_sDNN , pred_label_sDNN ,y_pred_proba_sDNN = demo_inputs.predict_with_simpDNN(text_cleaned) - pred_class_Xcep ,pred_label_Xcep,y_pred_proba_Xcep = demo_inputs.predict_with_xception(image) - - col1, col2, col3 = st.columns(3) - - with col1: - msg1 = 'La classe de produit prédite: ' + str(pred_class) + '' - msg2 = 'La catégorie prédite : ' + pred_label + '' - - precision = np.amax(y_pred_proba) - precision = precision * 100 - precision = np.round(precision,2) - - msg3 = 'Certitude : ' + str(precision) +'%'+ '' - st.markdown(msg1, unsafe_allow_html=True) - st.markdown(msg2, unsafe_allow_html=True) - st.markdown(msg3, unsafe_allow_html=True) - - with col2: - msg4 = 'La classe de produit et la catégorie prédites à partir du Texte avec le modèle Conv1D: ' + str(pred_class_conv1) +', ' +pred_label_conv1 + '' - st.markdown(msg4, unsafe_allow_html=True) - msg6 = 'La classe de produit et la catégorie prédites à partir du Texte avec le modèle Simple DNN: ' + str(pred_class_sDNN) +', ' +pred_label_sDNN + '' - st.markdown(msg6, unsafe_allow_html=True) - msg8 = 'La classe de produit et la catégorie prédites à partir de l\'Image avec le modèle Xception(CNN): ' + str(pred_class_Xcep) +', ' +pred_label_Xcep + '' - st.markdown(msg8, unsafe_allow_html=True) - - with col3: - msg10 = 'Classe rèelle du produit: ' + str(prodcode) + '' - msg11 = 'Catégorie rèelle du produit : ' + str(label) + '' - st.markdown(msg10, unsafe_allow_html=True) - st.markdown(msg11, unsafe_allow_html=True) - -##********************************************************************************************************************** - if classif_options == 'A partir du Texte et d\'Images(Bimodal)' and models_bimod_options == 'Conv1D & Simple DNN & InceptionV3': - if source_options == 'Echantillon du jeu de données(Choix Aléatoire)': - - if st.button('Obtenir un exemple et Classifier'): - - design, descrip, im_name, text_cleaned,row_index= demo_inputs.get_random_row() - - st.text_input('Designation', design, disabled = True) - st.text_input('Description', descrip, disabled = True) - - image = Image.open(demo_inputs.images_dir + im_name) - st.image(image, channels="BGR",width=299,caption= im_name) - - image = demo_inputs.prepare_image(image, target=(299, 299)) - - pred_class ,pred_label, y_pred_proba = 
demo_inputs.predict_conv1D_simp_DNN_inception(text_cleaned, image) - prodcode, label = demo_inputs.get_real_class_info(row_index) - - pred_class_conv1 , pred_label_conv1 ,y_pred_proba_conv1 = demo_inputs.predict_with_conv1D(text_cleaned) - pred_class_sDNN , pred_label_sDNN ,y_pred_proba_sDNN = demo_inputs.predict_with_simpDNN(text_cleaned) - pred_class_incep ,pred_label_incep,y_pred_proba_incep = demo_inputs.predict_with_inception(image) - - col1, col2, col3 = st.columns(3) - - with col1: - msg1 = 'La classe de produit prédite: ' + str(pred_class) + '' - msg2 = 'La catégorie prédite : ' + pred_label + '' - - precision = np.amax(y_pred_proba) - precision = precision * 100 - precision = np.round(precision,2) - - msg3 = 'Certitude : ' + str(precision) +'%'+ '' - st.markdown(msg1, unsafe_allow_html=True) - st.markdown(msg2, unsafe_allow_html=True) - st.markdown(msg3, unsafe_allow_html=True) - - with col2: - msg4 = 'La classe de produit et la catégorie prédites à partir du Texte avec le modèle Conv1D: ' + str(pred_class_conv1) +', ' +pred_label_conv1 + '' - st.markdown(msg4, unsafe_allow_html=True) - msg6 = 'La classe de produit et la catégorie prédites à partir du Texte avec le modèle Simple DNN: ' + str(pred_class_sDNN) +', ' +pred_label_sDNN + '' - st.markdown(msg6, unsafe_allow_html=True) - msg8 = 'La classe de produit et la catégorie prédites à partir de l\'Image avec le modèle InceptionV3(CNN): ' + str(pred_class_incep) +', ' +pred_label_incep + '' - st.markdown(msg8, unsafe_allow_html=True) - - - with col3: - msg10 = 'Classe rèelle du produit: ' + str(prodcode) + '' - msg11 = 'Catégorie rèelle du produit : ' + str(label) + '' - st.markdown(msg10, unsafe_allow_html=True) - st.markdown(msg11, unsafe_allow_html=True) - -##**************************************************************************************************************************** -##----------------------------------------------Manuel------------------------------------------------------------------------ -##**************************************************************************************************************************** - if classif_options == 'A partir du Texte' and models_txt_options == 'Conv1D' : - if source_options == 'Manuelle': - - user_Desig_input = st.text_area('Designation (obligatoire)', ) - user_Descrip_input = st.text_area('Description (optionnel)', ) - - if st.button('Classifier'): - - if user_Desig_input == "" or user_Desig_input.isspace(): - st.write('Veuillez saisir le champ "Designation"' ) - - else : - try: - df = cleaning_text.createdfManuel(user_Desig_input, user_Descrip_input) - df_cleaned = cleaning_text.CreateTextANDcleaning(df) - - pred_class , pred_label ,y_pred_proba = demo_inputs.predict_with_conv1D(df_cleaned) - - precision = np.amax(y_pred_proba) - precision = precision * 100 - precision = np.round(precision,2) - - msg1 = 'La classe de produit prédite: ' + str(pred_class) + '' - msg2 = 'La catégorie prédite : ' + pred_label + '' - msg3 = 'Certitude : ' + str(precision) +'%'+ '' - - st.markdown(msg1, unsafe_allow_html=True) - st.markdown(msg2, unsafe_allow_html=True) - st.markdown(msg3, unsafe_allow_html=True) - - except Exception as e: - msgError = 'Le informations texte fournies ne sont pas valides' - st.markdown(msgError, unsafe_allow_html=True) - -##**************************************************************************************************************** -##**************************************************************************************************************** - if 
classif_options == 'A partir du Texte' and models_txt_options == 'Simple DNN' : - if source_options == 'Manuelle': - - user_Desig_input = st.text_area('Designation (obligatoire)', ) - user_Descrip_input = st.text_area('Description (optionnel)', ) - - if st.button('Classifier'): - - if user_Desig_input == "" or user_Desig_input.isspace(): - st.write('Veuillez saisir le champ "Designation"' ) - - else : - try: - df = cleaning_text.createdfManuel(user_Desig_input, user_Descrip_input) - df_cleaned = cleaning_text.CreateTextANDcleaning(df) - - pred_class , pred_label ,y_pred_proba = demo_inputs.predict_with_simpDNN(df_cleaned) - - precision = np.amax(y_pred_proba) - precision = precision * 100 - precision = np.round(precision,2) - - msg1 = 'La classe de produit prédite: ' + str(pred_class) + '' - msg2 = 'La catégorie prédite : ' + pred_label + '' - msg3 = 'Certitude : ' + str(precision) +'%'+ '' - st.markdown(msg1, unsafe_allow_html=True) - st.markdown(msg2, unsafe_allow_html=True) - st.markdown(msg3, unsafe_allow_html=True) - - except Exception as e: - msgError = 'Le informations texte fournies ne sont pas valides' - st.markdown(msgError, unsafe_allow_html=True) - - -##**************************************************************************************************************** -##**************************************************************************************************************** - if classif_options == 'A partir d\'Images' and models_img_options == 'Xception' : - if source_options == 'Manuelle': - - uploaded_file = st.file_uploader("Sélectionner un fichier image") - - if st.button('Classifier'): - - if uploaded_file is None: - st.write('Veuillez charger une image') - - elif uploaded_file is not None: - content = uploaded_file.read() - - try: - image = Image.open(BytesIO(content)).convert("RGB") - st.image(image, channels="BGR",width=299) - - image = demo_inputs.prepare_image(image, target=(299, 299)) - - pred_class ,pred_label,y_pred_proba = demo_inputs.predict_with_xception_manu(image) - - precision = np.amax(y_pred_proba) - precision = precision * 100 - precision = np.round(precision,2) - - msg1 = 'La classe de produit prédite: ' + str(pred_class) + '' - msg2 = 'La catégorie prédite : ' + pred_label + '' - msg3 = 'Certitude : ' + str(precision) +'%'+ '' - - st.markdown(msg1, unsafe_allow_html=True) - st.markdown(msg2, unsafe_allow_html=True) - st.markdown(msg3, unsafe_allow_html=True) - - except IOError: - msgError = 'Le fichier fourni n\'est pas une image valide' - st.markdown(msgError, unsafe_allow_html=True) - -##**************************************************************************************************************** -##**************************************************************************************************************** - if classif_options == 'A partir d\'Images' and models_img_options == 'InceptionV3' : - if source_options == 'Manuelle': - - uploaded_file = st.file_uploader("Sélectionner un fichier image") - - if st.button('Classifier'): - - if uploaded_file is None: - st.write('Veuillez charger une image') - - elif uploaded_file is not None: - content = uploaded_file.read() - - try: - image = Image.open(BytesIO(content)).convert("RGB") - st.image(image, channels="BGR",width=299) - - image = demo_inputs.prepare_image(image, target=(299, 299)) - - pred_class ,pred_label,y_pred_proba = demo_inputs.predict_with_inception_manu(image) - - precision = np.amax(y_pred_proba) - precision = precision * 100 - precision = np.round(precision,2) - - msg1 = 'La classe 
de produit prédite: ' + str(pred_class) + '' - msg2 = 'La catégorie prédite : ' + pred_label + '' - msg3 = 'Certitude : ' + str(precision) +'%'+ '' - - st.markdown(msg1, unsafe_allow_html=True) - st.markdown(msg2, unsafe_allow_html=True) - st.markdown(msg3, unsafe_allow_html=True) - - except IOError: - msgError = 'Le fichier fourni n\'est pas une image valide' - st.markdown(msgError, unsafe_allow_html=True) - -##**************************************************************************************************************** -##**************************************************************************************************************** - if classif_options == 'A partir du Texte et d\'Images(Bimodal)' and models_bimod_options == 'Conv1D & Simple DNN & Xception' : - if source_options == 'Manuelle': - - user_Desig_input = st.text_area('Designation (obligatoire)', ) - user_Descrip_input = st.text_area('Description (optionnel)', ) - uploaded_file = st.file_uploader("Sélectionner un fichier image") - - - if st.button('Classifier'): - - if (user_Desig_input == "" or user_Desig_input.isspace()) and uploaded_file is None: - st.write('Veuillez renseigner le champ "Designation" et charger une image' ) - - elif uploaded_file is None: - st.write('Veuillez charger une image') - - elif user_Desig_input == "" or user_Desig_input.isspace(): - st.write('Veuillez renseigner le champ "Designation"') - - else : - try: - content = uploaded_file.read() - image = Image.open(BytesIO(content)).convert("RGB") - st.image(image, channels="BGR",width=299) - image = demo_inputs.prepare_image(image, target=(299, 299)) - - except IOError: - msgError = 'Le fichier fourni n\'est pas une image valide' - st.markdown(msgError, unsafe_allow_html=True) - #pass - else: - try: - df = cleaning_text.createdfManuel(user_Desig_input, user_Descrip_input) - df_cleaned = cleaning_text.CreateTextANDcleaning(df) - except Exception as e: - msgError = 'Le informations texte fournies ne sont pas valides' - st.markdown(msgError, unsafe_allow_html=True) - - else: - pred_class ,pred_label, proba = demo_inputs.predict_conv1D_simp_DNN_xception_manu(df_cleaned, image) - pred_class_conv1 , pred_label_conv1 ,y_pred_proba_conv1 = demo_inputs.predict_with_conv1D(df_cleaned) - pred_class_sDNN , pred_label_sDNN ,y_pred_proba_sDNN = demo_inputs.predict_with_simpDNN(df_cleaned) - pred_class_Xcep ,pred_label_Xcep,y_pred_proba_Xcep = demo_inputs.predict_with_xception_manu(image) - - col1, col2= st.columns(2) - - with col1: - msg1 = 'La classe de produit prédite: ' + str(pred_class) + '' - msg2 = 'La catégorie prédite : ' + pred_label + '' - - precision = np.amax(proba) - precision = precision * 100 - precision = np.round(precision,2) - - msg3 = 'Certitude : ' + str(precision) +'%'+ '' - st.markdown(msg1, unsafe_allow_html=True) - st.markdown(msg2, unsafe_allow_html=True) - st.markdown(msg3, unsafe_allow_html=True) - - with col2: - msg4 = 'La classe de produit et la catégorie prédites à partir du Texte avec le modèle Conv1D: ' + str(pred_class_conv1) +', ' +pred_label_conv1 + '' - st.markdown(msg4, unsafe_allow_html=True) - msg6 = 'La classe de produit et la catégorie prédites à partir du Texte avec le modèle Simple DNN: ' + str(pred_class_sDNN) +', ' +pred_label_sDNN + '' - st.markdown(msg6, unsafe_allow_html=True) - msg7 = 'La classe de produit et la catégorie prédites à partir de l\'Image avec le modèle Xception(CNN): ' + str(pred_class_Xcep) +', ' +pred_label_Xcep + '' - st.markdown(msg7, unsafe_allow_html=True) - - 
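- # col1 carries the fused bimodal prediction; col2 repeats the three
- # single-model predictions (Conv1D, Simple DNN, Xception) side by side
- # so the fusion can be compared against its components.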
-##**************************************************************************************************************** - if classif_options == 'A partir du Texte et d\'Images(Bimodal)' and models_bimod_options == 'Conv1D & Simple DNN & InceptionV3' : - if source_options == 'Manuelle': - - user_Desig_input = st.text_area('Designation (obligatoire)', ) - user_Descrip_input = st.text_area('Description (optionnel)', ) - uploaded_file = st.file_uploader("Sélectionner un fichier image") - - if st.button('Classifier'): - - if (user_Desig_input == "" or user_Desig_input.isspace()) and uploaded_file is None: - st.write('Veuillez renseigner le champ "Designation" et charger une image' ) - - elif uploaded_file is None: - st.write('Veuillez charger une image') - - elif user_Desig_input == "" or user_Desig_input.isspace(): - st.write('Veuillez renseigner le champ "Designation"') - - else : - try: - content = uploaded_file.read() - image = Image.open(BytesIO(content)).convert("RGB") - st.image(image, channels="BGR",width=299) - image = demo_inputs.prepare_image(image, target=(299, 299)) - - except IOError: - msgError = 'Le fichier fourni n\'est pas une image valide' - st.markdown(msgError, unsafe_allow_html=True) - #pass - else: - try: - df = cleaning_text.createdfManuel(user_Desig_input, user_Descrip_input) - df_cleaned = cleaning_text.CreateTextANDcleaning(df) - except Exception as e: - msgError = 'Le informations texte fournies ne sont pas valides' - st.markdown(msgError, unsafe_allow_html=True) - - else: - pred_class ,pred_label, proba = demo_inputs.predict_conv1D_simp_DNN_inception_manu(df_cleaned, image) - pred_class_conv1 , pred_label_conv1 ,y_pred_proba_conv1 = demo_inputs.predict_with_conv1D(df_cleaned) - pred_class_sDNN , pred_label_sDNN ,y_pred_proba_sDNN = demo_inputs.predict_with_simpDNN(df_cleaned) - pred_class_Xcep ,pred_label_Xcep,y_pred_proba_Xcep = demo_inputs.predict_with_inception_manu(image) - - col1, col2= st.columns(2) - - with col1: - msg1 = 'La classe de produit prédite: ' + str(pred_class) + '' - msg2 = 'La catégorie prédite : ' + pred_label + '' - - precision = np.amax(proba) - precision = precision * 100 - precision = np.round(precision,2) - - msg3 = 'Certitude : ' + str(precision) +'%'+ '' - st.markdown(msg1, unsafe_allow_html=True) - st.markdown(msg2, unsafe_allow_html=True) - st.markdown(msg3, unsafe_allow_html=True) - - with col2: - msg4 = 'La classe de produit et la catégorie prédites à partir du Texte avec le modèle Conv1D: ' + str(pred_class_conv1) +', ' +pred_label_conv1 + '' - st.markdown(msg4, unsafe_allow_html=True) - msg6 = 'La classe de produit et la catégorie prédites à partir du Texte avec le modèle Simple DNN: ' + str(pred_class_sDNN) +', ' +pred_label_sDNN + '' - st.markdown(msg6, unsafe_allow_html=True) - msg7 = 'La classe de produit et la catégorie prédites à partir de l\'Image avec le modèle Inception(CNN): ' + str(pred_class_Xcep) +', ' +pred_label_Xcep + '' - st.markdown(msg7, unsafe_allow_html=True) diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/multilingual/data_scripts/dedup_all.py b/spaces/mshukor/UnIVAL/fairseq/examples/multilingual/data_scripts/dedup_all.py deleted file mode 100644 index ef39c05ee606aaeda1d9e94970932d2241a8b281..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/multilingual/data_scripts/dedup_all.py +++ /dev/null @@ -1,52 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - - -import os -import glob -import argparse -from utils.dedup import deup - -import sys -WORKDIR_ROOT = os.environ.get('WORKDIR_ROOT', None) - -if WORKDIR_ROOT is None or not WORKDIR_ROOT.strip(): - print('please specify your working directory root in OS environment variable WORKDIR_ROOT. Exitting..."') - sys.exit(-1) - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("--from-folder", type=str, required=True, - help="the data folder to be dedup") - parser.add_argument("--to-folder", type=str, required=True, - help="the data folder to save deduped data") - parser.add_argument('--directions', type=str, default=None, required=False) - - args = parser.parse_args() - - if args.directions is None: - raw_files = glob.glob(f'{args.from_folder}/train*') - - directions = [os.path.split(file_path)[-1].split('.')[1] for file_path in raw_files] - else: - directions = args.directions.split(',') - directions = sorted(set(directions)) - - for direction in directions: - src, tgt = direction.split('-') - src_file = f'{args.from_folder}/train.{src}-{tgt}.{src}' - tgt_file = f'{args.from_folder}/train.{src}-{tgt}.{tgt}' - src_file_out = f'{args.to_folder}/train.{src}-{tgt}.{src}' - tgt_file_out = f'{args.to_folder}/train.{src}-{tgt}.{tgt}' - assert src_file != src_file_out - assert tgt_file != tgt_file_out - print(f'deduping {src_file}, {tgt_file}') - deup(src_file, tgt_file, src_file_out, tgt_file_out) - - -if __name__ == "__main__": - main() diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/speech_synthesis/evaluation/eval_f0.py b/spaces/mshukor/UnIVAL/fairseq/examples/speech_synthesis/evaluation/eval_f0.py deleted file mode 100644 index df721d683113b44957149cfc3cddaba36520a22c..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/speech_synthesis/evaluation/eval_f0.py +++ /dev/null @@ -1,266 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -""" -Signal processing-based evaluation using waveforms -""" -import numpy as np -import os.path as op - -import torchaudio -import tqdm -from tabulate import tabulate - -from examples.speech_synthesis.utils import ( - gross_pitch_error, voicing_decision_error, f0_frame_error -) -from examples.speech_synthesis.evaluation.eval_sp import load_eval_spec - - -def difference_function(x, n, tau_max): - """ - Compute difference function of data x. This solution is implemented directly - with Numpy fft. - - - :param x: audio data - :param n: length of data - :param tau_max: integration window size - :return: difference function - :rtype: list - """ - - x = np.array(x, np.float64) - w = x.size - tau_max = min(tau_max, w) - x_cumsum = np.concatenate((np.array([0.]), (x * x).cumsum())) - size = w + tau_max - p2 = (size // 32).bit_length() - nice_numbers = (16, 18, 20, 24, 25, 27, 30, 32) - size_pad = min(x * 2 ** p2 for x in nice_numbers if x * 2 ** p2 >= size) - fc = np.fft.rfft(x, size_pad) - conv = np.fft.irfft(fc * fc.conjugate())[:tau_max] - return x_cumsum[w:w - tau_max:-1] + x_cumsum[w] - x_cumsum[:tau_max] - \ - 2 * conv - - -def cumulative_mean_normalized_difference_function(df, n): - """ - Compute cumulative mean normalized difference function (CMND). 
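- Concretely: cmnd(tau) = d(tau) * tau / sum_{j=1..tau} d(j) for tau >= 1,
- with cmnd(0) defined as 1 (YIN's cumulative-mean normalization step).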
- - :param df: Difference function - :param n: length of data - :return: cumulative mean normalized difference function - :rtype: list - """ - - # scipy method - cmn_df = df[1:] * range(1, n) / np.cumsum(df[1:]).astype(float) - return np.insert(cmn_df, 0, 1) - - -def get_pitch(cmdf, tau_min, tau_max, harmo_th=0.1): - """ - Return fundamental period of a frame based on CMND function. - - :param cmdf: Cumulative Mean Normalized Difference function - :param tau_min: minimum period for speech - :param tau_max: maximum period for speech - :param harmo_th: harmonicity threshold to determine if it is necessary to - compute pitch frequency - :return: fundamental period if there is values under threshold, 0 otherwise - :rtype: float - """ - tau = tau_min - while tau < tau_max: - if cmdf[tau] < harmo_th: - while tau + 1 < tau_max and cmdf[tau + 1] < cmdf[tau]: - tau += 1 - return tau - tau += 1 - - return 0 # if unvoiced - - -def compute_yin(sig, sr, w_len=512, w_step=256, f0_min=100, f0_max=500, - harmo_thresh=0.1): - """ - - Compute the Yin Algorithm. Return fundamental frequency and harmonic rate. - - https://github.com/NVIDIA/mellotron adaption of - https://github.com/patriceguyot/Yin - - :param sig: Audio signal (list of float) - :param sr: sampling rate (int) - :param w_len: size of the analysis window (samples) - :param w_step: size of the lag between two consecutives windows (samples) - :param f0_min: Minimum fundamental frequency that can be detected (hertz) - :param f0_max: Maximum fundamental frequency that can be detected (hertz) - :param harmo_thresh: Threshold of detection. The yalgorithmù return the - first minimum of the CMND function below this threshold. - - :returns: - - * pitches: list of fundamental frequencies, - * harmonic_rates: list of harmonic rate values for each fundamental - frequency value (= confidence value) - * argmins: minimums of the Cumulative Mean Normalized DifferenceFunction - * times: list of time of each estimation - :rtype: tuple - """ - - tau_min = int(sr / f0_max) - tau_max = int(sr / f0_min) - - # time values for each analysis window - time_scale = range(0, len(sig) - w_len, w_step) - times = [t/float(sr) for t in time_scale] - frames = [sig[t:t + w_len] for t in time_scale] - - pitches = [0.0] * len(time_scale) - harmonic_rates = [0.0] * len(time_scale) - argmins = [0.0] * len(time_scale) - - for i, frame in enumerate(frames): - # Compute YIN - df = difference_function(frame, w_len, tau_max) - cm_df = cumulative_mean_normalized_difference_function(df, tau_max) - p = get_pitch(cm_df, tau_min, tau_max, harmo_thresh) - - # Get results - if np.argmin(cm_df) > tau_min: - argmins[i] = float(sr / np.argmin(cm_df)) - if p != 0: # A pitch was found - pitches[i] = float(sr / p) - harmonic_rates[i] = cm_df[p] - else: # No pitch, but we compute a value of the harmonic rate - harmonic_rates[i] = min(cm_df) - - return pitches, harmonic_rates, argmins, times - - -def extract_f0(samples): - f0_samples = [] - for sample in tqdm.tqdm(samples): - if not op.isfile(sample["ref"]) or not op.isfile(sample["syn"]): - f0_samples.append(None) - continue - - # assume single channel - yref, sr = torchaudio.load(sample["ref"]) - ysyn, _sr = torchaudio.load(sample["syn"]) - yref, ysyn = yref[0], ysyn[0] - assert sr == _sr, f"{sr} != {_sr}" - - yref_f0 = compute_yin(yref, sr) - ysyn_f0 = compute_yin(ysyn, sr) - - f0_samples += [ - { - "ref": yref_f0, - "syn": ysyn_f0 - } - ] - - return f0_samples - - -def eval_f0_error(samples, distortion_fn): - results = [] - for sample in 
tqdm.tqdm(samples): - if sample is None: - results.append(None) - continue - # assume single channel - yref_f, _, _, yref_t = sample["ref"] - ysyn_f, _, _, ysyn_t = sample["syn"] - - yref_f = np.array(yref_f) - yref_t = np.array(yref_t) - ysyn_f = np.array(ysyn_f) - ysyn_t = np.array(ysyn_t) - - distortion = distortion_fn(yref_t, yref_f, ysyn_t, ysyn_f) - results.append((distortion.item(), - len(yref_f), - len(ysyn_f) - )) - return results - - -def eval_gross_pitch_error(samples): - return eval_f0_error(samples, gross_pitch_error) - - -def eval_voicing_decision_error(samples): - return eval_f0_error(samples, voicing_decision_error) - - -def eval_f0_frame_error(samples): - return eval_f0_error(samples, f0_frame_error) - - -def print_results(results, show_bin): - results = np.array(list(filter(lambda x: x is not None, results))) - - np.set_printoptions(precision=3) - - def _print_result(results): - res = { - "nutt": len(results), - "error": results[:, 0].mean(), - "std": results[:, 0].std(), - "dur_ref": int(results[:, 1].sum()), - "dur_syn": int(results[:, 2].sum()), - } - print(tabulate([res.values()], res.keys(), floatfmt=".4f")) - - print(">>>> ALL") - _print_result(results) - - if show_bin: - edges = [0, 200, 400, 600, 800, 1000, 2000, 4000] - for i in range(1, len(edges)): - mask = np.logical_and(results[:, 1] >= edges[i-1], - results[:, 1] < edges[i]) - if not mask.any(): - continue - bin_results = results[mask] - print(f">>>> ({edges[i-1]}, {edges[i]})") - _print_result(bin_results) - - -def main(eval_f0, gpe, vde, ffe, show_bin): - samples = load_eval_spec(eval_f0) - if gpe or vde or ffe: - f0_samples = extract_f0(samples) - - if gpe: - print("===== Evaluate Gross Pitch Error =====") - results = eval_gross_pitch_error(f0_samples) - print_results(results, show_bin) - if vde: - print("===== Evaluate Voicing Decision Error =====") - results = eval_voicing_decision_error(f0_samples) - print_results(results, show_bin) - if ffe: - print("===== Evaluate F0 Frame Error =====") - results = eval_f0_frame_error(f0_samples) - print_results(results, show_bin) - - -if __name__ == "__main__": - import argparse - - parser = argparse.ArgumentParser() - parser.add_argument("eval_f0") - parser.add_argument("--gpe", action="store_true") - parser.add_argument("--vde", action="store_true") - parser.add_argument("--ffe", action="store_true") - parser.add_argument("--show-bin", action="store_true") - args = parser.parse_args() - - main(args.eval_f0, args.gpe, args.vde, args.ffe, args.show_bin) diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq_cli/__init__.py b/spaces/mshukor/UnIVAL/fairseq/fairseq_cli/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/musicians/deepharmony/lib/MusicXMLParser.py b/spaces/musicians/deepharmony/lib/MusicXMLParser.py deleted file mode 100644 index aa212106c31e4aa764b56c275ef3b1ea5664d09a..0000000000000000000000000000000000000000 --- a/spaces/musicians/deepharmony/lib/MusicXMLParser.py +++ /dev/null @@ -1,1261 +0,0 @@ -# Copyright 2022 The Magenta Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""MusicXML parser. - -Simple MusicXML parser used to convert MusicXML into NoteSequence. -""" - -import fractions -import xml.etree.ElementTree as ET -import zipfile - -import constants - -Fraction = fractions.Fraction - -DEFAULT_MIDI_PROGRAM = 0 # Default MIDI Program (0 = grand piano) -DEFAULT_MIDI_CHANNEL = 0 # Default MIDI Channel (0 = first channel) -MUSICXML_MIME_TYPE = 'application/vnd.recordare.musicxml+xml' - - -class MusicXMLParseError(Exception): - """Exception thrown when the MusicXML contents cannot be parsed.""" - pass - - -class PitchStepParseError(MusicXMLParseError): - """Exception thrown when a pitch step cannot be parsed. - - Will happen if pitch step is not one of A, B, C, D, E, F, or G - """ - pass - - -class ChordSymbolParseError(MusicXMLParseError): - """Exception thrown when a chord symbol cannot be parsed.""" - pass - - -class MultipleTimeSignatureError(MusicXMLParseError): - """Exception thrown when multiple time signatures found in a measure.""" - pass - - -class AlternatingTimeSignatureError(MusicXMLParseError): - """Exception thrown when an alternating time signature is encountered.""" - pass - - -class TimeSignatureParseError(MusicXMLParseError): - """Exception thrown when the time signature could not be parsed.""" - pass - - -class UnpitchedNoteError(MusicXMLParseError): - """Exception thrown when an unpitched note is encountered. - - We do not currently support parsing files with unpitched notes (e.g., - percussion scores). - - http://www.musicxml.com/tutorial/percussion/unpitched-notes/ - """ - pass - - -class KeyParseError(MusicXMLParseError): - """Exception thrown when a key signature cannot be parsed.""" - pass - - -class InvalidNoteDurationTypeError(MusicXMLParseError): - """Exception thrown when a note's duration type is invalid.""" - pass - - -class MusicXMLParserState(object): - """Maintains internal state of the MusicXML parser.""" - - def __init__(self): - # Default to one division per measure - # From the MusicXML documentation: "The divisions element indicates - # how many divisions per quarter note are used to indicate a note's - # duration. For example, if duration = 1 and divisions = 2, - # this is an eighth note duration." - self.divisions = 1 - - # Default to a tempo of 120 quarter notes per minute - # MusicXML calls this tempo, but Magenta calls this qpm - # Therefore, the variable is called qpm, but reads the - # MusicXML tempo attribute - # (120 qpm is the default tempo according to the - # Standard MIDI Files 1.0 Specification) - self.qpm = 120 - - # Duration of a single quarter note in seconds - self.seconds_per_quarter = 0.5 - - # Running total of time for the current event in seconds. - # Resets to 0 on every part. 
Affected by and elements - self.time_position = 0 - - # Default to a MIDI velocity of 64 (mf) - self.velocity = 64 - - # Default MIDI program (0 = grand piano) - self.midi_program = DEFAULT_MIDI_PROGRAM - - # Current MIDI channel (usually equal to the part number) - self.midi_channel = DEFAULT_MIDI_CHANNEL - - # Keep track of previous note to get chord timing correct - # This variable stores an instance of the Note class (defined below) - self.previous_note = None - - # Keep track of current transposition level in +/- semitones. - self.transpose = 0 - - # Keep track of current time signature. Does not support polymeter. - self.time_signature = None - - -class MusicXMLDocument(object): - """Internal representation of a MusicXML Document. - - Represents the top level object which holds the MusicXML document - Responsible for loading the .xml or .mxl file using the _get_score method - If the file is .mxl, this class uncompresses it - - After the file is loaded, this class then parses the document into memory - using the parse method. - """ - - def __init__(self, filename): - self._score = self._get_score(filename) - self.parts = [] - # ScoreParts indexed by id. - self._score_parts = {} - self.midi_resolution = constants.STANDARD_PPQ - self._state = MusicXMLParserState() - # Total time in seconds - self.total_time_secs = 0 - self._parse() - - @staticmethod - def _get_score(score_string): - """Given a MusicXML file, return the score as an xml.etree.ElementTree. - - Given a MusicXML file, return the score as an xml.etree.ElementTree - If the file is compress (ends in .mxl), uncompress it first - - Args: - filename: The path of a MusicXML file - - Returns: - The score as an xml.etree.ElementTree. - - Raises: - MusicXMLParseError: if the file cannot be parsed. - """ - score = None - score = ET.fromstring(score_string) - return score - - def _parse(self): - """Parse the uncompressed MusicXML document.""" - # Parse part-list - xml_part_list = self._score.find('part-list') - if xml_part_list is not None: - for element in xml_part_list: - if element.tag == 'score-part': - score_part = ScorePart(element) - self._score_parts[score_part.id] = score_part - - # Parse parts - for score_part_index, child in enumerate(self._score.findall('part')): - part = Part(child, self._score_parts, self._state) - self.parts.append(part) - score_part_index += 1 - if self._state.time_position > self.total_time_secs: - self.total_time_secs = self._state.time_position - - def get_chord_symbols(self): - """Return a list of all the chord symbols used in this score.""" - chord_symbols = [] - for part in self.parts: - for measure in part.measures: - for chord_symbol in measure.chord_symbols: - if chord_symbol not in chord_symbols: - # Prevent duplicate chord symbols - chord_symbols.append(chord_symbol) - return chord_symbols - - def get_time_signatures(self): - """Return a list of all the time signatures used in this score. - - Does not support polymeter (i.e. assumes all parts have the same - time signature, such as Part 1 having a time signature of 6/8 - while Part 2 has a simultaneous time signature of 2/4). - - Ignores duplicate time signatures to prevent Magenta duplicate - time signature error. This happens when multiple parts have the - same time signature is used in multiple parts at the same time. - - Example: If Part 1 has a time siganture of 4/4 and Part 2 also - has a time signature of 4/4, then only instance of 4/4 is sent - to Magenta. - - Returns: - A list of all TimeSignature objects used in this score. 
- """ - time_signatures = [] - for part in self.parts: - for measure in part.measures: - if measure.time_signature is not None: - if measure.time_signature not in time_signatures: - # Prevent duplicate time signatures - time_signatures.append(measure.time_signature) - - return time_signatures - - def get_key_signatures(self): - """Return a list of all the key signatures used in this score. - - Support different key signatures in different parts (score in - written pitch). - - Ignores duplicate key signatures to prevent Magenta duplicate key - signature error. This happens when multiple parts have the same - key signature at the same time. - - Example: If the score is in written pitch and the - flute is written in the key of Bb major, the trombone will also be - written in the key of Bb major. However, the clarinet and trumpet - will be written in the key of C major because they are Bb transposing - instruments. - - If no key signatures are found, create a default key signature of - C major. - - Returns: - A list of all KeySignature objects used in this score. - """ - key_signatures = [] - for part in self.parts: - for measure in part.measures: - if measure.key_signature is not None: - if measure.key_signature not in key_signatures: - # Prevent duplicate key signatures - key_signatures.append(measure.key_signature) - - if not key_signatures: - # If there are no key signatures, add C major at the beginning - key_signature = KeySignature(self._state) - key_signature.time_position = 0 - key_signatures.append(key_signature) - - return key_signatures - - def get_tempos(self): - """Return a list of all tempos in this score. - - If no tempos are found, create a default tempo of 120 qpm. - - Returns: - A list of all Tempo objects used in this score. - """ - tempos = [] - - if self.parts: - part = self.parts[0] # Use only first part - for measure in part.measures: - for tempo in measure.tempos: - tempos.append(tempo) - - # If no tempos, add a default of 120 at beginning - if not tempos: - tempo = Tempo(self._state) - tempo.qpm = self._state.qpm - tempo.time_position = 0 - tempos.append(tempo) - - return tempos - - -class ScorePart(object): - """"Internal representation of a MusicXML . - - A element contains MIDI program and channel info - for the elements in the MusicXML document. - - If no MIDI info is found for the part, use the default MIDI channel (0) - and default to the Grand Piano program (MIDI Program #1). - """ - - def __init__(self, xml_score_part=None): - self.id = '' - self.part_name = '' - self.midi_channel = DEFAULT_MIDI_CHANNEL - self.midi_program = DEFAULT_MIDI_PROGRAM - if xml_score_part is not None: - self._parse(xml_score_part) - - def _parse(self, xml_score_part): - """Parse the element to an in-memory representation.""" - self.id = xml_score_part.attrib['id'] - - if xml_score_part.find('part-name') is not None: - self.part_name = xml_score_part.find('part-name').text or '' - - xml_midi_instrument = xml_score_part.find('midi-instrument') - if (xml_midi_instrument is not None and - xml_midi_instrument.find('midi-channel') is not None and - xml_midi_instrument.find('midi-program') is not None): - self.midi_channel = int(xml_midi_instrument.find('midi-channel').text) - self.midi_program = int(xml_midi_instrument.find('midi-program').text) - else: - # If no MIDI info, use the default MIDI channel. 
-
-  def __str__(self):
-    score_str = 'ScorePart: ' + self.part_name
-    score_str += ', Channel: ' + str(self.midi_channel)
-    score_str += ', Program: ' + str(self.midi_program)
-    return score_str
-
-
-class Part(object):
-  """Internal representation of a MusicXML <part> element."""
-
-  def __init__(self, xml_part, score_parts, state):
-    self.id = ''
-    self.score_part = None
-    self.measures = []
-    self._state = state
-    self._parse(xml_part, score_parts)
-
-  def _parse(self, xml_part, score_parts):
-    """Parse the <part> element."""
-    if 'id' in xml_part.attrib:
-      self.id = xml_part.attrib['id']
-    if self.id in score_parts:
-      self.score_part = score_parts[self.id]
-    else:
-      # If this part references a score-part id that was not found in the
-      # file, construct a default score-part.
-      self.score_part = ScorePart()
-
-    # Reset the time position when parsing each part
-    self._state.time_position = 0
-    self._state.midi_channel = self.score_part.midi_channel
-    self._state.midi_program = self.score_part.midi_program
-    self._state.transpose = 0
-
-    xml_measures = xml_part.findall('measure')
-    for measure in xml_measures:
-      # Issue #674: Repair measures that do not contain notes
-      # by inserting a whole measure rest
-      self._repair_empty_measure(measure)
-      parsed_measure = Measure(measure, self._state)
-      self.measures.append(parsed_measure)
-
-  def _repair_empty_measure(self, measure):
-    """Repair a measure if it is empty by inserting a whole measure rest.
-
-    If a <measure> only consists of a <forward> element that advances
-    the time cursor, remove the <forward> element and replace it
-    with a whole measure rest of the same duration.
-
-    Args:
-      measure: The measure to repair.
-    """
-    # Issue #674 - If the <forward> element is in a measure without
-    # any <note> elements, treat it as if it were a whole measure
-    # rest by inserting a rest of that duration
-    forward_count = len(measure.findall('forward'))
-    note_count = len(measure.findall('note'))
-    if note_count == 0 and forward_count == 1:
-      # Get the duration of the <forward> element
-      xml_forward = measure.find('forward')
-      xml_duration = xml_forward.find('duration')
-      forward_duration = int(xml_duration.text)
-
-      # Delete the <forward> element
-      measure.remove(xml_forward)
-
-      # Insert the new whole-measure rest
-      new_note = '<note>'
-      new_note += '<rest /><duration>' + str(forward_duration) + '</duration>'
-      new_note += '<voice>1</voice><type>whole</type><staff>1</staff>'
-      new_note += '</note>'
-      new_note_xml = ET.fromstring(new_note)
-      measure.append(new_note_xml)
-
-  def __str__(self):
-    part_str = 'Part: ' + self.score_part.part_name
-    return part_str
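-
-# An illustrative sketch of the Issue #674 repair above (made-up XML; the
-# None-for-self call works only because the method never touches self):
-#
-#   m = ET.fromstring(
-#       '<measure><forward><duration>4</duration></forward></measure>')
-#   Part._repair_empty_measure(None, m)
-#   # m now holds <note><rest /><duration>4</duration>...</note> instead
-#   # of the <forward>, so the measure parses as a whole-measure rest.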
-
-
-class Measure(object):
-  """Internal representation of the MusicXML <measure> element."""
-
-  def __init__(self, xml_measure, state):
-    self.xml_measure = xml_measure
-    self.notes = []
-    self.chord_symbols = []
-    self.tempos = []
-    self.time_signature = None
-    self.key_signature = None
-    # Cumulative duration in MusicXML duration units.
-    # Used for time signature calculations
-    self.duration = 0
-    self.state = state
-    # Record the starting time of this measure so that time signatures
-    # can be inserted at the beginning of the measure
-    self.start_time_position = self.state.time_position
-    self._parse()
-    # Update the time signature if this is a partial or pickup measure
-    self._fix_time_signature()
-
-  def _parse(self):
-    """Parse the <measure> element."""
-
-    for child in self.xml_measure:
-      if child.tag == 'attributes':
-        self._parse_attributes(child)
-      elif child.tag == 'backup':
-        self._parse_backup(child)
-      elif child.tag == 'direction':
-        self._parse_direction(child)
-      elif child.tag == 'forward':
-        self._parse_forward(child)
-      elif child.tag == 'note':
-        note = Note(child, self.state)
-        self.notes.append(note)
-        # Keep track of current note as previous note for chord timings
-        self.state.previous_note = note
-
-        # Sum up the MusicXML durations in voice 1 of this measure
-        if note.voice == 1 and not note.is_in_chord:
-          self.duration += note.note_duration.duration
-      elif child.tag == 'harmony':
-        chord_symbol = ChordSymbol(child, self.state)
-        self.chord_symbols.append(chord_symbol)
-      else:
-        # Ignore other tag types because they are not relevant to Magenta.
-        pass
-
-  def _parse_attributes(self, xml_attributes):
-    """Parse the MusicXML <attributes> element."""
-
-    for child in xml_attributes:
-      if child.tag == 'divisions':
-        self.state.divisions = int(child.text)
-      elif child.tag == 'key':
-        self.key_signature = KeySignature(self.state, child)
-      elif child.tag == 'time':
-        if self.time_signature is None:
-          self.time_signature = TimeSignature(self.state, child)
-          self.state.time_signature = self.time_signature
-        else:
-          raise MultipleTimeSignatureError('Multiple time signatures')
-      elif child.tag == 'transpose':
-        transpose = int(child.find('chromatic').text)
-        self.state.transpose = transpose
-        if self.key_signature is not None:
-          # Transposition is chromatic. Every half step up is 5 steps
-          # backward on the circle of fifths, which has 12 positions.
-          key_transpose = (transpose * -5) % 12
-          new_key = self.key_signature.key + key_transpose
-          # If the new key has >6 sharps, translate to flats.
-          # TODO(fjord): Could be more smart about when to use sharps vs.
-          # flats when there are enharmonic equivalents.
-          if new_key > 6:
-            new_key %= -6
-          self.key_signature.key = new_key
-      else:
-        # Ignore other tag types because they are not relevant to Magenta.
-        pass
-
-  def _parse_backup(self, xml_backup):
-    """Parse the MusicXML <backup> element.
-
-    This moves the global time position backwards.
-
-    Args:
-      xml_backup: XML element with tag type 'backup'.
-    """
-
-    xml_duration = xml_backup.find('duration')
-    backup_duration = int(xml_duration.text)
-    midi_ticks = backup_duration * (constants.STANDARD_PPQ
-                                    / self.state.divisions)
-    seconds = ((midi_ticks / constants.STANDARD_PPQ)
-               * self.state.seconds_per_quarter)
-    self.state.time_position -= seconds
-
-  def _parse_direction(self, xml_direction):
-    """Parse the MusicXML <direction> element."""
-
-    for child in xml_direction:
-      if child.tag == 'sound':
-        if child.get('tempo') is not None:
-          tempo = Tempo(self.state, child)
-          self.tempos.append(tempo)
-          self.state.qpm = tempo.qpm
-          self.state.seconds_per_quarter = 60 / self.state.qpm
-        if child.get('dynamics') is not None:
-          self.state.velocity = int(child.get('dynamics'))
-
-  def _parse_forward(self, xml_forward):
-    """Parse the MusicXML <forward> element.
-
-    This moves the global time position forward.
-
-    Args:
-      xml_forward: XML element with tag type 'forward'.
-    """
-
-    xml_duration = xml_forward.find('duration')
-    forward_duration = int(xml_duration.text)
-    midi_ticks = forward_duration * (constants.STANDARD_PPQ
-                                     / self.state.divisions)
-    seconds = ((midi_ticks / constants.STANDARD_PPQ)
-               * self.state.seconds_per_quarter)
-    self.state.time_position += seconds
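-
-  # A worked example of the timing math in _parse_backup/_parse_forward
-  # (values are illustrative, assuming constants.STANDARD_PPQ == 220):
-  # with divisions=4 and qpm=120 (seconds_per_quarter=0.5), a <forward>
-  # of duration 8 spans two quarter notes:
-  #   midi_ticks = 8 * (220 / 4) = 440
-  #   seconds    = (440 / 220) * 0.5 = 1.0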
- """ - - xml_duration = xml_forward.find('duration') - forward_duration = int(xml_duration.text) - midi_ticks = forward_duration * (constants.STANDARD_PPQ - / self.state.divisions) - seconds = ((midi_ticks / constants.STANDARD_PPQ) - * self.state.seconds_per_quarter) - self.state.time_position += seconds - - def _fix_time_signature(self): - """Correct the time signature for incomplete measures. - - If the measure is incomplete or a pickup, insert an appropriate - time signature into this Measure. - """ - # Compute the fractional time signature (duration / divisions) - # Multiply divisions by 4 because division is always parts per quarter note - numerator = self.duration - denominator = self.state.divisions * 4 - fractional_time_signature = Fraction(numerator, denominator) - - if self.state.time_signature is None and self.time_signature is None: - # No global time signature yet and no measure time signature defined - # in this measure (no time signature or senza misura). - # Insert the fractional time signature as the time signature - # for this measure - self.time_signature = TimeSignature(self.state) - self.time_signature.numerator = fractional_time_signature.numerator - self.time_signature.denominator = fractional_time_signature.denominator - self.state.time_signature = self.time_signature - else: - fractional_state_time_signature = Fraction( - self.state.time_signature.numerator, - self.state.time_signature.denominator) - - # Check for pickup measure. Reset time signature to smaller numerator - pickup_measure = False - if numerator < self.state.time_signature.numerator: - pickup_measure = True - - # Get the current time signature denominator - global_time_signature_denominator = self.state.time_signature.denominator - - # If the fractional time signature = 1 (e.g. 4/4), - # make the numerator the same as the global denominator - if fractional_time_signature == 1 and not pickup_measure: - new_time_signature = TimeSignature(self.state) - new_time_signature.numerator = global_time_signature_denominator - new_time_signature.denominator = global_time_signature_denominator - else: - # Otherwise, set the time signature to the fractional time signature - # Issue #674 - Use the original numerator and denominator - # instead of the fractional one - new_time_signature = TimeSignature(self.state) - new_time_signature.numerator = numerator - new_time_signature.denominator = denominator - - new_time_sig_fraction = Fraction(numerator, denominator) - - if new_time_sig_fraction == fractional_time_signature: - new_time_signature.numerator = fractional_time_signature.numerator - new_time_signature.denominator = fractional_time_signature.denominator - - # Insert a new time signature only if it does not equal the global - # time signature. 
-
-
-class Note(object):
-  """Internal representation of a MusicXML <note> element."""
-
-  def __init__(self, xml_note, state):
-    self.xml_note = xml_note
-    self.voice = 1
-    self.is_rest = False
-    self.is_in_chord = False
-    self.is_grace_note = False
-    self.pitch = None  # Tuple (Pitch Name, MIDI number)
-    self.note_duration = NoteDuration(state)
-    self.state = state
-    self._parse()
-
-  def _parse(self):
-    """Parse the MusicXML <note> element."""
-
-    self.midi_channel = self.state.midi_channel
-    self.midi_program = self.state.midi_program
-    self.velocity = self.state.velocity
-
-    for child in self.xml_note:
-      if child.tag == 'chord':
-        self.is_in_chord = True
-      elif child.tag == 'duration':
-        self.note_duration.parse_duration(self.is_in_chord,
-                                          self.is_grace_note, child.text)
-      elif child.tag == 'pitch':
-        self._parse_pitch(child)
-      elif child.tag == 'rest':
-        self.is_rest = True
-      elif child.tag == 'voice':
-        self.voice = int(child.text)
-      elif child.tag == 'dot':
-        self.note_duration.dots += 1
-      elif child.tag == 'type':
-        self.note_duration.type = child.text
-      elif child.tag == 'time-modification':
-        # A <time-modification> element represents a tuplet ratio
-        self._parse_tuplet(child)
-      elif child.tag == 'unpitched':
-        raise UnpitchedNoteError('Unpitched notes are not supported')
-      else:
-        # Ignore other tag types because they are not relevant to Magenta.
-        pass
-
-  def _parse_pitch(self, xml_pitch):
-    """Parse the MusicXML <pitch> element."""
-    step = xml_pitch.find('step').text
-    alter_text = ''
-    alter = 0.0
-    if xml_pitch.find('alter') is not None:
-      alter_text = xml_pitch.find('alter').text
-    octave = xml_pitch.find('octave').text
-
-    # Parse alter string to a float (floats represent microtonal alterations)
-    if alter_text:
-      alter = float(alter_text)
-
-    # Check if this is a semitone alter (i.e. an integer) or microtonal
-    # (a float)
-    alter_semitones = int(alter)  # Number of semitones
-    is_microtonal_alter = (alter != alter_semitones)
-
-    # Visual pitch representation
-    alter_string = ''
-    if alter_semitones == -2:
-      alter_string = 'bb'
-    elif alter_semitones == -1:
-      alter_string = 'b'
-    elif alter_semitones == 1:
-      alter_string = '#'
-    elif alter_semitones == 2:
-      alter_string = 'x'
-
-    if is_microtonal_alter:
-      alter_string += ' (+microtones) '
-
-    # N.B. - pitch_string does not account for transposition
-    pitch_string = step + alter_string + octave
-
-    # Compute MIDI pitch number (C4 = 60, C1 = 24, C0 = 12)
-    midi_pitch = self.pitch_to_midi_pitch(step, alter, octave)
-    # Transpose MIDI pitch
-    midi_pitch += self.state.transpose
-    self.pitch = (pitch_string, midi_pitch)
-
-  def _parse_tuplet(self, xml_time_modification):
-    """Parses a tuplet ratio.
-
-    Represented in MusicXML by the <time-modification> element.
-
-    Args:
-      xml_time_modification: An xml <time-modification> element.
-    """
-    numerator = int(xml_time_modification.find('actual-notes').text)
-    denominator = int(xml_time_modification.find('normal-notes').text)
-    self.note_duration.tuplet_ratio = Fraction(numerator, denominator)
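-
-  # An illustrative result of _parse_pitch (made-up input): for
-  #   <pitch><step>C</step><alter>1</alter><octave>4</octave></pitch>
-  # self.pitch becomes ('C#4', 61) before transposition, since C4 = 60
-  # and the alter adds one semitone.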
- """ - numerator = int(xml_time_modification.find('actual-notes').text) - denominator = int(xml_time_modification.find('normal-notes').text) - self.note_duration.tuplet_ratio = Fraction(numerator, denominator) - - @staticmethod - def pitch_to_midi_pitch(step, alter, octave): - """Convert MusicXML pitch representation to MIDI pitch number.""" - pitch_class = 0 - if step == 'C': - pitch_class = 0 - elif step == 'D': - pitch_class = 2 - elif step == 'E': - pitch_class = 4 - elif step == 'F': - pitch_class = 5 - elif step == 'G': - pitch_class = 7 - elif step == 'A': - pitch_class = 9 - elif step == 'B': - pitch_class = 11 - else: - # Raise exception for unknown step (ex: 'Q') - raise PitchStepParseError('Unable to parse pitch step ' + step) - - pitch_class = (pitch_class + int(alter)) % 12 - midi_pitch = (12 + pitch_class) + (int(octave) * 12) - return midi_pitch - - def __str__(self): - note_string = '{duration: ' + str(self.note_duration.duration) - note_string += ', midi_ticks: ' + str(self.note_duration.midi_ticks) - note_string += ', seconds: ' + str(self.note_duration.seconds) - if self.is_rest: - note_string += ', rest: ' + str(self.is_rest) - else: - note_string += ', pitch: ' + self.pitch[0] - note_string += ', MIDI pitch: ' + str(self.pitch[1]) - - note_string += ', voice: ' + str(self.voice) - note_string += ', velocity: ' + str(self.velocity) + '} ' - note_string += '(@time: ' + str(self.note_duration.time_position) + ')' - return note_string - - -class NoteDuration(object): - """Internal representation of a MusicXML note's duration properties.""" - - TYPE_RATIO_MAP = {'maxima': Fraction(8, 1), 'long': Fraction(4, 1), - 'breve': Fraction(2, 1), 'whole': Fraction(1, 1), - 'half': Fraction(1, 2), 'quarter': Fraction(1, 4), - 'eighth': Fraction(1, 8), '16th': Fraction(1, 16), - '32nd': Fraction(1, 32), '64th': Fraction(1, 64), - '128th': Fraction(1, 128), '256th': Fraction(1, 256), - '512th': Fraction(1, 512), '1024th': Fraction(1, 1024)} - - def __init__(self, state): - self.duration = 0 # MusicXML duration - self.midi_ticks = 0 # Duration in MIDI ticks - self.seconds = 0 # Duration in seconds - self.time_position = 0 # Onset time in seconds - self.dots = 0 # Number of augmentation dots - self._type = 'quarter' # MusicXML duration type - self.tuplet_ratio = Fraction(1, 1) # Ratio for tuplets (default to 1) - self.is_grace_note = True # Assume true until not found - self.state = state - - def parse_duration(self, is_in_chord, is_grace_note, duration): - """Parse the duration of a note and compute timings.""" - self.duration = int(duration) - - # Due to an error in Sibelius' export, force this note to have the - # duration of the previous note if it is in a chord - if is_in_chord: - self.duration = self.state.previous_note.note_duration.duration - - self.midi_ticks = self.duration - self.midi_ticks *= (constants.STANDARD_PPQ / self.state.divisions) - - self.seconds = (self.midi_ticks / constants.STANDARD_PPQ) - self.seconds *= self.state.seconds_per_quarter - - self.time_position = self.state.time_position - - # Not sure how to handle durations of grace notes yet as they - # steal time from subsequent notes and they do not have a - # tag in the MusicXML - self.is_grace_note = is_grace_note - - if is_in_chord: - # If this is a chord, set the time position to the time position - # of the previous note (i.e. 
-
-  def _convert_type_to_ratio(self):
-    """Convert the MusicXML note-type-value to a Python Fraction.
-
-    Examples:
-    - whole = 1/1
-    - half = 1/2
-    - quarter = 1/4
-    - 32nd = 1/32
-
-    Returns:
-      A Fraction object representing the note type.
-    """
-    return self.TYPE_RATIO_MAP[self.type]
-
-  def duration_ratio(self):
-    """Compute the duration ratio of the note as a Python Fraction.
-
-    Examples:
-    - Whole Note = 1
-    - Quarter Note = 1/4
-    - Dotted Quarter Note = 3/8
-    - Triplet eighth note = 1/12
-
-    Returns:
-      The duration ratio as a Python Fraction.
-    """
-    # Get ratio from MusicXML note type
-    duration_ratio = Fraction(1, 1)
-    type_ratio = self._convert_type_to_ratio()
-
-    # Compute tuplet ratio
-    duration_ratio /= self.tuplet_ratio
-    type_ratio /= self.tuplet_ratio
-
-    # Add augmentation dots
-    one_half = Fraction(1, 2)
-    dot_sum = Fraction(0, 1)
-    for dot in range(self.dots):
-      dot_sum += (one_half ** (dot + 1)) * type_ratio
-
-    duration_ratio = type_ratio + dot_sum
-
-    # If the note is a grace note, force its ratio to be 0
-    # because it does not have a <duration> tag
-    if self.is_grace_note:
-      duration_ratio = Fraction(0, 1)
-
-    return duration_ratio
-
-  def duration_float(self):
-    """Return the duration ratio as a float."""
-    ratio = self.duration_ratio()
-    return ratio.numerator / ratio.denominator
-
-  @property
-  def type(self):
-    return self._type
-
-  @type.setter
-  def type(self, new_type):
-    if new_type not in self.TYPE_RATIO_MAP:
-      raise InvalidNoteDurationTypeError(
-          'Note duration type "{}" is not valid'.format(new_type))
-    self._type = new_type
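-
-  # A worked example of duration_ratio() (hypothetical values): a dotted
-  # quarter note inside a 3:2 triplet.
-  #
-  #   nd = NoteDuration(MusicXMLParserState())
-  #   nd.type = 'quarter'
-  #   nd.dots = 1
-  #   nd.tuplet_ratio = Fraction(3, 2)
-  #   nd.is_grace_note = False  # default is True until a duration is parsed
-  #   nd.duration_ratio()  # == Fraction(1, 4), i.e. 3/8 scaled by 2/3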
"+" and "aug" both - # refer to an augmented chord, and "maj7", "M7", and a Delta character all - # refer to a major-seventh chord; this dictionary attempts to be consistent - # but the choice of abbreviation is somewhat arbitrary. - # - # The MusicXML-defined chord kinds are listed here: - # http://usermanuals.musicxml.com/MusicXML/Content/ST-MusicXML-kind-value.htm - - CHORD_KIND_ABBREVIATIONS = { - # These chord kinds are in the MusicXML spec. - 'major': '', - 'minor': 'm', - 'augmented': 'aug', - 'diminished': 'dim', - 'dominant': '7', - 'major-seventh': 'maj7', - 'minor-seventh': 'm7', - 'diminished-seventh': 'dim7', - 'augmented-seventh': 'aug7', - 'half-diminished': 'm7b5', - 'major-minor': 'm(maj7)', - 'major-sixth': '6', - 'minor-sixth': 'm6', - 'dominant-ninth': '9', - 'major-ninth': 'maj9', - 'minor-ninth': 'm9', - 'dominant-11th': '11', - 'major-11th': 'maj11', - 'minor-11th': 'm11', - 'dominant-13th': '13', - 'major-13th': 'maj13', - 'minor-13th': 'm13', - 'suspended-second': 'sus2', - 'suspended-fourth': 'sus', - 'pedal': 'ped', - 'power': '5', - 'none': 'N.C.', - - # These are not in the spec, but show up frequently in the wild. - 'dominant-seventh': '7', - 'augmented-ninth': 'aug9', - 'minor-major': 'm(maj7)', - - # Some abbreviated kinds also show up frequently in the wild. - '': '', - 'min': 'm', - 'aug': 'aug', - 'dim': 'dim', - '7': '7', - 'maj7': 'maj7', - 'min7': 'm7', - 'dim7': 'dim7', - 'm7b5': 'm7b5', - 'minMaj7': 'm(maj7)', - '6': '6', - 'min6': 'm6', - 'maj69': '6(add9)', - '9': '9', - 'maj9': 'maj9', - 'min9': 'm9', - 'sus47': 'sus7' - } - - def __init__(self, xml_harmony, state): - self.xml_harmony = xml_harmony - self.time_position = -1 - self.root = None - self.kind = '' - self.degrees = [] - self.bass = None - self.state = state - self._parse() - - def _alter_to_string(self, alter_text): - """Parse alter text to a string of one or two sharps/flats. - - Args: - alter_text: A string representation of an integer number of semitones. - - Returns: - A string, one of 'bb', 'b', '#', '##', or the empty string. - - Raises: - ChordSymbolParseError: If `alter_text` cannot be parsed to an integer, - or if the integer is not a valid number of semitones between -2 and 2 - inclusive. - """ - # Parse alter text to an integer number of semitones. - try: - alter_semitones = int(alter_text) - except ValueError: - raise ChordSymbolParseError('Non-integer alter: ' + str(alter_text)) - - # Visual alter representation - if alter_semitones == -2: - alter_string = 'bb' - elif alter_semitones == -1: - alter_string = 'b' - elif alter_semitones == 0: - alter_string = '' - elif alter_semitones == 1: - alter_string = '#' - elif alter_semitones == 2: - alter_string = '##' - else: - raise ChordSymbolParseError('Invalid alter: ' + str(alter_semitones)) - - return alter_string - - def _parse(self): - """Parse the MusicXML element.""" - self.time_position = self.state.time_position - - for child in self.xml_harmony: - if child.tag == 'root': - self._parse_root(child) - elif child.tag == 'kind': - if child.text is None: - # Seems like this shouldn't happen but frequently does in the wild... 
-
-  def _parse(self):
-    """Parse the MusicXML <harmony> element."""
-    self.time_position = self.state.time_position
-
-    for child in self.xml_harmony:
-      if child.tag == 'root':
-        self._parse_root(child)
-      elif child.tag == 'kind':
-        if child.text is None:
-          # Seems like this shouldn't happen, but frequently does in the
-          # wild...
-          continue
-        kind_text = str(child.text).strip()
-        if kind_text not in self.CHORD_KIND_ABBREVIATIONS:
-          raise ChordSymbolParseError('Unknown chord kind: ' + kind_text)
-        self.kind = self.CHORD_KIND_ABBREVIATIONS[kind_text]
-      elif child.tag == 'degree':
-        self.degrees.append(self._parse_degree(child))
-      elif child.tag == 'bass':
-        self._parse_bass(child)
-      elif child.tag == 'offset':
-        # An <offset> moves the chord symbol's time position.
-        try:
-          offset = int(child.text)
-        except ValueError:
-          raise ChordSymbolParseError(
-              'Non-integer offset: ' + str(child.text))
-        midi_ticks = offset * constants.STANDARD_PPQ / self.state.divisions
-        seconds = (midi_ticks / constants.STANDARD_PPQ *
-                   self.state.seconds_per_quarter)
-        self.time_position += seconds
-      else:
-        # Ignore other tag types because they are not relevant to Magenta.
-        pass
-
-    if self.root is None and self.kind != 'N.C.':
-      raise ChordSymbolParseError('Chord symbol must have a root')
-
-  def _parse_pitch(self, xml_pitch, step_tag, alter_tag):
-    """Parse and return the pitch-like <root> or <bass> element."""
-    if xml_pitch.find(step_tag) is None:
-      raise ChordSymbolParseError('Missing pitch step')
-    step = xml_pitch.find(step_tag).text
-
-    alter_string = ''
-    if xml_pitch.find(alter_tag) is not None:
-      alter_text = xml_pitch.find(alter_tag).text
-      alter_string = self._alter_to_string(alter_text)
-
-    if self.state.transpose:
-      raise ChordSymbolParseError(
-          'Transposition of chord symbols currently unsupported')
-
-    return step + alter_string
-
-  def _parse_root(self, xml_root):
-    """Parse the <root> tag for a chord symbol."""
-    self.root = self._parse_pitch(xml_root, step_tag='root-step',
-                                  alter_tag='root-alter')
-
-  def _parse_bass(self, xml_bass):
-    """Parse the <bass> tag for a chord symbol."""
-    self.bass = self._parse_pitch(xml_bass, step_tag='bass-step',
-                                  alter_tag='bass-alter')
-
-  def _parse_degree(self, xml_degree):
-    """Parse and return the <degree> scale degree modification element."""
-    if xml_degree.find('degree-value') is None:
-      raise ChordSymbolParseError('Missing scale degree value in harmony')
-    value_text = xml_degree.find('degree-value').text
-    if value_text is None:
-      raise ChordSymbolParseError('Missing scale degree')
-    try:
-      value = int(value_text)
-    except ValueError:
-      raise ChordSymbolParseError(
-          'Non-integer scale degree: ' + str(value_text))
-
-    alter_string = ''
-    if xml_degree.find('degree-alter') is not None:
-      alter_text = xml_degree.find('degree-alter').text
-      alter_string = self._alter_to_string(alter_text)
-
-    if xml_degree.find('degree-type') is None:
-      raise ChordSymbolParseError('Missing degree modification type')
-    type_text = xml_degree.find('degree-type').text
-
-    if type_text == 'add':
-      if not alter_string:
-        # When adding an unaltered scale degree, use the "add" string.
-        type_string = 'add'
-      else:
-        # When adding an altered scale degree, "add" is not necessary.
-        type_string = ''
-    elif type_text == 'subtract':
-      type_string = 'no'
-      # Alter should be irrelevant when removing a scale degree.
-      alter_string = ''
-    elif type_text == 'alter':
-      if not alter_string:
-        raise ChordSymbolParseError('Degree alteration by zero semitones')
-      # No type string necessary, as merely appending e.g. "#9" suffices.
-      type_string = ''
-    else:
-      raise ChordSymbolParseError(
-          'Invalid degree modification type: ' + str(type_text))
-
-    # Return a scale degree modification string that can be appended to a
-    # chord symbol figure string.
-    return type_string + alter_string + str(value)
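-
-  # Illustrative _parse_degree results (made-up <degree> contents):
-  #   value=9, no alter, type 'add'       -> 'add9'
-  #   value=5, alter=-1, type 'alter'     -> 'b5'
-  #   value=3, any alter, type 'subtract' -> 'no3'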
-
-  def __str__(self):
-    if self.kind == 'N.C.':
-      note_string = '{kind: ' + self.kind + '} '
-    else:
-      note_string = '{root: ' + self.root
-      note_string += ', kind: ' + self.kind
-      note_string += ', degrees: [%s]' % ', '.join(
-          degree for degree in self.degrees)
-      note_string += ', bass: ' + str(self.bass) + '} '
-    note_string += '(@time: ' + str(self.time_position) + ')'
-    return note_string
-
-  def get_figure_string(self):
-    """Return a chord symbol figure string."""
-    if self.kind == 'N.C.':
-      return self.kind
-    else:
-      degrees_string = ''.join('(%s)' % degree for degree in self.degrees)
-      figure = self.root + self.kind + degrees_string
-      if self.bass:
-        figure += '/' + self.bass
-      return figure
-
-
-class TimeSignature(object):
-  """Internal representation of a MusicXML time signature.
-
-  Does not support:
-  - Composite time signatures: 3+2/8
-  - Alternating time signatures: 2/4 + 3/8
-  - Senza misura
-  """
-
-  def __init__(self, state, xml_time=None):
-    self.xml_time = xml_time
-    self.numerator = -1
-    self.denominator = -1
-    self.time_position = 0
-    self.state = state
-    if xml_time is not None:
-      self._parse()
-
-  def _parse(self):
-    """Parse the MusicXML